I am a human computer interaction researcher studying how people interact with and understand computing systems that bring together people, data, and machine learning technologies. These systems are “black boxes”: the people who use them can see and experience the inputs and outputs, but not the inner logic. This makes it very difficult to understand and reason about how they work, and to envision what the consequences of using them might be for individuals and society.
For example, it is hard enough for end users of computing systems to be aware of the data that is collected about them; it is even harder to understand how that data can be used to categorize their personal characteristics or activities, to make predictions about their future behavior and interests, and to infer sensitive, private information. I’m working to discover ways to help people take back some agency over the data they provide to the apps and platforms they use, so that they have a way to influence what these systems can do and how they affect the world.
I am an Associate Professor in the Department of Media and Information in the College of Communication Arts and Sciences at Michigan State University. Some keywords that describe my research: digital privacy, inferences, social norms, algorithms, big data, sociotechnical systems.
Some things I’ve been up to recently…
I published a paper at the 2022 Symposium on Usable Privacy and Security (SOUPS) in August, titled Normative and Non-Social Beliefs about Sensor Data: Implications for Collective Privacy Management.
Many of the results from my 2012 paper with Rick Wash and Brandon Brooks, Stories as Informal Lessons About Security, were replicated by another research group. I’m a co-author on the paper – I provided our 2012 data and analyses and helped with the writing – which also appears at SOUPS 2022: Replication: Stories as Informal Lessons about Security.