AIResearch
Ethics

Privacy Harms Go Beyond Legal Definitions

A new study reveals that most digital privacy harms are psychological, not tangible, exposing a gap in how we protect people online.

AI Research
March 26, 2026
4 min read

When we think about privacy violations, we often imagine clear-cut cases like identity theft or financial fraud—harms that are easy to quantify and address in court. But a new study shows that the real impact of digital privacy incidents is far more subtle and pervasive, rooted in fear and psychological distress rather than tangible losses. Researchers from Brigham Young University and other institutions analyzed 369 privacy incidents reported by the general public, finding that existing legal frameworks, which focus on acute, provable harms, miss the majority of what people actually experience. This gap highlights a critical need to rethink how we define and address privacy in an increasingly digital world, where corporations and individuals, not just governments, are the main actors in privacy violations.

The key finding is that most reported privacy harms stem not from tangible outcomes like economic loss or physical injury, but from fear and a loss of psychological safety. Of the 369 incidents analyzed, 50% involved a loss of psychological safety, including anxiety about future events, persistent paranoia regarding technology, and a general feeling of being unsafe. Tangible harms, by contrast, were rare: economic harms were reported in only 0.5% of cases and physical harms in just 0.3%. This indicates that people are more affected by the anticipation of harm—such as fearing that their personal data could lead to blackmail or scams—than by actual, quantifiable damage. The study also found that corporations were the preeminent actors in privacy incidents, involved in about half of the reports, while government actors were cited only once, challenging the legal focus on governmental violations.

To uncover these insights, the researchers deployed an online survey in January 2025, recruiting 164 participants from the U.S. through the platform Prolific, with a final sample of 151 participants after filtering out hypothetical incidents. The survey asked participants about their experiences with 16 types of online privacy incidents based on Solove’s Taxonomy of Privacy, a legal framework, and collected open-ended responses on three randomly assigned incidents. Participants described what happened, what caused it, where it occurred, and how it caused discomfort or harm. The researchers then qualitatively coded these responses, using Solove’s taxonomy for incidents and Solove and Citron’s Typology of Privacy Harms for harms, while also allowing for new themes to emerge through open coding. This approach enabled them to compare lived experiences against legal frameworks and identify gaps, such as the prevalence of psychological fears over tangible harms.

Analysis reveals a detailed picture of privacy incidents and their impacts. For example, 18% of responses involved targeted ads, where participants felt surveilled by corporations for advertising purposes, leading to feelings of discomfort and loss of control. Another 17% reported incidents related to the general collection and sale of data, often involving corporations surveilling online activity and selling it to other parties. In terms of harms, fear of potential loss was reported in 20% of responses, such as participants worrying about future economic or physical harm after their data was breached. Persistent paranoia, like feeling that phones are constantly listening, appeared in 19% of responses, while a general feeling of being unsafe online was noted in 10%. The study also mapped actors and motives, showing that corporations were involved in incidents with motives like targeted ads and data sale, while individual actors, including known and unknown persons, were involved in motives like posting with malintent or intent to scam.
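The kind of tallying behind these figures—counting how often each qualitatively coded harm category appears across responses—can be sketched in a few lines. This is an illustrative reconstruction, not the authors' analysis code; the label strings and counts below are stand-ins chosen to mirror the percentages reported above (assuming, for simplicity, a pool of 100 coded responses).

```python
from collections import Counter

# Hypothetical coded labels for open-ended responses; the study's actual
# codebook is based on Solove and Citron's Typology of Privacy Harms.
coded_harms = (
    ["fear of potential loss"] * 20
    + ["persistent paranoia"] * 19
    + ["feeling unsafe online"] * 10
    + ["other"] * 51
)

def harm_percentages(labels):
    """Tally coded labels and return each category's share of all responses."""
    counts = Counter(labels)
    total = len(labels)
    return {label: round(100 * n / total, 1) for label, n in counts.items()}

print(harm_percentages(coded_harms))
# e.g. "fear of potential loss" maps to 20.0 (percent of responses)
```

In a real qualitative pipeline the labels would come from multiple coders with inter-rater agreement checks, but the aggregation step reduces to exactly this kind of frequency count.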

These findings have significant implications for both research and policy. For researchers, the updated taxonomies developed in the study—adapting Solove’s Taxonomy of Privacy and Solove and Citron’s Typology of Privacy Harms—provide a more human-centered framework that can be applied across human-computer interaction (HCI) studies. For instance, the adapted taxonomy expands definitions to include corporate actors in surveillance and decisional interference, and adds new incident types such as deceptive data collection and the spread of personal accessibility information. For policymakers, the study highlights holes in current legislation, such as the European Union’s General Data Protection Regulation (GDPR), which may not cover privacy incidents enacted by individuals without commercial connections. The emphasis on psychological harms suggests that laws need to account for the temporal dimension of privacy, where anticipated harms cause long-term fear, potentially requiring new approaches to negligence in privacy-related lawsuits.

However, the study has limitations that should be considered. The survey was deployed at a single time point in January 2025, prior to potential political changes in the U.S. that might increase concerns about government-involved privacy incidents, so the findings represent a snapshot rather than a longitudinal view. Additionally, the sample was limited to U.S. participants, recruited through Prolific with quotas for gender and income but not nationally representative, and did not include vulnerable or underrepresented populations who might experience additional incidents and harms. The reliance on self-report data also introduces potential biases, such as recall or social desirability bias, though the anonymous survey design aimed to encourage candid disclosures. Future work could expand to broader samples, different cultures, and longitudinal deployments to better understand how privacy incidents and harms evolve over time and across contexts.

Original Source

Read the complete research paper

View on arXiv

About the Author

Guilherme A.

Former dentist (MD) from Brazil, 41 years old, husband, and AI enthusiast. In 2020, he transitioned from a decade-long career in dentistry to pursue his passion for technology, entrepreneurship, and helping others grow.

Connect on LinkedIn