Privacy as a Fairness Tool: How Local Differential Privacy Can Reduce Bias in AI

AI Research
March 26, 2026
4 min read

In the high-stakes world of algorithmic decision-making, where machine learning models determine everything from loan approvals to hiring, two critical concerns often seem at odds: fairness and privacy. While central differential privacy (DP) has been shown to exacerbate unfairness, a groundbreaking new study reveals that local differential privacy (LDP) can actually serve as a powerful tool for bias mitigation. Researchers Hrad Ghoukasian and Shahab Asoodeh from McMaster University have developed optimal LDP mechanisms that systematically reduce data unfairness while preserving privacy, offering a novel pre-processing approach that outperforms existing fairness interventions. Their work, detailed in a November 2025 arXiv preprint, demonstrates that carefully designed randomization of sensitive attributes—like race or gender—can lead to less discriminatory classifiers without sacrificing accuracy, challenging conventional wisdom about the privacy-fairness trade-off.

At the heart of this research lies a sophisticated mathematical framework that optimizes how sensitive data is perturbed. For binary sensitive attributes (like gender), the researchers derived a closed-form optimal mechanism: when the probability of a positive outcome differs between groups, the optimal LDP mechanism asymmetrically perturbs the sensitive attribute with probabilities that depend on both the privacy parameter ε and the data distribution. For multi-valued attributes (like race with multiple categories), they formulated the problem as a min-max linear fractional program solvable via branch-and-bound methods. This optimization minimizes data unfairness metrics—specifically Δ and Δ′, which measure how much label probabilities depend on sensitive attributes—while satisfying ε-LDP constraints and maintaining utility through error rate bounds. The mechanisms outperform standard LDP approaches like generalized randomized response (GRR) and subset selection (SS), which the paper notes have shown promise but weren't optimized for fairness.
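To make the binary case concrete, here is a hedged sketch (not the paper's closed-form optimal mechanism): an asymmetric randomized-response channel reports the sensitive bit through a 2×2 transition matrix, and it satisfies ε-LDP exactly when every likelihood ratio between the two rows is at most e^ε. The flip probabilities below are illustrative parameters, not the paper's optimal values:

```python
import math
import random

def ldp_channel(p_flip_0, p_flip_1):
    """2x2 transition matrix: row s gives P(report r | true value s).

    Asymmetric flips (p_flip_0 != p_flip_1) are what distinguish this
    from plain symmetric randomized response.
    """
    return [[1 - p_flip_0, p_flip_0],
            [p_flip_1, 1 - p_flip_1]]

def satisfies_ldp(channel, eps):
    """eps-LDP holds iff P(r | s) <= e^eps * P(r | s') for all r, s, s'."""
    bound = math.exp(eps)
    return all(channel[s][r] <= bound * channel[t][r]
               for r in (0, 1) for s in (0, 1) for t in (0, 1))

def perturb(s, channel, rng=random):
    """Report a randomized value of the binary sensitive attribute s."""
    return 1 - s if rng.random() < channel[s][1 - s] else s

# Symmetric randomized response with flip probability 0.3 is 1.0-LDP
# (0.7 / 0.3 ~ 2.33 <= e^1), while flip probability 0.1 is not (9 > e).
# An asymmetric channel like (0.25, 0.35) can also satisfy the constraint.
ch = ldp_channel(0.25, 0.35)
```

The `satisfies_ldp` check mirrors the standard definition of local differential privacy: no single report may be much more likely under one true value than another.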

The empirical results across multiple real-world datasets are compelling. On the Adult dataset (predicting income above $26K) with gender as the binary sensitive attribute, the optimal mechanism maintained accuracy around 81.6%—comparable to non-private baselines—while reducing statistical parity gaps from approximately 0.38 to 0.22 at ε=10. For multi-valued attributes like race (5 categories) on the same dataset, it achieved similar accuracy improvements while cutting statistical parity gaps from about 0.30 to 0.20. The Law School Admissions Council (LSAC) dataset showed even more dramatic fairness gains, with the mechanism nearly halving both statistical parity and equalized opportunity gaps. Crucially, these improvements came without the accuracy penalties typically associated with fairness interventions, and the mechanisms proved adaptable to existing fairness frameworks, such as replacing randomized response in Mozannar et al.'s 2020 approach for training fair classifiers with privatized attributes.
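The statistical parity gaps quoted above are simply the largest difference in positive-prediction rates across groups. A minimal sketch of the metric itself (the toy data is invented for illustration, not drawn from the paper's experiments):

```python
def statistical_parity_gap(y_pred, groups):
    """Largest difference in positive-prediction rates across groups."""
    counts = {}
    for yhat, g in zip(y_pred, groups):
        n, pos = counts.get(g, (0, 0))
        counts[g] = (n + 1, pos + (yhat == 1))
    rates = [pos / n for n, pos in counts.values()]
    return max(rates) - min(rates)

# Toy example: group "a" receives positives 75% of the time, group "b" 25%.
preds  = [1, 1, 1, 0, 1, 0, 0, 0]
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]
gap = statistical_parity_gap(preds, groups)  # 0.75 - 0.25 = 0.5
```

A gap of 0 means the classifier assigns positive outcomes at identical rates regardless of group membership; the paper's reductions (e.g., 0.38 to 0.22) are movements along this scale.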

The theoretical underpinnings of this work are equally significant. The researchers established a formal link between data unfairness and classification unfairness through discrimination-accuracy (DA) optimal classifiers. Their Theorem 3 proves that for DA-optimal classifiers with equal accuracy, reducing data unfairness (Δ′) necessarily leads to lower classification unfairness as measured by statistical parity gaps. This provides rigorous justification for using LDP as a pre-processing fairness intervention, contrasting sharply with central DP where privacy often worsens fairness. The paper also includes Lemma 1, showing that applying GRR to sensitive attributes reduces data unfairness (Δ′(D_GRR) ≤ Δ′(D)), which motivated the search for optimal mechanisms beyond standard LDP approaches.
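Lemma 1's direction can be checked numerically. The sketch below uses a simplified data-unfairness proxy—the largest gap in P(Y=1 | S=s) across groups; the paper's exact Δ and Δ′ definitions differ—together with the standard GRR channel, and shows the gap shrinking after randomization. The toy distribution is invented for illustration:

```python
import math

def grr_channel(k, eps):
    """Generalized randomized response over k values: keep the true value
    with probability e^eps / (e^eps + k - 1), otherwise report one of the
    other k - 1 values uniformly at random."""
    p_keep = math.exp(eps) / (math.exp(eps) + k - 1)
    p_other = (1 - p_keep) / (k - 1)
    return [[p_keep if r == s else p_other for r in range(k)]
            for s in range(k)]

def data_unfairness(p_y1_given_s):
    """Proxy for the paper's data-unfairness metric: the largest gap
    in P(Y=1 | S=s) across sensitive groups."""
    return max(p_y1_given_s) - min(p_y1_given_s)

def post_grr_conditionals(p_s, p_y1_given_s, channel):
    """P(Y=1 | reported value r) after passing S through the channel,
    by Bayes' rule on the mixture induced by the randomization."""
    k = len(p_s)
    out = []
    for r in range(k):
        num = sum(p_y1_given_s[s] * p_s[s] * channel[s][r] for s in range(k))
        den = sum(p_s[s] * channel[s][r] for s in range(k))
        out.append(num / den)
    return out

# Toy binary example: groups equally likely, very different base rates.
p_s = [0.5, 0.5]
p_y1 = [0.8, 0.2]
ch = grr_channel(k=2, eps=1.0)
before = data_unfairness(p_y1)
after = data_unfairness(post_grr_conditionals(p_s, p_y1, ch))
# Lemma 1's direction: randomization shrinks the gap (after <= before).
```

Intuitively, each reported group value is now a mixture of true groups, so the label distributions conditioned on the reported attribute move toward each other; smaller ε mixes more aggressively and shrinks the gap further.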

Despite these advances, the approach has limitations worth noting. For non-binary attributes, the optimal mechanism requires solving a complex optimization problem rather than offering a simple closed-form solution, increasing computational cost—especially for attributes with many categories like race-gender (10 categories). The mechanisms also depend on knowing or estimating data distributions, which may be challenging in practice. Furthermore, while the paper compares favorably against pre-processing methods like FairBalance and post-processing methods like FairProjection—showing better accuracy-fairness trade-offs on datasets including COMPAS, Bank Marketing, and German Credit—it doesn't address all fairness definitions beyond statistical parity, equalized opportunity, and equalized odds. The researchers acknowledge that fairness is context-dependent, and their mechanisms are designed for group fairness metrics rather than individual fairness notions.

Looking forward, this research positions LDP not just as a privacy tool but as a principled fairness intervention that could reshape how sensitive data is handled in machine learning pipelines. By demonstrating that local randomization can systematically reduce bias while protecting individual privacy, it offers a practical alternative to approaches that require trusted curators or compromise utility. As algorithms increasingly influence societal outcomes, such approaches that harmonize competing ethical requirements may prove essential for building trustworthy AI systems that are both private and fair.

Original Source

Read the complete research paper

View on arXiv

About the Author

Guilherme A.

Former dentist (MD) from Brazil, 41 years old, husband, and AI enthusiast. In 2020, he transitioned from a decade-long career in dentistry to pursue his passion for technology, entrepreneurship, and helping others grow.

Connect on LinkedIn