Security

AI Could Make Online Dating Safer by Reading Nonverbal Cues

A new research agenda proposes using computer vision to detect discomfort and disinterest in video dates, aiming to close a communication gap that disproportionately harms women and vulnerable users.

AI Research
April 01, 2026
4 min read

Online dating has become the primary way people meet romantic partners, but current platforms strip away the nonverbal cues—like gaze, facial expressions, and body posture—that humans rely on to signal comfort, disinterest, and consent. This communication gap is not just a technical oversight; it has real safety consequences, disproportionately affecting women and marginalized groups who often use subtle nonverbal signals to manage unwanted advances without confrontation. The paper argues that this gap represents both a technical opportunity and a moral responsibility for the computer vision community, which has developed advanced tools for affective analysis but has largely ignored the dating domain as a research context.

The researchers propose a fairness-first research agenda focused on four key capability areas to address this nonverbal signal gap. First, discomfort and disinterest detection would use real-time analysis of facial action units, such as brow lowering or lip corner depression, combined with gaze aversion to identify negative affect during video interactions. Second, engagement asymmetry modeling would track behavioral alignment between partners, detecting when one person is significantly more invested or attentive than the other, which can signal imbalance in romantic contexts. Third, consent-aware interaction design would leverage these insights to create platform affordances that help users disengage from uncomfortable interactions without confrontation, such as surfacing easy exit options when disengagement is detected. Fourth, longitudinal interaction summarization would provide users with post-hoc analysis of their video dates, highlighting patterns like moments of high mutual engagement or withdrawal to support self-awareness.
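To make the engagement-asymmetry idea concrete, here is a minimal Python sketch. It assumes per-frame engagement scores in [0, 1] for each partner are already derived upstream from gaze and facial features; the score scale and function names are illustrative choices, not values or APIs from the paper (the 45-second window falls inside the 30-to-60-second range the researchers propose).

```python
# Hedged sketch of engagement-asymmetry modeling over a sliding window.
# Assumption: per-frame engagement scores in [0, 1] per partner, computed
# upstream from gaze-on-partner and facial action unit intensities.
import numpy as np

def asymmetry_index(eng_a: np.ndarray, eng_b: np.ndarray,
                    fps: int = 30, window_s: int = 45) -> np.ndarray:
    """Signed difference in mean engagement per sliding window.

    Positive values mean partner A is more engaged than partner B.
    """
    win = fps * window_s                       # frames per window
    n = max(min(len(eng_a), len(eng_b)) - win + 1, 0)
    idx = np.empty(n)
    for t in range(n):
        idx[t] = eng_a[t:t + win].mean() - eng_b[t:t + win].mean()
    return idx

# Toy example: partner A stays attentive while partner B drifts off
# over two minutes, so the asymmetry index grows over time.
fps, secs = 30, 120
a = np.clip(0.8 + 0.05 * np.random.randn(fps * secs), 0, 1)
b = np.clip(np.linspace(0.8, 0.2, fps * secs)
            + 0.05 * np.random.randn(fps * secs), 0, 1)
print(asymmetry_index(a, b).max())
```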

Methodologically, the agenda builds on established computer vision techniques, including facial action unit detection through tools like OpenFace 2.0, which processes facial behavior in real time at 30 frames per second on consumer hardware, and gaze estimation, which achieves under 4 degrees mean angular error in unconstrained settings. The researchers define discomfort as a sustained co-occurrence of specific facial action patterns persisting beyond two seconds, combined with gaze aversion exceeding individual baselines by more than 1.5 standard deviations, with labels validated against participant self-report. For engagement asymmetry, they propose using temporal windows of 30 to 60 seconds to balance sensitivity and noise, with parameters established empirically through correlation with self-reported comfort ratings. A critical component is the need for new purpose-built datasets collected under dyadic consent protocols, as existing datasets like RECOLA and SEWA DB lack annotations for consent-relevant behaviors in romantic contexts.
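That discomfort definition lends itself to a compact sketch. The Python below is a hedged illustration of the rule as described above, assuming action unit intensities (AU4, brow lowerer; AU15, lip corner depressor) and a gaze-aversion signal have already been extracted by a tool like OpenFace 2.0; the function names, AU intensity threshold, and exact feature choices are our assumptions, not the paper's implementation.

```python
# Illustrative encoding of the stated rule: target action units co-occur
# for more than 2 s, while gaze aversion exceeds the user's own baseline
# by more than 1.5 standard deviations. Feature extraction happens upstream.
import numpy as np

FPS = 30
MIN_DURATION_S = 2.0          # sustained-co-occurrence threshold (from paper)
GAZE_SD_FACTOR = 1.5          # baseline-relative aversion threshold (from paper)

def discomfort_frames(au4: np.ndarray, au15: np.ndarray,
                      gaze_aversion: np.ndarray,
                      baseline_mean: float, baseline_sd: float,
                      au_thresh: float = 1.0) -> np.ndarray:
    """Boolean mask of frames where all three conditions hold at once."""
    au_on = (au4 >= au_thresh) & (au15 >= au_thresh)   # brow lowerer + lip corner depressor
    gaze_on = gaze_aversion > baseline_mean + GAZE_SD_FACTOR * baseline_sd
    return au_on & gaze_on

def sustained_events(mask: np.ndarray, fps: int = FPS,
                     min_s: float = MIN_DURATION_S) -> list[tuple[int, int]]:
    """Return (start, end) frame spans where the mask holds for > min_s."""
    events, start = [], None
    for i, on in enumerate(mask):
        if on and start is None:
            start = i
        elif not on and start is not None:
            if i - start > min_s * fps:
                events.append((start, i))
            start = None
    if start is not None and len(mask) - start > min_s * fps:
        events.append((start, len(mask)))
    return events
```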

The paper highlights that the absence of nonverbal signals in current dating platforms creates measurable harms, with research showing that women disproportionately rely on cues like gaze aversion and reduced smiling to communicate disinterest safely. For example, studies cited in the paper demonstrate that men systematically misinterpret friendly behavior as sexual interest in text-based communication, a misperception that nonverbal cues can partially correct. The proposed affective vision tools could mitigate these issues by providing user-controlled awareness, such as private end-of-interaction summaries that highlight moments of detected discomfort, processed only on the user's device to prevent surveillance. However, the researchers emphasize that these systems must demonstrate equitable performance across demographic groups, as biases in existing affective computing systems are well documented, such as emotion classifiers rating Black faces as expressing more negative emotion than White faces with equivalent expressions.
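As a rough illustration of how such a private summary could stay on the user's device, the sketch below turns detected discomfort spans (like those produced by the detector sketched earlier) into a local, user-facing report. The data structures and field names are hypothetical; the only design point taken from the paper is that nothing is transmitted off-device.

```python
# Sketch of a private, on-device end-of-interaction summary. The summary
# is rendered locally for the user only; no data leaves the device.
from dataclasses import dataclass

@dataclass
class InteractionSummary:
    duration_s: float
    discomfort_events: list[tuple[float, float]]  # (start_s, end_s) in seconds

def summarize(events: list[tuple[int, int]], total_frames: int,
              fps: int = 30) -> InteractionSummary:
    """Convert frame-indexed event spans into a seconds-based summary."""
    spans = [(s / fps, e / fps) for s, e in events]
    return InteractionSummary(duration_s=total_frames / fps,
                              discomfort_events=spans)

def render(summary: InteractionSummary) -> str:
    """Plain-text report shown only to the local user, never uploaded."""
    lines = [f"Call length: {summary.duration_s / 60:.1f} min"]
    for s, e in summary.discomfort_events:
        lines.append(f"  possible discomfort around {s / 60:.1f}-{e / 60:.1f} min")
    return "\n".join(lines)
```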

In terms of societal implications, this research agenda could empower users by giving them more legible information about their interactions, potentially reducing the burden on vulnerable individuals to communicate refusal explicitly in unsafe situations. It aligns with broader efforts in participatory safety research, which has documented women's design priorities for safer dating interfaces. However, the paper also outlines significant limitations and risks, including the need for rigorous fairness evaluation across race, gender identity, neurotype, and cultural background to avoid biased outputs. For instance, neurodivergent users may display atypical facial affect that confounds model assumptions, and cultural variations in expression norms mean that gaze aversion might signal discomfort in one context but politeness in another. Additionally, privacy concerns are paramount, with the proposal advocating only for user-facing tools that process data on-device to prevent emotional information from becoming platform surveillance infrastructure.
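One way to operationalize the fairness evaluation the paper calls for is a per-group error audit, sketched below under the assumption that boolean detector outputs and self-reported ground-truth labels are available per participant. The group labels, metric choices, and array layout are placeholders, not a protocol from the paper.

```python
# Minimal sketch of a per-group fairness audit: compare detector error
# rates across self-identified demographic groups, using self-reported
# discomfort as ground truth. All inputs are boolean/label arrays.
import numpy as np

def per_group_rates(y_true: np.ndarray, y_pred: np.ndarray,
                    groups: np.ndarray) -> dict[str, dict[str, float]]:
    """True-positive and false-positive rates per demographic group."""
    out = {}
    for g in np.unique(groups):
        m = groups == g
        tp = np.sum(y_pred[m] & y_true[m])
        fp = np.sum(y_pred[m] & ~y_true[m])
        out[str(g)] = {
            "tpr": tp / max(np.sum(y_true[m]), 1),
            "fpr": fp / max(np.sum(~y_true[m]), 1),
        }
    return out

# A large gap in per-group TPR or FPR would flag exactly the kind of bias
# the paper warns about, e.g. over-attributing negative affect to one group.
```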

The researchers acknowledge several open questions and limitations that must be addressed before responsible deployment. Key issues include the dyadic consent problem, where one party's agreement to video processing may not cover the other's data, requiring granular, revocable consent mechanisms. There is also a dual-use risk, as affective tools could be weaponized in intimate partner abuse contexts, though architectural safeguards like on-device processing aim to reduce this. Regulatory gaps persist, with the EU AI Act prohibiting emotion recognition in workplaces but not explicitly governing dating contexts, highlighting the need for new legal frameworks. Ultimately, the paper does not claim that affective computer vision will solve dating safety alone; it must integrate with platform policies, legal frameworks, and cultural norms, and current models face technical limits like imperfect accuracy and cultural bias that require human oversight and ethical responsibility.
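The dyadic consent problem suggests a simple architectural invariant: analysis runs only while both parties' consent is currently granted, and either party can revoke mid-call. The sketch below illustrates one such revocable gate; the class and method names are our assumptions, not a mechanism specified in the paper.

```python
# Illustrative dyadic, revocable consent gate: video frames are analyzed
# only while *both* parties' consent flags are granted, and either party
# can revoke at any point during the call.
from dataclasses import dataclass, field

@dataclass
class ConsentState:
    granted: dict[str, bool] = field(default_factory=dict)

    def grant(self, user_id: str) -> None:
        self.granted[user_id] = True

    def revoke(self, user_id: str) -> None:
        self.granted[user_id] = False   # takes effect immediately

    def processing_allowed(self, party_a: str, party_b: str) -> bool:
        # Dyadic rule: one party's consent never covers the other's data.
        return self.granted.get(party_a, False) and self.granted.get(party_b, False)

consent = ConsentState()
consent.grant("alice")
consent.grant("bob")
assert consent.processing_allowed("alice", "bob")
consent.revoke("bob")                   # analysis must stop at once
assert not consent.processing_allowed("alice", "bob")
```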

Original Source

Read the complete research paper

View on arXiv

About the Author

Guilherme A.

Former dentist (MD) from Brazil, 41 years old, husband, and AI enthusiast. In 2020, he transitioned from a decade-long career in dentistry to pursue his passion for technology, entrepreneurship, and helping others grow.

Connect on LinkedIn