As artificial intelligence becomes a daily companion for millions, a fundamental question emerges: why do people trust AI? A new study suggests the answer may lie not in the technology's brilliance but in human fallibility. Researchers have identified a cognitive mechanism they call 'deferred trust,' in which distrust in human agents (peers, adults, or religious figures) redirects reliance toward AI systems perceived as more neutral or competent. This finding challenges the notion that AI trust is built solely on performance metrics, revealing it instead as a complex social and psychological compensation. For a society increasingly leaning on chatbots and voice assistants for guidance, understanding this shift is crucial to navigating the risks of over-reliance and ensuring these tools augment rather than undermine human judgment.
The study, involving 55 undergraduate students from Colombia, presented participants with 30 decision-making scenarios spanning factual, emotional, and moral dilemmas. Participants chose a guiding agent from options including AI (such as ChatGPT), voice assistants (such as Alexa), peers, adults, and priests. Overall, adults were the most selected agents at 35.05%, but AI came in a strong second at 28.29%, indicating its significant role as an alternative to human advisors. This distribution highlights how far AI has been integrated into decision-making, yet the edge held by adults suggests a persistent bias toward human advisors. The researchers used clustering analyses to uncover patterns, finding that AI dominated in factual scenarios, such as historical inquiries or recipe preparation, while humans prevailed in social and moral contexts. This context-specific preference underscores that trust in AI is not uniform but shaped by the nature of the problem at hand.
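To make the scenario-level analysis concrete, here is a minimal sketch of K-Modes clustering (the categorical-data method the study reports for scenarios) on a synthetic stand-in matrix of agent choices; the agent labels, matrix shape, and cluster count are assumptions for illustration, not the study's actual data.

```python
# Minimal sketch: cluster scenarios by the pattern of agents chosen.
# Data here is a synthetic stand-in, not the study's dataset.
import numpy as np
from kmodes.kmodes import KModes

rng = np.random.default_rng(0)
agents = np.array(["ai", "voice_assistant", "peer", "adult", "priest"])

# Hypothetical layout: 30 scenarios (rows) x 55 participants (columns),
# each cell holding the agent that participant chose for that scenario.
choices = agents[rng.integers(0, len(agents), size=(30, 55))]

# K-Modes handles categorical features directly (no one-hot encoding).
km = KModes(n_clusters=3, init="Huang", n_init=5, random_state=42)
scenario_clusters = km.fit_predict(choices)

for c in range(3):
    members = np.where(scenario_clusters == c)[0] + 1  # 1-based scenario IDs
    print(f"cluster {c}: scenarios {members}")
```

On real data, clusters like these would group scenarios that elicit similar agent choices, which is how a "factual scenarios favor AI" pattern can surface.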
Methodologically, the study employed a novel experiment named 'Trust in AI: Situations by Specific Nature,' designed and checked for internal consistency and external validity. Participants completed a sociodemographic questionnaire capturing variables such as age, technology use, socioeconomic level, and prior trust in various agents. The experiment's 30 scenarios, presented in Spanish with graphical agent descriptions, required selections that were analyzed using K-Modes clustering for scenarios and K-Means clustering for participant profiles. To predict AI selection, the researchers applied eXtreme Gradient Boosting (XGBoost) models with SHAP interpretations, incorporating sociodemographic and prior-trust variables. This computational approach allowed rigorous identification of the factors influencing trust, with models tuned via Bayesian optimization and evaluated through metrics such as average precision, which reached up to 0.863. The methodology's strength lies in its blend of behavioral data and machine learning, providing a nuanced view of trust dynamics beyond simple surveys.
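As a rough illustration of that pipeline, the sketch below trains an XGBoost classifier on made-up sociodemographic and prior-trust features, scores it with average precision, and inspects SHAP attributions. All column names, scales, and the toy target are assumptions for illustration; they are not the study's variables, tuning procedure, or results.

```python
# Hedged sketch of an XGBoost + SHAP pipeline on synthetic data.
import numpy as np
import pandas as pd
import shap
import xgboost as xgb
from sklearn.metrics import average_precision_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
n = 550  # e.g., 55 participants x 10 scenarios of one type (hypothetical)

X = pd.DataFrame({
    "age": rng.integers(18, 30, n),
    "technology_use": rng.integers(1, 6, n),       # hypothetical 1-5 scale
    "socioeconomic_level": rng.integers(1, 7, n),  # hypothetical strata
    "trust_adults": rng.integers(1, 6, n),
    "trust_peers": rng.integers(1, 6, n),
    "trust_priests": rng.integers(1, 6, n),
})
# Toy target encoding a deferred-trust pattern: lower trust in humans
# makes selecting AI more likely.
logits = 2.5 - 0.4 * X["trust_adults"] - 0.3 * X["trust_peers"]
y = (rng.random(n) < 1 / (1 + np.exp(-logits))).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=42)
model = xgb.XGBClassifier(n_estimators=200, max_depth=3,
                          learning_rate=0.05, eval_metric="logloss")
model.fit(X_tr, y_tr)

# Average precision, the metric the study reports for model evaluation.
ap = average_precision_score(y_te, model.predict_proba(X_te)[:, 1])
print(f"average precision: {ap:.3f}")

# SHAP values attribute each prediction to the input features; averaging
# their magnitudes ranks features by overall influence.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X_te)
print(pd.Series(np.abs(shap_values).mean(axis=0),
                index=X.columns).sort_values(ascending=False))
```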
The results revealed compelling insights into what drives AI trust. SHAP analyses for scenarios where AI was preferred (situations 24, 26, and 9) showed that lower prior trust in human agents (priests, peers, and adults) consistently predicted higher AI selection, with inverse relationships evident in the data. For example, in situation 26, trust in adults had the strongest inverse link to AI choice, with SHAP values ranging from -0.3 to 0.2. This supports the deferred trust mechanism, in which distrust in humans compensatorily shifts reliance to AI. Additionally, age and technology use showed inverse associations with AI selection, suggesting that older participants and those more familiar with technology exhibited lower trust, possibly due to heightened vigilance. Clustering participants into three profiles further distinguished those with higher AI trust, who were characterized by distrust in humans, lower technology use, and higher socioeconomic status. These findings, backed by model performance metrics such as an average precision of 0.7961 for cluster prediction, provide empirical evidence for deferred trust as a key driver in AI adoption.
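For completeness, here is a minimal sketch of how such participant profiles can be recovered with K-Means, assuming standardized numeric features per participant; the feature names and data are illustrative stand-ins, not the study's actual variables.

```python
# Minimal sketch: group participants into three profiles via K-Means.
# Features and values are hypothetical stand-ins.
import numpy as np
import pandas as pd
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(2)
profiles = pd.DataFrame({
    "ai_selection_rate": rng.random(55),      # share of 30 scenarios
    "trust_humans": rng.integers(1, 6, 55),   # hypothetical composite score
    "technology_use": rng.integers(1, 6, 55),
    "socioeconomic_level": rng.integers(1, 7, 55),
})

# Standardize so no single scale dominates the Euclidean distances.
scaled = StandardScaler().fit_transform(profiles)
kmeans = KMeans(n_clusters=3, n_init=10, random_state=42).fit(scaled)

# Per-cluster means are how a profile such as "high AI trust, low human
# trust" would be read off the clustering.
print(profiles.groupby(kmeans.labels_).mean().round(2))
```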
The implications of this research extend beyond academic circles, touching on real-world interactions with AI. By framing deferred trust as a compensatory mechanism, the study suggests that trust in AI often arises reactively from human shortcomings rather than from proactive endorsement of machine reliability. This has significant consequences for how AI is designed and deployed; for instance, transparency and explainability become critical to preventing over-reliance when users turn to AI out of distrust in human alternatives. The study also highlights risks such as fluency effects, where the smooth, authoritative responses of large language models can erode epistemic vigilance and lead to uncritical deference. In practical terms, educators and developers need to foster AI literacy that helps users calibrate trust appropriately, ensuring AI complements human judgment rather than replacing it. As systems like ChatGPT become ubiquitous, understanding these trust dynamics is essential for mitigating societal shifts in which interpersonal trust erodes in favor of algorithmic dependence.
Despite its insights, the study has limitations that warrant caution. The sample consisted exclusively of university students, a homogeneous group with high educational attainment and technology familiarity, which may limit generalizability to broader populations such as older adults or different cultural contexts. Additionally, the experimental setup used static, text-based scenarios, which, while suitable for establishing internal consistency, may not fully capture the richness of real-world AI interactions involving conversational interfaces or embodied agents. The absence of multimodal data (such as physiological or neurocognitive markers) restricts the ability to disentangle cognitive vigilance from affective responses, leaving gaps in understanding the emotional dimensions of trust. Future research should expand to diverse samples, incorporate immersive designs, and explore deferred trust across different contexts to refine this framework. These limitations underscore the exploratory nature of the findings, emphasizing the need for replication and deeper investigation into how trust in AI evolves in our increasingly digital lives.