Ethics
AI Sales Agents Fail at Trust, Not Just Selling
A new study reveals that AI sales agents often execute perfect sales pitches but fail to build user trust, leading to zero conversions—highlighting a critical gap in how AI performance is evaluated.
AI Models Show Four Distinct Ethical Processing Styles
A new study reveals that language models process ethical instructions in fundamentally different ways, with some achieving safety through shallow compliance rather than genuine deliberation—challenging current alignment practices.
AI Hiring Tools Show a Troubling Double Standard
Large language models used in hiring are more likely to recommend women for jobs but still suggest paying them less than men, revealing biases that simple fixes can't solve.
AI Mental Health Tools Miss Critical Safety Information
A new study reveals that AI chatbots often fail to provide essential safety guidance in mental health responses, with omissions more common than outright errors, especially in crisis situations.
Why AI in Healthcare Often Fails to Deliver System-Wide Change
A new analysis reveals that most AI tools in healthcare only improve local tasks without altering the underlying incentives that drive behavior, explaining why system-level transformation remains elusive.
AI Safety Tests Miss How Models Amplify Human Harm
Researchers propose measuring 'harmful capability uplift'—how much AI increases users' ability to cause damage—arguing current safety evaluations fail to capture real-world risks where humans and AI collaborate on malicious tasks.
AI Research Lacks Standards for Robot Gender
A new study reveals that one-third of AI studies manipulate robot gender without measuring it, leading to unreliable results and reinforcing stereotypes in human-robot interactions.
AI Could Make Online Dating Safer by Reading Nonverbal Cues
A new research agenda proposes using computer vision to detect discomfort and disinterest in video dates, aiming to close a communication gap that disproportionately harms women and vulnerable users.
AI Tools Shape Interns' Work in the Philippines
A study of 384 student interns reveals how AI tools like ChatGPT are becoming a key part of workplace training, boosting productivity but raising questions about skill development and ethical use.
AI Transforms Hiring with Smart Candidate Assessments
A new AI system uses language models to evaluate job candidates with human-like precision, offering detailed reports and rankings to help companies hire more fairly and efficiently.
AI Systems Overlook Black Historical Newspapers
A new study reveals that standard AI evaluation methods fail to detect critical errors in historical Black newspapers, risking the erasure of cultural context and editorial meaning from digital archives.
AI's Hidden Reasoning Flaws Threaten Cancer Care
GPT-4 makes cognitive errors in 23% of oncology note interpretations, leading to guideline-discordant recommendations that could harm patients—exposing a critical safety gap in medical AI.
Robots That Feel: The Quest for Authentic AI Empathy
Scientists are building robots that mimic human empathy, but creating machines that truly 'feel' raises profound ethical questions about consciousness and artificial suffering.
Chatbots Can Sway Your Political Views Without Lying
A new study shows that AI chatbots can shift opinions on issues like defense spending simply by framing arguments around values, raising concerns about subtle manipulation in everyday conversations.
AI Struggles to Extract Key ESG Data from Reports
A new benchmark reveals that AI systems often fail to accurately answer questions about corporate sustainability disclosures, highlighting challenges in analyzing environmental and social metrics for financial decisions.
AI Struggles to Reason with Medical Evidence
A new study reveals that even when AI retrieves correct clinical guidelines, it often fails to use them properly, raising concerns for mental health applications.
AI Can Steer Conversations Toward Human Values With Simple Prompts
A new method allows large language models to align with specific human values like benevolence or achievement through prompt design alone, without costly fine-tuning, offering a flexible approach for dynamic applications.
Turing's Test Was Never About Tricking People
A new analysis reveals that the Turing test has been misunderstood for decades, with critics blaming it for AI's societal harms while ignoring its true purpose as a conceptual benchmark for machine intelligence.
AI Ethics Gap Threatens Computing Education
A new study reveals universities are struggling to address the ethical and societal impacts of generative AI in computer science programs, creating risks for students and institutions alike.
AI Advising for Students Shows Hidden Risks
A new study reveals that large language models often give incomplete or unsupported answers to study-abroad questions, with distinct behavioral patterns that could mislead students if not properly monitored.
AI Struggles to Understand Tunisian Arabic
A new study reveals that large language models often fail to grasp the Tunisian dialect, risking cultural exclusion and pushing millions to use foreign languages for basic AI interactions.
When People Distrust Humans, They Turn to AI
A new study reveals that trust in artificial intelligence often stems from a lack of confidence in human advisors, a phenomenon called 'deferred trust' that could reshape how we interact with technology.
AI Efficiency Is Leaving Most Organizations Behind
A new research agenda argues that current AI optimization methods only work for Big Tech, creating a widening gap for hospitals, schools, and governments that need simpler, more robust solutions.
Privacy Harms Go Beyond Legal Definitions
A new study reveals that most digital privacy harms are psychological, not tangible, exposing a gap in how we protect people online.