AIResearch

Robots Learn to Trust and Cooperate with Humans by Understanding Intent and Capabilities

A new AI method helps robots and humans work together more effectively by calibrating mutual understanding, leading to higher performance and trust in collaborative tasks.

AI Research
November 14, 2025

In homes and workplaces, robots are becoming common assistants, but their success hinges on how well they understand and are understood by humans. When people and robots misjudge each other's intentions or abilities, it can lead to frustration, errors, or even accidents. A study from the National University of Singapore tackles this challenge by developing a system that enables robots to calibrate their beliefs about human goals and capabilities during collaboration, resulting in improved teamwork and trust. This advancement is crucial as robots take on roles in caregiving, manufacturing, and daily tasks, where seamless interaction can enhance safety and efficiency.

The researchers discovered that by modeling and adjusting for human intent and capabilities, robots can achieve better outcomes in collaborative scenarios. In their experiments, robots using this method earned higher rewards in tasks like item collection, where humans and robots worked together without direct communication. The key finding is that mutual calibration—where both the robot and human update their beliefs about each other—leads to more accurate understanding and, consequently, better performance over repeated interactions.

To implement this, the team designed a decision-theoretic framework called the Trust-Intent-Capability-Calibration Partially Observable Markov Decision Process (TICC-POMDP). This model allows the robot to infer the human's goals from observed actions and to estimate their success rates at tasks, such as picking up objects. Unlike standard approaches, it explicitly incorporates uncertainty about capabilities, treating them as probabilities that can be learned through interaction. The robot uses an online solver, TICC-MCP, which plans actions by simulating possible outcomes, encouraging behaviors that teach humans about the robot's own limitations—for example, by deliberately failing a task to communicate incapability.
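The paper's full solver is considerably more involved, but the core idea of treating a partner's capability as a probability learned through interaction can be sketched with a simple Beta-Bernoulli belief update. This is an illustrative sketch, not the authors' implementation; the class and field names here are hypothetical:

```python
from dataclasses import dataclass

@dataclass
class CapabilityBelief:
    """Beta-Bernoulli belief over the probability that a partner
    succeeds at a task (e.g. picking up a particular cup)."""
    successes: float = 1.0  # Beta prior pseudo-counts; (1, 1) is a
    failures: float = 1.0   # uniform prior over the success rate

    def update(self, succeeded: bool) -> None:
        # Bayesian update: each observed attempt shifts the posterior.
        if succeeded:
            self.successes += 1
        else:
            self.failures += 1

    @property
    def mean(self) -> float:
        # Posterior mean estimate of the success probability.
        return self.successes / (self.successes + self.failures)

# After watching three successful pickups and one failure, the robot's
# estimate of the human's success rate moves from 0.5 toward ~0.67.
belief = CapabilityBelief()
for outcome in [True, True, False, True]:
    belief.update(outcome)
print(round(belief.mean, 2))  # → 0.67
```

In a planner like the one described above, beliefs of this kind would feed into the simulated rollouts, so that actions are chosen with the partner's estimated abilities taken into account.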

In simulation experiments, the method was tested against a standard POMCP solver without explicit calibration. Across variations in the number of search samples, item types, and goal lists, TICC-MCP consistently achieved higher task rewards, as shown in Figure 2 of the paper. For instance, in one setup with 10 shopping lists and 5 item types, it quickly learned human capabilities, yielding a performance boost in the evaluation stages. Human-subject experiments involved 28 participants collaborating with a Fetch robot on a tabletop shopping task. In this task, the robot could not pick up small sweets and succeeded with cups 80% of the time, while humans, due to faults, succeeded with certain cups only 50% of the time. Teams using TICC-MCP earned significantly higher rewards in later rounds, with one round showing a statistically significant improvement after Bonferroni correction. Participants also reported higher trust in the robot, supporting the hypothesis that calibration fosters better human-robot relationships.

This research matters because it addresses real-world issues in human-robot collaboration, such as in assistive technologies or industrial settings, where misaligned expectations can hinder adoption. By improving mutual understanding, robots can adapt to individual users, reducing risks of under-trust or over-reliance. For everyday readers, this means future robots could assist more effectively in tasks like grocery shopping or home care, making interactions smoother and more reliable.

However, the study has limitations. The experiments were conducted in controlled environments with short-term interactions, and it remains unknown how well the method scales to longer-term scenarios or more complex tasks. The paper notes that factors like varying attention levels or different types of collaborative activities were not explored, suggesting areas for future research to ensure broad applicability.

Original Source

Read the complete research paper on arXiv.

About the Author

Guilherme A.

Former dentist (MD) from Brazil, 41 years old, husband, and AI enthusiast. In 2020, he transitioned from a decade-long career in dentistry to pursue his passion for technology, entrepreneurship, and helping others grow.
