A new interface for controlling mobile robots allows users to draw navigation paths directly on the physical floor using hand gestures in mixed reality, bridging a gap in how humans specify robot movements. In environments like warehouses, hospitals, and offices, autonomous robots typically rely on geometric path planners that optimize for factors like shortest distance, but these systems offer limited ways for operators to explicitly define preferred routes based on spatial intentions, such as avoiding pedestrian flow or maintaining comfortable clearance. This limitation often forces users to resort to indirect workarounds, such as tuning cost parameters or setting multiple waypoints, which may not accurately capture the desired path. The MRReP system addresses this by enabling direct, intuitive path specification in the real world, making robot control more aligned with human spatial thinking.
The researchers developed MRReP, a mixed reality-based interface in which users wear a HoloLens 2 headset and draw a Hand-drawn Reference Path (HRP) on the floor through hand gestures; the path is then integrated into the robot's navigation stack. This approach eliminates the need for mental translation between 2D maps and the 3D environment, a common burden in conventional interfaces. The system includes three main functions: ADD for creating paths via pinching gestures that generate waypoints, CLEAR for deleting paths, and SEND for transmitting the path data to the robot's control system. A custom Hand-drawn Reference Path Planner converts the drawn point sequence into a global path for autonomous navigation, using communication links between Unity, ROS 2, and the robot hardware. The coordinate systems are aligned via QR code recognition, ensuring accurate placement of paths in the physical space.
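To make the planner step concrete, here is a minimal sketch of how a sparse sequence of pinch-gesture waypoints could be turned into a dense global path. The function name, the floor-plane (x, y) representation, and the 5 cm default step are illustrative assumptions, not the paper's actual implementation.

```python
import math

def resample_path(waypoints, step=0.05):
    """Resample sparse pinch-gesture waypoints into an evenly spaced path.

    A hedged sketch of what a Hand-drawn Reference Path Planner might do:
    (x, y) waypoints on the floor plane are linearly interpolated at a
    fixed step so the navigation stack receives a dense global path.
    """
    if len(waypoints) < 2:
        return list(waypoints)
    path = [waypoints[0]]
    for (x0, y0), (x1, y1) in zip(waypoints, waypoints[1:]):
        seg_len = math.hypot(x1 - x0, y1 - y0)
        n = max(1, int(seg_len / step))  # interpolation points per segment
        for i in range(1, n + 1):
            t = i / n
            path.append((x0 + t * (x1 - x0), y0 + t * (y1 - y0)))
    return path

# Three pinch waypoints forming an L-shaped hand-drawn reference path.
hrp = resample_path([(0.0, 0.0), (1.0, 0.0), (1.0, 1.0)], step=0.1)
print(len(hrp), hrp[0], hrp[-1])
```

In a real system the resampled points would then be published to the robot (e.g. over a ROS 2 topic) after being transformed from the headset's frame into the map frame established by the QR code alignment.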
In a within-subject experiment with 16 participants, MRReP was compared against a conventional 2D baseline interface where users drew paths on a laptop map with a mouse. The evaluation focused on hand-drawn path accuracy, measured by how closely the HRP matched a target path marked with tape on the floor. The results showed that MRReP significantly improved fidelity: for a straight path (Stage A), precision increased from 71.6% with the 2D system to 84.9% with MRReP, and recall rose from 65.5% to 91.6%. For a more complex piecewise linear path with turns (Stage B), precision improved from 52.9% to 83.6%, and recall from 59.0% to 83.7%. These metrics indicate that MRReP allowed users to reproduce target trajectories more faithfully, with less deviation and lower inter-participant variability, as visualized in global path and robot trajectory comparisons.
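One common way to compute path-level precision and recall is point matching with a distance tolerance; the sketch below illustrates that idea. The function name and the tolerance value are assumptions, and the paper's exact metric definition may differ.

```python
import math

def path_precision_recall(drawn, target, tol=0.05):
    """Point-matching precision/recall between two sampled 2D paths.

    Precision: share of drawn points lying within `tol` metres of some
    target point. Recall: share of target points covered by the drawn
    path. Both paths are lists of (x, y) samples.
    """
    def near(p, pts):
        return any(math.hypot(p[0] - q[0], p[1] - q[1]) <= tol for q in pts)

    precision = sum(near(p, target) for p in drawn) / len(drawn)
    recall = sum(near(q, drawn) for q in target) / len(target)
    return precision, recall

# A drawn path that overshoots a straight 1 m target path by 20 cm:
# the extra points hurt precision, while recall stays perfect.
target = [(i / 10, 0.0) for i in range(11)]   # samples from (0,0) to (1.0,0)
drawn = [(i / 10, 0.0) for i in range(13)]    # samples from (0,0) to (1.2,0)
p, r = path_precision_recall(drawn, target)
print(round(p, 3), round(r, 3))  # precision < 1.0, recall == 1.0
```

Under this formulation, a drawn path that wanders away from the target loses precision, while one that misses sections of the target loses recall, matching the intuition behind the reported Stage A and Stage B numbers.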
The implications of this research are practical for real-world applications where precise robot navigation is critical in human-shared spaces. By enabling direct path specification in the physical environment, MRReP enhances usability and reduces cognitive load, as evidenced by higher System Usability Scale scores (median 75.0 for MRReP vs. 51.3 for 2D) and lower NASA Task Load Index scores (median 47.7 vs. 61.5). This could lead to more efficient operations in settings like logistics or healthcare, where operators need to quickly and accurately guide robots without extensive training. The system's ability to capture human spatial intentions more directly may also support safer interactions, as robots can follow routes that avoid interference with people or sensitive areas, though the paper does not explicitly discuss safety outcomes.
Despite its advantages, MRReP has limitations noted in the study. The system required more task completion time than the 2D baseline, with median times of 53.0 seconds vs. 29.6 seconds for Stage A, likely because gesture-based input involves physical movement through the space. Participants reported issues such as arm fatigue and stress from gesture misrecognition, highlighting areas for improvement in ergonomics and interface reliability. Additionally, the experiment was conducted in a controlled environment with predefined paths, and further research is needed to assess performance in dynamic, unstructured settings. The paper concludes that direct path specification in mixed reality is effective but acknowledges these hurdles for broader adoption.
Original Source
Read the complete research paper
About the Author
Guilherme A.
Former dentist (MD) from Brazil, 41 years old, husband, and AI enthusiast. In 2020, he transitioned from a decade-long career in dentistry to pursue his passion for technology, entrepreneurship, and helping others grow.