AIResearch

AR Sticky Notes Change How Teams Brainstorm

A new augmented reality system captures spoken ideas instantly, but researchers found it can make collaboration more awkward and force people to think differently before they speak.

AI Research
March 26, 2026
4 min read

Imagine brainstorming with a colleague, and every time you speak, your words instantly appear as a digital sticky note floating in the air between you. This is the promise of AnchorNote, a new augmented reality system developed by researchers at Princeton University. In a study with 20 participants, the team explored how this speech-driven, spatial capture tool reshapes collaborative work, revealing that while it reduces the physical act of writing, it introduces new cognitive and social costs that can disrupt the natural flow of conversation. The findings, detailed in a paper on arXiv, highlight the trade-offs as augmented reality tools move into everyday teamwork, showing that what seems like a simple upgrade can fundamentally alter how people generate and share ideas.

The key finding from the study is that AnchorNote reshapes collaboration by shifting effort rather than eliminating it. Participants reported that the system reduced the manual burden of handwriting notes, allowing them to think out loud without interruption. However, this benefit came with a catch: effort moved from writing to monitoring the system. In Phase 1 of the study, where users triggered note creation with gestures, participants often focused on whether the system had correctly recorded and transcribed their speech, leading to distractions and increased mental effort. As one participant noted, they were "focused on the system rather than the prompt," and survey data supported this, showing higher perceived effort and more interruptions compared to using analog sticky notes. This shift underscores that speech-driven capture doesn't just remove friction; it reallocates cognitive load to new tasks, such as supervising the technology and recovering from errors like mistranscriptions.

To understand these effects, the researchers conducted a two-phase iterative study. In Phase 1, 12 participants worked in pairs using AnchorNote with gesture-based controls, comparing it to analog sticky notes in brainstorming sessions about campus-related topics. The system used AR glasses to transcribe speech live, summarize it into short titles with a large language model, and anchor notes in shared physical space, as shown in Figure 1. Phase 2 involved eight returning participants using an updated version with explicit button controls, clearer system-state indicators like "Recording" messages, and a delete function, as illustrated in Figure 3. This methodology allowed the team to diagnose breakdowns in Phase 1 and test repairs in Phase 2, using surveys, video recordings, and interviews to analyze how speech-driven capture affected coordination and ideation.
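The capture pipeline the paper describes (live transcription, LLM title summarization, spatial anchoring) can be sketched roughly as follows. All names here (`StickyNote`, `capture_note`, the word-truncating `summarize` stand-in for the actual LLM call) are illustrative assumptions, not the authors' implementation:

```python
from dataclasses import dataclass
import uuid

@dataclass
class StickyNote:
    """A speech-captured note anchored at a 3D position in shared space."""
    note_id: str
    transcript: str                   # full transcribed utterance
    title: str                        # short summary title (LLM-generated in the real system)
    position: tuple[float, float, float]  # anchor in the shared coordinate frame

def summarize(transcript: str, max_words: int = 4) -> str:
    # Stand-in for the LLM summarization step described in the paper;
    # here we simply keep the first few words of the utterance.
    return " ".join(transcript.split()[:max_words])

def capture_note(transcript: str, anchor_pos: tuple[float, float, float]) -> StickyNote:
    """Turn one finished utterance into a spatially anchored note."""
    return StickyNote(
        note_id=uuid.uuid4().hex,
        transcript=transcript,
        title=summarize(transcript),
        position=anchor_pos,
    )

# Example: a participant speaks, and a note is anchored between the pair.
note = capture_note("We could add more bike racks near the library", (0.0, 1.4, 0.8))
print(note.title)  # -> "We could add more"
```

The sketch also makes the failure modes concrete: a bad `transcript` (mistranscription) or a bad `title` (poor summarization) becomes a persistent spatial artifact, which is exactly what pushed participants into monitoring the system.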

The results, detailed in the paper, show several critical patterns. First, immediate speech-to-note externalization changed how participants formulated ideas, with many feeling pressure to pre-finalize thoughts before speaking because their words would become a permanent artifact. This sometimes suppressed tentative contributions and altered conversational rhythm. Second, interaction ambiguity in Phase 1, such as gesture misfires and transcription delays, disrupted turn-taking and forced explicit coordination, making the system feel like the "center of the conversation." In Phase 2, explicit controls reduced these issues, improving predictability and allowing capture to recede into the background. Third, spatial persistence helped with shared reference, but without a deletion mechanism in Phase 1, clutter quickly undermined sensemaking; adding delete functionality in Phase 2 enabled better workspace curation. Survey data in Table 1 corroborates these trends, with Phase 1 showing more interruptions and lower conversational smoothness compared to analog, while Phase 2 saw improvements.

The implications of this research extend beyond academic settings to any collaborative environment, from business meetings to creative workshops. For everyday readers, the study suggests that future AR tools need to balance automation with user control, prioritizing interaction legibility and fast error recovery over hands-free novelty. Designers might incorporate features like draft states or delayed publishing to support provisional thinking, making it easier for teams to share half-formed ideas without pressure. The findings also caution against assuming that digital upgrades always enhance collaboration; as the paper notes, AnchorNote does not replace analog practices but serves as an exploratory probe to understand how capture modality restructures collaborative cognition. This insight is crucial as augmented reality becomes more prevalent, reminding us that technology should adapt to human dynamics, not the other way around.
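The draft-state idea mentioned above can be modeled as a small state machine: a note starts as a private draft, is published to shared space only on an explicit action, and can be deleted to curb clutter. This is a hypothetical design sketch, not a feature of AnchorNote as built:

```python
from enum import Enum, auto

class NoteState(Enum):
    DRAFT = auto()      # visible only to the speaker, still editable
    PUBLISHED = auto()  # anchored in shared space for both partners
    DELETED = auto()    # removed, supporting workspace curation

class ProvisionalNote:
    """A note that starts as a private draft and is published explicitly."""

    def __init__(self, text: str):
        self.text = text
        self.state = NoteState.DRAFT

    def publish(self) -> None:
        # Only a live draft can be promoted to the shared workspace.
        if self.state is NoteState.DRAFT:
            self.state = NoteState.PUBLISHED

    def delete(self) -> None:
        self.state = NoteState.DELETED

# A tentative idea stays private until the speaker commits to sharing it.
note = ProvisionalNote("maybe a quiet-study zone?")
assert note.state is NoteState.DRAFT
note.publish()
assert note.state is NoteState.PUBLISHED
```

The design choice here is that publishing is opt-in rather than automatic, which directly targets the pre-finalization pressure the study observed: half-formed ideas never become permanent shared artifacts unless the speaker decides they should.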

However, the study has limitations that temper its conclusions. The small sample of 20 undergraduate participants may not represent broader populations, and the prototype relied on current AR hardware, which could affect generalizability. Phase 2 improvements might partly reflect participants' familiarity with the system rather than design changes alone, as noted in the limitations section. Additionally, reliance on AR glasses raises accessibility concerns for users with motor impairments, and speech transcription involves privacy risks. The researchers emphasize that these factors require careful consideration in future development, highlighting the need for more inclusive and transparent systems that support diverse collaborative needs without introducing new barriers.

Original Source

Read the complete research paper

View on arXiv

About the Author

Guilherme A.


Former dentist (MD) from Brazil, 41 years old, husband, and AI enthusiast. In 2020, he transitioned from a decade-long career in dentistry to pursue his passion for technology, entrepreneurship, and helping others grow.

Connect on LinkedIn