AIResearch

AI and Peers Shape Better Academic Writers

A new study shows students use AI for structure and peers for depth, developing critical skills for an AI-driven future without losing human connection.

AI Research
March 26, 2026
4 min read

As generative AI tools like ChatGPT become common in classrooms, educators face a pressing question: how can students learn to use these technologies critically rather than relying on them superficially? A study from the University of Illinois Urbana-Champaign offers a promising answer by integrating AI and peer feedback in a graduate writing course, revealing that students develop nuanced strategies to combine machine efficiency with human insight. Over eight weeks, twelve students crafted literature reviews, receiving input from both a custom AI reviewer called CyberScholar and their peers, with the results showing distinct roles for each feedback source that together enhance writing and foster AI literacy. This approach not only improved academic work but also equipped students with transferable skills for navigating an increasingly AI-mediated world, highlighting a scalable model for higher education.

The researchers discovered that students engaged with AI and peer feedback in complementary ways, relying on AI for rubric alignment and surface-level edits while turning to peers for conceptual development and disciplinary relevance. Data from CyberScholar log files, writing drafts, and student reflections indicated that AI feedback was valued for its timeliness and structure, helping students quickly identify areas for improvement such as grammar corrections or missing components aligned with the assignment rubric. For instance, one student noted that AI provided "sheer volume of feedback in a short amount of time," making it efficient for routine tasks. In contrast, peer feedback was appreciated for its thoughtfulness and contextual awareness, with students highlighting how peers could grasp the tone and intentions behind their writing, often offering perspectives that AI overlooked. This division of labor allowed students to strategically sequence their revisions, using AI early for foundational adjustments and peers later for deeper refinement.

To investigate these dynamics, the study employed a case study–mixed methods design within an 8-week graduate course on educational technologies. Students developed 3500-word papers through multiple stages: they submitted initial drafts for AI review via CyberScholar, which used GPT-4o and a retrieval-augmented generation system to provide rubric-aligned feedback, then revised based on that input before receiving peer feedback with in-text annotations and rubric scores. The pedagogical design included structured scaffolds such as weekly discussions and written reflections, allowing researchers to analyze interaction patterns through quantitative metrics like Jaccard similarity scores and qualitative coding of student reflections. For example, log data showed that while some students engaged deeply with AI through extensive dialogues, others had minimal interaction, treating the tool as a scoring engine rather than a collaborative partner.
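To give a sense of how revision could be quantified this way, here is a minimal sketch (not the authors' code) of Jaccard similarity computed over word n-grams of two drafts; the draft texts and the choice of trigrams are illustrative assumptions:

```python
# Hedged sketch: Jaccard similarity over word n-grams, in the spirit of
# the study's n-gram comparisons. The draft texts below are invented.

def ngrams(text, n=3):
    """Return the set of word n-grams in a text."""
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def jaccard(a, b, n=3):
    """Jaccard similarity of two texts' n-gram sets (1.0 = identical)."""
    ga, gb = ngrams(a, n), ngrams(b, n)
    if not ga and not gb:
        return 1.0
    return len(ga & gb) / len(ga | gb)

draft_v1 = "generative ai tools are becoming common in graduate writing courses"
draft_v2 = "generative ai tools are now common in graduate academic writing courses"

print(f"similarity v1 to v2: {jaccard(draft_v1, draft_v2):.2f}")
```

A score near 1.0 means consecutive drafts barely changed, while a lower score signals heavier revision, which is how a trajectory of drafts could reveal whether a student edited more after AI or after peer feedback.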

The findings revealed varied engagement levels, with only a few students demonstrating deep, critical interaction with AI, such as questioning suggestions or negotiating interpretations, while most exhibited shallow or transactional use. Quantitative analysis of revision trajectories, using n-gram comparisons, showed that students like Ethan made consistent edits across drafts with slight increases after peer feedback, whereas others like Morgan relied heavily on AI for early-stage content development but made minimal changes after peer input. Reflections further illuminated that students recognized AI's strengths in providing structured, actionable guidance but critiqued its limitations, such as generic comments or lack of contextual nuance, with one student describing it as "a rigid checklist." Peer feedback, though sometimes overwhelming in volume, was valued for its emotional support and ability to foster community, as students felt encouraged by peers' enthusiasm and shared intellectual curiosity.

This study has significant implications for higher education, suggesting that hybrid feedback systems can support both writing development and critical AI literacy, preparing students for future workplaces where human-AI collaboration is essential. By integrating AI and peer input, instructors can help students avoid over-reliance on technology while leveraging its efficiencies, as evidenced by students reporting increased confidence in using AI strategically beyond the classroom. However, the research acknowledges limitations, including a small sample size of ten consenting participants and a single-course context, which may affect generalizability. Future work should explore broader implementations across disciplines to refine designs that deepen critical engagement, ensuring that AI tools enhance rather than mechanize the creative and reflective aspects of writing.

Original Source

Read the complete research paper

View on arXiv

About the Author

Guilherme A.

Former dentist (MD) from Brazil, 41 years old, husband, and AI enthusiast. In 2020, he transitioned from a decade-long career in dentistry to pursue his passion for technology, entrepreneurship, and helping others grow.

Connect on LinkedIn