
How Journalists Can Shape AI Tools Before They Shape News

Researchers propose integrating journalistic values like diversity and transparency into algorithm design from the start—not as an afterthought—to preserve journalism's role in democracy.

AI Research
March 26, 2026
4 min read

As artificial intelligence tools become increasingly embedded in newsrooms, from automated content generation to personalized news recommendations, a critical question emerges: How can journalism's core values be preserved when algorithms are making editorial decisions? A new analysis reveals that the design of these algorithm-powered tools—not just their use—fundamentally shapes what news gets produced, how it's distributed, and ultimately, what information reaches the public. The researchers argue that to prevent algorithms from undermining journalism's democratic functions, journalistic approaches must be integrated into algorithm design from the beginning, rather than being added later to address concerns.

Key findings from the analysis show that algorithm design affects every stage of the journalistic process, from news production to dissemination. The researchers identify three crucial aspects that must be considered when designing algorithms for journalism: what features the algorithm should consider (like content topics or user interests), how these features should be weighted, and what the algorithm should optimize for. Traditionally, many algorithms have been optimized for clicks, but the analysis suggests this short-term focus may conflict with long-term journalistic goals like maintaining reputation and serving public interests. The paper emphasizes that different algorithm designs lead to different outcomes—a popularity-based algorithm that ranks articles by clicks operates very differently from a semantic-filtering algorithm that recommends content based on topic similarity.
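To make the design contrast concrete, here is a minimal, hypothetical sketch (not code from the paper) of the two ranking strategies described above. The `Article` fields and the Jaccard-overlap similarity are illustrative assumptions; the point is that the choice of feature and optimization target alone produces different front pages from the same article pool.

```python
from dataclasses import dataclass

@dataclass
class Article:
    title: str
    topics: set   # e.g. {"politics", "climate"}
    clicks: int

def rank_by_popularity(articles):
    """Popularity-based design: the only feature is click count,
    so the implicit optimization target is short-term engagement."""
    return sorted(articles, key=lambda a: a.clicks, reverse=True)

def rank_by_topic_similarity(articles, user_topics):
    """Semantic-filtering design: the feature is topic overlap with
    the user's interests, scored here with Jaccard similarity."""
    def similarity(a):
        union = a.topics | user_topics
        return len(a.topics & user_topics) / len(union) if union else 0.0
    return sorted(articles, key=similarity, reverse=True)

articles = [
    Article("Celebrity scandal", {"entertainment"}, clicks=9000),
    Article("Climate policy vote", {"politics", "climate"}, clicks=1200),
    Article("Local election guide", {"politics"}, clicks=800),
]

by_clicks = rank_by_popularity(articles)
by_topics = rank_by_topic_similarity(articles, {"politics", "climate"})
print(by_clicks[0].title)  # the most-clicked article leads
print(by_topics[0].title)  # the best topical match leads
```

Run on the same three articles, the popularity ranker surfaces the most-clicked story while the semantic filter surfaces the closest topical match, illustrating how the weighting choice, not the content pool, determines the outcome.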

The methodology for studying how journalistic values can be incorporated into algorithm design involves multiple approaches, with qualitative studies playing a significant role in understanding these emerging practices. Researchers have conducted in-depth interviews with practitioners in Dutch and Swiss news organizations to examine how journalists can stay involved in algorithmic decision-making processes. Other studies have employed participatory and co-creation approaches, including design thinking with journalism students to explore how journalistic standards can be integrated into AI applications for creating video content. Ethnographic approaches have also been valuable, with researchers using design ethnography to study algorithmic design practices at organizations like the BBC, identifying strategies for blending editorial values with technological capabilities.

Findings from various studies cited in the analysis reveal both challenges and promising directions. A survey across six countries found that most respondents believe journalists already use AI for tasks like writing whole articles, creating pressure to address value integration now. Research on news recommender systems—one of the most studied algorithm-powered tools in journalism—shows growing recognition that optimizing solely for accuracy in predicting user clicks may undermine other important values. Studies have explored how to operationalize journalistic values like diversity and serendipity in algorithm design, with researchers proposing frameworks for incorporating normatively grounded diversity metrics and designing for "navigable surprise" that balances relevance with unexpected discoveries. The analysis also notes that algorithm audits and agent-based testing can quantitatively measure whether algorithms meet stated goals, such as avoiding information bubbles.
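One way such a diversity value could be operationalized is shown in this hypothetical sketch (the specific metric is an illustrative assumption, not one proposed in the paper): scoring a recommendation slate by the Shannon entropy of its topic distribution, which an audit could track alongside click accuracy.

```python
import math
from collections import Counter

def topic_entropy(recommended_topics):
    """Shannon entropy of the topic distribution in a recommendation
    slate: 0.0 when every slot covers the same topic, and higher as
    coverage spreads evenly across more topics."""
    counts = Counter(recommended_topics)
    total = sum(counts.values())
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

homogeneous = ["sports"] * 5
mixed = ["politics", "climate", "sports", "culture", "economy"]
print(topic_entropy(homogeneous))  # 0.0 — a single-topic slate
print(topic_entropy(mixed))        # log2(5), the maximum for 5 slots
```

A metric like this is only one operationalization of diversity; normatively grounded frameworks of the kind the analysis cites would also weigh which topics matter for public debate, not just how evenly a slate is spread.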

The implications of this research are substantial for how news organizations approach technology adoption. As the paper notes, the discourse has shifted from whether journalistic organizations should use algorithmic tools to how they should apply them. Successful integration requires advancing algorithmic and AI literacy among media practitioners, enabling them to better understand how journalistic practices can be translated into algorithmic design. The analysis highlights that transparency is crucial not only for tools used by audiences but equally important for journalists working in newsrooms, who need to understand how algorithms affect the content they produce. This is particularly challenging with tools like large language models, where inner workings are less transparent than with simpler algorithms like popularity-based ranking systems.

Several limitations and open questions remain, according to the analysis. Most research on integrating journalistic approaches into algorithm design currently focuses on the Global North, with limited studies examining practices in the Global South, where resources are often constrained and organizations may rely on off-the-shelf tools developed elsewhere. There's also significant variation in how algorithm-powered tools are implemented across different media systems and news organizations, with many running pilot projects but having fewer systems in full production than public perception might suggest. The analysis notes that balancing public interests with economic purposes becomes particularly challenging when processes are automated by algorithms, requiring decisions to be made proactively rather than reactively. Additionally, cultural differences in how journalistic values are interpreted globally must be considered to avoid suppressing regional perspectives in algorithm design.

Future research directions identified in the analysis include examining how approaches specific to different journalism cultures and media systems can be integrated into algorithm design, considering that while some values like truth and accountability may be universal, their interpretation varies across contexts. Another important direction involves exploring the transformative potential of advancing generative AI technologies and their adoption by journalists, as new models for text, image, and sound generation continue to emerge. The researchers emphasize that truly responsible use of journalistic AI requires more than just identifying values—it demands responsible organization of processes for implementing, measuring, and continuously improving how AI tools live up to these values in practice.

Original Source

Read the complete research paper

View on arXiv

About the Author

Guilherme A.

Former dentist (MD) from Brazil, 41 years old, husband, and AI enthusiast. In 2020, he transitioned from a decade-long career in dentistry to pursue his passion for technology, entrepreneurship, and helping others grow.

Connect on LinkedIn