Behind the Agenda: The “Grunt Work” Myth - Rethinking AI in the Newsroom

20 Aug 2025


The prevailing narrative around AI in journalism frames automation as suitable mainly for “grunt work” - repetitive, low-stakes newsroom tasks. Yet two award-shortlisted IBC2025 technical papers challenge this assumption in practice. Their results suggest that AI’s effectiveness is not determined by whether tasks are “complex” or “routine,” but by whether they rely on explicit, rule-based knowledge versus tacit, interpretive judgment. In reframing this distinction, the article argues that AI trials not only automate tasks but also act as diagnostic tools, revealing where journalism rests on tacit cultural judgment and where explicit standards of verification prevail. This analysis calls for a rethinking of newsroom AI strategy, moving beyond the “grunt work” trope toward a more precise understanding of task suitability.

The dominant narrative shaping AI in journalism today is the “AI for grunt work” trope: automation is best suited to repetitive, low-stakes newsroom tasks. The assumption extends beyond workplace conversation: it shapes resource-allocation strategy and plays a decisive role in how audiences respond to AI in the news. This year, 93% of publishers were found to use AI for workflow automation, with only 4-5% focusing on fact-checking or verification. Meanwhile, a Reuters report noted that only 36% of readers are comfortable with AI-produced news, with trust dropping further on sensitive topics like politics, crime and local reporting.

Together, these trends demonstrate how widespread the assumption is among professionals and audiences alike. And the framing is hardly innocuous once we recognize how decisively it shapes which tasks get targeted, how pilots are resourced, and where organizations place their bets.

Yet when we look closely at different applications of AI in the newsroom, it is not clear how well the assumption fares. In this article we examine two award-shortlisted technical papers from IBC2025, each deploying AI in the newsroom and trialled in real editorial settings, to probe the limits of this assumption. Though they target very different tasks, the comparison offers a deeper insight into when AI succeeds and when it falters - and lets us rethink what makes a task AI-able, and what that teaches us about the way we conduct journalism.

Case 1: AI for News Clip Editing (NiF)

In the first case, SWR (a German public broadcaster) partnered with Television.AI to automate the editing of short 20–60 second local news clips, known as News in Film (NiF). The system was embedded in Adobe Premiere Pro and used vision-language models to analyse footage - detecting faces, shot types, scene boundaries - and to align narration with images. Editorial rules were hardcoded: avoid abrupt cuts, ensure no mismatch between narration and imagery, maintain optimal shot lengths (4–5 seconds).
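
To make concrete what “hardcoded editorial rules” can look like, here is a minimal sketch in Python. The data model, thresholds and function names are illustrative assumptions, not the actual SWR/Television.AI implementation:

    from dataclasses import dataclass

    # A simplified sketch of explicit editorial rules as automated checks.
    # The data model, thresholds and names are illustrative assumptions,
    # not the actual SWR/Television.AI implementation.

    @dataclass
    class Shot:
        start: float          # seconds into the clip
        end: float
        scene_id: int         # scene boundary label from the vision model
        narration_topic: str  # topic inferred from the narration segment
        visual_topic: str     # topic inferred from the footage

    def check_cut(shots: list[Shot], min_len: float = 4.0, max_len: float = 5.0) -> list[str]:
        """Return the explicit-rule violations found in a candidate edit."""
        issues = []
        for i, shot in enumerate(shots):
            length = shot.end - shot.start
            if not (min_len <= length <= max_len):
                issues.append(f"shot {i}: length {length:.1f}s outside {min_len}-{max_len}s")
            if shot.narration_topic != shot.visual_topic:
                issues.append(f"shot {i}: narration/image mismatch")
            if i > 0 and abs(shot.scene_id - shots[i - 1].scene_id) > 1:
                issues.append(f"shot {i}: abrupt jump across scenes")
        return issues

Every rule here is explicit and machine-checkable. As the trial showed, however, a cut can pass every such check and still fail an editor’s tacit standard for what a good NiF looks like.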

The SWR team stressed that automatic cutting showed significant potential and remains an active area of development across ARD. But in this trial, editors consistently found AI outputs misaligned with newsroom practice. There were, for example, mismatches between narration and footage or awkward jumps across scenes - requiring extensive post-editing. Crucially, manual editing was often twice as fast as AI-assisted workflows.

Case 2: Neo – A Multilingual, Citation-Grounded Chatbot

In contrast, Swedish Radio deployed Neo, a multilingual chatbot indexing over 3.5 million public service media (PSM) articles, with 3,000 new ones added daily. Built on a sophisticated Retrieval-Augmented Generation (RAG) architecture, Neo uses recursive chunking, custom embeddings (Stella), and temporal filters to ground its answers in recent, trustworthy reporting. Every response is traceable, with inline citations and timestamps. After refinements, its answer accuracy rose from 20% to 78%, and its launch recorded nearly 10,000 user queries in the first month.
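
For a sense of how citation grounding and temporal filtering fit together, here is a deliberately minimal sketch of a RAG loop. The embed() stub stands in for a real embedding model (Neo reportedly uses Stella); the corpus, identifiers and prompt are invented for illustration and are not Swedish Radio’s actual code:

    import numpy as np
    from datetime import datetime, timedelta

    # Minimal sketch of citation-grounded retrieval with a temporal filter,
    # in the spirit of Neo's RAG pipeline. Not Swedish Radio's actual code.

    def embed(text: str) -> np.ndarray:
        """Placeholder: a real system would call an embedding model here."""
        rng = np.random.default_rng(abs(hash(text)) % (2**32))
        v = rng.standard_normal(384)
        return v / np.linalg.norm(v)

    ARTICLES = [  # stand-ins for indexed PSM articles
        {"id": "sr-101", "text": "Parliament passed the budget...", "published": datetime(2025, 8, 1)},
        {"id": "sr-102", "text": "An older analysis of the budget...", "published": datetime(2023, 1, 15)},
    ]

    def retrieve(query: str, top_k: int = 3, max_age_days: int = 365) -> list[dict]:
        """Rank articles by embedding similarity, keeping only recent sources."""
        cutoff = datetime.now() - timedelta(days=max_age_days)
        recent = [a for a in ARTICLES if a["published"] >= cutoff]  # temporal filter
        q = embed(query)
        return sorted(recent, key=lambda a: float(q @ embed(a["text"])), reverse=True)[:top_k]

    def build_prompt(query: str) -> str:
        """Ground the model: answer only from retrieved, cited, dated context."""
        context = "\n".join(
            f"[{s['id']}, {s['published']:%Y-%m-%d}] {s['text']}" for s in retrieve(query)
        )
        return f"Answer ONLY from these sources, citing [id] inline:\n{context}\n\nQ: {query}"

Recursive chunking would sit upstream of a loop like this, splitting long articles into overlapping passages before they are embedded and indexed.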

Typically, we would file the first case (NiF production) under the “grunt work” assumption, since the task appears repetitive, mechanical and bounded by routine conventions. By contrast, the second case (the Neo chatbot) would not standardly fit the assumption: chatbot deployment seems intuitively high-risk, since it is public-facing, interactive, and involves open-ended queries.

At first glance, this appears paradoxical – why did the higher-stakes AI chatbot deliver stronger results than the more routine, repetitive task of editing? The paradox sharpens when we look at AI use in sports editing, where systems routinely detect goals, whistle sounds, and other key events to trigger highlight compilation. Rule-governed domains like the NBA or LaLiga now rely heavily on automated workflows. But success here stems from the explicitness of the cues: follow the ball, cut after a whistle, replay a goal.

Sports editing shows that success isn’t about whether a task is “grunt work” or “complex.” It’s about whether its logic can be formalized into explicit rules fit for that specific domain.
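
The difference is easy to see in code: sports cues reduce to a lookup table of explicit rules. The event types and offsets below are made up for illustration, but the shape of the logic is the point - nothing comparable can be written down for “capture the tone of a protest”:

    # Illustrative sketch: sports editing logic as an explicit rule table.
    # Event types, timestamps and offsets are assumptions for illustration.

    EVENTS = [  # output of an event-detection model, in seconds
        {"type": "goal", "t": 1312.4},
        {"type": "whistle", "t": 1320.0},
        {"type": "foul", "t": 2104.8},
    ]

    HIGHLIGHT_RULES = {
        "goal":    {"pre": 8.0, "post": 12.0},  # replay the build-up and celebration
        "whistle": {"pre": 2.0, "post": 0.0},   # cut after the whistle
    }

    def compile_highlights(events):
        """Turn detected events into clip boundaries via explicit rules."""
        clips = []
        for e in events:
            rule = HIGHLIGHT_RULES.get(e["type"])
            if rule:  # events without a rule are simply skipped
                clips.append((e["t"] - rule["pre"], e["t"] + rule["post"]))
        return clips

    print(compile_highlights(EVENTS))  # [(1304.4, 1324.4), (1318.0, 1320.0)]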

The contrast between NiF and Neo, and between sports and news editing, points to a deeper distinction that does not depend on stakes or complexity. The discrepancy is better explained by the nature of the knowledge a domain requires - specifically, by the distinction between tacit and explicit knowledge.

Philosopher Michael Polanyi captured tacit knowledge in the observation that “we know more than we can tell.” It covers embodied skills and interpretive judgments that resist codification. News editing is full of such judgments: how long to linger on a politician’s pause, which image captures the “tone” of a protest, how to pace rhythm across 30 seconds. Newsroom sociology (Tuchman 1978) further demonstrates that these practices are routinized but not reducible to rules. And visual framing theory (Entman; Coleman; Fahmy) shows that sequencing images changes meaning - often in subtle but decisive ways.

Crucially, tacit does not mean complex. AI can handle complex but explicit rules - like parsing multilingual corpora. But it struggles when success depends on intuitive, context-sensitive human judgment.

By contrast, explicit knowledge is formalizable and checkable: timestamps, citations, document provenance. Neo’s success lies in reframing a high-trust task into an explicitly verifiable one. It only answered from accredited PSM content, cited all claims, and enforced recency through metadata. The task was restructured to minimize tacit judgment.
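
Here is one way that reframing could look in code: a sketch of an output gate that enforces explicit standards before an answer reaches users. The answer format, source table and recency window are assumptions for illustration, not Neo’s actual validation logic:

    import re
    from datetime import datetime, timedelta

    # Sketch of "explicit and checkable": every claim must cite a known,
    # sufficiently recent source. Format and thresholds are illustrative.

    KNOWN_SOURCES = {"sr-101": datetime(2025, 8, 1), "sr-102": datetime(2023, 1, 15)}

    def validate(answer: str, max_age_days: int = 365) -> list[str]:
        """Reject uncited claims, unknown sources, and stale citations."""
        problems = []
        cutoff = datetime.now() - timedelta(days=max_age_days)
        for sentence in filter(None, (s.strip() for s in answer.split("."))):
            cites = re.findall(r"\[([\w-]+)\]", sentence)
            if not cites:
                problems.append(f"uncited claim: {sentence!r}")
            for c in cites:
                if c not in KNOWN_SOURCES:
                    problems.append(f"unknown source [{c}]")
                elif KNOWN_SOURCES[c] < cutoff:
                    problems.append(f"stale source [{c}]")
        return problems

    # Flags the uncited second sentence (exact output depends on the current date):
    print(validate("The budget passed [sr-101]. Growth will slow"))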

Neo is not an outlier. A broader trend shows that “bounded chatbots” - those trained on a restricted, verified corpus - are gaining traction precisely because they reframe high-trust tasks around explicit standards:

  • Ask The Post AI: Washington Post’s chatbot only answers from its own reporting, with inline citations (Washington Post, 2024).
  • Ask FT: a chatbot over the Financial Times archive (FT Strategies, 2024).
  • Aftonbladet’s Election Bot: boosted audience engagement by anchoring answers in in-house journalism (INMA, 2024).

In contrast, unbounded chatbots routinely falter. The BBC’s experiments with generic chatbots led to distortions of its current affairs coverage, and Apple had to pause AI-generated news alerts after they falsely issued BBC-branded updates (Guardian, 2024).

The distinction tracks trust research. The Trust Project finds transparency cues lift credibility - when surfaced clearly. Industry infrastructure is converging in the same direction: provenance standards like C2PA and metadata frameworks like IPTC NewsML-G2 are embedding machine-legible traceability. Early results from Reuters (2025) confirm that provenance cues raise trust in AI-mediated news.
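
The direction of travel is toward provenance that software, not just readers, can inspect. Real C2PA manifests are cryptographically signed structures embedded in the asset itself, so the record below is a heavily simplified, hypothetical stand-in - just enough to show what machine-legible traceability means:

    import json

    # Hypothetical, heavily simplified provenance record. Real C2PA manifests
    # are signed binary structures embedded in the asset; this only gestures
    # at the idea of a machine-checkable chain of custody.

    record = json.loads("""
    {
      "asset": "clip_0412.mp4",
      "claims": [
        {"actor": "field-camera-3", "action": "captured",         "time": "2025-08-12T09:14:00Z"},
        {"actor": "edit-suite-7",   "action": "trimmed",          "time": "2025-08-12T11:02:00Z"},
        {"actor": "nle-assistant",  "action": "ai_assisted_edit", "time": "2025-08-12T11:30:00Z"}
      ]
    }
    """)

    def ai_touched(rec: dict) -> bool:
        """Machines (and therefore news UIs) can surface AI involvement directly."""
        return any(claim["action"].startswith("ai_") for claim in rec["claims"])

    print(ai_touched(record))  # True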

Rethinking the Trope

These cases don’t just challenge the "grunt work" assumption. They force us to rethink what makes a task automatable. AI doesn’t succeed because tasks are dull, and it doesn’t fail because they’re important. It succeeds when tacit knowledge can be stripped away - even under high-stakes conditions. Or, more specifically:

  • AI succeeds where tasks can be reduced to explicit rules and checkable outputs, even under high trust stakes.
  • AI (currently) fails where tasks depend on tacit cultural judgment, framing, and interpretation, even when the task looks routine.

This reframing inverts the trope. But perhaps more interestingly, by forcing explicit rules into workflows, AI trials are exposing where journalism depends on tacit cultural judgement – and where it rests on explicit standards of verification. In that way, AI is not just useful for automation; it can also function as a diagnostic tool, showing us what in journalism can be made rule-based, and what resists codification.

In that sense, AI is not only working in the newsroom; it is a way of seeing the newsroom.

You can see exclusive sessions on both of the technical papers discussed above at IBC2025 with a delegate pass. Alongside these, a wide range of sessions across AI, editorial innovation, trust infrastructure, and production workflows will allow you to meaningfully participate in the evolving conversation - not just as a spectator of tools, but as a contributor to the deeper questions shaping journalism’s future.

In particular, don’t miss these must-see IBC2025 sessions connected to the research above:

  • AI in Post-Production (13 Sept, 11:30 – 12:30, Technical Papers Room) – Hear how a European public service broadcaster and an AI media company trialled automation in news production over 10 months. The lessons learned reveal the true challenges of integrating AI into editorial workflows.

  • AI in Content Curation (13 Sept, 13:30 – 14:30, Technical Papers Room) – See how the EBU is using AI to build a multilingual, citation-grounded news chatbot from its vast archive, alongside cutting-edge video curation projects from Asia.

  • Fighting Disinformation and Disengagement: Staying Relevant in the Digital Age (13 Sept, 14:15 – 15:00, Conference Room 1) – A powerhouse panel with BBC, PA Media, Euronews, Ofcom, and Al Jazeera on how broadcasters can uphold truth and trust in an algorithmic news environment.

👉 Secure your IBC2025 Delegate Pass to access these sessions and join the conversation on how AI is reshaping news and trust in journalism: https://show.ibc.org/ibc2025-pass-types
