Behind the Agenda: Who Gets the Credit?
Authorship is arguably one of the biggest challenges facing the entire industry: copyright, legitimate ownership, and the preservation of creativity are integral to both commercial success and strategic integrity. Today, then, we turn to the current conflict between AI and authorship: what lessons can be drawn for navigating it, and why there are strong reasons to believe we are in fact facing the opposite challenge, not a shortage of authors but a surplus of claims.
Within the media and entertainment industry, we are already seeing instances where using AI has created difficulties in identifying or allocating authorship. The 2023 Writers Guild of America strike, for example, centred partly on the fear that AI might generate material without human authorship, thereby undermining credit and threatening fair compensation for writers.
Research on the so-called “AI ghostwriter effect” further found that when algorithms draft news reports, journalists often deny authorship of the final piece, while equally refusing to credit the AI. The result is text with no acknowledged author at all (Graefe et al., 2023).
Cultural reactions reflect the same concern: AI-produced fiction has drawn a broad and persistent range of cutting criticism, with the Guardian's description of it as "hollow mimicry" (Flood, 2023) perhaps the most incisive. Such descriptions, beyond their rhetorical cut, implicitly deny that AI possesses the fundamental features required for legitimate authorship.
Legally, systems around the world diverge sharply. Courts in the United States and European Union continue to insist on direct human authorship as a precondition for copyright protection, meaning that fully autonomous AI outputs risk slipping into the public domain.
Other jurisdictions, by contrast, take more flexible views. In China’s Li v Liu case (2023), iterative prompting, selection, and refinement by a human user were judged sufficient to meet originality standards. In the United Kingdom, the Copyright, Designs and Patents Act goes further, recognising as “author” the person who undertakes the arrangements necessary for a computer-generated work.
The fragmented nature of the legal landscape demonstrates just how uncertain the identification of authorship can be.
One way of describing this situation, in its more extreme form, is to say that the introduction of AI into the M&E industry creates an authorship gap. While this may be true in cases where generative AI is used autonomously to produce content, for many working in the industry the reality looks quite different: the introduction of AI into workflows tends to resemble a bilateral, assistive relationship.
In these situations, the problem is often not a gap but its opposite: an abundance and redistribution of legitimate authorship claims.
One way of making sense of this shift is to map the actual points at which AI enters creative pipelines. One of this year's IBC-shortlisted technical papers offers a strong case for doing so.
The project, Demonstration of AI-Based Fancam Production for Kohaku Uta Gassen Using 8K Cameras and the VVERTIGO Post-Production Pipeline, offers on the surface a strong case for automating editing workflows. Interestingly, however, its success can be attributed less to automation itself and more to two key design choices that reveal where creativity and authorship still enter the system.
The first was model training. Rather than simply teaching the AI to detect performers, VVERTIGO was trained on director-labelled fancam footage, enabling it to capture subtle editorial patterns in framing and pacing. In other words, it absorbed not just who was on stage but how a professional editor would have framed them - encoding tacit, human judgements into the model itself.
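To make the idea concrete, here is a deliberately minimal sketch of how a director's tacit framing preference can be distilled from labelled examples. The data, function names, and the toy "model" (an average padding ratio around a detected performer) are illustrative assumptions, not the actual VVERTIGO training procedure, which learns far richer patterns of framing and pacing.

```python
# Hypothetical sketch: distilling a director's tacit framing style from
# labelled examples. Each training pair maps a detected performer width
# to the crop width a human director actually chose; the "model" is just
# the average padding ratio the director applied. Names and numbers are
# invented for illustration.

def learn_padding_ratio(pairs):
    """pairs: list of (performer_width, director_crop_width) tuples."""
    ratios = [crop / performer for performer, crop in pairs]
    return sum(ratios) / len(ratios)

def frame(performer_width, padding_ratio):
    """Apply the learned style to a newly detected performer."""
    return performer_width * padding_ratio

# Director-labelled examples: this director consistently leaves
# roughly 40% breathing room around performers.
training = [(100, 140), (80, 112), (120, 168)]
style = learn_padding_ratio(training)
print(round(frame(90, style), 1))  # a new performer, framed "in the director's style"
```

The point of the sketch is that the creative judgement lives in the labels: the model only reproduces choices a human editor already made.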
The second was dataset curation. By building the KBS6700 face corpus, a culturally specific dataset reflecting East Asian features, the team reduced misidentifications and embedded contextual knowledge directly into the infrastructure.
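The infrastructural role of curation can also be sketched in a few lines. The corpus entries, embeddings, and threshold below are invented for illustration (the real KBS6700 corpus is far larger and built from actual footage); the point is that identification is a nearest-neighbour search over a curated set, so whoever decides what the corpus contains decides who the system can recognise at all.

```python
# Hypothetical sketch of why dataset curation shapes what a recognition
# system can do. Faces are matched by cosine similarity against a curated
# corpus of reference embeddings; anyone absent from the corpus, or too
# far from it, simply cannot be identified. All values are invented.
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def identify(embedding, corpus, threshold=0.9):
    """Return the best-matching name, or None if no corpus entry is close enough."""
    best_name, best_score = None, -1.0
    for name, reference in corpus.items():
        score = cosine(embedding, reference)
        if score > best_score:
            best_name, best_score = name, score
    return best_name if best_score >= threshold else None

corpus = {"performer_a": [1.0, 0.0], "performer_b": [0.0, 1.0]}
print(identify([0.95, 0.05], corpus))  # close to performer_a's reference
print(identify([0.70, 0.70], corpus))  # outside the curated corpus: None
```

Curation decisions (which references exist, how the threshold is set) are structural rather than expressive, but they fix the conditions under which every downstream creative judgement succeeds or fails.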
The first case is critical because it shows how tacit knowledge - intuitive editorial judgement about what constitutes “good framing” or “natural pacing” - was translated into model behaviour. Tacit knowledge is one of the clearest markers of authorship: it reflects not mechanical reproduction but a humanly developed sense of timing, rhythm, and aesthetic coherence. By embedding this into training data, VVERTIGO did not erase creativity but relocated it upstream, into the design of the model itself.
The second case is different. It did not involve tacit creative judgement in the same sense, but it still asserted a form of authorship by establishing the conditions under which the system could succeed. By curating their own culturally specific face corpus, KBS embedded structural decisions about representation and recognition directly into the pipeline. This was an infrastructural authorship act: not expressive in itself, but decisive in ensuring that the tacit creative contributions could not be undermined by systematic error or bias.
Together, these examples show that creative decision-making was not eliminated by AI but relocated. What matters is that creativity here signals authorship: tacit editorial judgements, once embedded in training sets and model parameters, carry the same marks of originality that ground claims of authorship. Structural choices, like dataset curation, do the same by fixing the conditions under which those judgements hold. In this way, authorship shifts upstream, into the datasets, model choices, and workflow architectures that shape what outputs can ever be.
This also explains why AI tends not to cause a vacuum of authorship in the media industry, but instead generates more legitimate claims. Editors, dataset curators, model trainers, and interface designers can all point to creative input embedded in the final product.
The Marvel Secret Invasion controversy in 2023 is a good example of this. The AI-generated title sequence provoked disputes over who should be credited: the design studio that guided the models, the AI vendor that provided the technology, or the artists who shaped the final results. Authorship claims multiplied and the disputes themselves became part of how the work was received.
In bilateral human-AI workflows, the problem is not absence but an abundance of overlapping claims, dispersed across infrastructural layers. Legally, most jurisdictions still refuse to recognise AI itself as an author, tending instead to vest rights in the human end user, or in those who arranged the conditions of production. But this creates a trickle-down challenge. If authorship is now dispersed across infrastructural layers, how far can claims reasonably extend? Can tacit editorial judgements, dataset curation, or even contributions buried in training data still ground authorship when they become unrecognisable in the final output?
The challenge, then, is not only to ask “who gets the credit?” but to recognise that creativity is being relocated and redistributed. Increasingly, it is instantiated upstream in the infrastructures that shape AI outputs. For broadcasters, producers, and platform owners, recognising this shift is not only a matter of legal compliance but of strategic foresight.
As always, the insights and ideas above are indebted to the work of the researchers and developers contributing to this year's Technical Papers Programme. Their work will be presented live at IBC2025 on 13 September 2025, 13:30-14:30, in the Technical Papers Room.
Tickets and delegate passes are available here.