Dr. Carolyne Lunga
Policies are critical in workplaces because they articulate an institution's standpoint on an issue, clearly delineating what is acceptable and what is not and removing ambiguity. Newsrooms are governed by policies that shape journalistic conduct, protect journalists and create accountability. Over the past year, many newsrooms have rushed to introduce AI policies in an effort to keep pace with the speed of generative AI development and adoption.
Several questions have arisen about the purpose of Gen AI, the role of journalists, the impact on ethics and the prospect of job displacement. What should the policy document entail? Who should be involved in the drafting process? Is this solely a management decision, or a collaborative one? Should journalists be informed when an organisation plans to deploy AI? To what extent do journalists have a say on newsroom policy matters? How much notice should they be given if they are to be laid off because of Gen AI? Should audiences have a say, and to what extent? Should non-profit institutions seek input from their donors and boards of directors? How holistic should these policies be?
In this article, I discuss these issues through an examination of the Gen AI policies of ProPublica and the British Broadcasting Corporation (BBC). I must state at this juncture that there is no one-size-fits-all model for policy development. Newsrooms should develop policies with a clear structure, scope and intent, engage with the critical issues that matter to the organisation, its employees and the public interest, and involve key stakeholders in the drafting process.
Last week, journalists at ProPublica, a leading non-profit investigative newsroom in the United States, went on a one-day strike over various demands, including seeking a voice in the use of artificial intelligence at the publication. According to an article published in The New York Times, about 150 of the workers, including reporters, copy editors and communications staff, have been negotiating since 2023. These journalists are seeking protections in case there are layoffs due to AI.
According to NewsGuild, a North American union representing journalists and other communications workers, unionised journalists at POLITICO and its sister publication, the environment and energy site E&E News, won a case against POLITICO management in December 2025, requiring it to provide sufficient notice before introducing any new AI technology that could lead to job losses.
Does the ProPublica protest foreshadow what lies ahead? It remains unclear whether these developments will prompt protests from other journalists; this may hold true for certain types of newsrooms but cannot be applied universally, given the significant variations in newsroom structures, resources and cultures. As we have seen with new innovations in previous decades, larger, well-funded newsrooms are far more likely to experiment and move quickly than smaller ones, which are largely constrained by tighter budgets and gradual cycles of change. Larger newsrooms are also more likely to have formal policies in place than smaller ones.
While AI adoption raises multiple concerns, some of which I have addressed in two recent articles in the leading Qatari daily The Peninsula, one arguing that the human journalist is indispensable and another underlining the value of integrity, accountability and transparency in an age of Gen AI, questions of policy remain the most pressing.
In many cases, newsrooms either lack a formal AI policy altogether or rely on inadequate policies that fail to address the breadth of challenges and opportunities associated with the adoption of Gen AI. Closely linked to AI policy is how Gen AI is addressed within journalists' contracts.
Contracts should address ownership and consent, ensuring that journalists' decades of labour are not used to train Gen AI without compensation or agreement. Journalistic content is, for example, being turned into summaries, explainers and newsletters, among other formats. In some instances, journalists' voices are cloned to create 'AI reporters', or journalists are turned into digital presenters, AI anchors and avatars trained on their likenesses, voices and facial expressions. This often happens without compensation or contractual recognition.
AI policies across media organisations are presented under different labels, variously described as approaches, guidelines, frameworks or principles. When examining the AI policies of ProPublica and the BBC, I was mostly interested in the ethical values they identify, the issues they address and what they say about human-AI collaboration.
ProPublica has a public-facing page on its website titled ProPublica's Approach to AI. It is unclear whether this is the only AI-related document or whether others exist. The document outlines where AI is used, including transcription and looking for patterns in documents. Accountability, verification, transparency and integrity are recognised as foundational principles guiding the publication. Collaboration between AI and humans is emphasised; however, the human journalist bears the responsibility of checking AI-assisted reporting to prevent misleading information, errors and fabrication, thereby underlining the importance of truth, verification and credibility.
As an investigative journalism researcher, I was drawn to what the policy says on source protection: “We take care not to input confidential source documents into public AI tools that could compromise data privacy.” Source protection lies at the core of investigative journalism, since investigative journalists rely on whistleblowers who provide sensitive information anonymously for fear of retribution. Leak-driven investigative journalism has become increasingly common, and Gen AI must therefore be used responsibly, particularly with regard to the data input into it, as careless use may expose leaks and place sources in harm's way.
Since information input into an AI tool may be used by the tool's developer for further training or shared with third parties, significant data protection and information security concerns arise; hence privacy and security are outlined as key principles at ProPublica. The organisation has also hired AI fellows who are working on developing news applications.
The British Broadcasting Corporation (BBC)'s public-facing website includes Editorial Guidance on the use of artificial intelligence, in which transparency, accountability and informed human oversight are identified as key principles. Serving the public interest and creativity are also articulated in the policy, and all BBC staff are required to comply with it when using AI. The BBC's AI Handbook: How to Use AI Responsibly states that any use of AI in content should be consistent with impartiality, accuracy, fairness and privacy. Other areas of focus include the use of AI by third parties, such as independent producers, and the use of AI to support editorial production or research. The BBC emphasises openness with audiences wherever AI has been used, directly or indirectly. All BBC staff are required to complete AI training to ensure they understand how to use AI responsibly and put the BBC's AI principles into practice.
As I have argued in an earlier article, AI training, developed in collaboration with academic experts, is essential for everyone working in journalism. The BBC's principles of impartiality and fairness have been questioned before over perceived media bias: its commitment to impartiality has faced accusations of systemic bias in its coverage of Gaza, transgender issues and Donald Trump's January 6 Capitol attack, with the Gary Lineker row cited as a further example. Moreover, Gen AI is known to reproduce bias, and insufficient human oversight risks compounding these concerns. The BBC emphasises that senior editorial line managers are responsible and accountable for how their teams deploy Gen AI.
While I was unable to review the policy documents of ProPublica and the BBC in their entirety, my central argument remains that newsroom Gen AI policies must be sufficiently broad, covering the full spectrum of issues that arise from Gen AI adoption. Journalistic labour, innovation and creativity should be anchored in the public interest.
—Dr. Carolyne Lunga is an Associate Professor in Digital Communication and Media Production (DCMP) at the University of Doha for Science and Technology (UDST).