Dr. Carolyne Lunga
Newsrooms are using generative artificial intelligence for drafting summaries, language translation, story idea generation, editing and content creation, among other tasks.
In ongoing research into the integration of generative AI in journalistic practice in the Global North and South, I have had the privilege of talking with journalists about their experiences of the opportunities and challenges of using these technologies in everyday workflows.
Journalists face a range of challenges, from inaccurate transcriptions and limited knowledge of effective prompting to managing hallucinations and ensuring that all outputs are accurate. Some newsrooms have issued policies and guidance on how and where these tools can be used and where human oversight is needed.
Others are yet to create practical guidelines that provide comprehensive instructions, explanations, practical advice, and case studies on how to use particular tools and technologies.
A Gen AI policy document is not sufficient if there is no accompanying practical manual for using these tools daily. In this article, I discuss these challenges and offer practical guidance on how to alleviate them. My argument is that newsrooms need a practical manual which explains, in non-technical language, what generative artificial intelligence is, how it works and how journalists can maximise its use in fulfilling their mandate of informing society.
It is essential for journalists to understand generative AI’s advantages, limitations, key considerations and societal impact. The impact on journalistic practice is particularly significant given the affordances of generative AI and the varied ways newsrooms adopt it to meet specific requirements. To perform their role effectively, journalists must provide citizens with accurate and trustworthy information.
Research findings show that journalists struggle to use Gen AI tools effectively to convert long interviews, conducted in a variety of accents, into accurate transcripts quickly. Since the majority of generative AI tools are trained predominantly on English-language data, journalists often encounter difficulties when transcribing interviews that feature local and regional accents from diverse contexts.
Uploading high-quality recordings and choosing transcription tools with advanced speech recognition capabilities that support a wide range of English dialects and accents are important steps for improving the transcription process. AI-generated transcripts are not always accurate, so they must be reviewed by humans, even under tight newsroom deadlines, to ensure accuracy before a story is published.
By noting timestamps in transcriptions, journalists can instantly refer back to the original audio or video to clarify ambiguous sections. The same process should be adopted for correcting captions. Names of people, places and other context-specific information require thorough verification. At a time of massive disinformation and misinformation, it is important for journalists to ensure their transcripts are both accurate and reflective of the voices of sources or interviewees. Reliable reporting comes from combining generative AI with human oversight in a newsroom environment that allows reasonable time for checks.
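As a rough illustration of this timestamped transcription step, the Python sketch below uses the open-source Whisper model to produce segments with start times that a journalist can check against the original recording. The model choice, the file name and the output format are assumptions for illustration, not a recommendation of any particular tool.

```python
# A minimal sketch of timestamped transcription, assuming the open-source
# "openai-whisper" package is installed and "interview.mp3" is a local file
# (hypothetical name).
import whisper

model = whisper.load_model("base")          # smaller models are faster, larger ones more accurate
result = model.transcribe("interview.mp3")

# Print each segment with its start time so ambiguous passages can be
# checked against the original audio before publication.
for segment in result["segments"]:
    minutes, seconds = divmod(int(segment["start"]), 60)
    print(f"[{minutes:02d}:{seconds:02d}] {segment['text'].strip()}")
```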
Another challenge newsrooms face is a lack of knowledge about effectively prompting Gen AI tools to obtain accurate outputs. Prompting requires more than quickly typing a question, instruction or statement; it involves careful consideration of the desired output.
Journalists should craft clear, specific and context-rich prompts. Instead of relying on a single prompt, they should create multiple prompt variations and compare the responses to identify the most reliable and accurate results. This includes prompting across different tools, comparing the results and then deciding how to proceed.
For example, instead of prompting only one tool such as Microsoft Copilot, a journalist can also prompt Gemini, Claude and ChatGPT, depending on the newsroom’s guidance on which tools are acceptable for use. Additionally, the framing of a prompt and the level of detail it contains influence the outcome. Journalists should assess AI-generated outputs critically before using them in news reports. Critical thinking means questioning information and the motives of sources, and analysing the language of articles to find potential biases.
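As a minimal sketch of that comparison step, the snippet below sends the same prompt to two different providers and prints the responses side by side for human review. The model names, the prompt text and the reliance on API keys set as environment variables are assumptions for illustration; newsrooms would substitute whichever tools their own guidelines permit.

```python
# A minimal sketch, assuming the "openai" and "anthropic" Python packages are
# installed and OPENAI_API_KEY / ANTHROPIC_API_KEY are set in the environment.
from openai import OpenAI
import anthropic

# Example prompt (hypothetical task).
prompt = ("Summarise the attached council budget report in 100 words of plain "
          "English and list any figures a reporter should verify.")

# Response from an OpenAI model (model name is an assumption).
openai_client = OpenAI()
openai_reply = openai_client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": prompt}],
)

# Response from an Anthropic model (model name is an assumption).
anthropic_client = anthropic.Anthropic()
anthropic_reply = anthropic_client.messages.create(
    model="claude-3-5-sonnet-latest",
    max_tokens=500,
    messages=[{"role": "user", "content": prompt}],
)

# Print both outputs so a journalist can compare them manually before
# deciding which, if either, is reliable enough to build on.
print("--- OpenAI ---")
print(openai_reply.choices[0].message.content)
print("--- Anthropic ---")
print(anthropic_reply.content[0].text)
```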
In newsrooms, senior journalists play a vital role in reviewing junior colleagues’ articles to scrutinise facts and identify missing details and bias. Undertaking research and reaching out to multiple sources cannot be replaced by prompting Gen AI.
Detecting fake AI-generated videos, audio and images is another challenge faced by journalists in newsrooms today. Deepfakes are increasingly prevalent across social media platforms, and these are sometimes reposted by journalists online. A range of disinformation and misinformation detection tools is available to help journalists prevent the spread of harmful and manipulated content.
In an era when short-form videos and fake audio dominate websites and social media platforms, it is important for newsrooms to use software capable of finding and exposing manipulated user-generated videos and audio. This is especially crucial when covering disasters, wars, droughts, elections, sports events and other major stories, as the spread of misleading content damages credibility and public trust.
Investing in advanced misinformation and disinformation detection tools helps safeguard the integrity of journalists. For instance, InVID analyses and breaks down videos to detect manipulation. To undertake verification, journalists can perform a reverse video search and check the video’s origin and rights. They can check contextual information by assessing the location, time and other video metadata, as well as historical weather data. Posts, comments and tweets about the video, collected from social media channels, can also be checked and verified.
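One part of that workflow, pulling still frames out of a video so they can be run through a reverse image search, can be scripted. The sketch below is a simple illustration using the OpenCV library; the file name and sampling interval are assumptions, and it is not a substitute for a dedicated tool such as InVID.

```python
# A minimal sketch of frame extraction for reverse image search, assuming the
# "opencv-python" package is installed and "clip.mp4" is a local file
# (hypothetical name).
import cv2

video = cv2.VideoCapture("clip.mp4")
fps = video.get(cv2.CAP_PROP_FPS) or 25     # fall back to 25 fps if metadata is missing
interval = int(fps * 5)                     # sample roughly one frame every 5 seconds

frame_index = 0
saved = 0
while True:
    ok, frame = video.read()
    if not ok:
        break
    if frame_index % interval == 0:
        # Each saved frame can be uploaded to a reverse image search engine.
        cv2.imwrite(f"frame_{saved:03d}.jpg", frame)
        saved += 1
    frame_index += 1

video.release()
print(f"Saved {saved} frames for verification.")
```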
For audio, speech recognition software such as Microsoft Azure Speech is available. To establish the origin of an image, journalists can do reverse image searches. Examining metadata such as the date, location and device information also helps identify signs of manipulation. Data journalists often use these techniques in their daily work and acknowledge that they can take a significant amount of time.
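As a small illustration of the metadata step, the sketch below reads the EXIF data embedded in an image file using the Pillow library. The file name is an assumption, and many images shared on social media have had their metadata stripped, so an empty result is itself worth noting.

```python
# A minimal sketch of EXIF inspection, assuming the "Pillow" package is
# installed and "photo.jpg" is a local file (hypothetical name).
from PIL import Image
from PIL.ExifTags import TAGS

with Image.open("photo.jpg") as img:
    exif = img.getexif()

if not exif:
    print("No EXIF metadata found (it may have been stripped on upload).")
else:
    for tag_id, value in exif.items():
        # Translate numeric tag IDs into readable names such as
        # DateTime, Make, Model or Software.
        name = TAGS.get(tag_id, tag_id)
        print(f"{name}: {value}")
```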
To verify images, journalists should conduct reverse image searches to check whether an image has appeared elsewhere online and to establish its origin. Additionally, journalists may compare the image with reputable sources and cross-reference contextual details such as weather conditions, landmarks and relevant news reports to confirm authenticity.
When available, specialised image analysis and reverse search tools such as Yandex, Google Reverse Image Search and TinEye can help detect alterations. As for managing hallucinations, claims require thorough cross-checking: comparing multiple sources, checking dates and places, verifying quotes against original interviews, and watching out for altered quotes, incorrect names or citations, and mismatched statistics.
Ongoing research into the use and integration of generative AI in newsrooms plays a crucial role in shaping such a practical manual. By undertaking practice-informed research, journalism researchers keep important conversations about generative AI adoption in newsrooms, and its challenges and opportunities, at the top of the research agenda.
A practical manual addresses current challenges and offers clear explanations, steps and recommendations for best practice. Such a manual should guide newsrooms in integrating Gen AI into their daily workflows, enabling them to uphold ethical standards and maintain trust in a world increasingly fractured by disinformation and misinformation. It would serve as a key reference for newsrooms, helping to align policy and practice and providing support as Gen AI tools develop.
—Dr. Carolyne Lunga is an Associate Professor in Digital Communication and Media Production (DCMP) at the University of Doha for Science and Technology (UDST).