Empowering Newsrooms: How Donors Can Support Responsible AI Use in Journalism
AI offers journalism both benefits and risks. Whilst it enhances efficiency in tasks such as transcription and data analysis, it also raises ethical concerns, can propagate misinformation, and risks deepening newsrooms’ dependency on tech companies. Responsible AI use, editorial oversight, and robust training are crucial to navigating these growing challenges. Support from donors is essential for building capacity and fostering innovation in newsrooms.
Artificial Intelligence (AI) refers to “a collection of ideas, technologies, and techniques that relate to a computer system’s capacity to perform tasks that normally require human intelligence.”
Large language models (LLMs), systems able to process and generate human-language text, became widely accessible in late 2022 with the launch of OpenAI’s ChatGPT. Following its release, companies such as Google, Meta, and Microsoft brought out their own generative AI products, integrating the technology into existing systems.
AI in journalism is a double-edged sword. Whilst it has already done considerable harm through social media algorithms and surveillance practices, it also holds promise for making the media more efficient. Through informed adoption, journalists can harness AI to speed up monotonous tasks, track malign government funding, and identify deepfakes, capabilities of particular benefit to data journalists. However, it is imperative to remain aware of the risks AI poses, especially given past mistakes with social media and the tendency to over-rely on it for audience reach.
AI Usage in Newsrooms
Media professionals are increasingly making use of AI tools. A May 2024 global survey conducted by the public relations firm Cision found that 47% of journalists used tools like ChatGPT or Bard. Meanwhile, in an AP report published in April, 70% of respondents (journalists and editors from around the world) indicated that their organisations had used AI tools at some point.
However, there are geographical differences in how newsrooms use AI. According to a new report by the Thomson Foundation and the Media and Journalism Research Center (MJRC) focusing on the Visegrad countries (Poland, Czechia, Slovakia and Hungary), in the region “AI adoption is slower and marked by ethical concerns, highlighting the need for careful management and collaboration.”
At the same time, journalists have been using AI tools for longer, and across a much broader range of tasks, than most would think, says Damian Radcliffe, a professor at the School of Journalism at the University of Oregon.
In a recent survey by the Oxford-based Reuters Institute for the Study of Journalism (RISJ), media professionals identified back-end automation, such as transcription and copyediting, as the area where AI tools are most helpful to the industry. This was followed by recommender systems, content production, and commercial applications. Other common applications in newsrooms include data analysis and the automation of repetitive tasks, which improve efficiency and free journalists to focus on more complex stories, whilst increasing the speed and decreasing the cost of content production and distribution. Nowadays, “it is almost impossible to work without AI tools, especially if one works with large datasets,” says Willem Lenders, Program Manager at Limelight Foundation.
AI tools are used in newsrooms for various other purposes as well. According to Radcliffe, one significant use is programmatic advertising: over 90% of US ads are handled this way. Another innovative application is dynamic paywalls, which adjust based on user-specific factors such as location, device, and visit frequency. This approach, employed by larger outlets such as The Atlantic and The Wall Street Journal, allows organisations to tailor the number of free articles and subscription offers to individual users. Additionally, AI is used for predictive analytics, helping newsrooms identify trending stories, inform article placement, devise social media strategies, and plan follow-up stories.
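To make the dynamic paywall idea concrete, the short sketch below shows, in simplified Python, how such a system might translate location, device, and visit frequency into a personalised free-article allowance. It is a minimal illustration built on assumed signals, weights, and thresholds, not a description of how The Atlantic, The Wall Street Journal, or any other outlet actually implements its paywall.

```python
# Illustrative sketch of a dynamic paywall decision; signals and weights are
# assumptions for the sake of the example, not any outlet's real system.

from dataclasses import dataclass

@dataclass
class Visit:
    country: str            # reader's location
    device: str              # "mobile" or "desktop"
    visits_this_month: int   # frequency signal
    is_subscriber: bool

def free_article_allowance(visit: Visit) -> int:
    """Return how many free articles this visitor gets before the paywall appears."""
    if visit.is_subscriber:
        return 10**6  # effectively unlimited for subscribers

    allowance = 3  # hypothetical baseline of free articles per month

    # Frequent visitors show higher engagement, so tighten the meter sooner
    # to nudge them towards a subscription offer.
    if visit.visits_this_month > 10:
        allowance -= 2
    elif visit.visits_this_month > 5:
        allowance -= 1

    # Casual mobile readers convert less often, so be more generous with them.
    if visit.device == "mobile" and visit.visits_this_month <= 5:
        allowance += 2

    # Markets where the outlet runs a promotion (assumed) get extra free reads.
    if visit.country in {"US", "GB"}:
        allowance += 1

    return max(allowance, 0)

def should_show_paywall(visit: Visit, articles_read: int) -> bool:
    """The paywall appears once the visitor exhausts their personalised allowance."""
    return articles_read >= free_article_allowance(visit)

# A frequent desktop reader hits the paywall sooner than an occasional mobile reader.
print(should_show_paywall(Visit("US", "desktop", 12, False), articles_read=2))  # True
print(should_show_paywall(Visit("US", "mobile", 2, False), articles_read=2))    # False
```

The point of the sketch is simply that the same article limit need not apply to every reader: the meter tightens for highly engaged visitors and loosens for casual ones, which is what lets outlets tailor free articles and subscription offers to individual users.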
AI-Associated Risks
The use of AI in journalism also raises significant concerns, posing substantial risks to reliability and ethics and contributing to the spread of misinformation. AI’s tendency to “hallucinate”, that is, to generate plausible but incorrect information, makes its use in information gathering problematic. Experts therefore argue that news organisations should implement ethical guidelines and robust training to navigate these challenges.
Limelight’s Lenders emphasises that responsible AI use depends not just on how a tool is applied but on who owns it, drawing parallels to the influence of big tech on content distribution. He advocates balanced use that retains human oversight, so that critical editorial judgment is not pushed aside. Radcliffe likewise identifies the removal of human oversight as the most significant risk facing newsrooms. He thinks AI tools can be helpful for some topics, for example sports coverage, which is often quite formulaic; other beats require more nuance than AI can yet provide. One illustration of this risk is the insensitive AI-generated headline on an MSN obituary of a basketball player, which underscored the need for editorial supervision to avoid catastrophic mistakes. Furthermore, Lenders argues that LLMs regurgitate what has been written before, which can lead to reproducing harmful stereotypes.
As it currently functions, generative AI jeopardises access to trustworthy information. It does not distinguish between reliable and unreliable sources and often fails to disclose its primary sources, making verification difficult. This amplifies misinformation and public confusion, underscoring users’ need for digital and media literacy.
Accountability is another critical issue. Unlike human-generated content, AI output lacks clear attribution, which undermines public trust in journalism. Journalists’ intellectual property can also be compromised, as AI systems often draw on journalistic articles without credit, exacerbating journalism’s existing viability problems.
Radcliffe notes that smaller newsrooms might embrace AI as a cost-saving measure, reducing the number of reporters; those roles, he cautions, will never come back. He warns of the dangers of dependency on platforms, highlighting the lessons from social media, where algorithm shifts have repeatedly hit reach and control has always remained with the tech companies. “It is not a partnership; all power lies with the tech companies,” he argues.
Lenders echoes this concern, pointing out that the primary aim of tech companies is profit, not the public interest or quality information. He suggests developing independent tools and technologies, like those built by OCCRP, ICIJ, Bellingcat, Independent Tech Alliance, AI Forensics, and others. However, these require significant investment and user support from the journalism sector.
Radcliffe further cautions that news organisations risk becoming redundant if users turn to chatbots for information. To mitigate this, he advises newsrooms to prevent chatbots from scraping their content and to create unique content that adds value beyond what AI can offer. He believes that fostering trust and educating audiences on why journalism matters are crucial. Lenders concurs that AI cannot replace the relationship with the audience, highlighting trust as the main issue. He also believes smaller independent newsrooms will recognise that they cannot maintain quality by relying solely on AI.
The debate about AI in journalism often polarises into two extremes, Lenders adds: that it will either save or ruin the industry. “We don’t need to worry about the robots, we have to look at the reality,” he argues. A realistic perspective acknowledges the harm algorithms have already caused, for example in ad distribution and the spread of disinformation. An AI Forensics study showed how Meta allowed pro-Russia propaganda ads to flood the EU, illustrating the potential for AI misuse.
Reporters Without Borders (RSF) also raises the alarm about AI-generated websites that mimic real media sites and siphon ad revenue from legitimate news outlets. Research by NewsGuard identified numerous sites written predominantly by AI, aiming solely for profit by maximising clicks with minimal effort. This approach abandons ethical journalism, floods the market with questionable articles, and diminishes access to reliable information. These AI-generated articles also sometimes contain harmful falsehoods, underscoring the moral necessity of disclosing AI-generated content and ensuring transparency so that readers can critically evaluate the information.
The Potential Role of Funders
In this evolving landscape, donors could play a crucial role, not by providing direct solutions but by supporting organisations which, together, form an ecosystem that nurtures innovation. Their involvement could bridge the gap between technology and policy, particularly in journalism. For example, donors can invite experts with deep technical knowledge to critically assess potential pitfalls and keep funders well-informed, helping them avoid simplistic utopian or dystopian narratives.
Lenders highlights the importance of donors informing themselves about the possible harms and risks of AI and encouraging grantees to deepen their technological knowledge. He emphasises the need for solid core funding to avoid reliance on cheaper, riskier solutions. Given the rapid pace of technological change, Lenders argues, it is crucial to have robust organisations that can anticipate risks, and to support journalists in connecting with such organisations or in conducting their own analyses. Rather than shifting funding priorities every few years, building capacity within newsrooms and CSOs to keep up with AI advancements is a more sustainable strategy.
Radcliffe, for his part, underscores the necessity of AI training, particularly for smaller news organisations. Whilst large organisations are well-resourced and capable of developing in-house AI solutions, smaller ones often lack the resources to follow, or contribute to, debates on AI. These smaller newsrooms are also less able to engage in legal battles against tech companies, so donors should support them in lobbying for their needs and amplifying their voices. Training on the uses and dangers of AI, particularly on raising revenue through methods such as dynamic paywalls, and facilitating connections among smaller newsrooms so they can share their AI experiences and use cases, are crucial steps donors can take. “But I would encourage all donors to ask newsrooms what they need,” he adds. “Don’t dictate the training and funding, ask the outlets you want to support how you can best help them in this space.”
Smaller publishers often turn to third-party AI solutions from platform companies because of the high costs and challenges of independent development, such as the need for extensive computing power, competition for tech talent, and the scarcity of large datasets. These platform solutions offer convenience, scalability, and cost-effectiveness, allowing publishers to use AI capabilities without the financial burden of in-house development. However, Lenders points out the risks associated with cheaper solutions. “We need newsrooms that have the capacity to be critical of what they use,” he argues, adding that it is not a question of utopia versus dystopia: understanding how AI tools can help newsrooms requires a realistic analysis of their benefits and risks.