The role of AI in journalism offers both benefits and risks. Whilst it enhances efficiency for tasks such as transcription and data analysis, it also raises ethical concerns, can propagate misinformation, and deepens newsrooms’ dependency on tech companies. Responsible AI use, editorial oversight, and robust training are crucial to navigating these growing challenges. Support from donors is essential for building capacity and fostering innovation in newsrooms.

Artificial Intelligence (AI) refers to “a collection of ideas, technologies, and techniques that relate to a computer system’s capacity to perform tasks that normally require human intelligence.”

Large language models (LLMs), which can process and generate human-language text, became widely accessible in late 2022 with the launch of OpenAI’s ChatGPT. Companies such as Google, Meta, and Microsoft soon followed with their own generative AI products, integrating the technology into existing systems.

The role of AI in journalism is a double-edged sword. Whilst it has already inflicted much harm through social media algorithms and surveillance practices, it also holds promise for enhancing efficiency in the media. Through informed adoption, journalists can mitigate the risks and leverage AI to speed up monotonous tasks, track malign government funding, and identify deepfakes, capabilities that particularly benefit data journalists. However, it is imperative to remain aware of the risks AI poses, especially considering past mistakes with social media and the tendency to over-rely on platforms for audience reach.

AI Usage in Newsrooms

Media professionals are increasingly making use of AI tools. A May 2024 global survey conducted by the public relations firm Cision found that 47% of journalists used tools like ChatGPT or Bard. Similarly, in an AP report published in April, 70% of respondents (journalists and editors worldwide) indicated that their organisation had, at some point, used various AI tools.

However, geographical differences in AI usage in newsrooms can also be observed. According to a new report by the Thomson Foundation and Media and Journalism Research Center (MJRC), focusing on the Visegrad countries (Poland, Czechia, Slovakia and Hungary), “AI adoption is slower and marked by ethical concerns, highlighting the need for careful management and collaboration.”

At the same time, journalists have been using AI tools for longer, and across a much broader spectrum of tasks, than most would think, says Damian Radcliffe, a professor at the School of Journalism at the University of Oregon.

In a recent survey by the Oxford-based Reuters Institute for the Study of Journalism (RISJ), media professionals identified back-end automation, such as transcription and copyediting, as the area where AI tools are most helpful in the media industry. This was followed by recommender systems, content production, and commercial applications. Other common applications in newsrooms include data analysis and the automation of repetitive tasks. This improves efficiency and frees up journalists to focus on more complex stories, whilst simultaneously increasing the speed and decreasing the costs of content production and distribution. Nowadays, “it is almost impossible to work without AI tools, especially if one works with large datasets,” says Willem Lenders, Program Manager at Limelight Foundation.

AI tools are used in newsrooms for various other purposes as well. According to Radcliffe, one significant use is in programmatic advertising: over 90% of US ads are handled this way. Another innovative application is dynamic paywalls, which adjust based on user-specific factors such as location, device, and visit frequency. This approach, employed by larger outlets like The Atlantic and The Wall Street Journal, allows organisations to tailor the number of free articles and subscription offers to individual users. Additionally, AI is used for predictive analytics, helping newsrooms identify trending stories, influence article placement, devise social media strategies, and plan follow-up stories.
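To make the metering mechanism concrete, the sketch below shows, in Python, how such a dynamic paywall rule might look. It is a minimal illustration only: the thresholds, factor names, and adjustments are assumptions for the example, not the actual logic of any outlet mentioned here.

```python
# Hypothetical sketch of a dynamic paywall rule. The factors (visit
# frequency, device, location) mirror those named above; every threshold
# and adjustment is an illustrative assumption, not any outlet's policy.

from dataclasses import dataclass

@dataclass
class Visitor:
    monthly_visits: int   # how often this user visits per month
    device: str           # e.g. "mobile" or "desktop"
    country: str          # e.g. "US", "DE"

def free_article_quota(v: Visitor) -> int:
    """Return the number of free articles this visitor gets per month."""
    quota = 5                        # generous default for casual readers
    if v.monthly_visits > 20:
        quota = 2                    # frequent readers are likelier to subscribe
    if v.device == "mobile":
        quota += 1                   # keep mobile readers engaged a bit longer
    if v.country not in ("US", "GB"):
        quota += 2                   # looser metering outside core markets
    return quota

# A frequent mobile reader outside the core markets gets 2 + 1 + 2 = 5.
print(free_article_quota(Visitor(monthly_visits=25, device="mobile", country="DE")))
```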

AI-Associated Risks

The use of AI in journalism also presents significant concerns, posing substantial risks related to reliability, ethics, and the dissemination of misinformation. AI’s tendency to “hallucinate”, generating plausible but incorrect information, makes its use in information gathering problematic. Experts therefore argue that news organisations should implement ethical guidelines and robust training to navigate these challenges.

Limelight’s Lenders emphasises that responsible AI use depends not just on how a tool is applied but on who owns it, drawing parallels to the influence of big tech on content distribution. He advocates a balanced use that includes human oversight, so that critical editorial judgment is not excluded. Radcliffe likewise identifies the removal of human oversight in newsrooms as the most significant risk. He thinks there are topics where AI tools can be helpful, for example in sports coverage, which can often be quite formulaic. Other beats, however, require a nuance that AI cannot yet provide. One example of this risk is the insensitive AI-generated headline on an MSN obituary of a basketball player, underscoring the need for editorial supervision to avoid catastrophic mistakes. Furthermore, Lenders argues that LLMs regurgitate what has been written before, which can lead to reproducing harmful stereotypes.

As it currently functions, generative AI jeopardises access to trustworthy information. It does not distinguish between reliable and unreliable sources and often fails to disclose its primary source of information, making verification difficult. This amplifies misinformation and public confusion, underscoring users’ need for digital and media literacy.

Accountability is another critical issue. Unlike human-generated content, AI output lacks clear attribution, undermining public trust in journalism. Journalists’ intellectual property can also be compromised, as AI often draws on journalistic articles without credit, exacerbating journalism’s existing viability problems.

Radcliffe notes that smaller newsrooms might embrace AI as a cost-saving measure, reducing the number of reporters, and he warns that those roles will never come back. He also warns of the dangers of dependency on platforms, highlighting the lessons from social media, where algorithm shifts have repeatedly hit reach whilst control has always remained with tech companies. “It is not a partnership; all power lies with the tech companies,” he argues.

Lenders echoes this concern, pointing out that the primary aim of tech companies is profit, not public interest or quality information. He suggests developing independent tools and technologies, like those by OCCRP, ICIJ, Bellingcat, Independent Tech Alliance, AI Forensics, and others. However, these require significant investment and user support from the journalism sector.

Radcliffe further cautions that news organisations risk becoming redundant if users turn to chatbots for information. To mitigate this, he advises preventing chatbots from scraping content and urges newsrooms to create unique content that adds value beyond what AI can offer. He believes that fostering trust and educating the audience on why journalism matters are crucial. Lenders concurs that AI cannot replace the relationship with the audience, highlighting trust as the main issue. He also believes smaller independent newsrooms will recognise that they cannot maintain quality by relying solely on AI.

The debate about AI in journalism often polarises into two extremes, Lenders adds: that it will either save or ruin the industry. “We don’t need to worry about the robots, we have to look at the reality,” he argues. A realistic perspective acknowledges the harm algorithms have already caused, such as in ad distribution and the spread of disinformation. An AI Forensics study showed how Meta allowed pro-Russia propaganda ads to flood the EU, illustrating the potential for AI misuse.

Reporters Without Borders (RSF) also raises alarms about AI-generated websites that mimic real media sites and siphon ad revenue from legitimate news outlets. Research by NewsGuard identified numerous sites predominantly written by AI, aiming solely for profit by maximising clicks with minimal effort. This approach eliminates ethical journalism, floods the market with questionable articles, and diminishes access to reliable information. These AI-generated articles also sometimes contain harmful falsehoods, underscoring the moral necessity to disclose AI-generated content and ensure transparency, so readers can critically evaluate the information.

The Potential Role of Funders

In this evolving landscape, donors could play a crucial role, not by providing direct solutions but by supporting organisations which, together, form an ecosystem that nurtures innovation. Their involvement could bridge the gap between technology and policy, particularly in journalism. For example, donors can invite experts with deep technical knowledge to critically assess potential pitfalls and keep the donors themselves well-informed, helping them avoid simplistic utopian or dystopian narratives.

Lenders highlights the importance of donors informing themselves about the possible harms and risks of AI and encouraging grantees to deepen their technological knowledge. He emphasises the need for good core funding to avoid reliance on cheaper, riskier solutions. Given the rapid pace of technological change, Lenders argues, it is crucial to have robust organisations that can anticipate risks, and to support journalists in connecting with these entities or conducting their own analyses. Rather than shifting funding every few years, building capacity within newsrooms and CSOs to keep up with AI advancements is a more sustainable strategy.

Conversely, Radcliffe underscores the necessity of AI training, particularly for smaller news organisations. Whilst large organisations are well-resourced and capable of developing in-house AI solutions, smaller ones often lack the resources to follow or contribute to debates on AI. These smaller newsrooms are also less able to engage in legal battles against tech companies. Thus, donors should support them in lobbying for their needs and amplifying their voices. Crucial steps donors can take include training on the uses and dangers of AI, particularly on increasing revenue through methods like dynamic paywalls, and facilitating connections among smaller newsrooms so they can share their AI experiences and use cases. “But I would encourage all donors to ask newsrooms what they need,” he adds. “Don’t dictate the training and funding, ask the outlets you want to support how you can best help them in this space.”

Smaller publishers often turn to third-party AI solutions from platform companies due to the high costs and challenges of independent development, such as the need for extensive computing power, competition for tech talent, and the scarcity of large datasets. These platform solutions offer convenience, scalability, and cost-effectiveness, allowing publishers to leverage AI capabilities without the financial burden of in-house development. However, Lenders points out the risks associated with cheaper solutions. “We need newsrooms that have the capacity to be critical of what they use,” he argues, adding that it is not a question of utopia versus dystopia: understanding how AI tools can help newsrooms requires a realistic analysis of their benefits and risks.

The Ethical Media Alliance (EMA) in Romania aims to tackle the main flaw of the digital advertising ecosystem: its reliance on quantitative metrics, which undermines public interest journalism. The initiative allocates funds based on ethical principles to support trustworthy media in achieving a positive social impact and financial sustainability, while also securing brand safety for advertisers.

In the age of web 2.0, the advertising market disproportionately incentivises clickbait content. The quest for the highest reach at the lowest cost has led to an emphasis on viral content, often at the expense of responsible, public interest journalism. Amidst this challenging landscape, however, initiatives like the Ethical Media Alliance are emerging as beacons of hope for independent media organisations, especially those dedicated to public interest journalism.

The initiative was born as a “result of frustration over how much money is funnelled to irresponsible content producers,” says Dragos Stanca, EMA’s initiator. He sees a critical flaw in the digital advertising ecosystem: the focus on quantitative metrics such as clicks and impressions undermines the value of public interest journalism, and has led even serious publishers to embrace clickbait content to survive in the era of programmatic advertising.

In the first phase of the project, EMA positions itself as a not-for-profit sales house guided by ethical principles, involving journalistic startups and projects often excluded from commercial funding due to their relatively modest audience numbers. Stanca acknowledges the necessity of speaking the language of the advertisers, thereby integrating metrics and key performance indicators (KPIs) into their approach. Additionally, EMA aims to foster a positive social impact by supporting content essential for democracy.

The network currently includes 15 journalistic projects employing over 120 journalists, with a joint monthly reach of 1.2 million users and 550,000 video views on average. Advertisers are required to commit to a minimum one-month campaign that spans all portals. In other words, the same ad is displayed on all the sites, increasing its reach to a level that, as Stanca puts it, “makes sense for a media buyer.” EMA ensures brand safety by allowing only organisations producing public interest content to join, guaranteeing advertisers that their ads will be associated with responsible content.

EMA also reforms the distribution of ad revenue within the network. Half of it is distributed among partners based on the number of journalists they employ, while the other half is based on quantitative metrics: 35% on the number of ad impressions and 15% on social media reach. This follows the usual ‘cost per mille’ (CPM) approach, CPM being the cost an advertiser pays for one thousand views or impressions of an advertisement.
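As a rough illustration of this formula, the Python sketch below splits a month’s network revenue 50/35/15 across the three pools, pro rata within each. The partner names and all figures are invented for the example; EMA’s actual accounting may differ.

```python
# Sketch of the split described above: 50% of network ad revenue is shared
# by journalist headcount, 35% by ad impressions, 15% by social media reach.
# The two partners and all their figures are invented for illustration.

partners = {
    "outlet_a": {"journalists": 30, "impressions": 400_000, "reach": 50_000},
    "outlet_b": {"journalists": 10, "impressions": 500_000, "reach": 150_000},
}

WEIGHTS = {"journalists": 0.50, "impressions": 0.35, "reach": 0.15}

def split_revenue(revenue: float) -> dict:
    """Allocate revenue to each partner pro rata within each weighted pool."""
    totals = {key: sum(p[key] for p in partners.values()) for key in WEIGHTS}
    return {
        name: round(sum(revenue * w * p[key] / totals[key]
                        for key, w in WEIGHTS.items()), 2)
        for name, p in partners.items()
    }

# For €100,000 of monthly revenue: outlet_a gets €37,500 (headcount pool)
# + €15,555.56 (impressions) + €3,750 (reach) = €56,805.56.
print(split_revenue(100_000))
```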

The initiative sets an ambitious target: diverting 1% of the total ad spending in Romania to public interest journalism. The Romanian ad market totals €700m a year, of which €255m goes to digital advertising, with only €30-35m of that spent online by local companies. Currently, no more than €3.5-5 million a year is allocated to digital journalism, according to the initiators of the Ethical Media Alliance, so hitting the 1% target (€7m) would roughly double the funds reaching digital journalism.
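A back-of-the-envelope calculation with the figures cited above shows why the 1% target amounts to roughly a doubling:

```python
# Back-of-the-envelope check of the 1% target, using the figures cited above.
total_ad_market = 700_000_000              # € per year, Romania's total ad market
target = 0.01 * total_ad_market            # 1% of total ad spending = €7m
current_low, current_high = 3_500_000, 5_000_000   # € now reaching digital journalism

print(f"1% target: €{target:,.0f}")                      # €7,000,000
print(f"current range: €{current_low:,} to €{current_high:,}")
# €7m against €3.5-5m today: roughly a doubling of funds for digital journalism.
```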

Stanca believes that shifting even relatively small amounts to trustworthy media could significantly enhance their financial sustainability. For emerging journalistic startups in particular, even a few thousand euros per month can make a significant difference.

Early successes are evident: the initiative launched last autumn, and the first campaigns commenced in October. The two largest banks in Romania have joined to date, contributing €35,000 for the first two months as a test campaign.

“Drawing from our experience in the commercial digital brokerage market, I can confidently say that anything that is new takes up to one year to become adopted by the market,” says Stanca, adding that he aims for an ad spending of €100,000 per month by the end of the year.

Recently, he outlined the operating principles of the alliance and presented what his team considers an initial format for ethical advertising in a dialogue hosted by independent journalist Petrisor Obae, who operates the media-focused portal Pagina de Media. The principles the EMA operates on aim to create an “ethical algorithm” to be used for the allocation of funds from the ad space.

The primary goal is to provide enough resources to motivate young people in particular to choose a career in journalism, according to Stanca. Moreover, the EMA wants to motivate them to produce content in and for the public interest, “not just to focus on gaining the programmatic advertising revenues or, more seriously, to exclusively serve the interests of media owners with questionable agendas or to write solely for and about brands that are essentially seeking disguised advertising,” Stanca added. “The Ethical Media sales house part is just the first phase of the project; we plan to propose additional initiatives that support an ecosystem which, in our opinion, is essential for the survival of democracy,” he said.

While the initiative is currently confined to Romania, Stanca is open to expansion. As EMA pioneers ethical advertising to support public interest journalism, it could have an impact far beyond national boundaries, ushering in a new era for responsible media funding.