In March 2024, a consortium of seven media organisations and a media-focused technology provider launched the Taktak project, supported by the European Commission, with the objective of developing an innovative donation solution. The initiative addresses a fundamental challenge facing modern journalism: the need for sustainable revenue sources amid evolving consumption patterns and the precarious conditions faced by freelance journalists. It introduces a new approach to donations, whereby readers can decide which organisations to support.

The concept for Taktak was developed by Worldcrunch, a Paris-based digital magazine known for its work with international partners. Lucie Holeček, a design-thinking expert and consultant on the project at Transitions Online, explains that Worldcrunch’s distinctive collaborative model presented challenges that existing payment platforms were unable to accommodate. As Worldcrunch frequently translates and shares articles with various international media partners, a key challenge emerged in relation to the allocation of donation revenue across contributors. “None of the existing payment solutions worked,” Holeček states, adding that a new approach was needed to ensure funds were distributed fairly among all parties involved.

The Taktak project represents a convergence of three key developments in the journalism sector. Firstly, the initial research phases revealed significant problems faced by journalists, particularly freelancers, in terms of job stability, financial security and stress levels. “We were aware of the difficulties, but not to this extent,” Holeček recalls. Secondly, there is significant untapped potential for joint reporting efforts across languages and borders, which could enhance the scope and reach of journalism. Finally, there is an increasing need to generate direct revenue from the audience.

The Taktak consortium, formed by Worldcrunch, comprises an impressive array of local, national, and international media outlets, which are coming together to explore these opportunities. The consortium includes Mensagem, which provides local news in Lisbon; Pod Tepeto, a media outlet based in Plovdiv, Bulgaria; La Marea, a Spanish publication; and Livy Bereg, a Ukrainian news source. The platform’s geographic diversity and the difference in scale among its members enable it to address the needs of journalists and readers at multiple levels, from the hyper-local to the transnational. The involvement of these media groups also benefits younger journalists, who are facing an increasingly unstable job market and income situation. The consortium’s reach is extended further through the inclusion of WAN-IFRA, the World Association of News Publishers, and Transitions Online, both of which have extensive networks within the journalism community.

Taktak is currently a closed consortium, funded by an investment of €1,376,040 over two years. Eighty percent of this funding, totalling €1,100,832, is provided by the European Commission under the Journalism Partnerships Collaboration call. The remaining 20% is provided by the Taktak partners themselves. “The funding goes toward creating the tool,” said Holeček. She adds that the tool is currently in development and will support various types of content, including articles and podcasts, with options for transparent payment distribution. The tool enables readers to make donations and to see precisely where their contributions are being allocated. This transparency is a key element of the project’s value proposition for donors, as it builds trust.

One of the distinctive features of Taktak is its flexibility. Readers are able to select the total donation amount, while collaborating journalists can choose the ratio in which it is shared among them. Holeček states that Taktak’s donation model provides an alternative to the fatigue that many readers feel with multiple subscriptions. This new solution offers flexibility, allowing readers to give money without any obligation. They can simply indicate their appreciation for an article and choose to support the publication directly. This approach is particularly beneficial for freelancers, who might otherwise be excluded from revenue-sharing models even when their work performs well.
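As an illustration of the idea (Taktak’s actual implementation has not been published, so the function, weights, and names below are hypothetical), a reader-chosen donation split by ratios agreed among contributors might look like this:

```python
def split_donation(amount_cents, shares):
    """Split a donation among contributors by agreed weights.

    `shares` maps contributor -> weight (e.g. the ratio chosen by
    the collaborating journalists). Integer cents are used so the
    payouts always sum exactly to the donated amount; any leftover
    cents from rounding go to the first contributors listed.
    """
    total_weight = sum(shares.values())
    items = list(shares.items())
    payouts = {}
    allocated = 0
    for name, weight in items:
        part = amount_cents * weight // total_weight
        payouts[name] = part
        allocated += part
    # hand out remaining cents deterministically
    remainder = amount_cents - allocated
    for name, _ in items[:remainder]:
        payouts[name] += 1
    return payouts

# A reader gives €10 for an article co-published by two outlets
# and a freelance author (hypothetical 2:1:1 ratio).
split_donation(1000, {"author": 2, "outlet_a": 1, "outlet_b": 1})
# → {'author': 500, 'outlet_a': 250, 'outlet_b': 250}
```

Working in integer cents and allocating the rounding remainder explicitly is what makes the split auditable, which matters for the transparency the project promises donors.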

Taktak’s primary objectives extend beyond the mere creation of a new revenue stream. They also encompass the fostering of collaboration across media, the promotion of diverse voices, the growth of reader engagement, and the encouragement of a more resilient journalism sector. Taktak’s donation-based model encourages journalists and media organisations to commit to quality, in-depth coverage that resonates with readers, with the aim of creating a mutually beneficial relationship. The platform’s secondary objectives include facilitating the sharing of best practices and insights among media outlets, which can ultimately benefit the wider sector.

The tool is currently in the development stage and has been designed with the objective of collecting payments efficiently while distributing them fairly. The tool is essentially a flexible ‘donate’ button that allows readers to decide how much to give to each party involved in the content’s creation. This flexibility addresses a market gap for direct support of journalists, particularly in cases where readers wish to contribute without committing to a full subscription. As Holeček explains, the objective is to make the process “as flexible as possible”, offering financial support to journalists facing financial difficulties who might otherwise go unrewarded.

The first prototype of the Taktak tool is scheduled for release in 2025, following which it will undergo further refinement based on feedback. Holeček emphasises that, while the eventual aim is to roll out Taktak across Europe, the team is mindful of the regional nuances involved. “Every country is specific,” she states, citing differences in consumer attitudes towards paying for news content and in regulatory frameworks. The consortium’s approach to scaling will be strategic and tailored to the specific needs and context of each market.

In the summer of 2023, amidst ongoing debates on artificial intelligence’s cultural and economic impact, the American Journalism Project (AJP) announced a partnership with OpenAI. This collaboration aimed to provide funding for the innovative use of AI in local newsrooms. Although OpenAI’s donation of US$10 million was smaller than recent contributions from Meta and Google, it marked a significant moment where a leading technology company sought to support the news industry through philanthropy.

Sarabeth Berman, CEO of the American Journalism Project, emphasised the opportunity to involve local news organisations in shaping the implications of generative AI. She highlighted the dual role of venture philanthropists in both fostering innovation and mitigating the financial decline of local news. AJP is part of a broader movement of venture philanthropy programmes, including the Google News Initiative and Meta Journalism Project, which are increasingly influential in journalism. These organisations position themselves as key players in revitalising local journalism through entrepreneurial ideologies and market-oriented solutions.

Venture philanthropists frame the crisis in local news as an opportunity for innovation, portraying themselves as essential in matching financial resources with deserving organisations. They argue that their investments can achieve sustainability and growth in local news where market forces have failed. John Thornton, a co-founder of AJP, compared venture philanthropy to venture capital, suggesting that these funds are necessary to support mission-driven news organisations.

This approach links financial growth directly to the success of local journalism, positioning revenue generation as a critical measure of success. AJP’s impact report claims that their grantees generally grow significantly in revenue, suggesting that venture philanthropy can address market failures in local journalism. However, critics note that much of this funding tends to benefit already affluent communities, raising questions about the equitable distribution of resources.

Venture philanthropists also play a crucial role in disseminating practical knowledge and technical capabilities in journalism. They offer training, best practice guides, and case studies to help news organisations integrate new technologies and business strategies. For example, the Google News Initiative supports projects like the Post and Courier’s use of Google Analytics to develop paid newsletters, demonstrating how local newsrooms can adopt innovative practices to drive revenue growth.

The influence of venture philanthropy also extends beyond financial support, to shaping the discourse around journalism’s future. These organisations produce significant discussion about journalism, promoting their own role as arbiters of change and innovation. They position themselves as knowledgeable experts who understand how to best use available philanthropic capital to achieve sustainability in journalism.

The discourse of venture philanthropy often merges financial and public missions, suggesting that market-oriented strategies can serve the public interest. For instance, Elizabeth Green, co-founder of AJP, stressed the need for expert teams to raise diverse revenue and develop strategic leadership within local news organisations. This framing implies that financial sustainability and public mission are intertwined and that successful organisations must navigate market realities to fulfil their public roles.

Venture philanthropy organisations also respond to journalism critiques, such as the need for diversity and equity, by framing these issues as mission and business goals. They highlight successful examples of integrating diversity into their product and audience development strategies, suggesting that these efforts can attract philanthropic funding.

According to this study, venture philanthropy’s blending of financial concerns with public mission creates a powerful discourse that shapes how resources are directed in journalism. While some critics argue for increased public and government support for local journalism, venture philanthropists present a compelling alternative by leveraging market-driven innovation and philanthropic capital to address the ongoing crises that the industry has been facing for more than a decade.

Creech, B. (2024). Venture Philanthropy, Local News, and the Murky Promise of Innovation. Media and Communication, 12. https://doi.org/10.17645/mac.7496

Reaching young people has become a strategic priority for Public Service Media (PSM) in many Western countries, as these organisations face challenges in engaging those audiences with the news. To expand their reach, PSM organisations often rely on social media platforms. However, this reliance creates dependencies on platforms like Facebook, Instagram, and Snapchat. These platforms are driven by commercial interests, leading to datafication and algorithmic filtering, which do not align with the values driving PSM, such as universality, independence, diversity, and accountability. As gatekeepers, these platforms significantly influence news dissemination, posing challenges to journalistic integrity and PSM’s core ideals. These concerns are particularly relevant given the central role of digital intermediaries in reaching youth.

This study focuses on the Norwegian Broadcasting Company (NRK), Norway’s PSM organisation, and its efforts to target young audiences on Snapchat. Despite the global popularity of TikTok, Snapchat remains a major platform in Norway, particularly among young adults. As scholarly research on how newsrooms navigate Snapchat’s rules and metrics for disseminating news is limited, this study aims to fill that gap by exploring how journalists produce news for Snapchat while adhering to PSM obligations, and examining the implications for content dissemination and audience reach.

The research uncovers complex gatekeeping processes throughout the publication process, referred to as “dynamic gatekeeping,” in which journalists navigate Snapchat’s algorithmic gatekeeping. This involves interpreting audience metrics, adhering to Snapchat’s guidelines, and responding to audience reactions as seen in analytics. NRK’s news flow on Snapchat involves a reciprocal relationship between journalistic decisions and platform algorithms. The study identifies three key gatekeeping stages: pre-publication, publication, and post-publication.

In the pre-publication phase, journalists at NRK UNG (NRK Youth) use Snapchat metrics from the Story Studio to prioritise and produce news stories. This data provides detailed information about the audience and influences decisions on news topic selection and presentation. The goal is to maximise engagement from the target youth demographic. The newsroom monitors metrics such as click-through rates, reading times, and audience demographics. They adjust the content to align with audience preferences and algorithmic influences. Despite relying on these metrics, journalists argue that they prioritise independent editorial decisions based on news values and ethics.

During the publication stage, Snapchat’s algorithms directly influence the selection of “tiles” (front page visuals for Snapchat editions) through ABCD testing. This testing helps determine which tile will engage the audience most effectively, influencing the prominence of specific stories. The newsroom creates multiple tiles for each story and uses algorithmic feedback to improve future editions. This process highlights the interplay between the platform’s algorithmic decisions and journalistic content creation.
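Snapchat’s ABCD test is a black box to the newsroom, but the underlying mechanic is a standard multi-variant experiment. The sketch below is purely illustrative (the function name, traffic split, and click-through rates are all assumptions, not NRK’s or Snapchat’s actual parameters): each tile variant is shown to a slice of the audience, and the variant with the best observed click-through rate fronts the edition.

```python
import random

def pick_winning_tile(tiles, impressions_per_tile, ctr_by_tile):
    """Simulate a simple A/B/C/D tile test.

    Each tile variant gets an equal slice of impressions; the one
    with the highest observed click-through rate wins. `ctr_by_tile`
    stands in for real audience behaviour, which the platform
    measures but the newsroom only sees in aggregate.
    """
    observed = {}
    for tile in tiles:
        clicks = sum(
            1 for _ in range(impressions_per_tile)
            if random.random() < ctr_by_tile[tile]
        )
        observed[tile] = clicks / impressions_per_tile
    winner = max(observed, key=observed.get)
    return winner, observed
```

A run with four hypothetical tiles whose true click rates differ will, given enough impressions, reliably promote the strongest variant, which is why the newsroom studies the losing tiles for lessons about future editions.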

Post-publication, Snapchat’s flagging system enforces community guidelines by restricting the dissemination of content deemed inappropriate. This flagging often frustrates journalists, as it limits the reach of what they believe to be important stories. Violations, such as those related to graphic content or commercial elements, result in flagged stories that only reach existing subscribers. Journalists adapt by modifying content to avoid flagging but express concerns about the inconsistent enforcement of guidelines and its impact on editorial integrity.

The study examines the complex and ever-changing relationship between PSM journalists and Snapchat’s algorithmic gatekeeping. It underscores the difficulties of upholding journalistic independence while reaching out to younger audiences on external platforms. These findings call for careful consideration of the implications for PSM’s editorial autonomy and the credibility of their news coverage. As PSM organisations grapple with these challenges, the study recommends additional research into the broader effects of platform algorithms on journalistic methods.

Røsok-Dahl, H., & Olsen, R. K. (2024). Snapping the news: Dynamic gatekeeping in a public service media newsroom reaching young people with news on Snapchat. Journalism, 0(0). https://doi.org/10.1177/14648849241255701

The role of AI in journalism offers both benefits and risks. Whilst it enhances efficiency for tasks such as transcription and data analysis, it also poses ethical concerns, propagates misinformation, and causes dependency on tech companies. Responsible AI use, editorial oversight, and robust training are crucial to navigating these growing challenges. Support from donors is essential for building capacity and fostering innovation in newsrooms.

Artificial Intelligence (AI) refers to “a collection of ideas, technologies, and techniques that relate to a computer system’s capacity to perform tasks that normally require human intelligence.”

Large language models (LLMs), able to comprehend and generate human language text, became widely accessible in late 2022, with OpenAI’s ChatGPT pioneering these efforts. Following its launch, companies like Google, Meta, and Microsoft released their own generative AI products, integrating the technology into existing systems.

The role of AI in journalism emerges as a double-edged sword. Whilst it has already inflicted much harm through social media algorithms and surveillance practices, it also holds promise for enhancing efficiency in the media. Journalists can harness AI to mitigate risks through informed adoption, leveraging its capabilities to speed up monotonous tasks, track malign government funding, and identify deepfakes, particularly benefiting data journalists. However, it is imperative to remain aware of the risks posed by AI, especially considering past mistakes with social media and the tendency towards overreliance on it for audience reach.

AI Usage in Newsrooms

Media professionals are increasingly making use of AI tools. A May 2024 global survey conducted by the public relations firm Cision found that 47% of journalists used tools like ChatGPT or Bard. At the same time, in an AP report published in April, 70% of respondents (journalists and editors worldwide) indicated that their organisation had, at some point, used various AI tools.

However, geographical differences in AI usage in newsrooms can also be observed. According to a new report by the Thomson Foundation and Media and Journalism Research Center (MJRC), focusing on the Visegrad countries (Poland, Czechia, Slovakia and Hungary), “AI adoption is slower and marked by ethical concerns, highlighting the need for careful management and collaboration.”

At the same time, journalists have been using AI tools for longer and on a much broader spectrum than most would think, says Damian Radcliffe, a professor at the School of Journalism at the University of Oregon.

In a recent survey by the Oxford-based Reuters Institute for the Study of Journalism (RISJ), media professionals identified back-end automation, such as transcription and copyediting, as the area where AI tools are most helpful in the media industry. This was followed by recommender systems, content production, and commercial applications. Other common examples of AI application in newsrooms include data analysis and the automation of repetitive tasks. This helps improve efficiency and frees up journalists to focus on more complex stories, whilst simultaneously increasing the speed and decreasing the costs of content production and distribution. Nowadays, “it is almost impossible to work without AI tools, especially if one works with large datasets,” says Willem Lenders, Program Manager at Limelight Foundation.

AI tools are used in newsrooms for various other purposes as well. According to Radcliffe, one significant use is in programmatic advertising: over 90% of US ads are handled this way. Another innovative application is dynamic paywalls, which adjust based on user-specific factors such as location, device, and visit frequency. This approach, employed by larger outlets like The Atlantic and The Wall Street Journal, allows organisations to tailor the number of free articles and subscription offers to individual users. Additionally, AI is used for predictive analytics, helping newsrooms identify trending stories, influence article placement, devise social media strategies, and plan follow-up stories.
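To make the dynamic-paywall idea concrete, here is a minimal rule-based sketch. The function, thresholds, and adjustments are invented for illustration; outlets like The Atlantic and The Wall Street Journal use far more sophisticated, typically ML-driven, propensity models rather than fixed rules.

```python
def free_articles_allowed(visits_this_month, is_mobile, referred_from_search):
    """Hypothetical dynamic-paywall rule.

    Casual visitors arriving from search get extra free articles so
    they are not driven away on first contact; frequent visitors see
    the subscription offer sooner, since loyalty signals a higher
    likelihood of converting. All numbers here are assumptions.
    """
    base = 5
    if referred_from_search:
        base += 3   # be generous with first-touch search traffic
    if visits_this_month > 10:
        base -= 3   # loyal readers are likelier to subscribe
    if is_mobile:
        base += 1   # assume mobile readers convert at lower rates
    return max(base, 1)
```

Even this toy version shows the trade-off such systems manage: maximising subscription conversions among engaged readers without suppressing the reach that draws new audiences in.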

AI-Associated Risks

The use of AI in journalism also poses substantial risks related to reliability, ethics, and the dissemination of misinformation. AI’s ability to “hallucinate” facts, or generate plausible but incorrect information, makes its use in information gathering problematic. Therefore, experts argue that news organisations should implement ethical guidelines and robust training to navigate these challenges.

Limelight’s Lenders emphasises that responsible AI use depends not just on its application but on who owns the tool, drawing parallels to the influence of big tech on content distribution. He advocates for a balanced use that includes human oversight, to prevent the exclusion of critical editorial judgment. Radcliffe also identifies the most significant risk as removing human oversight in newsrooms. He thinks there are topics where AI tools can be helpful, for example in sports coverage, which can often be quite formulaic. However, other beats might require more nuance, and AI cannot provide that yet. An example of this risk is the insensitive headline generated by AI in an MSN obituary of a basketball player, underscoring the need for editorial supervision to avoid catastrophic mistakes. Furthermore, Lenders argues that LLMs regurgitate what has been written before, which can lead to reproducing harmful stereotypes.

The current function of generative AI jeopardises access to trustworthy information. It does not distinguish between reliable and unreliable sources and often fails to disclose its primary source of information, making verification difficult. This amplifies misinformation and public confusion, emphasising users’ need for digital and media literacy.

Accountability is another critical issue. Unlike human-generated content, AI lacks clear attribution, undermining public trust in journalism. Journalists’ intellectual property can even be compromised this way, as AI often uses information from journalistic articles without credit, exacerbating existing viability issues in journalism.

Radcliffe notes that smaller newsrooms might embrace AI as a cost-saving measure, reducing the number of reporters; in his view, those roles will never come back. He warns of the dangers of dependency on platforms, highlighting the lessons from social media, where algorithm shifts have impacted reach and control has always remained with tech companies. “It is not a partnership; all power lies with the tech companies,” he argues.

Lenders echoes this concern, pointing out that the primary aim of tech companies is profit, not public interest or quality information. He suggests developing independent tools and technologies, like those by OCCRP, ICIJ, Bellingcat, Independent Tech Alliance, AI Forensics, and others. However, these require significant investment and user support from the journalism sector.

Radcliffe further cautions that news organisations risk becoming redundant if users turn to chatbots for information. To mitigate this, he advises preventing chatbots from scraping content and encouraging newsrooms to create unique content that adds value beyond what AI can offer. He believes fostering trust, and educating the audience on why journalism matters, are crucial. Lenders concurs that AI cannot replace the relationship with the audience, highlighting trust as the main issue. He also believes smaller independent newsrooms will recognise that they cannot maintain quality by relying solely on AI.

The debate about AI in journalism often polarises into two extremes, Lenders adds: it will either save or ruin the industry. “We don’t need to worry about the robots, we have to look at the reality,” he argues. A realistic perspective acknowledges the harm algorithms have already caused, such as in ad distribution and spreading disinformation. An AI Forensics study showed how Meta allowed pro-Russia propaganda ads to flood the EU, illustrating the potential for AI misuse.

Reporters Without Borders (RSF) also raises alarms about AI-generated websites that mimic real media sites and siphon ad revenue from legitimate news outlets. Research by NewsGuard identified numerous sites predominantly written by AI, aiming solely for profit by maximising clicks with minimal effort. This approach eliminates ethical journalism, floods the market with questionable articles, and diminishes access to reliable information. These AI-generated articles also sometimes contain harmful falsehoods, underscoring the moral necessity to disclose AI-generated content and ensure transparency, so readers can critically evaluate the information.

The Potential Role of Funders

In this evolving landscape, donors could play a crucial role, not by providing direct solutions but by supporting organisations which, together, form an ecosystem that nurtures innovation. Their involvement could bridge the gap between technology and policy, particularly in journalism. For example, donors can invite experts with a high level of tech knowledge to critically assess potential pitfalls and ensure they are well-informed, in order to avoid simplistic utopian or dystopian narratives.

Lenders highlights the importance of donors informing themselves about the possible harms and risks of AI and encouraging grantees to deepen their technology knowledge. He emphasises the need for good core funding to avoid reliance on cheaper, riskier solutions. Lenders argues that, given the rapid pace of technological change, it is crucial to have robust organisations that can anticipate risks and support journalists in connecting with these entities or conducting their own analyses. Rather than shifting funding every few years, building capacity within newsrooms and CSOs to keep up with AI advancements is a more sustainable strategy.

Conversely, Radcliffe underscores the necessity of AI training, particularly for smaller news organisations. Whilst large organisations are well-resourced and capable of developing in-house AI solutions, smaller ones often lack the resources to follow or contribute to debates on AI. These smaller newsrooms are also less able to engage in legal battles against tech companies. Thus, donors should support them in lobbying for their needs and amplifying their voices. Training on the uses and dangers of AI, particularly on increasing revenue through methods like dynamic paywalls, and facilitating connections among smaller newsrooms so they can share their AI experiences and use cases are crucial steps donors can take. “But I would encourage all donors to ask newsrooms what they need,” he adds. “Don’t dictate the training and funding, ask the outlets you want to support how you can best help them in this space.”

Smaller publishers often turn to third-party AI solutions from platform companies due to the high costs and challenges of independent development, such as the need for extensive computing power, competition for tech talent, and the scarcity of large datasets. These platform solutions offer convenience, scalability, and cost-effectiveness, allowing publishers to leverage AI capabilities without the financial burden of in-house development. However, Lenders points out the risks associated with cheaper solutions. “We need newsrooms that have the capacity to be critical of what they use,” he argues, adding that it is not a question of utopia versus dystopia: understanding how AI tools can help newsrooms requires a realistic analysis of their benefits and risks.