How AI Forces Us to Rethink Journalism’s Business Model

Artificial intelligence offers powerful tools while also putting new pressure on already fragile business models in journalism. While newsrooms can turn to AI to boost efficiency and reach audiences in new ways, they also face a significant decline in traffic – and complex questions about who should pay for the journalism that fuels AI systems.

AI tools are reshaping journalism in many ways, and their impact on business models is becoming increasingly visible. Experts express both optimism and uncertainty about these changes.

For example, Niamh Burns, Senior Research Analyst in Tech and Media at Enders Analysis, describes the current situation as “a mixed bag.” Veronika Munk, Director of Innovation and New Markets at Denník N, notes that every newsroom she knows already uses AI in some form. There is even a sense of fear of missing out, she says, as newsrooms rush to try new tools, but “only a few look at success metrics” or track whether these tools deliver the results they expect.

The Benefits of AI in Newsrooms

Despite this lack of clarity, certain benefits of AI are evident. AI tools can help news outlets respond more quickly to their audiences, operate more efficiently, and personalise their products. They also give news organisations new ways to tell stories, reach audiences through different formats, and create products that have the potential to bring in more revenue. One clear example is the use of AI tools for tagging, which Munk describes as particularly helpful for search engine optimisation and direct email campaigns, both of which are essential for maintaining and growing readership. Other examples include automated social media posting, translation, and transcription tools, both from audio to text and vice versa.

Burns highlights that AI tools can become especially valuable for data journalists. These tools make it possible to analyse large datasets far more quickly and with fewer resources, which means even small newsrooms can attempt investigations that would previously have been out of reach. She also points out that “AI can also help with the multiplatform distribution model,” making it easier to prepare audio, video, or social media versions of a single story. In many cases, these new formats lead to higher user engagement and, with that, a greater chance of converting casual readers into subscribers.

Some organisations are already using AI to write headlines that perform better in search engines or to translate stories into new languages. This allows them to reach audiences they have never served before. Burns, however, warns against taking personalisation too far. While AI can support more sophisticated recommendation systems, journalism has always been shaped by editorial judgement, and she argues that this cannot be fully delegated to algorithms. Editors must still decide which stories matter most.

Munk takes a practical view of these developments. If a newsroom can save time on routine tasks, she says, then there is more capacity for journalistic work, and this ultimately strengthens the product. She has also seen AI tools directly contribute to higher revenue. “We have a lot of campaigns, and this tool, Manychat, a social media client, is really useful,” she explains. Denník N integrated the tool into Instagram, where sharing links in posts is not possible. When the outlet partnered with The New York Times on bundled subscriptions, users could comment “New York Times” on an Instagram post and immediately receive an automated direct message with the subscription link. Munk says they have been using the tool for half a year and “the conversion rate is quite high sometimes.”

Another important contribution of AI is its ability to analyse audience behaviour. By identifying trends in topics, formats, or publishing times that perform best, AI tools can guide editors as they shape content strategies. These insights help balance public interest journalism with the need to produce stories that draw enough attention to sustain the business.

Still, as Munk notes, while some outlets may think about monetising the tools they develop, most are building similar systems for internal use, such as summarisation tools or language checkers. This means the competitive advantage often lies not in creating unique tools, but in deploying them thoughtfully.

Risks to Traffic and Visibility

At the same time, concerns about the risks AI poses to journalism have been steadily growing. Many of these concerns arise from the simple fact that the business model for news media was already fragile long before AI tools became widespread. Burns explains that publishers originally put their content online for free because they expected to earn money from advertising. That model has been faltering for a decade, but, as she says, “with AI, we see a further challenge: news organisations not getting clickthrough traffic as before.”

This is indeed the main concern for news media. Tech companies have relied heavily on news content to train large language models, and now search engines and chatbots answer many queries directly. This means that even when users seek reliable information, they may never reach the website that produced it. Studies already show that the clickthrough rate for Google’s AI-generated summaries is dramatically lower than for traditional search results – Tollbit, for example, found a 91% decrease. Furthermore, Cloudflare reported that OpenAI scraped a news site hundreds of times for every single referral page view it sent.

Publishers see a pattern in this: while their content helps power AI tools, their own visibility shrinks. According to a study by the Reuters Institute, 74% of respondents are worried about a decline in referral traffic for their news organisation.

The impact is not equal across newsrooms. Munk notes that Denník N feels the decline in clickthrough traffic less because of their hard paywall model. Still, it can be a serious problem for outlets that rely heavily on advertising, she adds.

This has serious consequences. As audience behaviour changes, more people turn to AI-powered search engines and chatbots. For many publishers, disintermediation, the loss of direct connection with audiences, is becoming the greatest fear. Younger audiences, who already have weaker ties to traditional news brands, are drifting even further away.

Therefore, Burns argues that newsrooms “need to build direct engagement with their audience,” also because nobody knows how these tools will evolve. They change constantly as tech companies adjust their products to improve user experience. The figures and patterns we see today may shift again in as little as a few months.

Legal Battles and Licensing

Against this backdrop, publishers are trying to rethink how they can adapt. Some believe that they may eventually need to distinguish between human readers and AI agents. As The Atlantic’s CEO, Nicholas Thompson, argued at a conference, they need to identify who is visiting the site so that the organisation can decide how to monetise that interaction. He imagines showing different products or even blocking access in some cases.

You could block AI crawlers, Burns says, but then you face the “problem of losing visibility, because users will still go to these tools to search for information.” Therefore, blocking may protect content, but it also deepens the risk of disappearing from public view.
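In practice, blocking of this kind usually starts with the Robots Exclusion Protocol. The sketch below shows what such a robots.txt policy might look like; the user-agent names are the publicly documented identifiers for OpenAI’s, Perplexity’s, and Common Crawl’s crawlers, but compliance with robots.txt is voluntary, so this only deters crawlers that choose to honour it.

```text
# robots.txt sketch: refuse known AI crawlers, keep ordinary search indexing
User-agent: GPTBot
Disallow: /

User-agent: PerplexityBot
Disallow: /

User-agent: CCBot
Disallow: /

# All other crawlers (including conventional search engines) retain access
User-agent: *
Allow: /
```

This illustrates the trade-off Burns describes: the same file that keeps content out of AI training pipelines does nothing to restore visibility inside AI answer tools, and stricter enforcement requires server-side measures such as user-agent filtering or paywalled access.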

There is also a broader ethical and economic issue. AI companies rely heavily on the quality of journalistic work, which depends on careful fact-checking, verification and editorial judgement. A white paper published last year by News Media Alliance confirmed that journalistic content is among the most frequently used sources for AI systems. Yet much of this use happens without permission or payment. Burns asks whether it is possible to create an incentive structure where AI companies pay for content. Her answer is cautious: “It’s very patchy at the moment.”

Some publishers have responded with legal action. The New York Times has sued OpenAI for copyright infringement, while Dow Jones and the New York Post have taken action against Perplexity.

Others have chosen partnership. A number of news organisations have already agreed to licensing deals. Burns sees “some development in the content of such deals, they are more sophisticated.” She believes that the companies should pay the newsrooms not only for historical, but also for ongoing access.

Survey data from the Reuters Institute shows that almost four in ten publishers expect licensing income from AI companies to become significant. Most, however, prefer collective deals that support the entire sector, rather than each newsroom negotiating its own terms.

Munk agrees that AI companies need journalistic content, and she sees licensing as essential for the future. “It doesn’t work otherwise. If you use something, you need to pay for it,” she argues. Burns also believes that licensing is the right path but warns that not every market will benefit equally. Large English-language publishers have more leverage, while smaller organisations will struggle. She argues that this imbalance shows why “regulatory intervention is needed here, not just ad hoc deals.”

At the same time, the growing value of human editorial oversight may become a strength for publishers who emphasise accuracy, verification, and accountability. Munk notes that journalists are responsible for the content they produce, and this responsibility gives this type of content greater value. Outlets that maintain strong editorial standards, she says, will stand out in the information environment.

Looking ahead, many argue that publishers, journalists, and tech companies should work together to understand how different forms of journalism contribute to the AI value chain. This understanding will be essential for building sustainable business models in the next era of journalism.