Some traditional media outlets are turning to generative AI as a way to produce more content at lower cost.

In a sector affected by sluggish growth, some media outlets are betting on AI-generated reporting as the future of journalism. But at a time of 'polycrisis', when too much is happening too fast, this is the last thing democracies and societies need.

In the absence of AI guardrails, the implications for the quality of content and the role of journalists are huge.

Is generative AI the future of journalism?

The journalism profession has started to grapple with how AI will be incorporated into workflows, with the first cases of unionisation making the news.

Last year, CNET was caught producing articles with generative AI systems without being transparent with readers about their AI authorship. The news sparked outrage, and The Verge reported that more than half of the AI-generated stories contained factual errors. Members of CNET’s editorial staff have since unionised, asking for better conditions for workers and more transparency and accountability around the use of AI.

For similar reasons, Hollywood writers have been on strike against major studios. Mainly, they want higher wages and more residuals from streaming platforms. But another key concern is the use of AI to write films and TV shows.


Traditional media eyes AI to boost revenue

For its part, News Corp Australia is using AI to produce 3,000 articles a week on weather, fuel prices and traffic, as revealed by its executive chair Michael Miller.

The media conglomerate recently reported a 75% drop in its full-year profits and that, for the first time, more than half of its revenue in 2023 came from digital platforms. As a result, it is now looking to expand its use of cost-saving, AI-produced content.

What is at stake?

The first thing at stake is the quality of journalism, as the accuracy of current generative AI systems is notoriously poor. Large language models such as ChatGPT have no factual understanding of the world: they do not produce facts, they only predict language. They struggle to detect nuance and tend to amplify biases and stereotypes.

In addition, media corporations may see their reputations harmed if they choose to harness AI, particularly if they are not transparent about its use. Historically, these sites and publications have been renowned for investing in quality journalism and fact-checking, and have typically been seen as a bulwark against misinformation and fake news.

Some media corporations are increasingly moving to adopt a data-driven business model based on the monetisation of users' personal data. This model, which has long been at the centre of social media platforms, has already proved unsustainable, as it prioritises profit over quality content and respect for users' privacy. In the unpredictable and unstable world we live in, we need to rely on strong, sustainable, quality journalism to keep us informed and hold the people in power to account. With AI, our democratic values are at stake.