A little over one year after the launch of ChatGPT, the world’s first major publicly available generative AI product, it’s clear the technology can save busy journalists a huge amount of time. Newsrooms across the US are already using AI to come up with headlines, generate summaries, and choose pictures.
But it also carries risks, for journalists and the reading public.
Indeed, ChatGPT itself rattled off numerous risks when I asked what they might be: factual errors, bias, potential for skill erosion, creation of misleading ‘deep fakes’ and, of course, ultimately (for journalists, at least), redundancies!
Some of these risks are already well known, especially among New York City lawyers: two were sanctioned last June for citing six fictitious cases in a court filing, courtesy of ChatGPT.
But what about the economic risks for media companies, some of which might be privately salivating at the opportunity to cut costs? AI interfaces are widely available either for free or for a small monthly subscription – access to ChatGPT’s most advanced model, GPT-4, costs around US$20 (A$30) a month.
Aimee Rinehart, a senior product manager for AI strategy at the Associated Press, one of the world’s biggest news organisations, issued a stark warning earlier this month at a seminar in New York City.
AI tools are provided cheaply for now to encourage take-up, but developers could jack up the price significantly once news organisations and their employees have come to rely on them.
“That is always the worry, and that is what has always borne out. Journalists start using a tool, they really like it, and then Google says, ‘we’re not supporting that tool anymore’,” she told journalists at the New York Foreign Press Centre on 18th January.
“They could also decide not to offer any public APIs; they could close it down entirely, and then suddenly it’s only blue-chip companies that are able to afford [them] – I’m thinking like sectors that make more money than journalism, like banking and medicine and things like that.”
Were this to occur down the track, media organisations might have an even greater case to receive royalties from AI providers, who rely heavily on published articles as the inputs to their AI interfaces. Last September, News Corp (publisher of The Australian) said it was in negotiation with AI businesses with a view to receiving payment for use of its publications.
In December the New York Times went a step further, suing Microsoft and OpenAI, the developer of ChatGPT, for using its articles without permission. “These tools were built with and continue to use independent journalism and content that is only available because we and our peers reported, edited, and fact-checked it at high cost and with considerable expertise,” the Times said.
The courts could in effect destroy ChatGPT by ordering its developers to stop scraping the millions of news articles available on the internet, often only behind a paid subscription.
Little regulation surrounding AI exists in the US. Last month the US Senate held a hearing on its implications for the future of journalism, where senators appeared inclined to side with the news industry – much as politicians in Australia did when news publishers sought compensation from search engine providers for referencing news articles.
Regardless, investors appear sceptical that publishers will gain the upper hand in this emerging titanic stoush. The share price of Microsoft, OpenAI’s biggest backer, has increased around 70 per cent since ChatGPT launched in late 2022.
Artificial intelligence has already started to revolutionise journalism. It’s not yet (thankfully, perhaps!) able to break stories or lunch with contacts, but its ability to answer questions on almost any topic within seconds, or even construct arguments, is, without exaggeration, mind-blowing.