By Karl Quinn
More than half of Australia’s media and creative workers are “extremely concerned” about the rise of artificial intelligence, and almost all believe the government needs to take a stronger role in regulating its use, according to a survey of members of the Media, Entertainment and Arts Alliance released on Monday.
Almost 400 members of the MEAA, which represents actors (through its Equity branch) and journalists, participated in the survey, with 56 per cent saying they were extremely concerned about the rise of AI and a further 30 per cent expressing “moderate” concern. Only 2 per cent of respondents said they were not concerned at all.
The spread of misinformation was deemed the leading threat by 91 per cent of respondents (74 per cent extremely concerned, 17 per cent moderately), closely followed by the potential for the theft of intellectual or creative work (72 per cent extremely concerned, 18 per cent moderately so).
And for Cooper Mortlock, the latter is more than just a possibility. The Sydney-based voice artist claims it has already happened to him.
In December 2022, Mortlock landed a steady gig as one of five voice artists working on an animated online series. He signed a contract for 52 episodes to be recorded over 12 months.
“But when we reached episode 30, they cancelled it,” he says. “And then about a year later, after the contract expired, the producer released another episode using an AI clone of my voice and the voices of the other actors.”
Mortlock challenged the producer over what he felt was a clear infringement of his rights.
“We sent a cease and desist letter, and they responded with, basically, a no,” he says.
The producer claimed they hadn’t used AI, but rather “vocal impressionists as well as digital technology to make it sound like the characters”, says Mortlock.
Through their lawyer, the producer repeated the denial, while adding, “even if they had used AI, that would have been allowed under the terms of your contract”, Mortlock says.
He was furious, and took the case to the MEAA. But the union’s legal advice was that the producer was probably right: even if AI had been used, it probably didn’t constitute a clear breach of the contract, which had been signed before most people in the media and entertainment industries had given much thought to the risks the rapidly evolving technology might pose to their careers and income.
There are plenty of advocates in the entertainment industry for AI’s value as a tool to enhance creativity. But the Hollywood unions argued, with some success, last year that it must only be used with the consent of performers, and with compensation. And that is not always the case.
Scarlett Johansson recently alleged that OpenAI had used an AI-generated version of her voice for its personalised digital assistant, after she had declined to give permission for her actual voice to be used in the app. Though the company has denied the allegation, a digital analysis found “striking similarities” between the two voices.
If it can happen to people of that status, with all the protections their fame and wealth can buy, what chance do those further down the professional tree have?
That’s why Mortlock, like 97 per cent of those surveyed by the MEAA, believes the government needs to regulate the use of this technology as a matter of urgency.
“This type of AI, as it currently exists, is stealing people’s work and reappropriating it without compensation,” he says. “That is theft.
“There need to be laws in place to protect actors and workers from being exploited like this. It shouldn’t have to be stipulated in a contract that you won’t do this without my consent. It’s pretty shitty.”
Contact the author at kquinn@theage.com.au, follow him on Facebook at karlquinnjournalist and Twitter at @karlkwin.