NewsBite

Adobe uses ‘fake’ Indigenous art to train AI models

The tech giant is carrying fake Indigenous art as stock, raising concerns about whether it can lawfully resell and license this art. Then there’s the use of such images for training AI models.

Cheaply acquired vector and AI-enhanced imagery, originally developed and sold by small offshore sites, is selling on Adobe Stock at premium prices.

The Australian has seen numerous instances of images of purported Indigenous art developed by West Bengal-based site hellovector.com selling on Adobe Stock under third-party names.

Adobe grants “an extended licence” for around $88, but the originator site claiming ownership sells the same images itself for about $4 and forbids copying by third parties.

A search on Adobe Stock for Aboriginal artworks returns 384,000 results, an indication of how large the image base is.

Hellovector.com’s website says it sells AI-developed vector imagery, and some of its works are marked as AI-enhanced. Copying is against its policy.

The same image from helloVector in Adobe’s stock library.

In reply, Adobe said the vector art was legal and that it was questionable whether AI was involved. In any case, the image The Australian supplied was submitted before its generative AI policy was introduced and would not be taken down.

“As this content does not violate our policies, it will remain online,” Adobe said in a statement.

One concern is the cultural issue of Adobe carrying fake Australian Indigenous art as stock. Another is whether Adobe can lawfully resell and license this art. Another is how effectively Adobe enforces its policies. Then there’s the use of such images for training AI models.

In its statement, Adobe confirmed it used third-party content supplied to Adobe Stock to train its AI models.

“Yes, part of the Stock content used to train Firefly contains generative AI content which has gone through the same submission process as any other Stock asset.” Adobe said contributors had to agree to terms of use that included having the rights to the images.

The corporation is far from the only one grappling with the complex world of copyright, intellectual property and image authenticity thrown up by generative AI.

Elsewhere, Adobe has been kicking goals in the fight to expose the avalanche of fake AI-created images on social media, with generative AI fakery regarded as a major threat to democracies and social cohesion. The issue is pressing, with two billion voters going to the polls this year in about 50 countries.

However, the tech industry is rallying behind a new standard under which creators add content credentials to media if they want their images, video and documents to be seen as credible on social media, websites and other platforms.

The content credential adds a layer of “tamper-evident provenance” to all types of digital documents, according to the Content Authenticity Initiative website. That includes details of editing and any involvement of artificial intelligence.
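
To make “tamper-evident provenance” concrete: the idea is that a manifest recording the file’s capture history, edits and any AI involvement is cryptographically bound to the exact bytes of the content, so changing either the pixels or the claimed history breaks verification. The Python sketch below illustrates only that principle; it is not the initiative’s actual manifest format, it uses a shared-secret HMAC where the real standard relies on certificate-based signatures, and every name in it is hypothetical.

```python
import hashlib
import hmac
import json

# Hypothetical signing key for illustration only; the real scheme uses
# asymmetric certificates, not a shared secret.
SIGNING_KEY = b"demo-key-not-for-production"

def attach_credentials(content: bytes, history: list[str]) -> dict:
    """Bind a provenance manifest to the exact bytes of the content."""
    manifest = {
        "content_sha256": hashlib.sha256(content).hexdigest(),
        # e.g. ["captured:camera", "edited:crop", "ai:none"]
        "history": history,
    }
    payload = json.dumps(manifest, sort_keys=True).encode()
    manifest["signature"] = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return manifest

def verify_credentials(content: bytes, manifest: dict) -> bool:
    """Any change to the bytes or the claimed history breaks verification."""
    claimed = {k: v for k, v in manifest.items() if k != "signature"}
    payload = json.dumps(claimed, sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return (
        hmac.compare_digest(expected, manifest["signature"])
        and claimed["content_sha256"] == hashlib.sha256(content).hexdigest()
    )

image = b"...raw image bytes..."
cred = attach_credentials(image, ["captured:camera", "ai:none"])
print(verify_credentials(image, cred))         # True: untouched
print(verify_credentials(image + b"x", cred))  # False: tampering detected
```

Altering a single byte of the content, or editing the recorded history, makes verification fail, which is the property the credential is meant to provide at scale.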

Adobe, The New York Times and Twitter set up the initiative in 2019, but concern about fake online content has heightened since.

Speaking with The Australian at the Adobe Summit in Las Vegas, Adobe executive vice-president and general counsel Dana Rao said membership had ballooned to more than 2500 companies.

Members included camera companies such as Nikon and Sony, software companies such as Microsoft and Adobe, and media outlets such as The Wall Street Journal, The New York Times, AP, Reuters and the BBC.

“They want to make sure they can communicate authentically to their customers,” Mr Rao said.

“We’re very excited that Google decided to join a month ago and is actually on the steering committee of the standards.”

Mr Rao said Meta had examined the standard and was going to add aspects of it to Facebook and Instagram. Users would touch an icon on an image to reveal its content credentials; an image without credentials would be treated as unverified.
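
A platform-side rule matching that description might look like the following sketch, which reuses verify_credentials from the example above; the labels are hypothetical stand-ins for whatever the icon would actually reveal.

```python
def credibility_label(content: bytes, manifest: dict | None) -> str:
    # Hypothetical display rule: verified credentials are surfaced behind
    # the icon; a missing or broken manifest reads as unverified.
    if manifest is not None and verify_credentials(content, manifest):
        return "Content credentials: " + ", ".join(manifest["history"])
    return "Unverified: no valid content credentials"
```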

Governments were starting to implement the standard within their administrations too. Mr Rao said he would encourage the Australian government to adopt it, along with social media companies and news organisations in Australia.

“We would be encouraging them (Australian government) to take a look at how they want to protect their elections. And this is the way to do it.

“We’re very happy that we’ve seen the government in the European Commission and in the White House all say this is part of the solution. We have to have this. The European Commission is further along in terms of actually issuing regulation,” he told The Australian.

Mr Rao said the initiative wanted all content capture devices, including phones, to have the ability to add content credentials if users wanted it, and all media outlets and social media platforms to carry those credentials where they find them. This would apply to images, audio and video.

Chris Griffith attended Adobe Summit in Las Vegas courtesy of Adobe.

Original URL: https://www.theaustralian.com.au/business/technology/adobe-uses-fake-indigenous-art-to-train-ai-models/news-story/3ebc3498e96f84977c464c28d93d25cf