Can you spot the issue with these photos of Australian workers?
These look like normal photos of Australian workers, but there is one big difference. Can you spot it?
Artificial intelligence has hit the mainstream, with art and language tools now freely accessible online.
They can generate content from simple text prompts – no coding required.
We asked OpenAI’s Dall-E 2 to imagine the people behind common Australian jobs.
This is what it produced:
[AI-generated images: ENGINEER, TEACHER, AGED CARE WORKER]
The prompt for each job was simple: “a photo of an Australian [occupation]”.
Dall-E 2 then produced four options for each.
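For readers who want to repeat the experiment, the sketch below shows roughly how such a request might look. It assumes the legacy openai Python package (pre-1.0) that was current when DALL-E 2 launched, and an API key in the OPENAI_API_KEY environment variable; the prompt text and output handling are illustrative only.

```python
# Minimal sketch: ask DALL-E 2 for four images of an "Australian engineer".
# Assumes the legacy openai Python package (pre-1.0) and an API key in OPENAI_API_KEY.
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

response = openai.Image.create(
    prompt="a photo of an Australian engineer",  # swap in any occupation
    n=4,                # request four variations, as in the article
    size="1024x1024",
)

# Each entry in the response carries a temporary URL to a generated image.
for image in response["data"]:
    print(image["url"])
```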
The algorithm seemed to equate “Australian” with “white”, despite 3 per cent of the population being of Aboriginal and/or Torres Strait Islander origin, and 28 per cent being born overseas, including 5 per cent born in India or China alone, according to the 2021 Census.
The AI also depicted all four teachers as male, despite the Australian government’s Workplace Gender Equality Agency (WGEA) data showing 75 per cent of professionals in the preschool and school education sector were female.
Computer programmers, taxi drivers, waiters and aged care workers also had no gender diversity in their AI-generated images.
[AI-generated images: MINE WORKER, COMPUTER PROGRAMMER, PLUMBER, NURSE]
Perhaps most surprising, however, was that all Australian CEOs were represented as men – not just because WGEA showed almost a quarter (22 per cent) of company heads were female, but because OpenAI used the exact example of a CEO when it announced it was “implementing a new technique so that DALL-E generates images of people that more accurately reflect the diversity of the world’s population”.
“This technique is applied at the system level when DALL-E is given a prompt describing a person that does not specify race or gender”, the statement read.
Dr Sebastian Sequoiah-Grayson, of the University of New South Wales School of Computer Science and Engineering, who has designed courses on IT ethics, said we should be worried about the results from AI tools such as DALL-E 2 “because of what it says about us”.
AI datasets usually start with a process of humans tagging images with labels so the algorithm has something to learn from.
[AI-generated images: DOCTOR, CEO, TAXI DRIVER]
“The AI is training from the cultural tropes we as a species have built,” he said.
“You are paying someone 2 cents an image to see if someone looks law-abiding or like a CEO.
“It’s us humans that make these decisions and that dataset gets fed into a system that then learns to make predictions.
“(If you approached) humans of different ages and showed them pictures of white men in suits and people who didn’t look like that then said ‘pick the ones you think look like CEOs’, I think a lot of humans would make the same choice.”
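As a rough illustration of the pipeline Dr Sequoiah-Grayson describes, the hypothetical sketch below stands in for human annotators’ judgements with made-up labels and trains a simple classifier on them; whatever pattern the labellers follow is exactly what the model learns to reproduce.

```python
# Toy sketch of human labels feeding a predictive model.
# The "embeddings" and labels are invented placeholders, not real data.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Pretend each row is an image embedding, and each label is a paid
# annotator's snap judgement of whether the person "looks like a CEO".
embeddings = rng.normal(size=(200, 8))
labels = (embeddings[:, 0] > 0).astype(int)   # whatever rule the annotators applied

model = LogisticRegression().fit(embeddings, labels)

# On new images the model simply reproduces the annotators' pattern,
# biases included; it has no independent notion of fairness.
new_images = rng.normal(size=(5, 8))
print(model.predict(new_images))
```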
[AI-generated images: RETAIL WORKER, WAITER]
AI’S PROBLEMATIC HISTORY
There are many examples of unethical or biased AI tools.
When Microsoft launched its Tay bot in 2016, it took about 24 hours of learning from Twitter to start posting pro-Hitler tweets and supporting Donald Trump’s proposed wall separating the US from Mexico.
Meanwhile, a 2018 research paper on facial recognition technology found algorithms performed worst on darker-skinned females, with error rates up to 34 per cent higher than for lighter-skinned males.
Dr Sequoiah-Grayson said a major ethical issue for AI in general was that humans became less answerable for decisions.
He gave the example of an AI system used in the US to predict whether a criminal is likely to reoffend.
“We don’t want to live in a place where we ask, ‘Why was so-and-so sent to prison for 20 years?’, ‘Oh I don’t know but the AI says it’s right so it must be right’,” he said.
He also gave the example of AI weapons.
“Even if they are more accurate than soldiers on the battlefield, who is answerable?” he said.
“Do we want to live in a place where decisions around targets are just being pumped out the end of an algorithm?”
AI IN THE WRONG HANDS
In 2017, an algorithm was developed that claimed to predict a person’s sexuality from a single image.
Whether accurate or not, Dr Sequoiah-Grayson said the consequences of using such AI were “horrific”.
“It won’t be long before (facial recognition) is available on people’s phones and downloadable as an app,” he said.
“You can imagine kids running around with a ‘gay detection algorithm’ on their mobile phone – if you have ever met children, you know exactly what will happen.”
And that’s not to mention the ramifications if it was used by state actors.
Dr Sequoiah-Grayson said new technology was typically introduced with a positive application, but once rolled out might be used in another way.
He suggested facial recognition apps might be marketed as a way for parents to find their lost child in a crowd, but others could use it to stalk and harass.
“I’d be very surprised if (a facial recognition app) wasn’t available in the next five to 10 years,” he said.
“There is no point banning it in Australia because the internet exists, people will be able to get it anyway.”