NewsBite

New study finds AI hiring systems may be discriminating

Think your dream job application is in the hands of a recruiter? Think again — it may be an algorithm.


AI hiring systems are revolutionising the job market, offering employers a faster and cheaper way to screen candidates. But a new study by Melbourne Law School researcher Dr Natalie Sheard warns that these systems could unintentionally be entrenching discrimination against women, people with disabilities, and culturally diverse applicants.


About 30 per cent of Australian businesses are already using AI hiring tools, with their adoption only set to increase in the coming years. These systems rely on algorithms to sift through CVs, assess candidate responses and rank suitability while promising efficiency. But experts say they could also lock some people out of the workforce.

Discriminatory AI hiring practices can affect women, people with disabilities, and culturally diverse candidates. Picture: Supplied

Who is at risk?

The study, first reported by The Guardian, found that AI tools often penalise candidates for employment gaps, which disproportionately impacts women who take time off for maternal or medical reasons.

It also found that speech-to-text tools used in video interviews struggle with non-native English speakers, with error rates for Chinese non-native English speakers spiking to a staggering 22 per cent, compared to less than 10 per cent for US native English speakers.

“The training data will come from the country where they’re built – a lot of them are built in the US, so they don’t reflect the demographic groups we have in Australia,” Dr Sheard told the publication.

AI hiring tools can unintentionally reinforce biases from historical data. Picture: Supplied

“[AI hiring systems] promise time and cost savings for employers, improved quality of hires, and a superior candidate experience. But they may also enable, reinforce, and amplify discrimination against historically marginalised groups,” the report reads.

“They have been found to discriminate against applicants who wear a headscarf or have a Black-sounding name, and when the system is unable to accommodate requests for reasonable adjustments to enable access by people with disability.

“In the most well-known example, an AI system developed by Amazon learned to downgrade the applications of job seekers who used the word ‘women’s’ in their CVs.”

Biased algorithms

Some algorithms rank candidates based on “ideal” traits, like an uninterrupted career path. Others use poorly designed filters to screen applicants, such as searching for specific universities or minimum GPAs — criteria that often act as proxies for social class, age, or gender.
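The kind of filter described above can be sketched as a hypothetical example (the thresholds, university shortlist, and candidate records below are invented for illustration, not drawn from any real vendor's system):

```python
# Hypothetical illustration of a naive automated screening filter.
# A hard rule against career gaps excludes an otherwise identical
# candidate who took parental leave — the proxy effect the study describes.

TARGET_UNIS = {"University A", "University B"}  # made-up shortlist

def passes_screen(candidate: dict) -> bool:
    """Return True if the candidate survives the automated filter."""
    return (
        candidate["gpa"] >= 3.5
        and candidate["university"] in TARGET_UNIS
        and candidate["career_gap_months"] == 0  # penalises any gap at all
    )

candidates = [
    {"name": "A", "gpa": 3.8, "university": "University A",
     "career_gap_months": 0},
    # Equally qualified, but took 12 months of parental leave:
    {"name": "B", "gpa": 3.8, "university": "University A",
     "career_gap_months": 12},
]

shortlist = [c["name"] for c in candidates if passes_screen(c)]
print(shortlist)  # only candidate "A" survives; the gap alone excludes "B"
```

The two candidates differ only in the gap field, so the rule never "sees" gender or caring responsibilities, yet it filters on a variable strongly correlated with them — which is exactly how facially neutral criteria can produce discriminatory outcomes.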

AI speech-to-text tools have error rates as high as 22 per cent for Chinese non-native English speakers. Picture: Supplied

The report found this reliance on historical data becomes problematic when it reflects discriminatory practices, such as preferring male-dominated work patterns or penalising certain linguistic styles.

Call for transparency

Dr Sheard warned these errors could have a devastating impact on affected groups.

“Distinct risks of discrimination emanate from the use by employers of AHSs [AI hiring systems],” the report reads.

“A discriminatory AHS can cause harm at unprecedented speed and scale, and has the capacity – as one research participant explained – to ‘systematically lock … [disadvantaged groups] out of the workforce’.”

With 42 per cent of global companies already relying on these tools, the study called for urgent regulatory measures to ensure AI hiring systems are transparent, fair, and inclusive.

Dr Sheard highlighted the lack of transparency as a key issue with AI hiring systems.

“This is the problem. In a human process, you can go back to the recruiter and ask for feedback, but what I found is recruiters don’t even know why the decisions have been made, so they can’t give feedback,” she said. “That’s a problem for job seekers … It’s really hard to pick where liability lies, but absolutely vendors and employers are legally liable for any discrimination by these systems.”

Originally published as New study finds AI hiring systems may be discriminating

Original URL: https://www.ntnews.com.au/technology/new-study-finds-ai-hiring-systems-may-be-discriminating/news-story/9db27239579e261c976ad32df044b685