Published on 22/05/2025 | Written by Heather Wright

Discrimination and beyond…
A University of Melbourne academic is warning that there is a serious risk that AI in recruitment will breathe new life into old biases and lead to harm ‘at unprecedented speed and scale’.
Use of predictive AI hiring systems (AHS), which analyse candidate profiles and historical data to predict a candidate's potential success in a role, is becoming increasingly widespread. The technology is being used to screen CVs, conduct assessments evaluating personality, behaviour and/or abilities, provide interview analytics – should a candidate get that far – and in some cases to automate initial interactions through chatbots. ‘Robo-interviews’, where candidates self-record answers to questions from an AI program, are also being used. (And if you need a warning on that, check out the proliferation of videos on TikTok of alleged AI interview fails.)
On the flip side, job seekers are tapping into generative AI to create resumes and cover letters – a dystopian world of machines talking to machines, with AI writing job descriptions and then screening candidates whose CVs are AI-written.
But Natalie Sheard, an AI + discrimination researcher and lawyer, and McKenzie postdoctoral fellow at the University of Melbourne, says while the systems promise time and cost savings, they may also enable, reinforce and deepen discrimination against already marginalised people in the labour market.
Sixty-two percent of Australian organisations used AI in recruitment, either moderately or extensively, in 2024, according to the Responsible AI Index 2024, while a survey released earlier this year by AI hiring platform HireVue claimed AI adoption among HR professionals had surged to 72 percent this year, from 58 percent last year. The deluge of AI-generated content from candidates is also prompting increased demand for skills assessments.
Sheard says her research investigating the use of AHSs by Australian companies found the way the systems were used in practice created a serious risk of discrimination, with data used to train the systems embedding present-day and historical discrimination, while systems developed overseas may not reflect the diversity of people in Australia.
One system featured in Sheard’s research has just six percent of its training data from Australia or New Zealand.
She says urgent action is required by government to review and reform discrimination laws to address AI discrimination, with greater transparency also needed around the workings of AI systems – and the training data used.
“Employers also need a better understanding of the AHSs rolled out in their organisations and their potential to cause harm at scale,” she says.
“The discovery in this research of significant risks to equality rights when employers use AHSs raises the question: Should these systems be used at all?”
Concerns over discrimination have been an ongoing theme with AI generally, and with AI in recruitment. But the issues around AI in recruitment extend beyond discrimination against marginalised groups, with questions over whether AI actually picks the most qualified candidates at all.
AI might be able to screen resumes and match keywords quickly, fast-tracking someone whose resume has, say, ‘accounts payable’, but when it comes to assessing candidates’ soft skills and areas such as empathy, creativity, enthusiasm and temperament, along with cultural fit and interpersonal qualities, AI falls short.
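To see why keyword matching falls short, here is a minimal illustrative sketch of a naive keyword screener of the sort described above – not any real vendor's system; the keywords, weights and resume snippets are hypothetical:

```python
# Illustrative only: a naive keyword-based resume screener.
# Keywords, weights, and resume text below are invented for the example.
def score_resume(text: str, keywords: dict[str, int]) -> int:
    """Sum the weights of each keyword found in the resume text."""
    text = text.lower()
    return sum(weight for kw, weight in keywords.items() if kw in text)

keywords = {"accounts payable": 3, "reconciliation": 2, "excel": 1}

resumes = {
    "candidate_a": "Managed accounts payable and monthly reconciliation in Excel.",
    "candidate_b": "Creative problem-solver who rebuilt a struggling finance team.",
}

# Rank candidates purely by keyword score: soft skills contribute nothing.
ranked = sorted(resumes, key=lambda c: score_resume(resumes[c], keywords),
                reverse=True)
print(ranked)  # candidate_a outranks candidate_b
```

Candidate B's creativity and leadership simply never register, because nothing in the scoring function looks for them – the limitation the article is pointing to.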
With their reliance on historical data and trends, these systems can easily skip over the more unique and innovative qualities a candidate might bring – at a time when companies increasingly say they’re looking for creativity, adaptability and other soft skills.
Tools which analyse body language and vocal tone can misinterpret cues and potentially rule out highly qualified candidates.
And on the candidate side, the growing use of AI to ‘help’ write applications can see those who are good at using the technology engineering the process to rank well – with or without the skills actually required for the job.
James Robinson, managing director at a Cardiff-based media planning and buying agency, took to LinkedIn earlier this year to discuss some of the fails he’s seen, including ‘copy and pasters’ who address the cover letter to the wrong business and ‘AI overloaders’ who forget to remove prompts such as [insert company name here].
His post prompted plenty of responses from others who had experienced the same.
“I’m all for tech making life easier, but if AI is getting candidates interviews that they’re not ready for, it wastes all of our time and energy,” Robinson says.
Many highly qualified job candidates, confused by their inability to land the dream job, may also be thinking the same thing.