The rise of hire intelligence
A Swedish firm claims its recruitment robot will do away with discrimination and bias in the hiring process.
Job interviews are rarely enjoyable, but the one I had the other Friday morning was more agonising than most.
First, it took place in the reception area of a business hotel next to London’s Billingsgate Market, which meant having to endure not only eavesdropping strangers but also the deep stench of fish.
Second, the questions were of the “competency-based” variety (“Can you tell me about a work situation where you had to come up with a solution on your own?”), which aim to solicit revelations but just enervate everyone involved.
The main challenge, however, was my interviewer, who had no discernible scalp, no particular gender, no torso, no pulse, was only 41cm tall and weighed 35kg.
I was having a blind interview for a non-existent job with the world’s first recruitment robot, Tengai. Developed by Furhat Robotics, a Swedish artificial intelligence company, in collaboration with TNG, Sweden’s biggest recruiter, the robot can nod, smile, wink, encourage candidates by conceding that some questions may be difficult and mutter “ah” and “hmm” occasionally, before it transcribes applicants’ answers and analyses them through “diversity and inclusion” software powered by “15 years of experience in unbiased recruitment”.
The aim? To remove, its makers say, the conscious and unconscious bias inherent in the human recruitment process. The kind of discrimination that leads, they say, to 33 per cent of recruiters deciding who to hire within 90 seconds of meeting a candidate; 55 per cent deciding who not to hire at the first handshake; 40 per cent rejecting a candidate if they do not like the way they smile; 71 per cent rejecting a candidate based on their tattoos; and employers dismissing other candidates for all sorts of reasons including gender, race, education, accent, handshake, tone of voice, lack of eye contact, clothes, style of walking or make-up.
Apparently, Upplands-Bro council in Sweden recently became the first organisation to use Tengai to help it to fill a post, the robot having been programmed to assess soft skills and objectively measure personality traits by always posing the same questions, in the same tone and order.
There are things I really liked about it, beyond the fact that it means fewer human resources people in the world.
Interviews conducted by humans are routinely terrible at finding the right candidate and most jobseekers say they have experienced discrimination or bias during a job interview.
The tech is impressive: it is infinitely easier to talk to Tengai than to have video conversations over FaceTime, WhatsApp or Skype, where you never really know where to look.
Also, artificial intelligence is already used extensively in recruitment to screen CVs and answer frequently asked questions; according to Jobscan, 98 per cent of Fortune 500 companies used a computerised system to select job applicants last year.
I have reservations, however, not least that some AI systems have been shown to exhibit just the kind of bias that the operators of Tengai claim it will eradicate.
Last year Amazon gave up on an AI-driven hiring tool because it was making decisions in favour of male applicants for software developer positions. Its algorithms were based on data from historical success stories, the lion’s share of whom were white blokes, so it gave lower ratings to candidates with female attributes, such as those who had attended all-women’s colleges.
Moreover, a report by the AI Now Institute at New York University, entitled Discriminating Systems: Gender, Race and Power in AI, recently warned of a “diversity disaster” in the making, as a result of tech being so monocultural.
It gave many examples of AI going wrong: facial-recognition software used by Uber that failed to recognise trans drivers; image recognition services making offensive classifications of minorities; sentencing algorithms biased against black defendants; health-management systems that allocate resources to wealthier patients; chatbots that begin to spew racist and sexist language; and technology failing to recognise users with darker skin colours.
More profoundly, there is the question of what happens when the AI itself is biased. Human interviewers have all sorts of flaws, inadequacies and problems, but if one, say, breaks the law and denies a woman a job because he fears that she might be pregnant, there is at least the possibility of legal recourse.
With a robot, accessing and explaining the technology is infinitely more difficult. Tengai’s website describes the traditional job interview as “the mysterious black box of the recruitment process” but AI is arguably even more inscrutable.
There is no doubt that technology is transforming recruitment for the better and will continue to do so. A recent feature in People Management magazine explained how AI could soon mine thousands of data points to suggest shortlists for roles, and how the growth of “big data” could allow AI to infer all sorts of personal characteristics by trawling candidates’ social media posts, online affiliations or location check-ins. Neuroscience-based assessments combined with machine learning, it suggested, could not only further de-bias the recruitment process but help to “predict a better fit”.
Already hi-tech screening of job applications is widening the pool of people who get interviewed for posts: a computer can scan thousands of applications in seconds, whereas a human being might struggle with a couple of hundred.
Frankly, I wish it had been around when I was a young wannabe journalist in Wolverhampton failing to get my foot in the door with London media organisations.
Until the technology involved can be held accountable, however, or somehow be made more transparent, I think I would rather be interviewed for a job by a potentially flawed human being than by a potentially flawed robot.
The Times