The next time you apply for a job, the fate of your application may not be in human hands — instead, an algorithm could determine whether or not you make the cut.
Employers are increasingly turning to artificial intelligence-driven tools to carry out recruitment and hiring. Advocates say AI is the key to rooting out human bias and ending discrimination in the hiring process. But critics warn that AI-driven hiring tools can be just as biased as the humans who train them. Predictive hiring tools are mostly used to weed out candidates deemed unfit for hire, rather than to affirmatively choose who gets a job.
These tools sift through huge numbers of resumes in short time periods, evaluate candidates’ answers to written questions and interactive games, and conduct video interviews with candidates.
But AI experts have raised concerns about the technology's rapid growth. "Technology is rapidly changing how people find jobs, learn about jobs, apply to those jobs, and are evaluated for those jobs," said Aaron Rieke, managing director at the tech equity nonprofit Upturn. "The fear is if that is not done very carefully and thoughtfully, there could be problems down the road."
Just like all algorithms, AI hiring tools are trained by humans, typically using data sets from the real world meant to help AI recognise the features of a “good” or “bad” candidate.
Accordingly, tech experts and activists have warned that AI hiring tools might pick up on preexisting human biases, especially in fields like tech, where progress towards diversity has been slow.
AI hiring tools have the potential for bias even when that is not a company's intention, according to Joy Buolamwini, a computer scientist based at the MIT Media Lab. Buolamwini founded the Algorithmic Justice League, which conducts research and advocacy on AI bias.