
The Algorithm by Hilke Schellmann — why AI really is coming for your job


AI alarmists warn that machine learning will end up destroying humanity — or at the very least make humans redundant. But what if the real worry was more mundane — that AI tools simply do a bad job?

That is what Hilke Schellmann, a reporter and professor at New York University, felt after spending five years investigating tools that are now widely used by employers in hiring, firing and management. Bots increasingly dictate which job ads we see online, which CVs recruiters read, which applicants make it to a final interview and which employees receive a promotion, bonus — or redundancy notice. But in this world where algorithms “define who we are, where we excel, and where we struggle . . . what if the algorithms get it wrong?” asks Schellmann in The Algorithm, an account of her findings.

Recruiters and managers have many reasons to turn to AI: to sift impossibly large piles of CVs and fill posts faster; to help them spot talented people, even when they come from an atypical background; to make fairer decisions, stripping out human bias; or to track performance and identify problem staff.

But Schellmann’s experience suggests many of the systems on the market may do more harm than good. For example, she tests video interviewing software that finds her to be a close match for a role, even when she replaces her original, plausible answers with the parroted phrase “I love teamwork” or speaks entirely in German.

She talks to experts who have audited CV screening tools for potential bias — and found them liable to filter out candidates from certain postcodes, a recipe for racial discrimination; to favour particular nationalities; or to view a liking for male-dominated pursuits such as baseball as a marker of success. Then there are the cases of high performers selected for redundancy or automatically cut out of the running for jobs they were qualified for, purely because they had done poorly in apparently irrelevant online games used to score candidates.

After playing some, Schellmann is sceptical that high-speed, pattern-matching games or personality tests will help recruiters identify the people most likely to fail or excel in a role. The games would also be even harder for anyone who was distracted by children, or had a disability the software did not recognise.

But many of the problems Schellmann finds are not intrinsically about the use of AI. Developers cannot design good recruitment tests if recruiters do not understand why some hires work out better than others. If a system is designed primarily to fill a vacancy fast, it will not pick the best candidate.

Schellmann finds that, unless developers intervene, job platforms serve more adverts to the candidates (often men) who are most aggressive in replying to recruiters and applying for senior positions regardless of experience. Problems also arise because managers rely blindly on tools that were meant only to inform human judgment, sometimes under the mistaken belief that doing so will protect them against legal challenge.

Machine learning can amplify existing bias in ways that are hard to spot, even when developers are on the alert. Algorithms identify patterns among people who have done well or badly in the past, without any capacity to understand whether the characteristics they pick up on are significant. And when the algorithms get it wrong — sometimes on a grand scale — it can be incredibly difficult for individuals to find out why, seek redress or even find a human to speak to at all.

Possibly the most useful section of Schellmann’s book is an appendix giving tips for jobseekers (use bullet points and avoid ampersands in your CV to make it machine-readable) and people whose employer is watching them (keep emails upbeat). But she also has suggestions for regulators, on how to make sure AI tools are tested before they come to market.

At minimum, lawmakers could mandate transparency on the data used to train AI models, and technical reports on their efficacy, she argues. Ideally, government agencies would themselves vet tools used in sensitive areas such as policing, credit rating or workplace surveillance.

In the absence of such reform, Schellmann’s book is a cautionary tale for anyone who thought AI would take human bias out of hiring — and an essential handbook for job hunters.

The Algorithm: How AI Can Hijack Your Career and Steal Your Future by Hilke Schellmann, Hurst £22/Hachette Books $30, 336 pages

Join our online book group on Facebook at FT Books Café and subscribe to our podcast Life and Art wherever you listen

 

