Hilke Schellmann’s The Algorithm – why AI is actually taking over your job

AI alarmists warn that machine learning will destroy humanity, or at least make it obsolete. But what if the real concern were more banal – that AI tools simply do a poor job?

That is the conclusion Hilke Schellmann, a reporter and professor at New York University, reached after spending five years studying the tools now widely used by employers in hiring, firing and management. Bots increasingly determine which job advertisements we see online, which resumes recruiters read, which applicants make it to a final interview, and which employees receive a promotion, bonus, or termination. But in a world where algorithms “define who we are, where we excel, and where we struggle . . . what if the algorithms get something wrong?” asks Schellmann in The Algorithm, a report on her findings.

Recruiters and managers have many reasons to turn to AI: to sift through enormous stacks of resumes and fill positions faster; to help them recognize talented people, even if they come from an atypical background; to make fairer decisions and eliminate human biases; or to track performance and identify problem employees.

But Schellmann's experience suggests that many of the systems on the market may be doing more harm than good. For example, she tests video interview software that deems her a good fit for a job even when she replaces her original, plausible answers with the parroted phrase “I love teamwork” or speaks entirely in German.

She talks to experts who have reviewed resume-screening tools for potential bias and found that they tend to filter out candidates from certain zip codes (a recipe for racial discrimination), to favor certain nationalities, or to treat a preference for male-dominated pursuits such as baseball as a marker of success. Then there are the cases where top performers are fired, or automatically disqualified from jobs for which they were well suited, simply because they performed poorly in seemingly irrelevant online candidate assessment games.

After several attempts, Schellmann is skeptical that high-speed pattern-matching games or personality tests can help recruiters identify the people most likely to fail or excel at a job. The games would be harder still for anyone who is distracted by children or has a disability the software does not recognize.

But many of the problems Schellmann finds are not necessarily related to the use of AI. Developers cannot design good recruitment tests if recruiters do not understand why some hires work out better than others. If a system is designed primarily to fill a vacancy quickly, it will not select the best candidate.

Schellmann notes that unless developers intervene, job platforms will serve more ads to the candidates (often men) who respond most aggressively to recruiters and apply for management positions regardless of their experience. Problems also arise when managers rely blindly on tools designed only to augment human judgment, sometimes in the mistaken belief that this will protect them from legal challenges.

Machine learning can reinforce existing biases in ways that are difficult for even vigilant developers to detect. Algorithms spot patterns among people who have done well or poorly in the past, without being able to understand whether the characteristics they pick up on are meaningful. And when the algorithms get something wrong, sometimes on a large scale, it can be incredibly difficult for people to work out why, to seek redress, or even to find a human to talk to.

Perhaps the most useful section of Schellmann's book is an appendix with tips for job seekers (use bullet points and avoid ampersands in your resume to make it machine-readable) and for people whose employers are keeping tabs on them (stay on top of your email). But she also has suggestions for regulators on how they can ensure AI tools are tested before they come to market.

She argues that lawmakers could at least require transparency about the data used to train AI models, along with technical reports on their effectiveness. Ideally, government agencies would themselves review the tools used in sensitive areas such as politics, creditworthiness or workplace surveillance.

In the absence of such reform, Schellmann's book is a cautionary tale for anyone who thought AI would eliminate human bias in hiring – and an important handbook for job seekers.

The Algorithm: How AI Can Hijack Your Career and Steal Your Future by Hilke Schellmann
