AI use for hiring, firing employees ‘problematic,’ NYU professor says

Journalist Hilke Schellmann discusses the impact of artificial intelligence (AI) on hiring, monitoring, promotion and firing in her book “The Algorithm.”

In her book, Schellmann delves into the accountability of AI and investigates its increasing presence in the workplace.

Schellmann, an Emmy Award-winning investigative reporter and assistant professor of journalism at New York University, examines the impact of AI on workplace decisions and cautions hiring managers to be more skeptical when using the technology.

She sat down with ABC News Live to discuss her new book.

ABC NEWS LIVE: It should come as no surprise that companies, large and small, are using artificial intelligence platforms and software to help them make decisions that were once the domain of HR departments. Joining us in a moment is Emmy-winning investigative reporter and professor of journalism at NYU, Hilke Schellmann.

Her new book, “The Algorithm: How AI Decides Who Gets Hired, Monitored, Promoted and Fired [And Why We Need to Fight Back Now],” investigates these seemingly automated tools that could potentially be weeding out current and potential employees based on some questionable criteria. So welcome Hilke. Your book, “The Algorithm,” delves into the ways that AI is impacting the lifecycle of employees. This will no doubt raise a lot of eyebrows here.

So if you could explain the tools employers are using and why do they warrant investigation?

HILKE SCHELLMANN: Yeah. So I think we see that most Fortune 500 companies use AI somewhere in the hiring pipeline. We see it a lot with résumé screeners. So if you upload your application to any of the big job platforms, or you apply on an employer’s website directly, often there’s a résumé parser that will sort the résumés into a yes pile and a no pile.

And then, and that obviously came out of the pandemic, we see a lot of one-way video interviews, where folks get prerecorded questions and there’s no one on the other side. And then we also see video games that job seekers are asked to play, to sort of understand what their capabilities are, what their personalities are. You know, are they agile? Are they quick to learn? All of those things that companies want to know.

And, you know, this is all coming from a place where companies now get inundated with millions and millions of applications, so they feel they need a technological solution. I’ve heard from lots of employment lawyers who said, ‘Oh, yeah, we found gender discrimination in that tool. And we told the company not to use it, but the startup is still around.’ So, yeah, there are a lot of tools that, unfortunately, do more harm than good.

ABC NEWS LIVE: And we also wanted to ask you about some eye-opening tests that you conducted on a specific tool claiming to assess personality and job suitability based on voice samples. So walk us through that experiment and the surprising results.

SCHELLMANN: Yeah. So in this case, it was a one-way video interview, where you get a bunch of questions and you’re asked to answer them and record yourself. So I always think about folks who maybe have an accent, or who have a speech disability. What happens with their audio?

When their audio gets transcribed into text, will the AI tool examine them fairly? So I thought, let’s give it a little test. So I spoke to one of the tools in German; I read an entry on Wikipedia about psychometrics. And I was surprised: I got an email back saying, ‘Oh, you are 73% qualified for this job.’ And I was like, I didn’t even say a word in English. And when I looked at the transcription, it was just gibberish. I think that really worries me, because these are high-stakes decisions. It matters who gets a job and why.

ABC NEWS LIVE: Right. Very clearly not a perfect science just yet. Now, you spent five years on this book. You talked to job applicants, employers and whistleblowers. So what steps will need to be taken to ensure the responsible and ethical use of AI in the field of human resources?

SCHELLMANN: Yeah. So I feel there should be a lot of skeptical questions first, for the developers. We need to have explainability. Why was somebody rejected? Why was somebody put in the next round? And often even the developers of the tools do not know, because these are unsupervised AI models. So I find that very problematic. We have to have explainability and transparency, so that if somebody is in front of a judge, they can explain why somebody made it into the next round.

ABC NEWS LIVE: Hilke Schellmann. This is a fascinating topic that we will be grappling with, likely for years to come. Thank you so much for your time today.

SCHELLMANN: Yeah, thank you for having me.
