
NYC rule requires audits of AI recruitment tech


On Wednesday, New York City took a small step toward regulating the effects of AI technologies on hiring for jobs based in the city. Referred to as Automated Employment Decision Tools (AEDTs), such technologies typically employ AI or algorithms to assign automated rankings or scores to candidates for jobs or promotions. The city's Automated Employment Decision Tool law, also known as Local Law 144, now requires that any employer using such technologies have the tools audited for intentional or accidental bias by a third party. The law is currently in effect and will be enforced starting July 5, 2023, after which noncompliant businesses could face fines ranging from $500 to $1,500 per day, per tool used.

[Related: A simple guide to the expansive world of artificial intelligence.]

This is the first major legislation in the arena of hiring practices and AI. According to Zippia, 65 percent of recruiters use AI in the hiring process in some form, whether sorting through what can be thousands of applications for a single job against specific qualification parameters, scouring social media, or even analyzing candidates' facial expressions and body language during a video interview.

AEDTs are often promoted as a way to manage the hiring process for jobs with a high volume of applicants. “In the age of the internet, it’s a lot easier to apply for a job. And there are tools for candidates to streamline that process. Like ‘give us your resume and we will apply to 400 jobs,’” Cathy O’Neil, the CEO of consulting firm Orcaa, told NBC News. “They get just too many applications. They have to cull the list somehow, so these algorithms do that for them.”

But AI is far from an impartial judge of potential candidates: AI datasets and technology can often perpetuate human biases such as racism, sexism, and ageism.

Critics argue that the law doesn't go far enough. Specifically, an audit is required only when AI is used to "substantially assist or replace discretionary decision making." Alexandra Givens, president of the Center for Democracy & Technology, worries this could be interpreted to mean the law applies only when AI is "the lone or primary factor in a hiring decision or is used to overrule a human," rather than when it is one part of the process, she told the New York Times in May.

“My biggest concern is that this becomes the template nationally when we should be asking much more of our policymakers,” she added. Similar laws are currently under consideration in New Jersey, Maryland, Illinois, and California.

The required audit also doesn't cover age- or disability-based discrimination, Julia Stoyanovich, a computer science professor at New York University and a founding member of the city's Automated Decision Systems Task Force, pointed out to NBC.

[Related on PopSci+: 4 privacy concerns in the age of AI.]

Bias in hiring, whether introduced by humans or AI, has been a glaring issue for decades, and one that has seen only negligible improvement over the years. Whether this law will make a difference remains to be seen.

“I don’t hold out any hope that [the law] will give us any information,” Ben Winters of the Electronic Privacy Information Center told Quartz, “or really allow people to increase the equity around their experience with hiring.”


