States Creating Laws to Counter AI Bias in Hiring 

By: Karen Sibayan
Published Date: Nov 12, 2024

Bloomberg Law reported that state lawmakers drafting statutes to counter artificial intelligence (AI)-based bias in employment decisions are struggling to understand how private companies use AI tools.

Colorado and Illinois, where legislators passed laws in 2024 regulating AI use in hiring decisions, and states such as Texas and California, which are considering similar measures, mandate differing degrees of disclosure. According to Bloomberg Law, only one US jurisdiction, New York City, has required firms to publicly post bias audits of their AI systems. Still, its strict definition of automated decisions has let most companies determine that the law doesn’t apply to them.

According to Bloomberg Law, crafting audit requirements that effectively prevent bias has been difficult. It is also hard to get policymakers to pass them, as tech companies push back against strict regulation, which makes more straightforward transparency measures easier to attain.

“We’ve been pushing for audits to be a part of it. That’s the preferred approach. There doesn’t seem to be a lot of appetite for that in the states, which is unfortunate,” noted Matt Scherer, senior policy counsel at the Center for Democracy & Technology. “The first thing that needs to happen is transparency. We need to know which companies are using which tools.” 

According to Bloomberg Law, Colorado’s and Illinois’s laws mandate that companies notify job candidates and staff when using AI in hiring. However, these laws do not take effect until 2026 and are first subject to state agency rulemaking.

"The devil’s in the details,” said Tracey Diamond, an employment attorney with Troutman Pepper Hamilton Sanders LLP. For instance, she asks when companies purchase a third-party tool, how much do they know about its inner machinations? 

Colorado’s law, enacted in 2024, might be the country’s broadest effort to stop discriminatory AI use, but employee advocates, including Scherer, say it is inadequate.

It allows firms to conduct in-house impact assessments of their AI use in hiring instead of requiring independent third-party audits, as New York City does. Other proposals, such as the Texas draft bill and California’s AB 2930, take the same approach as Colorado.

“I don’t have a great deal of faith that companies are going to do rigorous impact assessments in-house,” Scherer noted. “When you don’t have independence, there is a strong urge to find the answers to the questions that you want to find.” 

Additionally, the Colorado law does not require firms to publish a copy or summary of the impact assessments online. The attorney general’s (AG) office can request them, but those documents are shielded from public open-records requests.

The AG will enforce Colorado’s law, which does not give companies or job applicants a private right of action to sue even if they discover violations.

Even where audits are published, as some New York City companies have done since the city’s law took effect in 2023, it is not clear how those results could pressure firms or technology developers to address signs of bias.

For example, an audit under the city’s law of HireVue tools used by JetBlue Airways Corp. and other companies found that some race-plus-gender categories received favorable ratings less than 80% as often as the highest-scoring groups on particular assessments. Federal regulators use this threshold to flag the possible “disparate impact” of a hiring practice on groups of applicants protected under civil rights law.

The audit’s metrics are aggregated across the various firms using the same HireVue tools, and results below the 80% line do not necessarily mean the technology has to be revised, said Lindsey Zuloaga, HireVue’s chief data scientist.
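
For readers unfamiliar with the arithmetic behind that threshold, the sketch below illustrates the federal “four-fifths rule” check such audits report against: each group’s favorable-outcome rate is divided by the highest group’s rate, and ratios below 0.8 are flagged. The group names and rates here are invented for illustration, not drawn from any actual audit.

```python
# Hypothetical sketch of the federal "four-fifths" (80%) disparate-impact
# check. Group names and favorable-outcome rates are invented examples.

def impact_ratios(favorable_rates: dict[str, float]) -> dict[str, float]:
    """Divide each group's favorable-outcome rate by the highest group's rate."""
    top_rate = max(favorable_rates.values())
    return {group: rate / top_rate for group, rate in favorable_rates.items()}

# Illustrative rates aggregated across employers using the same tool.
rates = {"Group A": 0.62, "Group B": 0.55, "Group C": 0.44}

for group, ratio in impact_ratios(rates).items():
    status = "below 80% threshold" if ratio < 0.8 else "ok"
    print(f"{group}: impact ratio {ratio:.2f} ({status})")
```

As the article notes, a ratio below 0.8 is an indicator of possible disparate impact, not proof that a tool must be revised.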

Even though audits under the New York City law can hint at a disparate impact, they do not indicate the demographics of candidates hired or rejected, only how candidates scored on specific tests, noted Adam T. Klein, managing partner at Outten & Golden LLP, who represents workers in employment law cases. That makes it hard to identify possible bias victims and bring a claim.

“If a large employer said, ‘Hey, we’re violating Title VII, and here’s the report that goes with it,’ cool, I’d like to see that, and I’d find a way to make that case,” Klein said. “So far, I haven’t seen a single example of an employer reporting anything under Local Law 144 that has been useful.” 

According to Bloomberg Law, JetBlue did not respond to requests for comment, and auditor DCI Consulting Group declined to comment.

Better disclosure to job applicants might be the most achievable near-term avenue for regulating AI decision tools.

While Colorado’s law and the California and Texas proposals fold transparency mandates into broader requirements, Illinois lawmakers took a transparency-only approach: their law requires employers to notify employees and job candidates of AI use, without audit rules.

“Where I see a lot of this state and local regulation going is requiring a statement when you’re using AI that these are the things that the AI is looking for,” stated Mark J. Girouard, an employment attorney at Nilan Johnson Lewis PA. “It puts employers through their paces” to explain how systems work. 

The details required will make a major difference in how transparent firms are. Illinois’s law requires a notice, although it leaves the specifics to rulemaking by the state’s human rights agency. Colorado’s law also requires firms to explain why unsuccessful candidates were rejected and how the AI tool evaluated them, and to give them a chance to appeal, Bloomberg Law reported.

Scherer said states must mandate detailed and actionable explanations, similar to the federal Fair Credit Reporting Act, which requires lenders to tell rejected credit applicants the specific factors in their credit history that prevented approval, so they can correct errors and improve their chances of future approval.
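
As a rough illustration of what such an actionable explanation might look like for an AI hiring tool, the hypothetical sketch below surfaces the factors that counted most against a rejected candidate. The factor names and contribution scores are invented, not taken from any actual tool or regulation.

```python
# Hypothetical sketch of an FCRA-style adverse-action notice for an AI
# hiring tool: list the specific factors that most hurt the outcome.
# Factor names and contribution scores are invented for illustration.

def adverse_action_reasons(contributions: dict[str, float], top_n: int = 3) -> list[str]:
    """Return the top_n factors with the most negative contribution."""
    worst = sorted(contributions.items(), key=lambda item: item[1])[:top_n]
    return [f"{name}: contribution {score:+.2f}" for name, score in worst]

# Illustrative per-factor contributions for one rejected candidate.
candidate = {
    "years_of_experience": -0.40,
    "assessment_score": -0.15,
    "certifications": +0.10,
    "interview_rating": -0.05,
}

for reason in adverse_action_reasons(candidate):
    print(reason)
```

A notice built this way would give a rejected applicant something concrete to correct or contest, which is the kind of specificity Scherer argues state laws should require.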