Four Agencies Pledge to Combat Automated Systems’ Discrimination and Bias Potential

S.J. Steinhardt
Published Date: Oct 19, 2023

Four federal agencies have announced a coordinated effort to prevent discrimination and bias in automated systems and artificial intelligence, the AP reported.

“Today, the use of automated systems, including those sometimes marketed as 'artificial intelligence' or 'AI,' is becoming increasingly common in our daily lives,” said Rohit Chopra, director of the Consumer Financial Protection Bureau (CFPB); Kristen Clarke, assistant attorney general for the U.S. Department of Justice’s Civil Rights Division; Charlotte A. Burrows, chair of the Equal Employment Opportunity Commission (EEOC); and Lina M. Khan, chair of the Federal Trade Commission (FTC) in a joint statement. “Although many of these tools offer the promise of advancement, their use also has the potential to perpetuate unlawful bias, automate unlawful discrimination, and produce other harmful outcomes.”

In particular, they noted, “automated systems” are used “to make critical decisions that impact individuals’ rights and opportunities, including fair and equal access to a job, housing, credit opportunities, and other goods and services.”

The statement was applauded by Ben Winters, senior counsel for the Electronic Privacy Information Center.

“There’s this narrative that AI is entirely unregulated, which is not really true,” he told the AP. “They’re saying, ‘Just because you use AI to make a decision, that doesn’t mean you’re exempt from responsibility regarding the impacts of that decision. This is our opinion on this. We’re watching.’”

The statement said that automated systems may contribute to unlawful discrimination and otherwise violate federal law. It cited three potential sources of such discrimination in automated systems: data and datasets; model opacity and access; and design and use.

“Today, our agencies reiterate our resolve to monitor the development and use of automated systems and promote responsible innovation,” the four officials stated. “We also pledge to vigorously use our collective authorities to protect individuals’ rights regardless of whether legal violations occur through traditional means or advanced technologies.”
