On April 25, 2023, the Consumer Financial Protection Bureau (CFPB), the Department of Justice (DOJ) Civil Rights Division, the Equal Employment Opportunity Commission (EEOC), and the Federal Trade Commission (FTC) issued a joint statement regarding potential discrimination and bias in the use of automated systems, including Artificial Intelligence (AI). The joint statement confirms that existing legal authorities apply to the use of automated systems and innovative new technologies, just as they apply to other practices. The joint statement includes previously published information from each agency expressing concern about potentially harmful uses of automated systems.
The joint statement used the term “automated systems” broadly to include “software and algorithmic processes, including AI, that are used to automate workflows and help people complete tasks or make decisions.” While the joint statement does note that such tools may provide insights, increase efficiencies, and modernize existing practices, “their use also has the potential to perpetuate unlawful bias, automate unlawful discrimination, and produce other harmful outcomes.”
The joint statement identifies three sources from which potential discrimination in automated systems may arise:
- Data and Datasets: Outcomes from automated systems may be skewed by “unrepresentative or imbalanced datasets, datasets that incorporate historical bias, or datasets that contain other types of errors.” Additionally, discriminatory outcomes could result from a system correlating data with protected classes.
- Model Opacity and Access: Automated systems can be “black boxes,” and this lack of transparency makes it difficult to know whether a system is fair.
- Design and Use: Developers may not understand all contexts in which the system will be used and, therefore, the system may be based on flawed assumptions regarding its use.
The Four Agencies’ Positions
The CFPB supervises, sets rules for, and enforces numerous consumer financial laws, and protects consumers in the financial marketplace from unfair, deceptive, or abusive acts or practices and from discrimination. In March 2023, the CFPB published a circular confirming that federal consumer financial laws apply regardless of the technology used.
The Civil Rights Division of the DOJ enforces constitutional provisions and federal statutes prohibiting discrimination in education, the criminal justice system, employment, housing, lending, and voting. The joint statement notes that the DOJ filed a statement of interest in federal court explaining that the Fair Housing Act applies to algorithm-based tenant screening services.
The EEOC enforces laws that make it illegal for an employer, union, or employment agency to discriminate against an applicant or employee due to an individual’s race, color, religion, sex (including pregnancy, gender identity, and sexual orientation), national origin, age (40 or older), disability, or genetic information (including family medical history). In May 2022, the EEOC issued a technical assistance document explaining how the Americans with Disabilities Act applies to the use of software, algorithms, and AI to make employment-related decisions.
The FTC protects consumers from deceptive or unfair business practices and unfair methods of competition across most sectors of the economy by enforcing the FTC Act and numerous other laws and regulations. In June 2022, the FTC issued a report outlining significant concerns that AI tools can be inaccurate, biased, and discriminatory by design. Additionally, the FTC warned in early 2023 that it may violate the FTC Act to use automated tools that have discriminatory impacts, to make unsubstantiated claims about AI, or to deploy AI before taking steps to assess and mitigate risks.
The joint statement ends with the pledge of the four agencies “to vigorously use our collective authorities to protect individuals’ rights regardless of whether legal violations occur through traditional means or advanced technologies.”
Action Steps for Employers
These four federal agencies have clearly made automated systems and AI a priority enforcement area. Additionally, a growing number of states and cities are beginning to regulate AI and algorithm-based decision-making tools. Therefore, if you are currently using any automated systems to make employment decisions (involving applicants or employees), now is the time to review those systems or tools and ensure they do not run afoul of current employment law. And if you are considering implementing any automated systems for employment decisions, review them with an attorney first. Stall Legal is here to assist with an evaluation or answer any questions you have regarding this topic.