California Civil Rights Department Publishes New Regulations to Prevent Discrimination From the Use of AI Tools
Artificial intelligence (AI) and other automated decision systems (ADS) have a growing role in public sector hiring. Resume screeners, video interview platforms, and other algorithmic tools promise efficiency, but they also create legal exposure.
On October 1, 2025, new regulations under California’s Fair Employment and Housing Act (FEHA) took effect. They add a regulation defining key terms (2 Cal. Code Regs. section 11008.1) and revise several existing regulations, clarifying how FEHA applies to AI and ADS in employment decisions. The regulations aim to prevent discrimination in hiring and promotion based on protected characteristics such as race, gender, age, disability, religion, and other categories, and they extend that protection to any AI or ADS tool used in recruiting, testing, evaluating, or promoting employees. In short, employers must treat automated tools the same way they treat human decision-makers.
Key Provisions:
- Disparate Impact Counts: Even when bias is unintentional, agencies can face liability if an automated system disproportionately excludes applicants from a protected group.
- Examples of Risk: Tools that rank candidates by schedule availability, measure reaction time, or evaluate facial expressions or speech patterns in video interviews may disadvantage applicants with disabilities, religious commitments, or language differences.
- Pre-employment Inquiries: FEHA limits what an employer can ask before hiring, and those limits apply equally to inquiries made by or through automated systems.
- Liability Extends to Agents: When a vendor or recruitment partner uses a discriminatory algorithm on an agency’s behalf, the agency remains responsible under FEHA.
- Recordkeeping Required: Agencies must retain records of ADS use for at least four years, including data inputs, selection criteria, and employment outcomes.
- Bias Testing Encouraged: Although the regulations do not mandate bias testing, the Civil Rights Council encourages agencies to conduct self-audits and fairness evaluations. The timing, scope, and quality of these efforts can support a defense if a discrimination claim arises; one common check is sketched after this list.
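One widely used yardstick for such self-audits is the four-fifths rule from the EEOC's Uniform Guidelines: a selection rate for a protected group below 80 percent of the highest group's rate is generally treated as evidence of adverse impact. The Python sketch below shows the arithmetic; the group labels and counts are hypothetical, and the regulations do not prescribe this or any other particular test.

```python
def selection_rate(selected: int, applicants: int) -> float:
    """Fraction of a group's applicants that the automated tool advanced."""
    return selected / applicants

# Hypothetical screening outcomes from an automated resume screener.
outcomes = {
    "group_a": {"applicants": 200, "selected": 120},  # rate 60%
    "group_b": {"applicants": 150, "selected": 60},   # rate 40%
}

rates = {group: selection_rate(o["selected"], o["applicants"])
         for group, o in outcomes.items()}
highest_rate = max(rates.values())

for group, rate in rates.items():
    impact_ratio = rate / highest_rate
    flag = "review for adverse impact" if impact_ratio < 0.8 else "ok"
    print(f"{group}: selection rate {rate:.0%}, "
          f"impact ratio {impact_ratio:.2f} -> {flag}")
```

Here group_b's impact ratio is 0.67, well under the 0.8 threshold, so an agency would want to investigate the tool before relying on its rankings. Running and documenting checks like this periodically builds exactly the kind of record the Council suggests can support a defense.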
Steps Toward Compliance:
Public agencies can continue to use AI and automated tools under the new regulations, but they must manage those systems carefully to maintain FEHA compliance.
- Inventory and Assess AI Tools: Identify every automated system involved in recruitment, hiring, promotions, and employment decisions. Determine whether each tool directly or indirectly screens or ranks applicants.
- Audit for Bias: Test each system for disparate impact on protected groups. Request documentation from vendors showing validation studies and fairness testing.
- Update Policies and Vendor Contracts: Require vendors to certify compliance with FEHA. Include shared responsibility and indemnification clauses in contracts. Specify that human review will supplement any automated recommendations or scores.
- Strengthen Recordkeeping: Maintain ADS-related data, selection criteria, and decision records for at least four years. Document all compliance activities to create a clear record of diligence; one way to structure such records is sketched after this list.
- Train HR and Hiring Staff: Educate staff about the capabilities and limitations of AI tools. Train them to identify potential bias and to exercise independent judgment when reviewing automated results.
- Ensure Transparency and Accessibility: Provide accessible hiring processes for applicants with disabilities. Offer reasonable accommodations or alternative methods for completing applications or assessments when needed, including for religious observances.
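To illustrate the recordkeeping step above, the sketch below captures one ADS decision as an append-only JSON line. The field names, values, and file path are illustrative assumptions; the regulations require retaining this kind of information for at least four years but do not prescribe any format.

```python
import json
from dataclasses import dataclass, asdict
from datetime import date

RETENTION_YEARS = 4  # FEHA regulations: retain ADS records at least four years

@dataclass
class ADSRecord:
    """One automated-decision event, kept for the retention window."""
    tool_name: str           # e.g., a vendor resume screener (hypothetical)
    decision_date: str       # ISO date of the employment decision
    data_inputs: list[str]   # categories of applicant data the tool consumed
    selection_criteria: str  # what the tool scored or ranked on
    outcome: str             # e.g., advanced, rejected, referred to human review

record = ADSRecord(
    tool_name="resume_screener_v2",
    decision_date=date.today().isoformat(),
    data_inputs=["resume text", "schedule availability"],
    selection_criteria="keyword match score >= 70",
    outcome="referred to human review",
)

# Append to a durable log so inputs, criteria, and outcomes survive audits.
with open("ads_audit_log.jsonl", "a", encoding="utf-8") as log:
    log.write(json.dumps(asdict(record)) + "\n")
```

A human reviewer's final decision, and any accommodation offered, could be logged the same way, tying each automated recommendation to the independent judgment that followed it.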