Anti-bias must embrace color- and race-blind design in AI tools

3 min read
November 09, 2025

A legal collision is now inevitable. Companies exposed to both state anti-bias mandates and federal anti-DEI enforcement face profound uncertainty.

California and Colorado have been at the center of a growing trend among states to regulate artificial intelligence (AI), with a particular focus on preventing discrimination in high-stakes contexts such as employment, lending, housing, and healthcare. These laws, however, are maturing at a time when the federal government has taken a dramatically different approach, aggressively targeting and penalizing DEI (diversity, equity, and inclusion) efforts that consider protected classes, including efforts advanced through anti-bias AI tools.

California & Colorado: The State Approach

California's Civil Rights Council adopted new regulations (effective October 1, 2025) that apply longstanding anti-discrimination law to any "automated decision system" (ADS) used in employment. Employers with five or more California employees must not use AI tools to discriminate on the basis of race, gender, age, disability, religion, or any other protected class status, and are required to keep detailed records for at least four years documenting ADS use and bias testing. This record-keeping covers both the criteria the ADS applies and the results of any AI analysis. In essence, the regulations expect all California employers using AI to proactively identify, monitor, and mitigate any disparate impact their systems may cause.
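
To make these bias-testing and record-keeping expectations concrete, here is a minimal sketch of one common disparate-impact check: the EEOC "four-fifths" rule applied to per-group selection rates. The function names and the toy data are illustrative assumptions on our part; the regulations themselves do not prescribe any particular test or threshold.

```python
from collections import defaultdict

def selection_rates(decisions):
    """Per-group selection rates from (group, was_selected) pairs."""
    totals, hits = defaultdict(int), defaultdict(int)
    for group, was_selected in decisions:
        totals[group] += 1
        hits[group] += int(was_selected)
    return {g: hits[g] / totals[g] for g in totals}

def four_fifths_check(decisions, threshold=0.8):
    """Flag groups whose selection rate falls below `threshold` times the
    highest group's rate (the EEOC four-fifths rule of thumb)."""
    rates = selection_rates(decisions)
    top = max(rates.values())
    return {g: rate / top >= threshold for g, rate in rates.items()}

# Toy data, fabricated for illustration: (group label, selected?)
decisions = [("A", True), ("A", True), ("A", False),
             ("B", True), ("B", False), ("B", False)]
print(four_fifths_check(decisions))  # {'A': True, 'B': False}
```

Under the California rules as described above, both the inputs and the outputs of a check like this would presumably belong in the four-year retention records.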

Colorado's AI Act (SB 24-205), set to take effect June 30, 2026, takes a similar approach but applies more broadly to any "consequential" decision-making by AI in areas like hiring, healthcare, and housing. The law mandates annual impact assessments for "high-risk" AI, written risk management policies, and consumer notices when AI influences significant decisions. If a company fails to exercise "reasonable care" to avoid algorithmic discrimination, civil penalties may be imposed.

Both state regimes effectively require companies to use anti-bias tools and protocols that surface and remediate group-based disparities, a practice closely aligned with corporate DEI programs.

Federal Anti-DEI Enforcement: The Current Landscape

Starting in 2025, the federal government, under Trump administration policy and DOJ/EEOC guidance, has taken an aggressive stance against DEI programs and tools that treat protected class status as a factor. The administration now defines DEI to include any program or practice, whether labeled "anti-bias," "fairness," or "diversity," that considers race, sex, ethnicity, or other protected characteristics in hiring, promotions, contracts, or other decisions. DOJ guidance warns organizations, especially federal contractors, that certifying compliance while using such DEI-related tools could trigger severe penalties under the False Claims Act. Federal enforcement treats DEI-based auditing as contrary to the "colorblind" mandate of U.S. anti-discrimination law.

Numerous investigations and enforcement actions have now begun, targeting not just explicit workplace DEI, but also anti-bias AI technologies (such as auditing tools, group-fairness metrics, or supplier diversity certifications) when those technologies measure or optimize for race, gender, or similar group status.

Likely Next Steps

A legal collision is now inevitable. Companies exposed to both state anti-bias mandates and federal anti-DEI enforcement face profound uncertainty:

  • Litigation to clarify whether federal law preempts state bias-auditing requirements is probable, particularly as more states pass similar laws.
  • Some organizations may adopt "dual-stack" AI compliance: one system for state law, one for federal contracts (a configuration sketch follows this list). This is risky, as federal law is likely to preempt state law.
  • Many will, and should, avoid using protected-class features or group-fairness optimization at all, even where state law encourages it.
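
One way to picture the "dual-stack" approach from the second bullet is a deployment configuration that switches bias auditing on or off per legal regime. Everything below, names and field choices included, is a hypothetical sketch rather than a recommended or legally vetted design.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ComplianceProfile:
    """Hypothetical per-regime switches; real obligations need counsel review."""
    name: str
    run_group_bias_audits: bool      # e.g., the California ADS bias testing above
    retain_audit_records_years: int  # California expects at least four years
    use_protected_attributes: bool   # group-aware fairness optimization

# One stack tuned to the state regimes described above...
STATE_STACK = ComplianceProfile("state", True, 4, True)
# ...and one tuned to federal anti-DEI enforcement.
FEDERAL_CONTRACT_STACK = ComplianceProfile("federal_contract", False, 4, False)

def select_profile(is_federal_contract_workload: bool) -> ComplianceProfile:
    """Route each workload to one stack. The preemption risk noted above is
    precisely that these two profiles demand opposite behavior."""
    return FEDERAL_CONTRACT_STACK if is_federal_contract_workload else STATE_STACK

print(select_profile(True).run_group_bias_audits)  # False
```

The sketch makes the underlying tension visible: the two profiles do not merely differ in degree, they demand opposite behavior from the same system.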

The MLK Ideal: Colorblindness and True Fairness

The resulting policy debate calls us back to Martin Luther King Jr.'s ideal: judging individuals by the content of their character, not by the color of their skin. In this spirit, a full and future-proof implementation of anti-bias must embrace color- and race-blind design in AI tools. Only by building and governing automated systems to be blind to race, sex, and other protected class status can organizations align their technology with both the law and the founding principles of American civil rights: delivering fairness universally, without creating new forms of group-based preference.
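
As a simplified illustration of what color- and race-blind design can mean in practice, protected-class fields can be stripped from every record before any model or scoring step sees them. The field names here are assumptions for the example only.

```python
# Assumed field names, for illustration only.
PROTECTED_FIELDS = {"race", "sex", "ethnicity", "age", "religion", "disability"}

def blind_features(record: dict) -> dict:
    """Return a copy of the record with protected-class fields removed,
    so no downstream model or scoring step ever sees them."""
    return {k: v for k, v in record.items() if k not in PROTECTED_FIELDS}

applicant = {
    "years_experience": 7,
    "certifications": 3,
    "race": "redacted-before-use",  # stripped below, never used
    "sex": "redacted-before-use",
}
print(blind_features(applicant))  # {'years_experience': 7, 'certifications': 3}
```

One caveat worth stating plainly: dropping explicit fields does not remove correlated proxies (a ZIP code, for instance), which is why blind design also means governing which inputs remain.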

dMedia to the Rescue

dMedia is actively expanding a set of tools we already use internally to remove DEI- and ESG-related bias when working with AI models. We will begin infusing our products with these tools shortly. More news soon.