Technology Policy Brief #162 | Jason Collins | December 30, 2025
Summary
In 2025, the Federal Trade Commission (FTC) signaled that it would use existing federal law to address algorithmic discrimination in automated decision-making systems used for hiring, lending, and tenant screening. Some tech companies argue that, in the absence of explicit AI legislation, the FTC is overreaching. The FTC’s push highlights how federal agencies are reshaping AI governance case by case rather than through broad new laws.
Analysis
Currently, no federal legislation explicitly governs the use of AI across all sectors in the U.S. Still, the FTC has invoked its authority under statutes such as Section 5 of the FTC Act, the Equal Credit Opportunity Act (ECOA), and the Fair Credit Reporting Act (FCRA).
In a joint statement with the Consumer Financial Protection Bureau, the Justice Department’s Civil Rights Division, and the Equal Employment Opportunity Commission, the FTC said, “Private and public entities use these systems to make critical decisions that impact individuals’ rights and opportunities, including fair and equal access to a job, housing, credit opportunities, and other goods and services,” but added, “Although many of these tools offer the promise of advancement, their use also has the potential to perpetuate unlawful bias, automate unlawful discrimination, and produce other harmful outcomes.”
Together, these agencies have signaled that they will hold companies accountable where automated systems harm workers and consumers.
The FTC has maintained that existing legal authorities apply to those automated systems. In September 2024, the FTC announced Operation AI Comply, which included five law enforcement actions against operations that use AI hype or sell AI technology that can be used in deceptive or unfair ways.
In public guidance, the FTC has outlined a compliance roadmap for companies using AI. Companies are expected to:
- Test their algorithms for biased or unfair outcomes (a minimal testing sketch follows this list)
- Be honest with customers about how and why their data is being used
- Be transparent about the AI frameworks they rely on
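For illustration only, the Python sketch below shows one common way a company might “test their algorithms” for disparate impact: the four-fifths (80%) rule long used as a screening heuristic in employment analytics. The function names, data, and threshold here are illustrative assumptions, not FTC-prescribed methodology, and passing such a check is not a legal determination.

```python
# Hypothetical sketch of a disparate-impact screen under the
# four-fifths (80%) rule. All names and data are illustrative
# assumptions, not an FTC-mandated test.

from collections import defaultdict

def selection_rates(decisions):
    """decisions: list of (group_label, was_selected) tuples.
    Returns each group's selection rate."""
    totals = defaultdict(int)
    selected = defaultdict(int)
    for group, picked in decisions:
        totals[group] += 1
        if picked:
            selected[group] += 1
    return {g: selected[g] / totals[g] for g in totals}

def four_fifths_check(decisions):
    """Flag groups whose selection rate falls below 80% of the
    highest group's rate -- a screening heuristic, not a legal
    conclusion about unlawful discrimination."""
    rates = selection_rates(decisions)
    best = max(rates.values())
    return {g: (rate, rate >= 0.8 * best) for g, rate in rates.items()}

# Illustrative data: (group, hired?)
sample = [("A", True), ("A", True), ("A", False),
          ("B", True), ("B", False), ("B", False)]
print(four_fifths_check(sample))
# Group B's rate (1/3) is below 80% of group A's (2/3), so it is
# flagged for closer review.
```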
Throughout 2025, the FTC has brought multiple new cases, including one involving the company Ryter, in which the agency ultimately found no Section 5 violation. That outcome shows the FTC continues to enforce on a case-by-case basis, weighing each matter on its facts rather than applying a blanket rule.
In response to the FTC’s heightened scrutiny, some tech companies have raised concerns about how far enforcement can go before it stifles innovation. During a panel at CES 2025, current and former FTC Commissioners, among them former Commissioners Christine Wilson and Julie Brill, examined the FTC’s role in AI oversight and in policing unfair and deceptive acts or practices (UDAP).
The panel surfaced two schools of thought: that the FTC should act preemptively against the new risks emerging technologies bring, and that excessive enforcement could discourage technological advancement. The question remains open, as no new federal AI policies or laws have yet been enacted.
Rather than pursuing sweeping AI regulation, the FTC is taking a case-by-case approach to ensuring fairness, using existing authorities to fill the gaps in current frameworks. Companies deploying automated decision-making systems can no longer rely on those gaps for protection.
Engagement Resources
- Section 5 of the FTC Act
- The Equal Employment Opportunity Commission’s role in AI
- An Authoritative Report on FTC AI Enforcement
- A one-year review of the FTC’s Operation AI Comply
