AI Regulation: Who’s Up to the Challenge?

Technology Policy Brief #150 | Inijah Quadri | June 16, 2025

Artificial intelligence (AI) is the discipline of designing computer systems that can perform tasks normally requiring human cognition—pattern recognition, language generation, planning—by learning statistical relationships from large data sets. Modern AI works by training vast machine-learning models on petabytes of text, images, audio, and code and then applying those models to new inputs to produce predictions or content; it now powers everything from chatbots and fraud detection to medical imaging and autonomous drones.

Artificial intelligence is no longer a futuristic topic: chatbots write school essays, algorithms screen renters, and synthetic voices flood voters’ phones. The Biden-Harris administration took its first swing at nationwide rules with the Safe, Secure, and Trustworthy AI Executive Order of October 30, 2023, which instructed federal agencies to protect civil rights and worker safety when deploying AI systems. In March 2024, the Office of Management and Budget (OMB) turned that order into binding rules—every agency must name a Chief AI Officer, publish risk assessments, and refuse any “high-impact” system that endangers rights or safety. Three months ago, however, OMB issued Memo M-25-21, promising faster procurement and “American-made AI” while trimming several earlier guardrails, a move cheered by industry and eyed warily by civil rights advocates.

While these memos include privacy protections—restricting the use of government data in model training and mandating transparency documentation, among others—they generally favor a pro-innovation posture that leaves agencies and vendors considerable flexibility. In contrast, Europe’s landmark AI Act outright bans social scoring (assigning reputational or risk scores to individuals based on aggregated personal data) and real-time biometric surveillance (automated identification or tracking of people through biometric traits such as faces, voices, or gait), backed by penalties of up to seven percent of global annual revenue for non-compliance. Unless the United States matches those standards, U.S. workers and consumers will be left with weaker protections even as U.S. companies scramble to meet tougher foreign rules.

Analysis

From a progressive standpoint, the policy debate is fundamentally about who controls AI’s future—public institutions or dominant technology firms. A recent Federal Trade Commission report confirms what many feared: cloud giants are consolidating exclusive access to compute power, data, and distribution—all at once. Without stronger merger rules or public-sector compute resources, market concentration will deepen and independent research will be priced out.

Civil rights advocates contend these trends have grave social implications. The Leadership Conference reports that AI systems are reinforcing redlining and racial profiling, and it advocates outright bans on biometric surveillance rather than mere transparency. Other leading think tanks recommend halting law enforcement use of facial recognition and limiting opaque algorithmic scoring processes.

Workers are also organizing around AI. For example, the Writers Guild now prohibits studios from using generative text tools to reduce writers’ pay or strip them of credit, a welcome precedent. Nevertheless, since OMB relaxed several safeguards in its April 3, 2025, Memo M-25-21—allowing agencies to fast-track “American-made AI” purchases when developers self-certify compliance with baseline privacy and civil-rights tests—regulatory pressure on many AI firms has eased. Indeed, a recent CRS analysis confirms that the United States still lacks a comprehensive federal statute, leaving agencies to patch gaps piecemeal.

A January FTC study of cloud-AI equity deals documents how the same three giants lock frontier developers into exclusive compute and distribution contracts, warning of a looming “compute cartel.” The leading U.S. frontier-model developers are OpenAI (partnered with Microsoft), Google DeepMind, Anthropic (backed by Amazon and Google), Meta AI, and Cohere. Each depends on hyperscale cloud providers—Azure, Google Cloud, and Amazon Web Services—to rent the thousands of specialized GPUs needed to train and serve cutting-edge models. Start-ups gain similar access through credit programs and joint go-to-market deals, so the developer ecosystem is tightly coupled to cloud infrastructure. In turn, AI workloads have become the main engine of cloud-provider revenue growth, making the relationship symbiotic: state-of-the-art AI needs elastic, low-cost compute, and the clouds need AI demand to keep their data centers full.

States are sprinting to plug the gap: Colorado’s SB 24-205 imposes an affirmative duty on deployers of any “high-risk” system to prevent algorithmic bias starting in 2026, and at least 28 states adopted AI measures this year alone. Industry is fighting back, though. House appropriators have advanced language that would bar states from enforcing new AI rules for a decade. Progressives have sketched an alternative path: the bipartisan TEST AI Act would turn the National Institute of Standards and Technology and Energy Department testbeds into a public audit regime, making risk assessments far more transparent. Coupled with an antitrust crackdown on the “compute cartel” and continued state-level action, this combination could transfer power from monopolies to the public and ensure the next generation of algorithms serves people—not profit.

Engagement Resources

  • Center for AI and Digital Policy (https://www.caidp.org/): A non-profit that promotes democratic values in AI and digital governance. Offers briefings on global AI regulations and ethical deployment.
  • AI Now Institute (https://ainowinstitute.org/): An interdisciplinary research center studying the social implications of artificial intelligence. Focuses on bias, labor impacts, and regulation.
  • Algorithmic Justice League (https://www.ajl.org/): Works to raise awareness about algorithmic bias and advocate for equitable AI systems, especially in surveillance and hiring.
  • The Leadership Conference on Civil and Human Rights (https://civilrights.org/): A civil rights coalition pushing for policy reforms, including a ban on biometric surveillance and safeguards against algorithmic discrimination.
  • Public Knowledge (https://www.publicknowledge.org/): Focuses on balancing innovation and consumer protections in digital policy, with specific positions on AI, privacy, and antitrust issues.