Is Biden’s New Executive Order on AI Enough?

Technology Policy Brief #104 | By: Christopher Quinn | December 20, 2023
Photo taken from: https://cybernews.com

__________________________________

On October 30, 2023, President Joe Biden issued an Executive Order to ensure that the United States leads the way in seizing the promise and managing the risks of Artificial Intelligence (AI). AI relies on machine learning algorithms that are trained on specific datasets and learn to make predictions based on that data. These algorithms are limited by the quality and quantity of the data they are trained on, and they cannot understand concepts that are not represented in that data.

Analysis

The Executive Order establishes new standards for AI safety and security, protects Americans' privacy, advances equity and civil rights, stands up for consumers and workers, promotes innovation and competition, advances American leadership around the world, and more. The Order is an initial effort by the executive branch to address the complicated issues related to AI. However, much more needs to be done, including, at some point, Congressional regulatory legislation. Biden's Executive Order will be implemented by a variety of federal agencies by the end of 2024.

AI is already helping the government better serve the American people, including by improving health outcomes, addressing climate change, and protecting federal agencies from cyber threats. In 2023, federal agencies identified over 700 ways they use AI to advance their missions, and this number is only likely to grow. AI has already been successfully deployed by the federal government in departments ranging from NASA to the Department of Homeland Security. The new Executive Order will further support the efforts of federal agencies to make productive use of AI.

The Executive Order fails to address a number of pressing issues. For instance, it does not directly address how to deal with lethal autonomous weapons, a complex topic that has recently been debated at the General Assembly of the United Nations. The Pentagon is developing swarms of low-cost autonomous drones as part of its recently announced Replicator program. Ukraine has developed homegrown AI-powered attack drones that can strike Russian forces without human intervention. The Executive Order only asks the military to use AI ethically but does not stipulate what that means. Unless strict controls are implemented, we risk living in a world where nothing we see or hear online can be trusted.

Frontier Models

Perhaps the most controversial aspect of the Executive Order is the part addressing the potential harms of the most powerful, so-called "frontier" AI models. Frontier models are large-scale machine-learning models that exceed the capabilities of the most advanced existing models and can perform a wide variety of tasks. Some experts believe these models, which are being developed by companies such as OpenAI, Google, and Anthropic, pose an existential threat to humanity. Experts say it will be difficult, and perhaps impossible, to police the development of frontier models. Biden's Executive Order on AI does not explicitly target frontier models, but it does address some of the issues and challenges they pose. For example, the Executive Order requires that developers of the most powerful AI systems share their safety test results and other critical information with the U.S. government.


