By John Cassidy
On May 30th, the Center for AI Safety released a brief but alarming statement to the public, signed by hundreds of artificial intelligence (AI) experts, computer scientists, and other public figures in the technology industry: “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”
Signatories included Demis Hassabis (CEO of Google’s DeepMind artificial intelligence division), Sam Altman (CEO of OpenAI, the company behind ChatGPT), and Bill Gates (co-founder of Microsoft), as well as hundreds of university faculty and researchers from the top artificial intelligence and computing programs across the globe.
Many signatories reported that their biggest concern was not, in fact, a ‘Terminator’-style uprising, but a more bureaucratic problem: regulation.
AI programs are on the verge of transforming every aspect of our lives, from health care, entertainment, and education to transportation and manufacturing. The technology has the potential to bring enormous benefits to humanity, such as curing diseases, fighting climate change, and enhancing creativity. However, AI also poses significant risks: undermining human rights and privacy, disrupting labor markets and social cohesion, and threatening national security and democracy.
The rapid development of this new breed of technology requires a careful and balanced approach to regulation, one that fosters innovation and trust while mitigating potential harms. Unfortunately, the United States is falling behind its global competitors in this regard, particularly the European Union (EU), which has led the field in proposing a comprehensive and coherent framework for AI governance.
The EU’s AI Act, proposed by the European Commission in April 2021, assigns AI applications to three risk categories: unacceptable, high-risk, and low-risk. Unacceptable uses are those that violate fundamental rights or values, such as social scoring systems or mass surveillance. High-risk uses pose significant threats to safety or fundamental rights, including biometric identification or critical infrastructure. Low-risk uses are those that pose minimal or no threats, like chatbots or video games.
The AI Act imposes strict obligations on high-risk AI systems, including requirements for transparency, accuracy, robustness, human oversight, and data quality. In addition to creating a governance structure involving national authorities, a European AI Board, and the European Commission to monitor and enforce the rules, the act establishes a conformity-assessment mechanism to verify compliance before AI systems are placed on the market.
The EU's approach is based on the idea of proportionality, meaning that the level of regulation is aligned with the level of risk. It also follows the principle of human centricity: the development and deployment of AI should be centered on, and respect, human dignity, autonomy, and agency. The EU's goal is to create a single market for trustworthy and ethical artificial intelligence that can compete globally and serve the public interest.
Washington, on the other hand, has no comparable federal legislation or strategy for AI regulation. Instead, it has a patchwork of sector-specific laws and guidelines that address certain aspects of AI, such as privacy, discrimination, or safety. Yet these laws are often outdated or inadequate for the new challenges posed by AI. For example, there is no federal legislation regulating biometric data or facial recognition technology, which law enforcement agencies and private companies use widely without safeguards or oversight.
The US also lacks a coordinated and coherent vision for AI governance. In 2019, the Trump administration released an executive order on maintaining American leadership in AI, focused on promoting research and development, fostering public-private partnerships, and removing regulatory barriers. Although it has signaled its intention to prioritize ethical and responsible AI practices, the Biden administration has yet to articulate a concrete policy agenda for the technology beyond its “Blueprint for an AI Bill of Rights.”
Washington risks losing its competitive edge and moral authority in the global AI landscape if it does not catch up with Brussels in developing a comprehensive and coherent framework for regulation. The US should not only aim to protect its national security interests from malicious or adversarial uses of AI by foreign actors, but also ensure that its own domestic uses of the technology respect human rights and democratic values.
If the US is to remain at the forefront of AI research and development, it must engage in constructive dialogue and cooperation with the EU and other international stakeholders to adopt global standards and norms for AI governance. Such measures are essential not only for effective regulation and interoperability among markets, but also for preventing a ‘race to the bottom’ in areas of AI technology that could violate privacy or become tools of state surveillance, such as facial recognition.
Artificial intelligence is advancing faster than regulation, and the gap is quickly becoming a problem for America. The current administration needs to act swiftly and decisively to catch up with its European counterparts in developing a comprehensive and coherent framework for AI regulation.

The benefits of adequately supervising emerging technology extend far beyond America’s economy. Facial recognition and artificial intelligence have proved to be powerful security and defense tools on the battlefield in Ukraine, supporting everything from intelligence gathering to cyberwarfare. If America is to contribute to a more peaceful and prosperous world, it must develop laws that protect public safety and individual rights in a rapidly evolving landscape, while also making the investments and setting the policies needed to keep pace with other state and non-state actors that possess this technology. The fluid nature of AI development and its wide range of uses make emerging tech not only an economic and privacy concern, but a security threat that the US and its allies should take very seriously.