#5 - Under the Gavel: Senate Hearing on AI + Connecticut's AI Law
Welcome to Navigating AI Risks, where we explore how to govern the risks posed by transformative artificial intelligence.
For this 5th edition, we'll look at what US senators discussed with AI experts, Connecticut's ambitious legislative proposal to regulate AI, and more.
Let’s dive in!
In the Loop
AI Experts Testify Before the US Senate
In the first of a series of hearings on AI governance, Sam Altman (CEO of OpenAI), Gary Marcus (emeritus professor at NYU), and Christina Montgomery (Chief Privacy & Trust Officer at IBM) testified before the U.S. Senate Judiciary Committee. Senators and witnesses alike agreed that AI needs to be regulated; they diverged on what should be regulated and how.
Ms. Montgomery of IBM wants Congress “to govern the deployment of AI in specific use cases, not regulating the technology itself”; this entails an approach that regulates only the most high-risk applications of AI (close to what the EU’s AI Act does, although probably not what IBM would want to see in terms of legislation). Gary Marcus and Sam Altman made more ambitious proposals, including creating new agencies to govern AI, cooperating globally on AI safety standards, and requiring independent audits of AI systems.
The message of the witnesses is noteworthy. As Senator Dick Durbin said, “I can’t recall when we’ve had people representing large corporations or private sector entities come before us and plead with us to regulate them.”
The proposals mentioned by senators or witnesses included (see also the full hearing transcript here):
Creating scorecards (or “nutrition labels”) for AI systems to “encourage competition based on safety and trustworthiness” (Senator Richard Blumenthal)
Conducting pre-deployment safety reviews, similar to the US Food and Drug Administration’s review-and-approval process (Gary Marcus), or creating a new agency that would “license any effort above a certain scale of [AI system] capabilities and can take that license away and ensure compliance with safety standards” (Sam Altman)
Requiring independent audits to test AI systems’ compliance with certain “safety thresholds” or “performance on question X or Y” (Sam Altman), or having “external reviewers that are scientifically qualified” look at a system (Gary Marcus)
Conducting “impact assessments that show how systems perform against tests for bias and other ways that they could potentially impact the public” (Christina Montgomery)
What risks are we talking about? Most of the conversation concerned risks AI already poses, including privacy, bias, copyright infringement, increased economic inequality and unemployment, and manipulation of political beliefs via targeted advertising. Significantly, there was no discussion of the existential risks from AI that many AI safety and deep learning experts (like Geoffrey Hinton) worry about (though there was one brief mention of AGI).
When Senator Blumenthal asked what Sam Altman meant when he wrote in a 2015 blog post that the “development of superhuman machine intelligence is probably the greatest threat to the continued existence of humanity”, Altman confirmed the Senator’s (mistaken) intuition that this referred to widespread unemployment. Mr. Altman said that AI could cause “significant harm to the world” and go “quite wrong”, but he didn’t frame the risks anywhere near as strongly as he has elsewhere (for example: “the bad case – and I think this is important to say – is like lights out for all of us”).
This is symptomatic of the ambiguous way AGI labs usually communicate with policymakers. They are much more pessimistic when talking with other stakeholders: outside Congress, the message is generally that AGI is close (potentially less than 5 years away), that it would be extremely powerful, and that there is a non-trivial chance it might threaten humanity’s existence.
Reactions to the hearing varied widely. Many of those worried about AI risks welcomed the discussion of stringent regulations, which closely resembled the measures many experts would like to see. But many also think the discussion didn’t cover high-stakes risks from more advanced systems. Others disagree that the US needs new agencies, arguing that existing ones could take up this responsibility if given more resources and expanded mandates.
There are also fears that Sam Altman’s calls for regulation are an early sign of regulatory capture: that the real goal is to prevent competitors to OpenAI from emerging, because small companies may not have the resources to comply with extensive regulations such as licensing schemes. But some are skeptical that this is really why Altman is calling for new regulations, because (i) people close to him say he is genuinely worried about large-scale risks from AI (which new regulations could help alleviate) and (ii) he underlined that “regulatory pressure should be on us [companies that lead AI development]”, not smaller players.
The hearing will presumably feed into ongoing discussions on federal AI rules, such as the framework proposed by Senate Majority Leader Charles Schumer. Still, the fact that AI regulation is getting more political attention doesn’t mean the US Congress will pass stringent regulation. Most of the proposals discussed, such as creating a new agency or requiring independent audits, still seem relatively far from being enacted.
Connecticut Charts a Course for AI Governance
The state of Connecticut has put forward a proposal to govern artificial intelligence, automated decision-making, and personal data privacy. The proposed law takes a risk-based approach: its rules would apply only to systems that make, inform, or support “critical decisions”.
The state would have to set “policies and procedures concerning the development, procurement, implementation, utilization and ongoing assessment of systems that employ artificial intelligence.” The law would also require the state to inventory all the AI tools it uses, implement robust safeguards for AI development and use, and meet data governance requirements. The deadlines are short: the relevant policies would have to be devised by the end of the year.
The law also includes provisions that would:
Help ensure alignment with national and international standards
Require identifying and addressing new AI vulnerabilities
Create procedures mandating that AI systems be regularly examined and, in case of faulty performance, shut down
Require more transparency about the data and algorithms used
The proposal would establish several new bodies to oversee the development, procurement, and use of “automated decision systems”, such as an AI advisory board that would advise state agencies and the state legislature on AI policy. An AI officer would develop “government-wide AI procedures”, while an AI implementation officer would execute them. Finally, a task force would study the implications of AI in more depth.
As Connecticut steps into the AI governance arena, its efforts could serve as an influential blueprint for other states and jurisdictions navigating AI-related challenges. The bill received unanimous support in the Senate but still has to pass the state’s House of Representatives.
What Else?
World: After leaders from the G7 countries met in Japan for their annual summit, they released a communiqué stressing “the importance of international discussions on AI governance and interoperability between AI governance frameworks”. The leaders call on national ministries to establish the “Hiroshima AI process”: discussions on generative AI conducted through a G7 working group, in cooperation with the OECD and the Global Partnership on AI, covering issues such as governance, intellectual property, transparency, disinformation, and “responsible use”.
US: The US Senate held another hearing on AI, this time on government use of AI. This is important, because “even if those rules don’t technically apply to the private sector, they set norms and standards that often trickle out into the broader economy.”
US/China: As part of a China-US track II dialogue, experts from academia, think tanks, and industry in both countries called for establishing a white list that would allow semiconductor exports from one country to the other when there is no risk of benefiting the other country’s military. This could be done through “verification agreements and on-the-ground review”.
US: The White House has launched a working group on generative AI to provide input to the President on how AI systems can be “developed and deployed as equitably, responsibly, and safely as possible”.
Japan: The government’s Strategy Council on AI, which will advise the government on AI governance and regulation, held its first meeting. It will release a report on generative AI by the end of June.
EU/US/Industry: After an initial fine of €20 million, American facial recognition startup Clearview AI has received an additional €5.2 million fine from France’s privacy regulator.
US: 70 national security leaders wrote a letter calling on the US Congress to “attract international STEM talent” and address a “talent gap” in the face of “unprecedented competition from China”.
EU/India: As part of the first EU-India Trade & Technology Council ministerial meeting, “a coordination platform to address key trade, trusted technology and security challenges”, EU and Indian representatives agreed to cooperate “on trustworthy Artificial Intelligence and coordinate their policies with regards to the strategic semiconductors sector through a dedicated Memorandum of Understanding”.
UK: The British government announced its national semiconductor strategy, including investments of “up to £1 billion in the next decade to improve access to infrastructure, power more research and development and facilitate greater international cooperation”. The country also announced it will collaborate closely with Japan to strengthen semiconductor supply chains and do joint research and development projects.
US/Industry: Google released a policy agenda for “responsible AI progress” focused on seizing the benefits of tech development. The agenda calls for an industry-led, risk-based approach to regulating AI. It also comes out against calls to pause the development of AI models more powerful than the state-of-the-art (“Calls for a halt to technological advances are unlikely to be successful or effective, and risk missing out on AI’s substantial benefits and falling behind those who embrace its potential”).
UK: A British think tank called on the government to dedicate £11 billion to building ‘BritGPT’ and a national AI cloud, to avoid becoming even more dependent on US tech firms and to provide public goods such as “medical research, clean energy research, and AI safety research”.
What We’re Reading
Existential risk and rapid technological change: Advancing risk informed development (United Nations Office for Disaster Risk Reduction)
Towards best practices in AGI safety and governance: A survey of expert opinion (Schuett et al.)
Spotlight on Beijing Institute for General Artificial Intelligence: China's State-Backed Program for General Purpose AI (Center for Security and Emerging Technology)
How to deal with an AI near-miss: Look to the skies (Bulletin of the Atomic Scientists)
Controlling Access to Advanced Compute via the Cloud: Options for U.S. Policymakers (Center for Security and Emerging Technology)
Large Language Models Can be Used to Effectively Scale Spear Phishing Campaigns (Julian Hazell)
Controlling critical technology in an age of geo-economics: actors, tools, and scenarios (Swedish Institute of International Affairs)
What a Chinese Regulation Proposal Reveals About AI and Democratic Values (Carnegie Endowment for International Peace)
The power of control: How the EU can shape the new era of strategic export restrictions (European Council on Foreign Relations)
The Chinese approach to artificial intelligence: an analysis of policy, ethics, and regulation (Roberts et al.)
That’s a wrap for this 5th edition. You can share it using this link. Thanks a lot for reading!
— Siméon, Henry, & Charles.
(If you want to meet us, you can book a 15-minute call with us right here.)