#1 - Slowing Down AI: Rationales, Proposals, and Difficulties
The way forward after the "Pause Giant AI Experiments" Open Letter
Greetings, and welcome to the first edition of Navigating AI Risks, where we explore how to govern the risks posed by transformative artificial intelligence.
Starting from the latest developments in the AI industry, governance, and policy, we reflect on the important questions surrounding the transition to a world with advanced AI.
Every 2 weeks, you’ll receive the following in your inbox:
Governance Matters: Our long-form thoughts on a fundamental question in AI governance
In the Loop: An overview of what’s been happening these past two weeks
What We’re Reading: A list of interesting readings on (or relevant to) AI governance
With the introductions out of the way, let's jump right into it.
(You can also navigate directly to sections In the Loop or What We’re Reading)
Slowing Down AI: Rationales, Proposals, and Difficulties
Lead Author: Siméon Campos (from SaferAI)
Our world is one where AI advances at breakneck speed, leaving society scrambling to catch up. This has sparked discussions about slowing AI development. We explore this idea, delving into the reasons why society might want to have a slowdown in its policy toolbox: preventing a race to the bottom, giving society a moment to adapt, and mitigating some of the more worrisome risks that AI poses. We'll also discuss various proposals to implement a slowdown and the most common concerns around those proposals.
On March 22, an open letter published by the Future of Life Institute and signed by prominent tech CEOs and researchers (including Turing Award winner Yoshua Bengio and leading AI risk expert Stuart Russell) called for a 6-month pause on giant Large Language Model (LLM) experiments. This comes a few weeks after the release of GPT-4, and a few months after the release of ChatGPT. LLMs are being widely adopted in different sectors of the economy. Major publications like Time in the US and The Guardian in the UK are now discussing risks linked to transformative AI. Fox News’ Peter Doocy is asking the White House Press Secretary about existential AI risks. US President Joe Biden says AI systems should be safe before release.
What do people want to slow down?
Most advocates of a pause want to set basic guardrails to ensure LLMs are safely developed, understandable, and secure before deployment, while continuing progress on other AI systems (as long as they don’t present risks).
Why Slow Down?
Essentially, to avoid a race to the bottom: a race in which companies try to be the first to deploy a technology cheaply, at the expense of ensuring its safety. That’s a classic problem in safety-critical industries such as healthcare and aviation. Given the economic incentives to develop advanced AI, corner-cutting seems poised to define the relationships between AI labs (and between the states in which they operate).
A pause in large language model development would give society time to adapt and to decide how widely the technology should be deployed, given its exponential growth and its potential impact on up to 300 million full-time jobs, including white-collar roles. The break would also give policymakers time to evaluate and update laws on intellectual property, liability, discrimination, and privacy, establishing basic guardrails for safer future technology and providing legal certainty for AI development and deployment.
A substantial fraction of AI experts worry about existential risks from AI accidents. Among those expressing concern is Geoffrey Hinton, a godfather of the deep learning revolution. Another, Dan Hendrycks, is a leading machine learning researcher, an expert in evaluating AI systems, and the inventor of GELU, an important component of most frontier AI systems. He has said publicly that the chances that we go extinct are as high as 80%, and he defends his position more thoroughly in his paper “Natural Selection Favors AIs over Humans”. Zooming out, a poll of US public opinion reveals that 46% of the population is “somewhat” to “very concerned” about existential risks from AI.
Concerns about misuse, including high-consequence risks, also lend support to a pause in LLM development. Europol warns that enhanced disinformation campaigns and the empowerment of criminal actors are growing threats. LLMs are also disrupting the global cybersecurity landscape by facilitating the large-scale discovery of security vulnerabilities and the creation of rapidly mutating computer viruses. Without countermeasures, frequent and large-scale cyberattacks could occur, and advanced AI systems might make chemical and biological weapons more powerful and easier to create. We’ll discuss those risks more in future newsletters.
Proposals & Criticisms
As awareness of those issues grows, calls for slowing down AI development are on the rise. But proposals vary widely in their strength.
Most arguments against a slowdown come down to a) it wouldn’t work and b) it would delay the benefits of AI too much. The second argument illustrates the risk-benefit trade-off at the center of most policy debates, and implies that a pause should be as short and targeted as possible. Some of those worried about AI risks have indeed proposed accelerating AI safety research, so as to avoid slowing AI development excessively and postponing its benefits too much.
The Open Letter itself calls for a 6-month pause on large AI training runs to develop guardrails and auditing procedures. Some view it as a step towards establishing basic safety requirements in the industry. Others think the plan would be counterproductive, and increase AI risks: If the training of AI systems is restrained while research to improve the training process continues, it could result in sudden capability jumps and increased safety risks when training resumes (at the end of the proposed pause). As AI governance analyst Matthew Barnett puts it, “continuous progress is more predictable, and better allows us to cope with challenges as they arise, compared to the alternative in which powerful AI suddenly arrives.”
Independently of this factor, implementing robust guardrails will be challenging, in part due to the complex and opaque nature of LLMs. OpenAI supports some measures advocated by the Open Letter, such as "independent audits," "independent review before training future systems," and limiting compute growth for advanced models. The key disagreement, however, is about when the best time to pause would be. Because we know very little about the current risks of models and the future pace of AI progress, there is no authoritative answer to this question.
On the other end of the spectrum lies Eliezer Yudkowsky, founder of the Machine Intelligence Research Institute and a pioneer in the study of Artificial General Intelligence (a hypothetical AI system that can do anything a human brain can do). He believes the Open Letter doesn't go far enough, and suggests waiting until the "alignment problem" (ensuring AI systems do exactly what their designers want) is confidently solved before proceeding with LLM development. If we don’t, Yudkowsky claims, “literally everyone on Earth will die”. Other critics argue that a full shutdown is unjustified given the uncertainty around existential risk.1
Between those two ends of the spectrum lies the conditional slowdown, which roughly consists in triggering a slowdown once specific dangerous-capability thresholds are reached. For instance, if a model becomes better than the top 10% of hackers, as measured by an independent auditor, the regulator would prevent AI labs from dedicating R&D to more powerful capabilities until they have implemented adequate countermeasures (such as, in this case, removing those hacking abilities or offsetting them by enhancing the state of global cybersecurity). Conditional slowdown could also go the other way, allowing companies to develop models only if they fulfill specific conditions (such as “benchmarks of explainability/transparency” or firm-level safeguards). This approach incentivizes labs to prioritize safety and, if properly enforced, differentially accelerates safer actors (see the sketch below).
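To make the logic of a capability-gated rule concrete, here is a minimal sketch in Python. It is purely illustrative: the audit interface, the capability names, and the 90th-percentile threshold are hypothetical assumptions, not part of any existing proposal.

```python
# Purely illustrative: the audit interface, capability names, and the
# 90th-percentile threshold below are hypothetical assumptions, not part
# of any existing proposal.

from dataclasses import dataclass


@dataclass
class AuditResult:
    capability: str               # e.g. "offensive_cyber"
    percentile_vs_experts: float  # model skill relative to human experts (0-100)
    countermeasures_in_place: bool


def further_scaling_permitted(audits: list[AuditResult], threshold: float = 90.0) -> bool:
    """Conditional-slowdown rule: allow further capability R&D only if every
    audited dangerous capability is below the threshold or offset by
    adequate countermeasures."""
    for audit in audits:
        if audit.percentile_vs_experts >= threshold and not audit.countermeasures_in_place:
            return False
    return True


# A model that outperforms 92% of expert hackers, with no offsets in place,
# would trigger the pause.
print(further_scaling_permitted([AuditResult("offensive_cyber", 92.0, False)]))  # False
```

The design choice the sketch highlights is that the pause is tied to measured capabilities rather than to a calendar date, so the burden falls on the labs whose models cross the bar.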
Difficulties and Countermeasures
As you can see, the proposals differ vastly from one another, ranging from a gradual approach to an outright ban on AI capabilities research. But for all of them, critics argue that serious challenges stand in the way:
“Coordination is hard. Those proposals will never see the light of day.” This argument understates a few factors that are crucial to the situation we’re in:
Most importantly, key AI leaders are US-based and have at least a few years of technological lead, making a unilateral slowdown feasible without harming US AI leadership. Unlike climate change, responsibility lies with few actors, and other countries, not benefiting from the technology race, may support an international slowdown plan.
Because of AI’s enormous economic and security benefits, it seems hard to imagine a world where public authorities or even companies would strive to slow down its progress. But history offers several examples of technologies that were deliberately not pursued despite their benefits. And both political elites and the general public in the US are increasingly in favor of regulating AI. If this trend persists, the idea of slowing down AI development may not seem as far-fetched as it currently does.
“A slowdown will allow China to surge ahead of the US in AI development”.
If we take experts’ warnings about existential risk seriously, it matters little whether advanced AIs are developed by the US or by China: humanity risks extinction either way, as long as we lack the technology to control those systems (and we don’t seem to be on track to get it anytime soon).
China is very likely a few years behind technologically, given the information available on the current state of its technology industry. It’s not even clear its labs match the capabilities of second-tier US players like Adept.ai or Cohere.ai (which themselves lag a few years behind OpenAI and probably Anthropic). Given that, China may be willing to agree to a slowdown.
China's AI industry has been severely affected by US export controls and is expected to fall significantly behind the US in the next few years. The gap between the two countries' capabilities may widen even more with the upcoming introduction of NVIDIA's state-of-the-art high-end computing chips (H100), which won’t be available in China (due, again, to US export controls). Moreover, the US is very likely to continue trying to slow China's tech industry, especially AI (for example by preventing certain American companies from investing in China or further restricting exports). As the US Undersecretary of Commerce, responsible for export controls, puts it, “If I was a betting person I would put down money on [additional export controls, including on AI]”.
Even a unilateral US pause on AI development could be beneficial. It would signal to China that the US is not willing to do “whatever it takes” to get ahead (i.e. sacrifice safety in favor of capability), thus reducing dangerous racing dynamics and potentially making China less wary of slowing down its own AI development.
Overall, many agree that a slowdown would help us navigate AI risks successfully, and nearly everyone agrees that slowing down won’t be easy to implement well. But many have also been surprised by how sympathetic people are to the idea. That suggests that coordinating to make transformative AI development go well might be much easier than many thought. That’s great news for AI risk management!
In the Loop
Lead Author: Charles M.
Italy bans ChatGPT. Other European countries are considering similar measures.
In addition to a recent data breach, the regulator says ChatGPT was banned because (i) it doesn’t require age verification, and thus exposes minors to “absolutely unsuitable answers”, and (ii) the use of massive amounts of personal data to train the model behind ChatGPT has no legal basis.
If OpenAI doesn’t come into compliance with EU data protection law, the company may have to pay fines of up to 4% of its global annual revenue. CEO Sam Altman has told the Italian regulator that the company will put in place “measures to address the concerns” that caused the ban.
As Europe’s privacy regulators are in constant communication, Italy’s ban may spread to other European countries, notably Germany and France, the bloc’s two largest economies.
China reacts to Japan’s new chip export controls
On March 31, Japan announced new restrictions on exports of semiconductor equipment. Although China is not explicitly mentioned, the move is clearly intended to restrict its ability to harness semiconductors for military and economic advantage. It follows similar restrictions implemented by the United States in October 2022 and by the Netherlands in March 2023 (for which China is already finding loopholes).
China immediately condemned the move, saying it “will take decisive measures to safeguard its rights and interests if Japan” does not change course. Later, on April 5, it filed a complaint with the World Trade Organization.
This comes at a time when China is doubling down on its ambitions to become the world’s tech leader. In March:
The country announced plans to reform the organization of the ministry responsible for dealing with tech issues, with the goal of “centralizing China’s push for technological self-reliance.”
At a major political event, a leading AI researcher advocated for the launch of a national effort to develop Artificial General Intelligence.
China’s Cyberspace Administration announced it would investigate an American chip making company’s activities in China, in part as a response to US export controls.
Researcher proposes framework to verify compliance with international AI rules by monitoring compute
Current international AI governance arrangements rely on soft law, standards, and other types of non-legally binding rules. However, as AI systems become more powerful and the potential for misuse increases, countries and companies may want to create international rules of the road. The challenge lies in verifying that all parties adhere to these rules.
Yonadav Shavit, a Harvard PhD candidate, presents a promising roadmap in his paper (and accompanying Twitter thread) to address this issue: after governments agree on rules governing which AI systems are permissible to develop, regulators would periodically inspect the chips used to train these systems. This would require integrating a logging mechanism into the hardware, which current chips lack, but Shavit argues this technical solution is feasible (though cybersecurity and economic costs may be challenges). It would allow a government to demonstrate compliance with a potential international agreement by proving its companies aren’t developing undesirable systems.
Why focus on monitoring chips rather than algorithms or data? There are three key reasons: (i) chips have a physical presence that facilitates verification, (ii) AI labs utilize specific types of chips that rules could target (thus avoiding scrutiny of chips used in consumer applications), and (iii) the semiconductor supply chain is controlled by a limited number of companies; changing manufacturing processes wouldn’t require convincing too many actors.
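To give a flavor of how hardware-based verification could work, here is a toy sketch of snapshot logging and later inspection. It is not Shavit’s actual scheme: the class names, the snapshot format, and the inspection interface are hypothetical simplifications of the general idea.

```python
# Toy sketch of hash-based snapshot logging plus later verification.
# This is NOT Shavit's actual scheme: class names, the snapshot format,
# and the inspection interface are hypothetical simplifications.

import hashlib


def weight_fingerprint(weights: bytes) -> str:
    # What an on-chip logging mechanism might store: a hash of the model
    # weights at a training checkpoint, not the weights themselves.
    return hashlib.sha256(weights).hexdigest()


class ChipLog:
    """Append-only log that a regulator could later inspect."""

    def __init__(self) -> None:
        self.entries: list[str] = []

    def record_snapshot(self, weights: bytes) -> None:
        self.entries.append(weight_fingerprint(weights))


def run_matches_log(log: ChipLog, claimed_snapshots: list[bytes]) -> bool:
    """An inspector checks that the training history a lab reports is
    consistent with what its chips actually logged."""
    return [weight_fingerprint(w) for w in claimed_snapshots] == log.entries


# Example: the reported checkpoints match the chip's log, so the claimed
# (permitted) training run is consistent with the hardware evidence.
log = ChipLog()
log.record_snapshot(b"weights-after-step-1000")
log.record_snapshot(b"weights-after-step-2000")
print(run_matches_log(log, [b"weights-after-step-1000", b"weights-after-step-2000"]))  # True
```

The point of the hashing step is that inspectors can verify what was trained without ever seeing the weights themselves, which is part of what makes this kind of verification politically plausible.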
What else?
China/US: China’s Cyberspace Administration published its draft rules on Generative AI, calling for public comments. The US Commerce Department also issued a public request for comments on policies to “support the development of AI audits, assessments, certifications and other mechanisms”.
UK: The UK government published a White Paper entitled “A pro-innovation approach to AI regulation”, where it lays out the country’s strategy for developing and governing AI.
US: The Federal Trade Commission released guidelines on marketing AI products and developing AI systems.
EU: Europol, the EU’s law enforcement agency, released a report entitled “ChatGPT - the impact of Large Language Models on Law Enforcement”, featuring an overview of how criminals may misuse ChatGPT.
US: A pitch deck for Anthropic’s next round of funding, seen by TechCrunch, reveals that the company is planning to build an AI system “10 times more capable than today’s most powerful AI.”
Global: The US and allies launched the Code of Conduct of the Export Controls and Human Rights Initiative, a “multilateral effort intended to counter state and non-state actors’ misuse of goods and technology that violate human rights.”
What We’re Reading
Securing Liberal Democratic Control of AGI through UK Leadership and Response to Comments
Policymaking in the Pause: What can policymakers do now to combat risks from advanced AI systems?
Whether We Can and Should Develop Strong AI: A Survey in China
That’s a wrap for this first edition. You can share it using this link. Thanks a lot for reading!
— Siméon, Henry, & Charles.
(If you want to meet us, you can book a 15-minute call with us right here.)
1 We will discuss those specific risks in future newsletters.