#8: New Export Controls + Antitrust Action On AI + Compute Governance
Welcome to Navigating AI Risks, where we explore how to govern the risks posed by transformative artificial intelligence.
This is the 8th edition of NAIR. We want to make this as useful as possible, so feel free to send us your feedback about what we can improve.
Let’s dive in!
In the Loop
The Export Controls Saga Goes On
Traditionally, export controls were used to prevent sensitive technologies from falling into the hands of adversaries. These controls mostly targeted hardware and tangible goods, like weaponry or nuclear technology. However, the distinction between 'weapon' and 'non-weapon' is becoming increasingly blurred. As the United States and China jostle for the top spot in AI, export controls have taken on an entirely new dimension. They now serve three main objectives:
controlling, most relevantly in this context, the semiconductors used to train both frontier and (less advanced) military-relevant AI systems,
maintaining one’s strategic economic and technological lead,
guarding against other security risks.
In October 2022, the United States imposed groundbreaking export controls on China covering advanced chips and the equipment used to manufacture them, and later persuaded the Netherlands and Japan, two key semiconductor powerhouses, to impose similar controls.
After the October 2022 controls, Nvidia designed new chips to keep selling to the Chinese market. The US will likely implement new controls that include these chips, possibly by late July. It’s also looking to control Chinese companies’ access to advanced chips through American cloud computing providers.
Major economies around the world are implementing restrictions on semiconductor and related exports. The Netherlands will more tightly control exports by ASML, a crucial supplier of chip-making equipment. Though China is not explicitly targeted, the Chinese Embassy in the Netherlands reacted by saying these restrictions were “completely unreasonable and untenable”, with “no legal or moral basis”. China also announced it would restrict exports of critical raw materials used to make chips and electric vehicles, a move largely seen as retaliation for US attempts to cut off its access to advanced chips.
EU institutions have little influence over export control rules, as security remains a national responsibility. The EU’s export control regulation is largely limited to coordinating between member states and implementing controls agreed upon in one of the four multilateral export control regimes. Nevertheless, last week the EU announced its 'Economic Security Strategy', which calls for updates to the bloc's investment screening and export control mechanisms. Like the US government, the European Commission is also seriously considering limits on European companies’ investments in strategic technology sectors abroad.
There are no signs of a slowdown in the global trend toward more export controls. One reason, particularly from the US perspective, is the near impossibility of determining how China uses chips: the country’s civil-military fusion strategy aims to create an integrated ecosystem in which advances in civilian technologies can be readily absorbed by the military, and vice versa. The US regulates exports to prevent its companies’ cutting-edge technology from benefiting the Chinese military. There may be ways out of this conundrum: a track II diplomatic dialogue between senior Chinese and American business and government leaders called for a program to verify how exported semiconductors are used, thereby enabling their export under certain conditions. Their communiqué states:
“If one of the U.S. government’s concerns is the military end-use of advanced semiconductor technology, then the two countries should explore credible and constructive implementation mechanisms for end-use verification monitoring for advanced chips”. The two countries should “establish a trusted "white list" of permitted semiconductors by reaching verification agreements and on-the-ground review.”
Is Antitrust Coming for AI?
Some are saying that the high financial barrier for AI development threatens to foster monopolistic practices, with only a few well-funded firms dominating the AI landscape. The rising consolidation of the industry through acquisitions and partnerships suggests that these concerns are likely to intensify.
Because of the high costs of developing and operating large AI models, many AI labs are looking to partner with cloud computing providers. These strategic partnerships (generally combined with equity investments) give general-purpose AI labs access to the resources of large compute providers, who, in exchange, gain privileged access to the labs’ AI models, acquire equity in those companies, or both. That pattern played out between Microsoft and OpenAI, and will likely repeat in the future.
Another competition concern is that large companies, able to absorb short-term losses, may stifle competition by offering free products and services. For example, Dylan Patel, chief analyst at semiconductor research firm SemiAnalysis, estimated that running the free version of OpenAI’s ChatGPT costs up to $700,000 per day (roughly $255 million per year).
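As a quick sanity check of that annualization (the daily figure is SemiAnalysis’s estimate, not ours), the yearly number follows from simple arithmetic:

```python
# Back-of-the-envelope annualization of the estimated daily cost of free ChatGPT.
# The $700,000/day figure is the SemiAnalysis estimate quoted above; the rest is arithmetic.
daily_cost_usd = 700_000
yearly_cost_usd = daily_cost_usd * 365
print(f"${yearly_cost_usd:,} per year")  # -> $255,500,000 per year (~$255 million)
```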
Others argue these fears might be premature, noting the continual stream of new entrants in the AI market. They posit that there is no apparent bottleneck, whether in talent, funding, or computation, in the expanding AI sector.
The Federal Trade Commission has been one of the most active US agencies on AI, though mostly through its consumer protection responsibilities. It is now looking at enforcing its core business: antitrust law. The FTC’s basic case looks like this:
“Generative AI depends on a set of necessary inputs. If a single company or a handful of firms control one or several of these essential inputs, they may be able to leverage their control to dampen or distort competition in generative AI markets. And if generative AI itself becomes an increasingly critical tool, then those who control its essential inputs could wield outsized influence over a significant swath of economic activity.”
Competition concerns relate to AI’s three building blocks, also known as the AI Triad: data, computing power, and algorithms. Researchers and engineers combine these three inputs to create new models. According to the FTC, control over any one of them may pose competition concerns (though it discusses control over talent rather than algorithms, perhaps because it also points to open-source AI systems as a way to “open up the playing field” on the algorithmic front).
The FTC also warns that it “will use [its] full range of tools to identify and address unfair methods of competition”. It has demonstrated this approach in the past; last year, to avoid excessive concentration of computational resources, it blocked Nvidia’s $40 billion acquisition of semiconductor design company Arm, which “would have stifled competition in multiple processor markets”.
Antitrust is likely to be a relevant lever for governing transformative AI risks. OpenAI’s charter contains a so-called “Assist Clause” whereby “if a value-aligned, safety-conscious project comes close to building AGI before we do, [the company pledges] to stop competing with and start assisting this project.” The reasoning behind this provision is the concern about “late-stage AGI development becoming a competitive race without time for adequate safety precautions”. Haydn Belfield and Shin-Shin Hua, researchers affiliated with the Centre for the Study of Existential Risk, have found that the Assist Clause may trigger antitrust investigations under EU competition law (which also applies to American companies operating in Europe).
Such considerations are becoming increasingly important. Last month, Senator Warner, Chairman of the Senate Intelligence Committee, asked at an event whether it would be “in the national security interest of our country to [merge] OpenAI/Microsoft, Anthropic/Google, maybe throw in Amazon [...] We didn’t have 3 Manhattan Projects, we had 1”.
What else?
Global: The United Nations Security Council will hold a meeting on the potential threats of artificial intelligence to international peace and security.
EU: More than 160 executives from large EU companies signed an open letter opposing the current version of the AI Act because it would have “catastrophic implications for European competitiveness”.
Industry/US: OpenAI faces a class action lawsuit over alleged violations of privacy and copyright laws after it scraped data from the internet to train its AI systems.
Global: UNESCO & the EU announced a partnership to support “AI Ethics” legislation in developing countries.
EU: The EU released an updated Digital Diplomacy Strategy; in a blog post, the chief of the EU’s diplomatic service, Josep Borrell, points to “authoritative voices from inside the tech industry [...] warning us about potential existential risks”.
Industry: OpenAI opened an office in London, which now hosts offices from all 3 of the world's most advanced AI labs (all American).
US: Building on its Risk Management Framework, the National Institute of Standards and Technology launched a new working group “to tackle risks of rapidly advancing generative AI.”
Global: Civil society organizations released a statement on the current negotiations over an international AI treaty (led by the Council of Europe, see NAIR #7).
Japan/EU: The EU and Japan signed a memorandum of understanding on semiconductor supply chain cooperation, including research, training, and an early warning system for supply shortages.
Industry: Microsoft CEO suggested creating a CERN-like global AI laboratory to solve the “alignment problem” (ensuring that advanced AI systems work according to the intentions of their operators).
China: The Chinese semiconductor industry received $291 billion in subsidies in 2021-2022.
Industry: According to its CEO, Google DeepMind is training a system, Gemini, that will be more capable than OpenAI’s GPT-4.
US: Senator Schumer launched his SAFE Innovation framework for regulating AI, which stands for: Security (for the US); Accountability (regarding issues ranging from copyright to misinformation); protecting our Foundations (e.g. democracy); Explainability (the unsolved technical problem of understanding why AI systems do what they do).
EU: The United States has implemented a new framework agreed upon with the EU to govern transatlantic cross-border data flows. Considering that previous, similar frameworks were struck down by the EU’s Court of Justice, this may be a short-lived agreement.
EU: Spain, in charge of finalizing the EU’s AI Act, has proposed changes to the current draft to foster a compromise.
Japan: Japan is developing rules and regulations for AI, notably through its AI strategy council. An official said its rules would be softer than the EU’s.
Deep Dive: Compute Governance
AI’s three building blocks are data, computing power (or “compute”), and algorithms. Compute, which refers broadly to physical resources like servers and chips used for computation, is an ideal lever for governance because of its fundamental properties:
It’s a physically centralized requirement for training: Substantial amounts of highly specialized computing power, typically concentrated in large data centers, are needed to train frontier AI systems.
It has a globally centralized supply chain: The supply chain for highly specialized chips has several chokepoints: the few firms building its essential components, such as ASML, TSMC, NVIDIA, and AMD, are monopolies or members of highly concentrated oligopolies. These chokepoints are ideal levers for enforcing governance rules on all users of these specialized chips.
It has an irreplaceable supply chain: The supply chain for highly specialized computing power is extremely difficult to replicate1. A country could not unilaterally start building its own cutting-edge chips within a few months or even years, which makes genuinely global governance possible.
As seen in the graph below2, the amount of compute used for cutting-edge AI systems is doubling roughly every 6 months, and it will probably remain a primary driver of the capabilities of future systems.
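As a rough illustration of what a 6-month doubling time implies (our own arithmetic, not a figure taken from the Epoch paper cited above), training compute grows by a factor of about 2^(t/0.5) after t years:

```python
# Illustrative growth implied by a 6-month doubling time (our arithmetic, not Epoch data).
DOUBLING_TIME_YEARS = 0.5

def compute_multiplier(years: float) -> float:
    """Factor by which training compute grows after `years`, given the doubling time."""
    return 2 ** (years / DOUBLING_TIME_YEARS)

for years in (1, 2, 4):
    print(f"After {years} year(s): ~{compute_multiplier(years):.0f}x more compute")
# -> ~4x after one year, ~16x after two, ~256x after four
```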
Today, compute governance measures can be understood along two main axes: mapping and monitoring.
Mapping is a crucial first step: making sure the key chokepoints discussed above implement know-your-customer (KYC) measures that reveal which entity (e.g. an individual, a lab, or a country) owns how much compute. This makes it possible to determine who is in a position to train powerful models. Where there are doubts about the quantity an entity owns, inspections could be mandated.
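To make the mapping idea concrete, here is a minimal, purely hypothetical sketch of what a KYC-style compute registry kept at a supply-chain chokepoint (a chip vendor or cloud provider) could record; all names, fields, and quantities are illustrative assumptions, not drawn from any existing scheme:

```python
# Hypothetical sketch of a KYC-style compute registry kept at a supply-chain chokepoint.
# All names, fields, and quantities are illustrative assumptions, not an existing scheme.
from dataclasses import dataclass, field

@dataclass
class ComputeHolding:
    entity: str       # e.g. an individual, lab, or country
    chip_model: str   # the accelerator type that was sold or hosted
    chip_count: int   # number of chips attributed to this entity

@dataclass
class ComputeRegistry:
    holdings: list[ComputeHolding] = field(default_factory=list)

    def record_sale(self, entity: str, chip_model: str, chip_count: int) -> None:
        """Log each sale or hosting agreement at the chokepoint."""
        self.holdings.append(ComputeHolding(entity, chip_model, chip_count))

    def total_chips(self, entity: str) -> int:
        """Total chips attributed to one entity: the quantity an inspection would check."""
        return sum(h.chip_count for h in self.holdings if h.entity == entity)

# Example query: who holds enough chips to plausibly train a frontier model?
registry = ComputeRegistry()
registry.record_sale("ExampleLab", "accelerator-X", 25_000)
print(registry.total_chips("ExampleLab"))  # -> 25000
```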
The second step involves monitoring the training runs of powerful AI systems, which is made possible by mapping the entities with the capability to develop them. Since training an AI model is impossible without compute, cutting off access to these resources halts any further development. Compute can thus be used as a core lever to make sure labs in all countries comply with safety regulations. As fleshed out in a 2023 paper by Y. Shavit, sophisticated hardware mechanisms could even be implemented directly on AI chips in order to constrain training runs to be provably safe.
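Here is an equally hypothetical sketch of the monitoring step in its simplest form: a reporting rule that flags declared training runs above a compute threshold for verification. The threshold value and the workflow are assumptions made for illustration, not something taken from the Shavit paper or any existing regulation:

```python
# Hypothetical monitoring rule: flag declared training runs above a compute threshold.
# The threshold value and the workflow are illustrative assumptions only.
FLOP_REPORTING_THRESHOLD = 1e26  # assumed cutoff, in total training FLOP

def requires_verification(declared_training_flop: float) -> bool:
    """Return True if a declared training run is large enough to trigger verification."""
    return declared_training_flop >= FLOP_REPORTING_THRESHOLD

print(requires_verification(5e25))  # False: below the assumed threshold
print(requires_verification(2e26))  # True: would be flagged for inspection
```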
Overall, while compute governance is not especially useful at the national level (a country can simply pass laws and enforce them on its own territory), it could be a very useful tool for international cooperation, allowing two low-trust countries (e.g. China and the US) to make each other’s claims verifiable: China can trust the US when the latter says it is not training a new unregulated frontier AI system, and vice versa. That is the core use case of compute governance, and one that seems very hard to replace. Policymakers who want to increase their chances of success should consider investing in the technical groundwork now, which could mean several years of dedicated focus and adequate funding for relevant R&D.
What We’re Reading
FAQ on Catastrophic AI Risks, answering many common objections to AI extinction risks (Yoshua Bengio, Turing Award winner)
Allocating accountability in AI supply chains, on regulating and assigning responsibilities to the different actors throughout the AI supply chain (Ada Lovelace Institute)
Towards Measuring the Representation of Subjective Global Opinions in Language Models, looking at which opinions from regions around the world the responses of AI chatbots are most similar to (Anthropic)
Artificial intelligence and biological misuse, on the effects of AI on biological risks: large language models make it easier to engineer pathogens and/or build bioweapons, while narrower AI systems created to design new proteins or biological agents could be used to create new pathogens worse than the worst existing ones (Jonas Sandbrink)
How to Audit an AI Model Owned by Someone Else, on a new way to audit AI systems: “an external auditor will be able to propose a question about an AI system to its owner and related third parties and — if they approve the question — the auditor will be able to download the answer to that question without the auditor, AI owner, or third parties learning anything beyond what the group explicitly approved” (OpenMined)
Building a Culture of Safety for AI: Perspectives and Challenges, on the concept of safety culture, used in industries like nuclear power or healthcare, which “emphasizes preventive measures, standardized procedures, and a strong commitment to safety” (David Manheim)
Going public: the role of public participation approaches in commercial AI labs, on participatory AI approaches and the obstacles faced by commercial AI labs in implementing these practices in the development of AI systems and research (Groves et al.)
The Scramble for AI, a magazine edition with 7 articles about geopolitics and AI (Foreign Policy)
The Race to Regulate Artificial Intelligence, on why Europe Has an Edge Over America and China (Anu Bradford in Foreign Affairs)
Competition Between AI Foundation Models, on the increasing returns and antitrust implications of foundation models (Network Law Review)
That’s a wrap for this 8th edition. You can share it using this link. Thanks a lot for reading us!
— Siméon, Henry, & Charles.
Some claim that it is one of the most sophisticated supply chains humans have ever built, sustaining the famous Moore’s law for over 60 years.
Jaime Sevilla, Lennart Heim, Anson Ho, Tamay Besiroglu, Marius Hobbhahn, & Pablo Villalobos. (2022). Compute Trends Across Three Eras of Machine Learning.