#3 - Big AI Goes to the White House + Pandora's AI Box? + AI Treaties
Welcome to Navigating AI Risks, where we explore how to govern the risks posed by transformative artificial intelligence.
Once again, it's been a week filled with numerous developments in AI. In light of this, you’ll now receive the newsletter on a weekly basis. For this 3rd edition, we’ll talk about leaked documents, high-level meetings, labor displacement, international agreements, and more.
Governance Matters, our long-form section where we delve into fundamental questions in AI governance, will be published separately from the weekly newsletter.
Let’s dive in!
In the Loop
The White House has entered the AI Game
Yesterday, the CEOs of OpenAI, Anthropic, Google, and Microsoft, the world’s most advanced AI companies, went to the White House for a meeting hosted by US Vice President Kamala Harris, where they also met with the President of the United States.
According to the official readout, the goal of the meeting was to “share concerns about the risks associated with AI.” Three key areas were mentioned: corporate transparency about developing and deploying AI models; the need to “evaluate, verify, and validate the safety, security, and efficacy of AI systems;” and the risk of AI misuse.
That same day, the Biden administration made two important announcements: it will publish guidelines on government use of AI and create 7 new National AI Research Institutes, adding to the 18 existing institutes that form part of the country’s basic AI research infrastructure.
Lastly, and perhaps most notably, the White House announced “an independent commitment from leading AI developers, including Anthropic, Google, Hugging Face, Microsoft, NVIDIA, OpenAI, and Stability AI” to participate in DEF CON, a hacker convention, in August. Their AI systems will be “evaluated thoroughly by thousands of community partners and AI experts.” Some are already calling this the "biggest ever public safety and security test of artificially intelligent models."
The organizers of the conference see it as a way to find bugs in the models and “red team” them, i.e., identify security and safety vulnerabilities. This seems somewhat out of line with the administration’s goal, which is to evaluate AI models’ alignment with the Blueprint for an AI Bill of Rights and NIST’s AI Risk Management Framework, two sets of voluntary rules much broader in scope than cybersecurity alone. As Alexandra Reeve Givens, CEO of the Center for Democracy and Technology, underlines, these frameworks have so far gone largely untested. Since they were not designed for generative AI systems, one may wonder whether they are “specific enough to be useful & prompt meaningful review”.
Joe Biden’s intervention is also noteworthy. While recognizing the “enormous dangers” posed by AI (as well as its substantial benefits), the US President asked these top AI CEOs to “educate” the executive branch about what they “think is most needed to protect society”. This reflects a common theme in the administration’s remarks on AI over the past few days: that “companies have a fundamental responsibility to make sure their products are safe”. For now, the large number of failed AI model releases and the lack of prioritization of responsible AI teams’ work at several of these companies do not bode well for their ability to fulfill this “fundamental responsibility.” Yet the possibility that the government should take up the mantle if they fail to do so is seldom put forward. In other words: the US federal government will refrain from regulating until it has no other choice. As French mathematician and science communicator Lê Nguyên Hoang points out:
"In mature industries (such as aviation or pharmaceuticals), independent regulators verify product safety before commercialization. But in the field of AI, the President of the United States is relying on Big Tech to evaluate the safety of their own products. Imagine if the President surrounded himself with airlines to "educate" him on the environmental consequences of their activity and solutions to mitigate environmental risks, without consulting independent experts."
Indeed, independent AI experts such as Geoffrey Hinton, Dan Hendrycks, or Paul Christiano would be well positioned to inform the government’s approach to AI risks and help deliberate on the right safety measures. One way to do that would be to make this sort of meeting a regular occurrence while broadening the guest list to include other stakeholders and experts, all while paying attention to the risk of regulatory capture.
Has Pandora's AI Box Been Opened? Insights from a leaked internal Google document
A senior software engineer at Google wrote in a leaked internal document that the open-source AI community is rapidly catching up to the company – and to large AI companies more broadly – in developing advanced AI models. The document is an interesting window into how people inside AI companies think about competitive dynamics in the industry.
“Who would pay for a Google product with usage restrictions if there is a free, high quality alternative without them?”, asks the author, apparently worried that Google is behind not only OpenAI but also the open-source AI community. He takes the example of LLaMA, a model made by Meta whose weights were leaked in March, after which independent researchers and tinkerers started modifying the model and building novel applications on top of it. Although the culture of AI research is very much open and collaborative, this was the first time researchers outside of major AI labs had complete access to an advanced AI model: “The barrier to entry for training and experimentation dropped from the total output of a major research organization to one person, an evening, and a beefy laptop”.
It is true that the open-source AI community has caught up remarkably fast with the performance of closed-source models created by AI companies. However, training state-of-the-art (SOTA) systems beyond current capabilities will probably require significant computational and financial resources: catching up is much easier than pushing the SOTA forward. OpenAI’s CEO said that training their most powerful model, GPT-4, cost $100 million, a level of financial firepower that open-source projects can’t match. Independent researchers and small companies have innovated a lot by creating downstream applications of models made by large AI firms, but it’s unlikely that the next needle-moving model will be developed by anyone other than those firms.
Open-source research has many benefits: it diffuses innovations widely, including those that might not come to fruition from a purely profit-seeking perspective; it gives researchers the opportunity to study and improve the models; and, importantly, it greatly facilitates access to those models, even for non-technical users. But in the case of powerful AI models, that accessibility seems riskier than in other areas of science.
When a company controls access to an AI model, often through an API, it can set limits on certain use-cases. For example, OpenAI forbids users and developers from using any of its models to generate political campaign materials or to give financial or medical advice without human oversight. If someone is found to be in violation of these rules, their account may be terminated. Such mechanisms don’t exist for open-sourced models. Anyone can potentially access and use them. As pointed out by the author of the leaked document, you can now run powerful large language models on your iPhone. What will happen when progress in computing capacity or algorithmic improvements allow anyone to run an AI model that can generate “40,000 chemical warfare agents entirely on its own” in under 6 hours? Or a model that significantly reduces the costs of carrying out cyberattacks? The risk that such models may easily proliferate because they are open-sourced, and may amplify malicious actors’ ability to impose harm on others, raises many important questions.
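To make the contrast concrete, here is a minimal sketch, in Python, of the kind of server-side gate a hosted model can enforce. Everything in it (the policy categories, the generate() stub, the flagging logic) is hypothetical and vastly simplified; real providers rely on trained classifiers and human review, not keyword lists.

```python
# Toy illustration of server-side usage-policy enforcement for a hosted model.
# The policy categories and the generate() stub are hypothetical placeholders.

BLOCKED_CATEGORIES = {
    "political_campaigning": ["campaign ad", "attack ad"],
    "unsupervised_medical_advice": ["diagnose me", "what dose should i take"],
}

def violates_policy(prompt: str) -> str | None:
    """Return the violated category, or None if the prompt looks acceptable."""
    lowered = prompt.lower()
    for category, phrases in BLOCKED_CATEGORIES.items():
        if any(phrase in lowered for phrase in phrases):
            return category
    return None

def generate(prompt: str) -> str:
    # Placeholder for the actual model call.
    return f"[model output for: {prompt!r}]"

def handle_request(account_id: str, prompt: str) -> str:
    category = violates_policy(prompt)
    if category is not None:
        # A hosted provider can log the violation, warn the user,
        # or terminate the account.
        return f"Request refused (policy: {category}); account {account_id} flagged."
    return generate(prompt)

if __name__ == "__main__":
    print(handle_request("acct-42", "Write an attack ad about my opponent."))
    print(handle_request("acct-42", "Summarize this meeting transcript."))
```

Once a model’s weights are public, there is no equivalent choke point: whoever runs the model locally simply calls it directly, with no policy check and no account to terminate.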
Generative AI is starting to replace jobs
Until recently, worries that AI might replace jobs rested mostly on arguments made by academics or on foresight studies. Today, the evidence for this trend is growing, especially for cognitive work. The CEO of IBM plans to slow down or pause hiring for “non-customer-facing” roles and replace them with AI systems, which could eliminate up to 7,800 jobs over the next five years. Walmart is starting to use a chatbot to negotiate with its suppliers, achieving average savings of 1.5%. Relatedly, the stock of Chegg, a company that provides online homework help, fell by half because students are increasingly turning to ChatGPT and other generative AI tools.
The creative industries are also particularly at risk of seeing their jobs automated. Professionals in music, illustration, and screenwriting are starting to demand regulation to prevent this, raising copyright concerns, including over companies training AI systems on their work.
Compared with past (and still ongoing) waves of automation in robotics and software, generative AI will mostly be used to automate tasks done by educated workers. For now, there is no evidence of widespread or cross-industry unemployment, and in the past, technology has created more jobs than it automated. But there is no certainty that this will remain the case as AI takes over an increasingly large share of cognitive and manual work.
There are also concerns that AI will lead to more income going to those who own capital (such as the shareholders of AI companies) relative to those who use their skills to earn a wage. That dynamic will amplify income inequality, because capital ownership is highly concentrated.
Governments that are looking at the problem are mostly thinking about upskilling workers and strengthening traditional safety nets. In the long term, in case of widespread unemployment or a massive increase in income inequality, more ambition will be needed. Researchers from Oxford University have one such ambitious idea: they propose that AI firms adopt a Windfall Clause, committing to “donate a significant amount of any eventual extremely large profits” generated by “transformative breakthroughs in AI capabilities.” Public policies requiring companies to share such windfall profits could also be a way to enforce or mandate this commitment.
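To illustrate the arithmetic of such a commitment, here is a minimal sketch with a made-up bracket schedule; the thresholds and rates below are hypothetical placeholders, not the figures proposed in the Oxford researchers’ Windfall Clause paper.

```python
# Toy illustration of a windfall clause: marginal donation rates that apply
# only to profits above successive thresholds. All figures are hypothetical.

HYPOTHETICAL_BRACKETS = [
    # (annual profit threshold in USD, marginal donation rate above it)
    (50e9,  0.10),   # 10% of any profit between $50B and $200B
    (200e9, 0.50),   # 50% of any profit above $200B
]

def windfall_obligation(profit: float) -> float:
    """Donation owed under the hypothetical marginal bracket schedule."""
    owed = 0.0
    for i, (threshold, rate) in enumerate(HYPOTHETICAL_BRACKETS):
        if profit <= threshold:
            break
        upper = (HYPOTHETICAL_BRACKETS[i + 1][0]
                 if i + 1 < len(HYPOTHETICAL_BRACKETS) else float("inf"))
        owed += rate * (min(profit, upper) - threshold)
    return owed

if __name__ == "__main__":
    for profit in (20e9, 100e9, 500e9):
        owed = windfall_obligation(profit)
        print(f"profit ${profit / 1e9:.0f}B -> owes ${owed / 1e9:.1f}B")
```

The key design choice is that the rates are marginal: a firm owes nothing until its profits cross the first threshold, so the commitment only binds in genuinely windfall scenarios.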
What else?
US: White House Office of Science and Technology Policy releases request for information on automated surveillance techniques in the workplace.
Industry: Turing Award winner and legendary AI researcher Geoffrey Hinton leaves Google, citing AI risk concerns.
US: The world’s first entirely AI-generated campaign ad made some waves last week.
UK: The United Kingdom announces new funding for a compute fund “to establish the UK as a world leader in foundation models.”
World: The most recent G7 meeting resulted in a joint declaration, with leaders of the world’s wealthiest countries agreeing on 5 high-level principles for AI governance.
US: A newly proposed bill would restrict the use of automated decision-making systems in nuclear launch systems, in effect banning the use of AI to launch nuclear weapons.
EU: ChatGPT resumes service in Italy after adding privacy disclosures and controls.
China: US export controls on semiconductors are not very effective at slowing down China’s AI industry.
Africa: 150 content moderation workers from Africa create the African Content Moderators Union.
UK: The country’s antitrust authority starts drafting a review of the market for AI foundation models.
US: Pointing to the crucial role of competition policy in AI governance, FTC Chair Lina Khan wants the agency to “vigorously enforce the laws we are charged with administering” on AI.
Deep Dive: Nuclear Arms Control Verification and Lessons for AI Treaties by Mauricio Baker
“Security risks from AI have motivated calls for international agreements that guardrail the technology. However, even if states could agree on what rules to set on AI, the problem of verifying compliance with those rules might make these agreements unenforceable. To help clarify the difficulty of verifying agreements on AI–and identify actions that might reduce this difficulty–this report examines the case study of verification in nuclear arms control. We review the implementation, track records, and politics of verification across three types of nuclear arms control agreements. Then, we consider implications for the case of AI. [...] The case study suggests that, with certain preparations, the foreseeable challenges of verification would be reduced to levels that were successfully managed in nuclear arms control. To avoid even worse challenges, substantial preparations are needed: (1) developing privacy-preserving, secure, and acceptably priced methods for verifying the compliance of hardware, given inspection access; and (2) building an initial, incomplete verification system, with authorities and precedents that allow its gaps to be quickly closed if and when the political will arises”.
Abstract of ‘Nuclear Arms Control Verification and Lessons for AI Treaties’ by Mauricio Baker
It is difficult to say whether competition for technology leadership in AI is strongest between US AI companies or between the Chinese and American governments. At any rate, the US government has the power to eventually intervene in the corporate race, establish rules, and enforce them. On the international stage, the story is more complicated. Anarchy is the rule: no global authority or sovereignty exists that can impose rules on states. Despite the misuse, accident, and structural risks posed by AI, there are natural barriers to setting joint rules. Even if China and the United States somehow found themselves in a position to agree to stop a race to the bottom and, for instance, pool their efforts on AI safety research, they might not trust each other to respect their side of the agreement.
This is why proposals such as Baker’s are important: they are a step towards implementing stringent international agreements on rules of the road for AI. Like Shavit, whose proposal we surveyed in our first edition, Baker focuses on hardware as a key lever for international AI governance; more specifically, his report examines the feasibility of “verifying rules on one increasingly important AI activity: training machine learning models with industrial-scale, specialized computer chips.” While he doesn’t focus on any specific rules that could be the object of an agreement, two options he mentions are (i) limiting the training of a model once it reaches a certain level of hacking capability, or (ii) requiring that models for lethal autonomous weapons be designed using specific technical approaches. Whereas Shavit concludes that it seems technically feasible to monitor compute usage to check whether such rules are respected, Baker draws lessons from the history and politics of nuclear arms control and finds that entering into agreements to verify rules on compute usage is politically feasible.
He suggests that “the main foreseeable challenges of hardware-based AI treaty verification would be ones that were manageable in nuclear arms control”:
The cost of verifying compliance: to check that a country is abiding by rules to train only certain models, you’d have to go there physically and inspect its data centers. Baker suggests that “direct inspection costs would be lower than or roughly similar to those which states accepted for nonproliferation verification”.
The transparency-verification tradeoff: a country wants to show it complies with the agreement by allowing access to what is being verified, but it also doesn’t want other parties to steal confidential and/or strategically valuable information. Baker argues that disclosing the location of data centers would be less sensitive than disclosing that of nuclear energy facilities (which, depending on the level of access, seems plausible), and that technical progress in developing “privacy-preserving and secure methods for inspecting AI chips” would substantially mitigate those concerns; a toy sketch of the commit-then-reveal intuition behind such methods follows below.
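To give a flavor of what “privacy-preserving” verification could mean in practice, here is a toy commit-then-reveal sketch in Python. It illustrates the general intuition only; every name and number in it is made up, and it is not Baker’s or Shavit’s actual proposal, which involve hardware mechanisms far beyond a simple hash.

```python
# Toy commit-then-reveal sketch: an operator regularly commits to hashes of
# its training-run logs; an inspector later asks to see a sampled log entry
# and checks it against the earlier commitment and an agreed compute cap.
# All identifiers and figures below are hypothetical.

import hashlib
import json

COMPUTE_CAP_FLOP = 1e25  # hypothetical agreed limit per training run

def commit(log_entry: dict) -> str:
    """Hash a log entry so it can be disclosed and checked later."""
    canonical = json.dumps(log_entry, sort_keys=True).encode()
    return hashlib.sha256(canonical).hexdigest()

def inspect(log_entry: dict, earlier_commitment: str) -> bool:
    """Verify the revealed entry matches the commitment and the compute cap."""
    if commit(log_entry) != earlier_commitment:
        return False  # the log was altered after the commitment was made
    return log_entry["total_flop"] <= COMPUTE_CAP_FLOP

if __name__ == "__main__":
    entry = {"run_id": "2023-04-17-a", "chip_count": 4096, "total_flop": 8e24}
    c = commit(entry)            # shared with the inspector at training time
    print(inspect(entry, c))     # True: log intact and under the cap
    entry["total_flop"] = 5e25   # tampering or an over-cap run...
    print(inspect(entry, c))     # ...fails verification
```

The point of committing first is that the operator reveals detailed logs only during a sampled inspection, yet cannot quietly rewrite them afterwards.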
What We’re Reading
“The Main Resource is the Human”: A Survey of AI Researchers on the Importance of Compute
Pausing AI? The Ethics, History, Epistemology, and Strategy of Technological Restraint
Understanding what nations include in their artificial intelligence plans
The EU and U.S. diverge on AI regulation: A transatlantic comparison and steps to alignment
Regulatory sandboxes for Artificial Intelligence – hype or solution?
From Fear to Action: AI Governance and Opportunities for All
That’s a wrap for this third edition. You can share it using this link. Thanks a lot for reading us!
— Siméon, Henry, & Charles.
(If you want to meet us, you can book a 15-minute call with us right here.)