#15 - Altman, Jinping, Biden, and O
Welcome to Navigating AI Risks, where we explore how to govern the risks posed by transformative artificial intelligence.
In this week’s newsletter:
The OpenAI Debacle
Governance of AI with Chinese Characteristics
The White House Tightens AI Oversight
Europe's Big Three Propose Voluntary AI Oversight
and more
“The fact that this danger did not lead to a catastrophe before is no guarantee that it will not the next time, unless it is completely understood. When playing Russian roulette the fact that the first shot got off safely is little comfort for the next.”
Richard Feynman, theoretical physicist and Nobel Prize winner
The OpenAI Debacle
Last Friday at 3:28pm (eastern time), the board of directors of OpenAI fired CEO Sam Altman for not being “consistently candid in his communications with the board, hindering its ability to exercise its responsibilities.”
This was unexpected, to say the least. Rumors over the weekend suggested that under investors’ pressure, Altman was negotiating with the board to return as CEO. On Sunday night, he joined Microsoft, OpenAI’s backer and compute provider. Finally, on Wednesday, Altman rejoined OpenAI as CEO, with a reshuffled board of directors (that does not include him anymore).
Given the perception of Altman as a primary driver of the company’s success, and thus of the decision as highly consequential, the board’s lack of a good explanation for the ousting did not sit well with many. On Monday, around 740 of OpenAI’s 770 employees gave the board an ultimatum: resign and reinstate Altman, or they might leave for Microsoft, which had guaranteed them jobs at a new advanced AI research team led by Altman and Greg Brockman, OpenAI’s president and chairman until he quit.
And yet, the board resisted public pressure. Its reluctance to cave to calls to reinstate Altman, or to resign, stems in large part from the fact that “the board resigning in response to investor pressure is like the exact thing this corporate structure was designed to avoid”, as Dylan Matthews, lead writer at Vox, puts it. What does this structure look like?
As you can see, the board of directors controls the OpenAI nonprofit, OpenAI, Inc, which itself controls the for-profit OpenAI LLC. According to the operating agreement between these two structures, the LLC’s duty to the “mission of ensuring that safe artificial general intelligence is developed and benefits all of humanity” and to the OpenAI charter takes “precedence over any obligation to generate a profit”.
This appears to be the ground on which the board decided to fire Altman. The CEO had tried to remove board member Helen Toner, from the Center for Security and Emerging Technology, who had written a paper that implicitly criticized OpenAI’s approach to AI safety. Combined with Altman’s reportedly repeated omissions, this led the board to lose trust in his commitment to OpenAI’s mission, which they seemingly saw as incompatible with the profit- and product-driven approach Altman was increasingly leading OpenAI to take, symbolized by the release of ChatGPT a year ago. On Wednesday, Toner was no longer included in the newly reformed board.
The backlash over the decision points to a real problem. Setting aside the validity of the many criticisms on Twitter regarding the board members' choices and character, the mere existence and extent of this criticism points to a lack of legitimacy that they weren’t able to remedy through communication.
The decision was made without the board being seen as legitimate enough to take a decision as consequential as firing the CEO (which is one of its few powers). On the investor side, this lack of legitimacy comes in large part from OpenAI’s structure: unlike traditional corporate boards, OpenAI’s has no fiduciary duty to shareholders. Its sole duty is to uphold OpenAI’s mission. On the employee side, it is because, apart from Altman himself and Chief Scientist Ilya Sutskever, no one on the board worked at or came from the company itself.
As a result, it was difficult for most to understand the board’s decision at a time when the company was doing remarkably well, en route to an $86 billion valuation.
The corporate structure was indeed designed for the type of situation where a CEO did not fulfill the company’s mission satisfactorily. If the board sensed this was the case, they are, in principle, legally obligated to take action.
The main problem is that the board communicated very poorly, and according to Emmett Shear, interim CEO until Altman returned, the board “did *not* remove Sam over any specific disagreement on safety”. The board's reluctance to cave under the weight of public pressure is understandable given its duty to OpenAI’s mission, yet they didn’t communicate their concerns to the public.
Another problem lies in Microsoft’s influence since its billion-dollar investment in 2019. Journalist Jeremy Kahn puts it well:
“By turning to a single corporate entity, Microsoft, for the majority of the cash and computing power OpenAI needed to achieve its mission, it was essentially handing control to Microsoft, even if that control wasn’t codified in any formal governance mechanism.”
Despite OpenAI’s mission for AI safety and corporate structure, ultimately, Altman and OpenAI were not only accountable to the board; they were also de facto beholden to Microsoft. Satya Nadella, the company’s CEO, is now reportedly trying to obtain a board seat and other governance changes at OpenAI.
A section of the operating agreement between OpenAI, LLC and OpenAI, Inc.
Should Altman’s ousting lead to weaker nonprofit oversight of the for-profit company, the board’s decision might ironically give him more control over OpenAI. This scenario is bad for the future of governance structures that ensure AI safety. After this debacle, other firms might avoid non-traditional structures. Future decisions like deploying advanced AI systems or activating a windfall clause need transparency and legitimacy. This might not be the case if what companies learn from this historic counterexample is that the worst can happen when a company is governed by a non-profit board.
Indeed, if the board had been tasked with the traditional mission of ensuring profit above all, they might have thought twice about such a high-profile firing, considering its potential to tarnish OpenAI's reputation and bottom line. Yet profit-making motives could easily conflict with OpenAI’s mission of safely and responsibly developing advanced AI.
Observers, including tech executives and policymakers, are likely to draw conclusions from this episode. One crucial question remains: will the board be seen as having been right to think that Sam Altman’s behavior threatened OpenAI’s mission? The answer to this question will shape future approaches to AI governance and safety. Watch this space.
Governance of AI with Chinese Characteristics
On November 15, Joe Biden and Xi Jinping met on the occasion of the Asia-Pacific Economic Cooperation summit, an intergovernmental forum. Among other things, they discussed artificial intelligence and announced a communication channel between the two countries on the issue.
The White House readout of the meeting states that the two Presidents “affirmed the need to address the risks of advanced AI systems and improve AI safety through U.S.-China government talks.” This follows the earlier launch of new working groups to discuss trade, investment, and export controls issues (NAIR #8). But we know remarkably little about how the Chinese government is thinking about AI safety and what it is doing about it. Is there a growing realization among Chinese elites that matters of AI safety are worth attending to?
China is already regulating AI. The country requires tech companies to register their algorithms, including AI models, with a regulator. China’s draft rules on generative AI require security assessments and impose limits on the types of data that can be used to train AI models (with the purpose of maintaining “social stability”). China is also implementing a system for ethics review of risky AI models during the R&D phase. Now, China is considering a new AI law. An expert draft written by legal scholars (as is common in China) gives a preview of the kinds of rules that may be coming. Among them, the law could build upon the algorithm registry to require licenses for the riskiest use cases.
AI safety measures: In April 2023, the top 24 officials in China's leadership came together for a session of the Politburo of the Communist Party to discuss the economy. They highlighted the need to focus on advanced AI and manage its potential risks (while promoting innovation). This message from the top trickled down quickly. Within a month, Beijing's government rolled out new policies to boost AI innovation, such as improving computing and data resources and enhancing research on large AI models. They also introduced safety measures, like third-party evaluations and model security checks, to address concerns about AI safety. This is one among the many lessons of a comprehensive report on the state of AI safety in China by Concordia Consulting, a policy advisory group.
Until recently, China focused almost exclusively on maintaining “social stability” (read: censorship and surveillance) and encouraged its companies to focus on making increasingly capable systems. But they're now starting to show interest in AI safety. At two major business forums in the past year, the Zhongguancun Forum and the BAAI Conference, there were detailed debates on this issue. AI risks are a hot topic, with many experts joining global calls for caution, such as the Future of Life Institute’s call for an AI pause (NAIR #1) and the Center for AI Safety’s “Statement on AI risk” (NAIR #6).
Right before the UK’s AI Safety Summit, experts from around the world, including China, urged governments to enact policies to mitigate risks from advanced AI. Their plan included the compulsory registration of cutting-edge AI models, strict safety red lines that would lead to the discontinuation of models if breached, and firms dedicating an investment of at least one-third of AI research and development funds toward AI safety. This agreement took place at the first International Dialogue on AI Safety, led by Yoshua Bengio, Stuart Russell, Ya-Qin Zhang, and other prominent scientists from the United States, the United Kingdom, Canada, the European Union, and China (The Institute for AI Policy and Strategy, a think-tank, recently released a guide to holding such track II diplomatic dialogues around AI safety).
International Initiative: On July 18, at the first UN Security Council session on artificial intelligence (NAIR #8), renowned Chinese scientist Zeng Yi voiced a serious concern that AI could pose a threat to human survival. Finally, in October 2023, President Xi Jinping introduced the (aptly-named) Global AI Governance Initiative, stressing China's commitment to working on AI at the international level. The initiative’s inaugural statement calls for promoting “the establishment of a testing and assessment system based on AI risk levels” and for “R&D entities” to “ensure that AI always remains under human control.”
AI governance with Chinese characteristics: Concordia Consulting’s report highlights the concept of "bottom-line thinking" as a key way the Chinese government approaches AI governance. The idea gained traction through President Xi Jinping and has been a frequent reference point by both Xi and the Chinese Communist Party in various situations, including dealing with pandemics and financial uncertainties. While the term isn't strictly defined, the underlying intuition is that China should recognize potential extreme negative outcomes and actively work to prevent them. This approach has not only become a part of broader discussions in China concerning AI dangers but is also evident in key national and international policy documents focusing on AI governance, emphasizing a careful and safety-first approach to AI development. This cautious stance resonates with the precautionary principle, a parallel that could pave the way for collaborative efforts in managing AI advancements responsibly.
Unique selling point?: Last month, the US updated its export controls on semiconductors and chip-making equipment bound for China. This map by the Rhodium Group, a policy research firm, shows the countries (in orange) where those new controls are the most stringent:
BIS is the “Bureau of Industry and Security”, an entity within the US Department of Commerce in charge of export control policy. Depending on which category a country belongs to, it will have easier (in blue) or harder (in yellow and orange) access to US-made chips and chip-making equipment (source)
In its statement announcing the Global AI Governance Initiative, whose contours are still vague, China said it opposes “creating barriers and disrupting the global AI supply chain through technological monopolies and unilateral coercive measures.” In other words, they’re not happy with US export controls, an issue they’ve repeatedly brought up, publicly and at the diplomatic level, since the US first imposed such restrictions in October 2022. The Chinese ambassador at the UN said in July that a “certain developed country” was obstructing China’s technological development.
Through its new diplomatic initiative on AI governance, China seeks to attract the countries that resent Washington’s brand of economic warfare and that feel the negative impact of such restrictions. The countries in yellow and orange on the map above are all potentially concerned. This focus on “fairness” and global equity in access to AI helps China distinguish its initiative from Western-led ones.
The level of public and elite awareness of AI safety in China should not be exaggerated. The country still has a long way to go, even compared to today’s United States and United Kingdom. We know little about what the Chinese public thinks of advanced AI, because there isn’t much data. From what little we do know, both the public and AI experts acknowledge the risks of AI that can think like humans, but believe those risks can be managed and that development should continue. That matters: even though China is an autocracy, there is still a semblance of public accountability over government policies.
The White House Tightens AI Oversight
On October 30, Biden’s White House released an executive order on artificial intelligence.
This executive order may be the first truly significant step by the White House to tackle the risks of advanced AI.
A large part of the directive tasks various agencies and entities with conducting studies and drafting guidelines. But for the first time in US AI safety policy, a significant set of provisions is legally binding. As outlined in the order’s fact sheet, “companies developing any foundation model that poses a serious risk to national security, national economic security, or national public health and safety must notify the federal government when training the model, and must share the results of all red-team safety tests” before releasing a new model.
In other words, AI companies will need to keep the US government in the loop about how they’re keeping their AI tech safe and what tests they’ve done. Important detail: These rules will temporarily apply to AI models whose training used "a quantity of computing power greater than 10^26 FLOP." This is a placeholder figure, until the Department of Commerce comes up with something more permanent.
But this threshold excludes all currently available AI services. GPT-4, OpenAI’s most advanced publicly available model, was reportedly trained with roughly one-fifth of this amount of computing power. Companies worried about protecting their secrets might also deliberately train powerful models that stay just under the compute limit. Still, these reporting requirements may well apply to the next generation of AI models.
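To get a feel for what the 10^26 FLOP threshold means in practice, here is a minimal back-of-the-envelope sketch of how total training compute is typically estimated. The chip performance, utilization, and run-length figures below are illustrative assumptions, not numbers from the order or from any disclosed training run.

```python
# Back-of-the-envelope estimate of whether a training run crosses the
# executive order's 10^26 FLOP reporting threshold.
# All hardware and utilization figures are illustrative assumptions.

REPORTING_THRESHOLD_FLOP = 1e26  # threshold quoted in the executive order


def training_compute(num_chips: int, flop_per_chip_per_s: float,
                     utilization: float, days: float) -> float:
    """Total compute ~= chips x per-chip FLOP/s x average utilization x seconds."""
    return num_chips * flop_per_chip_per_s * utilization * days * 24 * 3600


# Hypothetical frontier-scale run: 50,000 top-end chips delivering ~1e15 FLOP/s
# each, 40% average utilization, 100 days of training (all assumed values).
run = training_compute(num_chips=50_000, flop_per_chip_per_s=1e15,
                       utilization=0.4, days=100)

print(f"Estimated training compute: {run:.2e} FLOP")                     # ~1.7e26 FLOP
print("Above the reporting threshold?", run > REPORTING_THRESHOLD_FLOP)  # True
```

Under these assumed figures, such a run would have to be reported; halve any one of the inputs and it would fall back under the threshold.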
The National Institute of Standards and Technology will develop standards for red-team testing of these models by August 2024, while the executive will use its powers under the Defense Production Act to compel AI developers to share the results.
This reporting requirement will give the government a lot of information about the frontier of AI capabilities, enough information to inform future regulations, and perhaps trigger executive action if (or when) a model’s capabilities reach dangerous levels.
However, the 63-page order does not specify the consequences if a company reports a potentially dangerous model, leading to speculation about the White House's limited capacity to address certain AI issues. What can the government actually do if it doesn’t like a company's safety test results?
Experts can't agree on what would happen next. Some think the government would intervene, possibly stopping or even scrapping the AI model, using the broad powers afforded to the executive by the Defense Production Act. But this might get legal pushback from AI developers. Most affected companies are already working with the government on AI safety, but they may object to such far-reaching decisions.
Additionally, potential loopholes exist: while the safety testing disclosure requirements apply to new models before their release, it is unclear if companies must report safety test results for each subsequent update of the model.
The second big policy in the order: By January 30, 2024, the Secretary of Commerce will have to propose regulations requiring US cloud service providers to report to the government when a foreign company uses that provider’s computing power to “train a large AI model with potential capabilities that could be used in malicious cyber-enabled activity”. The order also suggests that if the foreign company in question refuses to provide information to the cloud provider, the provider won’t be able to sell it access to AI chips. These providers will now have to verify the identities of anyone from outside the US who uses their services. In short, this is a requirement for cloud providers to implement the “know-your-customer” policies that we previously discussed (NAIR #11).
Lastly, the order sets a reporting threshold for computing clusters that sits far above what even the biggest known clusters offer today: clusters above it will have to report usage information to the government. The threshold is a whopping 100 exaFLOP/s, equivalent to the power of roughly 50,000 cutting-edge chips (worth approximately 1 billion dollars); a quick back-of-the-envelope check follows the chart below:
Aggregated Computational Performance (FLOP/s) of Publicly Known AI Computing Clusters (Source)
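For intuition, here is the rough arithmetic behind that figure, as a minimal sketch; the per-chip performance and price used below are assumptions chosen for illustration, not values from the order.

```python
# Rough arithmetic behind the order's cluster-reporting threshold.
# Per-chip performance and price are assumptions for illustration only.

CLUSTER_THRESHOLD_FLOP_PER_S = 1e20  # 100 exaFLOP/s, as set by the order
CHIP_PEAK_FLOP_PER_S = 2e15          # ~2 petaFLOP/s per cutting-edge AI chip (assumed)
CHIP_PRICE_USD = 25_000              # rough per-chip price (assumed)

chips_needed = CLUSTER_THRESHOLD_FLOP_PER_S / CHIP_PEAK_FLOP_PER_S
print(f"Chips needed to reach the threshold: {chips_needed:,.0f}")          # 50,000
print(f"Approximate hardware cost: ${chips_needed * CHIP_PRICE_USD:,.0f}")  # ~$1,250,000,000
```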
If Congress doesn’t come together on the hot-button topic of AI regulation, President Biden’s executive order may be the only rule on AI we’ll see in the US for quite a while. The fact that executive orders can’t be enforced as strongly as laws is a big deal. They serve as a stopgap measure, used when passing a law through Congress is not feasible. The order can also be revoked by a future president at will. This shows the limits of executive action.
(There are many themes and priorities addressed by the order, so we focused on a few specific examples. If you want to learn more, see this breakdown of policies, this tool to navigate the text of the executive order itself, and this analysis by think-tank CNAS.)
Europe's Big Three Propose Voluntary AI Oversight, Resisting Stricter Rules
The EU AI Act is almost at the finish line, but hurdles remain on the track.
Europe's big three economies, France, Germany, and Italy, are making waves in the EU tech world. They're pushing back on stringent regulations for cutting-edge artificial intelligence, particularly foundation models such as OpenAI's GPT-4 or Google's Bard.
These countries recently shared a document advocating a self-regulatory approach for AI firms. They're floating the idea of AI companies voluntarily disclosing key details about their models and adhering to voluntary codes of conduct. Initially, there would be no sanction for non-compliance, but that could change if companies consistently play fast and loose with these guidelines.
European lawmakers had a different take. They wanted to make AI developers jump through more hoops, regardless of the AI's intended use. This included mandatory third-party testing and extra rules for more potent models. The European Commission, too, wanted strict requirements for foundation model providers. But the European trio isn't buying it, dismissing these ideas in their proposal.
In response, the European Commission tweaked their own proposal on November 19, softening their approach and taking up Germany’s and France’s idea of a voluntary code of conduct for the most advanced AI models. The only mandatory aspect of these codes of conduct would be for AI companies, through model cards, to report information about their most powerful models, including their risk assessment and mitigation measures, strategies for safe deployment, and even more details for foundation models that pose a “systemic risk”.
The Parliament is adamant that leaving foundation models unregulated isn't an option. Both Germany and France have a vested interest in a more relaxed regulatory environment, as they don’t want barriers to the innovative capacity of their AI “champions” Aleph Alpha and Mistral AI, who openly criticize the idea of strict foundation model regulation. Cedric O, an advisor to Mistral, was previously France’s secretary of state for digital policy.
The clock's ticking, with a December 6 deadline breathing down the necks of negotiators. The stakes are high, especially with the European Parliament's elections looming in mid-2024, squeezing the window to get this law across the finish line.
What else
On November 30th and December 1, policymakers and technical experts will take part in the 5th edition of the Athens Roundtable, a global conference focused this year on the governance of foundation models. Speakers will include US Senator Richard Blumenthal, Turing Award winner Yoshua Bengio, UN Secretary-General’s Envoy on Technology Amandeep Singh Gill, European MEP Dragoș Tudorache, and many others. You can register to attend online or in-person here.
United States
During his second ‘AI Insight Forum’, Senator Schumer suggests dedicating $32 billion to AI innovation.
Nvidia may have to cancel up to $5 billion in orders for advanced chips from China after the US export control updates in October.
Yoshua Bengio, Geoffrey Hinton, Stuart Russell, Daniel Kahneman, and many other researchers signed an open letter titled “Managing AI Risks in an Era of Rapid Progress”, calling for companies to reorient technical R&D towards AI safety, and for states to create international agreements, compute monitoring, whistle-blower protections, and other measures.
The US Federal Trade Commission publishes a blog post summarizing the results of a public consultation on cloud computing regulation, hinting at where its efforts will lie in the future.
The U.S.-China Economic and Security Review Commission releases its 2023 Annual Report to Congress.
China
Nvidia develops (yet again) new chips under the thresholds of recently-updated US export controls to keep selling to Chinese companies.
China's chipmaking equipment imports surge 93%, as the country expects increasingly tough semiconductors export controls.
China launches a new data agency, the ‘National Data Administration’, reportedly to strengthen regulation of the country’s vast data pool.
Europe
UK Prime Minister Rishi Sunak wants to spend £400 million on AI chips and supercomputers.
The European Commission releases rules related to third-party auditing designed to check the compliance of large online platforms with the bloc’s Digital Services Act. See the text and the audit report template here.
The UK Prime Minister says the country will refrain from regulating AI in the short-term.
Global & Geopolitics
G7 countries release 11 non-binding guiding principles and a (more detailed) code of conduct for companies that develop AI models, which includes requirements to “Publicly report advanced AI systems’ capabilities, limitations,” or “Develop, implement, and disclose AI governance and risk management policies”, largely based on the US’ voluntary commitments (NAIR #10).
31 countries endorse the US’ “Political Declaration on the Responsible Military Use of Artificial Intelligence and Autonomy”, which includes norms on properly training personnel, building in critical safeguards, and subjecting capabilities to rigorous testing and legal review.
The United States drops proposals it had made at the World Trade Organization that cross-border data flows should be unconstrained and that national data localization requirements should be prohibited, giving the country more room to design tech regulation.
The EU and US are welcoming inputs on their common terminology of 65 key AI terms, in a bid to foster mutual understanding of risk-based approaches to regulating AI.
The OECD launches the ‘AI Incidents Monitor’, an online repository of AI incidents.
Industry & Capabilities
As AI regulation looks increasingly likely, companies from all sectors are hiring AI lobbyists.
Elon Musk’s X.ai announces Grok, an AI chatbot based on a model with 33 billion parameters.
Meta’s Yann LeCun and 70 others sign a letter calling for more openness in AI development.
Through his $1 billion start-up, Chinese AI investor Kai-Fu Lee unveils an open-source model as capable as Meta’s Llama 2.
Google announces new ‘Secure AI Framework’ for the cybersecurity of AI models.
Scale AI launches SEAL, a new “frontier safety lab” that will work on “evaluations, red-teaming, and scalable oversight”.
OpenAI holds its first developer conference (DevDay) making several announcements, including a way for customers to create new “agent-like” AI systems (see this thread on the governance implications of such systems).
Google and Anthropic expand their existing partnership, increasing Anthropic’s access to Google’s AI chips.
Meta / Facebook moves employees of its ‘Responsible AI’ team to another team.
Google is in talks to invest in AI startup Character.AI.
By the numbers
Change in employment and earnings from writing and editing jobs on an online freelancing platform after the launch of ChatGPT (source)
What We’re Reading
Regulating the AI Frontier: Design Choices and Constraints, comprehensive summary of a workshop on regulatory definitions for “frontier AI” and the requirements that could be placed on frontier AI developers now and in the future.
Oversight for Frontier AI through a Know-Your-Customer Scheme for Compute Providers, on what needs to be done by the government to enact a scheme where compute providers can verify their clients’ identity, with the goal of ensuring visibility into frontier AI development and close loopholes in existing export controls.
An International Consortium for Evaluations of Societal-Scale Risks from Advanced AI, a paper that discusses the current AI evaluation ecosystem, proposes an international consortium for advanced AI risk evaluations, discusses lessons that can be learnt from previous institutions and proposals for new ones, and suggests what should be done to establish a consortium.
Moderating Model Marketplaces: Platform Governance Puzzles for AI Intermediaries, on the governance challenges posed by platforms, like HuggingFace, that provide easy access to AI models, and how to control such access (including through “licensing, access and use restrictions, automated content moderation”)
Adapting Cybersecurity Frameworks to Manage Frontier AI Risks: A defense-in-depth approach, outlines three approaches that can help identify gaps in the management of AI-related risks (a functional approach – covering categories of activities –, a lifecycle approach – assigning security and safety activities throughout the lifecycle of AI development –, and a threat-based approach – identifying the techniques and procedures used by malicious actors)
The state of implementation of the OECD AI Principles four years on, a report by the OECD on the implementation of one of the most influential set of AI principles.
Structured access for third-party research on frontier AI models: Investigating researchers’ model access requirements, looks at the right balance between countering the proliferation of dangerous models through restricted release strategies and providing sufficient access to the model in order to enable external research and evaluation.
What We Can Learn About Regulating AI from the Military, suggests that the U.S. military's approach to managing powerful technologies through qualifications, standard operating procedures (SOPs), and delineated authorities can serve as a model for regulating AI.
AI is like… A literature review of AI metaphors and why they matter for policy, reviews why and how metaphors used by policymakers and the public matter to both the study and practice of AI governance, including a survey of cases where the choice of analogy materially influenced the regulation of internet issues and a discussion of the risks of bad analogies.
Levels of AGI: Operationalizing Progress on the Path to AGI, proposes a framework for classifying the capabilities and behaviour of advanced AI systems, based on depth (performance) and breadth (generality) of capabilities, and reflects on how current systems fit into this ontology.
That’s a wrap for this 15th edition. You can share it using this link. Thanks a lot for reading us!
— Siméon, Henry, & Charles.