#11 - AI in the Public Eye + AI Governance is (mostly) Compute Governance
Welcome back to NAIR after the summer break. This will be a pivotal year for AI governance, and we’re all here for it.
Subscribe to receive updates and analysis about transformative AI governance every two weeks:
Let’s dive in!
The Growing Consensus on Compute Governance
Nvidia, the company behind the world’s most sought-after AI chips, is reportedly asking cloud providers to disclose the identity of their customers. Although this could raise antitrust concerns, some see this practice as a promising way to ensure AI is used safely.
Know my customer? Know Your Customer (KYC), widely employed and legally mandated in the financial sector, is the process of verifying a customer’s identity and evaluating the risks of selling them a product or service. How could this be used for governing AI? One idea is for chip manufacturers to implement KYC so that they sell compute only to selected companies (those with robust safety practices or located in “trusted” jurisdictions).
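To make the idea concrete, a KYC screen at the point of sale could boil down to checking each compute order against a few vetting criteria before approving it. The sketch below is purely illustrative: the field names, the jurisdiction list, and the criteria are hypothetical, not an actual industry or regulatory scheme.

```python
# Illustrative sketch of a KYC screen for compute sales.
# All field names and criteria here are hypothetical.

TRUSTED_JURISDICTIONS = {"US", "UK", "EU", "JP"}

def kyc_approved(customer: dict) -> bool:
    """Approve a chip order only if the buyer's identity is verified,
    it sits in a trusted jurisdiction, and it has passed a safety audit."""
    return (
        customer.get("identity_verified", False)
        and customer.get("jurisdiction") in TRUSTED_JURISDICTIONS
        and customer.get("safety_audit_passed", False)
    )

order = {"identity_verified": True, "jurisdiction": "US", "safety_audit_passed": True}
print(kyc_approved(order))  # True
```

The hard part, of course, is not the check itself but agreeing on who maintains the allowlist and who performs the audits, which is exactly where the international frictions discussed below come in.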
Who wants this? In an interview with the Financial Times, Mustafa Suleyman, chief executive of Inflection AI and co-founder of DeepMind, came out in favour of the idea:
Washington should restrict sales of the Nvidia chips that play a dominant role in training advanced AI systems to buyers who agree to safe and ethical uses of the technology. At a minimum that should mean agreeing to abide by the same undertakings that some of the leading US AI companies made to the White House in July, such as allowing external tests before releasing a new AI system.
Researchers (as we will see below) and experts testifying before the Senate have also called for this mechanism to be implemented. Policymakers are already thinking about extensive monitoring of tech companies: a leaked draft agreement between the US government and TikTok would see the company agree to “supervision by an array of independent investigative bodies, including a third-party monitor, a third-party auditor, a cybersecurity auditor and a source code inspector” in order to remain in business in the US.
Why use this on compute? As one of the three major inputs to AI development, along with data and algorithms, compute is one lever for making sure AI is developed, deployed, and used safely. It also seems like the best one: unlike data or algorithms, compute has a physical presence and is produced through a highly concentrated supply chain, which makes it a far more controllable input. Many AI policy proposals, such as tracking advanced chip usage, controlling chip exports, or licensing access to chip clusters, are compute governance proposals.
Compute governance in US-China strategic competition: KYC requirements often work well, but there are obstacles to implementing them between countries: it is difficult for the US and China to agree on the level of transparency required for KYC to work. Still, the promise is there. US export controls were largely imposed out of fear that advanced semiconductors would be used to modernize the Chinese military; if there were a way to verify that the end-users of such chips were commercial actors, much of that concern would be alleviated. That’s the conclusion of a recent diplomatic dialogue between civil society and business leaders on both sides of the Pacific:
In view of the U.S. side's concern regarding the dual-use nature of high-performance semiconductors on which it has imposed controls, we recommend consideration of a pilot program to permit limited Chinese users who can submit to a robust end-user verification and auditing program to use such chips strictly for civilian purposes. Such a pilot program could help build trust and confidence so that, in some limited circumstances, exports could be permitted of these high-performance semiconductors.
The problem: Only one part of the equation is solvable for now. An end-user verification program could be carried out through governance audits, in which an organization’s safety and security processes (and, in this case, its independence from the military) are verified by competent and neutral inspectors. That would be a significant step forward, but it is no bulletproof solution: a company could seem perfectly responsible on Monday and decide on Tuesday to train a system to develop biological weapons.
One solution? Instead of auditing companies, how about auditing states? That’s the core idea of a recent paper published by expert AI governance researchers, which they call the “jurisdictional certification approach” to international AI governance:
We propose that states establish an International AI Organization (IAIO) to certify state jurisdictions (not firms or AI projects) for compliance with international oversight standards. Jurisdictions that fail to receive certifications (e.g. because their regulations are too lax or they fail to enforce them) are excluded from valuable trade relationships – or otherwise suffer negative consequences.
One of the standards would be a commitment to ban the import of goods that integrate AI systems from uncertified jurisdictions. Another standard could be a commitment to ban the export of AI inputs (such as specialised chips) to uncertified jurisdictions. The participating states’ trade policies would thereby incentivise other states to join the IAIO themselves and receive certifications.
The best solution? Another approach to verifying safe AI development is to check on an individual chip what types of computations are performed, and compare those against the claims of the developer. As we previously reported, researcher Yonadav Shavit proposed one such framework, which involves a privacy- and confidentiality-preserving mechanism to reliably determine what chips are used for. However, a lot more research and development remains to be done before his framework can be implemented.
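The core idea behind this family of schemes is a commit-and-audit loop: the chip records cryptographic commitments to snapshots of what it computed, and an inspector later checks whether the developer’s claimed workload reproduces those commitments. The toy sketch below is a deliberately simplified illustration of that general pattern (Shavit’s actual proposal is far more involved and privacy-preserving); all names are hypothetical.

```python
# Toy illustration of commit-and-audit verification of a training run.
# This is NOT Shavit's framework, just the underlying commit/verify pattern.
import hashlib

def commit(snapshot: bytes) -> str:
    """Chip-side: log a hash commitment to a snapshot of the computation."""
    return hashlib.sha256(snapshot).hexdigest()

def audit(claimed_snapshots: list[bytes], logged_commitments: list[str]) -> bool:
    """Inspector-side: the developer's claimed run must reproduce
    every commitment the chip logged, in order."""
    return [commit(s) for s in claimed_snapshots] == logged_commitments

run = [b"weights_step_0", b"weights_step_1000"]
log = [commit(s) for s in run]      # recorded on-chip during training
print(audit(run, log))              # True: claims match the log
print(audit([b"other_run"], log))   # False: a mismatch flags the claim
```

The open research questions are precisely the parts this sketch waves away: making the logging tamper-proof in hardware, and checking claims without revealing the developer’s model weights to the inspector.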
Public Sentiments on AI Risk and Governance
Surveys are a useful tool for understanding where AI policy is likely to go next and which concerns will be addressed. Here is an update on what the (American) people are thinking.
Concerns over existential risk from AI:
How concerned, if at all, are you about the possibility that AI will cause the end of the human race on Earth?
Very concerned: 19%
Somewhat concerned: 27%
Not very concerned: 23%
Not at all concerned: 17%
How likely do you think it is that artificial intelligence (AI) will eventually become more intelligent than people?
Very likely: 27%
Somewhat likely: 30%
Not very likely: 14%
Not likely at all: 9%
It is already more intelligent than people: 6%
US citizens seem surprisingly fine with regulating AI even if it means China gets to catch up:
Which of the following do you think is the bigger risk?
Government regulation slowing down the development of AI in the U.S. allowing other countries to dominate the space: 21%
Unchecked development of AI driving disinformation and economic chaos: 75%
Do you agree with the following statements:
An international moratorium on advancing AI capabilities is viable and can be effective: 44% (down from 47% in June 2023)
The US government is doing enough to regulate the AI industry: 20% (down from 25% in June 2023)
On accountable AI development:
Would you support or oppose a six-month pause on some kinds of AI development?
Strongly support: 41%
Somewhat support: 28%
Somewhat oppose: 9%
Strongly oppose: 4%
How much do you trust, if at all, the companies developing AI systems to do so carefully and with the public's well-being in mind?
A great deal: 2%
A little: 36%
Not at all: 39%
For more, the website AI Impacts has a long list of surveys of US public opinion on AI.
The mandatory word of caution: Similar surveys yield different results. As many studies show, people respond very differently to surveys after even the slightest change in the wording of the question. It’s nearly impossible to say whether a given survey truly tells “the truth” about public opinion.
One example: According to the above YouGov poll, in total 46% of respondents are somewhat or very concerned about the possibility that AI will cause the end of the human race on Earth. Another poll by NGO Rethink Priorities tells another (possibly contradictory) side to the story: when asked to choose the most likely cause of human extinction, people ranked AI last, behind nuclear war, climate change, an asteroid impact, a pandemic, and even “some other cause”. Although public awareness is growing, the general public is not convinced that AI poses an existential risk.
Elite-public divide? As noted by Jack Clark, CEO of Anthropic, there appears to be “a divergence between elite opinion and popular opinion. [...] Private company leaders are racing with one another to aggressively develop and deploy AI systems and lock-in various economic and ecosystem advantages, and many policymakers are adopting policies to encourage the development of AI sectors. Normal people are much more cautious in their outlook about the technology and more likely to adopt or prefer a precautionary principle.” Still, many AI researchers and experts, for their part, seem at least as worried as the general public, if not more so.
The White House announces an executive order on outbound investment screening to limit US investments in so-called “countries of concern” (read: China), notably in the AI, semiconductors, and quantum computing industries. The move is scaring investors away from China (although the long-term effects are debated).
In September, Senator Schumer will convene top AI CEOs, experts, and civil society to “AI Insights Forums” to learn about and discuss AI regulation. Here is the invite list for the first forum. One notable absent is Google DeepMind CEO Demis Hassabis.
The “Generative Red Team Challenge” backed by the White House convened hackers, students, experts, and others to try to find security or safety breaches in the systems of Google, OpenAI, Anthropic and Stability. The companies have a few months to fix the issues found during the event before they are made public.
The White House released its research and development priorities for 2024, calling notably for the development of “trustworthy, powerful advanced AI systems that help achieve the Nation’s great aspirations.”
Anthropic, Google, Microsoft, and OpenAI will participate in a White House-sponsored and DARPA-led 2-year competition to fix software vulnerabilities using AI.
Huawei releases a smartphone that features a 7-nanometer processor manufactured by China’s leading chip company SMIC, seen by many as a sign that US export controls are not working.
Chinese regulators refuse to approve Intel’s $5.4 billion acquisition of Israeli semiconductor company Tower Semiconductor.
The UK’s AI Safety Summit is scheduled for the 1st and 2nd November of this year. The government has laid out what it wants to achieve, and reportedly plans to invite China (though no final decision has been made).
In a state-led effort to ramp up AI development, the UK will spend £100 million, notably to acquire 5,000 H100s (a model of graphics processing unit) from Nvidia. For reference, Stability AI CEO Emad Mostaque said on Twitter that the largest order of H100s he ever saw was 80,000, and that there were “plenty of 20k+ ones”.
Spain establishes an Agency for the Supervision of Artificial Intelligence, notably tasked with enforcing the EU’s AI Act when it comes into force in 2024.
TSMC, the Taiwanese global leader in semiconductor manufacturing, invests €10 billion in a new chip factory in Germany.
Global & Geopolitics
The United States and China launch new channels of communication, notably on economic security and export controls. The US sees the new dialogue as “a platform to reduce misunderstanding of US national security policies”. Both countries also renewed for 6 months a key international agreement on science & technology cooperation, which some analysts were worried would not happen.
Leading AI expert Yoshua Bengio has been appointed member of the United Nations’ new Scientific Advisory Board for Science and Technology by UN Secretary-General Antonio Guterres.
The BRICS group of developing countries, which recently accepted several new members, has created a committee to study the implications of generative AI and “track and evaluate the development and evolution of AI technologies”.
Industry & Capabilities
Nvidia announces it will release next year its most powerful AI chip so far.
Meta disbands its protein-folding team to focus on commercial applications of AI.
Controversy over Zoom’s use of video data to train its AI models forces the video-conferencing company to backtrack.
OpenAI files an “intent-to-use” trademark for GPT-5, signaling the company will very probably release the next iteration of the model behind ChatGPT within the next 3 years (CEO Sam Altman said in April it wouldn’t train such a system “for some time”). The trademark probably includes AI research automation, as the model is planned to be capable of “developing and implementing artificial neural networks”.
A high-speed AI drone beats the world’s best racers for the first time.
Ex-Google CEO Eric Schmidt plans to launch a science-focused nonprofit AI lab.
A team of researchers releases AgentBench, a benchmark designed to assess language models’ ability to assist humans in “real-world pragmatic missions”. OpenAI’s GPT-4 is found to be the most capable.
By the numbers
Average time between invention or patenting and first major federal regulation
Source: New York Times.
What We’re Reading
The Ever Changing Theories of China and AI (Yiqin Fu), on the various theories put forth in recent years to explain China’s lead or lag in AI compared to the U.S.
There’s Only One Way to Control AI: Nationalization (Politico), on the benefits of public ownership of AI
Going Nuclear? (CIRSD), on international governance models for AI
The Heated Debate Over Who Should Control Access to AI (Time), on the risks and benefits of open-sourcing AI
Can We Red Team Our Way to AI Accountability? (Tech Policy Press), about the benefits and limits of red-teaming AI systems
A comprehensive and distributed approach to AI regulation (Alex Engler), proposing AI regulation that is neither like the EU’s AI Act nor the FDA-style “test-and-approve” model.
Matt Sheehan on how China is Shaping its AI World (The Wire China), on Beijing's approach to regulating artificial intelligence, and how competitive Chinese AI firms could be
Reclaiming the Digital Commons: A Public Data Trust for Training Data (paper), on fostering access to data for training AI models in the public interest.
China’s new scientists: The emerging leaders behind Beijing’s drive for technological self-reliance (Chatham House), on the growing focus of Beijing on science and technology modernization and the figures behind this drive
Use of LLMs for Illicit Purposes: Threats, Prevention Measures, and Vulnerabilities (paper), proposing a taxonomy describing the relationship between threats caused by LLMs, prevention measures, and the vulnerabilities arising from imperfect efforts.
Found in the above-mentioned AI Impacts blog post: “Alexia Georgiadis's The Effectiveness of AI Existential Risk Communication to the American and Dutch Public (2023) evaluates the effect of various media interventions on AI risk awareness. See also Otto Barten's AI X-risk in the News: How Effective are Recent Media Items and How is Awareness Changing? Our New Survey Results and Unveiling the American Public Opinion on AI Moratorium and Government Intervention: The Impact of Media Exposure (2023)”.