Illustration: Davies Christian Surya

Nuclear weapons and poison pills: Washington, Beijing warily circle AI talks

  • Bilateral dialogue on automated weapons and artificial intelligence is expected to take place this spring, with parameters yet to be established
  • Both China and the US are wary of giving their adversary an advantage by limiting their own capability

The United States and China have a shared interest in sitting down to discuss automated weapons, artificial intelligence and its many potential and unforeseen abuses. Less clear is whether the two global AI superpowers and their huge militaries have common interests or goals coming into the talks, which are expected to take place this spring, according to analysts and experts involved in informal sessions between the two nations.

“The good news, which has been a really, really rare thing these days, is that the AI dialogue is seeing some hope,” said Xiaomeng Lu, director of Eurasia Group’s geo-technology practice, who is involved with US-China “Track 2” talks among former government officials, experts and analysts. “Both sides have an interest in preventing unintended consequences.”

Washington and Beijing have agreed to sit down in the next few months and discuss AI issues, National Security Adviser Jake Sullivan said in late January after meeting with top Chinese diplomat Wang Yi in Thailand. Non-official Track 2 and semi-official Track 1.5 talks often act as a preamble to formal negotiations. The decision to establish a working group on AI was reached during the November summit between Presidents Xi Jinping and Joe Biden in northern California.
Chinese Foreign Minister Wang Yi (third from left) meets US National Security Adviser Jake Sullivan (third from right) in Bangkok on January 27. Photo: Xinhua

Beijing and Washington have both become increasingly uneasy about the effect that artificial intelligence could have on warfare, governance and society as the technology threatens to eclipse mankind’s ability to control or fully understand it. Yet each is wary of giving its adversary an advantage by limiting its own capability.

Lu said both sides in the discussions she has participated in appear engaged and intent on defining what constitutes an autonomous weapon and what it means to have a human in the loop for weapons of mass destruction.

“I can sense the energy; we’re all trying to throw ideas at this,” she said. “The common threat to the US and China these days is what AI can unleash, let’s say nuclear weapons, like a nuclear missile. That’s a very dangerous threshold, and both sides have an interest in preventing unintended consequences.”

Less clear is how that would translate.


“Autonomous weapons are not really that much about AI, but about some level of autonomous decision-making with respect to things like command and control of nuclear weapons,” said Paul Triolo, technology policy lead with the Albright Stonebridge Group consultancy.

“It seems to be more about … reassuring both sides that the other doesn’t have some doomsday machine secretly under development. There is general agreement that allowing any kind of automation here would be a bad idea.”

In January, Arati Prabhakar, director of the White House Office of Science and Technology Policy, suggested that the US-China talks would address some as-yet unknown elements of AI safety.

This followed the signing in November of the Bletchley Declaration by the US, China and 26 other countries that calls for the responsible development of AI. Of particular concern, it noted, were “frontier AI models” – cutting-edge systems that pose the most urgent and dangerous risks.

High-level principles are fine, analysts add, but hammering out details in bilateral talks is a far tougher challenge. Beijing would probably be more inclined to talk about the safety of frontier models than any restrictions on AI in the broader defence realm.

This mirrors the protracted tussle between the two countries in recent years over “guard rails” and military hotlines involving conventional forces. Even as Washington is keen to hammer out rules preventing inadvertent contact between ships and planes in the disputed and increasingly crowded South China Sea, Beijing has been reluctant to give Washington any assurance that could make the US more assertive.

Bonnie Glaser, Washington-based managing director of the German Marshall Fund, said it remained to be seen what level of officials Beijing would send to any talks, and in particular whether these would be military-to-military discussions, which are seen as more meaningful than default diplomatic channels.

“There’s often progress in Track 2 or 1.5 discussions, but it goes up to Beijing and doesn’t go anywhere,” said Glaser, who is frequently involved in sub-official talks. “But if Xi Jinping said we’ll cooperate, it does give people incentive to get things done.”


So far, there is no evidence that any military worldwide is using, or planning to use, frontier AI models, analysts said.

Separately, the world’s two largest economies – key to any meaningful global deal – are also circling around a framework for AI control in the commercial sphere, an issue former secretary of state Henry Kissinger raised last summer on a trip to Beijing four months before his death, reportedly in close consultation with former Google chief executive Eric Schmidt.

Track 2 talks in the commercial area have seen extremely limited progress, with the Chinese side arguing that the best way to ensure safety is for both sides to fully share their technology and halt export restrictions on key AI technologies.

“That’s a non-starter, a poison pill for the US,” said Lu.

This comes as Washington released a proposed rule in late January requiring cloud service providers such as Microsoft and Amazon to identify and actively investigate foreign clients developing AI applications on their platforms, seen as targeting China.


The US should cooperate on AI “rather than decoupling, breaking chains and building fences”, countered Chinese Foreign Ministry spokesman Wang Wenbin at a press briefing in Beijing.

But both sides also share common interests in the commercial realm, including the idea of traceable watermarks on AI imagery and concern over data and trademarks.

One area for discussion could be control over frontier models that depend heavily on graphics processing units, or GPUs, the specialised semiconductors sometimes referred to as the rare earths of AI given their instrumental role in “teaching” computers.

But in December, US Commerce Secretary Gina Raimondo for the first time directly linked American export controls on GPUs to preventing Chinese companies from training frontier AI models, rather than to the unspecified military uses cited in the past. And analysts expect more restrictions to follow this election year under pressure from Congress.

“This would seem to make it difficult to talk to the Chinese side about the safety of frontier AI models, the topic of the Bletchley Park agreement and drain the limited reservoir of goodwill built up by recent high-level diplomatic engagement, further limiting progress,” said Triolo, who has participated in Track 2 talks.

US Commerce Secretary Gina Raimondo speaks during the UK Artificial Intelligence Safety Summit at Bletchley Park in Britain on November 1. Photo: AFP

With the outlook for official progress limited, and any negotiations likely to be hard fought, the less official Track 2 channels – which involve academics, think tanks and trade groups – are important to release pressure, test proposals and keep communication channels open.

Among the participants in recent AI Track 2 discussions were representatives from the Carnegie Endowment for International Peace, George Washington University, the Brookings Institution, several other US think tanks, Tsinghua University, and Chinese think tanks affiliated with the Ministry of Science and Technology and the Ministry of Industry and Information Technology.

Both sides have very different core interests, and also are competing for third-country support for their global AI security blueprints.

While the immediate priority for Chinese regulators is almost exclusively AI-generated content, the US side is focused on significant national security issues, including cybersecurity and the potential for AI models to design chemical, biological or nuclear weapons.


That national security focus is likely to intensify given AI’s ability to amplify disinformation as billions of voters in more than 60 countries go to the polls this year, a shift from last year’s focus on AI’s “existential risks” and whether humanity will survive the technology.

“US officials also worry about traditional things like bias, disinformation, et cetera associated with AI platforms, but Chinese regulators care much less about these things,” said Triolo, adding that he suspects the idea of US-China AI talks was first hammered out when Raimondo visited Beijing in July in the lead-up to the Xi-Biden summit.

“And now both sides are scrambling trying to figure out what it is they are going to talk about.”

Also weighing on the talks are very different views of transparency, decision-making and centralised authority. In the past with the advance of new technologies, from mobile phones and fax machines to the internet and cryptocurrency, Beijing has moved slowly to study and control their use and ensure they do not represent a threat to the Communist Party.

US Secretary of Commerce Gina Raimondo (left) and British Secretary of State for Science, Innovation and Technology Michelle Donelan listen as Chinese vice-minister of science and technology Wu Zhaohui speaks at the AI Safety Summit at Bletchley Park in Britain on November 1. Photo: Reuters

The US, with its more decentralised system, has more often allowed companies and individuals to explore and exploit new technologies, regulating them after problems and abuses surface.

As AI has burst onto the scene in the past year, US chief executives from Google, Meta, OpenAI and elsewhere have appeared at events like the United Kingdom AI Safety Summit in November, which gave rise to the Bletchley agreement. And the Pentagon has been fairly open about its approach, including guidelines outlined last month by a deputy assistant defence secretary, Michael Horowitz.

The People’s Liberation Army has been far more circumspect about its thinking.

“The PLA is nowhere close to doing something similar, so some type of quid pro quo on that front would help, if the PLA came through and said here’s our framework,” said Martijn Rasser, managing director of Datenna, a Dutch open-source intelligence software company. “What the US is most interested in is getting some transparency from the Chinese side on how they’re thinking about these issues.”


Likewise with Chinese companies, which are reluctant to engage in international meetings without more guidance from Beijing, even as it remains unclear who within the government has authority over international engagement on AI. Beijing sent Wu Zhaohui, a vice-minister of science and technology, to the UK AI Safety Summit, but the lead regulator is the Cyberspace Administration of China, which has little international presence, complicating efforts to engage on AI outside China.

On other fronts, China has a growing body of quasi-non-governmental organisations and think tanks linked to leading AI companies that are working on issues related to AI safety. It also has relatively few civil society organisations, the groups that tend to participate in these debates in Western countries. “All of this complicates Chinese participation in both multilateral and bilateral dialogues around AI,” said Triolo.

The Xi-Biden summit represented a bid to stem the rapid slide in bilateral relations and lower the temperature. “AI safety is not a bad place to figure out, can we establish some kind of dialogue,” said Rasser, a former intelligence analyst. “There’s so much distrust on both sides that it’s a very steep hill to climb to some sort of agreement.

“But if at minimum they’re having discussions, dialogue is better than no dialogue. All in all, it’s not a bad thing that they’re exploring the potential.”
