A discussion at the AI Safety Summit at Bletchley Park in Milton Keynes, UK, on November 2, where countries including the US and China signed the Bletchley Declaration. Photo: DPA
Opinion
Shayan Hassan Jamy

Emerging US-China AI arms race undermines their leadership in global standards

  • Both recognise the revolutionary potential of AI and want to lead the global AI debate but are unwilling to engage directly
  • But with both also intent on integrating AI into their respective militaries, the task of bringing the international community together might have to rest with others
This has truly been the year of artificial intelligence. From the massive leap in generative AI systems such as ChatGPT to advances in robotics and the military use of AI in the Russia-Ukraine conflict, the world now understands that AI is a reality, not a mere possibility.
It is within this context that the United States is attempting to shape global AI standards. On October 30, US President Joe Biden issued a landmark executive order on “safe, secure and trustworthy AI”. The White House calls the order “the most sweeping actions ever taken to protect Americans from the potential risks of AI systems”.
Under it, AI developers are required to share “safety test results and other critical information” with the US government. This is an interesting development. Direct government oversight of commercial AI systems has been seen in China – which recently introduced its own regulations for generative AI – but was thought to be unlikely in the US.

Other provisions in the executive order include establishing safety standards that must be met before an AI system can be publicly released, and requiring the labelling of AI-generated content. Exactly how such standards will be implemented in a country that prides itself on giving its technology ecosystem the creative freedom to innovate remains to be seen.

Given the political influence that tech companies in the US possess, these standards are likely to prove merely symbolic. Still, the intent is clear: the US wants to be at the forefront of the global AI debate.

Other major AI initiatives were also announced by US Vice-President Kamala Harris on November 1. Of particular importance was the endorsement of its “Political Declaration on the Responsible Military Use of AI and Autonomy” by 31 countries. The declaration, made in February, aims to “build international consensus around responsible behavior and guide states’ development, deployment, and use of military AI”.

A robotic dog is shown at the Responsible AI in the Military Domain (REAIM) summit in The Hague, Netherlands, on February 15. Photo: Reuters

Both Biden’s executive order and Harris’ announcement also emphasised the importance of working with other states to build trustworthy AI. Biden’s order vowed to ensure US-developed AI is “interoperable” with that of international partners, and to work with “allies and partners” to build a “strong international framework to govern the development and use of AI”.

Many of these points were echoed at the first global AI Safety Summit, held in Britain on November 1-2. A major development of the summit was the signing of the Bletchley Declaration by 28 countries, including the US and China, along with the European Union. The declaration states that AI should be “designed, developed, deployed, and used in a manner that is safe, in such a way as to be human-centric, trustworthy and responsible”.

China’s endorsement of the declaration is interesting. Speaking on the matter, China’s Vice-Minister of Science and Technology Wu Zhaohui called for “global collaboration to share knowledge and make AI technologies available to the public under open source terms”.

This is not a new stance for China, which recently announced its Global AI Governance Initiative. The initiative calls on states to “enhance information exchange and technological cooperation on the governance of AI”. This is a continuation of China’s strategic thinking on AI, laid out in its 2017 New Generation AI Development Plan. By 2030, China aims to become the global leader in AI, have a US$150 billion AI industry, and develop global AI standards.

Biden’s executive order can be seen as a direct response to China’s AI ambitions. Both states recognise the potential for AI to be the most revolutionary technology in human history, and as such, want to lead the global AI debate.

As was the case with nuclear weapons after World War II, states that attained nuclear capability early on were able to set the international standards for their use and proliferation. Likewise, whichever nation is at the forefront of the global AI debate would gain a significant amount of say in any future global order.
It is important to note that, despite the positive developments surrounding global AI standards, both the US and China are focused on integrating AI into their respective militaries.

This Chinese-made robot dog is a combat specialist
Just last month, the US announced its Replicator initiative, which aims to field autonomous systems “at a scale of multiple thousands” and “in multiple domains, within the next 18 to 24 months”. How the US intends to ensure safe and trustworthy AI while also deploying thousands of military AI systems within the next two years is an open question.
Ultimately, it seems as if the US and China are in a race to set the global AI standards, and neither is willing to engage directly with the other. The onus of bringing the international community together on AI, then, might rest with other states.

Last week, the first UN resolution on lethal autonomous weapons systems was adopted, passing by a vote of 164-5, with eight abstentions. The resolution stressed the urgent need to address the concerns raised by lethal autonomous weapons systems, including “the risk of an emerging arms race” and “lowering the threshold for conflict and proliferation”.

While this is a positive first step, the resolution needs to be backed up by further action. The unfortunate reality is that, with major states investing heavily in the military applications of AI, it might take a major catastrophe before they properly understand the serious risks associated with AI.

Shayan Hassan Jamy is a research officer at the Strategic Vision Institute (SVI), Islamabad
