Opinion | Cooperation for AI safety must transcend geopolitical interference
Amid a tech explosion, when we need to mobilise all our wisdom and energy for cooperation, some countries are trying to shut down collaborative platforms
As the global community grapples with artificial intelligence (AI) governance, a crucial dialogue is emerging on how different nations approach AI safety and development. At a recent Paris AI Action Summit side event, these discussions highlighted the urgent need for international cooperation despite geopolitical headwinds.
Drawing on my participation in international AI forums and multilateral discussions, I believe China’s experience offers valuable insights into balancing technological advancement with safety considerations while highlighting the challenges and opportunities in global AI governance.
China’s approach to AI safety reflects its national reality and interests. The AI Safety and Development Network, established with government support, is China’s equivalent of other countries’ AI safety institutes. China has a pluralistic ecosystem of AI application and safety governance, with government departments, institutions and enterprises all focusing on and investing in AI safety issues.
The network enables every stakeholder to participate, share knowledge and information, and build capacity, and it takes an active part in international dialogue and cooperation despite external challenges. China joined Britain’s Bletchley Park summit and has followed the development of other countries’ AI safety institutes, while China’s technological community maintains close communication with international counterparts.
Concern about AI safety in China operates mostly on two levels. The first is application. China’s State Council released the New Generation AI Development Plan in 2017, emphasising safe, controlled and sustainable AI progress. China’s AI applications are spreading fast, including in finance, urban management, healthcare and scientific research. Risks and challenges have emerged alongside them, creating urgent demand for government regulation and technical solutions.
China’s Alibaba releases new AI model, said to outperform competitors DeepSeek and OpenAI’s GPT-4o
Drawing on its experience in cyber regulation, the Chinese government has issued AI laws and normative documents guided by the principle of balancing innovation with risk mitigation. Meanwhile, tech companies specialising in AI safety have emerged to develop and deploy risk-mitigation measures.