
Exclusive | DeepSeek evaluates AI models for ‘frontier risks’, source says, as China promotes safety

Unlike US AI firms, which publish the findings of their frontier risk evaluations, Chinese companies do not announce such details

A visitor looks at a DeepSeek poster during the Global Developer Conference in Shanghai, on February 22, 2025. Photo: VCG via Getty Images
Vincent Chow

Chinese artificial intelligence start-up DeepSeek has conducted internal evaluations on the “frontier risks” of its AI models, according to a person familiar with the matter.

The development, not previously reported, comes as Beijing seeks to promote awareness of such risks within China’s AI industry. Frontier risks refer to the potentially significant threats that advanced AI systems could pose to public safety and social stability.

The Hangzhou-based company, which has been China’s poster child for AI development since it released its R1 reasoning model in January, evaluated the models’ self-replication and cyber-offensive capabilities in particular, according to the person, who requested anonymity.
The results were not publicised. It was not clear when the evaluations were completed or which of the company’s models were involved. DeepSeek did not respond to a request for comment on Tuesday.

The DeepSeek app is seen on a smartphone screen in Beijing, January 28, 2025. Photo: AP

Unlike US AI firms such as Anthropic and OpenAI, which regularly publish the findings of their frontier risk evaluations, Chinese companies have not announced such details.
