Tencent boosts AI training efficiency without Nvidia’s most advanced chips
- The company has focused on speeding up network communications to reclaim idle GPU capacity, Tencent said, offering a 20 per cent improvement in LLM training

The 2.0 version of Tencent’s Intelligent High-Performance Network, known as Xingmai in Chinese, will improve the efficiency of network communications and LLM training by 60 per cent and 20 per cent, respectively, the company’s cloud unit said on Monday.
An HPC (high-performance computing) network connects clusters of powerful graphics processing units (GPUs) so they can exchange data and solve problems at extremely high speeds.
Under pre-existing HPC networking technologies, computing clusters were spending too much time communicating with one another, leaving a significant portion of GPU capacity idle, according to Tencent. So the company upgraded its network to speed up that communication while reducing costs, it said.
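Tencent has not published the underlying arithmetic, but the relationship between communication time and idle GPU capacity can be sketched with a toy model. All timings below are hypothetical illustrations, chosen so that a 60 per cent cut in communication time yields roughly the 20 per cent training speed-up the company cites:

```python
# Toy model: how network communication overhead limits GPU utilisation
# in distributed LLM training. All figures are illustrative assumptions,
# not Tencent's published numbers.

def training_step_time(compute_s: float, comm_s: float) -> float:
    """Wall-clock time for one training step when communication
    is not overlapped with computation."""
    return compute_s + comm_s

def gpu_utilization(compute_s: float, comm_s: float) -> float:
    """Fraction of each step the GPU spends on useful compute
    (the rest of the time it idles, waiting on the network)."""
    return compute_s / training_step_time(compute_s, comm_s)

# Hypothetical step: 100 ms of compute, 38.5 ms waiting on the network.
before = gpu_utilization(0.1000, 0.0385)

# A 60% improvement in communication shrinks the wait to 15.4 ms.
after = gpu_utilization(0.1000, 0.0385 * (1 - 0.60))

# Resulting end-to-end speed-up of each training step.
speedup = training_step_time(0.1000, 0.0385) / training_step_time(0.1000, 0.0154)

print(f"utilisation: {before:.0%} -> {after:.0%}")  # 72% -> 87%
print(f"training speed-up: {speedup:.2f}x")         # 1.20x, i.e. ~20%
```

Under these assumed timings, cutting network wait time by 60 per cent lifts GPU utilisation from about 72 per cent to 87 per cent, which is a roughly 20 per cent reduction in overall training time, consistent with the figures Tencent quotes.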
The Xingmai network can support a single computing cluster with more than 100,000 GPUs, according to the company, doubling the scale of the initial version of the network released in 2023. The improved performance shortens the time needed to identify problems to just minutes, down from days previously, Tencent said.
Tencent has recently made a big push to strengthen its technologies in the rapidly growing AI field. The Shenzhen-based firm has been promoting its in-house LLMs for enterprise use, and also has services helping other companies to build their own models.