Alibaba Cloud claims to slash Nvidia GPU use by 82% with new pooling system

The new Aegaeon system can serve dozens of large language models using a fraction of the GPUs previously required, potentially reshaping AI workloads

Vincent Chow

Alibaba Group Holding has introduced a compute pooling solution that it said led to an 82 per cent cut in the number of Nvidia graphics processing units (GPUs) needed to serve its artificial intelligence models.

The system, called Aegaeon, was beta tested in Alibaba Cloud’s model marketplace for more than three months, where it reduced the number of Nvidia H20 GPUs required to serve dozens of models of up to 72 billion parameters from 1,192 to 213, according to a research paper presented this week at the 31st Symposium on Operating Systems Principles (SOSP) in Seoul, South Korea.
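
The headline figure follows directly from those GPU counts. The quick check below is plain arithmetic on the numbers quoted above, not anything taken from the paper itself, and confirms a reduction of roughly 82 per cent.

```python
# Sanity check of the reported reduction, using only the GPU counts
# quoted in the article (1,192 before Aegaeon, 213 after).
before, after = 1192, 213
reduction = (before - after) / before
print(f"Reduction: {reduction:.1%}")  # -> Reduction: 82.1%
```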

“Aegaeon is the first work to reveal the excessive costs associated with serving concurrent LLM workloads on the market,” the researchers from Peking University and Alibaba Cloud wrote.

Alibaba Cloud is the AI and cloud services unit of Hangzhou-based Alibaba, which owns the Post. Its chief technology officer, Zhou Jingren, is one of the paper’s authors.

Cloud service providers, such as Alibaba Cloud and ByteDance’s Volcano Engine, serve thousands of AI models to users concurrently, meaning that many application programming interface calls are handled at the same time.

However, only a small handful of models, such as Alibaba’s Qwen and DeepSeek, account for most inference demand, while most other models are called upon only sporadically. This leads to resource inefficiency: the researchers found that 17.7 per cent of GPUs in Alibaba Cloud’s marketplace were allocated to serve just 1.35 per cent of requests.

Researchers globally have sought to improve efficiency by pooling GPU power, for instance by allowing a single GPU to serve multiple models.
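
To give a sense of the idea, the sketch below shows a toy pooled server in which rarely used models share a small set of GPUs and are swapped in on demand, instead of each model holding a dedicated GPU. The model names and numbers are hypothetical, and this is not Aegaeon’s actual scheduling mechanism; it is only a minimal illustration of why pooling can cut the GPU count for sporadically called models.

```python
# A minimal, illustrative sketch of GPU pooling for model serving.
# Rarely used models share a fixed pool of GPUs and are loaded on demand
# (LRU-style eviction). Hypothetical names; not Aegaeon's algorithm.

from collections import OrderedDict


class PooledModelServer:
    """Serve many models on a fixed pool of GPUs using LRU-style swapping."""

    def __init__(self, num_gpus: int):
        self.num_gpus = num_gpus
        # Maps model name -> GPU slot, ordered by recency of use.
        self.resident: "OrderedDict[str, int]" = OrderedDict()

    def handle_request(self, model: str) -> str:
        if model in self.resident:
            self.resident.move_to_end(model)  # cache hit: model already loaded
            return f"{model}: served from resident GPU {self.resident[model]}"
        if len(self.resident) < self.num_gpus:
            slot = len(self.resident)  # a free GPU is still available
        else:
            _, slot = self.resident.popitem(last=False)  # evict least recently used model
        self.resident[model] = slot
        return f"{model}: loaded onto GPU {slot} and served"


if __name__ == "__main__":
    # Eight requests across five models share four GPUs; a dedicated-GPU
    # design would need one GPU per model.
    server = PooledModelServer(num_gpus=4)
    for request in ["qwen", "qwen", "model-b", "model-c", "qwen", "model-d", "model-e", "qwen"]:
        print(server.handle_request(request))
```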
