- Nvidia says the chip delivers optimal computing power by never sitting idle
- Nvidia said its new A100 chip can be split into seven “instances”
- Customers who want to test the theory will pay a steep price of $200,000
Semiconductor firm Nvidia on Thursday announced a new chip that can be digitally split up to run several different programs on one physical chip, a first for the company that matches a key capability on many of Intel’s chips.
The notion behind what the Santa Clara, California-based company calls its A100 chip is simple: Help the owners of data centres get every bit of computing power possible out of the physical chips they purchase by ensuring the chip never sits idle. The same principle helped power the rise of cloud computing over the past two decades and helped Intel build a massive data centre business.
When software developers turn to a cloud computing provider such as Amazon or Microsoft for computing power, they do not rent a full physical server inside a data centre. Instead, they rent a software-based slice of a physical server called a “virtual machine.”
Such virtualisation technology came about because software developers realised that powerful and pricey servers often ran far below full computing capacity. By slicing physical machines into smaller virtual ones, developers could cram more software on to them, similar to the puzzle game Tetris. Amazon, Microsoft and others built profitable cloud businesses out of wringing every bit of computing power from their hardware and selling that power to millions of customers.
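The Tetris analogy above can be made concrete with a toy sketch. The code below is illustrative only, not any cloud provider's actual scheduler: it packs virtual-machine requests onto physical servers with a simple first-fit rule, showing how slicing hardware into smaller pieces raises utilisation. All names and numbers are made up for the example.

```python
def pack_vms(vm_sizes, server_capacity):
    """Assign each VM (size in arbitrary compute units) to the first server with room."""
    servers = []  # each entry is the list of VM sizes placed on that physical server
    for size in vm_sizes:
        for server in servers:
            if sum(server) + size <= server_capacity:
                server.append(size)
                break
        else:
            servers.append([size])  # no room anywhere: provision a new server

    return servers

# Eight small workloads fit on two shared servers instead of eight dedicated ones.
requests = [2, 3, 1, 4, 2, 2, 3, 1]
placement = pack_vms(requests, server_capacity=10)
utilisation = sum(requests) / (len(placement) * 10)  # 18 units used of 20 available
```

With one dedicated server per workload, utilisation would be 18/80; shared, it is 18/20. That gap is the business Amazon and Microsoft built.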
But the technology has been mostly limited to processor chips from Intel and similar chips such as those from Advanced Micro Devices (AMD). Nvidia said Thursday that its new A100 chip can be split into seven “instances.”
For Nvidia, that solves a practical problem. Nvidia sells chips for artificial intelligence (AI) tasks. The market for those chips breaks into two parts. “Training” requires a powerful chip to, for example, analyse millions of images to train an algorithm to recognise faces. But once the algorithm is trained, “inference” tasks need only a fraction of the computing power to scan a single image and spot a face.
Nvidia is hoping the A100 can replace both, being used as a big single chip for training and split into smaller inference chips.
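On an A100 system, this split is driven from the `nvidia-smi` command-line tool via its Multi-Instance GPU (MIG) mode. The configuration fragment below is a sketch based on Nvidia's MIG tooling as generally documented; the exact profile names and IDs available depend on the driver version and GPU, so treat it as indicative rather than a verified recipe (it also requires A100 hardware to run).

```shell
# Enable MIG mode on GPU 0 (may require a GPU reset).
nvidia-smi -i 0 -mig 1

# List the GPU instance profiles this driver/GPU combination supports.
nvidia-smi mig -lgip

# Create seven of the smallest GPU instances (the "1g.5gb" profile on A100),
# along with their compute instances (-C).
nvidia-smi mig -cgi 1g.5gb,1g.5gb,1g.5gb,1g.5gb,1g.5gb,1g.5gb,1g.5gb -C

# Each instance now appears as a separately schedulable device.
nvidia-smi -L
```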
Customers who want to test the theory will pay a steep price of $200,000 (roughly Rs. 1.5 crores) for Nvidia’s DGX server built around the A100 chips. In a call with reporters, Chief Executive Jensen Huang argued the math will work in Nvidia’s favour, saying the computing power in the DGX A100 was equal to that of 75 traditional servers that would cost $5,000 (roughly Rs. 3.77 lakh) each.
“Because it’s fungible, you don’t have to buy all these different types of servers. Utilization will be higher,” he said. “You’ve got 75 times the performance of a $5,000 (roughly Rs. 3.77 lakh) server, and you don’t have to buy all the cables.”
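Huang's cost argument can be checked with back-of-envelope arithmetic using only the figures quoted in the article (rupee conversions omitted; this ignores cabling, power, and other operating costs he alludes to):

```python
# Back-of-envelope check of Jensen Huang's comparison, figures from the article.
dgx_a100_price = 200_000          # USD, Nvidia's DGX A100 server
traditional_server_price = 5_000  # USD per traditional server
equivalent_servers = 75           # Huang: one DGX A100 ~ 75 traditional servers

traditional_fleet_cost = equivalent_servers * traditional_server_price  # 375,000
savings = traditional_fleet_cost - dgx_a100_price                       # 175,000
```

On those numbers, the equivalent traditional fleet would cost $375,000, so the $200,000 DGX comes in $175,000 cheaper before any utilisation gains.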