
Nvidia Launches Chip Aimed at Data Center Economics

Semiconductor company Nvidia on Thursday announced a new chip that can be digitally split up to run several different programs on one physical chip, a first for the company that matches a key capability found on many Intel chips.
The idea behind what the Santa Clara, California-based company calls its A100 chip is simple: help data center owners get the most computing power out of every physical chip they buy by ensuring the chip never sits idle. The same principle has helped fuel the rise of cloud computing over the past two decades and has helped Intel build a massive data center business.
When software developers turn to a cloud provider like Amazon or Microsoft for computing power, they are not renting a full physical server in a data center. Instead, they rent a software-based slice of a physical server, called a "virtual machine".
Such virtualization technology came about because software developers realized that powerful and expensive servers often run well below their full computing capacity. By slicing physical machines into smaller virtual machines, developers could pack more software onto each server, much like fitting pieces together in the puzzle game Tetris. Amazon, Microsoft and others have built profitable cloud businesses by squeezing every bit of computing power out of their hardware and selling it to millions of customers.
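That packing idea is simple enough to sketch in code. The snippet below is a purely illustrative first-fit heuristic with invented numbers, not any cloud provider's actual scheduler: virtual machine requests, measured in vCPUs, are placed onto 64-vCPU hosts so that each host stays as full as possible.

```python
# Illustrative only: pack requested VM sizes (in vCPUs) onto physical hosts
# using a simple first-fit heuristic, the "Tetris" idea described above.
# The host capacity and VM sizes are made-up numbers, not real cloud data.

HOST_CAPACITY = 64  # vCPUs per physical server (hypothetical)

def pack_vms(vm_sizes, capacity=HOST_CAPACITY):
    """Assign each VM to the first host with enough spare capacity."""
    hosts = []  # each entry is the number of vCPUs still free on that host
    for size in vm_sizes:
        for i, free in enumerate(hosts):
            if free >= size:
                hosts[i] -= size
                break
        else:
            hosts.append(capacity - size)  # no host had room; start a new one
    return hosts

if __name__ == "__main__":
    requests = [8, 32, 4, 16, 48, 2, 8, 24]  # hypothetical VM requests
    hosts = pack_vms(requests)
    used = sum(HOST_CAPACITY - free for free in hosts)
    print(f"{len(hosts)} hosts, {used}/{len(hosts) * HOST_CAPACITY} vCPUs in use")
```

With these made-up requests, 142 vCPUs of demand land on three hosts; the leftover capacity on each host is exactly what providers try to minimize.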
But the technology has largely been limited to processor chips from Intel and similar chips, such as those from Advanced Micro Devices (AMD). Nvidia said Thursday that its new A100 chip can be divided into seven "instances".
For Nvidia, this solves a practical problem. Nvidia sells chips for artificial intelligence (AI) tasks, and the market for those chips is split into two parts. "Training" requires a powerful chip to, for example, analyze millions of images in order to train an algorithm to recognize faces. But once the algorithm is trained, "inference" tasks need only a fraction of that computing power to scan a single image and spot a face.
Nvidia hopes the A100 can serve both roles, used as one large training chip or divided into as many as seven smaller inference chips.
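The splitting described here is what Nvidia calls Multi-Instance GPU (MIG). As a rough sketch of how such a partition is typically created, the snippet below drives Nvidia's nvidia-smi tool from Python; the exact commands and the profile ID are assumptions taken from NVIDIA's public MIG documentation rather than anything stated in this article, and they only make sense on a machine that actually has an A100 and administrator access.

```python
# Hypothetical sketch: partition an A100 into seven slices using NVIDIA's
# Multi-Instance GPU (MIG) tooling via nvidia-smi. Commands and the profile
# ID (19, roughly the smallest slice on a 40 GB A100) are taken from NVIDIA's
# documentation, not from the article, and typically require root privileges.
import subprocess

def run(cmd):
    print("$", " ".join(cmd))
    subprocess.run(cmd, check=True)

# Enable MIG mode on GPU 0 (may require a GPU reset to take effect).
run(["nvidia-smi", "-i", "0", "-mig", "1"])

# Create seven GPU instances with the smallest profile, plus a compute
# instance on each ("-C"), yielding seven independently schedulable slices.
run(["nvidia-smi", "mig", "-i", "0", "-cgi", ",".join(["19"] * 7), "-C"])

# List the resulting MIG devices.
run(["nvidia-smi", "-L"])
```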
Customers who want to test that theory will pay a steep price: $200,000 (around Rs. 1.5 crores) for Nvidia's DGX server built around the A100 chips. On a call with journalists, CEO Jensen Huang argued that the math would work in Nvidia's favor, claiming that the computing power of the DGX A100 equals that of 75 traditional servers costing $5,000 (approximately Rs. 3.77 lakh) each.
"Because it is fungible, you don't have to buy all of these different types of servers. Usage will be higher," he said. "It has 75 times the performance of a $ 5,000 server (around Rs. 3.77 lakh), and you don't have to buy all the cables."
© Thomson Reuters 2020