Nvidia has developed what it calls “the perfect starter recipe” to help original design manufacturers boost AI cloud computing for hyperscale data centres.
To do so, Nvidia has set up a partner program that will provide each manufacturer with early access to the new Nvidia HGX reference architecture, GPU computing technologies and design guidelines.
The HGX reference design is built to meet the high-performance, efficiency and scaling requirements unique to hyperscale cloud environments.
HGX is powered by eight Tesla GPUs linked by Nvidia’s NVLink interconnect technology, and can be deployed in existing data centre racks around the world alongside hyperscale CPU nodes.
Ian Buck, GM of Accelerated Computing at Nvidia, said: “Accelerated computing is evolving rapidly. In just one year we tripled the deep learning performance in our Tesla GPUs, and this is having a significant impact on the way systems are designed.
“Through our HGX partner program, device makers can ensure they’re offering the latest AI technologies to the growing community of cloud computing providers.”
Nvidia says that 10 of the world’s top 10 hyperscale businesses are currently using Nvidia GPU accelerators in their data centres.
HGX is also the same design used in Microsoft’s Project Olympus, Facebook’s Big Basin systems and Nvidia’s DGX-1 AI supercomputers.
The Nvidia Tesla P100 and V100 GPU accelerators are both compatible with HGX, meaning all HGX-based products can be upgraded as soon as the V100 GPUs become available later this year.
IBM Cloud recently made Nvidia’s Tesla P100 GPU accelerator available on its cloud, giving companies near-instant access to the Tesla P100.
The addition of the HGX architecture also benefits cloud providers looking to host the new Nvidia GPU Cloud platform.
The platform manages fully integrated and optimised deep learning framework containers such as Caffe2, Cognitive Toolkit, MXNet and TensorFlow.
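For cloud customers running one of those framework containers, a basic sanity check is confirming the framework can actually see the Tesla accelerators. The snippet below is a minimal sketch assuming a TensorFlow container with GPU support on a P100- or V100-backed instance; it uses only standard TensorFlow utilities and is not specific to the Nvidia GPU Cloud platform.

    # Minimal sketch: check which GPUs TensorFlow can see inside the container.
    # Assumes a GPU-enabled TensorFlow container; device_lib is a standard
    # TensorFlow utility, not an Nvidia GPU Cloud API.
    from tensorflow.python.client import device_lib

    # Tesla P100/V100 accelerators should appear as devices of type 'GPU'.
    gpus = [d.name for d in device_lib.list_local_devices() if d.device_type == 'GPU']
    print('Visible GPUs:', gpus)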
Taiyu Chou, GM of Foxconn, said: “Early access to Nvidia GPU technologies and design guidelines will help us more rapidly introduce innovative products for our customers’ growing AI computing needs.”