As Nvidia’s annual GTC conference gets underway, founder and CEO Jensen Huang, in his characteristic leather jacket and standing in front of a vertical green wall at Nvidia headquarters in Santa Clara, California, delivered a highly anticipated keynote that focused almost entirely on AI. His presentation announced partnerships with Google, Microsoft and Oracle, among others, to bring new AI, simulation and collaboration capabilities to “every industry.”
“The warp drive engine is accelerated computing, and the energy source is AI,” Huang said. Generative AI capabilities, he said, have “created a sense of urgency for companies to reimagine their products and business models. Industrial companies are racing to digitalize and reinvent into software-driven tech companies, to be the disrupter and not the disrupted.”
>>Follow VentureBeat’s ongoing Nvidia GTC spring 2023 coverage<<
Huang’s keynote kicked off with the iconic “I am AI” opening (first launched in 2017), with music that this time around was apparently composed by AI and arranged by composer John Naesano. Then Huang launched into a dizzying array of announcements. These included everything from training-to-deployment services for cutting-edge AI, to new semiconductors and software libraries, to a complete set of systems and services for startups and enterprises.
Join us in San Francisco on July 11-12, where top executives will share how they have integrated and optimized AI investments for success and avoided common pitfalls.
The announcements at GTC, which targets Nvidia’s community of over four million developers, come in the context of Nvidia’s continued AI dominance, particularly in the latest era of generative AI.
As detailed in VentureBeat’s recent in-depth feature story, Nvidia got a massive AI head start when the hardware and software company helped power the deep learning “revolution” of a decade ago, and shows few signs of losing its lead as generative AI explodes with tools like ChatGPT.
In fact, Nvidia powers ChatGPT: According to UBS analyst Timothy Arcuri, ChatGPT used 10,000 Nvidia GPUs to train the model.
Nvidia’s technologies are fundamental to AI, said Huang, noting that Nvidia was there at the very beginning of the generative AI revolution. In his keynote, Huang recounted how back in 2016 he hand-delivered to OpenAI the first Nvidia DGX AI supercomputer, the engine behind the large language model powering ChatGPT.
Nvidia DGX supercomputers, originally used as AI research instruments, are now running 24/7 at businesses across the world to refine data and process AI, Huang reported. Half of all Fortune 100 companies have installed DGX AI supercomputers. “DGX supercomputers are modern AI factories,” Huang said.
Nvidia calls DGX the blueprint for AI infrastructure
The latest version of DGX features eight Nvidia H100 GPUs linked together to work as one giant GPU. “Nvidia DGX H100 is the blueprint for customers building AI infrastructure worldwide,” Huang said, sharing that Nvidia DGX H100 is now in full production.
H100 AI supercomputers are already coming online, he added. Oracle Cloud Infrastructure announced the limited availability of new OCI Compute bare-metal GPU instances featuring H100 GPUs. And Amazon Web Services announced its forthcoming EC2 UltraClusters of P5 instances, which can scale in size up to 20,000 interconnected H100 GPUs. This follows Microsoft Azure’s private preview announcement last week for its H100 virtual machine, ND H100 v5.
Meta has now deployed its H100-powered “Grand Teton” AI supercomputer internally for its AI production and research teams. And OpenAI will be using H100s on its Azure supercomputer to power its continuing AI research.
Nvidia DGX cloud to bring AI supercomputers ‘to every company’
To speed DGX capabilities to startups and enterprises building new products and developing AI strategies, Huang announced Nvidia DGX Cloud. Through partnerships with Microsoft Azure, Google Cloud and Oracle Cloud Infrastructure, Nvidia DGX Cloud will bring Nvidia DGX AI supercomputers “to every company, from a browser.”
DGX Cloud is optimized to run Nvidia AI Enterprise, the world’s leading acceleration software suite for end-to-end development and deployment of AI. Nvidia is partnering with leading cloud service providers to host DGX Cloud infrastructure, starting with Oracle Cloud Infrastructure. Microsoft Azure is expected to begin hosting DGX Cloud next quarter, and the service will soon expand to Google Cloud.
This partnership brings Nvidia’s ecosystem to cloud service providers while amplifying Nvidia’s scale and reach, Huang said. Enterprises will be able to rent DGX Cloud clusters on a monthly basis.
Custom LLMs and generative AI for enterprises
To accelerate the work of those seeking to harness generative AI, Huang announced Nvidia AI Foundations, a family of cloud services for customers needing to build, refine and operate custom LLMs and generative AI trained with their proprietary data and for domain-specific tasks.
AI Foundations services include Nvidia NeMo, for building custom text-to-text generative language models; Picasso, a visual-language model-making service for customers who want to build custom models trained with licensed or proprietary content; and BioNeMo, to help researchers in the $2 trillion drug discovery industry.
Huang announced an Adobe-Nvidia partnership to build a set of next-generation AI capabilities. Getty Images is collaborating with Nvidia to train responsible generative text-to-image and text-to-video foundation models. And Shutterstock is working with Nvidia to train a generative text-to-3D foundation model to simplify the creation of detailed 3D assets.
Nvidia invented accelerated computing for AI, including deep learning
Nvidia invented accelerated computing to solve problems that normal computers can’t, said Huang. “It requires full-stack invention from chips, systems, networking, acceleration libraries, to refactoring the applications.”
Each optimized stack, he explained, accelerates an application domain — from graphics, imaging and quantum physics to machine learning. “The application can enjoy incredible speed-up as well as scale-up across many computers. This enabled us to achieve a million X for many applications over the past decade,” he said.
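Huang’s “million X” figure implies a striking compound rate: sustaining a 1,000,000x speed-up over ten years works out to roughly a 4x improvement every year. A back-of-envelope sketch of that arithmetic (the decade span and the million-X total are from the keynote; the per-year breakdown and the Moore’s-law comparison are our own illustrative assumptions):

```python
# Back-of-envelope: what annual improvement factor compounds
# to a 1,000,000x speed-up over a decade?
total_speedup = 1_000_000  # Huang's "million X" claim
years = 10                 # "over the past decade"

# Solve r**years == total_speedup for the annual factor r.
annual_factor = total_speedup ** (1 / years)
print(f"Implied annual speed-up: {annual_factor:.2f}x")  # ~3.98x per year

# For comparison, a Moore's-law-style doubling every 2 years
# yields only 2**5 = 32x over the same decade.
moores_law_pace = 2 ** (years / 2)
print(f"Doubling every 2 years over a decade: {moores_law_pace:.0f}x")
```

In other words, the claimed full-stack gains compound far faster than transistor scaling alone would allow, which is the core of Huang’s accelerated-computing argument.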
The most famous application of Nvidia’s accelerated computing, he noted, was deep learning.
In 2012, Alex Krizhevsky, Ilya Sutskever and Geoffrey Hinton needed an insanely fast computer to train the AlexNet computer vision model. The researchers trained AlexNet, Huang explained, with 14 million images on GeForce GTX 580 GPUs, a job requiring 262 quadrillion floating point operations. The trained model won the ImageNet challenge by a wide margin and, Huang said, “ignited the big bang of AI.”
A decade later, the Transformer model was invented, and Sutskever, now at OpenAI, trained the GPT-3 large language model to predict the next word. Training GPT-3 required 323 sextillion floating point operations, Huang said, a million times more than training AlexNet.
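The FLOP figures Huang cited can be checked directly: 323 sextillion (3.23 × 10²³) operations for GPT-3 versus 262 quadrillion (2.62 × 10¹⁷) for AlexNet is a ratio of roughly 1.2 million, consistent with the “million times more” claim. A quick sketch of the arithmetic (the two FLOP counts are from the keynote; the GTX 580 throughput used for the time estimate is an approximate published peak spec, not a figure from the talk):

```python
# Sanity-check the compute figures from the keynote.
alexnet_flops = 262e15  # 262 quadrillion FLOPs (AlexNet, 2012)
gpt3_flops = 323e21     # 323 sextillion FLOPs (GPT-3)

ratio = gpt3_flops / alexnet_flops
print(f"GPT-3 / AlexNet compute ratio: {ratio:.2e}")  # ~1.23e+06

# Rough lower bound on AlexNet training time on a single GTX 580,
# assuming ~1.5 TFLOPS peak single-precision throughput (approximate
# spec; real training never sustains peak utilization).
gtx580_peak = 1.5e12  # FLOPs per second
seconds = alexnet_flops / gtx580_peak
print(f"Idealized single-GPU training time: {seconds / 86400:.1f} days")
```

Even under that idealized peak-throughput assumption the training run spans days, which is why the researchers needed what Huang called an “insanely fast computer.”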
“The result is ChatGPT, the AI heard around the world,” he said.
Huang and Sutskever will surely discuss it all, and more, at their Fireside Chat, scheduled for tomorrow at 9 a.m. Pacific.
VentureBeat’s mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings.