Press Release

AWS and NVIDIA extend collaboration to advance generative AI innovation

Thu 21 Mar 2024

Amazon Web Services (AWS) and NVIDIA have announced the new NVIDIA Blackwell GPU platform is coming to AWS.

AWS is expanding its offerings with the inclusion of the NVIDIA GB200 Grace Blackwell Superchip and B100 Tensor Core GPUs, further solidifying their ongoing partnership. 

“AI is driving breakthroughs at an unprecedented pace, leading to new applications, business models, and innovation across industries,” said Jensen Huang, founder and CEO of NVIDIA.

This collaboration aims to provide customers with enhanced infrastructure, software, and services, facilitating the adoption of advanced generative artificial intelligence (AI) capabilities in a secure environment.

NVIDIA and AWS continue to combine their respective strengths, pairing NVIDIA’s newest multi-node systems, featuring the next-generation NVIDIA Blackwell platform and AI software, with AWS’s Nitro System and advanced security features such as AWS Key Management Service (AWS KMS).

NVIDIA and AWS will utilize Elastic Fabric Adapter (EFA) for petabit-scale networking and Amazon Elastic Compute Cloud (Amazon EC2) UltraCluster for hyper-scale clustering. 

This collaboration allows customers to build and run real-time inference on multi-trillion-parameter large language models (LLMs) more efficiently, at greater scale, and at lower cost than with previous-generation NVIDIA GPUs on Amazon EC2.
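The networking and clustering pieces mentioned above are described only at a high level. Purely as an illustrative sketch, and not part of the announcement, the snippet below shows how an EFA-enabled network interface and a cluster placement group (the primitives that EC2 UltraClusters build on) can be requested through the standard boto3 EC2 API; the AMI, subnet, security group, and instance type are placeholder assumptions.

```python
# Hypothetical sketch: launching EFA-enabled GPU instances into a cluster
# placement group with boto3. AMI, subnet, and security group IDs are
# placeholders, not values from the announcement.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# A cluster placement group packs instances close together for the
# low-latency, high-throughput networking that tight GPU clusters rely on.
ec2.create_placement_group(GroupName="demo-gpu-cluster-pg", Strategy="cluster")

ec2.run_instances(
    ImageId="ami-0123456789abcdef0",            # placeholder AMI
    InstanceType="p5.48xlarge",                  # EFA-capable GPU instance type
    MinCount=2,
    MaxCount=2,
    Placement={"GroupName": "demo-gpu-cluster-pg"},
    NetworkInterfaces=[{
        "DeviceIndex": 0,
        "SubnetId": "subnet-0123456789abcdef0",  # placeholder subnet
        "Groups": ["sg-0123456789abcdef0"],      # placeholder security group
        "InterfaceType": "efa",                  # request an Elastic Fabric Adapter
    }],
)
```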

AWS, NVIDIA Speed Up LLM Training

AWS will offer the NVIDIA Blackwell platform, featuring GB200 NVL72 systems that interconnect Blackwell GPUs and Grace CPUs with fifth-generation NVIDIA NVLink™. With AWS’s networking (EFA), virtualisation (AWS Nitro System), and clustering (Amazon EC2 UltraClusters), customers can scale to thousands of GB200 Superchips.

NVIDIA Blackwell on AWS significantly speeds up inference workloads for resource-intensive, multi-trillion-parameter language models.

Building on the success of the NVIDIA H100-powered EC2 P5 instances, AWS intends to introduce EC2 instances featuring the new B100 GPUs, deployed in EC2 UltraClusters. These aim to accelerate generative AI training and inference at large scale.

GB200s will also be accessible on NVIDIA DGX™ Cloud, an AI platform co-engineered on AWS, providing enterprise developers with infrastructure and software for advanced generative AI models. Blackwell-powered DGX Cloud instances on AWS aim to expedite the development of cutting-edge generative AI and LLMs surpassing 1 trillion parameters.

Products Aim to Improve AI Security

NVIDIA said that as organisations embrace AI, ensuring data is handled securely during training is paramount, and protecting model weights is crucial for safeguarding intellectual property.

AWS already offers security features, providing customers with data control. AWS Nitro System, combined with NVIDIA GB200, strengthens AI security, preventing unauthorised access to model weights. GB200 enables encryption of NVLink connections and data transfers, while EFA encrypts distributed training and inference data.

With GB200 on Amazon EC2, AWS offers an execution environment using Nitro Enclaves and AWS KMS. Nitro Enclaves encrypt training data and weights with KMS, ensuring secure communication with GB200.
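The release does not spell out the encryption flow; as a rough sketch of the general pattern it builds on (envelope encryption with AWS KMS), the example below generates a KMS data key and uses it to encrypt a model-weights file locally. The key alias and file names are placeholder assumptions, and the attestation-based key release that Nitro Enclaves add on top is not shown.

```python
# Hypothetical sketch: envelope-encrypting model weights with an AWS KMS data
# key. The KMS key alias and file paths are placeholders; enclave attestation
# is out of scope here.
import os

import boto3
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

kms = boto3.client("kms", region_name="us-east-1")

# Request a fresh data key: a plaintext copy for local encryption and an
# encrypted copy to store alongside the protected weights.
data_key = kms.generate_data_key(
    KeyId="alias/model-weights-key",  # placeholder key alias
    KeySpec="AES_256",
)

aesgcm = AESGCM(data_key["Plaintext"])
nonce = os.urandom(12)

with open("model_weights.bin", "rb") as f:
    ciphertext = aesgcm.encrypt(nonce, f.read(), None)

# Persist only the nonce, the ciphertext, and the *encrypted* data key;
# the plaintext key stays in memory and is discarded after use.
with open("model_weights.enc", "wb") as f:
    f.write(nonce + ciphertext)
with open("model_weights.key", "wb") as f:
    f.write(data_key["CiphertextBlob"])
```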

NVIDIA said Project Ceiba, a collaboration between NVIDIA and AWS, will deliver one of the world’s fastest AI supercomputers. Hosted on AWS, it features 20,736 B200 GPUs connected to 10,368 NVIDIA Grace CPUs and scales out over EFA networking for high throughput.

This supercomputer is intended to advance AI research for LLMs, graphics, simulation, and more, driving future generative AI innovation.

Partnership Speeds Up GenAI in Healthcare and Life Sciences

The collaboration intends to provide high-performance, cost-effective inference for generative AI through Amazon SageMaker integration with NVIDIA NIM™ inference microservices, available with NVIDIA AI Enterprise. 

This combination enables customers to quickly deploy pre-compiled and optimised foundation models (FMs) to SageMaker, reducing time-to-market for generative AI applications.
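The SageMaker integration is described only at a high level here. As an illustrative sketch, not the announced NIM integration itself, the snippet below deploys a container-packaged model to a SageMaker real-time endpoint with the SageMaker Python SDK; the container image URI, IAM role, instance type, and endpoint name are placeholder assumptions.

```python
# Hypothetical sketch: deploying a container-packaged model to a SageMaker
# real-time endpoint with the SageMaker Python SDK. The image URI, role ARN,
# instance type, and endpoint name are placeholders.
import sagemaker
from sagemaker.model import Model

session = sagemaker.Session()

model = Model(
    image_uri="123456789012.dkr.ecr.us-east-1.amazonaws.com/placeholder-inference:latest",
    role="arn:aws:iam::123456789012:role/PlaceholderSageMakerRole",
    sagemaker_session=session,
)

# Provision a GPU-backed real-time endpoint hosting the container.
predictor = model.deploy(
    initial_instance_count=1,
    instance_type="ml.g5.12xlarge",   # placeholder GPU instance type
    endpoint_name="demo-genai-endpoint",
)

# Invoke the endpoint with a simple JSON payload.
response = predictor.predict(
    '{"inputs": "Hello"}',
    initial_args={"ContentType": "application/json"},
)
print(response)
```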

AWS and NVIDIA have expanded their collaboration on computer-aided drug discovery with new NVIDIA BioNeMo™ FMs for generative chemistry, protein structure prediction, and drug-target interactions.

These models will soon be accessible on AWS HealthOmics, facilitating genomic, transcriptomic, and omics data storage, query, and analysis for healthcare and life sciences organisations.
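HealthOmics stores are reachable through the standard AWS SDK; purely as an illustrative sketch of that data layer (the BioNeMo models themselves are not shown), the snippet below lists HealthOmics sequence stores with boto3. The region is an assumption.

```python
# Hypothetical sketch: enumerating AWS HealthOmics sequence stores with boto3.
# The region is an assumption; the stores listed are whatever the account holds.
import boto3

omics = boto3.client("omics", region_name="us-east-1")

# Sequence stores hold raw genomic read data for downstream analysis.
for store in omics.list_sequence_stores()["sequenceStores"]:
    print(store["id"], store.get("name"))
```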

The AWS HealthOmics and NVIDIA Healthcare teams have also collaborated to launch generative AI microservices for drug discovery, MedTech, and digital health.

This effort includes a new catalogue of GPU-accelerated cloud endpoints for biology, chemistry, imaging, and healthcare data, aiming to enable healthcare enterprises to utilise advancements in generative AI on AWS.
