AI Inference Solutions

# NVIDIA Inference Platform

Powering the most performant, efficient, and profitable AI factories.

[Get Started](#next-steps)

[Read Series](https://blogs.nvidia.com/blog/tag/think-smart/)   |    [Performance Benchmarks](https://developer.nvidia.com/deep-learning-performance-training-inference/ai-inference)    |    [For Developers](https://developer.nvidia.com/topics/ai/ai-inference/)

Overview

## How to Profitably Scale AI Inference

AI inference—how we experience AI through chatbots, copilots, and creative tools—is scaling at a double exponential pace. User adoption is accelerating while the AI tokens generated per interaction, driven by agentic workflows, long-thinking reasoning, and [mixture-of-experts](https://www.nvidia.com/en-us/glossary/mixture-of-experts.md) (MoE) models, soar in parallel.

To enable inference at this massive scale, NVIDIA delivers data-center-scale architecture on an annual rhythm. Our extreme hardware and software codesign delivers order-of-magnitude leaps in performance and drives down the cost per token, making advanced AI experiences economically viable at scale.

NVIDIA GB300 NVL72 delivers 50x more tokens per watt and 35x lower cost per token than Hopper™, maximizing revenue within the same power budget and driving higher profit margins. Continuous software optimizations extract maximum performance at chip, rack, and data center scale, further improving return on investment over time.
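
The economics above can be sketched with simple arithmetic. The following is an illustrative model, not an NVIDIA benchmark: every number except the 50x and 35x multipliers from the text is a made-up placeholder, and the function name and baselines are hypothetical.

```python
# Hypothetical AI-factory token economics under a fixed power budget.
# All baseline numbers are placeholders; only the 50x (tokens/watt) and
# 35x (cost/token) multipliers come from the text above.

POWER_BUDGET_MW = 1.0                # fixed data-center power budget
BASE_TOKENS_PER_SEC_PER_MW = 1.0e6   # placeholder baseline throughput
BASE_COST_PER_MTOK = 3.50            # placeholder $ per million tokens

def factory_economics(perf_per_watt_mult, cost_per_token_div, price_per_mtok=5.00):
    """Return (tokens/s, revenue $/s, cost $/s) within the power budget."""
    tps = POWER_BUDGET_MW * BASE_TOKENS_PER_SEC_PER_MW * perf_per_watt_mult
    revenue = tps / 1e6 * price_per_mtok
    cost = tps / 1e6 * (BASE_COST_PER_MTOK / cost_per_token_div)
    return tps, revenue, cost

hopper = factory_economics(1, 1)      # baseline generation
gb300 = factory_economics(50, 35)     # 50x tokens/watt, 35x lower token cost
```

With the same power budget, throughput (and therefore revenue) scales with the tokens-per-watt multiplier, while per-token cost shrinks, so margin grows on both axes.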

### NVIDIA Vera Rubin Opens Next AI Frontier

The NVIDIA Vera Rubin platform consists of seven new chips now in full production to scale the world’s largest AI factories.

[Read Blog](https://nvidianews.nvidia.com/news/nvidia-vera-rubin-platform)

### Leading Inference Providers Cut AI Costs by up to 10x With Open Source Models on NVIDIA Blackwell

Baseten, Deep Infra, Fireworks AI, and Together AI are reducing cost per token across industries with optimized inference stacks running on the NVIDIA Blackwell platform.

[Read Blog](https://blogs.nvidia.com/blog/blackwell-inferencemax-benchmark-results/)

## Inference Performance Drives Down Token Cost

DeepSeek-R1 8K/1K results show a 15x performance benefit and revenue opportunity for NVIDIA Blackwell GB200 NVL72 over Hopper H200.

Benefits

### Highest Performance Maximizes Revenue

With extreme hardware and software codesign, NVIDIA GB300 NVL72 delivers 50x more tokens per watt than Hopper, maximizing AI factory revenue within the same power budget. Continuous software optimizations extract maximum performance at chip, rack, and data center scale, further improving return on investment over time.

### Lowest Token Cost Expands Profit Margins

The NVIDIA GB300 NVL72 system delivers 35x lower cost per token than the NVIDIA Hopper platform, driving higher profit margins for AI factories. With each generation, performance improvements far outpace infrastructure costs, creating better economics to enable advanced AI experiences at massive scale.

### Full Stack Optimizes Every Model and Use Case

NVIDIA supports every model across generative AI, traditional ML, scientific computing, biology, and physical AI. From latency-sensitive real-time applications to high-throughput batch processing, NVIDIA delivers the best performance for every use case. The platform provides maximum flexibility and programmability to choose the optimal configuration for evolving workload and business requirements.

### Native Integration Accelerates Deployment

NVIDIA’s production-ready software, including Dynamo and TensorRT™ LLM, and native integration with leading frameworks such as PyTorch, vLLM, SGLang, and llm-d, deliver the most robust AI inference stack. As model architectures and inference techniques rapidly evolve, NVIDIA’s stack ensures the fastest path from innovation to production.

Platform

## Extreme Hardware–Software Codesign

Powerful hardware without smart orchestration wastes potential; great software without fast hardware means sluggish inference performance. NVIDIA’s inference platform delivers a continuously optimized full-stack solution with codesigned compute, networking, storage, and software to enable the highest performance across diverse workloads. 

Explore some of the key NVIDIA hardware and software innovations.

### NVIDIA Vera Rubin NVL72

The NVIDIA Vera Rubin platform delivers 10x better performance per watt and 10x lower cost per token than Blackwell. Through extreme codesign, the platform pairs Rubin CPX for massive-context prefill with Rubin GPUs for fast decode, eliminating the trade-off between speed and scale.

[Explore Seven New Chips, One AI Supercomputer](https://www.nvidia.com/en-us/data-center/gb300-nvl72.md)

### NVIDIA Grace Blackwell Ultra NVL72

GB300 NVL72 features 72 B300 GPUs connected with 130 TB/s of NVLink™ bandwidth, letting them communicate seamlessly with each other and unlocking massive mixture-of-experts models at scale.

[Experience Superior AI Reasoning Performance With GB300 NVL72](https://www.nvidia.com/en-us/data-center/gb300-nvl72.md)

### NVIDIA Dynamo

NVIDIA Dynamo is an open source distributed inference-serving framework for deploying models across multi-node environments at AI factory scale. It streamlines distributed serving by disaggregating inference, optimizing routing, and extending memory by caching data to cost-effective storage tiers.
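
The disaggregation idea above can be illustrated with a toy scheduler. This is a conceptual sketch only, not Dynamo's actual API: compute-bound prefill and memory-bound decode run on separate worker pools, a router picks the least-loaded worker in each pool, and the KV cache is handed off between phases. All class and worker names are hypothetical.

```python
# Conceptual sketch of disaggregated inference serving (not Dynamo's API).
from dataclasses import dataclass

@dataclass
class Worker:
    name: str
    load: int = 0   # number of requests routed to this worker

class DisaggregatedServer:
    def __init__(self, prefill_names, decode_names):
        self.prefill_pool = [Worker(n) for n in prefill_names]
        self.decode_pool = [Worker(n) for n in decode_names]

    def _route(self, pool):
        # Least-loaded routing within a pool.
        worker = min(pool, key=lambda w: w.load)
        worker.load += 1
        return worker

    def generate(self, prompt):
        # Phase 1: compute-bound prefill builds the KV cache.
        p = self._route(self.prefill_pool)
        kv_cache = {"prompt": prompt}  # stands in for key/value tensors
        # Phase 2: memory-bound decode consumes the transferred cache.
        d = self._route(self.decode_pool)
        return f"[{p.name}->{d.name}] reply to: {kv_cache['prompt']}"

server = DisaggregatedServer(["prefill-0"], ["decode-0", "decode-1"])
first = server.generate("What is a mixture-of-experts model?")
second = server.generate("Summarize this log.")
```

Because decode is the memory-bound phase, real deployments typically size the decode pool larger than the prefill pool, which the least-loaded router then balances across.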

[Seamlessly Deploy Across Multiple Nodes With Dynamo](https://www.nvidia.com/en-us/ai/dynamo.md)

### TensorRT LLM

TensorRT LLM is an open source library for high-performance, real-time LLM inference on NVIDIA GPUs, with continuous performance optimizations. With a modular Python runtime, PyTorch-native authoring, and a stable production API, it's built to maximize throughput, minimize costs, and deliver fast user experiences.
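
One scheduling technique behind this kind of throughput is in-flight (continuous) batching: finished sequences leave the batch immediately and queued requests join mid-flight, instead of the whole batch draining before new work starts. The following is a toy pure-Python sketch of that idea, not TensorRT LLM's actual API.

```python
# Toy sketch of in-flight (continuous) batching for LLM decoding.
from collections import deque

def serve(requests, max_batch=2):
    """requests: list of (request_id, tokens_to_generate).
    Returns, per decode step, the sorted ids of active sequences."""
    queue = deque(requests)
    batch, steps = {}, []
    while queue or batch:
        # Admit queued requests whenever a batch slot frees up (in-flight).
        while queue and len(batch) < max_batch:
            rid, n = queue.popleft()
            batch[rid] = n
        # One decode step generates one token per active sequence.
        steps.append(sorted(batch))
        for rid in list(batch):
            batch[rid] -= 1
            if batch[rid] == 0:
                del batch[rid]   # evict finished sequences immediately
    return steps

# 'a' needs 1 token, 'b' needs 3, 'c' needs 2: 'c' joins the moment
# 'a' finishes, rather than waiting for 'b' to complete too.
schedule = serve([("a", 1), ("b", 3), ("c", 2)])
```

Static batching would take five steps for the same workload; in-flight batching finishes in three because no slot ever idles.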

[Optimize Inference With TensorRT LLM](https://developer.nvidia.com/tensorrt-llm)

## Decoding the Performance Pareto Frontiers

Ever wonder how complex AI trade-offs translate into real-world outcomes? Explore different points across the performance curves below to see firsthand how extreme hardware and software codesign make NVIDIA Blackwell Ultra the most performant, efficient, and profitable choice.
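 
Reading these curves comes down to Pareto optimality: each configuration trades per-user speed (TPS/user) against total efficiency (TPS/MW), and a configuration is on the frontier if no other beats it on both axes. The sketch below shows that test on illustrative, made-up sample points.

```python
# Minimal Pareto-frontier filter for throughput/efficiency trade-offs.
def pareto_frontier(points):
    """points: list of (tps_per_user, tps_per_mw) tuples.
    Returns the points not dominated on both axes, sorted by TPS/user."""
    frontier = []
    for p in points:
        dominated = any(q[0] >= p[0] and q[1] >= p[1] and q != p
                        for q in points)
        if not dominated:
            frontier.append(p)
    return sorted(frontier)

# Illustrative configurations, not measured data: (TPS/user, TPS/MW).
configs = [(20, 900), (50, 700), (120, 300), (40, 500)]
frontier = pareto_frontier(configs)
```

Here (40, 500) is dominated by (50, 700), which is both faster per user and more efficient, so only the other three points lie on the frontier; picking among them is the latency-versus-throughput business decision.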

DeepSeek-R1, ISL = 32K, OSL = 8K (input/output sequence lengths). GB300 NVL72 with FP4 and Dynamo disaggregation; H100 with FP8 in-flight batching. Projected performance, subject to change.


[Explore More with NVIDIA Dynamo AI Configurator](https://youtu.be/KuZXeol0fKk)

Customer Stories

## How Industry Leaders Are Driving Innovation With AI Inference

[More Customer Stories](https://www.nvidia.com/en-us/case-studies/?filters=%7B%22Products%22%3A%5B%22Customized+Inference%22%5D%7D)

Amdocs

### Accelerate Generative AI Performance and Lower Costs

Read how Amdocs built amAIz, a domain-specific generative AI platform for telcos, using NVIDIA DGX™ Cloud and NVIDIA NIM inference microservices to improve latency, boost accuracy, and reduce costs.

[Read Case Study](https://developer.nvidia.com/blog/amdocs-accelerates-generative-ai-performance-and-lowers-costs-with-nvidia-nim/)

Snapchat

### Enhancing Apparel Shopping With AI

Learn how Snapchat enhanced the clothes shopping experience and emoji-aware optical character recognition using Triton Inference Server to scale, reduce costs, and accelerate time to production.

[Read Case Study](https://developer.nvidia.com/blog/enhancing-the-apparel-shopping-experience-with-ai-emoji-aware-ocr-and-snapchats-screenshop/)

Amazon

### Accelerate Customer Satisfaction

Discover how Amazon improved customer satisfaction by making inference 5x faster with TensorRT.

[Read Case Study](https://aws.amazon.com/blogs/machine-learning/how-amazon-music-uses-sagemaker-with-nvidia-to-optimize-ml-training-and-inference-performance-and-cost/)

Resources

## The Latest in AI Inference Resources

1. Blogs
2. Sessions
3. Training
4. Videos


[View More Sessions](https://www.nvidia.com/en-us/on-demand/search/?facet.mimetype%5B%5D=event%20session&layout=list&page=1&q=inference&sort=relevance&sortDir=desc)

### Get Started With Inference on NVIDIA LaunchPad

Have an existing AI project? Apply to get hands-on experience testing and prototyping your AI solutions.

[Apply Now](https://www.nvidia.com/en-us/launchpad/ai/inference.md)

### Explore Generative AI and LLM Learning Paths

Elevate your technical skills in generative AI and large language models with our comprehensive learning paths.

[Explore Now](https://www.nvidia.com/en-us/learn/learning-path/generative-ai-llm.md)

### Get Started With Generative AI Inference on NVIDIA LaunchPad

Fast-track your generative AI journey with immediate, short-term access to NVIDIA NIM inference microservices and AI models—for free.

[Get Started](https://www.nvidia.com/en-us/launchpad/ai/generative-ai-inference.md)

[View More Training](https://www.nvidia.com/en-us/launchpad.md)

### Deploying Generative AI in Production With NVIDIA NIM

Unlock the potential of generative AI with NVIDIA NIM. This video dives into how NVIDIA NIM microservices can transform your AI deployment into a production-ready powerhouse.

[Watch Video (01:55)](https://www.youtube.com/watch?v=bpOvayHifNQ)

### Top 5 Reasons Why Triton Is Simplifying Inference

Triton Inference Server simplifies the deployment of AI models at scale in production. This open-source inference-serving software lets teams deploy trained AI models from any framework, stored locally or on a cloud platform, on any GPU- or CPU-based infrastructure.
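
Triton discovers models from a model repository on disk: each model gets a directory containing a `config.pbtxt` and numbered version subdirectories. A minimal sketch for an ONNX image classifier might look like the following; the model name, tensor names, and dimensions are illustrative placeholders, not a specific model's real signature.

```
model_repository/
└── my_classifier/            # hypothetical model name
    ├── config.pbtxt
    └── 1/                    # version directory
        └── model.onnx

# config.pbtxt
name: "my_classifier"
platform: "onnxruntime_onnx"
max_batch_size: 8
input [
  { name: "input0", data_type: TYPE_FP32, dims: [ 3, 224, 224 ] }
]
output [
  { name: "output0", data_type: TYPE_FP32, dims: [ 1000 ] }
]
```

Pointing Triton at `model_repository/` is enough for it to load and serve every model it finds there, from any supported framework backend.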

[Watch Video (01:59)](https://www.youtube.com/watch?v=1kOaYiNVgFs)

UneeQ

### NVIDIA Unveils NIMs

Ever wondered what NVIDIA’s NIM technology is capable of? Delve into the world of mind-blowing digital humans and robots to see what NIMs make possible.

[Watch Video (13:42)](https://www.youtube.com/watch?v=NJl660hIVKo)

[View More Videos](https://www.youtube.com/watch?v=9nJ-_F12Duc&list=PLZHnYvH1qtOa0nKlHJuG3pZ2OIYd6OoxP)

Next Steps

## Ready to Get Started?

Explore everything you need to start developing your AI application, including the latest documentation, tutorials, technical blogs, and more.

[Start Developing](https://developer.nvidia.com/topics/ai/ai-inference)
[Start Building](https://build.nvidia.com/)

### Find the Right Hardware for Your Inference Workloads

NVIDIA data center solutions are available through select NVIDIA Partner Network (NPN) partners. Explore flexible and affordable options for accessing the latest NVIDIA data center technologies through our network of partners.

[Browse NVIDIA Marketplace](https://marketplace.nvidia.com/en-us/enterprise/)

### Get the Latest on NVIDIA AI Inference

Sign up for the latest AI inference news, updates, and more from NVIDIA.

[Stay Informed](#sign-up-form)
