NVIDIA Dynamo is a modular, high-throughput, low-latency inference framework designed for serving generative AI and reasoning models in multi-node distributed environments. It enables seamless scaling of inference workloads across GPU nodes and the dynamic allocation of GPU workers to address traffic bottlenecks at different stages of the model pipeline.

Dynamo is inference engine agnostic (it supports TRT-LLM, vLLM, SGLang, and others) and captures LLM-specific capabilities such as:
- **Disaggregated prefill & decode inference** – Separates the context (prefill) and generation (decode) phases of inference requests onto distinct GPUs and GPU nodes, maximizing GPU throughput and enabling an explicit trade-off between throughput and latency.
- **Dynamic GPU scheduling** – Optimizes performance based on fluctuating demand.
- **Accelerated data transfer** – Reduces inference response time using NIXL.
- **KV cache offloading** – Leverages multiple memory hierarchies for higher system throughput.
Built in Rust for performance and in Python for extensibility, Dynamo is fully open source and driven by a transparent, OSS (Open Source Software) first development approach.

NVIDIA Dynamo includes the following key innovations:
- **Smart Router**: An LLM-aware router that directs requests across large GPU fleets to minimize costly key-value (KV) cache recomputations for repeat or overlapping requests, freeing up GPUs to respond to new incoming requests.
- **Low-Latency Communication Library**: An inference-optimized library that supports state-of-the-art GPU-to-GPU communication and abstracts the complexity of data exchange across heterogeneous devices and networking protocols, accelerating data transfers.
- **Memory Manager**: An engine that intelligently offloads and reloads inference data (KV cache) to and from lower-cost memory and storage devices using NVIDIA NIXL without impacting the user experience.
> [!NOTE]
> This project is currently in the alpha / experimental /
> rapid-prototyping stage and we are actively looking for feedback and contributions.
You can build the Dynamo container using the build scripts
in `container/` (or directly with `docker build`).
> [!NOTE]
> TensorRT-LLM Support is currently available on a [branch](https://github.com/ai-dynamo/dynamo/tree/dynamo/trtllm_llmapi_v1/examples/trtllm#building-the-environment)
We provide two types of builds:

1. `VLLM`, which includes our vLLM backend using the new NIXL communication library.
2. `TENSORRTLLM`, which includes our TRT-LLM backend.
For example, if you want to build a container for the `VLLM` backend you can run
<!--pytest.mark.skip-->
```bash
./container/build.sh
```
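The same script can also produce the `TENSORRTLLM` build. The flag below is an assumption rather than a confirmed option name, so check `container/build.sh` (or the corresponding example's instructions) for the exact framework selector it supports:

<!--pytest.mark.skip-->
```bash
# Sketch: the --framework flag is an assumption – verify the option name in container/build.sh.
./container/build.sh --framework tensorrtllm
```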
Please see the instructions in the corresponding example for specific build instructions.

### Running and Interacting with an LLM Locally

To run a model and interact with it locally you can call `dynamo run` with a Hugging Face model. `dynamo run` supports several backends, including `mistralrs`, `sglang`, `vllm`, and `tensorrtllm`.

#### Example Command

<!--pytest.mark.skip-->
```bash
dynamo run out=vllm deepseek-ai/DeepSeek-R1-Distill-Llama-8B
```

```
? User › Hello, how are you?
✔ User · Hello, how are you?
Okay, so I'm trying to figure out how to respond to the user's greeting. They said, "Hello, how are you?" and then followed it with "Hello! I'm just a program, but thanks for asking." Hmm, I need to come up with a suitable reply. ...
```
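Because `dynamo run` supports multiple backends, the same `out=` selector can be pointed at a different engine. The command below is a sketch that assumes the `out=` syntax applies uniformly across backends and that the chosen backend was included in your build:

<!--pytest.mark.skip-->
```bash
# Sketch: swap the engine by changing out=<backend>; requires that backend in your build.
dynamo run out=sglang deepseek-ai/DeepSeek-R1-Distill-Llama-8B
```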
## Running Dynamo for Local Testing and Development
You can run the Dynamo container using the run scripts in
`container/` (or directly with `docker run`).
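If you prefer to invoke Docker directly, a minimal sketch looks like the following; the image tag is an assumption, so substitute whatever tag your build produced:

<!--pytest.mark.skip-->
```bash
# Sketch: the tag dynamo:latest-vllm is an assumption – use the tag from your own build.
docker run --gpus all -it --rm dynamo:latest-vllm
```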
The run script offers a few common workflows:

1. Running a command in a container and exiting.

### LLM Serving

Dynamo provides a simple way to spin up a local set of inference
components including:
- **OpenAI Compatible Frontend** – High performance OpenAI compatible HTTP API server written in Rust (see the example request after this list).
- **Basic and KV Aware Router** – Route and load balance traffic to a set of workers.
- **Workers** – Set of pre-configured LLM serving engines.
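Once the frontend is up, any OpenAI-style client can talk to it. The request below is a sketch: the port and the model name are assumptions, so adjust them to match your deployment:

<!--pytest.mark.skip-->
```bash
# Sketch: port 8000 and the model name are assumptions – adjust to your deployment.
curl http://localhost:8000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
        "model": "deepseek-ai/DeepSeek-R1-Distill-Llama-8B",
        "messages": [{"role": "user", "content": "Hello, how are you?"}],
        "max_tokens": 64
      }'
```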
To run a minimal configuration you can use a pre-configured example.