<!--
SPDX-FileCopyrightText: Copyright (c) 2024-2025 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
SPDX-License-Identifier: Apache-2.0

Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
-->

# NVIDIA Dynamo

[![License](https://img.shields.io/badge/License-Apache_2.0-blue.svg)](https://opensource.org/licenses/Apache-2.0)
[![GitHub Release](https://img.shields.io/github/v/release/ai-dynamo/dynamo)](https://github.com/ai-dynamo/dynamo/releases/latest)
[![Discord](https://dcbadge.limes.pink/api/server/D92uqZRjCZ?style=flat)](https://discord.gg/nvidia-dynamo)

| **[Roadmap](https://github.com/ai-dynamo/dynamo/issues/762)** | **[Support Matrix](support_matrix.md)** | **[Guides](docs/guides)** | **[Architecture and Features](docs/architecture.md)** | **[APIs](lib/bindings/python/README.md)** | **[SDK](deploy/dynamo/sdk/README.md)** |

NVIDIA Dynamo is a high-throughput, low-latency inference framework designed for serving generative AI and reasoning models in multi-node distributed environments. Dynamo is inference-engine agnostic (supporting TRT-LLM, vLLM, SGLang, and others) and provides LLM-specific capabilities such as:

- **Disaggregated prefill & decode inference** – Maximizes GPU throughput and lets you trade off between throughput and latency
- **Dynamic GPU scheduling** – Optimizes performance based on fluctuating demand
- **LLM-aware request routing** – Eliminates unnecessary KV cache re-computation
- **Accelerated data transfer** – Reduces inference response time using NIXL
- **KV cache offloading** – Leverages multiple memory hierarchies for higher system throughput

Built in Rust for performance and in Python for extensibility, Dynamo is fully open source and driven by a transparent, OSS-first (Open Source Software) development approach.

### Installation

The following examples require a few system-level packages. We recommend Ubuntu 24.04 with an x86_64 CPU; see [support_matrix.md](support_matrix.md) for details.

```bash
apt-get update
DEBIAN_FRONTEND=noninteractive apt-get install -yq python3-dev python3-pip python3-venv libucx0
python3 -m venv venv
source venv/bin/activate

pip install ai-dynamo[all]
```

> [!NOTE]
> To ensure compatibility, please refer to the examples in the release branch or tag that matches the version you installed.
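
To find the matching release branch or tag, you can check which version is installed. Here is a minimal sketch using Python's standard `importlib.metadata` (assuming `ai-dynamo` is already installed in the active environment):

```python
# Print the installed ai-dynamo version so you can check out the
# matching release branch or tag of this repository.
from importlib.metadata import version

print(version("ai-dynamo"))
```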

### Building the Dynamo Base Image

Although it is not needed for local development, deploying your Dynamo pipelines to Kubernetes requires building a Dynamo base image and pushing it to your container registry. You can use any container registry of your choice, such as:
- Docker Hub (docker.io)
- NVIDIA NGC Container Registry (nvcr.io)
- Any private registry

Here's how to build it:

```bash
./container/build.sh
docker tag dynamo:latest-vllm <your-registry>/dynamo-base:latest-vllm
docker login <your-registry>
docker push <your-registry>/dynamo-base:latest-vllm
```

Notes about builds for specific frameworks:
- For specific details on the `--framework vllm` build, see [here](examples/llm/README.md).
- For specific details on the `--framework tensorrtllm` build, see [here](examples/tensorrt_llm/README.md).

After building, you can use this image by setting the `DYNAMO_IMAGE` environment variable to point to your built image:
```bash
export DYNAMO_IMAGE=<your-registry>/dynamo-base:latest-vllm
```

> [!NOTE]
> We are working on leaner base images that can be built using the targets in the top-level Earthfile.

### Running and Interacting with an LLM Locally

To run a model and interact with it locally, call `dynamo run` with a Hugging Face model. `dynamo run` supports several backends, including `mistralrs`, `sglang`, `vllm`, and `tensorrtllm`.

#### Example Command

```bash
dynamo run out=vllm deepseek-ai/DeepSeek-R1-Distill-Llama-8B
```

This starts an interactive chat session:

```
? User › Hello, how are you?
✔ User · Hello, how are you?
Okay, so I'm trying to figure out how to respond to the user's greeting. They said, "Hello, how are you?" and then followed it with "Hello! I'm just a program, but thanks for asking." Hmm, I need to come up with a suitable reply. ...
```

### LLM Serving

Dynamo provides a simple way to spin up a local set of inference components, including:

- **OpenAI-Compatible Frontend** – A high-performance, OpenAI-compatible HTTP API server written in Rust
- **Basic and KV-Aware Router** – Routes and load-balances traffic across a set of workers
- **Workers** – A set of pre-configured LLM serving engines

To run a minimal configuration, you can use a pre-configured example.

#### Start Dynamo Distributed Runtime Services

First, start the Dynamo Distributed Runtime services:

```bash
docker compose -f deploy/docker-compose.yml up -d
```
#### Start Dynamo LLM Serving Components

Next, serve a minimal configuration with an HTTP server, a basic round-robin router, and a single worker:

```bash
cd examples/llm
dynamo serve graphs.agg:Frontend -f configs/agg.yaml
```

#### Send a Request

```bash
curl localhost:8000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "deepseek-ai/DeepSeek-R1-Distill-Llama-8B",
    "messages": [
    {
        "role": "user",
        "content": "Hello, how are you?"
    }
    ],
    "stream":false,
    "max_tokens": 300
  }' | jq
```
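
Because the frontend is OpenAI-compatible, any OpenAI client library can talk to it as well. Here is a minimal sketch using the `openai` Python package (an assumption, not part of Dynamo itself; install it with `pip install openai` if you want to try this):

```python
# Minimal sketch: send the same chat request via the openai Python client.
# Assumes the serving components above are running on localhost:8000.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:8000/v1",
    api_key="unused",  # assumption: the local frontend does not check API keys
)

response = client.chat.completions.create(
    model="deepseek-ai/DeepSeek-R1-Distill-Llama-8B",
    messages=[{"role": "user", "content": "Hello, how are you?"}],
    max_tokens=300,
)
print(response.choices[0].message.content)
```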

### Local Development

If you use VS Code or Cursor, we provide a `.devcontainer` folder built on [Microsoft's Dev Containers extension](https://code.visualstudio.com/docs/devcontainers/containers). See the [.devcontainer README](.devcontainer/README.md) for details.

Otherwise, to develop locally, we recommend working inside the container:

```bash
# Build and enter the development container
./container/build.sh
./container/run.sh -it --mount-workspace

# Build the Rust binaries and copy them into the SDK CLI's bin directory
cargo build --release
mkdir -p /workspace/deploy/dynamo/sdk/src/dynamo/sdk/cli/bin
cp /workspace/target/release/http /workspace/deploy/dynamo/sdk/src/dynamo/sdk/cli/bin
cp /workspace/target/release/llmctl /workspace/deploy/dynamo/sdk/src/dynamo/sdk/cli/bin
cp /workspace/target/release/dynamo-run /workspace/deploy/dynamo/sdk/src/dynamo/sdk/cli/bin

# Install the SDK in editable mode and put the SDK and planner sources on PYTHONPATH
uv pip install -e .
export PYTHONPATH=$PYTHONPATH:/workspace/deploy/dynamo/sdk/src:/workspace/components/planner/src
```

#### Conda Environment

Alternatively, you can use a conda environment:

```bash
conda activate <ENV_NAME>

pip install nixl # Or install https://github.com/ai-dynamo/nixl from source

cargo build --release

# To install ai-dynamo-runtime from source
cd lib/bindings/python
pip install .

cd ../../../
pip install .[all]

# To test
docker compose -f deploy/docker-compose.yml up -d
cd examples/llm
dynamo serve graphs.agg:Frontend -f configs/agg.yaml
```