<!--
SPDX-FileCopyrightText: Copyright (c) 2024-2025 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
SPDX-License-Identifier: Apache-2.0

Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
-->

# NVIDIA Dynamo

<table width="100%">
  <tr>
    <td align="left">
      <a href="https://opensource.org/licenses/Apache-2.0">
        <img src="https://img.shields.io/badge/License-Apache_2.0-blue.svg" alt="License">
      </a>
    </td>
    <td align="left">
      <a href="https://github.com/ai-dynamo/dynamo/releases/latest">
        <img src="https://img.shields.io/github/v/release/ai-dynamo/dynamo" alt="GitHub Release">
      </a>
    </td>
    <td align="right">
      <a href="https://discord.gg/nvidia-dynamo">
        <img src="https://discord.com/api/guilds/1351250028588437504/widget.png?style=banner2" alt="Discord">
      </a>
    </td>
  </tr>
</table>

| **[Guides](docs/guides)** | **[Architecture and Features](docs/architecture.md)** | **[APIs](lib/bindings/python/README.md)** | **[SDK](deploy/dynamo/sdk/README.md)** |
| :--: | :--: | :--: | :--: |

NVIDIA Dynamo is a high-throughput, low-latency inference framework designed for serving generative AI and reasoning models in multi-node distributed environments. Dynamo is inference-engine agnostic (supporting TRT-LLM, vLLM, SGLang, and others) and provides LLM-specific capabilities such as:

- **Disaggregated prefill & decode inference** – Maximizes GPU throughput and lets you trade off between throughput and latency
- **Dynamic GPU scheduling** – Optimizes performance based on fluctuating demand
- **LLM-aware request routing** – Eliminates unnecessary KV cache re-computation
- **Accelerated data transfer** – Reduces inference response time using NIXL
- **KV cache offloading** – Leverages multiple memory hierarchies for higher system throughput

Built in Rust for performance and in Python for extensibility, Dynamo is fully open source and driven by a transparent, OSS-first development approach.

### Installation

The following examples require a few system-level packages.

```bash
apt-get update
DEBIAN_FRONTEND=noninteractive apt-get install -yq python3-dev libucx0

pip install ai-dynamo nixl vllm==0.7.2+dynamo
```
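
If the install succeeded, the `dynamo` CLI should be on your `PATH`. As a quick sanity check (assuming the CLI follows the usual `--help` convention), you can list the available subcommands:

```bash
dynamo --help
```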

> [!NOTE]
> TensorRT-LLM support is currently available on a [branch](https://github.com/ai-dynamo/dynamo/tree/dynamo/trtllm_llmapi_v1/examples/trtllm#building-the-environment).

### Running and Interacting with an LLM Locally

To run a model and interact with it locally, call `dynamo run` with a Hugging Face model. `dynamo run` supports several backends, including `mistralrs`, `sglang`, `vllm`, and `tensorrtllm`.

#### Example Command

```bash
dynamo run out=vllm deepseek-ai/DeepSeek-R1-Distill-Llama-8B
```

```
? User › Hello, how are you?
✔ User · Hello, how are you?
Okay, so I'm trying to figure out how to respond to the user's greeting. They said, "Hello, how are you?" and then followed it with "Hello! I'm just a program, but thanks for asking." Hmm, I need to come up with a suitable reply. ...
```
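
The `out=` argument selects the serving backend. As a sketch, assuming the other backends accept the same `out=<backend> <model>` syntax shown above, switching to `sglang` would look like:

```bash
dynamo run out=sglang deepseek-ai/DeepSeek-R1-Distill-Llama-8B
```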

### LLM Serving

Dynamo provides a simple way to spin up a local set of inference components, including:

- **OpenAI-Compatible Frontend** – High-performance, OpenAI-compatible HTTP API server written in Rust
- **Basic and KV-Aware Router** – Routes and load-balances traffic across a set of workers
- **Workers** – A set of pre-configured LLM serving engines

To run a minimal configuration, you can use a pre-configured example.

#### Start Dynamo Distributed Runtime Services

First, start the Dynamo Distributed Runtime services:

```bash
docker compose -f deploy/docker-compose.yml up -d
```
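
You can confirm the runtime services (typically etcd and NATS in this setup, though the exact set depends on the compose file) came up cleanly with:

```bash
docker compose -f deploy/docker-compose.yml ps
```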

#### Start Dynamo LLM Serving Components

Next, serve a minimal configuration with an HTTP server, a basic round-robin router, and a single worker.

```bash
cd examples/llm
dynamo serve graphs.agg:Frontend -f configs/agg.yaml
```
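
This command runs in the foreground, so use a separate terminal for the requests below. As a quick smoke test (assuming the frontend exposes the standard OpenAI model-listing endpoint), you can check that the worker has registered its model:

```bash
curl localhost:8000/v1/models | jq
```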

#### Send a Request

```bash
curl localhost:8000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "deepseek-ai/DeepSeek-R1-Distill-Llama-8B",
    "messages": [
    {
        "role": "user",
        "content": "Hello, how are you?"
    }
    ],
    "stream": false,
    "max_tokens": 300
  }' | jq
```
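
Because the frontend is OpenAI compatible, you can also stream tokens as they are generated by flipping the `stream` flag (a sketch of the same request; the response should arrive as server-sent events, as with the OpenAI API):

```bash
curl localhost:8000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "deepseek-ai/DeepSeek-R1-Distill-Llama-8B",
    "messages": [{"role": "user", "content": "Hello, how are you?"}],
    "stream": true,
    "max_tokens": 300
  }'
```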