<!--
SPDX-FileCopyrightText: Copyright (c) 2025 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
SPDX-License-Identifier: Apache-2.0

Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
-->

# TensorRT-LLM Integration with Triton Distributed

This example demonstrates how to use Triton Distributed to serve large language models with the tensorrt_llm engine, enabling efficient model serving with both monolithic and disaggregated deployment options.

## Prerequisites

Start required services (etcd and NATS):

   Option A: Using [Docker Compose](/runtime/rust/docker-compose.yml) (Recommended)
   ```bash
   docker-compose up -d
   ```

   Option B: Manual Setup

    - [NATS.io](https://docs.nats.io/running-a-nats-service/introduction/installation) server with [Jetstream](https://docs.nats.io/nats-concepts/jetstream)
        - example: `nats-server -js --trace`
    - [etcd](https://etcd.io) server
        - follow instructions in [etcd installation](https://etcd.io/docs/v3.5/install/) to start an `etcd-server` locally
        - example: `etcd --listen-client-urls http://0.0.0.0:2379 --advertise-client-urls http://0.0.0.0:2379`

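Optionally, you can verify that both services are reachable before continuing. This is a minimal sanity check assuming the default ports (4222 for NATS, 2379 for etcd) and that `nc` and `curl` are available:

```bash
# Check that the NATS client port accepts connections (default: 4222).
nc -z localhost 4222 && echo "NATS reachable"

# Check etcd health over its default client port (2379).
curl -s http://localhost:2379/health
```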

## Building the Environment

TODO: Remove the internal references below.

- Build TRT-LLM wheel using latest tensorrt_llm main

```bash
git clone https://github.com/NVIDIA/TensorRT-LLM.git
cd TensorRT-LLM

# Start a dev docker container. Don't forget to mount your home directory to /home in the docker run command.
make -C docker jenkins_run LOCAL_USER=1 DOCKER_RUN_ARGS="-v /user/home:/home"

# Build wheel for the GPU architecture you are currently using ("native").
# The -f flag runs a fast build, which speeds up compilation but may not work on all GPUs; omit it for full functionality.
python3 scripts/build_wheel.py --clean --trt_root /usr/local/tensorrt -a native -i -p -ccache

# Copy wheel to your local directory
cp build/tensorrt_llm-*.whl /home
```

- Build the Triton Distributed container
```bash
# Build image
./container/build.sh --base-image gitlab-master.nvidia.com:5005/dl/dgx/tritonserver/tensorrt-llm/amd64 --base-image-tag krish-fix-trtllm-build.23766174
```

Alternatively, you can build with the latest tensorrt_llm pipeline image as shown below:
```bash
# Build image
./container/build.sh --framework TENSORRTLLM --skip-clone-tensorrtllm 1 --base-image urm.nvidia.com/sw-tensorrt-docker/tensorrt-llm-staging/release --base-image-tag main
```
**Note:** If you are using the latest tensorrt_llm image, you do not need to install the TRT-LLM wheel.

## Launching the Environment
```bash
# Run the image interactively from within the Triton Distributed root directory.
./container/run.sh --framework TENSORRTLLM -it -v /home/:/home/

# Install the TRT-LLM wheel. No need to do this if you are using the latest tensorrt_llm image.
pip install /home/tensorrt_llm-*.whl
```
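
As an optional sanity check, confirm that the tensorrt_llm package is importable inside the container:

```bash
python3 -c "import tensorrt_llm; print(tensorrt_llm.__version__)"
```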

## Deployment Options

Note: NATS and ETCD servers should be running and accessible from the container as described in the [Prerequisites](#prerequisites) section.

### Monolithic Deployment

#### 1. HTTP Server

Run the HTTP server (with debug-level logging):
```bash
DYN_LOG=DEBUG http &
```
By default the server will run on port 8080.

Add the model to the server:
```bash
llmctl http add chat TinyLlama/TinyLlama-1.1B-Chat-v1.0 dynamo.tensorrt-llm.chat/completions
llmctl http add completion TinyLlama/TinyLlama-1.1B-Chat-v1.0 dynamo.tensorrt-llm.completions
```
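
If the HTTP frontend exposes the standard OpenAI-compatible model listing route (an assumption here, not confirmed by this guide), you can check that the model was registered:

```bash
# Assumes an OpenAI-compatible /v1/models endpoint on the frontend.
curl localhost:8080/v1/models
```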

#### 2. Workers

Note: The following commands were tested on machines with 8x H100 GPUs.

##### Option 2.1 Single-Node Single-GPU

```bash
# Launch worker
cd /workspace/examples/python_rs/llm/tensorrt_llm
mpirun --allow-run-as-root -n 1 --oversubscribe python3 -m monolith.worker --engine_args llm_api_config.yaml 1>agg_worker.log 2>&1 &
```

Upon successful launch, the output should look similar to:

```bash
[TensorRT-LLM][INFO] KV cache block reuse is disabled
[TensorRT-LLM][INFO] Max KV cache pages per sequence: 2048
[TensorRT-LLM][INFO] Number of tokens per block: 64.
[TensorRT-LLM][INFO] [MemUsageChange] Allocated 26.91 GiB for max tokens in paged KV cache (220480).
[02/14/2025-09:38:53] [TRT-LLM] [I] max_seq_len=131072, max_num_requests=2048, max_num_tokens=8192
[02/14/2025-09:38:53] [TRT-LLM] [I] Engine loaded and ready to serve...
```

`nvidia-smi` can be used to check GPU usage and confirm that the model is loaded on a single GPU.

##### Option 2.2 Single-Node Multi-GPU

Update `tensor_parallel_size` in `llm_api_config.yaml` to load the model across the desired number of GPUs, then relaunch the worker as in Option 2.1.

`nvidia-smi` can be used to check GPU usage and confirm that the model is loaded across the configured number of GPUs (for example, 4).
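
As an illustration, setting the value to 4 might look like the excerpt below; the surrounding layout of `llm_api_config.yaml` is an assumption here, only the `tensor_parallel_size` key is taken from this guide:

```yaml
# llm_api_config.yaml (excerpt; surrounding keys assumed)
tensor_parallel_size: 4
```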

##### Option 2.3 Multi-Node Multi-GPU

TODO: Add multi-node multi-GPU example

#### 3. Client

```bash
# Chat Completion
curl localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "TinyLlama/TinyLlama-1.1B-Chat-v1.0",
    "messages": [
      {"role": "user", "content": "What is the capital of France?"}
    ]
  }'
```

The output should look similar to:
```json
{
  "id": "ab013077-8fb2-433e-bd7d-88133fccd497",
  "choices": [
    {
      "message": {
        "role": "assistant",
        "content": "The capital of France is Paris."
      },
      "index": 0,
      "finish_reason": "stop"
    }
  ],
  "created": 1740617803,
  "model": "TinyLlama/TinyLlama-1.1B-Chat-v1.0",
  "object": "chat.completion",
  "usage": null,
  "system_fingerprint": null
}
```

```bash
# Completion
curl localhost:8080/v1/completions \
  -H "Content-Type: application/json" \
  -d '{
        "model": "TinyLlama/TinyLlama-1.1B-Chat-v1.0",
        "prompt": "The capital of France is",
        "max_tokens": 1,
        "temperature": 0
    }'
```

Output:
```json
{
  "id":"cmpl-e0d75aca1bd540399809c9b609eaf010",
  "choices":[
    {
      "text":"Paris",
      "index":0,
      "finish_reason":"length"
    }
  ],
  "created":1741024639,
  "model":"TinyLlama/TinyLlama-1.1B-Chat-v1.0",
  "object":"text_completion",
  "usage":null
}
```

### Disaggregated Deployment

**Environment**

Use the latest image, in which tensorrt_llm supports distributed serving with the PyTorch workflow in the LLM API.

Run the container interactively with the following command:
```bash
./container/run.sh --image IMAGE -it
```

#### 1. HTTP Server

Run the HTTP server (with debug-level logging):
```bash
DYN_LOG=DEBUG http &
```
By default the server will run on port 8080.

Add the model to the server:
```bash
llmctl http add chat TinyLlama/TinyLlama-1.1B-Chat-v1.0 dynamo.router.chat/completions
llmctl http add completion TinyLlama/TinyLlama-1.1B-Chat-v1.0 dynamo.router.completions
```

#### 2. Workers

##### Option 2.1 Single-Node Disaggregated Deployment

**TRTLLM LLMAPI Disaggregated config file**

Define a disaggregated config file similar to the example [single_node_config.yaml](disaggregated/llmapi_disaggregated_configs/single_node_config.yaml). The important sections are `model`, `context_servers`, and `generation_servers`.
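
A sketch of what those sections might look like for the single-node case, assuming the same schema as the multi-node example later in this guide (the shipped [single_node_config.yaml](disaggregated/llmapi_disaggregated_configs/single_node_config.yaml) is the authoritative version):

```yaml
model: TinyLlama/TinyLlama-1.1B-Chat-v1.0
context_servers:
  num_instances: 2
  tp_size: 1
  pp_size: 1
generation_servers:
  num_instances: 1
  tp_size: 1
  pp_size: 1
```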


1. **Launch the servers**

Launch the context and generation servers.\
`WORLD_SIZE` is the total number of MPI workers across all the servers described in the disaggregated configuration.\
For example, 2 TP=2 generation servers are 2 servers but 4 workers/MPI ranks.

```bash
cd /workspace/examples/python_rs/llm/tensorrt_llm/
mpirun --allow-run-as-root --oversubscribe -n WORLD_SIZE python3 -m disaggregated.worker --engine_args llm_api_config.yaml -c disaggregated/llmapi_disaggregated_configs/single_node_config.yaml 1>disagg_workers.log 2>&1 &
```
If using the provided [single_node_config.yaml](disaggregated/llmapi_disaggregated_configs/single_node_config.yaml), `WORLD_SIZE` should be 3, since it defines 2 context servers (TP=1) and 1 generation server (TP=1).
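
With that config, the concrete launch command (the same command as above with `WORLD_SIZE` substituted) is:

```bash
cd /workspace/examples/python_rs/llm/tensorrt_llm/
mpirun --allow-run-as-root --oversubscribe -n 3 python3 -m disaggregated.worker --engine_args llm_api_config.yaml -c disaggregated/llmapi_disaggregated_configs/single_node_config.yaml 1>disagg_workers.log 2>&1 &
```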

2. **Launch the router**

```bash
cd /workspace/examples/python_rs/llm/tensorrt_llm/
python3 -m disaggregated.router 1>router.log 2>&1 &
```

3. **Send Requests**

Follow the instructions in the [Monolithic Deployment](#3-client) section to send requests to the router.


For more details on the disaggregated deployment, please refer to the [TRT-LLM example](#TODO).


### Multi-Node Disaggregated Deployment

To run the disaggregated deployment across multiple nodes, we need to launch the servers using MPI, pass the correct NATS and etcd endpoints to each server, and update the LLMAPI disaggregated config file to use the correct endpoints.

1. Allocate nodes
The following command allocates nodes for the job:
```bash
salloc -A ACCOUNT -N NUM_NODES -p batch -J JOB_NAME -t HH:MM:SS
```

You can use `squeue -u $USER` to check the hostnames of the allocated nodes. These hostnames (with ports) should be added to the `urls` sections of the TRTLLM LLMAPI disaggregated config file as shown below.
```yaml
model: TinyLlama/TinyLlama-1.1B-Chat-v1.0
...
context_servers:
  num_instances: 2
  gpu_fraction: 0.25
  tp_size: 2
  pp_size: 1
  urls:
      - "node1:8001"
      - "node2:8002"
generation_servers:
  num_instances: 2
  gpu_fraction: 0.25
  tp_size: 2
  pp_size: 1
  urls:
      - "node2:8003"
      - "node2:8004"
```
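
In addition to `squeue -u $USER`, a minimal sketch for listing the allocated hostnames on a standard Slurm setup (assuming this is run inside the `salloc` shell, where `SLURM_JOB_NODELIST` is set):

```bash
# Expand the job's node list into one hostname per line.
scontrol show hostnames $SLURM_JOB_NODELIST
```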

2. Start the NATS and ETCD endpoints

Use the following commands. These commands will require downloading [NATS.io](https://docs.nats.io/running-a-nats-service/introduction/installation) and [ETCD](https://etcd.io/docs/v3.5/install/):
```bash
./nats-server -js --trace
./etcd --listen-client-urls http://0.0.0.0:2379 --advertise-client-urls http://0.0.0.0:2379
```

Export the correct NATS and etcd endpoints.
```bash
export NATS_SERVER="nats://node1:4222"
export ETCD_ENDPOINTS="http://node1:2379,http://node2:2379"
```
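
Optionally, verify that both endpoints are reachable before launching the workers (a minimal check assuming the default ports and that `curl` and `nc` are available on the node):

```bash
curl -s http://node1:2379/health
nc -z node1 4222 && echo "NATS reachable"
```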

3. Launch the workers from node1 or the login node. `WORLD_SIZE` is computed the same way as in the single-node deployment.
```bash
srun --mpi pmix -N NUM_NODES --ntasks WORLD_SIZE --ntasks-per-node=WORLD_SIZE --no-container-mount-home --overlap --container-image IMAGE --output batch_%x_%j.log --err batch_%x_%j.err --container-mounts PATH_TO_TRITON_DISTRIBUTED:/workspace --container-env=NATS_SERVER,ETCD_ENDPOINTS bash -c 'cd /workspace/examples/python_rs/llm/tensorrt_llm && python3 -m disaggregated.worker --engine_args llm_api_config.yaml -c disaggregated/llmapi_disaggregated_configs/multi_node_config.yaml' &
```

Once the workers are launched, you should see output similar to the following in the worker logs.
```
[TensorRT-LLM][INFO] [MemUsageChange] Allocated 18.88 GiB for max tokens in paged KV cache (1800032).
[02/20/2025-07:10:33] [TRT-LLM] [I] max_seq_len=2048, max_num_requests=2048, max_num_tokens=8192
[02/20/2025-07:10:33] [TRT-LLM] [I] Engine loaded and ready to serve...
[02/20/2025-07:10:33] [TRT-LLM] [I] max_seq_len=2048, max_num_requests=2048, max_num_tokens=8192
[TensorRT-LLM][INFO] Number of tokens per block: 32.
[TensorRT-LLM][INFO] [MemUsageChange] Allocated 18.88 GiB for max tokens in paged KV cache (1800032).
[02/20/2025-07:10:33] [TRT-LLM] [I] max_seq_len=2048, max_num_requests=2048, max_num_tokens=8192
[02/20/2025-07:10:33] [TRT-LLM] [I] Engine loaded and ready to serve...
```

4. Launch the router from node1 or the login node.
```bash
srun --mpi pmix -N 1 --ntasks 1 --ntasks-per-node=1 --overlap --container-image IMAGE --output batch_router_%x_%j.log --err batch_router_%x_%j.err --container-mounts PATH_TO_TRITON_DISTRIBUTED:/workspace  --container-env=NATS_SERVER,ETCD_ENDPOINTS bash -c 'cd /workspace/examples/python_rs/llm/tensorrt_llm && python3 -m disaggregated.router' &
```

5. Send requests to the router.

The router connects to the OpenAI-compatible HTTP server; you can send requests using the standard OpenAI-compatible format, as shown in the previous sections.
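
For example, assuming the HTTP frontend from the disaggregated deployment section is running on node1 on its default port 8080 (where you started it may differ):

```bash
curl node1:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "TinyLlama/TinyLlama-1.1B-Chat-v1.0",
    "messages": [
      {"role": "user", "content": "What is the capital of France?"}
    ]
  }'
```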