"applications/llm/count/vscode:/vscode.git/clone" did not exist on "08fcd7e93ba5df3093a8b54fe79e0895fe7a5f15"
dynamo_run.md 15.7 KB
Newer Older
1
# Dynamo Run

`dynamo-run` is a CLI tool for exploring the Dynamo components, and an example of how to use them from Rust. It is also available as `dynamo run` if using the Python wheel.

## Quickstart with pip and vllm

If you used `pip` to install `dynamo` you should have the `dynamo-run` binary pre-installed with the `vllm` engine. You must be in a virtual env with vllm installed to use this. For more options see "Full documentation" below.
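
For example, a minimal sketch of that setup (any virtualenv manager works; see the vllm section under "Full documentation" for the tested version and flags):
```
python3 -m venv .venv
source .venv/bin/activate
pip install vllm
```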

### Automatically download a model from [Hugging Face](https://huggingface.co/models)

This will automatically download Qwen2.5 3B from Hugging Face (6 GiB download) and start it in interactive text mode:
```
dynamo run out=vllm Qwen/Qwen2.5-3B-Instruct
```

General format for HF download:
```
dynamo run out=<engine> <HUGGING_FACE_ORGANIZATION/MODEL_NAME>
```

For gated models (e.g. meta-llama/Llama-3.2-3B-Instruct) you must have an `HF_TOKEN` environment variable set.
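
For example (assuming you have been granted access to the model on Hugging Face):
```
export HF_TOKEN=<your-token>
dynamo run out=vllm meta-llama/Llama-3.2-3B-Instruct
```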

The parameter can be the ID of a HuggingFace repository (it will be downloaded), a GGUF file, or a folder containing safetensors, `config.json`, etc. (a locally checked out HuggingFace repository).

## Manually download a model from Hugging Face

One of these models should be high quality and fast on almost any machine: https://huggingface.co/bartowski/Llama-3.2-3B-Instruct-GGUF
E.g. https://huggingface.co/bartowski/Llama-3.2-3B-Instruct-GGUF/blob/main/Llama-3.2-3B-Instruct-Q4_K_M.gguf

Download model file:
```
curl -L -o Llama-3.2-3B-Instruct-Q4_K_M.gguf "https://huggingface.co/bartowski/Llama-3.2-3B-Instruct-GGUF/resolve/main/Llama-3.2-3B-Instruct-Q4_K_M.gguf?download=true"
```

## Run a model from a local file

*Text interface*
```
dynamo run out=vllm Llama-3.2-3B-Instruct-Q4_K_M.gguf # or path to a Hugging Face repo checkout instead of the GGUF
```

*HTTP interface*
```
dynamo run in=http out=vllm Llama-3.2-3B-Instruct-Q4_K_M.gguf
```

*List the models*
```
curl localhost:8080/v1/models
```

*Send a request*
```
curl -d '{"model": "Llama-3.2-3B-Instruct-Q4_K_M", "max_completion_tokens": 2049, "messages":[{"role":"user", "content": "What is the capital of South Africa?" }]}' -H 'Content-Type: application/json' http://localhost:8080/v1/chat/completions
```
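
*Stream a response*

Assuming the endpoint follows the usual OpenAI streaming convention (the same `chat.completion.chunk` objects shown in the Python engine example later in this document), add `"stream": true` and pass `-N` so curl does not buffer:
```
curl -N -d '{"model": "Llama-3.2-3B-Instruct-Q4_K_M", "stream": true, "max_completion_tokens": 2049, "messages":[{"role":"user", "content": "What is the capital of South Africa?" }]}' -H 'Content-Type: application/json' http://localhost:8080/v1/chat/completions
```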

*Multi-node*

You will need [etcd](https://etcd.io/) and [nats](https://nats.io) installed and accessible from both nodes.
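
A minimal local sketch, assuming the `etcd` and `nats-server` binaries are installed (with default settings they listen on localhost; for a real two-node setup configure the listen addresses so both nodes can reach them):
```
nats-server -js &
etcd &
```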

Node 1:
```
dynamo run in=http out=dyn://llama3B_pool
```

Node 2:
```
dynamo run in=dyn://llama3B_pool out=vllm ~/llm_models/Llama-3.2-3B-Instruct
```

This uses etcd to auto-discover the model and NATS to talk to it. You can run multiple workers on the same endpoint; one is picked at random for each request.

The `llama3B_pool` name is purely symbolic; pick anything, as long as it matches the other node.

Run `dynamo run --help` for more options.

# Full documentation

`dynamo-run` is what `dynamo run` executes. It is an example of what you can build in Rust with the `dynamo-llm` and `dynamo-runtime` crates. This section covers how to build from source and describes all of the available features.

## Setup

Required libraries on Ubuntu:
```
apt install -y build-essential libhwloc-dev libudev-dev pkg-config libssl-dev libclang-dev protobuf-compiler python3-dev
```

Required libraries on macOS:
- [Homebrew](https://brew.sh/)
```
# if brew is not installed on your system, install it
/bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)"
```
- [Xcode](https://developer.apple.com/xcode/)

```
brew install cmake protobuf

# Check that Metal is accessible
xcrun -sdk macosx metal
```
If Metal is accessible, you should see an error like `metal: error: no input files`, which confirms it is installed correctly.

Install Rust:
```
curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh
source $HOME/.cargo/env
```

## Build

Navigate to the `launch/` directory:
```
cd launch/
```
Optionally, you can run `cargo build` from any location by passing these arguments:
```
--target-dir /path/to/target_directory       # specify a target directory with write privileges
--manifest-path /path/to/project/Cargo.toml   # needed if cargo build is run outside of the launch/ directory
```

- Linux with GPU and CUDA (tested on Ubuntu):
```
cargo build --features cuda
```

- macOS with Metal:
```
cargo build --features metal
```

- CPU only:
```
cargo build
```

The binary will be called `dynamo-run` in `target/debug`
```
cd target/debug
```

Build with `--release` for a smaller binary and better performance, but longer build times. The binary will be in `target/release`.
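
For example:
```
cargo build --release --features cuda
```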

## sglang

1. Set up the Python virtual environment:

```
uv venv
source .venv/bin/activate
uv pip install pip
uv pip install sgl-kernel --force-reinstall --no-deps
uv pip install "sglang[all]==0.4.2" --find-links https://flashinfer.ai/whl/cu124/torch2.4/flashinfer/
```

2. Build

```
cargo build --features sglang
```

3. Run

Any example above using `out=sglang` will work; the sglang backend also supports multi-GPU and multi-node.

Node 1:
```
dynamo-run in=http out=sglang --model-path ~/llm_models/DeepSeek-R1-Distill-Llama-70B/ --tensor-parallel-size 8 --num-nodes 2 --node-rank 0 --dist-init-addr 10.217.98.122:9876
```

Node 2:
```
dynamo-run in=none out=sglang --model-path ~/llm_models/DeepSeek-R1-Distill-Llama-70B/ --tensor-parallel-size 8 --num-nodes 2 --node-rank 1 --dist-init-addr 10.217.98.122:9876
```

## llama_cpp

- `cargo build --features llamacpp,cuda`

- `dynamo-run out=llamacpp ~/llm_models/Llama-3.2-3B-Instruct-Q6_K.gguf`

If the build step also builds llama_cpp libraries into the same folder as the binary ("libllama.so", "libggml.so", "libggml-base.so", "libggml-cpu.so", "libggml-cuda.so"), then `dynamo-run` will need to find those at runtime. Set `LD_LIBRARY_PATH`, and be sure to deploy them alongside the `dynamo-run` binary.
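
For example (a sketch assuming the shared libraries were deployed into the same directory as the binary):
```
LD_LIBRARY_PATH=. ./dynamo-run out=llamacpp ~/llm_models/Llama-3.2-3B-Instruct-Q6_K.gguf
```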

## vllm

This engine uses the [vllm](https://github.com/vllm-project/vllm) Python library. We only use the back half of vllm, talking to it over `zmq`. Startup is slow, but inference is fast. It supports both safetensors from HF and GGUF files.

We use [uv](https://docs.astral.sh/uv/) but any virtualenv manager should work.

Setup:
```
uv venv
source .venv/bin/activate
uv pip install pip
uv pip install vllm==0.7.3 setuptools
```

**Note: If you're on Ubuntu 22.04 or earlier, you will need to add `--python=python3.10` to your `uv venv` command**

Build:
```
cargo build --features vllm
```

Run (still inside that virtualenv) - HF repo:
```
./dynamo-run in=http out=vllm --model-path ~/llm_models/Llama-3.2-3B-Instruct/
```

Run (still inside that virtualenv) - GGUF:
```
./dynamo-run in=http out=vllm ~/llm_models/Llama-3.2-3B-Instruct-Q6_K.gguf
```

+ Multi-node:

Node 1:
```
dynamo-run in=text out=vllm ~/llm_models/Llama-3.2-3B-Instruct/ --tensor-parallel-size 8 --num-nodes 2 --leader-addr 10.217.98.122:6539 --node-rank 0
```

Node 2:
```
dynamo-run in=none out=vllm ~/llm_models/Llama-3.2-3B-Instruct/ --num-nodes 2 --leader-addr 10.217.98.122:6539 --node-rank 1
```

## Python bring-your-own-engine

You can provide your own engine in a Python file. The file must provide a generator with this signature:
```
async def generate(request):
```

Build: `cargo build --features python`

### Python does the pre-processing

If the Python engine wants to receive and return strings - it will do the prompt templating and tokenization itself - run it like this:

```
dynamo-run out=pystr:/home/user/my_python_engine.py
```

- The `request` parameter is a map, an OpenAI-compatible create chat completion request (a sketch follows this list): https://platform.openai.com/docs/api-reference/chat/create
- The function must `yield` a series of maps conforming to the create chat completion stream response (example below).
- If using an HTTP front-end, add the `--model-name` flag. This is the name we serve the model under.
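
An abbreviated sketch of such a request map (illustrative values; the exact fields follow the OpenAI schema linked above):
```
{"model": "Llama-3.2-3B-Instruct", "messages": [{"role": "user", "content": "What is the capital of France?"}], "max_completion_tokens": 2049}
```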

The file is loaded once at startup and kept in memory.

Example engine:
```
import asyncio

async def generate(request):
    yield {"id":"1","choices":[{"index":0,"delta":{"content":"The","role":"assistant"}}],"created":1841762283,"model":"Llama-3.2-3B-Instruct","system_fingerprint":"local","object":"chat.completion.chunk"}
    await asyncio.sleep(0.1)
    yield {"id":"1","choices":[{"index":0,"delta":{"content":" capital","role":"assistant"}}],"created":1841762283,"model":"Llama-3.2-3B-Instruct","system_fingerprint":"local","object":"chat.completion.chunk"}
    await asyncio.sleep(0.1)
    yield {"id":"1","choices":[{"index":0,"delta":{"content":" of","role":"assistant"}}],"created":1841762283,"model":"Llama-3.2-3B-Instruct","system_fingerprint":"local","object":"chat.completion.chunk"}
    await asyncio.sleep(0.1)
    yield {"id":"1","choices":[{"index":0,"delta":{"content":" France","role":"assistant"}}],"created":1841762283,"model":"Llama-3.2-3B-Instruct","system_fingerprint":"local","object":"chat.completion.chunk"}
    await asyncio.sleep(0.1)
    yield {"id":"1","choices":[{"index":0,"delta":{"content":" is","role":"assistant"}}],"created":1841762283,"model":"Llama-3.2-3B-Instruct","system_fingerprint":"local","object":"chat.completion.chunk"}
    await asyncio.sleep(0.1)
    yield {"id":"1","choices":[{"index":0,"delta":{"content":" Paris","role":"assistant"}}],"created":1841762283,"model":"Llama-3.2-3B-Instruct","system_fingerprint":"local","object":"chat.completion.chunk"}
    await asyncio.sleep(0.1)
    yield {"id":"1","choices":[{"index":0,"delta":{"content":".","role":"assistant"}}],"created":1841762283,"model":"Llama-3.2-3B-Instruct","system_fingerprint":"local","object":"chat.completion.chunk"}
    await asyncio.sleep(0.1)
    yield {"id":"1","choices":[{"index":0,"delta":{"content":"","role":"assistant"},"finish_reason":"stop"}],"created":1841762283,"model":"Llama-3.2-3B-Instruct","system_fingerprint":"local","object":"chat.completion.chunk"}
```

Command line arguments are passed to the Python engine like this:
```
dynamo-run out=pystr:my_python_engine.py -- -n 42 --custom-arg Orange --yes
```

The Python engine receives the arguments in `sys.argv`. The argument list will include some standard ones as well as anything after the `--`.

This input:
```
dynamo-run out=pystr:my_engine.py /opt/models/Llama-3.2-3B-Instruct/ --model-name llama_3.2 --tensor-parallel-size 4 -- -n 1
```

is read like this:
```
async def generate(request):
    .. as before ..

if __name__ == "__main__":
    print(f"MAIN: {sys.argv}")
```

and produces this output:
```
MAIN: ['my_engine.py', '--model-path', '/opt/models/Llama-3.2-3B-Instruct/', '--model-name', 'llama_3.2', '--http-port', '8080', '--tensor-parallel-size', '4', '--base-gpu-id', '0', '--num-nodes', '1', '--node-rank', '0', '-n', '1']
```

This allows quick iteration on the engine setup. Note how the `-n 1` is included. The flags `--leader-addr` and `--model-config` will also be added if provided to `dynamo-run`.

### Dynamo does the pre-processing

If the Python engine wants to receive and return tokens - the prompt templating and tokenization is already done - run it like this:
```
dynamo-run out=pytok:/home/user/my_python_engine.py --model-path <hf-repo-checkout>
```

- The request parameter is a map that looks like this:
```
{'token_ids': [128000, 128006, 9125, 128007, ... lots more ... ], 'stop_conditions': {'max_tokens': 8192, 'stop': None, 'stop_token_ids_hidden': [128001, 128008, 128009], 'min_tokens': None, 'ignore_eos': None}, 'sampling_options': {'n': None, 'best_of': None, 'presence_penalty': None, 'frequency_penalty': None, 'repetition_penalty': None, 'temperature': None, 'top_p': None, 'top_k': None, 'min_p': None, 'use_beam_search': None, 'length_penalty': None, 'seed': None}, 'eos_token_ids': [128001, 128008, 128009], 'mdc_sum': 'f1cd44546fdcbd664189863b7daece0f139a962b89778469e4cffc9be58ccc88', 'annotations': []}
```

- The `generate` function must `yield` a series of maps that look like this:
```
{"token_ids":[791],"tokens":None,"text":None,"cum_log_probs":None,"log_probs":None,"finish_reason":None}
```

- The command line flag `--model-path` must point to a Hugging Face repo checkout containing the `tokenizer.json`. The `--model-name` flag is optional; if not provided, we use the HF repo name (directory name) as the model name.

Example engine:
```
import asyncio

async def generate(request):
    yield {"token_ids":[791]}
    await asyncio.sleep(0.1)
    yield {"token_ids":[6864]}
    await asyncio.sleep(0.1)
    yield {"token_ids":[315]}
    await asyncio.sleep(0.1)
    yield {"token_ids":[9822]}
    await asyncio.sleep(0.1)
    yield {"token_ids":[374]}
    await asyncio.sleep(0.1)
    yield {"token_ids":[12366]}
    await asyncio.sleep(0.1)
    yield {"token_ids":[13]}
```

`pytok` supports the same ways of passing command line arguments as `pystr` - `initialize` or `main` with `sys.argv`.

## trtllm

TensorRT-LLM. Requires `clang` and `libclang-dev`.

Build:
```
cargo build --features trtllm
```

Run:
```
dynamo-run in=text out=trtllm --model-path /app/trtllm_engine/ --model-config ~/llm_models/Llama-3.2-3B-Instruct/
```

Note that TRT-LLM uses its own `.engine` format for weights. Repo models must be converted like so:

+ Get the build container
```
docker run --gpus all -it nvcr.io/nvidian/nemo-llm/trtllm-engine-builder:0.2.0 bash
```

+ Fetch the model and convert
```
mkdir /tmp/model
huggingface-cli download meta-llama/Llama-3.2-3B-Instruct --local-dir /tmp/model
python convert_checkpoint.py --model_dir /tmp/model/ --output_dir ./converted --dtype [float16|bfloat16|whatever you want] --tp_size X --pp_size Y
trtllm-build --checkpoint_dir ./converted --output_dir ./final/trtllm_engine --use_paged_context_fmha enable --gemm_plugin auto
```

The `--model-path` you give to `dynamo-run` must contain the `config.json` (TRT-LLM's, not the model's) and `rank0.engine` (plus other ranks if relevant).
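
For example, a single-rank engine directory might look like this (a sketch; exact file names depend on the build):
```
$ ls /app/trtllm_engine/
config.json  rank0.engine
```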

+ Execute
TRT-LLM is a C++ library that must have been previously built and installed. It needs a lot of memory to compile. GitLab builds a container you can try:
```
sudo docker run --gpus all -it -v /home/user:/outside-home gitlab-master.nvidia.com:5005/dl/ai-services/libraries/rust/nim-nvllm/tensorrt_llm_runtime:85fa4a6f
```

Copy the TRT-LLM engine, the model's `.json` files (for the model deployment card), and the `nio` binary built for the correct glibc (the container is currently Ubuntu 22.04) into that container.

## Echo Engines

Dynamo includes two echo engines for testing and debugging purposes:

### echo_core

The `echo_core` engine accepts pre-processed requests and echoes the tokens back as the response. This is useful for testing pre-processing functionality as the response will include the full prompt template.

```
dynamo-run in=http out=echo_core --model-path <hf-repo-checkout>
```

### echo_full

The `echo_full` engine accepts un-processed requests and echoes the prompt back as the response.

```
dynamo-run in=http out=echo_full --model-name my_model
```
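
With that HTTP front-end running, a quick sanity check (using the model name given with `--model-name` above) should return the prompt echoed back:
```
curl -d '{"model": "my_model", "messages":[{"role":"user", "content": "Testing the echo engine"}]}' -H 'Content-Type: application/json' http://localhost:8080/v1/chat/completions
```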

### Configuration

Both echo engines use a configurable delay between tokens to simulate generation speed. You can adjust this using the `DYN_TOKEN_ECHO_DELAY_MS` environment variable:

```
# Set token echo delay to 1ms (1000 tokens per second)
DYN_TOKEN_ECHO_DELAY_MS=1 dynamo-run in=http out=echo_full
```

The default delay is 10ms, which produces approximately 100 tokens per second.

## Batch mode

`dynamo-run` can take a JSONL file full of prompts and evaluate them all:

```
dynamo-run in=batch:prompts.jsonl out=llamacpp <model>
```

The input file should look like this:
```
{"text": "What is the capital of France?"}
{"text": "What is the capital of Spain?"}
```

Each one is passed as a prompt to the model. The output is written to `output.jsonl` in the same folder, and some statistics are printed at the end of the run.
The output looks like this:
```
{"text":"What is the capital of France?","response":"The capital of France is Paris.","tokens_in":7,"tokens_out":7,"elapsed_ms":1566}
{"text":"What is the capital of Spain?","response":".The capital of Spain is Madrid.","tokens_in":7,"tokens_out":7,"elapsed_ms":855}
```

## Defaults

The input defaults to `in=text`. The output defaults to the `mistralrs` engine; if that is not available, it falls back to whichever engine you have compiled in (depending on `--features`).
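
For example, this is equivalent to `in=text out=mistralrs` when the default engine is compiled in:
```
dynamo-run Qwen/Qwen2.5-3B-Instruct
```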