# Dynamo Run

`dynamo-run` is a CLI tool for exploring the Dynamo components, and an example of how to use them from Rust. It is also available as `dynamo run` if using the Python wheel.

## Quickstart with pip and vllm

If you used `pip` to install `dynamo`, the `dynamo-run` binary comes pre-installed with the `vllm` engine. You must be in a virtual environment with vllm installed to use it. For more options see "Full documentation" below.

### Automatically download a model from [Hugging Face](https://huggingface.co/models)

This will automatically download Qwen2.5 3B from Hugging Face (6 GiB download) and start it in interactive text mode:
```
dynamo run out=vllm Qwen/Qwen2.5-3B-Instruct
```

General format for HF download:
```
dynamo run out=<engine> <HUGGING_FACE_ORGANIZATION/MODEL_NAME>
```

For gated models (e.g. meta-llama/Llama-3.2-3B-Instruct) you must set the `HF_TOKEN` environment variable.

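For example (a sketch; assumes you have been granted access to the gated model on Hugging Face):
```
export HF_TOKEN=hf_...   # your Hugging Face access token
dynamo run out=vllm meta-llama/Llama-3.2-3B-Instruct
```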
The parameter can be the ID of a Hugging Face repository (it will be downloaded), a GGUF file, or a folder containing safetensors, `config.json`, etc. (a locally checked-out Hugging Face repository).

## Manually download a model from Hugging Face

One of these models should be high quality and fast on almost any machine: https://huggingface.co/bartowski/Llama-3.2-3B-Instruct-GGUF
E.g. https://huggingface.co/bartowski/Llama-3.2-3B-Instruct-GGUF/blob/main/Llama-3.2-3B-Instruct-Q4_K_M.gguf

Download model file:
```
curl -L -o Llama-3.2-3B-Instruct-Q4_K_M.gguf "https://huggingface.co/bartowski/Llama-3.2-3B-Instruct-GGUF/resolve/main/Llama-3.2-3B-Instruct-Q4_K_M.gguf?download=true"
```

## Run a model from a local file

*Text interface*
```
dynamo run out=vllm Llama-3.2-3B-Instruct-Q4_K_M.gguf # or path to a Hugging Face repo checkout instead of the GGUF
```

*HTTP interface*
```
dynamo run in=http out=vllm Llama-3.2-3B-Instruct-Q4_K_M.gguf
```

*List the models*
```
curl localhost:8080/v1/models
```

*Send a request*
```
curl -d '{"model": "Llama-3.2-3B-Instruct-Q4_K_M", "max_completion_tokens": 2049, "messages":[{"role":"user", "content": "What is the capital of South Africa?" }]}' -H 'Content-Type: application/json' http://localhost:8080/v1/chat/completions
```

*Multi-node*

You will need [etcd](https://etcd.io/) and [nats](https://nats.io) installed and accessible from both nodes.

Node 1:
```
dynamo run in=http out=dyn://llama3B_pool
```

Node 2:
```
dynamo run in=dyn://llama3B_pool out=vllm ~/llm_models/Llama-3.2-3B-Instruct
```

This uses etcd to auto-discover the model and NATS to talk to it. You can run multiple workers on the same endpoint, and each request will be routed to one of them at random.

The `llama3B_pool` name is purely symbolic; pick any name as long as it matches on both nodes.

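For example, a third worker could join the same pool by running the Node 2 command on another machine (a sketch; adjust the model path for that machine):
```
dynamo run in=dyn://llama3B_pool out=vllm ~/llm_models/Llama-3.2-3B-Instruct
```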
Run `dynamo run --help` for more options.

# Full documentation

`dynamo-run` is what `dynamo run` executes. It is an example of what you can build in Rust with the `dynamo-llm` and `dynamo-runtime` crates. This section covers how to build it from source and all of the available features.

## Setup

Libraries Ubuntu:
```
apt install -y build-essential libhwloc-dev libudev-dev pkg-config libssl-dev libclang-dev protobuf-compiler python3-dev
```

Libraries macOS:
- [Homebrew](https://brew.sh/)
```
# if brew is not installed on your system, install it
/bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)"
```
- [Xcode](https://developer.apple.com/xcode/)

```
brew install cmake protobuf

# Check that Metal is accessible
xcrun -sdk macosx metal
```
If Metal is accessible, you should see an error like `metal: error: no input files`, which confirms it is installed correctly.

Install Rust:
```
curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh
source $HOME/.cargo/env
```

## Build

Navigate to the `launch/` directory:
```
cd launch/
```
Optionally, you can run `cargo build` from any location with these arguments:
```
--target-dir /path/to/target_directory    # specify a target directory with write privileges
--manifest-path /path/to/project/Cargo.toml    # if cargo build is run outside of the launch/ directory
```
121

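For example (a sketch with hypothetical paths):
```
cargo build --features cuda --target-dir /tmp/dynamo_target --manifest-path /home/user/dynamo/launch/Cargo.toml
```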
- Linux with GPU and CUDA (tested on Ubuntu):
```
cargo build --features cuda
```

- macOS with Metal:
```
cargo build --features metal
```

- CPU only:
```
cargo build
```

The binary will be called `dynamo-run` and placed in `target/debug`:
```
cd target/debug
```

Build with `--release` for a smaller binary and better performance, but longer build times. The binary will be in `target/release`.

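For example, a release build with CUDA (a sketch; substitute whichever feature flags apply to your platform):
```
cargo build --release --features cuda
```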
## sglang

1. Set up the Python virtual environment:

```
uv venv
source .venv/bin/activate
uv pip install pip
uv pip install sgl-kernel --force-reinstall --no-deps
uv pip install "sglang[all]==0.4.2" --find-links https://flashinfer.ai/whl/cu124/torch2.4/flashinfer/
```

2. Build

```
cargo build --features sglang
```

3. Run

Any example above using `out=sglang` will work, but our sglang backend is also multi-GPU and multi-node.

Node 1:
```
dynamo-run in=http out=sglang --model-path ~/llm_models/DeepSeek-R1-Distill-Llama-70B/ --tensor-parallel-size 8 --num-nodes 2 --node-rank 0 --leader-addr 10.217.98.122:9876
```

Node 2:
```
dynamo-run in=none out=sglang --model-path ~/llm_models/DeepSeek-R1-Distill-Llama-70B/ --tensor-parallel-size 8 --num-nodes 2 --node-rank 1 --leader-addr 10.217.98.122:9876
```

To pass extra arguments to the sglang engine, see *Extra engine arguments* below.

## llama_cpp

- `cargo build --features llamacpp,cuda`

- `dynamo-run out=llamacpp ~/llm_models/Llama-3.2-3B-Instruct-Q6_K.gguf`

If the build step also places the llama_cpp libraries ("libllama.so", "libggml.so", "libggml-base.so", "libggml-cpu.so", "libggml-cuda.so") in the same folder as the binary, then `dynamo-run` will need to find them at runtime. Set `LD_LIBRARY_PATH`, and be sure to deploy the libraries alongside the `dynamo-run` binary.

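For example (a sketch with a hypothetical install directory):
```
export LD_LIBRARY_PATH=/opt/dynamo/bin:$LD_LIBRARY_PATH   # directory containing libllama.so etc.
/opt/dynamo/bin/dynamo-run out=llamacpp ~/llm_models/Llama-3.2-3B-Instruct-Q6_K.gguf
```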
## vllm

This engine uses the [vllm](https://github.com/vllm-project/vllm) Python library. We only use the back half of vllm, talking to it over `zmq`. Slow startup, fast inference. Supports both safetensors from Hugging Face and GGUF files.

We use [uv](https://docs.astral.sh/uv/) but any virtualenv manager should work.

Setup:
```
uv venv
source .venv/bin/activate
uv pip install pip
uv pip install vllm==0.7.3 setuptools
```

**Note: If you're on Ubuntu 22.04 or earlier, you will need to add `--python=python3.10` to your `uv venv` command**

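For example:
```
uv venv --python=python3.10
source .venv/bin/activate
```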
Build:
```
cargo build --features vllm
```

Run (still inside that virtualenv) - HF repo:
```
./dynamo-run in=http out=vllm --model-path ~/llm_models/Llama-3.2-3B-Instruct/

```

Run (still inside that virtualenv) - GGUF:
```
./dynamo-run in=http out=vllm ~/llm_models/Llama-3.2-3B-Instruct-Q6_K.gguf
```

Multi-node:

Node 1:
```
dynamo-run in=text out=vllm ~/llm_models/Llama-3.2-3B-Instruct/ --tensor-parallel-size 8 --num-nodes 2 --leader-addr 10.217.98.122:6539 --node-rank 0
```

Node 2:
```
dynamo-run in=none out=vllm ~/llm_models/Llama-3.2-3B-Instruct/ --num-nodes 2 --leader-addr 10.217.98.122:6539 --node-rank 1
```

To pass extra arguments to the vllm engine, see *Extra engine arguments* below.

## Python bring-your-own-engine

You can provide your own engine in a Python file. The file must provide a generator with this signature:
```
async def generate(request):
```

Build: `cargo build --features python`

### Python does the pre-processing

If the Python engine wants to receive and return strings (it does the prompt templating and tokenization itself), run it like this:

```
dynamo-run out=pystr:/home/user/my_python_engine.py
```

- The `request` parameter is a map, an OpenAI compatible create chat completion request: https://platform.openai.com/docs/api-reference/chat/create
- The function must `yield` a series of maps conforming to the create chat completion stream response format (example below).
- If using an HTTP front-end, add the `--model-name` flag; this is the name the model is served under (see the example below).

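For example, to serve the Python engine behind the HTTP front-end (a sketch; the engine path and model name are placeholders):
```
dynamo-run in=http out=pystr:/home/user/my_python_engine.py --model-name my_model
```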
The file is loaded once at startup and kept in memory.

Example engine:
```
import asyncio

async def generate(request):
    yield {"id":"1","choices":[{"index":0,"delta":{"content":"The","role":"assistant"}}],"created":1841762283,"model":"Llama-3.2-3B-Instruct","system_fingerprint":"local","object":"chat.completion.chunk"}
    await asyncio.sleep(0.1)
    yield {"id":"1","choices":[{"index":0,"delta":{"content":" capital","role":"assistant"}}],"created":1841762283,"model":"Llama-3.2-3B-Instruct","system_fingerprint":"local","object":"chat.completion.chunk"}
    await asyncio.sleep(0.1)
    yield {"id":"1","choices":[{"index":0,"delta":{"content":" of","role":"assistant"}}],"created":1841762283,"model":"Llama-3.2-3B-Instruct","system_fingerprint":"local","object":"chat.completion.chunk"}
    await asyncio.sleep(0.1)
    yield {"id":"1","choices":[{"index":0,"delta":{"content":" France","role":"assistant"}}],"created":1841762283,"model":"Llama-3.2-3B-Instruct","system_fingerprint":"local","object":"chat.completion.chunk"}
    await asyncio.sleep(0.1)
    yield {"id":"1","choices":[{"index":0,"delta":{"content":" is","role":"assistant"}}],"created":1841762283,"model":"Llama-3.2-3B-Instruct","system_fingerprint":"local","object":"chat.completion.chunk"}
    await asyncio.sleep(0.1)
    yield {"id":"1","choices":[{"index":0,"delta":{"content":" Paris","role":"assistant"}}],"created":1841762283,"model":"Llama-3.2-3B-Instruct","system_fingerprint":"local","object":"chat.completion.chunk"}
    await asyncio.sleep(0.1)
    yield {"id":"1","choices":[{"index":0,"delta":{"content":".","role":"assistant"}}],"created":1841762283,"model":"Llama-3.2-3B-Instruct","system_fingerprint":"local","object":"chat.completion.chunk"}
    await asyncio.sleep(0.1)
    yield {"id":"1","choices":[{"index":0,"delta":{"content":"","role":"assistant"},"finish_reason":"stop"}],"created":1841762283,"model":"Llama-3.2-3B-Instruct","system_fingerprint":"local","object":"chat.completion.chunk"}
```

Command line arguments are passed to the Python engine like this:
```
dynamo-run out=pystr:my_python_engine.py -- -n 42 --custom-arg Orange --yes
```

The Python engine receives the arguments in `sys.argv`. The argument list will include some standard ones as well as anything after the `--`.

This input:
```
dynamo-run out=pystr:my_engine.py /opt/models/Llama-3.2-3B-Instruct/ --model-name llama_3.2 --tensor-parallel-size 4 -- -n 1
```

is read like this:
```
import sys

async def generate(request):
    .. as before ..

if __name__ == "__main__":
    print(f"MAIN: {sys.argv}")
```

and produces this output:
```
MAIN: ['my_engine.py', '--model-path', '/opt/models/Llama-3.2-3B-Instruct/', '--model-name', 'llama_3.2', '--http-port', '8080', '--tensor-parallel-size', '4', '--base-gpu-id', '0', '--num-nodes', '1', '--node-rank', '0', '-n', '1']
```

This allows quick iteration on the engine setup. Note how the `-n 1` is included. The `--leader-addr` and `--model-config` flags will also be added if provided to `dynamo-run`.

### Dynamo does the pre-processing

If the Python engine wants to receive and return tokens (the prompt templating and tokenization are already done), run it like this:
```
dynamo-run out=pytok:/home/user/my_python_engine.py --model-path <hf-repo-checkout>
```

- The request parameter is a map that looks like this:
```
{'token_ids': [128000, 128006, 9125, 128007, ... lots more ... ], 'stop_conditions': {'max_tokens': 8192, 'stop': None, 'stop_token_ids_hidden': [128001, 128008, 128009], 'min_tokens': None, 'ignore_eos': None}, 'sampling_options': {'n': None, 'best_of': None, 'presence_penalty': None, 'frequency_penalty': None, 'repetition_penalty': None, 'temperature': None, 'top_p': None, 'top_k': None, 'min_p': None, 'use_beam_search': None, 'length_penalty': None, 'seed': None}, 'eos_token_ids': [128001, 128008, 128009], 'mdc_sum': 'f1cd44546fdcbd664189863b7daece0f139a962b89778469e4cffc9be58ccc88', 'annotations': []}
```

- The `generate` function must `yield` a series of maps that look like this:
```
{"token_ids":[791],"tokens":None,"text":None,"cum_log_probs":None,"log_probs":None,"finish_reason":None}
```

- The command line flag `--model-path` must point to a Hugging Face repo checkout containing the `tokenizer.json`. The `--model-name` flag is optional; if not provided, the HF repo name (directory name) is used as the model name.

Example engine:
```
import asyncio

async def generate(request):
    yield {"token_ids":[791]}
    await asyncio.sleep(0.1)
    yield {"token_ids":[6864]}
    await asyncio.sleep(0.1)
    yield {"token_ids":[315]}
    await asyncio.sleep(0.1)
    yield {"token_ids":[9822]}
    await asyncio.sleep(0.1)
    yield {"token_ids":[374]}
    await asyncio.sleep(0.1)
    yield {"token_ids":[12366]}
    await asyncio.sleep(0.1)
    yield {"token_ids":[13]}
```

`pytok` supports the same ways of passing command line arguments as `pystr` - `initialize` or `main` with `sys.argv`.

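For example (a sketch reusing the flags shown earlier):
```
dynamo-run out=pytok:my_engine.py --model-path <hf-repo-checkout> -- -n 1
```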
## trtllm

TensorRT-LLM. Requires `clang` and `libclang-dev`.

Build:
```
cargo build --features trtllm
```

Run:
```
dynamo-run in=text out=trtllm --model-path /app/trtllm_engine/ --model-config ~/llm_models/Llama-3.2-3B-Instruct/
```

Note that TRT-LLM uses its own `.engine` format for weights.

The `--model-path` you give to `dynamo-run` must contain the `config.json` (TRT-LLM's, not the model's) and `rank0.engine` (plus other ranks if relevant).

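As an illustration, a single-rank engine directory might look like this (hypothetical listing):
```
$ ls /app/trtllm_engine/
config.json  rank0.engine
```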
## Echo Engines

Dynamo includes two echo engines for testing and debugging purposes:

### echo_core

The `echo_core` engine accepts pre-processed requests and echoes the tokens back as the response. This is useful for testing pre-processing functionality as the response will include the full prompt template.

```
dynamo-run in=http out=echo_core --model-path <hf-repo-checkout>
```

### echo_full

The `echo_full` engine accepts un-processed requests and echoes the prompt back as the response.

```
dynamo-run in=http out=echo_full --model-name my_model
```

### Configuration

Both echo engines use a configurable delay between tokens to simulate generation speed. You can adjust this using the `DYN_TOKEN_ECHO_DELAY_MS` environment variable:

```
# Set token echo delay to 1ms (1000 tokens per second)
DYN_TOKEN_ECHO_DELAY_MS=1 dynamo-run in=http out=echo_full
```

The default delay is 10ms, which produces approximately 100 tokens per second.

## Batch mode

`dynamo-run` can take a JSONL file full of prompts and evaluate them all:

```
dynamo-run in=batch:prompts.jsonl out=llamacpp <model>
```

The input file should look like this:
```
{"text": "What is the capital of France?"}
{"text": "What is the capital of Spain?"}
```

Each one is passed as a prompt to the model. The output is written back to the same folder in `output.jsonl`. At the end of the run some statistics are printed.
The output looks like this:
```
{"text":"What is the capital of France?","response":"The capital of France is Paris.","tokens_in":7,"tokens_out":7,"elapsed_ms":1566}
{"text":"What is the capital of Spain?","response":".The capital of Spain is Madrid.","tokens_in":7,"tokens_out":7,"elapsed_ms":855}
```

## Defaults

The input defaults to `in=text`. The output defaults to the `mistralrs` engine; if that is not available, it falls back to whichever engine you have compiled in (depending on `--features`).

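For example, this relies entirely on the defaults (a sketch; assumes the `mistralrs` engine is compiled in):
```
# equivalent to: dynamo run in=text out=mistralrs Qwen/Qwen2.5-3B-Instruct
dynamo run Qwen/Qwen2.5-3B-Instruct
```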
## Extra engine arguments

The vllm and sglang backends support passing any argument the engine accepts.

Put the arguments in a JSON file:
```
{
    "dtype": "half",
    "trust_remote_code": true
}
```

Pass it like this:
```
dynamo-run out=sglang ~/llm_models/Llama-3.2-3B-Instruct --extra-engine-args sglang_extra.json
```
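
The same flag works with the vllm engine, for example (a sketch with a hypothetical `vllm_extra.json`):
```
dynamo-run out=vllm ~/llm_models/Llama-3.2-3B-Instruct --extra-engine-args vllm_extra.json
```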