# Text-generation-launcher arguments

<!-- WRAP CODE BLOCKS -->

```shell
Text Generation Launcher

Usage: text-generation-launcher [OPTIONS]

Options:
```
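
For orientation, the simplest possible launch only needs a model to load; everything below has a default or is optional. A minimal, hypothetical invocation using the `gpt2` model id mentioned under `MODEL_ID` might look like:

```shell
# Minimal sketch: serve gpt2 with all other options left at their defaults
text-generation-launcher --model-id gpt2
```
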
## MODEL_ID
```shell
      --model-id <MODEL_ID>
          The name of the model to load. Can be a MODEL_ID as listed on <https://hf.co/models> like `gpt2` or `OpenAssistant/oasst-sft-1-pythia-12b`. Or it can be a local directory containing the necessary files as saved by the `save_pretrained(...)` method of transformers
          
          [env: MODEL_ID=]
          [default: bigscience/bloom-560m]

```
## REVISION
```shell
      --revision <REVISION>
          The actual revision of the model if you're referring to a model on the hub. You can use a specific commit id or a branch like `refs/pr/2`
          
          [env: REVISION=]

```
## VALIDATION_WORKERS
```shell
      --validation-workers <VALIDATION_WORKERS>
          The number of tokenizer workers used for payload validation and truncation inside the router
          
          [env: VALIDATION_WORKERS=]
          [default: 2]

```
## SHARDED
```shell
      --sharded <SHARDED>
          Whether to shard the model across multiple GPUs. By default text-generation-inference will use all available GPUs to run the model. Setting it to `false` deactivates `num_shard`
          
          [env: SHARDED=]
          [possible values: true, false]

```
## NUM_SHARD
```shell
      --num-shard <NUM_SHARD>
          The number of shards to use if you don't want to use all GPUs on a given machine. You can use `CUDA_VISIBLE_DEVICES=0,1 text-generation-launcher... --num-shard 2` and `CUDA_VISIBLE_DEVICES=2,3 text-generation-launcher... --num-shard 2` to launch 2 copies with 2 shards each on a given machine with 4 GPUs for instance
          
          [env: NUM_SHARD=]

```
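
As a concrete sketch of the scenario described above, a 4-GPU machine could run two independent copies, each sharded over 2 GPUs. The model id and the second port are illustrative choices, not defaults:

```shell
# Hypothetical: copy 1 on GPUs 0-1, copy 2 on GPUs 2-3, each with 2 shards
CUDA_VISIBLE_DEVICES=0,1 text-generation-launcher --model-id gpt2 --num-shard 2 --port 3000
CUDA_VISIBLE_DEVICES=2,3 text-generation-launcher --model-id gpt2 --num-shard 2 --port 3001
```
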
## QUANTIZE
```shell
      --quantize <QUANTIZE>
          Quantization method to use for the model. It is not necessary to specify this option for pre-quantized models, since the quantization method is read from the model configuration.
          
          Marlin kernels will be used automatically for GPTQ/AWQ models.
          
          [env: QUANTIZE=]

          Possible values:
          - awq:                4 bit quantization. Requires a specific AWQ quantized model: <https://hf.co/models?search=awq>. Should replace GPTQ models wherever possible because of the better latency
          - compressed-tensors: Compressed tensors, which can be a mixture of different quantization methods
          - eetq:               8 bit quantization, doesn't require a specific model. Should be a drop-in replacement to bitsandbytes with much better performance. Kernels are from <https://github.com/NetEase-FuXi/EETQ.git>
          - exl2:               Variable bit quantization. Requires a specific EXL2 quantized model: <https://hf.co/models?search=exl2>. Requires exllama2 kernels and does not support tensor parallelism (num_shard > 1)
          - gptq:               4 bit quantization. Requires a specific GPTQ quantized model: <https://hf.co/models?search=gptq>. text-generation-inference will use exllama (faster) kernels wherever possible, and use the triton kernel (wider support) when it's not. AWQ has faster kernels
          - marlin:             4 bit quantization. Requires a specific Marlin quantized model: <https://hf.co/models?search=marlin>
          - bitsandbytes:       Bitsandbytes 8bit. Can be applied on any model, will cut the memory requirement in half, but it is known that the model will be much slower to run than the native f16
          - bitsandbytes-nf4:   Bitsandbytes 4bit. Can be applied on any model, will cut the memory requirement by 4x, but it is known that the model will be much slower to run than the native f16
          - bitsandbytes-fp4:   Bitsandbytes 4bit. nf4 should be preferred in most cases but maybe this one has better perplexity performance for your model
          - fp8:                [FP8](https://developer.nvidia.com/blog/nvidia-arm-and-intel-publish-fp8-specification-for-standardization-as-an-interchange-format-for-ai/) (e4m3) works on H100 and above. This dtype has native ops and should be the fastest if available. It is currently not the fastest because of local unpacking + padding to satisfy matrix multiplication limitations

```
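
As a hedged example, on-the-fly quantization of an unquantized checkpoint with one of the bitsandbytes variants listed above (no pre-quantized model required) could look like:

```shell
# Sketch: load the default bloom-560m checkpoint and quantize it to 4-bit NF4 at load time
text-generation-launcher --model-id bigscience/bloom-560m --quantize bitsandbytes-nf4
```
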
## SPECULATE
```shell
      --speculate <SPECULATE>
          The number of input_ids to speculate on. If using a medusa model, the heads will be picked up automatically. Otherwise, it will use n-gram speculation, which is relatively free in terms of compute, but the speedup heavily depends on the task
          
          [env: SPECULATE=]

```
## DTYPE
```shell
      --dtype <DTYPE>
          The dtype to be forced upon the model. This option cannot be used with `--quantize`
          
          [env: DTYPE=]
          [possible values: float16, bfloat16]

```
## KV_CACHE_DTYPE
```shell
      --kv-cache-dtype <KV_CACHE_DTYPE>
          Specify the dtype for the key-value cache. When this option is not provided, the dtype of the model is used (typically `float16` or `bfloat16`). Currently the only supported values are `fp8_e4m3fn` and `fp8_e5m2` on CUDA
          
          [env: KV_CACHE_DTYPE=]
          [possible values: fp8_e4m3fn, fp8_e5m2]

```
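
For illustration, selecting an FP8 key-value cache (on CUDA hardware that supports it) combines with the usual options like so; `fp8_e5m2` is one of the two values listed above and the model id is a placeholder:

```shell
# Sketch: keep model weights at their native dtype but store the KV cache in fp8_e5m2
text-generation-launcher --model-id <MODEL_ID> --kv-cache-dtype fp8_e5m2
```
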
## TRUST_REMOTE_CODE
```shell
      --trust-remote-code
          Whether you want to execute hub modelling code. Explicitly passing a `revision` is encouraged when loading a model with custom code to ensure no malicious code has been contributed in a newer revision
          
          [env: TRUST_REMOTE_CODE=]

```
## MAX_CONCURRENT_REQUESTS
```shell
      --max-concurrent-requests <MAX_CONCURRENT_REQUESTS>
          The maximum number of concurrent requests for this particular deployment. Having a low limit will refuse clients' requests instead of having them wait for too long and is usually good to handle backpressure correctly
          
          [env: MAX_CONCURRENT_REQUESTS=]
          [default: 128]

```
## MAX_BEST_OF
```shell
      --max-best-of <MAX_BEST_OF>
          This is the maximum allowed value for clients to set `best_of`. Best of makes `n` generations at the same time, and returns the best in terms of overall log probability over the entire generated sequence
          
          [env: MAX_BEST_OF=]
          [default: 2]

```
## MAX_STOP_SEQUENCES
```shell
      --max-stop-sequences <MAX_STOP_SEQUENCES>
          This is the maximum allowed value for clients to set `stop_sequences`. Stop sequences are used to allow the model to stop on more than just the EOS token, and enable more complex "prompting" where users can preprompt the model in a specific way and define their "own" stop token aligned with their prompt
          
          [env: MAX_STOP_SEQUENCES=]
          [default: 4]

```
## MAX_TOP_N_TOKENS
```shell
      --max-top-n-tokens <MAX_TOP_N_TOKENS>
          This is the maximum allowed value for clients to set `top_n_tokens`. `top_n_tokens` is used to return information about the `n` most likely tokens at each generation step, instead of just the sampled token. This information can be used for downstream tasks like classification or ranking
          
          [env: MAX_TOP_N_TOKENS=]
          [default: 5]

```
## MAX_INPUT_TOKENS
```shell
      --max-input-tokens <MAX_INPUT_TOKENS>
          This is the maximum allowed input length (expressed in number of tokens) for users. The larger this value, the longer prompts users can send, which can impact the overall memory required to handle the load. Please note that some models have a finite range of sequences they can handle. Defaults to min(max_allocatable, max_position_embeddings) - 1
          
          [env: MAX_INPUT_TOKENS=]

```
## MAX_INPUT_LENGTH
```shell
      --max-input-length <MAX_INPUT_LENGTH>
          Legacy version of [`Args::max_input_tokens`]
          
          [env: MAX_INPUT_LENGTH=]

```
## MAX_TOTAL_TOKENS
```shell
      --max-total-tokens <MAX_TOTAL_TOKENS>
          This is the most important value to set as it defines the "memory budget" of running clients' requests. Clients will send input sequences and ask to generate `max_new_tokens` on top. With a value of `1512`, users can send either a prompt of `1000` and ask for `512` new tokens, or send a prompt of `1` and ask for `1511` max_new_tokens. The larger this value, the larger each request will be in your RAM and the less effective batching can be. Defaults to min(max_allocatable, max_position_embeddings)
          
          [env: MAX_TOTAL_TOKENS=]

```
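
Putting the two limits together, the `1512` budget described above could be spelled out explicitly; the numbers are purely illustrative:

```shell
# Sketch: prompts capped at 1000 tokens, prompt + generated tokens capped at 1512 per request
text-generation-launcher --model-id <MODEL_ID> \
    --max-input-tokens 1000 \
    --max-total-tokens 1512
```
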
## WAITING_SERVED_RATIO
```shell
      --waiting-served-ratio <WAITING_SERVED_RATIO>
          This represents the ratio of waiting queries vs running queries where you want to start considering pausing the running queries to include the waiting ones into the same batch. `waiting_served_ratio=1.2` means that when 12 queries are waiting and there are only 10 queries left in the current batch, we check if we can fit those 12 waiting queries into the batching strategy, and if yes, then batching happens, delaying the 10 running queries by a `prefill` run.
          
          This setting is only applied if there is room in the batch as defined by `max_batch_total_tokens`.
          
          [env: WAITING_SERVED_RATIO=]
          [default: 0.3]

```
## MAX_BATCH_PREFILL_TOKENS
```shell
      --max-batch-prefill-tokens <MAX_BATCH_PREFILL_TOKENS>
          Limits the number of tokens for the prefill operation. Since this operation takes the most memory and is compute bound, it is useful to limit the number of requests that can be sent. Defaults to `max_input_tokens + 50` to give a bit of room
          
          [env: MAX_BATCH_PREFILL_TOKENS=]

```
## MAX_BATCH_TOTAL_TOKENS
```shell
      --max-batch-total-tokens <MAX_BATCH_TOTAL_TOKENS>
          **IMPORTANT** This is one critical control to allow maximum usage of the available hardware.
          
          This represents the total amount of potential tokens within a batch. When using padding (not recommended), this would be equivalent to `batch_size` * `max_total_tokens`.
          
          However in the non-padded (flash attention) version this can be much finer.
          
          For `max_batch_total_tokens=1000`, you could fit `10` queries of `total_tokens=100` or a single query of `1000` tokens.
          
          Overall this number should be the largest possible amount that fits the remaining memory (after the model is loaded). Since the actual memory overhead depends on other parameters like if you're using quantization, flash attention or the model implementation, text-generation-inference cannot infer this number automatically.
          
          [env: MAX_BATCH_TOTAL_TOKENS=]

```
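
If you do want to pin these batch-level limits by hand (for example after measuring how much memory remains once the model is loaded), a hypothetical combination could be:

```shell
# Sketch: cap prefill batches at 4096 tokens and the whole running batch at 32000 tokens
text-generation-launcher --model-id <MODEL_ID> \
    --max-batch-prefill-tokens 4096 \
    --max-batch-total-tokens 32000
```
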
## MAX_WAITING_TOKENS
```shell
      --max-waiting-tokens <MAX_WAITING_TOKENS>
          This setting defines how many tokens can be passed before forcing the waiting queries to be put on the batch (if the size of the batch allows for it). New queries require 1 `prefill` forward, which is different from `decode` and therefore you need to pause the running batch in order to run `prefill` to create the correct values for the waiting queries to be able to join the batch.
          
          With a value too small, queries will always "steal" the compute to run `prefill` and running queries will be delayed by a lot.
          
          With a value too big, waiting queries could wait for a very long time before being allowed a slot in the running batch. If your server is busy that means that requests that could run in ~2s on an empty server could end up running in ~20s because the query had to wait for 18s.
          
          This number is expressed in number of tokens to make it a bit more "model" agnostic, but what should really matter is the overall latency for end users.
          
          [env: MAX_WAITING_TOKENS=]
          [default: 20]

```
## MAX_BATCH_SIZE
```shell
      --max-batch-size <MAX_BATCH_SIZE>
          Enforce a maximum number of requests per batch. This is a specific flag for hardware targets that do not support unpadded inference
          
          [env: MAX_BATCH_SIZE=]

```
## CUDA_GRAPHS
```shell
      --cuda-graphs <CUDA_GRAPHS>
          Specify the batch sizes to compute cuda graphs for. Use "0" to disable. Default = "1,2,4,8,16,32"
          
          [env: CUDA_GRAPHS=]

```
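
For example, restricting the captured batch sizes, or disabling CUDA graph capture entirely, could look like this (the sizes are illustrative):

```shell
# Sketch: only capture CUDA graphs for small batch sizes
text-generation-launcher --model-id <MODEL_ID> --cuda-graphs "1,2,4"

# Sketch: disable CUDA graphs altogether
text-generation-launcher --model-id <MODEL_ID> --cuda-graphs 0
```
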
## HOSTNAME
```shell
      --hostname <HOSTNAME>
          The IP address to listen on
          
          [env: HOSTNAME=]
          [default: 0.0.0.0]

```
## PORT
```shell
  -p, --port <PORT>
          The port to listen on
          
          [env: PORT=]
          [default: 3000]

```
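
A hedged example binding the server to a specific interface and port (values chosen for illustration only):

```shell
# Sketch: listen on localhost only, on port 8080 instead of the default 3000
text-generation-launcher --model-id <MODEL_ID> --hostname 127.0.0.1 --port 8080
```
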
## SHARD_UDS_PATH
```shell
      --shard-uds-path <SHARD_UDS_PATH>
          The name of the socket for gRPC communication between the webserver and the shards
          
          [env: SHARD_UDS_PATH=]
          [default: /tmp/text-generation-server]

```
## MASTER_ADDR
```shell
      --master-addr <MASTER_ADDR>
          The address the master shard will listen on. (setting used by torch distributed)
          
          [env: MASTER_ADDR=]
          [default: localhost]

```
## MASTER_PORT
```shell
      --master-port <MASTER_PORT>
          The port the master shard will listen on. (setting used by torch distributed)
          
          [env: MASTER_PORT=]
          [default: 29500]

```
## HUGGINGFACE_HUB_CACHE
```shell
      --huggingface-hub-cache <HUGGINGFACE_HUB_CACHE>
          The location of the huggingface hub cache. Used to override the location if you want to provide a mounted disk for instance
          
          [env: HUGGINGFACE_HUB_CACHE=]

```
## WEIGHTS_CACHE_OVERRIDE
```shell
      --weights-cache-override <WEIGHTS_CACHE_OVERRIDE>
          The location of the weights cache. Used to override the location if you want to provide a mounted disk for instance
          
          [env: WEIGHTS_CACHE_OVERRIDE=]

```
## DISABLE_CUSTOM_KERNELS
```shell
      --disable-custom-kernels
          For some models (like bloom), text-generation-inference implemented custom cuda kernels to speed up inference. Those kernels were only tested on A100. Use this flag to disable them if you're running on different hardware and encounter issues
          
          [env: DISABLE_CUSTOM_KERNELS=]

```
## CUDA_MEMORY_FRACTION
```shell
      --cuda-memory-fraction <CUDA_MEMORY_FRACTION>
          Limit the available CUDA memory. The allowed value equals the total visible memory multiplied by `cuda-memory-fraction`
          
          [env: CUDA_MEMORY_FRACTION=]
          [default: 1.0]

```
## ROPE_SCALING
```shell
      --rope-scaling <ROPE_SCALING>
          Rope scaling will only be used for RoPE models and allows rescaling the rotary position embeddings to accommodate larger prompts.
          
          Goes together with `rope_factor`.
          
          `--rope-factor 2.0` gives linear scaling with a factor of 2.0. `--rope-scaling dynamic` gives dynamic scaling with a factor of 1.0. `--rope-scaling linear` gives linear scaling with a factor of 1.0 (nothing will be changed, basically)
          
          `--rope-scaling linear --rope-factor` fully describes the scaling you want
          
          [env: ROPE_SCALING=]
          [possible values: linear, dynamic]

```
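
Combining the two RoPE options as described above, for instance linear scaling with a factor of 2.0 on a RoPE-based model (the model id is a placeholder), might look like:

```shell
# Sketch: double the effective context length of a RoPE model via linear scaling
text-generation-launcher --model-id <MODEL_ID> --rope-scaling linear --rope-factor 2.0
```
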
## ROPE_FACTOR
```shell
      --rope-factor <ROPE_FACTOR>
          Rope scaling will only be used for RoPE models. See `rope_scaling`
          
          [env: ROPE_FACTOR=]

```
## JSON_OUTPUT
```shell
      --json-output
          Outputs the logs in JSON format (useful for telemetry)
          
          [env: JSON_OUTPUT=]

```
## OTLP_ENDPOINT
```shell
      --otlp-endpoint <OTLP_ENDPOINT>
          [env: OTLP_ENDPOINT=]

```
## OTLP_SERVICE_NAME
```shell
      --otlp-service-name <OTLP_SERVICE_NAME>
          [env: OTLP_SERVICE_NAME=]
          [default: text-generation-inference.router]

```
## CORS_ALLOW_ORIGIN
```shell
      --cors-allow-origin <CORS_ALLOW_ORIGIN>
          [env: CORS_ALLOW_ORIGIN=]

```
## API_KEY
```shell
      --api-key <API_KEY>
          [env: API_KEY=]

```
## WATERMARK_GAMMA
```shell
      --watermark-gamma <WATERMARK_GAMMA>
          [env: WATERMARK_GAMMA=]

```
## WATERMARK_DELTA
```shell
      --watermark-delta <WATERMARK_DELTA>
          [env: WATERMARK_DELTA=]

```
## NGROK
```shell
      --ngrok
          Enable ngrok tunneling
          
          [env: NGROK=]

```
## NGROK_AUTHTOKEN
```shell
      --ngrok-authtoken <NGROK_AUTHTOKEN>
          ngrok authentication token
          
          [env: NGROK_AUTHTOKEN=]

```
## NGROK_EDGE
```shell
      --ngrok-edge <NGROK_EDGE>
          ngrok edge
          
          [env: NGROK_EDGE=]

```
## TOKENIZER_CONFIG_PATH
```shell
      --tokenizer-config-path <TOKENIZER_CONFIG_PATH>
          The path to the tokenizer config file. This path is used to load the tokenizer configuration which may include a `chat_template`. If not provided, the default config will be used from the model hub
          
          [env: TOKENIZER_CONFIG_PATH=]

```
## DISABLE_GRAMMAR_SUPPORT
```shell
      --disable-grammar-support
          Disable outlines grammar constrained generation. This is a feature that allows you to generate text that follows a specific grammar
          
          [env: DISABLE_GRAMMAR_SUPPORT=]

```
## ENV
```shell
  -e, --env
          Display a lot of information about your runtime environment

```
## MAX_CLIENT_BATCH_SIZE
```shell
      --max-client-batch-size <MAX_CLIENT_BATCH_SIZE>
          Control the maximum number of inputs that a client can send in a single request
          
          [env: MAX_CLIENT_BATCH_SIZE=]
          [default: 4]

```
## LORA_ADAPTERS
```shell
      --lora-adapters <LORA_ADAPTERS>
          Lora Adapters: a list of adapter ids, i.e. `repo/adapter1,repo/adapter2`, to load during startup. These adapters will be available to callers via the `adapter_id` field in a request
          
          [env: LORA_ADAPTERS=]

```
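
A hypothetical startup preloading the two adapters from the example above; callers would then select one via the `adapter_id` field of their requests:

```shell
# Sketch: make two LoRA adapters available at startup (ids are placeholders)
text-generation-launcher --model-id <MODEL_ID> \
    --lora-adapters repo/adapter1,repo/adapter2
```
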
## USAGE_STATS
```shell
      --usage-stats <USAGE_STATS>
          Control if anonymous usage stats are collected. Options are "on", "off" and "no-stack". Default is "on"
          
          [env: USAGE_STATS=]
          [default: on]

          Possible values:
          - on:       Default option, usage statistics are collected anonymously
          - off:      Disables all collection of usage statistics
          - no-stack: Doesn't send the error stack trace or error type, but allows sending a crash event

```
## PAYLOAD_LIMIT
```shell
      --payload-limit <PAYLOAD_LIMIT>
          Payload size limit in bytes
          
          Default is 2MB
          
          [env: PAYLOAD_LIMIT=]
          [default: 2000000]

```
## ENABLE_PREFILL_LOGPROBS
```shell
      --enable-prefill-logprobs
          Enables prefill logprobs
          
          Logprobs in the prompt are deactivated by default because they consume a large amount of VRAM (especially for long prompts). Using this flag allows users to ask for them again.
          
          [env: ENABLE_PREFILL_LOGPROBS=]

```
## HELP
```shell
  -h, --help
          Print help (see a summary with '-h')

```
## VERSION
```shell
  -V, --version
          Print version

```