# Text-generation-launcher arguments

<!-- WRAP CODE BLOCKS -->

```shell
Text Generation Launcher

Usage: text-generation-launcher [OPTIONS]

Options:
```
## MODEL_ID
```shell
      --model-id <MODEL_ID>
          The name of the model to load. Can be a MODEL_ID as listed on <https://hf.co/models> like `gpt2` or `OpenAssistant/oasst-sft-1-pythia-12b`. Or it can be a local directory containing the necessary files as saved by `save_pretrained(...)` methods of transformers
          
          [env: MODEL_ID=]
          [default: bigscience/bloom-560m]

```
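For example, a minimal launch that loads the default model from the Hub (the model id is illustrative; any Hub id or a local directory saved with `save_pretrained(...)` works):

```shell
text-generation-launcher --model-id bigscience/bloom-560m
```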
## REVISION
```shell
      --revision <REVISION>
          The actual revision of the model if you're referring to a model on the hub. You can use a specific commit id or a branch like `refs/pr/2`
          
          [env: REVISION=]

```
## VALIDATION_WORKERS
```shell
      --validation-workers <VALIDATION_WORKERS>
          The number of tokenizer workers used for payload validation and truncation inside the router
          
          [env: VALIDATION_WORKERS=]
          [default: 2]

```
## SHARDED
```shell
      --sharded <SHARDED>
          Whether to shard the model across multiple GPUs. By default text-generation-inference will use all available GPUs to run the model. Setting it to `false` deactivates `num_shard`
          
          [env: SHARDED=]
          [possible values: true, false]

```
## NUM_SHARD
```shell
      --num-shard <NUM_SHARD>
          The number of shards to use if you don't want to use all GPUs on a given machine. You can use `CUDA_VISIBLE_DEVICES=0,1 text-generation-launcher... --num_shard 2` and `CUDA_VISIBLE_DEVICES=2,3 text-generation-launcher... --num_shard 2` to launch 2 copies with 2 shards each on a given machine with 4 GPUs, for instance
          
          [env: NUM_SHARD=]

```
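As sketched in the description above, `--num-shard` combines with `CUDA_VISIBLE_DEVICES` to run several smaller copies on one machine. The port values below are illustrative:

```shell
# Two copies on a 4-GPU machine, each sharded across 2 GPUs
CUDA_VISIBLE_DEVICES=0,1 text-generation-launcher --model-id bigscience/bloom-560m --num-shard 2 --port 3000
CUDA_VISIBLE_DEVICES=2,3 text-generation-launcher --model-id bigscience/bloom-560m --num-shard 2 --port 3001
```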
## QUANTIZE
```shell
      --quantize <QUANTIZE>
          Quantization method to use for the model. It is not necessary to specify this option for pre-quantized models, since the quantization method is read from the model configuration.
          
          Marlin kernels will be used automatically for GPTQ/AWQ models.
          
          [env: QUANTIZE=]

          Possible values:
          - awq:              4 bit quantization. Requires a specific AWQ quantized model: <https://hf.co/models?search=awq>. Should replace GPTQ models wherever possible because of the better latency
          - eetq:             8 bit quantization, doesn't require a specific model. Should be a drop-in replacement to bitsandbytes with much better performance. Kernels are from <https://github.com/NetEase-FuXi/EETQ.git>
          - exl2:             Variable bit quantization. Requires a specific EXL2 quantized model: <https://hf.co/models?search=exl2>. Requires exllama2 kernels and does not support tensor parallelism (num_shard > 1)
          - gptq:             4 bit quantization. Requires a specific GPTQ quantized model: <https://hf.co/models?search=gptq>. text-generation-inference will use exllama (faster) kernels wherever possible, and use triton kernels (wider support) when it's not. AWQ has faster kernels
          - marlin:           4 bit quantization. Requires a specific Marlin quantized model: <https://hf.co/models?search=marlin>
          - bitsandbytes:     Bitsandbytes 8bit. Can be applied on any model, will cut the memory requirement in half, but it is known that the model will be much slower to run than the native f16
          - bitsandbytes-nf4: Bitsandbytes 4bit. Can be applied on any model, will cut the memory requirement by 4x, but it is known that the model will be much slower to run than the native f16
          - bitsandbytes-fp4: Bitsandbytes 4bit. nf4 should be preferred in most cases but maybe this one has better perplexity performance for your model
          - fp8:              [FP8](https://developer.nvidia.com/blog/nvidia-arm-and-intel-publish-fp8-specification-for-standardization-as-an-interchange-format-for-ai/) (e4m3) works on H100 and above. This dtype has native ops and should be the fastest if available, but it is currently not the fastest because of local unpacking + padding to satisfy matrix multiplication limitations
```
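For instance, a launch that quantizes an unquantized model on the fly with bitsandbytes NF4 might look like the sketch below (model id illustrative; pre-quantized GPTQ/AWQ/Marlin models do not need this flag):

```shell
text-generation-launcher --model-id bigscience/bloom-560m --quantize bitsandbytes-nf4
```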
## SPECULATE
```shell
      --speculate <SPECULATE>
          The number of input_ids to speculate on. If using a medusa model, the heads will be picked up automatically. Otherwise, it will use n-gram speculation which is relatively free in terms of compute, but the speedup heavily depends on the task
          
          [env: SPECULATE=]

```
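A possible invocation enabling n-gram speculation over 2 tokens (the value is illustrative; medusa models pick up their heads automatically):

```shell
text-generation-launcher --model-id bigscience/bloom-560m --speculate 2
```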
## DTYPE
```shell
      --dtype <DTYPE>
          The dtype to be forced upon the model. This option cannot be used with `--quantize`
          
          [env: DTYPE=]
          [possible values: float16, bfloat16]

```
## KV_CACHE_DTYPE
```shell
      --kv-cache-dtype <KV_CACHE_DTYPE>
          Specify the dtype for the key-value cache. When this option is not provided, the dtype of the model is used (typically `float16` or `bfloat16`). Currently the only supported values are `fp8_e4m3fn` and `fp8_e5m2` on CUDA
          
          [env: KV_CACHE_DTYPE=]
          [possible values: fp8_e4m3fn, fp8_e5m2]

```
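For example, to store the key-value cache in FP8 (e5m2) on CUDA (model id illustrative):

```shell
text-generation-launcher --model-id bigscience/bloom-560m --kv-cache-dtype fp8_e5m2
```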
## TRUST_REMOTE_CODE
```shell
      --trust-remote-code
          Whether you want to execute hub modelling code. Explicitly passing a `revision` is encouraged when loading a model with custom code to ensure no malicious code has been contributed in a newer revision
          
          [env: TRUST_REMOTE_CODE=]

```
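Following the advice above, pin a known revision whenever you trust remote code. The model id below is a hypothetical placeholder:

```shell
text-generation-launcher --model-id some-org/custom-model --revision refs/pr/2 --trust-remote-code
```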
## MAX_CONCURRENT_REQUESTS
```shell
      --max-concurrent-requests <MAX_CONCURRENT_REQUESTS>
          The maximum number of concurrent requests for this particular deployment. Having a low limit will refuse clients' requests instead of having them wait for too long, and is usually good to handle backpressure correctly
          
          [env: MAX_CONCURRENT_REQUESTS=]
          [default: 128]

```
## MAX_BEST_OF
```shell
      --max-best-of <MAX_BEST_OF>
          This is the maximum allowed value for clients to set `best_of`. Best of makes `n` generations at the same time, and returns the best in terms of overall log probability over the entire generated sequence
          
          [env: MAX_BEST_OF=]
          [default: 2]

```
## MAX_STOP_SEQUENCES
```shell
      --max-stop-sequences <MAX_STOP_SEQUENCES>
          This is the maximum allowed value for clients to set `stop_sequences`. Stop sequences are used to allow the model to stop on more than just the EOS token, and enable more complex "prompting" where users can preprompt the model in a specific way and define their "own" stop token aligned with their prompt
          
          [env: MAX_STOP_SEQUENCES=]
          [default: 4]

```
## MAX_TOP_N_TOKENS
```shell
      --max-top-n-tokens <MAX_TOP_N_TOKENS>
          This is the maximum allowed value for clients to set `top_n_tokens`. `top_n_tokens` is used to return information about the `n` most likely tokens at each generation step, instead of just the sampled token. This information can be used for downstream tasks such as classification or ranking
          
          [env: MAX_TOP_N_TOKENS=]
          [default: 5]

```
## MAX_INPUT_TOKENS
```shell
      --max-input-tokens <MAX_INPUT_TOKENS>
          This is the maximum allowed input length (expressed in number of tokens) for users. The larger this value, the longer the prompts users can send, which can impact the overall memory required to handle the load. Please note that some models have a finite range of sequences they can handle. Defaults to min(max_allocatable, max_position_embeddings) - 1
          
          [env: MAX_INPUT_TOKENS=]

```
## MAX_INPUT_LENGTH
```shell
      --max-input-length <MAX_INPUT_LENGTH>
          Legacy version of [`Args::max_input_tokens`]
          
          [env: MAX_INPUT_LENGTH=]

```
## MAX_TOTAL_TOKENS
```shell
      --max-total-tokens <MAX_TOTAL_TOKENS>
          This is the most important value to set as it defines the "memory budget" of running clients requests. Clients will send input sequences and ask to generate `max_new_tokens` on top. With a value of `1512`, users can send either a prompt of `1000` and ask for `512` new tokens, or send a prompt of `1` and ask for `1511` max_new_tokens. The larger this value, the more memory each request will take in your RAM and the less effective batching can be. Defaults to min(max_allocatable, max_position_embeddings)
          
          [env: MAX_TOTAL_TOKENS=]

```
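To make the budget arithmetic above concrete, the illustrative invocation below reproduces the `1512` example: a prompt of up to 1000 tokens leaves 512 tokens for generation:

```shell
text-generation-launcher --model-id bigscience/bloom-560m \
    --max-input-tokens 1000 \
    --max-total-tokens 1512
```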
## WAITING_SERVED_RATIO
```shell
      --waiting-served-ratio <WAITING_SERVED_RATIO>
          This represents the ratio of waiting queries vs running queries where you want to start considering pausing the running queries to include the waiting ones into the same batch. `waiting_served_ratio=1.2` means that when 12 queries are waiting and there are only 10 queries left in the current batch, we check if we can fit those 12 waiting queries into the batching strategy, and if yes, then batching happens, delaying the 10 running queries by a `prefill` run.
          
          This setting is only applied if there is room in the batch as defined by `max_batch_total_tokens`.
          
          [env: WAITING_SERVED_RATIO=]
          [default: 0.3]

```
## MAX_BATCH_PREFILL_TOKENS
```shell
      --max-batch-prefill-tokens <MAX_BATCH_PREFILL_TOKENS>
          Limits the number of tokens for the prefill operation. Since this operation takes the most memory and is compute bound, it is interesting to limit the number of requests that can be sent. Defaults to `max_input_tokens + 50` to give a bit of room
          
          [env: MAX_BATCH_PREFILL_TOKENS=]

```
## MAX_BATCH_TOTAL_TOKENS
```shell
      --max-batch-total-tokens <MAX_BATCH_TOTAL_TOKENS>
          **IMPORTANT** This is one critical control to allow maximum usage of the available hardware.
          
          This represents the total amount of potential tokens within a batch. When using padding (not recommended) this would be equivalent to `batch_size` * `max_total_tokens`.
          
          However in the non-padded (flash attention) version this can be much finer.
          
          For `max_batch_total_tokens=1000`, you could fit `10` queries of `total_tokens=100` or a single query of `1000` tokens.
          
          Overall this number should be the largest possible amount that fits the remaining memory (after the model is loaded). Since the actual memory overhead depends on other parameters like if you're using quantization, flash attention or the model implementation, text-generation-inference cannot infer this number automatically.
          
          [env: MAX_BATCH_TOTAL_TOKENS=]

```
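As an illustrative sketch of the arithmetic above (not a tuning recommendation), the following caps the batch at 1000 total tokens, i.e. ten queries of `total_tokens=100` or a single 1000-token query:

```shell
text-generation-launcher --model-id bigscience/bloom-560m --max-batch-total-tokens 1000
```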
## MAX_WAITING_TOKENS
```shell
      --max-waiting-tokens <MAX_WAITING_TOKENS>
          This setting defines how many tokens can be passed before forcing the waiting queries to be put on the batch (if the size of the batch allows for it). New queries require 1 `prefill` forward, which is different from `decode` and therefore you need to pause the running batch in order to run `prefill` to create the correct values for the waiting queries to be able to join the batch.
          
          With a value too small, queries will always "steal" the compute to run `prefill` and running queries will be delayed by a lot.
          
          With a value too big, waiting queries could wait for a very long time before being allowed a slot in the running batch. If your server is busy that means that requests that could run in ~2s on an empty server could end up running in ~20s because the query had to wait for 18s.
          
          This number is expressed in number of tokens to make it a bit more "model" agnostic, but what should really matter is the overall latency for end users.
          
          [env: MAX_WAITING_TOKENS=]
          [default: 20]

```
## MAX_BATCH_SIZE
```shell
      --max-batch-size <MAX_BATCH_SIZE>
          Enforce a maximum number of requests per batch. Specific flag for hardware targets that do not support unpadded inference
          
          [env: MAX_BATCH_SIZE=]

```
## CUDA_GRAPHS
```shell
      --cuda-graphs <CUDA_GRAPHS>
          Specify the batch sizes to compute cuda graphs for. Use "0" to disable. Default = "1,2,4,8,16,32"
          
          [env: CUDA_GRAPHS=]

```
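For example, to restrict cuda graphs to small batch sizes or disable them entirely (values illustrative):

```shell
text-generation-launcher --model-id bigscience/bloom-560m --cuda-graphs 1,2,4
# or disable cuda graphs altogether:
text-generation-launcher --model-id bigscience/bloom-560m --cuda-graphs 0
```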
## HOSTNAME
```shell
      --hostname <HOSTNAME>
          The IP address to listen on
          
          [env: HOSTNAME=]
          [default: 0.0.0.0]

```
## PORT
```shell
  -p, --port <PORT>
          The port to listen on
          
          [env: PORT=]
          [default: 3000]

```
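For example, to expose the webserver on a non-default port (value illustrative):

```shell
text-generation-launcher --model-id bigscience/bloom-560m --hostname 0.0.0.0 --port 8080
```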
## SHARD_UDS_PATH
```shell
      --shard-uds-path <SHARD_UDS_PATH>
          The name of the socket for gRPC communication between the webserver and the shards
          
          [env: SHARD_UDS_PATH=]
          [default: /tmp/text-generation-server]

```
## MASTER_ADDR
```shell
      --master-addr <MASTER_ADDR>
          The address the master shard will listen on. (setting used by torch distributed)
          
          [env: MASTER_ADDR=]
          [default: localhost]

```
## MASTER_PORT
```shell
      --master-port <MASTER_PORT>
          The port the master shard will listen on. (setting used by torch distributed)
          
          [env: MASTER_PORT=]
          [default: 29500]

```
## HUGGINGFACE_HUB_CACHE
```shell
      --huggingface-hub-cache <HUGGINGFACE_HUB_CACHE>
          The location of the huggingface hub cache. Used to override the location if you want to provide a mounted disk for instance
          
          [env: HUGGINGFACE_HUB_CACHE=]

```
## WEIGHTS_CACHE_OVERRIDE
```shell
      --weights-cache-override <WEIGHTS_CACHE_OVERRIDE>
          The location of the huggingface hub cache. Used to override the location if you want to provide a mounted disk for instance
          
          [env: WEIGHTS_CACHE_OVERRIDE=]

```
## DISABLE_CUSTOM_KERNELS
```shell
      --disable-custom-kernels
          For some models (like bloom), text-generation-inference implemented custom cuda kernels to speed up inference. Those kernels were only tested on A100. Use this flag to disable them if you're running on different hardware and encounter issues
          
          [env: DISABLE_CUSTOM_KERNELS=]

```
## CUDA_MEMORY_FRACTION
```shell
      --cuda-memory-fraction <CUDA_MEMORY_FRACTION>
          Limit the CUDA available memory. The allowed value equals the total visible memory multiplied by cuda-memory-fraction
          
          [env: CUDA_MEMORY_FRACTION=]
          [default: 1.0]

```
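For example, to cap the server at half of the visible GPU memory (the fraction is illustrative), leaving room for another process on the same device:

```shell
text-generation-launcher --model-id bigscience/bloom-560m --cuda-memory-fraction 0.5
```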
## ROPE_SCALING
```shell
      --rope-scaling <ROPE_SCALING>
          Rope scaling will only be used for RoPE models and allows rescaling the position rotary to accommodate larger prompts.
          
          Goes together with `rope_factor`.
          
          `--rope-factor 2.0` gives linear scaling with a factor of 2.0. `--rope-scaling dynamic` gives dynamic scaling with a factor of 1.0. `--rope-scaling linear` gives linear scaling with a factor of 1.0 (basically nothing will be changed)
          
          `--rope-scaling linear --rope-factor` fully describes the scaling you want
          
          [env: ROPE_SCALING=]
          [possible values: linear, dynamic]

```
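Combining the two flags as described, linear scaling with a factor of 2.0 would look like the sketch below (model id illustrative; this only affects RoPE models):

```shell
text-generation-launcher --model-id bigscience/bloom-560m --rope-scaling linear --rope-factor 2.0
```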
## ROPE_FACTOR
```shell
      --rope-factor <ROPE_FACTOR>
          Rope scaling will only be used for RoPE models. See `rope_scaling`
          
          [env: ROPE_FACTOR=]

```
## JSON_OUTPUT
```shell
      --json-output
          Outputs the logs in JSON format (useful for telemetry)
          
          [env: JSON_OUTPUT=]

```
## OTLP_ENDPOINT
```shell
      --otlp-endpoint <OTLP_ENDPOINT>
          [env: OTLP_ENDPOINT=]

```
## OTLP_SERVICE_NAME
```shell
      --otlp-service-name <OTLP_SERVICE_NAME>
          [env: OTLP_SERVICE_NAME=]
          [default: text-generation-inference.router]

```
## CORS_ALLOW_ORIGIN
```shell
      --cors-allow-origin <CORS_ALLOW_ORIGIN>
          [env: CORS_ALLOW_ORIGIN=]

```
## API_KEY
```shell
      --api-key <API_KEY>
          [env: API_KEY=]

```
## WATERMARK_GAMMA
```shell
      --watermark-gamma <WATERMARK_GAMMA>
          [env: WATERMARK_GAMMA=]

```
## WATERMARK_DELTA
```shell
      --watermark-delta <WATERMARK_DELTA>
          [env: WATERMARK_DELTA=]

```
## NGROK
```shell
      --ngrok
          Enable ngrok tunneling
          
          [env: NGROK=]

```
## NGROK_AUTHTOKEN
```shell
      --ngrok-authtoken <NGROK_AUTHTOKEN>
          ngrok authentication token
          
          [env: NGROK_AUTHTOKEN=]

```
## NGROK_EDGE
```shell
      --ngrok-edge <NGROK_EDGE>
          ngrok edge
          
          [env: NGROK_EDGE=]

```
## TOKENIZER_CONFIG_PATH
```shell
      --tokenizer-config-path <TOKENIZER_CONFIG_PATH>
          The path to the tokenizer config file. This path is used to load the tokenizer configuration which may include a `chat_template`. If not provided, the default config will be used from the model hub
          
          [env: TOKENIZER_CONFIG_PATH=]

```
## DISABLE_GRAMMAR_SUPPORT
```shell
      --disable-grammar-support
          Disable outlines grammar constrained generation. This is a feature that allows you to generate text that follows a specific grammar
          
          [env: DISABLE_GRAMMAR_SUPPORT=]

```
## ENV
```shell
  -e, --env
          Display a lot of information about your runtime environment

```
## MAX_CLIENT_BATCH_SIZE
```shell
      --max-client-batch-size <MAX_CLIENT_BATCH_SIZE>
          Control the maximum number of inputs that a client can send in a single request
          
          [env: MAX_CLIENT_BATCH_SIZE=]
          [default: 4]

```
## LORA_ADAPTERS
```shell
      --lora-adapters <LORA_ADAPTERS>
          Lora Adapters: a list of adapter ids, i.e. `repo/adapter1,repo/adapter2`, to load during startup. They will be available to callers via the `adapter_id` field in a request
          
          [env: LORA_ADAPTERS=]

```
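For example, loading the two (placeholder) adapter ids from the description at startup; callers can then select one via the `adapter_id` field of a request:

```shell
text-generation-launcher --model-id bigscience/bloom-560m --lora-adapters repo/adapter1,repo/adapter2
```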
## USAGE_STATS
```shell
      --usage-stats <USAGE_STATS>
          Control if anonymous usage stats are collected. Options are "on", "off" and "no-stack". Default is "on"
          
          [env: USAGE_STATS=]
          [default: on]

          Possible values:
          - on:       Default option, usage statistics are collected anonymously
          - off:      Disables all collection of usage statistics
          - no-stack: Doesn't send the error stack trace or error type, but allows sending a crash event

```
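For example, to turn off usage-stats collection entirely, or to keep crash events but drop stack traces:

```shell
text-generation-launcher --model-id bigscience/bloom-560m --usage-stats off
# or:
text-generation-launcher --model-id bigscience/bloom-560m --usage-stats no-stack
```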
## HELP
```shell
  -h, --help
          Print help (see a summary with '-h')

```
## VERSION
```shell
  -V, --version
          Print version

```