# Text-generation-launcher arguments

<!-- WRAP CODE BLOCKS -->

```shell
Text Generation Launcher

Usage: text-generation-launcher [OPTIONS]

Options:
```
## MODEL_ID
```shell
      --model-id <MODEL_ID>
          The name of the model to load. Can be a MODEL_ID as listed on <https://hf.co/models> like `gpt2` or `OpenAssistant/oasst-sft-1-pythia-12b`. Or it can be a local directory containing the necessary files as saved by the `save_pretrained(...)` method of transformers
          
          [env: MODEL_ID=]
          [default: bigscience/bloom-560m]

```
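For example, either form works; the local path below is a hypothetical directory produced by `save_pretrained(...)`:

```shell
# Load a model by its Hub id
text-generation-launcher --model-id OpenAssistant/oasst-sft-1-pythia-12b

# Load a model from a local directory (hypothetical path)
text-generation-launcher --model-id /data/my-model
```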
## REVISION
```shell
      --revision <REVISION>
          The actual revision of the model if you're referring to a model on the hub. You can use a specific commit id or a branch like `refs/pr/2`
          
          [env: REVISION=]

```
## VALIDATION_WORKERS
```shell
      --validation-workers <VALIDATION_WORKERS>
          The number of tokenizer workers used for payload validation and truncation inside the router
          
          [env: VALIDATION_WORKERS=]
          [default: 2]

```
## SHARDED
```shell
      --sharded <SHARDED>
          Whether to shard the model across multiple GPUs. By default text-generation-inference will use all available GPUs to run the model. Setting it to `false` deactivates `num_shard`
          
          [env: SHARDED=]
          [possible values: true, false]

```
## NUM_SHARD
```shell
      --num-shard <NUM_SHARD>
          The number of shards to use if you don't want to use all GPUs on a given machine. You can use `CUDA_VISIBLE_DEVICES=0,1 text-generation-launcher... --num-shard 2` and `CUDA_VISIBLE_DEVICES=2,3 text-generation-launcher... --num-shard 2` to launch 2 copies with 2 shards each on a given machine with 4 GPUs for instance
          
          [env: NUM_SHARD=]

```
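Expanding the example from the description into runnable commands: two 2-shard copies on one 4-GPU machine. Each copy also needs its own `--port`, `--shard-uds-path` and `--master-port` so the two instances don't clash (the values below are illustrative):

```shell
# Copy 1: shards on GPUs 0 and 1
CUDA_VISIBLE_DEVICES=0,1 text-generation-launcher \
    --model-id bigscience/bloom-560m --num-shard 2 \
    --port 3000 --shard-uds-path /tmp/tgi-uds-0 --master-port 29500

# Copy 2: shards on GPUs 2 and 3
CUDA_VISIBLE_DEVICES=2,3 text-generation-launcher \
    --model-id bigscience/bloom-560m --num-shard 2 \
    --port 3001 --shard-uds-path /tmp/tgi-uds-1 --master-port 29501
```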
## QUANTIZE
```shell
      --quantize <QUANTIZE>
          Quantization method to use for the model. It is not necessary to specify this option for pre-quantized models, since the quantization method is read from the model configuration.
          
          Marlin kernels will be used automatically for GPTQ/AWQ models.
          
          [env: QUANTIZE=]

          Possible values:
          - awq:              4 bit quantization. Requires a specific AWQ quantized model: <https://hf.co/models?search=awq>. Should replace GPTQ models wherever possible because of the better latency
          - eetq:             8 bit quantization, doesn't require specific model. Should be a drop-in replacement to bitsandbytes with much better performance. Kernels are from <https://github.com/NetEase-FuXi/EETQ.git>
          - exl2:             Variable bit quantization. Requires a specific EXL2 quantized model: <https://hf.co/models?search=exl2>. Requires exllama2 kernels and does not support tensor parallelism (num_shard > 1)
          - gptq:             4 bit quantization. Requires a specific GPTQ quantized model: <https://hf.co/models?search=gptq>. text-generation-inference will use exllama (faster) kernels wherever possible, and use triton kernel (wider support) when it's not. AWQ has faster kernels
          - marlin:           4 bit quantization. Requires a specific Marlin quantized model: <https://hf.co/models?search=marlin>
          - bitsandbytes:     Bitsandbytes 8bit. Can be applied on any model, will cut the memory requirement in half, but it is known that the model will be much slower to run than the native f16
          - bitsandbytes-nf4: Bitsandbytes 4bit. Can be applied on any model, will cut the memory requirement by 4x, but it is known that the model will be much slower to run than the native f16
          - bitsandbytes-fp4: Bitsandbytes 4bit. nf4 should be preferred in most cases but maybe this one has better perplexity performance for your model
          - fp8:              [FP8](https://developer.nvidia.com/blog/nvidia-arm-and-intel-publish-fp8-specification-for-standardization-as-an-interchange-format-for-ai/) (e4m3) works on H100 and above. This dtype has native ops and should be the fastest if available, but it is currently not the fastest because of local unpacking + padding to satisfy matrix multiplication limitations

```
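For example, to quantize a model that is not pre-quantized at load time (pre-quantized GPTQ/AWQ/Marlin/EXL2 models need no flag, since the method is read from their configuration):

```shell
# On-the-fly 4-bit quantization with bitsandbytes-nf4
text-generation-launcher --model-id bigscience/bloom-560m --quantize bitsandbytes-nf4
```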
## SPECULATE
```shell
      --speculate <SPECULATE>
          The number of input_ids to speculate on. If using a medusa model, the heads will be picked up automatically. Otherwise, it will use n-gram speculation, which is relatively free in terms of compute but the speedup heavily depends on the task
          
          [env: SPECULATE=]

```
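A minimal sketch; the value `3` is illustrative, and with a model that has no medusa heads it falls back to n-gram speculation:

```shell
# Speculate 3 tokens per decoding step
text-generation-launcher --model-id bigscience/bloom-560m --speculate 3
```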
## DTYPE
```shell
      --dtype <DTYPE>
          The dtype to be forced upon the model. This option cannot be used with `--quantize`
          
          [env: DTYPE=]
          [possible values: float16, bfloat16]

```
## TRUST_REMOTE_CODE
```shell
      --trust-remote-code
          Whether you want to execute hub modelling code. Explicitly passing a `revision` is encouraged when loading a model with custom code to ensure no malicious code has been contributed in a newer revision
          
          [env: TRUST_REMOTE_CODE=]

```
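Following the advice above, a sketch of pinning an exact revision when trusting remote code; the model id and commit hash are hypothetical:

```shell
# Pin a commit so a newer revision cannot introduce unreviewed code
text-generation-launcher \
    --model-id some-org/model-with-custom-code \
    --revision 1a2b3c4d \
    --trust-remote-code
```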
## MAX_CONCURRENT_REQUESTS
```shell
      --max-concurrent-requests <MAX_CONCURRENT_REQUESTS>
          The maximum amount of concurrent requests for this particular deployment. Having a low limit will refuse client requests instead of having them wait for too long and is usually good to handle backpressure correctly
          
          [env: MAX_CONCURRENT_REQUESTS=]
          [default: 128]

```
## MAX_BEST_OF
```shell
      --max-best-of <MAX_BEST_OF>
          This is the maximum allowed value for clients to set `best_of`. Best of makes `n` generations at the same time, and returns the best in terms of overall log probability over the entire generated sequence
          
          [env: MAX_BEST_OF=]
          [default: 2]

```
## MAX_STOP_SEQUENCES
```shell
      --max-stop-sequences <MAX_STOP_SEQUENCES>
          This is the maximum allowed value for clients to set `stop_sequences`. Stop sequences are used to allow the model to stop on more than just the EOS token, and enable more complex "prompting" where users can preprompt the model in a specific way and define their "own" stop token aligned with their prompt
          
          [env: MAX_STOP_SEQUENCES=]
          [default: 4]

```
## MAX_TOP_N_TOKENS
```shell
      --max-top-n-tokens <MAX_TOP_N_TOKENS>
          This is the maximum allowed value for clients to set `top_n_tokens`. `top_n_tokens` is used to return information about the `n` most likely tokens at each generation step, instead of just the sampled token. This information can be used for downstream tasks like classification or ranking
          
          [env: MAX_TOP_N_TOKENS=]
          [default: 5]

```
## MAX_INPUT_TOKENS
```shell
      --max-input-tokens <MAX_INPUT_TOKENS>
          This is the maximum allowed input length (expressed in number of tokens) for users. The larger this value, the longer prompts users can send, which can impact the overall memory required to handle the load. Please note that some models have a finite range of sequences they can handle. Defaults to min(max_position_embeddings - 1, 4095)
          
          [env: MAX_INPUT_TOKENS=]

```
## MAX_INPUT_LENGTH
```shell
      --max-input-length <MAX_INPUT_LENGTH>
          Legacy version of [`Args::max_input_tokens`]
          
          [env: MAX_INPUT_LENGTH=]

```
## MAX_TOTAL_TOKENS
```shell
      --max-total-tokens <MAX_TOTAL_TOKENS>
          This is the most important value to set as it defines the "memory budget" of running client requests. Clients will send input sequences and ask to generate `max_new_tokens` on top. With a value of `1512`, users can send either a prompt of `1000` and ask for `512` new tokens, or send a prompt of `1` and ask for `1511` max_new_tokens. The larger this value, the larger each request will be in your RAM and the less effective batching can be. Defaults to min(max_position_embeddings, 4096)
          
          [env: MAX_TOTAL_TOKENS=]

```
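Using the numbers from the description, a configuration with a per-request budget of `1512` tokens (the model id is just the default, used for illustration):

```shell
# A 1000-token prompt can request up to 512 new tokens;
# a 1-token prompt can request up to 1511
text-generation-launcher --model-id bigscience/bloom-560m \
    --max-input-tokens 1000 --max-total-tokens 1512
```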
## WAITING_SERVED_RATIO
```shell
      --waiting-served-ratio <WAITING_SERVED_RATIO>
          This represents the ratio of waiting queries vs running queries where you want to start considering pausing the running queries to include the waiting ones into the same batch. `waiting_served_ratio=1.2` means when 12 queries are waiting and there are only 10 queries left in the current batch, we check if we can fit those 12 waiting queries into the batching strategy, and if yes, batching happens, delaying the 10 running queries by a `prefill` run.
          
          This setting is only applied if there is room in the batch as defined by `max_batch_total_tokens`.
          
          [env: WAITING_SERVED_RATIO=]
          [default: 0.3]

```
## MAX_BATCH_PREFILL_TOKENS
```shell
      --max-batch-prefill-tokens <MAX_BATCH_PREFILL_TOKENS>
          Limits the number of tokens for the prefill operation. Since this operation takes the most memory and is compute bound, it is useful to limit the number of requests that can be sent. Defaults to `max_input_tokens + 50` to give a bit of room
          
          [env: MAX_BATCH_PREFILL_TOKENS=]

```
## MAX_BATCH_TOTAL_TOKENS
```shell
      --max-batch-total-tokens <MAX_BATCH_TOTAL_TOKENS>
          **IMPORTANT** This is one critical control to allow maximum usage of the available hardware.
          
          This represents the total amount of potential tokens within a batch. When using padding (not recommended) this would be equivalent to `batch_size` * `max_total_tokens`.
          
          However in the non-padded (flash attention) version this can be much finer.
          
          For `max_batch_total_tokens=1000`, you could fit `10` queries of `total_tokens=100` or a single query of `1000` tokens.
          
          Overall this number should be the largest possible amount that fits the remaining memory (after the model is loaded). Since the actual memory overhead depends on other parameters like if you're using quantization, flash attention or the model implementation, text-generation-inference cannot infer this number automatically.
          
          [env: MAX_BATCH_TOTAL_TOKENS=]

```
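For instance, with the (deliberately small) budget from the description:

```shell
# A 1000-token batch budget fits e.g. 10 requests of total_tokens=100,
# or a single request of 1000 tokens
text-generation-launcher --model-id bigscience/bloom-560m \
    --max-batch-total-tokens 1000
```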
## MAX_WAITING_TOKENS
```shell
      --max-waiting-tokens <MAX_WAITING_TOKENS>
          This setting defines how many tokens can be passed before forcing the waiting queries to be put on the batch (if the size of the batch allows for it). New queries require 1 `prefill` forward, which is different from `decode` and therefore you need to pause the running batch in order to run `prefill` to create the correct values for the waiting queries to be able to join the batch.
          
          With a value too small, queries will always "steal" the compute to run `prefill` and running queries will be delayed by a lot.
          
          With a value too big, waiting queries could wait for a very long time before being allowed a slot in the running batch. If your server is busy that means that requests that could run in ~2s on an empty server could end up running in ~20s because the query had to wait for 18s.
          
          This number is expressed in number of tokens to make it a bit more "model" agnostic, but what should really matter is the overall latency for end users.
          
          [env: MAX_WAITING_TOKENS=]
          [default: 20]

```
## MAX_BATCH_SIZE
```shell
      --max-batch-size <MAX_BATCH_SIZE>
          Enforce a maximum number of requests per batch. Specific flag for hardware targets that do not support unpadded inference
          
          [env: MAX_BATCH_SIZE=]

```
## CUDA_GRAPHS
```shell
      --cuda-graphs <CUDA_GRAPHS>
          Specify the batch sizes to compute cuda graphs for. Use "0" to disable. Default = "1,2,4,8,16,32"
          
          [env: CUDA_GRAPHS=]

```
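For example, to capture graphs only for small batch sizes, or to disable the feature:

```shell
# Capture CUDA graphs for batch sizes 1, 2 and 4 only
text-generation-launcher --model-id bigscience/bloom-560m --cuda-graphs 1,2,4

# Disable CUDA graphs entirely
text-generation-launcher --model-id bigscience/bloom-560m --cuda-graphs 0
```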
## HOSTNAME
```shell
      --hostname <HOSTNAME>
          The IP address to listen on
          
          [env: HOSTNAME=]
          [default: 0.0.0.0]

```
## PORT
```shell
  -p, --port <PORT>
          The port to listen on
          
          [env: PORT=]
          [default: 3000]

```
## SHARD_UDS_PATH
```shell
      --shard-uds-path <SHARD_UDS_PATH>
          The name of the socket for gRPC communication between the webserver and the shards
          
          [env: SHARD_UDS_PATH=]
          [default: /tmp/text-generation-server]

```
## MASTER_ADDR
```shell
      --master-addr <MASTER_ADDR>
          The address the master shard will listen on. (setting used by torch distributed)
          
          [env: MASTER_ADDR=]
          [default: localhost]

```
## MASTER_PORT
```shell
      --master-port <MASTER_PORT>
          The port the master shard will listen on. (setting used by torch distributed)
          
          [env: MASTER_PORT=]
          [default: 29500]

```
## HUGGINGFACE_HUB_CACHE
```shell
      --huggingface-hub-cache <HUGGINGFACE_HUB_CACHE>
          The location of the huggingface hub cache. Used to override the location if you want to provide a mounted disk for instance
          
          [env: HUGGINGFACE_HUB_CACHE=]

```
## WEIGHTS_CACHE_OVERRIDE
```shell
      --weights-cache-override <WEIGHTS_CACHE_OVERRIDE>
          The location of the model weights cache. Used to override the location if you want to provide a mounted disk for instance
          
          [env: WEIGHTS_CACHE_OVERRIDE=]

```
## DISABLE_CUSTOM_KERNELS
```shell
      --disable-custom-kernels
          For some models (like bloom), text-generation-inference implemented custom cuda kernels to speed up inference. Those kernels were only tested on A100. Use this flag to disable them if you're running on different hardware and encounter issues
          
          [env: DISABLE_CUSTOM_KERNELS=]

```
## CUDA_MEMORY_FRACTION
```shell
      --cuda-memory-fraction <CUDA_MEMORY_FRACTION>
          Limit the available CUDA memory. The allowed value equals the total visible memory multiplied by cuda-memory-fraction
          
          [env: CUDA_MEMORY_FRACTION=]
          [default: 1.0]

```
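For example, to cap the server at half of each visible GPU's memory, e.g. when sharing the GPU with another process:

```shell
# Use at most 50% of the total visible GPU memory
text-generation-launcher --model-id bigscience/bloom-560m --cuda-memory-fraction 0.5
```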
## ROPE_SCALING
```shell
      --rope-scaling <ROPE_SCALING>
          Rope scaling will only be used for RoPE models and allows rescaling the rotary position embeddings to accommodate larger prompts.
          
          Goes together with `rope_factor`.
          
          `--rope-factor 2.0` gives linear scaling with a factor of 2.0. `--rope-scaling dynamic` gives dynamic scaling with a factor of 1.0. `--rope-scaling linear` gives linear scaling with a factor of 1.0 (basically nothing changes)
          
          `--rope-scaling linear --rope-factor <factor>` fully describes the scaling you want
          
          [env: ROPE_SCALING=]
          [possible values: linear, dynamic]

```
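For example, to extend the context of a RoPE model with linear scaling (the model id below is a hypothetical RoPE-based model):

```shell
# Linear RoPE scaling with a factor of 2.0
text-generation-launcher --model-id my-org/rope-model \
    --rope-scaling linear --rope-factor 2.0
```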
## ROPE_FACTOR
```shell
      --rope-factor <ROPE_FACTOR>
          Rope scaling will only be used for RoPE models. See `rope_scaling`
          
          [env: ROPE_FACTOR=]

```
## JSON_OUTPUT
```shell
      --json-output
          Outputs the logs in JSON format (useful for telemetry)
          
          [env: JSON_OUTPUT=]

```
## OTLP_ENDPOINT
```shell
      --otlp-endpoint <OTLP_ENDPOINT>
          [env: OTLP_ENDPOINT=]

```
## OTLP_SERVICE_NAME
```shell
      --otlp-service-name <OTLP_SERVICE_NAME>
          [env: OTLP_SERVICE_NAME=]
          [default: text-generation-inference.router]

```
## CORS_ALLOW_ORIGIN
```shell
      --cors-allow-origin <CORS_ALLOW_ORIGIN>
          [env: CORS_ALLOW_ORIGIN=]

```
## API_KEY
```shell
      --api-key <API_KEY>
          [env: API_KEY=]

```
## WATERMARK_GAMMA
```shell
      --watermark-gamma <WATERMARK_GAMMA>
          [env: WATERMARK_GAMMA=]

```
## WATERMARK_DELTA
```shell
      --watermark-delta <WATERMARK_DELTA>
          [env: WATERMARK_DELTA=]

```
## NGROK
```shell
      --ngrok
          Enable ngrok tunneling
          
          [env: NGROK=]

```
## NGROK_AUTHTOKEN
```shell
      --ngrok-authtoken <NGROK_AUTHTOKEN>
          ngrok authentication token
          
          [env: NGROK_AUTHTOKEN=]

```
## NGROK_EDGE
```shell
      --ngrok-edge <NGROK_EDGE>
          ngrok edge
          
          [env: NGROK_EDGE=]

```
## TOKENIZER_CONFIG_PATH
```shell
      --tokenizer-config-path <TOKENIZER_CONFIG_PATH>
          The path to the tokenizer config file. This path is used to load the tokenizer configuration which may include a `chat_template`. If not provided, the default config will be used from the model hub
          
          [env: TOKENIZER_CONFIG_PATH=]

```
## DISABLE_GRAMMAR_SUPPORT
```shell
      --disable-grammar-support
          Disable outlines grammar constrained generation. This is a feature that allows you to generate text that follows a specific grammar
          
          [env: DISABLE_GRAMMAR_SUPPORT=]

```
## ENV
```shell
  -e, --env
          Display a lot of information about your runtime environment

```
## MAX_CLIENT_BATCH_SIZE
```shell
      --max-client-batch-size <MAX_CLIENT_BATCH_SIZE>
          Control the maximum number of inputs that a client can send in a single request
          
          [env: MAX_CLIENT_BATCH_SIZE=]
          [default: 4]

```
## LORA_ADAPTERS
```shell
      --lora-adapters <LORA_ADAPTERS>
          A comma-separated list of LoRA adapter ids, i.e. `repo/adapter1,repo/adapter2`, to load during startup. They will be available to callers via the `adapter_id` field in a request
          
          [env: LORA_ADAPTERS=]

```
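Reusing the (hypothetical) adapter ids from the description; adapters are loaded once at startup and selected per request through the `adapter_id` field:

```shell
# Load two LoRA adapters at startup
text-generation-launcher --model-id bigscience/bloom-560m \
    --lora-adapters repo/adapter1,repo/adapter2
```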
## USAGE_STATS
```shell
      --usage-stats <USAGE_STATS>
          Control if anonymous usage stats are collected. Options are "on", "off" and "no-stack". Default is on
          
          [env: USAGE_STATS=]
          [default: on]

          Possible values:
          - on:       Default option, usage statistics are collected anonymously
          - off:      Disables all collection of usage statistics
          - no-stack: Doesn't send the error stack trace or error type, but allows sending a crash event

```
## HELP
```shell
  -h, --help
          Print help (see a summary with '-h')

```
## VERSION
```shell
  -V, --version
          Print version

```