# Text-generation-launcher arguments

<!-- WRAP CODE BLOCKS -->

```shell
Text Generation Launcher

Usage: text-generation-launcher [OPTIONS]

Options:
```
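
For illustration, a minimal launch that simply makes the documented defaults explicit (the model id and port below are those defaults, not recommendations) might look like this:

```shell
# Minimal sketch: serve the default model on the default port.
text-generation-launcher \
    --model-id bigscience/bloom-560m \
    --port 3000
```
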
## MODEL_ID
```shell
      --model-id <MODEL_ID>
          The name of the model to load. Can be a MODEL_ID as listed on <https://hf.co/models> like `gpt2` or `OpenAssistant/oasst-sft-1-pythia-12b`. Or it can be a local directory containing the necessary files as saved by the `save_pretrained(...)` method of transformers
          
          [env: MODEL_ID=]
          [default: bigscience/bloom-560m]

```
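
As a sketch, the launcher can also be pointed at a local directory produced by `save_pretrained(...)`; the path below is a hypothetical placeholder:

```shell
# Hypothetical local directory containing config, tokenizer and weight files.
text-generation-launcher --model-id /data/my-local-model
```
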
## REVISION
```shell
      --revision <REVISION>
          The actual revision of the model if you're referring to a model on the hub. You can use a specific commit id or a branch like `refs/pr/2`
          
          [env: REVISION=]

```
## VALIDATION_WORKERS
```shell
      --validation-workers <VALIDATION_WORKERS>
          The number of tokenizer workers used for payload validation and truncation inside the router
          
          [env: VALIDATION_WORKERS=]
          [default: 2]

```
## SHARDED
```shell
      --sharded <SHARDED>
          Whether to shard the model across multiple GPUs. By default text-generation-inference will use all available GPUs to run the model. Setting it to `false` deactivates `num_shard`
          
          [env: SHARDED=]
          [possible values: true, false]

```
## NUM_SHARD
```shell
      --num-shard <NUM_SHARD>
          The number of shards to use if you don't want to use all GPUs on a given machine. For instance, on a machine with 4 GPUs, you can use `CUDA_VISIBLE_DEVICES=0,1 text-generation-launcher... --num_shard 2` and `CUDA_VISIBLE_DEVICES=2,3 text-generation-launcher... --num_shard 2` to launch 2 copies with 2 shards each
          
          [env: NUM_SHARD=]

```
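
To make the example above concrete, the following sketch starts two independent 2-shard copies on a hypothetical 4-GPU machine; distinct ports and shard socket paths are assumed so the two launchers do not collide:

```shell
# Copy 1 on GPUs 0 and 1 (port and UDS path are illustrative choices).
CUDA_VISIBLE_DEVICES=0,1 text-generation-launcher \
    --model-id bigscience/bloom-560m --num-shard 2 \
    --port 3000 --shard-uds-path /tmp/tgi-server-0

# Copy 2 on GPUs 2 and 3.
CUDA_VISIBLE_DEVICES=2,3 text-generation-launcher \
    --model-id bigscience/bloom-560m --num-shard 2 \
    --port 3001 --shard-uds-path /tmp/tgi-server-1
```
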
## QUANTIZE
```shell
      --quantize <QUANTIZE>
          Whether you want the model to be quantized
          
          [env: QUANTIZE=]

          Possible values:
          - awq:              4 bit quantization. Requires a specific AWQ quantized model: https://hf.co/models?search=awq. Should replace GPTQ models wherever possible because of the better latency
          - eetq:             8 bit quantization, doesn't require specific model. Should be a drop-in replacement to bitsandbytes with much better performance. Kernels are from https://github.com/NetEase-FuXi/EETQ.git
          - gptq:             4 bit quantization. Requires a specific GPTQ quantized model: https://hf.co/models?search=gptq. text-generation-inference will use exllama (faster) kernels wherever possible, and use triton kernels (wider support) when it's not. AWQ has faster kernels
          - bitsandbytes:     Bitsandbytes 8bit. Can be applied on any model, will cut the memory requirement in half, but it is known that the model will be much slower to run than the native f16
          - bitsandbytes-nf4: Bitsandbytes 4bit. Can be applied on any model, will cut the memory requirement by 4x, but it is known that the model will be much slower to run than the native f16
          - bitsandbytes-fp4: Bitsandbytes 4bit. nf4 should be preferred in most cases but maybe this one has better perplexity performance for your model

```
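
For instance, bitsandbytes quantization can be applied on the fly to any model, while `awq` and `gptq` expect a checkpoint already quantized in that format (the AWQ model id below is a placeholder to replace with a real one from the Hub):

```shell
# On-the-fly 4-bit quantization of an unquantized checkpoint.
text-generation-launcher --model-id bigscience/bloom-560m --quantize bitsandbytes-nf4

# Pre-quantized AWQ checkpoint (placeholder model id).
text-generation-launcher --model-id <org>/<awq-quantized-model> --quantize awq
```
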
## SPECULATE
```shell
      --speculate <SPECULATE>
          The number of input_ids to speculate on. If using a Medusa model, the heads will be picked up automatically. Otherwise, it will use n-gram speculation, which is relatively free in terms of compute, but the speedup heavily depends on the task
          
          [env: SPECULATE=]

```
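
As a small sketch, enabling speculation with 3 tokens per step on a plain (non-Medusa) model falls back to n-gram speculation:

```shell
# Speculate 3 input_ids per decoding step.
text-generation-launcher --model-id bigscience/bloom-560m --speculate 3
```
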
## DTYPE
```shell
      --dtype <DTYPE>
          The dtype to be forced upon the model. This option cannot be used with `--quantize`
          
          [env: DTYPE=]
          [possible values: float16, bfloat16]

```
## TRUST_REMOTE_CODE
```shell
      --trust-remote-code
          Whether you want to execute hub modelling code. Explicitly passing a `revision` is encouraged when loading a model with custom code to ensure no malicious code has been contributed in a newer revision
          
          [env: TRUST_REMOTE_CODE=]

```
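
Following the advice above, a launch that trusts remote code while pinning the revision might look like this; the model id and commit sha are placeholders for values you have audited:

```shell
# Pin the revision you reviewed before allowing custom modelling code to run.
text-generation-launcher \
    --model-id <model-with-custom-code> \
    --revision <audited-commit-sha> \
    --trust-remote-code
```
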
## MAX_CONCURRENT_REQUESTS
```shell
      --max-concurrent-requests <MAX_CONCURRENT_REQUESTS>
          The maximum amount of concurrent requests for this particular deployment. Having a low limit will refuse client requests instead of having them wait for too long, and is usually good to handle backpressure correctly
          
          [env: MAX_CONCURRENT_REQUESTS=]
          [default: 128]

```
## MAX_BEST_OF
```shell
      --max-best-of <MAX_BEST_OF>
          This is the maximum allowed value for clients to set `best_of`. Best of makes `n` generations at the same time, and returns the best in terms of overall log probability over the entire generated sequence
          
          [env: MAX_BEST_OF=]
          [default: 2]

```
## MAX_STOP_SEQUENCES
```shell
      --max-stop-sequences <MAX_STOP_SEQUENCES>
          This is the maximum allowed value for clients to set `stop_sequences`. Stop sequences are used to allow the model to stop on more than just the EOS token, and enable more complex "prompting" where users can preprompt the model in a specific way and define their "own" stop token aligned with their prompt
          
          [env: MAX_STOP_SEQUENCES=]
          [default: 4]

```
## MAX_TOP_N_TOKENS
```shell
      --max-top-n-tokens <MAX_TOP_N_TOKENS>
          This is the maximum allowed value for clients to set `top_n_tokens`. `top_n_tokens` is used to return information about the `n` most likely tokens at each generation step, instead of just the sampled token. This information can be used for downstream tasks such as classification or ranking
          
          [env: MAX_TOP_N_TOKENS=]
          [default: 5]

```
## MAX_INPUT_LENGTH
```shell
      --max-input-length <MAX_INPUT_LENGTH>
          This is the maximum allowed input length (expressed in number of tokens) for users. The larger this value, the longer prompts users can send, which can impact the overall memory required to handle the load. Please note that some models have a finite range of sequence lengths they can handle
          
          [env: MAX_INPUT_LENGTH=]
          [default: 1024]

```
## MAX_TOTAL_TOKENS
```shell
      --max-total-tokens <MAX_TOTAL_TOKENS>
          This is the most important value to set as it defines the "memory budget" of running client requests. Clients will send input sequences and ask to generate `max_new_tokens` on top. With a value of `1512`, users can send either a prompt of `1000` and ask for `512` new tokens, or send a prompt of `1` and ask for `1511` max_new_tokens. The larger this value, the larger each request will be in your RAM and the less effective batching can be
          
          [env: MAX_TOTAL_TOKENS=]
          [default: 2048]

```
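
To illustrate the budget, the settings below (simply the documented defaults written out) allow prompts of up to 1024 tokens and cap prompt plus generated tokens at 2048 per request, so a 1000-token prompt can ask for at most 1048 new tokens:

```shell
text-generation-launcher \
    --model-id bigscience/bloom-560m \
    --max-input-length 1024 \
    --max-total-tokens 2048
```
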
## WAITING_SERVED_RATIO
```shell
      --waiting-served-ratio <WAITING_SERVED_RATIO>
          This represents the ratio of waiting queries vs running queries where you want to start considering pausing the running queries to include the waiting ones into the same batch. `waiting_served_ratio=1.2` means when 12 queries are waiting and there's only 10 queries left in the current batch, we check if we can fit those 12 waiting queries into the batching strategy, and if yes, then batching happens, delaying the 10 running queries by a `prefill` run.
          
          This setting is only applied if there is room in the batch as defined by `max_batch_total_tokens`.
          
          [env: WAITING_SERVED_RATIO=]
          [default: 1.2]

```
## MAX_BATCH_PREFILL_TOKENS
```shell
      --max-batch-prefill-tokens <MAX_BATCH_PREFILL_TOKENS>
          Limits the number of tokens for the prefill operation. Since this operation takes the most memory and is compute bound, it is useful to limit the number of requests that can be sent
          
          [env: MAX_BATCH_PREFILL_TOKENS=]
          [default: 4096]

```
## MAX_BATCH_TOTAL_TOKENS
```shell
      --max-batch-total-tokens <MAX_BATCH_TOTAL_TOKENS>
          **IMPORTANT** This is one critical control to allow maximum usage of the available hardware.
          
          This represents the total amount of potential tokens within a batch. When using padding (not recommended) this would be equivalent to `batch_size` * `max_total_tokens`.
          
          However in the non-padded (flash attention) version this can be much finer.
          
          For `max_batch_total_tokens=1000`, you could fit `10` queries of `total_tokens=100` or a single query of `1000` tokens.
          
          Overall this number should be the largest possible amount that fits the remaining memory (after the model is loaded). Since the actual memory overhead depends on other parameters like if you're using quantization, flash attention or the model implementation, text-generation-inference cannot infer this number automatically.
          
          [env: MAX_BATCH_TOTAL_TOKENS=]

```
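
As a rough sizing sketch, suppose profiling on your hardware shows roughly 32k tokens of KV-cache headroom once the model is loaded (a made-up figure). You could then cap the batch budget accordingly; with `max_total_tokens=2048` that budget fits about 15 full-length requests at once (32000 / 2048 ≈ 15):

```shell
# Hypothetical budgets for this particular hardware and model.
text-generation-launcher \
    --model-id bigscience/bloom-560m \
    --max-batch-prefill-tokens 4096 \
    --max-batch-total-tokens 32000
```
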
## MAX_WAITING_TOKENS
```shell
      --max-waiting-tokens <MAX_WAITING_TOKENS>
          This setting defines how many tokens can be passed before forcing the waiting queries to be put on the batch (if the size of the batch allows for it). New queries require 1 `prefill` forward, which is different from `decode` and therefore you need to pause the running batch in order to run `prefill` to create the correct values for the waiting queries to be able to join the batch.
          
          With a value too small, queries will always "steal" the compute to run `prefill` and running queries will be delayed by a lot.
          
          With a value too big, waiting queries could wait for a very long time before being allowed a slot in the running batch. If your server is busy, that means requests that could run in ~2s on an empty server could end up running in ~20s because the query had to wait for 18s.
          
          This number is expressed in number of tokens to make it a bit more "model" agnostic, but what should really matter is the overall latency for end users.
          
          [env: MAX_WAITING_TOKENS=]
          [default: 20]

```
## MAX_BATCH_SIZE
```shell
      --max-batch-size <MAX_BATCH_SIZE>
          Enforce a maximum number of requests per batch. Specific flag for hardware targets that do not support unpadded inference
          
          [env: MAX_BATCH_SIZE=]

```
## CUDA_GRAPHS
```shell
      --cuda-graphs <CUDA_GRAPHS>
          Specify the batch sizes to compute CUDA graphs for. Use "0" to disable
          
          [env: CUDA_GRAPHS=]
          [default: 1,2,4,8,16,32,64,96,128]

```
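
For example, CUDA graph capture can be disabled entirely or restricted to a few small batch sizes:

```shell
# Disable CUDA graphs.
text-generation-launcher --model-id bigscience/bloom-560m --cuda-graphs 0

# Only capture graphs for batch sizes 1, 2 and 4.
text-generation-launcher --model-id bigscience/bloom-560m --cuda-graphs 1,2,4
```
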
## HOSTNAME
```shell
      --hostname <HOSTNAME>
          The IP address to listen on
          
          [env: HOSTNAME=]
          [default: 0.0.0.0]

```
## PORT
```shell
  -p, --port <PORT>
          The port to listen on
          
          [env: PORT=]
          [default: 3000]

```
## SHARD_UDS_PATH
```shell
      --shard-uds-path <SHARD_UDS_PATH>
          The name of the socket for gRPC communication between the webserver and the shards
          
          [env: SHARD_UDS_PATH=]
          [default: /tmp/text-generation-server]

```
## MASTER_ADDR
```shell
      --master-addr <MASTER_ADDR>
          The address the master shard will listen on. (setting used by torch distributed)
          
          [env: MASTER_ADDR=]
          [default: localhost]

```
## MASTER_PORT
```shell
      --master-port <MASTER_PORT>
          The port the master shard will listen on. (setting used by torch distributed)
          
          [env: MASTER_PORT=]
          [default: 29500]

```
## HUGGINGFACE_HUB_CACHE
```shell
      --huggingface-hub-cache <HUGGINGFACE_HUB_CACHE>
          The location of the huggingface hub cache. Used to override the location if you want to provide a mounted disk for instance
          
          [env: HUGGINGFACE_HUB_CACHE=]

```
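
A common pattern is to point the cache at a mounted data disk via the environment variable; the mount point below is a placeholder:

```shell
# Store downloaded weights on a mounted disk instead of the default cache.
HUGGINGFACE_HUB_CACHE=/data/hf-cache text-generation-launcher \
    --model-id bigscience/bloom-560m
```
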
## WEIGHTS_CACHE_OVERRIDE
```shell
      --weights-cache-override <WEIGHTS_CACHE_OVERRIDE>
          The location of the model weights cache. Used to override the default location if you want to provide a mounted disk for instance
          
          [env: WEIGHTS_CACHE_OVERRIDE=]

```
## DISABLE_CUSTOM_KERNELS
```shell
      --disable-custom-kernels
          For some models (like bloom), text-generation-inference implemented custom CUDA kernels to speed up inference. Those kernels were only tested on A100. Use this flag to disable them if you're running on different hardware and encounter issues
          
          [env: DISABLE_CUSTOM_KERNELS=]

```
## CUDA_MEMORY_FRACTION
```shell
      --cuda-memory-fraction <CUDA_MEMORY_FRACTION>
          Limit the CUDA available memory. The allowed value equals the total visible memory multiplied by cuda-memory-fraction
          
          [env: CUDA_MEMORY_FRACTION=]
          [default: 1.0]

```
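
As a sketch, two launcher instances sharing the same GPU could each be limited to half of the visible memory; the ports and socket paths are illustrative:

```shell
# First instance uses at most 50% of the visible GPU memory.
text-generation-launcher --model-id bigscience/bloom-560m \
    --cuda-memory-fraction 0.5 --port 3000 --shard-uds-path /tmp/tgi-a

# Second instance gets the other half.
text-generation-launcher --model-id gpt2 \
    --cuda-memory-fraction 0.5 --port 3001 --shard-uds-path /tmp/tgi-b
```
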
## ROPE_SCALING
```shell
      --rope-scaling <ROPE_SCALING>
          Rope scaling will only be used for RoPE models and allows rescaling the position rotary embeddings to accommodate larger prompts.
          
          Goes together with `rope_factor`.
          
          `--rope-factor 2.0` gives linear scaling with a factor of 2.0.
          `--rope-scaling dynamic` gives dynamic scaling with a factor of 1.0.
          `--rope-scaling linear` gives linear scaling with a factor of 1.0 (nothing will be changed, basically).
          
          `--rope-scaling linear --rope-factor` fully describes the scaling you want
          
          [env: ROPE_SCALING=]
          [possible values: linear, dynamic]

```
## ROPE_FACTOR
```shell
      --rope-factor <ROPE_FACTOR>
          Rope scaling will only be used for RoPE models. See `rope_scaling`
          
          [env: ROPE_FACTOR=]

```
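
Putting the two flags together, a hypothetical launch that stretches the usable context of a RoPE-based model with linear scaling and a factor of 2.0 would be:

```shell
# Linear RoPE scaling with a factor of 2.0 (only affects RoPE models).
text-generation-launcher \
    --model-id <rope-based-model> \
    --rope-scaling linear \
    --rope-factor 2.0
```
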
## JSON_OUTPUT
```shell
      --json-output
          Outputs the logs in JSON format (useful for telemetry)
          
          [env: JSON_OUTPUT=]

```
## OTLP_ENDPOINT
```shell
      --otlp-endpoint <OTLP_ENDPOINT>
          [env: OTLP_ENDPOINT=]

```
## CORS_ALLOW_ORIGIN
```shell
      --cors-allow-origin <CORS_ALLOW_ORIGIN>
          [env: CORS_ALLOW_ORIGIN=]

```
## WATERMARK_GAMMA
```shell
      --watermark-gamma <WATERMARK_GAMMA>
          [env: WATERMARK_GAMMA=]

```
## WATERMARK_DELTA
```shell
      --watermark-delta <WATERMARK_DELTA>
          [env: WATERMARK_DELTA=]

```
## NGROK
```shell
      --ngrok
          Enable ngrok tunneling
          
          [env: NGROK=]

```
## NGROK_AUTHTOKEN
```shell
      --ngrok-authtoken <NGROK_AUTHTOKEN>
          ngrok authentication token
          
          [env: NGROK_AUTHTOKEN=]

```
## NGROK_EDGE
```shell
      --ngrok-edge <NGROK_EDGE>
          ngrok edge
          
          [env: NGROK_EDGE=]

```
## TOKENIZER_CONFIG_PATH
```shell
      --tokenizer-config-path <TOKENIZER_CONFIG_PATH>
          The path to the tokenizer config file. This path is used to load the tokenizer configuration which may include a `chat_template`. If not provided, the default config will be used from the model hub
          
          [env: TOKENIZER_CONFIG_PATH=]

```
## DISABLE_GRAMMAR_SUPPORT
```shell
      --disable-grammar-support
          Disable outlines grammar constrained generation. This is a feature that allows you to generate text that follows a specific grammar
          
          [env: DISABLE_GRAMMAR_SUPPORT=]

```
## ENV
```shell
  -e, --env
          Display a lot of information about your runtime environment

```
## HELP
```shell
  -h, --help
          Print help (see a summary with '-h')

```
## VERSION
```shell
  -V, --version
          Print version

```