# Text-generation-launcher arguments

<!-- WRAP CODE BLOCKS -->

```shell
Text Generation Launcher

Usage: text-generation-launcher [OPTIONS]

Options:
```
## MODEL_ID
```shell
      --model-id <MODEL_ID>
          The name of the model to load. Can be a MODEL_ID as listed on <https://hf.co/models> like `gpt2` or `OpenAssistant/oasst-sft-1-pythia-12b`. Or it can be a local directory containing the necessary files as saved by the `save_pretrained(...)` method of transformers
          
          [env: MODEL_ID=]
          [default: bigscience/bloom-560m]

```
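
For example, a minimal sketch of both forms (the local path is illustrative):

```shell
# Serve a model straight from the Hub
text-generation-launcher --model-id OpenAssistant/oasst-sft-1-pythia-12b

# Serve weights saved locally with `save_pretrained(...)`
text-generation-launcher --model-id /data/my-model
```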
## REVISION
```shell
      --revision <REVISION>
          The actual revision of the model if you're referring to a model on the hub. You can use a specific commit id or a branch like `refs/pr/2`
          
          [env: REVISION=]

```
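
Note that every option can also be set through the environment variable listed in its `[env: ...]` line; for example, the following is equivalent to passing `--model-id` and `--revision` on the command line:

```shell
# Pull the model at the PR branch mentioned above
MODEL_ID=OpenAssistant/oasst-sft-1-pythia-12b REVISION=refs/pr/2 text-generation-launcher
```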
## VALIDATION_WORKERS
```shell
      --validation-workers <VALIDATION_WORKERS>
          The number of tokenizer workers used for payload validation and truncation inside the router
          
          [env: VALIDATION_WORKERS=]
          [default: 2]

```
## SHARDED
```shell
      --sharded <SHARDED>
          Whether to shard the model across multiple GPUs. By default text-generation-inference will use all available GPUs to run the model. Setting it to `false` deactivates `num_shard`
          
          [env: SHARDED=]
          [possible values: true, false]

```
## NUM_SHARD
```shell
      --num-shard <NUM_SHARD>
          The number of shards to use if you don't want to use all GPUs on a given machine. For instance, on a machine with 4 GPUs you can use `CUDA_VISIBLE_DEVICES=0,1 text-generation-launcher... --num-shard 2` and `CUDA_VISIBLE_DEVICES=2,3 text-generation-launcher... --num-shard 2` to launch 2 copies with 2 shards each
          
          [env: NUM_SHARD=]

```
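
As a concrete sketch of the 4-GPU example above; giving each copy its own port, gRPC socket and torch-distributed port is an assumption made here to keep the two copies from colliding:

```shell
# Copy 1: shards across GPUs 0 and 1
CUDA_VISIBLE_DEVICES=0,1 text-generation-launcher --model-id bigscience/bloom-560m \
    --num-shard 2 --port 3000 --shard-uds-path /tmp/tgi-copy-1 --master-port 29500 &

# Copy 2: shards across GPUs 2 and 3
CUDA_VISIBLE_DEVICES=2,3 text-generation-launcher --model-id bigscience/bloom-560m \
    --num-shard 2 --port 3001 --shard-uds-path /tmp/tgi-copy-2 --master-port 29501 &
```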
## QUANTIZE
```shell
      --quantize <QUANTIZE>
          Whether you want the model to be quantized
          
          [env: QUANTIZE=]

          Possible values:
          - awq:              4 bit quantization. Requires a specific AWQ quantized model: https://hf.co/models?search=awq. Should replace GPTQ models wherever possible because of the better latency
          - eetq:             8 bit quantization, doesn't require a specific model. Should be a drop-in replacement to bitsandbytes with much better performance. Kernels are from https://github.com/NetEase-FuXi/EETQ.git
          - gptq:             4 bit quantization. Requires a specific GPTQ quantized model: https://hf.co/models?search=gptq. text-generation-inference will use exllama (faster) kernels wherever possible, and use the triton kernel (wider support) when it's not. AWQ has faster kernels
          - bitsandbytes:     Bitsandbytes 8bit. Can be applied on any model, will cut the memory requirement in half, but it is known that the model will be much slower to run than the native f16
          - bitsandbytes-nf4: Bitsandbytes 4bit. Can be applied on any model, will cut the memory requirement by 4x, but it is known that the model will be much slower to run than the native f16
          - bitsandbytes-fp4: Bitsandbytes 4bit. nf4 should be preferred in most cases but maybe this one has better perplexity performance for your model

```
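
For instance, quantizing an arbitrary model on the fly, or loading a checkpoint that was already quantized (the GPTQ model id is a placeholder):

```shell
# On-the-fly 4-bit quantization of any model
text-generation-launcher --model-id bigscience/bloom-560m --quantize bitsandbytes-nf4

# A pre-quantized checkpoint, e.g. one from https://hf.co/models?search=gptq
text-generation-launcher --model-id <gptq-model-id> --quantize gptq
```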
## DTYPE
```shell
      --dtype <DTYPE>
          The dtype to be forced upon the model. This option cannot be used with `--quantize`
          
          [env: DTYPE=]
          [possible values: float16, bfloat16]

```
## TRUST_REMOTE_CODE
```shell
      --trust-remote-code
          Whether you want to execute hub modelling code. Explicitly passing a `revision` is encouraged when loading a model with custom code to ensure no malicious code has been contributed in a newer revision
          
          [env: TRUST_REMOTE_CODE=]

```
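
Following the advice above, a sketch that pins the audited commit (both the model id and the commit sha are placeholders):

```shell
# Pin the exact revision you reviewed so a later push can't change what runs
text-generation-launcher --model-id <model-with-custom-code> \
    --trust-remote-code --revision <audited-commit-sha>
```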
## MAX_CONCURRENT_REQUESTS
```shell
      --max-concurrent-requests <MAX_CONCURRENT_REQUESTS>
          The maximum number of concurrent requests for this particular deployment. Having a low limit will refuse client requests instead of having them wait for too long, and is usually good for handling backpressure correctly
          
          [env: MAX_CONCURRENT_REQUESTS=]
          [default: 128]

```
## MAX_BEST_OF
```shell
      --max-best-of <MAX_BEST_OF>
          This is the maximum allowed value for clients to set `best_of`. Best of makes `n` generations at the same time, and returns the best in terms of overall log probability over the entire generated sequence
          
          [env: MAX_BEST_OF=]
          [default: 2]

```
## MAX_STOP_SEQUENCES
```shell
      --max-stop-sequences <MAX_STOP_SEQUENCES>
          This is the maximum allowed value for clients to set `stop_sequences`. Stop sequences are used to allow the model to stop on more than just the EOS token, and enable more complex "prompting" where users can preprompt the model in a specific way and define their "own" stop token aligned with their prompt
          
          [env: MAX_STOP_SEQUENCES=]
          [default: 4]

```
## MAX_TOP_N_TOKENS
```shell
      --max-top-n-tokens <MAX_TOP_N_TOKENS>
          This is the maximum allowed value for clients to set `top_n_tokens`. `top_n_tokens` is used to return information about the `n` most likely tokens at each generation step, instead of just the sampled token. This information can be used for downstream tasks like classification or ranking
          
          [env: MAX_TOP_N_TOKENS=]
          [default: 5]

```
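
A sketch of the client side, assuming the server's standard `/generate` route on the default port:

```shell
# Ask for the 3 most likely tokens at each step (must be <= --max-top-n-tokens)
curl localhost:3000/generate \
    -X POST \
    -H 'Content-Type: application/json' \
    -d '{"inputs": "The capital of France is", "parameters": {"max_new_tokens": 5, "top_n_tokens": 3}}'
```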
## MAX_INPUT_LENGTH
```shell
      --max-input-length <MAX_INPUT_LENGTH>
          This is the maximum allowed input length (expressed in number of tokens) for users. The larger this value, the longer prompts users can send, which can impact the overall memory required to handle the load. Please note that some models have a finite range of sequence lengths they can handle
          
          [env: MAX_INPUT_LENGTH=]
          [default: 1024]

```
## MAX_TOTAL_TOKENS
```shell
      --max-total-tokens <MAX_TOTAL_TOKENS>
          This is the most important value to set as it defines the "memory budget" for running client requests. Clients will send input sequences and ask to generate `max_new_tokens` on top. With a value of `1512`, users can send either a prompt of `1000` and ask for `512` new tokens, or send a prompt of `1` and ask for `1511` max_new_tokens. The larger this value, the more memory each request will take up in your RAM and the less effective batching can be
          
          [env: MAX_TOTAL_TOKENS=]
          [default: 2048]

```
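
For example, the `1512` budget from the description, combined with a prompt cap:

```shell
# Each request may use at most 1512 tokens in total,
# of which at most 1000 may come from the prompt
text-generation-launcher --model-id bigscience/bloom-560m \
    --max-input-length 1000 --max-total-tokens 1512
```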
## WAITING_SERVED_RATIO
```shell
      --waiting-served-ratio <WAITING_SERVED_RATIO>
          This represents the ratio of waiting queries vs running queries where you want to start considering pausing the running queries to include the waiting ones into the same batch. `waiting_served_ratio=1.2` means that when 12 queries are waiting and there are only 10 queries left in the current batch, we check if we can fit those 12 waiting queries into the batching strategy, and if yes, then batching happens, delaying the 10 running queries by a `prefill` run.
          
          This setting is only applied if there is room in the batch as defined by `max_batch_total_tokens`.
          
          [env: WAITING_SERVED_RATIO=]
          [default: 1.2]

```
## MAX_BATCH_PREFILL_TOKENS
```shell
      --max-batch-prefill-tokens <MAX_BATCH_PREFILL_TOKENS>
          Limits the number of tokens for the prefill operation. Since this operation takes the most memory and is compute bound, it is useful to limit the number of requests that can be sent at once
          
          [env: MAX_BATCH_PREFILL_TOKENS=]
          [default: 4096]

```
## MAX_BATCH_TOTAL_TOKENS
```shell
      --max-batch-total-tokens <MAX_BATCH_TOTAL_TOKENS>
          **IMPORTANT** This is one critical control to allow maximum usage of the available hardware.
          
          This represents the total amount of potential tokens within a batch. When using padding (not recommended) this would be the equivalent of `batch_size` * `max_total_tokens`.
          
          However in the non-padded (flash attention) version this can be much finer.
          
          For `max_batch_total_tokens=1000`, you could fit `10` queries of `total_tokens=100` or a single query of `1000` tokens.
          
          Overall this number should be the largest possible amount that fits the remaining memory (after the model is loaded). Since the actual memory overhead depends on other parameters, such as whether you're using quantization or flash attention, and on the model implementation, text-generation-inference cannot infer this number automatically.
          
          [env: MAX_BATCH_TOTAL_TOKENS=]

```
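
Since the launcher cannot infer this value, a reasonable approach is to start conservatively, watch GPU memory usage, and raise it; the number below is illustrative, not a recommendation:

```shell
# With flash attention, a 32768-token budget could hold e.g. 32 requests
# of total_tokens=1024, or 16 requests of total_tokens=2048
text-generation-launcher --model-id bigscience/bloom-560m \
    --max-batch-total-tokens 32768
```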
## MAX_WAITING_TOKENS
```shell
      --max-waiting-tokens <MAX_WAITING_TOKENS>
          This setting defines how many tokens can be passed before forcing the waiting queries to be put on the batch (if the size of the batch allows for it). New queries require 1 `prefill` forward, which is different from `decode` and therefore you need to pause the running batch in order to run `prefill` to create the correct values for the waiting queries to be able to join the batch.
          
          With a value too small, queries will always "steal" the compute to run `prefill` and running queries will be delayed by a lot.
          
          With a value too big, waiting queries could wait for a very long time before being allowed a slot in the running batch. If your server is busy that means that requests that could run in ~2s on an empty server could end up running in ~20s because the query had to wait for 18s.
          
          This number is expressed in number of tokens to make it a bit more "model" agnostic, but what should really matter is the overall latency for end users.
          
          [env: MAX_WAITING_TOKENS=]
          [default: 20]

```
## HOSTNAME
```shell
      --hostname <HOSTNAME>
          The IP address to listen on
          
          [env: HOSTNAME=]
          [default: 0.0.0.0]

```
## PORT
```shell
  -p, --port <PORT>
          The port to listen on
          
          [env: PORT=]
          [default: 3000]

```
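
For example, binding to the loopback interface on a custom port:

```shell
# Only reachable from the local machine, on port 8080
text-generation-launcher --model-id bigscience/bloom-560m \
    --hostname 127.0.0.1 --port 8080
```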
## SHARD_UDS_PATH
```shell
      --shard-uds-path <SHARD_UDS_PATH>
          The name of the socket for gRPC communication between the webserver and the shards
          
          [env: SHARD_UDS_PATH=]
          [default: /tmp/text-generation-server]

```
## MASTER_ADDR
```shell
      --master-addr <MASTER_ADDR>
          The address the master shard will listen on. (setting used by torch distributed)
          
          [env: MASTER_ADDR=]
          [default: localhost]

```
## MASTER_PORT
```shell
      --master-port <MASTER_PORT>
          The port the master shard will listen on. (setting used by torch distributed)
          
          [env: MASTER_PORT=]
          [default: 29500]

```
## HUGGINGFACE_HUB_CACHE
```shell
      --huggingface-hub-cache <HUGGINGFACE_HUB_CACHE>
          The location of the huggingface hub cache. Used to override the location if you want to provide a mounted disk for instance
          
          [env: HUGGINGFACE_HUB_CACHE=]

```
## WEIGHTS_CACHE_OVERRIDE
```shell
      --weights-cache-override <WEIGHTS_CACHE_OVERRIDE>
          The location of the weights cache. Used to override the hub cache location if you want to provide a mounted disk for instance
          
          [env: WEIGHTS_CACHE_OVERRIDE=]

```
## DISABLE_CUSTOM_KERNELS
```shell
      --disable-custom-kernels
          For some models (like bloom), text-generation-inference implemented custom CUDA kernels to speed up inference. Those kernels were only tested on A100. Use this flag to disable them if you're running on different hardware and encounter issues
          
          [env: DISABLE_CUSTOM_KERNELS=]

```
## CUDA_MEMORY_FRACTION
```shell
      --cuda-memory-fraction <CUDA_MEMORY_FRACTION>
          Limit the CUDA available memory. The allowed value equals the total visible memory multiplied by cuda-memory-fraction
          
          [env: CUDA_MEMORY_FRACTION=]
          [default: 1.0]

```
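
For instance, to leave roughly half of each visible GPU's memory to another process (the fraction is illustrative):

```shell
# text-generation-inference treats 50% of each visible GPU's memory as its budget
text-generation-launcher --model-id bigscience/bloom-560m \
    --cuda-memory-fraction 0.5
```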
## ROPE_SCALING
```shell
      --rope-scaling <ROPE_SCALING>
          Rope scaling will only be used for RoPE models and allows rescaling the rotary position embeddings to accommodate larger prompts.
          
          Goes together with `rope_factor`.
          
          `--rope-factor 2.0` gives linear scaling with a factor of 2.0. `--rope-scaling dynamic` gives dynamic scaling with a factor of 1.0. `--rope-scaling linear` gives linear scaling with a factor of 1.0 (nothing will be changed, basically).
          
          Passing both `--rope-scaling` and `--rope-factor` fully describes the scaling you want.
          
          [env: ROPE_SCALING=]
          [possible values: linear, dynamic]

```
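
For example, doubling the usable context of a RoPE model with linear scaling; the model id is a placeholder, and whether quality holds up at the longer context is model-dependent:

```shell
# Linear RoPE scaling with factor 2.0, with the token budgets raised to match
text-generation-launcher --model-id <rope-model-id> \
    --rope-scaling linear --rope-factor 2.0 \
    --max-input-length 4000 --max-total-tokens 4096
```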
## ROPE_FACTOR
```shell
      --rope-factor <ROPE_FACTOR>
          Rope scaling will only be used for RoPE models. See `rope_scaling`
          
          [env: ROPE_FACTOR=]

```
## JSON_OUTPUT
```shell
      --json-output
          Outputs the logs in JSON format (useful for telemetry)
          
          [env: JSON_OUTPUT=]

```
## OTLP_ENDPOINT
```shell
      --otlp-endpoint <OTLP_ENDPOINT>
          [env: OTLP_ENDPOINT=]

```
## CORS_ALLOW_ORIGIN
```shell
      --cors-allow-origin <CORS_ALLOW_ORIGIN>
          [env: CORS_ALLOW_ORIGIN=]

```
## WATERMARK_GAMMA
```shell
      --watermark-gamma <WATERMARK_GAMMA>
          [env: WATERMARK_GAMMA=]

```
## WATERMARK_DELTA
```shell
      --watermark-delta <WATERMARK_DELTA>
          [env: WATERMARK_DELTA=]

```
## NGROK
```shell
      --ngrok
          Enable ngrok tunneling
          
          [env: NGROK=]

```
## NGROK_AUTHTOKEN
```shell
      --ngrok-authtoken <NGROK_AUTHTOKEN>
          ngrok authentication token
          
          [env: NGROK_AUTHTOKEN=]

```
## NGROK_EDGE
```shell
      --ngrok-edge <NGROK_EDGE>
          ngrok edge
          
          [env: NGROK_EDGE=]

```
## ENV
```shell
  -e, --env
          Display a lot of information about your runtime environment

```
## HELP
```shell
  -h, --help
          Print help (see a summary with '-h')

```
## VERSION
```shell
  -V, --version
          Print version

```