When the server is running at full load, look for the following in the log:
### Tune Your Request Submission Speed
`#queue-req` indicates the number of requests in the queue. If you frequently see `#queue-req == 0`, it suggests you are bottlenecked by the request submission speed.
A healthy range for `#queue-req` is `50 - 1000`.
On the other hand, do not make `#queue-req` too large because it will also increase the scheduling overhead on the server.
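
If the queue keeps draining to zero, the usual fix is to submit requests from many concurrent client workers instead of one at a time. Below is a minimal sketch, assuming an OpenAI-compatible `/v1/completions` endpoint on the default port `30000`; the URL, model name, prompt set, and worker count are placeholders to adapt to your setup.

```python
# Minimal sketch: keep the server's queue filled by submitting many requests
# concurrently rather than sequentially. Assumes an OpenAI-compatible
# /v1/completions endpoint at http://localhost:30000 (adjust URL/model as needed).
import concurrent.futures
import requests

URL = "http://localhost:30000/v1/completions"  # assumed endpoint
PROMPTS = [f"Question {i}: explain KV cache in one sentence." for i in range(512)]

def send_one(prompt: str) -> str:
    resp = requests.post(
        URL,
        json={"model": "default", "prompt": prompt, "max_tokens": 64},
        timeout=600,
    )
    resp.raise_for_status()
    return resp.json()["choices"][0]["text"]

# Hundreds of in-flight requests keep #queue-req in the healthy range
# instead of letting it drop to 0 between submissions.
with concurrent.futures.ThreadPoolExecutor(max_workers=256) as pool:
    results = list(pool.map(send_one, PROMPTS))

print(f"Completed {len(results)} requests")
```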
### Tune `--schedule-conservativeness`
`token usage` indicates the KV cache memory utilization of the server. `token usage > 0.9` means good utilization.
...
...
On the other hand, if `token usage` is very high and you frequently see warnings like
`decode out of memory happened, #retracted_reqs: 1, #new_token_ratio: 0.9998 -> 1.0000`, you can increase `--schedule-conservativeness` to a value like 1.3.
If `decode out of memory happened` appears occasionally but not frequently, that is fine.
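
One way to keep an eye on this is to count the retraction warnings in the captured server log and only raise `--schedule-conservativeness` if they show up more than occasionally. A minimal sketch, assuming the server output was redirected to a file (the log path is a placeholder):

```python
# Minimal sketch: count retraction warnings in a captured server log to judge
# whether --schedule-conservativeness should be raised. The log path is an
# assumption; adjust it to wherever you capture server output.
import re

LOG_PATH = "server.log"  # hypothetical path to captured server output
pattern = re.compile(r"decode out of memory happened, #retracted_reqs: (\d+)")

retraction_events = 0
retracted_reqs = 0
with open(LOG_PATH) as f:
    for line in f:
        m = pattern.search(line)
        if m:
            retraction_events += 1
            retracted_reqs += int(m.group(1))

print(f"{retraction_events} retraction warnings, {retracted_reqs} requests retracted")
# Occasional retractions are fine; if they appear frequently, increase
# --schedule-conservativeness (e.g., to 1.3).
```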
### Tune `--dp-size` and `--tp-size`
Data parallelism is better for throughput. When there is enough GPU memory, always favor data parallelism over tensor parallelism.
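
For example, on a machine with four GPUs and a model that fits on a single GPU, running four data-parallel replicas usually gives better throughput than one tensor-parallel group. A minimal sketch that launches the server this way via `subprocess`; the GPU count and model path are assumptions to adapt to your hardware:

```python
# Minimal sketch: favor data parallelism when the model fits in one GPU's memory.
# Assumes 4 GPUs and a placeholder model path; adjust both to your setup.
import subprocess

cmd = [
    "python", "-m", "sglang.launch_server",
    "--model-path", "meta-llama/Llama-3.1-8B-Instruct",  # placeholder model
    "--dp-size", "4",  # four data-parallel replicas for throughput
    "--tp-size", "1",  # no tensor parallelism needed if the model fits on one GPU
]
subprocess.run(cmd, check=True)  # blocks while the server runs
```

If the model does not fit on a single GPU, increase `--tp-size` just enough to fit it, and spend the remaining GPUs on `--dp-size`.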