- Benchmark a single static batch by running the following command without launching a server. The arguments are the same as those for `launch_server.py`. Note that this is not a dynamic batching server, so it may run out of memory at a batch size that a real server can handle: a real server splits the prefill into several batches/chunks, while this unit test does not.
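  A minimal sketch of that command, reusing the flags from the profiling step below (exact flag names and defaults may vary between versions; check `--help`):

  ```bash
  # Sketch: run one static batch directly, with no server involved.
  # Flags mirror the nsys profiling command in the list below.
  python3 -m sglang.bench_latency --model meta-llama/Meta-Llama-3-8B --batch-size 64 --input-len 512
  ```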
1. To profile a single batch, use `nsys profile --cuda-graph-trace=node python3 -m sglang.bench_latency --model meta-llama/Meta-Llama-3-8B --batch-size 64 --input-len 512`
2. To profile a server, use `nsys profile --cuda-graph-trace=node python3 -m sglang.launch_server --model meta-llama/Meta-Llama-3-8B`.
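A profile of an idle server captures little useful work, so send it some traffic while `nsys` is attached. A minimal smoke test, assuming the server's default port (30000) and its native `/generate` endpoint:

```bash
# Hit the running server once so the trace includes a real prefill/decode pass.
# Port 30000 is the launch_server default; adjust if you override it.
curl http://localhost:30000/generate \
  -H "Content-Type: application/json" \
  -d '{"text": "The capital of France is", "sampling_params": {"max_new_tokens": 32}}'
```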