Unverified Commit ad0ff62a authored by Lianmin Zheng's avatar Lianmin Zheng Committed by GitHub

Balance test in CI (#1411)

parent 9a903a87
@@ -88,29 +88,23 @@ jobs:
pip install -e "python[all]"
pip install flashinfer -i https://flashinfer.ai/whl/cu121/torch2.4/ --force-reinstall
- name: Benchmark Offline Throughput
timeout-minutes: 10
run: |
cd test/srt
python3 -m unittest test_bench_serving.TestBenchServing.test_offline_throughput_default
- name: Benchmark Offline Throughput (w/o RadixAttention)
- name: Benchmark Single Latency
timeout-minutes: 10
run: |
cd test/srt
python3 -m unittest test_bench_serving.TestBenchServing.test_offline_throughput_without_radix_cache
python3 -m unittest test_bench_latency.TestBenchLatency.test_default
- name: Benchmark Offline Throughput (w/o ChunkedPrefill)
- name: Benchmark Online Latency
timeout-minutes: 10
run: |
cd test/srt
python3 -m unittest test_bench_serving.TestBenchServing.test_offline_throughput_without_chunked_prefill
python3 -m unittest test_bench_serving.TestBenchServing.test_online_latency_default
- name: Benchmark Offline Throughput (w/ Triton)
- name: Benchmark Offline Throughput
timeout-minutes: 10
run: |
cd test/srt
python3 -m unittest test_bench_serving.TestBenchServing.test_offline_throughput_with_triton_attention_backend
python3 -m unittest test_bench_serving.TestBenchServing.test_offline_throughput_default
performance-test-1-gpu-part-2:
if: github.repository == 'sgl-project/sglang' || github.event_name == 'pull_request'
@@ -125,17 +119,23 @@ jobs:
pip install -e "python[all]"
pip install flashinfer -i https://flashinfer.ai/whl/cu121/torch2.4/ --force-reinstall
- name: Benchmark Single Latency
- name: Benchmark Offline Throughput (w/o RadixAttention)
timeout-minutes: 10
run: |
cd test/srt
python3 -m unittest test_bench_latency.TestBenchLatency.test_default
python3 -m unittest test_bench_serving.TestBenchServing.test_offline_throughput_without_radix_cache
- name: Benchmark Online Latency
- name: Benchmark Offline Throughput (w/o ChunkedPrefill)
timeout-minutes: 10
run: |
cd test/srt
python3 -m unittest test_bench_serving.TestBenchServing.test_online_latency_default
python3 -m unittest test_bench_serving.TestBenchServing.test_offline_throughput_without_chunked_prefill
- name: Benchmark Offline Throughput (w/ Triton)
timeout-minutes: 10
run: |
cd test/srt
python3 -m unittest test_bench_serving.TestBenchServing.test_offline_throughput_with_triton_attention_backend
performance-test-2-gpu:
if: github.repository == 'sgl-project/sglang' || github.event_name == 'pull_request'
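Every benchmark in the workflow diff above follows the same step shape. A minimal sketch of one such step, mirroring the commands shown (the step name and command are taken directly from the diff; surrounding job context is omitted):

```yaml
# One step of a performance-test job: run a single benchmark
# with a hard timeout so a hung server cannot stall the whole CI run.
- name: Benchmark Offline Throughput
  timeout-minutes: 10
  run: |
    cd test/srt
    python3 -m unittest test_bench_serving.TestBenchServing.test_offline_throughput_default
```

This commit rebalances which of these steps run in `performance-test-1-gpu-part-1` versus `performance-test-1-gpu-part-2`, without changing the benchmarks themselves.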
@@ -7,5 +7,5 @@
- `bench_latency.py`: Benchmark a single static batch.
- `bench_serving.py`: Benchmark online serving with dynamic requests.
- `global_config.py`: The global configs and constants.
- `launch_server.py`: The entry point of launching local server.
- `launch_server.py`: The entry point for launching the local server.
- `utils.py`: Common utilities.
@@ -69,7 +69,7 @@ class TestBenchServing(unittest.TestCase):
if os.getenv("SGLANG_IS_IN_CI", "false") == "true":
assert res["median_e2e_latency_ms"] < 12000
assert res["median_ttft_ms"] < 78
assert res["median_ttft_ms"] < 80
assert res["median_itl_ms"] < 12
def test_moe_offline_throughput_default(self):
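The test change above loosens the median-TTFT bound from 78 ms to 80 ms. The CI gate is a set of simple threshold assertions on the benchmark's result metrics; a minimal sketch of that pattern (the metric names and bounds mirror the diff, but `check_thresholds` is a hypothetical helper, not part of the sglang test suite):

```python
# Sketch of the threshold-gating pattern used by the CI benchmark tests:
# compare measured metrics against fixed upper bounds and report violations.
# Bounds mirror the diff above; the helper itself is hypothetical.

THRESHOLDS = {
    "median_e2e_latency_ms": 12000,
    "median_ttft_ms": 80,  # loosened from 78 in this commit
    "median_itl_ms": 12,
}

def check_thresholds(res: dict, thresholds: dict = THRESHOLDS) -> list:
    """Return human-readable violations; an empty list means the gate passes."""
    violations = []
    for metric, bound in thresholds.items():
        value = res[metric]
        if not value < bound:  # the tests assert strict "<", so value == bound fails
            violations.append(f"{metric}={value} exceeds bound {bound}")
    return violations
```

In the actual tests these checks only run inside CI (guarded by the `SGLANG_IS_IN_CI` environment variable), so local benchmark runs report numbers without failing.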