sglang · Commit 74bc9184 (unverified)
Authored Dec 09, 2024 by Yineng Zhang; committed by GitHub, Dec 09, 2024

minor: add random use case (#2408)

parent 0f8eb153
Showing 2 changed files with 31 additions and 2 deletions:
  .github/workflows/experiment-runner.yml  (+6, -2)
  test/srt/configs/random_config.yaml      (+25, -0)
.github/workflows/experiment-runner.yml

@@ -2,6 +2,10 @@ name: Experiment Runner
 on:
   workflow_dispatch:
+    inputs:
+      script:
+        description: "Experiment Runner Script"
+        default: "configs/sharegpt_config.yaml"

 concurrency:
   group: experiment-runner-${{ github.ref }}

@@ -20,7 +24,7 @@ jobs:
           bash scripts/ci_install_dependency.sh

       - name: Test experiment runner
-        timeout-minutes: 10
+        timeout-minutes: 120
         run: |
           cd test/srt
-          python3 experiment_runner.py --config configs/sharegpt_config.yaml
+          python3 experiment_runner.py --config ${{ inputs.script }}
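With this change the workflow accepts a `script` input, so it can be dispatched against any config file rather than only the ShareGPT one. A minimal sketch of the request body for GitHub's REST endpoint `POST /repos/{owner}/{repo}/actions/workflows/experiment-runner.yml/dispatches`; the `ref` value (`main`) is an assumption, not something stated in the diff:

```python
import json

# workflow_dispatch payload: "ref" picks the branch to run against,
# "inputs" carries the inputs declared in the workflow file.
# "main" is an assumed branch name for illustration.
payload = {
    "ref": "main",
    "inputs": {"script": "configs/random_config.yaml"},
}
body = json.dumps(payload)
print(body)
```

Omitting `inputs.script` from the payload would make the run fall back to the declared default, configs/sharegpt_config.yaml.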
test/srt/configs/random_config.yaml (new file, 0 → 100644)

tasks:
  - name: sglang-128-4
    server_cmd: python3 -m sglang.launch_server --model-path meta-llama/Llama-3.1-8B-Instruct --disable-radix-cache
    client_cmd: python3 -m sglang.bench_serving --backend sglang --dataset-name random --random-input 128 --random-output 4 --request-rate 24 --num-prompt 1440
  - name: vllm-128-4
    server_cmd: python3 -m vllm.entrypoints.openai.api_server --model meta-llama/Llama-3.1-8B-Instruct --disable-log-requests
    client_cmd: python3 -m sglang.bench_serving --backend vllm --dataset-name random --random-input 128 --random-output 4 --request-rate 24 --num-prompt 1440
  - name: sglang-2000-100
    server_cmd: python3 -m sglang.launch_server --model-path meta-llama/Llama-3.1-8B-Instruct --disable-radix-cache
    client_cmd: python3 -m sglang.bench_serving --backend sglang --dataset-name random --random-input 2000 --random-output 100 --request-rate 2 --num-prompt 120
  - name: vllm-2000-100
    server_cmd: python3 -m vllm.entrypoints.openai.api_server --model meta-llama/Llama-3.1-8B-Instruct --disable-log-requests
    client_cmd: python3 -m sglang.bench_serving --backend vllm --dataset-name random --random-input 2000 --random-output 100 --request-rate 2 --num-prompt 120
  - name: sglang-4000-200
    server_cmd: python3 -m sglang.launch_server --model-path meta-llama/Llama-3.1-8B-Instruct --disable-radix-cache
    client_cmd: python3 -m sglang.bench_serving --backend sglang --dataset-name random --random-input 4000 --random-output 200 --request-rate 8 --num-prompt 480
  - name: vllm-4000-200
    server_cmd: python3 -m vllm.entrypoints.openai.api_server --model meta-llama/Llama-3.1-8B-Instruct --disable-log-requests
    client_cmd: python3 -m sglang.bench_serving --backend vllm --dataset-name random --random-input 4000 --random-output 200 --request-rate 8 --num-prompt 480
  - name: sglang-32000-100
    server_cmd: python3 -m sglang.launch_server --model-path meta-llama/Llama-3.1-8B-Instruct --disable-radix-cache
    client_cmd: python3 -m sglang.bench_serving --backend sglang --dataset-name random --random-input 32000 --random-output 100 --request-rate 1 --num-prompt 60
  - name: vllm-32000-100
    server_cmd: python3 -m vllm.entrypoints.openai.api_server --model meta-llama/Llama-3.1-8B-Instruct --disable-log-requests
    client_cmd: python3 -m sglang.bench_serving --backend vllm --dataset-name random --random-input 32000 --random-output 100 --request-rate 1 --num-prompt 60
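Each entry above pairs a server launch with a matching bench_serving client, and every pairing is sized to roughly 60 seconds of request arrivals (1440/24 = 120/2 = 480/8 = 60/1 = 60). A minimal sketch of how such a task list could be consumed; the real experiment_runner.py may work differently, and the `plan` helper here is hypothetical:

```python
import shlex

# Task list mirroring test/srt/configs/random_config.yaml (one pair shown).
tasks = [
    {
        "name": "sglang-128-4",
        "server_cmd": "python3 -m sglang.launch_server "
                      "--model-path meta-llama/Llama-3.1-8B-Instruct --disable-radix-cache",
        "client_cmd": "python3 -m sglang.bench_serving --backend sglang "
                      "--dataset-name random --random-input 128 --random-output 4 "
                      "--request-rate 24 --num-prompt 1440",
    },
]

def plan(task):
    """Split the shell command strings into argv lists suitable for subprocess."""
    return shlex.split(task["server_cmd"]), shlex.split(task["client_cmd"])

server_argv, client_argv = plan(tasks[0])
```

A runner built on this would start `server_argv` in the background, wait for the server to become ready, run `client_argv` to completion, then tear the server down before moving to the next task.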