sglang · commit 9020f7fc (unverified)

chore: bump v0.5.0rc0 (#8959)

Authored by Yineng Zhang on Aug 08, 2025; committed via GitHub on Aug 08, 2025.
Parent commit: dd650e0e

Showing 5 changed files with 11 additions and 11 deletions (+11 / -11):
- benchmark/deepseek_v3/README.md (+1 / -1)
- docs/references/setup_github_runner.md (+2 / -2)
- docs/start/install.md (+6 / -6)
- python/pyproject.toml (+1 / -1)
- python/sglang/version.py (+1 / -1)
benchmark/deepseek_v3/README.md

```diff
@@ -33,7 +33,7 @@ Add [performance optimization options](#performance-optimization-options) as needed
 # Installation
-pip install "sglang[all]>=0.4.10.post2"
+pip install "sglang[all]>=0.5.0rc0"
 # Launch
 python3 -m sglang.launch_server --model deepseek-ai/DeepSeek-V3 --tp 8 --trust-remote-code
```
docs/references/setup_github_runner.md

```diff
@@ -11,9 +11,9 @@ docker pull nvidia/cuda:12.1.1-devel-ubuntu22.04
 # Nvidia
 docker run --shm-size 128g -it -v /tmp/huggingface:/hf_home --gpus all nvidia/cuda:12.1.1-devel-ubuntu22.04 /bin/bash
 # AMD
-docker run --rm --device=/dev/kfd --device=/dev/dri --group-add video --shm-size 128g -it -v /tmp/huggingface:/hf_home lmsysorg/sglang:v0.4.10.post2-rocm630 /bin/bash
+docker run --rm --device=/dev/kfd --device=/dev/dri --group-add video --shm-size 128g -it -v /tmp/huggingface:/hf_home lmsysorg/sglang:v0.5.0rc0-rocm630 /bin/bash
 # AMD just the last 2 GPUs
-docker run --rm --device=/dev/kfd --device=/dev/dri/renderD176 --device=/dev/dri/renderD184 --group-add video --shm-size 128g -it -v /tmp/huggingface:/hf_home lmsysorg/sglang:v0.4.10.post2-rocm630 /bin/bash
+docker run --rm --device=/dev/kfd --device=/dev/dri/renderD176 --device=/dev/dri/renderD184 --group-add video --shm-size 128g -it -v /tmp/huggingface:/hf_home lmsysorg/sglang:v0.5.0rc0-rocm630 /bin/bash
 ### Step 2: Configure the runner by `config.sh`
```
docs/start/install.md

```diff
@@ -11,7 +11,7 @@ It is recommended to use uv to install the dependencies for faster installation:
 pip install --upgrade pip
 pip install uv
-uv pip install "sglang[all]>=0.4.10.post2"
+uv pip install "sglang[all]>=0.5.0rc0"
 **Quick Fixes to Common Problems**
@@ -27,7 +27,7 @@ uv pip install "sglang[all]>=0.4.10.post2"
 # Use the last release branch
-git clone -b v0.4.10.post2 https://github.com/sgl-project/sglang.git
+git clone -b v0.5.0rc0 https://github.com/sgl-project/sglang.git
 cd sglang
 pip install --upgrade pip
@@ -42,7 +42,7 @@ Note: For AMD ROCm system with Instinct/MI GPUs, do following instead:
 # Use the last release branch
-git clone -b v0.4.10.post2 https://github.com/sgl-project/sglang.git
+git clone -b v0.5.0rc0 https://github.com/sgl-project/sglang.git
 cd sglang
 pip install --upgrade pip
@@ -74,7 +74,7 @@ docker run --gpus all \
 Note: For AMD ROCm system with Instinct/MI GPUs, it is recommended to use `docker/Dockerfile.rocm` to build images, example and usage as below:
-docker build --build-arg SGL_BRANCH=v0.4.10.post2 -t v0.4.10.post2-rocm630 -f Dockerfile.rocm .
+docker build --build-arg SGL_BRANCH=v0.5.0rc0 -t v0.5.0rc0-rocm630 -f Dockerfile.rocm .
 alias drun='docker run -it --rm --network=host --device=/dev/kfd --device=/dev/dri --ipc=host \
   --shm-size 16G --group-add video --cap-add=SYS_PTRACE --security-opt seccomp=unconfined \
@@ -83,11 +83,11 @@ alias drun='docker run -it --rm --network=host --device=/dev/kfd --device=/dev/dri --ipc=host \
 drun -p 30000:30000 \
   -v ~/.cache/huggingface:/root/.cache/huggingface \
   --env "HF_TOKEN=<secret>" \
-  v0.4.10.post2-rocm630 \
+  v0.5.0rc0-rocm630 \
   python3 -m sglang.launch_server --model-path meta-llama/Llama-3.1-8B-Instruct --host 0.0.0.0 --port 30000
 # Till flashinfer backend available, --attention-backend triton --sampling-backend pytorch are set by default
-drun v0.4.10.post2-rocm630 python3 -m sglang.bench_one_batch --batch-size 32 --input 1024 --output 128 --model amd/Meta-Llama-3.1-8B-Instruct-FP8-KV --tp 8 --quantization fp8
+drun v0.5.0rc0-rocm630 python3 -m sglang.bench_one_batch --batch-size 32 --input 1024 --output 128 --model amd/Meta-Llama-3.1-8B-Instruct-FP8-KV --tp 8 --quantization fp8
 Note: Please refer to [the CPU installation guide using Docker](../references/cpu.md#install-using-docker)
```
python/pyproject.toml

```diff
@@ -4,7 +4,7 @@ build-backend = "setuptools.build_meta"
 [project]
 name = "sglang"
-version = "0.4.10.post2"
+version = "0.5.0rc0"
 description = "SGLang is yet another fast serving framework for large language models and vision language models."
 readme = "README.md"
 requires-python = ">=3.9"
```
python/sglang/version.py

```diff
@@ -1 +1 @@
-__version__ = "0.4.10.post2"
+__version__ = "0.5.0rc0"
```