Commit 678d8cc9 (Unverified)
Authored May 09, 2025 by Yineng Zhang; committed by GitHub on May 09, 2025
Parent: d2cb3024

chore: bump v0.4.6.post3 (#6165)
Showing 6 changed files with 12 additions and 12 deletions:

benchmark/deepseek_v3/README.md (+1, −1)
docker/Dockerfile.rocm (+1, −1)
docs/developer/setup_github_runner.md (+2, −2)
docs/start/install.md (+6, −6)
python/pyproject.toml (+1, −1)
python/sglang/version.py (+1, −1)
benchmark/deepseek_v3/README.md

@@ -33,7 +33,7 @@ Add [performance optimization options](#performance-optimization-options) as nee
 ```bash
 # Installation
-pip install "sglang[all]>=0.4.6.post2"
+pip install "sglang[all]>=0.4.6.post3"

 # Launch
 python3 -m sglang.launch_server --model deepseek-ai/DeepSeek-V3 --tp 8 --trust-remote-code
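The README hunk above installs the bumped release and launches a DeepSeek-V3 server. As a quick sanity check after upgrading (not part of the README change; it assumes the server runs on its default port 30000 and exposes the `/health` route of `sglang.launch_server`):

```bash
# Expect 0.4.6.post3 after the upgrade
python3 -c "import sglang; print(sglang.__version__)"

# Expect HTTP 200 once the server has finished loading the model
curl http://localhost:30000/health
```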
docker/Dockerfile.rocm

 # Usage (to build SGLang ROCm docker image):
-# docker build --build-arg SGL_BRANCH=v0.4.6.post2 -t v0.4.6.post2-rocm630 -f Dockerfile.rocm .
+# docker build --build-arg SGL_BRANCH=v0.4.6.post3 -t v0.4.6.post3-rocm630 -f Dockerfile.rocm .
 # default base image
 ARG BASE_IMAGE="rocm/sgl-dev:vllm20250114"
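If you rebuild the ROCm image against the bumped branch (using the command from the usage comment above), a quick smoke test is to print the packaged version from inside the image. A sketch, assuming `python3` and `sglang` are on the image's default path:

```bash
docker build --build-arg SGL_BRANCH=v0.4.6.post3 -t v0.4.6.post3-rocm630 -f Dockerfile.rocm .
# Should print 0.4.6.post3 if the new branch was picked up
docker run --rm v0.4.6.post3-rocm630 python3 -c "import sglang; print(sglang.__version__)"
```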
docs/developer/setup_github_runner.md

@@ -11,9 +11,9 @@ docker pull nvidia/cuda:12.1.1-devel-ubuntu22.04
 # Nvidia
 docker run --shm-size 128g -it -v /tmp/huggingface:/hf_home --gpus all nvidia/cuda:12.1.1-devel-ubuntu22.04 /bin/bash
 # AMD
-docker run --rm --device=/dev/kfd --device=/dev/dri --group-add video --shm-size 128g -it -v /tmp/huggingface:/hf_home lmsysorg/sglang:v0.4.6.post2-rocm630 /bin/bash
+docker run --rm --device=/dev/kfd --device=/dev/dri --group-add video --shm-size 128g -it -v /tmp/huggingface:/hf_home lmsysorg/sglang:v0.4.6.post3-rocm630 /bin/bash
 # AMD just the last 2 GPUs
-docker run --rm --device=/dev/kfd --device=/dev/dri/renderD176 --device=/dev/dri/renderD184 --group-add video --shm-size 128g -it -v /tmp/huggingface:/hf_home lmsysorg/sglang:v0.4.6.post2-rocm630 /bin/bash
+docker run --rm --device=/dev/kfd --device=/dev/dri/renderD176 --device=/dev/dri/renderD184 --group-add video --shm-size 128g -it -v /tmp/huggingface:/hf_home lmsysorg/sglang:v0.4.6.post3-rocm630 /bin/bash
 ```

 ### Step 2: Configure the runner by `config.sh`
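Before configuring the runner in Step 2, it can help to confirm the container actually sees the GPUs. A rough sketch, assuming `nvidia-smi` and `rocm-smi` are available in the respective images (typically true, but not guaranteed by this commit):

```bash
# Inside the Nvidia container
nvidia-smi
# Inside either AMD container
rocm-smi
```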
docs/start/install.md

@@ -11,7 +11,7 @@ It is recommended to use uv to install the dependencies for faster installation:
 ```bash
 pip install --upgrade pip
 pip install uv
-uv pip install "sglang[all]>=0.4.6.post2"
+uv pip install "sglang[all]>=0.4.6.post3"
 ```

 **Quick Fixes to Common Problems**
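As an aside (not part of this commit), the same pinned requirement also works inside a uv-managed virtual environment if you prefer to keep the install isolated. A sketch:

```bash
uv venv .venv
source .venv/bin/activate
uv pip install "sglang[all]>=0.4.6.post3"
python3 -c "import sglang; print(sglang.__version__)"  # expect 0.4.6.post3
```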
@@ -29,7 +29,7 @@ uv pip install "sglang[all]>=0.4.6.post2"
 ```bash
 # Use the last release branch
-git clone -b v0.4.6.post2 https://github.com/sgl-project/sglang.git
+git clone -b v0.4.6.post3 https://github.com/sgl-project/sglang.git
 cd sglang
 pip install --upgrade pip
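After a release bump like this one it is easy to build from a stale checkout, so a quick check that the clone is on the intended tag can save a rebuild. A sketch, not from the docs:

```bash
git clone -b v0.4.6.post3 https://github.com/sgl-project/sglang.git
cd sglang
git describe --tags   # should report v0.4.6.post3
```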
@@ -44,7 +44,7 @@ Note: For AMD ROCm system with Instinct/MI GPUs, do following instead:
 ```bash
 # Use the last release branch
-git clone -b v0.4.6.post2 https://github.com/sgl-project/sglang.git
+git clone -b v0.4.6.post3 https://github.com/sgl-project/sglang.git
 cd sglang
 pip install --upgrade pip
@@ -73,7 +73,7 @@ docker run --gpus all \
 Note: For AMD ROCm system with Instinct/MI GPUs, it is recommended to use `docker/Dockerfile.rocm` to build images, example and usage as below:
 ```bash
-docker build --build-arg SGL_BRANCH=v0.4.6.post2 -t v0.4.6.post2-rocm630 -f Dockerfile.rocm .
+docker build --build-arg SGL_BRANCH=v0.4.6.post3 -t v0.4.6.post3-rocm630 -f Dockerfile.rocm .
 alias drun='docker run -it --rm --network=host --device=/dev/kfd --device=/dev/dri --ipc=host \
     --shm-size 16G --group-add video --cap-add=SYS_PTRACE --security-opt seccomp=unconfined \
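Once the build above finishes, the retagged image should be visible locally. A quick check, independent of the commit:

```bash
# The freshly built ROCm image should appear under the new tag
docker image ls | grep rocm630
```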
@@ -82,11 +82,11 @@ alias drun='docker run -it --rm --network=host --device=/dev/kfd --device=/dev/d
 drun -p 30000:30000 \
     -v ~/.cache/huggingface:/root/.cache/huggingface \
     --env "HF_TOKEN=<secret>" \
-    v0.4.6.post2-rocm630 \
+    v0.4.6.post3-rocm630 \
     python3 -m sglang.launch_server --model-path meta-llama/Llama-3.1-8B-Instruct --host 0.0.0.0 --port 30000

 # Till flashinfer backend available, --attention-backend triton --sampling-backend pytorch are set by default
-drun v0.4.6.post2-rocm630 python3 -m sglang.bench_one_batch --batch-size 32 --input 1024 --output 128 --model amd/Meta-Llama-3.1-8B-Instruct-FP8-KV --tp 8 --quantization fp8
+drun v0.4.6.post3-rocm630 python3 -m sglang.bench_one_batch --batch-size 32 --input 1024 --output 128 --model amd/Meta-Llama-3.1-8B-Instruct-FP8-KV --tp 8 --quantization fp8
 ```

 ## Method 4: Using docker compose
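With the server from the `drun` example running on port 30000, a minimal request verifies it is serving. A sketch, assuming the OpenAI-compatible `/v1/chat/completions` route that `sglang.launch_server` exposes by default, with a hypothetical prompt:

```bash
curl http://localhost:30000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
        "model": "meta-llama/Llama-3.1-8B-Instruct",
        "messages": [{"role": "user", "content": "Say hello in one short sentence."}],
        "max_tokens": 32
      }'
```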
python/pyproject.toml

@@ -4,7 +4,7 @@ build-backend = "setuptools.build_meta"
 [project]
 name = "sglang"
-version = "0.4.6.post2"
+version = "0.4.6.post3"
 description = "SGLang is yet another fast serving framework for large language models and vision language models."
 readme = "README.md"
 requires-python = ">=3.8"
python/sglang/version.py

-__version__ = "0.4.6.post2"
+__version__ = "0.4.6.post3"