sglang · Commits · 9c58e68b

Commit 9c58e68b (unverified), authored Mar 06, 2025 by Lianmin Zheng, committed by GitHub on Mar 06, 2025

Release v0.4.3.post4 (#4140)

Parent: d03b3467

Showing 5 changed files with 11 additions and 11 deletions (+11 / -11)
docker/Dockerfile.rocm (+1 / -1)
docs/developer/setup_github_runner.md (+2 / -2)
docs/start/install.md (+6 / -6)
python/pyproject.toml (+1 / -1)
python/sglang/version.py (+1 / -1)
docker/Dockerfile.rocm

 # Usage (to build SGLang ROCm docker image):
-# docker build --build-arg SGL_BRANCH=v0.4.3.post3 -t v0.4.3.post3-rocm630 -f Dockerfile.rocm .
+# docker build --build-arg SGL_BRANCH=v0.4.3.post4 -t v0.4.3.post4-rocm630 -f Dockerfile.rocm .
 # default base image
 ARG BASE_IMAGE="rocm/sgl-dev:vllm20250114"
 ...
docs/developer/setup_github_runner.md

@@ -11,9 +11,9 @@ docker pull nvidia/cuda:12.1.1-devel-ubuntu22.04
 # Nvidia
 docker run --shm-size 128g -it -v /tmp/huggingface:/hf_home --gpus all nvidia/cuda:12.1.1-devel-ubuntu22.04 /bin/bash
 # AMD
-docker run --rm --device=/dev/kfd --device=/dev/dri --group-add video --shm-size 128g -it -v /tmp/huggingface:/hf_home lmsysorg/sglang:v0.4.3.post3-rocm630 /bin/bash
+docker run --rm --device=/dev/kfd --device=/dev/dri --group-add video --shm-size 128g -it -v /tmp/huggingface:/hf_home lmsysorg/sglang:v0.4.3.post4-rocm630 /bin/bash
 # AMD just the last 2 GPUs
-docker run --rm --device=/dev/kfd --device=/dev/dri/renderD176 --device=/dev/dri/renderD184 --group-add video --shm-size 128g -it -v /tmp/huggingface:/hf_home lmsysorg/sglang:v0.4.3.post3-rocm630 /bin/bash
+docker run --rm --device=/dev/kfd --device=/dev/dri/renderD176 --device=/dev/dri/renderD184 --group-add video --shm-size 128g -it -v /tmp/huggingface:/hf_home lmsysorg/sglang:v0.4.3.post4-rocm630 /bin/bash
 ```
 ### Step 2: Configure the runner by `config.sh`
 ...
docs/start/install.md

@@ -11,7 +11,7 @@ It is recommended to use uv to install the dependencies for faster installation:
 ```bash
 pip install --upgrade pip
 pip install uv
-uv pip install "sglang[all]>=0.4.3.post3" --find-links https://flashinfer.ai/whl/cu124/torch2.5/flashinfer-python
+uv pip install "sglang[all]>=0.4.3.post4" --find-links https://flashinfer.ai/whl/cu124/torch2.5/flashinfer-python
 ```
 **Quick Fixes to Common Problems**
 ...
@@ -27,7 +27,7 @@ uv pip install "sglang[all]>=0.4.3.post3" --find-links https://flashinfer.ai/whl
 ## Method 2: From source
 ```
 # Use the last release branch
-git clone -b v0.4.3.post3 https://github.com/sgl-project/sglang.git
+git clone -b v0.4.3.post4 https://github.com/sgl-project/sglang.git
 cd sglang
 pip install --upgrade pip
 ...
@@ -42,7 +42,7 @@ Note: For AMD ROCm system with Instinct/MI GPUs, do following instead:
 ```
 # Use the last release branch
-git clone -b v0.4.3.post3 https://github.com/sgl-project/sglang.git
+git clone -b v0.4.3.post4 https://github.com/sgl-project/sglang.git
 cd sglang
 pip install --upgrade pip
 ...
@@ -70,7 +70,7 @@ docker run --gpus all \
 Note: For AMD ROCm system with Instinct/MI GPUs, it is recommended to use `docker/Dockerfile.rocm` to build images, example and usage as below:
 ```bash
-docker build --build-arg SGL_BRANCH=v0.4.3.post3 -t v0.4.3.post3-rocm630 -f Dockerfile.rocm .
+docker build --build-arg SGL_BRANCH=v0.4.3.post4 -t v0.4.3.post4-rocm630 -f Dockerfile.rocm .
 alias drun='docker run -it --rm --network=host --device=/dev/kfd --device=/dev/dri --ipc=host \
 --shm-size 16G --group-add video --cap-add=SYS_PTRACE --security-opt seccomp=unconfined \
 ...
@@ -79,11 +79,11 @@ alias drun='docker run -it --rm --network=host --device=/dev/kfd --device=/dev/d
 drun -p 30000:30000 \
 -v ~/.cache/huggingface:/root/.cache/huggingface \
 --env "HF_TOKEN=<secret>" \
-v0.4.3.post3-rocm630 \
+v0.4.3.post4-rocm630 \
 python3 -m sglang.launch_server --model-path meta-llama/Llama-3.1-8B-Instruct --host 0.0.0.0 --port 30000
 # Till flashinfer backend available, --attention-backend triton --sampling-backend pytorch are set by default
-drun v0.4.3.post3-rocm630 python3 -m sglang.bench_one_batch --batch-size 32 --input 1024 --output 128 --model amd/Meta-Llama-3.1-8B-Instruct-FP8-KV --tp 8 --quantization fp8
+drun v0.4.3.post4-rocm630 python3 -m sglang.bench_one_batch --batch-size 32 --input 1024 --output 128 --model amd/Meta-Llama-3.1-8B-Instruct-FP8-KV --tp 8 --quantization fp8
 ```
 ## Method 4: Using docker compose
 ...
python/pyproject.toml

@@ -4,7 +4,7 @@ build-backend = "setuptools.build_meta"
 [project]
 name = "sglang"
-version = "0.4.3.post3"
+version = "0.4.3.post4"
 description = "SGLang is yet another fast serving framework for large language models and vision language models."
 readme = "README.md"
 requires-python = ">=3.8"
 ...
python/sglang/version.py

-__version__ = "0.4.3.post3"
+__version__ = "0.4.3.post4"
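The bump from `0.4.3.post3` to `0.4.3.post4` is a PEP 440 post-release increment, so version-aware tooling orders the new release after the old one. A minimal sketch of that ordering (the `parse_post_version` helper is illustrative, not part of sglang; real tooling would use `packaging.version.Version`):

```python
import re

def parse_post_version(v: str) -> tuple:
    """Split a PEP 440-style 'X.Y.Z[.postN]' string into a sortable tuple.

    A missing .postN segment is treated as post0, so '0.4.3' sorts
    before '0.4.3.post1'. Illustrative helper only.
    """
    m = re.fullmatch(r"(\d+)\.(\d+)\.(\d+)(?:\.post(\d+))?", v)
    if not m:
        raise ValueError(f"unsupported version string: {v}")
    major, minor, patch, post = m.groups()
    return (int(major), int(minor), int(patch), int(post or 0))

old = parse_post_version("0.4.3.post3")
new = parse_post_version("0.4.3.post4")
print(new > old)  # prints True: post4 sorts after post3
```

This is why the `>=0.4.3.post4` specifier in the updated install docs picks up the new wheel while still admitting any later release.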