sglang · Commits · 9a00e6f4

Commit 9a00e6f4 (unverified), authored Nov 22, 2024 by Yineng Zhang; committed via GitHub on Nov 22, 2024.

chore: bump v0.3.6 (#2120)

Parent: 4f8c3aea

Showing 5 changed files with 9 additions and 9 deletions (+9 −9):

- `docker/Dockerfile.rocm` (+1 −1)
- `docs/developer/setup_github_runner.md` (+2 −2)
- `docs/start/install.md` (+4 −4)
- `python/pyproject.toml` (+1 −1)
- `python/sglang/version.py` (+1 −1)
`docker/Dockerfile.rocm`

```diff
 # Usage (to build SGLang ROCm docker image):
-# docker build --build-arg SGL_BRANCH=v0.3.5.post2 -t testImage -f Dockerfile.rocm .
+# docker build --build-arg SGL_BRANCH=v0.3.6 -t testImage -f Dockerfile.rocm .

 # default base image
 ARG BASE_IMAGE="rocm/vllm-dev:20241022"
```
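The ROCm images referenced throughout these docs all follow the tag pattern `lmsysorg/sglang:<branch>-rocm620`, which is why the version bump touches the same string in several places. A minimal sketch of that naming convention (the helper function is hypothetical, for illustration only; the docs write these tags by hand):

```python
def rocm_image_tag(branch: str, rocm_suffix: str = "rocm620") -> str:
    """Build the Docker Hub tag used in the sglang docs for ROCm images.

    Hypothetical helper, not part of sglang itself.
    """
    return f"lmsysorg/sglang:{branch}-{rocm_suffix}"

# The bump in this commit moves every such tag from the .post2 patch to v0.3.6:
print(rocm_image_tag("v0.3.5.post2"))  # lmsysorg/sglang:v0.3.5.post2-rocm620
print(rocm_image_tag("v0.3.6"))        # lmsysorg/sglang:v0.3.6-rocm620
```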
`docs/developer/setup_github_runner.md`

````diff
@@ -11,9 +11,9 @@ docker pull nvidia/cuda:12.1.1-devel-ubuntu22.04
 # Nvidia
 docker run --shm-size 128g -it -v /tmp/huggingface:/hf_home --gpus all nvidia/cuda:12.1.1-devel-ubuntu22.04 /bin/bash
 # AMD
-docker run --rm --device=/dev/kfd --device=/dev/dri --group-add video --shm-size 128g -it -v /tmp/huggingface:/hf_home lmsysorg/sglang:v0.3.5.post2-rocm620 /bin/bash
+docker run --rm --device=/dev/kfd --device=/dev/dri --group-add video --shm-size 128g -it -v /tmp/huggingface:/hf_home lmsysorg/sglang:v0.3.6-rocm620 /bin/bash
 # AMD just the last 2 GPUs
-docker run --rm --device=/dev/kfd --device=/dev/dri/renderD176 --device=/dev/dri/renderD184 --group-add video --shm-size 128g -it -v /tmp/huggingface:/hf_home lmsysorg/sglang:v0.3.5.post2-rocm620 /bin/bash
+docker run --rm --device=/dev/kfd --device=/dev/dri/renderD176 --device=/dev/dri/renderD184 --group-add video --shm-size 128g -it -v /tmp/huggingface:/hf_home lmsysorg/sglang:v0.3.6-rocm620 /bin/bash
 ```
 ### Step 2: Configure the runner by `config.sh`
````
`docs/start/install.md`

````diff
@@ -16,7 +16,7 @@ Note: Please check the [FlashInfer installation doc](https://docs.flashinfer.ai/
 ## Method 2: From source
 ```
 # Use the last release branch
-git clone -b v0.3.5.post2 https://github.com/sgl-project/sglang.git
+git clone -b v0.3.6 https://github.com/sgl-project/sglang.git
 cd sglang
 pip install --upgrade pip
@@ -46,7 +46,7 @@ docker run --gpus all \
 Note: To AMD ROCm system with Instinct/MI GPUs, it is recommended to use `docker/Dockerfile.rocm` to build images, example and usage as below:
 ```bash
-docker build --build-arg SGL_BRANCH=v0.3.5.post2 -t v0.3.5.post2-rocm620 -f Dockerfile.rocm .
+docker build --build-arg SGL_BRANCH=v0.3.6 -t v0.3.6-rocm620 -f Dockerfile.rocm .
 alias drun='docker run -it --rm --network=host --device=/dev/kfd --device=/dev/dri --ipc=host \
   --shm-size 16G --group-add video --cap-add=SYS_PTRACE --security-opt seccomp=unconfined \
@@ -55,11 +55,11 @@ alias drun='docker run -it --rm --network=host --device=/dev/kfd --device=/dev/d
 drun -p 30000:30000 \
   -v ~/.cache/huggingface:/root/.cache/huggingface \
   --env "HF_TOKEN=<secret>" \
-  v0.3.5.post2-rocm620 \
+  v0.3.6-rocm620 \
   python3 -m sglang.launch_server --model-path meta-llama/Llama-3.1-8B-Instruct --host 0.0.0.0 --port 30000

 # Till flashinfer backend available, --attention-backend triton --sampling-backend pytorch are set by default
-drun v0.3.5.post2-rocm620 python3 -m sglang.bench_one_batch --batch-size 32 --input 1024 --output 128 --model amd/Meta-Llama-3.1-8B-Instruct-FP8-KV --tp 8 --quantization fp8
+drun v0.3.6-rocm620 python3 -m sglang.bench_one_batch --batch-size 32 --input 1024 --output 128 --model amd/Meta-Llama-3.1-8B-Instruct-FP8-KV --tp 8 --quantization fp8
 ```
 ## Method 4: Using docker compose
````
`python/pyproject.toml`

```diff
@@ -4,7 +4,7 @@ build-backend = "setuptools.build_meta"
 [project]
 name = "sglang"
-version = "0.3.5.post2"
+version = "0.3.6"
 description = "SGLang is yet another fast serving framework for large language models and vision language models."
 readme = "README.md"
 requires-python = ">=3.8"
```
`python/sglang/version.py`

```diff
-__version__ = "0.3.5.post2"
+__version__ = "0.3.6"
```