change / sglang · Commits · 3cd28092

Unverified commit 3cd28092, authored Nov 04, 2024 by HAI, committed via GitHub on Nov 04, 2024

[Docs, ROCm] update install to cover ROCm with MI GPUs (#1915)
Parent: 704f8e8e

Showing 1 changed file with 21 additions and 2 deletions:

docs/start/install.md (+21, −2)
````diff
@@ -7,7 +7,7 @@ You can install SGLang using any of the methods below.
 pip install --upgrade pip
 pip install "sglang[all]"
 
-# Install FlashInfer accelerated kernels
+# Install FlashInfer accelerated kernels (CUDA only for now)
 pip install flashinfer -i https://flashinfer.ai/whl/cu121/torch2.4/
 ```
````
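The FlashInfer wheel index in the command above encodes the CUDA and PyTorch versions in its path (`cu121`, `torch2.4`). As a minimal sketch of that pattern, the hypothetical helper below assembles the index URL; only the `cu121`/`torch2.4` combination appears in this commit, so other version pairs are an assumption and may not exist on the server.

```python
def flashinfer_index_url(cuda: str = "cu121", torch: str = "torch2.4") -> str:
    # Hypothetical helper: mirrors the wheel-index URL pattern shown in the
    # install command (https://flashinfer.ai/whl/cu121/torch2.4/).
    return f"https://flashinfer.ai/whl/{cuda}/{torch}/"

print(flashinfer_index_url())
```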
````diff
@@ -22,7 +22,7 @@ cd sglang
 pip install --upgrade pip
 pip install -e "python[all]"
 
-# Install FlashInfer accelerated kernels
+# Install FlashInfer accelerated kernels (CUDA only for now)
 pip install flashinfer -i https://flashinfer.ai/whl/cu121/torch2.4/
 ```
````
````diff
@@ -42,6 +42,25 @@ docker run --gpus all \
     python3 -m sglang.launch_server --model-path meta-llama/Llama-3.1-8B-Instruct --host 0.0.0.0 --port 30000
 ```
+
+Note: For AMD ROCm systems with Instinct/MI GPUs, it is recommended to use `docker/Dockerfile.rocm` to build images; example build and usage below:
+
+```bash
+docker build --build-arg SGL_BRANCH=v0.3.5 -t v0.3.5-rocm620 -f Dockerfile.rocm .
+
+alias drun='docker run -it --rm --network=host --device=/dev/kfd --device=/dev/dri --ipc=host \
+    --shm-size 16G --group-add video --cap-add=SYS_PTRACE --security-opt seccomp=unconfined \
+    -v $HOME/dockerx:/dockerx -v /data:/data'
+
+drun -p 30000:30000 \
+    -v ~/.cache/huggingface:/root/.cache/huggingface \
+    --env "HF_TOKEN=<secret>" \
+    v0.3.5-rocm620 \
+    python3 -m sglang.launch_server --model-path meta-llama/Llama-3.1-8B-Instruct --host 0.0.0.0 --port 30000
+
+# Until the FlashInfer backend is available, --attention-backend triton and --sampling-backend pytorch are set by default
+drun v0.3.5-rocm620 python3 -m sglang.bench_latency --batch-size 32 --input 1024 --output 128 --model amd/Meta-Llama-3.1-8B-Instruct-FP8-KV --tp 8 --quantization fp8
+```
 
 ## Method 4: Using docker compose
 
 <details>
````
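Once `sglang.launch_server` from the commands above is running on port 30000, it can be queried over HTTP. As a minimal sketch, the snippet below assembles a request body in the OpenAI chat-completions shape, assuming the server exposes an OpenAI-compatible `/v1/chat/completions` endpoint; `build_chat_request` is a hypothetical helper, not part of SGLang.

```python
import json

def build_chat_request(model: str, prompt: str, max_tokens: int = 64) -> dict:
    # Hypothetical helper: assemble an OpenAI-style chat-completions body
    # to POST to http://0.0.0.0:30000/v1/chat/completions once the server is up.
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": max_tokens,
    }

payload = build_chat_request("meta-llama/Llama-3.1-8B-Instruct", "Hello!")
print(json.dumps(payload))
```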