Unverified commit 3cd28092, authored by HAI, committed by GitHub

[Docs, ROCm] update install to cover ROCm with MI GPUs (#1915)

parent 704f8e8e
@@ -7,7 +7,7 @@ You can install SGLang using any of the methods below.
pip install --upgrade pip
pip install "sglang[all]"

# Install FlashInfer accelerated kernels (CUDA only for now)
pip install flashinfer -i https://flashinfer.ai/whl/cu121/torch2.4/
```
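A quick post-install check can confirm the packages are importable. This is a minimal sketch, not part of the official docs; the package names are assumed from the pip commands above, and `flashinfer` will report missing on non-CUDA systems:

```python
# Hypothetical post-install check: look up each package without importing it.
import importlib.util

for pkg in ("sglang", "flashinfer"):
    # find_spec returns None when the package is not installed
    status = "found" if importlib.util.find_spec(pkg) else "missing"
    print(f"{pkg}: {status}")
```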
@@ -22,7 +22,7 @@ cd sglang
pip install --upgrade pip
pip install -e "python[all]"

# Install FlashInfer accelerated kernels (CUDA only for now)
pip install flashinfer -i https://flashinfer.ai/whl/cu121/torch2.4/
```
@@ -42,6 +42,25 @@ docker run --gpus all \
python3 -m sglang.launch_server --model-path meta-llama/Llama-3.1-8B-Instruct --host 0.0.0.0 --port 30000
```
Note: For AMD ROCm systems with Instinct/MI GPUs, it is recommended to build images with `docker/Dockerfile.rocm`. Example build and usage:
```bash
docker build --build-arg SGL_BRANCH=v0.3.5 -t v0.3.5-rocm620 -f Dockerfile.rocm .
alias drun='docker run -it --rm --network=host --device=/dev/kfd --device=/dev/dri --ipc=host \
--shm-size 16G --group-add video --cap-add=SYS_PTRACE --security-opt seccomp=unconfined \
-v $HOME/dockerx:/dockerx -v /data:/data'
drun -p 30000:30000 \
-v ~/.cache/huggingface:/root/.cache/huggingface \
--env "HF_TOKEN=<secret>" \
v0.3.5-rocm620 \
python3 -m sglang.launch_server --model-path meta-llama/Llama-3.1-8B-Instruct --host 0.0.0.0 --port 30000
# Until the FlashInfer backend is available on ROCm, --attention-backend triton and --sampling-backend pytorch are set by default
drun v0.3.5-rocm620 python3 -m sglang.bench_latency --batch-size 32 --input 1024 --output 128 --model amd/Meta-Llama-3.1-8B-Instruct-FP8-KV --tp 8 --quantization fp8
```
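Once the server is up, it can be exercised with an OpenAI-compatible chat request. The sketch below only builds and prints the request body; the host, port, and model name are taken from the launch command above, and actually sending the request requires the server to be running:

```python
import json

# Hypothetical request payload for the server launched above
# (meta-llama/Llama-3.1-8B-Instruct on localhost:30000).
payload = {
    "model": "meta-llama/Llama-3.1-8B-Instruct",
    "messages": [{"role": "user", "content": "Hello"}],
    "max_tokens": 16,
}
print(json.dumps(payload))

# To send it (server must be running), e.g.:
#   curl http://localhost:30000/v1/chat/completions \
#        -H "Content-Type: application/json" \
#        -d '<the JSON printed above>'
```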
## Method 4: Using docker compose
<details>
......