"docs/source/en/using-diffusers/audio.mdx" did not exist on "48d0123f0f4415b1bb78f5a538df8b0b9975c6d4"
Unverified commit f407fcf9, authored by Lianmin Zheng and committed by GitHub

Release v0.3.5.post1 (#2022)

parent 54479d6f
# Usage (to build SGLang ROCm docker image):
-# docker build --build-arg SGL_BRANCH=v0.3.5 -t testImage -f Dockerfile.rocm .
+# docker build --build-arg SGL_BRANCH=v0.3.5.post1 -t testImage -f Dockerfile.rocm .
# default base image
ARG BASE_IMAGE="rocm/vllm-dev:20241022"
......
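For reference, a hedged sketch of building the ROCm image from the release branch while passing the declared `BASE_IMAGE` build argument explicitly; the `sglang-rocm:v0.3.5.post1` tag is illustrative, and the base-image value simply repeats the default shown above.

```bash
# Build the ROCm image from the v0.3.5.post1 branch, spelling out the
# BASE_IMAGE build argument (shown here with its declared default).
docker build \
  --build-arg SGL_BRANCH=v0.3.5.post1 \
  --build-arg BASE_IMAGE=rocm/vllm-dev:20241022 \
  -t sglang-rocm:v0.3.5.post1 \
  -f Dockerfile.rocm .
```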
@@ -16,7 +16,7 @@ Note: Please check the [FlashInfer installation doc](https://docs.flashinfer.ai/
## Method 2: From source
```
# Use the last release branch
-git clone -b v0.3.5 https://github.com/sgl-project/sglang.git
+git clone -b v0.3.5.post1 https://github.com/sgl-project/sglang.git
cd sglang
pip install --upgrade pip
@@ -46,7 +46,7 @@ docker run --gpus all \
Note: For AMD ROCm systems with Instinct/MI GPUs, it is recommended to build images with `docker/Dockerfile.rocm`; example usage below:
```bash
-docker build --build-arg SGL_BRANCH=v0.3.5 -t v0.3.5-rocm620 -f Dockerfile.rocm .
+docker build --build-arg SGL_BRANCH=v0.3.5.post1 -t v0.3.5.post1-rocm620 -f Dockerfile.rocm .
alias drun='docker run -it --rm --network=host --device=/dev/kfd --device=/dev/dri --ipc=host \
--shm-size 16G --group-add video --cap-add=SYS_PTRACE --security-opt seccomp=unconfined \
@@ -55,11 +55,11 @@ alias drun='docker run -it --rm --network=host --device=/dev/kfd --device=/dev/d
drun -p 30000:30000 \
-v ~/.cache/huggingface:/root/.cache/huggingface \
--env "HF_TOKEN=<secret>" \
-v0.3.5-rocm620 \
+v0.3.5.post1-rocm620 \
python3 -m sglang.launch_server --model-path meta-llama/Llama-3.1-8B-Instruct --host 0.0.0.0 --port 30000
# Until the flashinfer backend is available, --attention-backend triton and --sampling-backend pytorch are set by default (an explicit-flag sketch follows this block)
-drun v0.3.5-rocm620 python3 -m sglang.bench_latency --batch-size 32 --input 1024 --output 128 --model amd/Meta-Llama-3.1-8B-Instruct-FP8-KV --tp 8 --quantization fp8
+drun v0.3.5.post1-rocm620 python3 -m sglang.bench_latency --batch-size 32 --input 1024 --output 128 --model amd/Meta-Llama-3.1-8B-Instruct-FP8-KV --tp 8 --quantization fp8
```
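Following up on the backend comment inside the block above, here is a hedged sketch that passes the same flags explicitly; it reuses the `drun` alias and image tag from the snippet and is illustrative rather than required, since the ROCm image already defaults to these backends.

```bash
# Launch the server with the Triton attention backend and PyTorch sampling
# backend spelled out explicitly.
drun -p 30000:30000 \
  -v ~/.cache/huggingface:/root/.cache/huggingface \
  v0.3.5.post1-rocm620 \
  python3 -m sglang.launch_server --model-path meta-llama/Llama-3.1-8B-Instruct \
    --attention-backend triton --sampling-backend pytorch \
    --host 0.0.0.0 --port 30000
```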
## Method 4: Using docker compose
......
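Once the server from the snippet above is listening on port 30000, a quick smoke test can be sent against it; this sketch assumes the OpenAI-compatible chat completions endpoint exposed by `sglang.launch_server`.

```bash
# Send a single chat request to the locally launched server.
curl http://localhost:30000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
        "model": "meta-llama/Llama-3.1-8B-Instruct",
        "messages": [{"role": "user", "content": "Say hello in one sentence."}]
      }'
```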
@@ -4,7 +4,7 @@ build-backend = "setuptools.build_meta"
[project]
name = "sglang"
version = "0.3.5"
version = "0.3.5.post1"
description = "SGLang is yet another fast serving framework for large language models and vision language models."
readme = "README.md"
requires-python = ">=3.8"
......
__version__ = "0.3.5"
__version__ = "0.3.5.post1"
@@ -40,7 +40,7 @@ class TestPyTorchSamplingBackend(unittest.TestCase):
)
metrics = run_eval(args)
assert metrics["score"] >= 0.65
self.assertGreaterEqual(metrics["score"], 0.65)
def test_greedy(self):
@@ -62,7 +62,7 @@ class TestPyTorchSamplingBackend(unittest.TestCase):
if first_text is None:
first_text = text
-assert text == first_text, f'"{text}" is not identical to "{first_text}"'
+self.assertEqual(text, first_text)
first_text = None
@@ -82,7 +82,7 @@ class TestPyTorchSamplingBackend(unittest.TestCase):
text = response_batch[i]["text"]
if first_text is None:
first_text = text
-assert text == first_text, f'"{text}" is not identical to "{first_text}"'
+self.assertEqual(text, first_text)
if __name__ == "__main__":
......
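The test changes above replace bare `assert` statements with unittest's `assertGreaterEqual`/`assertEqual`, which report both operands on failure. A hedged sketch of running the updated tests follows; the file path is an assumption inferred from the `TestPyTorchSamplingBackend` class name and may differ in the repository layout.

```bash
# Run the updated sampling-backend tests directly (path is an assumption).
python3 test/srt/test_pytorch_sampling_backend.py
```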