OpenDAS / dynamo, commit 9c9f0086
[fix] add kv routing to vllm dockerfile
Authored Feb 18, 2025 by aflowers; committed by Alec on Feb 18, 2025
Parent: d0d35a9e
Showing 1 changed file with 16 additions and 2 deletions.

container/Dockerfile.vllm (+16, -2)
@@ -78,14 +78,26 @@ COPY runtime /workspace/runtime
 RUN cd runtime/rust && \
     cargo build --release --locked && cargo doc --no-deps
 
+# Generate C bindings for kv cache routing in vLLM
+COPY llm /workspace/llm
+RUN cd llm/rust/ && \
+    cargo build --release --locked && cargo doc --no-deps
+
 # Build triton_distributed_rs wheel
-RUN cd runtime/rust/python-wheel && \
+COPY python-wheel /workspace/python-wheel
+RUN cd python-wheel && \
     uv build && \
     uv pip install dist/triton_distributed_rs*cp312*.whl
 
+# Package the bindings
+RUN mkdir -p /opt/triton/llm_binding/wheels && mkdir /opt/triton/llm_binding/lib
+RUN cp python-wheel/dist/triton_distributed_rs*cp312*.whl /opt/triton/llm_binding/wheels/.
+RUN cp llm/rust/target/release/libtriton_llm_capi.so /opt/triton/llm_binding/lib/.
+RUN cp -r llm/rust/libtriton-llm/include /opt/triton/llm_binding/.
+
 # Install patched vllm
 ARG VLLM_REF="v0.7.2"
-ARG VLLM_PATCH="vllm_${VLLM_REF}.patch"
+ARG VLLM_PATCH="vllm_${VLLM_REF}-triton-kv-disagg-patch.patch"
 RUN --mount=type=bind,source=./container/deps/,target=/tmp/deps \
     bash /tmp/deps/vllm/install.sh --patch /tmp/deps/vllm/${VLLM_PATCH} --ref ${VLLM_REF} --install-cmd "uv pip install --editable" --use-precompiled --installation-dir /opt/vllm
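The patch filename is now derived from the `VLLM_REF` build argument rather than being a fixed string. A minimal sketch of the resulting Dockerfile `ARG` expansion, mirrored in Python (the ref and filename pattern are taken directly from the diff):

```python
# Mirror the Dockerfile's ARG expansion:
#   ARG VLLM_PATCH="vllm_${VLLM_REF}-triton-kv-disagg-patch.patch"
vllm_ref = "v0.7.2"
vllm_patch = f"vllm_{vllm_ref}-triton-kv-disagg-patch.patch"
print(vllm_patch)  # vllm_v0.7.2-triton-kv-disagg-patch.patch
```

Bumping `VLLM_REF` at build time (`--build-arg VLLM_REF=...`) therefore selects a matching patch file under `container/deps/vllm/` automatically.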
@@ -106,6 +118,8 @@ COPY . /workspace
 # Environment setup
 ENV PYTHONPATH="${PYTHONPATH}:/workspace/examples/python:/opt/tritonserver/python/openai/openai_frontend"
 ENV RAPIDS_LIBUCX_PREFER_SYSTEM_LIBRARY=true
 
+# Tell vllm to use the Triton LLM C API for KV Cache Routing
+ENV VLLM_KV_CAPI_PATH="/opt/triton/llm_binding/lib/libtriton_llm_capi.so"
 CMD []
 ENTRYPOINT ["/opt/nvidia/nvidia_entrypoint.sh"]
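The `VLLM_KV_CAPI_PATH` variable points the patched vLLM at the shared library packaged earlier in the build. The actual loading code lives in the vLLM patch, not in this diff; the sketch below only illustrates how an env-driven load of that `.so` could look (the `load_kv_capi` helper is hypothetical, not the patch's real API):

```python
import ctypes
import os

def load_kv_capi():
    """Load the Triton LLM C API shared library if configured.

    Hypothetical sketch; the real loading logic lives in the vLLM patch.
    Returns a ctypes handle, or None when VLLM_KV_CAPI_PATH is unset or
    the file does not exist.
    """
    path = os.environ.get("VLLM_KV_CAPI_PATH")
    if not path or not os.path.exists(path):
        return None
    return ctypes.CDLL(path)

handle = load_kv_capi()
```

Setting the path via `ENV` in the image means the routing library is picked up without any per-container configuration.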