Commit 441846de authored by hhzhang16's avatar hhzhang16 Committed by GitHub
parent 3136b716
...@@ -82,4 +82,6 @@ __pycache__/
Chart.lock
generated-values.yaml
.build/
**/.devcontainer/.env
TensorRT-LLM
...@@ -68,7 +68,9 @@ rust-base:
protobuf-compiler \
cmake \
libssl-dev \
pkg-config \
libclang-dev \
git
ENV RUSTUP_HOME=/usr/local/rustup
ENV CARGO_HOME=/usr/local/cargo
...@@ -83,25 +85,45 @@ rust-base:
rm rustup-init && \
chmod -R a+w $RUSTUP_HOME $CARGO_HOME
dynamo-builder:
FROM +rust-base
WORKDIR /workspace
COPY . /workspace/
ENV CARGO_TARGET_DIR=/workspace/target
RUN cargo build --release --locked --features mistralrs,sglang,vllm,python && \
strip target/release/dynamo-run && \
strip target/release/http && \
strip target/release/llmctl && \
strip target/release/metrics && \
strip target/release/mock_worker
SAVE ARTIFACT target/release/dynamo-run /dynamo-run
SAVE ARTIFACT target/release/http /http
SAVE ARTIFACT target/release/llmctl /llmctl
SAVE ARTIFACT target/release/metrics /metrics
SAVE ARTIFACT target/release/mock_worker /mock_worker
dynamo-base-docker:
ARG IMAGE=dynamo-base-docker
ARG CI_REGISTRY_IMAGE=my-registry
ARG CI_COMMIT_SHA=latest
FROM +dynamo-base
WORKDIR /workspace
COPY . /workspace/
# Copy built binaries from builder target
COPY +dynamo-builder/dynamo-run /usr/local/bin/dynamo-run
COPY +dynamo-builder/http /usr/local/bin/http
COPY +dynamo-builder/llmctl /usr/local/bin/llmctl
COPY +dynamo-builder/metrics /usr/local/bin/metrics
COPY +dynamo-builder/mock_worker /usr/local/bin/mock_worker
COPY +dynamo-builder/dynamo-run /workspace/target/release/dynamo-run
COPY +dynamo-builder/http /workspace/target/release/http
COPY +dynamo-builder/llmctl /workspace/target/release/llmctl
COPY +dynamo-builder/metrics /workspace/target/release/metrics
COPY +dynamo-builder/mock_worker /workspace/target/release/mock_worker
RUN uv build --wheel --out-dir /workspace/dist && \
uv pip install /workspace/dist/ai_dynamo*any.whl
......
...@@ -47,18 +47,26 @@ source venv/bin/activate
pip install ai-dynamo[all]
```
### Building the Dynamo Base Image
Although not needed for local development, deploying your Dynamo pipelines to Kubernetes will require you to build and push a Dynamo base image to your container registry. You can use any container registry of your choice, such as:
- Docker Hub (docker.io)
- NVIDIA NGC Container Registry (nvcr.io)
- Any private registry
Here's how to build it:
```bash
export CI_REGISTRY_IMAGE=<your-registry>
export CI_COMMIT_SHA=<your-tag>
earthly --push +dynamo-base-docker --CI_REGISTRY_IMAGE=$CI_REGISTRY_IMAGE --CI_COMMIT_SHA=$CI_COMMIT_SHA
```
After building, you can use this image by setting the `DYNAMO_IMAGE` environment variable to point to your built image:
```bash
export DYNAMO_IMAGE=<your-registry>/dynamo-base-docker:<your-tag>
```
### Running and Interacting with an LLM Locally
...@@ -97,7 +105,6 @@ First start the Dynamo Distributed Runtime services:
```bash
docker compose -f deploy/docker-compose.yml up -d
```
#### Start Dynamo LLM Serving Components
Next serve a minimal configuration with an http server, basic
...@@ -143,6 +150,20 @@ cp /workspace/target/release/dynamo-run /workspace/deploy/dynamo/sdk/src/dynamo/
uv pip install -e .
```
#### Devcontainer Environment
For a consistent development environment, you can use the provided devcontainer configuration. This requires:
- [Docker](https://www.docker.com/products/docker-desktop)
- [VS Code](https://code.visualstudio.com/) with the [Dev Containers extension](https://marketplace.visualstudio.com/items?itemName=ms-vscode-remote.remote-containers)
To use the devcontainer:
1. Open the project in VS Code
2. Click on the button in the bottom-left corner
3. Select "Reopen in Container"
This will build and start a container with all the necessary dependencies for Dynamo development.
#### Conda Environment
Alternatively, you can use a conda environment.
......
...@@ -9,11 +9,48 @@ This is a proof of concept for a Helm chart to deploy services defined in a bent
- make sure the dynamo cli is installed
- make sure you have a docker image registry to which you can push images and from which your k8s cluster can pull them
- set the imagePullSecrets in the values.yaml file
- navigate to the pipeline deployment directory by running:
```bash
cd deploy/Kubernetes/pipeline
```
- build and push the DYNAMO_IMAGE as described in the [main README](../../README.md#building-the-dynamo_image-base-image) to an image registry
- make sure the `nats` and `etcd` dependencies are installed (under the `dependencies` subdirectory). For more details, see [Installing Required Dependencies](../../../docs/guides/dynamo_deploy.md#installing-required-dependencies)
### Setting up Image Pull Secrets
Before deploying, you need to ensure your Kubernetes namespace has the appropriate image pull secret configured. The Helm chart uses `docker-imagepullsecret` by default.
You can create this secret in your namespace using:
```bash
kubectl create secret docker-registry docker-imagepullsecret \
--docker-server=<registry-server> \
--docker-username=<username> \
--docker-password=<password> \
-n <namespace>
```
Alternatively, you can modify the `imagePullSecrets` section in `deploy/Kubernetes/pipeline/chart/values.yaml` to match your registry credentials.
### Install the Helm chart
```bash
export DYNAMO_IMAGE=<dynamo_docker_image_name>
./deploy.sh <docker_registry> <k8s_namespace> <path_to_dynamo_directory> <dynamo_identifier> [<dynamo_config_file>]
# example: export DYNAMO_IMAGE=nvcr.io/nvidian/nim-llm-dev/dynamo-base-worker:0.0.1
# example: ./deploy.sh nvcr.io/nvidian/nim-llm-dev my-namespace ../../../examples/hello_world/ hello_world:Frontend
# example: ./deploy.sh nvcr.io/nvidian/nim-llm-dev my-namespace ../../../examples/llm graphs.disagg_router:Frontend ../../../examples/llm/configs/disagg_router.yaml
```
### Test the deployment
```bash
# Forward the service port to localhost
kubectl -n <k8s_namespace> port-forward svc/hello-world-frontend 3000:80
# In another terminal window, test the API endpoint
curl -X 'POST' 'http://localhost:3000/generate' \
-H 'accept: text/event-stream' \
-H 'Content-Type: application/json' \
-d '{"text": "test"}'
```
\ No newline at end of file
...@@ -40,15 +40,31 @@ spec:
- name: {{ $.Release.Name }}-{{ .name | lower }}
image: {{ $.Values.image }}
args:
{{ if $.Values.configFilePath }}
- cd src && uv run dynamo serve --service-name {{ .name }} {{ $.Values.dynamoIdentifier }} -f {{ $.Values.configFilePath }}
{{ else }}
- cd src && uv run dynamo serve --service-name {{ .name }} {{ $.Values.dynamoIdentifier }}
{{ end }}
command:
- sh
- -c
resources:
requests:
cpu: "{{ .config.resources.cpu }}"
{{ if .config.resources.memory }}
memory: "{{ .config.resources.memory }}"
{{ end }}
{{ if .config.resources.gpu }}
nvidia.com/gpu: "{{ .config.resources.gpu }}"
{{ end }}
limits:
cpu: "{{ .config.resources.cpu }}"
{{ if .config.resources.memory }}
memory: "{{ .config.resources.memory }}"
{{ end }}
{{ if .config.resources.gpu }}
nvidia.com/gpu: "{{ .config.resources.gpu }}"
{{ end }}
env:
- name: TRAFFIC_TIMEOUT
value: "{{ .config.traffic.timeout }}"
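For concreteness: with the example values used elsewhere in this change (`dynamoIdentifier=graphs.disagg_router:Frontend` and `configFilePath=../../../examples/llm/configs/disagg_router.yaml`), the `configFilePath` conditional above would render a container spec roughly like the following (hypothetical rendered output for a service named `Frontend`):

```yaml
command:
  - sh
  - -c
args:
  - cd src && uv run dynamo serve --service-name Frontend graphs.disagg_router:Frontend -f ../../../examples/llm/configs/disagg_router.yaml
```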
......
...@@ -16,6 +16,7 @@
imagePullSecrets:
- name: yatai-regcred
- name: nvcrimagepullsecret
- name: docker-imagepullsecret
- name: gitlab-imagepull
natsAddr: nats://dynamo-platform-nats:4222
......
...@@ -21,8 +21,9 @@ set -euo pipefail
# Validate input parameters
if [ "$#" -lt 4 ] || [ "$#" -gt 5 ]; then
echo "Usage: $0 <DOCKER_REGISTRY> <NAMESPACE> <DYNAMO_DIRECTORY> <DYNAMO_IDENTIFIER> [<DYNAMO_CONFIG_FILE>]"
echo "Note: DYNAMO_CONFIG_FILE is optional"
exit 1
fi
...@@ -30,6 +31,7 @@ DOCKER_REGISTRY=$1
NAMESPACE=$2
DYNAMO_DIRECTORY=$3
DYNAMO_IDENTIFIER=$4
DYNAMO_CONFIG_FILE=${5:-}
# Check if any of the inputs are empty
if [[ -z "$DOCKER_REGISTRY" || -z "$NAMESPACE" || -z "$DYNAMO_IDENTIFIER" || -z "$DYNAMO_DIRECTORY" ]]; then
...@@ -85,4 +87,4 @@ cd -
# Install the Helm chart with the correct tag (SHA)
echo "Installing Helm chart with image: $docker_tag_for_registry"
HELM_RELEASE="${DYNAMO_MODULE//_/\-}"
helm upgrade -i "$HELM_RELEASE" ./chart -f ~/bentoml/bentos/"$DYNAMO_NAME"/"$docker_sha"/bento.yaml --set image="$docker_tag_for_registry" --set dynamoIdentifier="$DYNAMO_IDENTIFIER" --set configFilePath="$DYNAMO_CONFIG_FILE" -n "$NAMESPACE"
\ No newline at end of file
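Two shell idioms in `deploy.sh` above are worth noting. Under `set -u`, reading `$5` when only four arguments were passed aborts the script, so the `${5:-}` default form is the safe way to accept an optional argument; and the Helm release name is derived by replacing underscores with hyphens via `${var//_/-}`. A standalone sketch (sample values are hypothetical):

```shell
set -euo pipefail

# Optional fifth positional argument: default to empty rather than
# tripping `set -u` when it is not supplied.
demo_args() {
  local config_file="${5:-}"
  echo "config=<${config_file}>"
}
demo_args reg ns dir id              # config=<>
demo_args reg ns dir id cfg.yaml     # config=<cfg.yaml>

# Helm release names may not contain underscores; replace them all.
DYNAMO_MODULE="hello_world"
HELM_RELEASE="${DYNAMO_MODULE//_/-}"
echo "$HELM_RELEASE"                 # hello-world
```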
<!--
SPDX-FileCopyrightText: Copyright (c) 2025 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
SPDX-License-Identifier: Apache-2.0
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
-->
# Deploying Dynamo inference graphs to Kubernetes
## Deployment Paths in Dynamo
Dynamo provides two distinct deployment paths, each serving different purposes:
1. **Dynamo Cloud Platform** (`deploy/dynamo/helm/`)
- Contains the infrastructure components required for the Dynamo cloud platform
- Used when deploying with the `dynamo deploy` CLI commands
- Provides a managed deployment experience
- This README focuses on setting up this platform infrastructure
- For Dynamo cloud installation instructions, see [Installing Dynamo Cloud](./helm/README.md), which walks through installing and configuring the Dynamo cloud components on your Kubernetes cluster.
2. **Manual Deployment with Helm Charts** (`deploy/Kubernetes/`)
- Used for manually deploying inference graphs to Kubernetes
- Contains Helm charts and configurations for deploying individual inference pipelines
- Documentation:
- [Deploying Dynamo Inference Graphs to Kubernetes using Helm](../Kubernetes/pipeline/README.md)
- [Dynamo Deploy Guide](../../docs/guides/dynamo_deploy.md)
Choose the appropriate deployment path based on your needs:
- Use `deploy/Kubernetes/` if you want to manually manage your inference graph deployments
- Use `deploy/dynamo/helm/` if you want to use the Dynamo cloud platform and CLI tools
## Hello World example
See [examples/hello_world/README.md#deploying-to-kubernetes-using-dynamo-cloud-and-dynamo-deploy-cli](../../examples/hello_world/README.md#deploying-to-kubernetes-using-dynamo-cloud-and-dynamo-deploy-cli)
\ No newline at end of file
# Deploy Dynamo Cloud
## Building Docker images for Dynamo Cloud components
You can build and push Docker images for the Dynamo cloud components (API server, API store, and operator) to any container registry of your choice. Here's how to build each component:
### Prerequisites
- [Earthly](https://earthly.dev/) installed
- Docker installed and running
- Access to a container registry of your choice
### Building and Pushing Images
First, set the required environment variables:
```bash
export CI_REGISTRY_IMAGE=<CONTAINER_REGISTRY>/<ORGANIZATION>
export CI_COMMIT_SHA=<TAG>
```
Where:
- `<CONTAINER_REGISTRY>/<ORGANIZATION>`: Your container registry and organization name (e.g., `nvcr.io/myorg`, `docker.io/myorg`, etc.)
- `<TAG>`: The tag you want to use for the image (e.g., `latest`, `0.0.1`, etc.)
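For instance, with a hypothetical NGC organization `myorg` and tag `0.0.1`, the two variables compose into image references like this (the component name `dynamo-operator` is illustrative; the actual image names come from the component Earthfiles):

```shell
export CI_REGISTRY_IMAGE=nvcr.io/myorg   # hypothetical registry/org
export CI_COMMIT_SHA=0.0.1               # hypothetical tag
IMG="${CI_REGISTRY_IMAGE}/dynamo-operator:${CI_COMMIT_SHA}"
echo "$IMG"   # nvcr.io/myorg/dynamo-operator:0.0.1
```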
Note: Make sure you're logged in to your container registry before pushing images. For example:
```bash
docker login <CONTAINER_REGISTRY>
```
You can build each component individually or build all components at once:
#### Option 1: Build All Components at Once
```bash
earthly --push +all-docker --CI_REGISTRY_IMAGE=$CI_REGISTRY_IMAGE --CI_COMMIT_SHA=$CI_COMMIT_SHA
```
#### Option 2: Build Components Individually
1. **API Store**
```bash
cd deploy/dynamo/api-store
earthly --push +docker --CI_REGISTRY_IMAGE=$CI_REGISTRY_IMAGE --CI_COMMIT_SHA=$CI_COMMIT_SHA
```
2. **Operator**
```bash
cd deploy/dynamo/operator
earthly --push +docker --CI_REGISTRY_IMAGE=$CI_REGISTRY_IMAGE --CI_COMMIT_SHA=$CI_COMMIT_SHA
```
## Deploy Dynamo Cloud Platform
Pre-requisite: make sure your terminal's working directory is `deploy/dynamo/helm/`.
......
...@@ -60,6 +60,7 @@ def create_bentoml_cli() -> click.Command:
# Add top-level CLI commands
bentoml_cli.add_command(cloud_command)
bentoml_cli.add_single_command(bento_command, "build")
bentoml_cli.add_single_command(bento_command, "get")
bentoml_cli.add_subcommands(serve_command)
bentoml_cli.add_subcommands(run_command)
# bentoml_cli.add_command(deploy_command)
......
<!--
SPDX-FileCopyrightText: Copyright (c) 2025 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
SPDX-License-Identifier: Apache-2.0
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
-->
# Deploying Dynamo Inference Graphs to Kubernetes using Helm
This guide will walk you through the process of deploying an inference graph created using the Dynamo SDK onto a Kubernetes cluster. Note that this is currently an experimental feature.
...@@ -6,7 +23,7 @@ This guide will walk you through the process of deploying an inference graph cre
![Dynamo Deploy](../images/dynamo-deploy.png)
While this guide covers deployment of Dynamo inference graphs using Helm, the preferred method to deploy an inference graph is via the Dynamo cloud platform. The Dynamo cloud platform, documented in [deploy/dynamo/README.md](../../deploy/dynamo/README.md), simplifies the deployment and management of Dynamo inference graphs. It includes a set of components (Operator, Kubernetes Custom Resources, etc.) that work together to streamline the deployment and management process.
Once an inference graph is defined using the Dynamo SDK, it can be deployed onto a Kubernetes cluster using a simple `dynamo deploy` command that orchestrates the following deployment steps:
...@@ -67,27 +84,27 @@ Follow these steps to set up the namespace and install required components:
```bash
export NAMESPACE=dynamo-playground
export RELEASE_NAME=dynamo-platform
export PROJECT_ROOT=$(pwd)
```
2. Install NATS messaging system:
```bash
# Navigate to dependencies directory
cd $PROJECT_ROOT/deploy/Kubernetes/pipeline/dependencies
# Add and update NATS Helm repository
helm repo add nats https://nats-io.github.io/k8s/helm/charts/
helm repo update
# Install NATS with custom values
helm install --namespace ${NAMESPACE} ${RELEASE_NAME}-nats nats/nats \
--create-namespace \
--values nats-values.yaml
```
3. Install etcd key-value store:
```bash
# Install etcd using Bitnami chart
helm install --namespace ${NAMESPACE} ${RELEASE_NAME}-etcd \
oci://registry-1.docker.io/bitnamicharts/etcd \
--values etcd-values.yaml
```
...@@ -99,9 +116,13 @@ After completing these steps, your cluster will have the necessary messaging and
Follow these steps to containerize and deploy your inference pipeline:
1. Build and containerize the pipeline:
> [!NOTE]
> For instructions on building the Dynamo base image, see the [Building the Dynamo Base Image](../../README.md#building-the-dynamo-base-image) section in the main README.
```bash
# Navigate to example directory
cd $PROJECT_ROOT/examples/hello_world
# Set runtime image name
export DYNAMO_IMAGE=<dynamo_runtime_image_name>
...@@ -121,8 +142,11 @@ docker push <TAG>
3. Deploy using Helm:
```bash
# Navigate to the deployment directory
cd $PROJECT_ROOT/deploy/Kubernetes/pipeline
# Set release name for Helm
export HELM_RELEASE=hello-world-manual
# Generate Helm values file from Frontend service # Generate Helm values file from Frontend service
dynamo get frontend > pipeline-values.yaml
...@@ -138,7 +162,7 @@ helm upgrade -i "$HELM_RELEASE" ./chart \
4. Test the deployment:
```bash
# Forward the service port to localhost
kubectl -n ${NAMESPACE} port-forward svc/${HELM_RELEASE}-frontend 3000:80
# Test the API endpoint
curl -X 'POST' 'http://localhost:3000/generate' \
......
...@@ -64,7 +64,7 @@ Users/Clients (HTTP)
- Processes requests from the Middle service
- Appends "-back" to the text and yields tokens
## Running the Example Locally
1. Launch all three services using a single command:
...@@ -87,6 +87,107 @@ curl -X 'POST' \
}'
```
## Deploying to and Running the Example in Kubernetes
There are two ways to deploy the hello world example:
1. Manually using helm charts
2. Using the Dynamo cloud Kubernetes platform and the Dynamo deploy CLI.
#### Deploying with Helm charts
The instructions for deploying the hello world example using helm charts can be found at [Deploying Dynamo Inference Graphs to Kubernetes using Helm](../../docs/guides/dynamo_deploy.md). The guide covers:
1. Setting up a local Kubernetes cluster with MicroK8s
2. Installing required dependencies like NATS and etcd
3. Building and containerizing the pipeline
4. Deploying using Helm charts
5. Testing the deployment
#### Deploying with the Dynamo cloud platform
This example can be deployed to a Kubernetes cluster using Dynamo cloud and the Dynamo deploy CLI.
##### Prerequisites
Before deploying, ensure you have:
- Dynamo CLI installed
- Ubuntu 24.04 as the base image
- Required dependencies:
- Helm package manager
- Dynamo SDK and CLI tools
- Rust packages and toolchain
You must have first followed the instructions in [deploy/dynamo/helm/README.md](../../deploy/dynamo/helm/README.md) to create your Dynamo cloud deployment.
##### Understanding the Build and Deployment Process
The deployment process involves two distinct build steps:
1. **Local `dynamo build`**: This step creates a Dynamo service archive that contains:
- Your service code and dependencies
- Service configuration and metadata
- Runtime requirements
- The service graph definition
2. **Remote Image Build**: When you create a deployment, a `yatai-dynamonim-image-builder` pod is created in your cluster. This pod:
- Takes the Dynamo service archive created in step 1
- Containerizes it using the specified base image
- Pushes the final container image to your cluster's registry
##### Deployment Steps
1. **Login to Dynamo Server**
```bash
export PROJECT_ROOT=$(pwd)
export KUBE_NS=hello-world # Must match your Kubernetes namespace
export DYNAMO_SERVER=https://${KUBE_NS}.dev.aire.nvidia.com
dynamo server login --api-token TEST-TOKEN --endpoint $DYNAMO_SERVER
```
2. **Build the Dynamo Image**
> [!NOTE]
> For instructions on building the Dynamo base image, see the [Building the Dynamo Base Image](../../README.md#building-the-dynamo-base-image) section in the main README.
```bash
# Set runtime image name
export DYNAMO_IMAGE=<dynamo_docker_image_name>
# Prepare your project for deployment.
cd $PROJECT_ROOT/examples/hello_world
DYNAMO_TAG=$(dynamo build hello_world:Frontend | grep "Successfully built" | awk -F"\"" '{ print $2 }')
```
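The `DYNAMO_TAG=$(...)` line above extracts the tag from the build output by splitting on double quotes with `awk -F'"'`; field 2 is whatever sits inside the first pair of quotes. A standalone sketch (the sample output line is hypothetical — the real format comes from `dynamo build`):

```shell
# Simulated build-output line (hypothetical format)
line='Successfully built Bento(tag="hello_world:abc123").'
# Split on `"`: field 1 is everything before the first quote,
# field 2 is the quoted tag itself.
DYNAMO_TAG=$(echo "$line" | grep "Successfully built" | awk -F'"' '{ print $2 }')
echo "$DYNAMO_TAG"   # hello_world:abc123
```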
3. **Deploy to Kubernetes**
```bash
echo $DYNAMO_TAG
export HELM_RELEASE=ci-hw
dynamo deployment create $DYNAMO_TAG --no-wait -n $HELM_RELEASE
```
To delete an existing Dynamo deployment:
```bash
kubectl delete dynamodeployment $HELM_RELEASE
```
4. **Test the deployment**
Once you create the Dynamo deployment, a pod prefixed with `yatai-dynamonim-image-builder` will begin running; when it finishes, it creates the necessary pods. Once the pods prefixed with `$HELM_RELEASE` are up and running, you can test out your example!
```bash
# Forward the service port to localhost
kubectl -n ${KUBE_NS} port-forward svc/${HELM_RELEASE}-frontend 3000:3000
# Test the API endpoint
curl -X 'POST' 'http://localhost:3000/generate' \
-H 'accept: text/event-stream' \
-H 'Content-Type: application/json' \
-d '{"text": "test"}'
```
## Expected Output
When you send the request with "test" as input, the response will show how the text flows through each service:
......