# LLM Deployment Examples

This directory contains examples and reference implementations for deploying Large Language Models (LLMs) in various configurations.

## Components

- workers: Prefill and decode workers that handle the actual LLM inference
- router: Handles API requests and routes them to appropriate workers based on the specified strategy
- frontend: OpenAI-compatible HTTP server that handles incoming requests

## Deployment Architectures

### Aggregated

Single-instance deployment where both prefill and decode are done by the same worker.

### Disaggregated

Distributed deployment where prefill and decode are done by separate workers that can scale independently.

```mermaid
sequenceDiagram
    participant D as VllmWorker
    participant Q as PrefillQueue
    participant P as PrefillWorker

    Note over D: Request is routed to decode
    D->>D: Decide if prefill should be done locally or remotely
    D->>D: Allocate KV blocks
    D->>Q: Put RemotePrefillRequest on the queue
    P->>Q: Pull request from the queue
    P-->>D: Read cached KVs from Decode
    D->>D: Decode other requests
    P->>P: Run prefill
    P-->>D: Write prefilled KVs into allocated blocks
    P->>D: Send completion notification
    Note over D: Notification received when prefill is done
    D->>D: Schedule decoding
```

## Getting Started

1. Choose a deployment architecture based on your requirements
2. Configure the components as needed
3. Deploy using the provided scripts

### Prerequisites

Start the required services (etcd and NATS) using [Docker Compose](../../deploy/docker-compose.yml):

```bash
docker compose -f deploy/docker-compose.yml up -d
```

### Build docker

```bash
./container/build.sh
```

### Run container

```bash
./container/run.sh -it
```

## Run Deployment

This figure shows an overview of the major components to deploy:

```
                                                 +----------------+
                                          +------| prefill worker |-------+
                                   notify |      |                |       |
                                 finished |      +----------------+       | pull
                                          v                               v
+------+      +-----------+      +------------------+    push     +---------------+
| HTTP |----->| processor |----->| decode/monolith  |------------>| prefill queue |
|      |<-----|           |<-----|      worker      |             |               |
+------+      +-----------+      +------------------+             +---------------+
                  |   ^                   |
       query best |   | return            | publish kv events
           worker |   | worker_id         |
                  v   |                   |
              +------------------+        |
              |     kv-router    |<-------+
              +------------------+
```

### Example architectures

_Note_: For a non-dockerized deployment, first export `DYNAMO_HOME` to point to the dynamo repository root, e.g. `export DYNAMO_HOME=$(pwd)`.

#### Aggregated serving

```bash
cd $DYNAMO_HOME/examples/llm
dynamo serve graphs.agg:Frontend -f ./configs/agg.yaml
```

#### Aggregated serving with KV Routing

```bash
cd $DYNAMO_HOME/examples/llm
dynamo serve graphs.agg_router:Frontend -f ./configs/agg_router.yaml
```

#### Disaggregated serving

```bash
cd $DYNAMO_HOME/examples/llm
dynamo serve graphs.disagg:Frontend -f ./configs/disagg.yaml
```

#### Disaggregated serving with KV Routing

```bash
cd $DYNAMO_HOME/examples/llm
dynamo serve graphs.disagg_router:Frontend -f ./configs/disagg_router.yaml
```
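Before sending requests, you can check that the frontend has come up. A minimal readiness probe sketch, assuming the frontend listens on port 8000 (as in the client example below) and exposes the standard OpenAI `/v1/models` route:

```bash
# Poll the OpenAI-compatible frontend until it starts answering.
until curl -sf localhost:8000/v1/models > /dev/null; do
  echo "waiting for frontend on :8000..."
  sleep 2
done
echo "frontend is ready"
```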
### Client

In another terminal:

```bash
# this test request has an input sequence length (ISL) of around 200 tokens
curl localhost:8000/v1/chat/completions   -H "Content-Type: application/json"   -d '{
    "model": "deepseek-ai/DeepSeek-R1-Distill-Llama-8B",
    "messages": [
    {
        "role": "user",
        "content": "In the heart of Eldoria, an ancient land of boundless magic and mysterious creatures, lies the long-forgotten city of Aeloria. Once a beacon of knowledge and power, Aeloria was buried beneath the shifting sands of time, lost to the world for centuries. You are an intrepid explorer, known for your unparalleled curiosity and courage, who has stumbled upon an ancient map hinting at its location. Legends say that Aeloria holds a secret so profound that it has the potential to reshape the very fabric of reality. Your journey will take you through treacherous deserts, enchanted forests, and across perilous mountain ranges. Your Task: Character Background: Develop a detailed background for your character. Describe their motivations for seeking out Aeloria, their skills and weaknesses, and any personal connections to the ancient city or its legends. Are they driven by a quest for knowledge, a search for lost family, or something else? Explain where the first clue is hidden."
    }
    ],
    "stream": false,
    "max_tokens": 30
  }'
```

### Multi-node deployment

See [multinode-examples.md](multinode-examples.md) for more details.

### Close deployment

> [!IMPORTANT]
> We are aware of an issue where vLLM subprocesses might not be killed when `ctrl-c` is pressed.
> We are working on addressing this. Relevant vLLM issues can be found [here](https://github.com/vllm-project/vllm/pull/8492) and [here](https://github.com/vllm-project/vllm/issues/6219#issuecomment-2439257824).

To stop the deployment, press `ctrl-c`, which will kill the different components. To clean up any remaining vLLM subprocesses, run `nvidia-smi` to find their PIDs and `kill -9` them, or run `pkill python3` from inside the container.
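The manual cleanup above can also be scripted. A rough sketch of the same steps, assuming it runs inside the container where every process still holding the GPU belongs to this deployment (anywhere else, review the PID list before killing anything):

```bash
# List PIDs of processes still holding GPU memory.
nvidia-smi --query-compute-apps=pid --format=csv,noheader

# Force-kill each leftover process by PID...
for pid in $(nvidia-smi --query-compute-apps=pid --format=csv,noheader); do
  kill -9 "$pid"
done

# ...or simply kill all python3 processes in the container.
pkill python3
```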