Unverified Commit eac3cf1f authored by kYLe's avatar kYLe Committed by GitHub

feat: Add disaggregated serving hello world example (#683)

parent 50aa390b
# Deployment Examples
This directory contains a hello world example that implements a simplified disaggregated serving architecture used for deploying Large Language Models (LLMs). It omits the LLM inference code and focuses on how Dynamo handles routing, task queuing, and metadata communication between prefill and decode workers.
## Components
- frontend: A simple HTTP server that handles incoming requests
- processor: A pre/post-processing server that invokes the router
- router: Handles API requests and routes them to the appropriate worker based on the specified strategy
- worker: A dummy decode worker
- prefill-worker: A dummy prefill worker
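These components exchange two small pydantic models, `GeneralRequest` and `GeneralResponse` (defined in `components/utils.py`); a quick round-trip shows the JSON wire format the Frontend sends to the Processor:

```python
from pydantic import BaseModel

# Mirrors the models in components/utils.py
class GeneralRequest(BaseModel):
    prompt: str = "user input"
    request_id: str = "id_string"

class GeneralResponse(BaseModel):
    worker_output: str = "generated output"
    request_id: str = "id_string"

req = GeneralRequest(prompt="Tell me a joke", request_id="42")
payload = req.model_dump_json()  # what the Frontend forwards to the Processor
print(payload)  # {"prompt":"Tell me a joke","request_id":"42"}
```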
## Deployment Architectures
This figure shows an overview of the major components to deploy:
```
+----------------+
| prefill worker |---------------------------------------------+
+----------------+                                             | pull
                                                               v
+------+     +-----------+     +------------------+  push  +---------------+
| HTTP |---->| processor |---->| decode/monolith  |------->| prefill queue |
|      |<----|           |<----|      worker      |        |               |
+------+     +-----------+     +------------------+        +---------------+
                  |   ^
       query best |   | return
       worker     |   | worker_id
                  |   |      +------------------+
                  |   +------|      router      |
                  +--------->|                  |
                             +------------------+
```
## The Aggregated Deployment
In this example, we use two nodes to demonstrate the aggregated deployment.
- Node 1
  - Runs NATS and etcd services
  - Deploys Frontend, Processor and Router
  - Deploys DummyWorker as the monolith worker
- Node 2
  - Deploys DummyWorker as the monolith worker
### Prerequisites
On Node 1, start the required services (etcd and NATS) using [Docker Compose](../../../deploy/docker-compose.yml):
```bash
docker compose -f deploy/docker-compose.yml up -d
```
### Run the Deployment
1. Set environment variables for the NATS and etcd services (replace `Node_1_IP_ADDRESS` with the IP address of Node 1):
```bash
export NATS_SERVER="nats://Node_1_IP_ADDRESS:4222"
export ETCD_ENDPOINTS="http://Node_1_IP_ADDRESS:2379"
```
2. Launch the Frontend, Processor, and Router services:
```bash
cd dynamo/examples/hello_world/disagg_skeleton
dynamo serve components.graph:Frontend
```
3. Open a new terminal on Node 1 and deploy the Worker service:
```bash
export NATS_SERVER="nats://Node_1_IP_ADDRESS:4222"
export ETCD_ENDPOINTS="http://Node_1_IP_ADDRESS:2379"
cd dynamo/examples/hello_world/disagg_skeleton
dynamo serve components.worker:DummyWorker
```
4. Go to Node 2 and start the Worker service as in step 3.
Both workers should now show as ready in Node 1's terminal.
5. Query the Frontend with the following two prompts. The router assigns a different worker to each prompt, which you can observe in the responses.
```bash
curl -X 'POST' \
  'http://localhost:3000/generate' \
  -H 'accept: text/event-stream' \
  -H 'Content-Type: application/json' \
  -d '{
    "prompt": "Tell me a joke",
    "request_id": "id_number"
  }'

curl -X 'POST' \
  'http://localhost:3000/generate' \
  -H 'accept: text/event-stream' \
  -H 'Content-Type: application/json' \
  -d '{
    "prompt": "Which team won 2020 World Series",
    "request_id": "id_number"
  }'
```
- `Response: {"worker_output":"Tell me a joke_GeneratedBy_NODE1HOSTNAME","request_id":"id_number"}`
- `Response: {"worker_output":"Which team won 2020 World Series_GeneratedBy_NODE2HOSTNAME","request_id":"id_number"}`
6. Now modify the prompt. Prompts with a similar prefix are routed to the same worker because of the simple string-matching routing algorithm used in this demo. For example, the following query is routed to the worker that processed the "Tell me a joke" prompt.
```bash
curl -X 'POST' \
  'http://localhost:3000/generate' \
  -H 'accept: text/event-stream' \
  -H 'Content-Type: application/json' \
  -d '{
    "prompt": "Tell me a fact",
    "request_id": "id_number"
  }'
```
- `Response: {"worker_output":"Tell me a fact_GeneratedBy_NODE1HOSTNAME","request_id":"id_number"}`
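The routing decision can be reproduced with `difflib.SequenceMatcher`, the same string-similarity heuristic the Router uses as its estimated hit rate (see `components/kv_router.py`, threshold 0.6):

```python
from difflib import SequenceMatcher

def hit_rate(cached_prompt: str, new_prompt: str) -> float:
    """Estimate the prefix-cache hit rate by plain string similarity."""
    return SequenceMatcher(None, cached_prompt, new_prompt).ratio()

# "Tell me a fact" shares the "Tell me a " prefix with "Tell me a joke",
# so its similarity clears the 0.6 threshold and it is routed to the
# worker that cached the joke prompt.
joke_sim = hit_rate("Tell me a joke", "Tell me a fact")
series_sim = hit_rate("Which team won 2020 World Series", "Tell me a fact")
print(joke_sim > 0.6, joke_sim > series_sim)  # True True
```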
## The Disaggregated Deployment
In this example, we use three nodes to demonstrate the disaggregated deployment.
- Node 1
  - Runs NATS and etcd services
  - Deploys Frontend, Processor and Router
  - Deploys DummyWorker as the decode worker
- Node 2
  - Deploys DummyWorker as the decode worker
- Node 3
  - Deploys PrefillWorker as the prefill worker
### Run the Deployment
1. Repeat steps 1 to 4 of the aggregated deployment to deploy the Frontend, Processor, Router, and two Workers as decode workers.
2. Go to Node 3 and start the prefill worker:
```bash
export NATS_SERVER="nats://Node_1_IP_ADDRESS:4222"
export ETCD_ENDPOINTS="http://Node_1_IP_ADDRESS:2379"
cd dynamo/examples/hello_world/disagg_skeleton
dynamo serve components.prefill_worker:PrefillWorker
```
3. Query the Frontend. This time the decode workers push requests to the prefill queue, and the prefill worker pulls tasks from the queue to simulate the prefill step. The actual prefill computation is skipped in this demo.
```bash
curl -X 'POST' \
'http://localhost:3000/generate' \
-H 'accept: text/event-stream' \
-H 'Content-Type: application/json' \
-d '{
"prompt": "This is prefill disagg serving example",
"request_id":"12345"
}'
```
# SPDX-FileCopyrightText: Copyright (c) 2025 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
# SPDX-License-Identifier: Apache-2.0
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import logging
import signal
import sys
from components.processor import Processor
from components.utils import GeneralRequest
from dynamo.sdk import api, depends, service
from dynamo.sdk.lib.image import DYNAMO_IMAGE
logger = logging.getLogger(__name__)
@service(
workers=1,
image=DYNAMO_IMAGE,
)
class Frontend:
processor = depends(Processor)
def __init__(self):
signal.signal(signal.SIGTERM, self.handle_exit)
signal.signal(signal.SIGINT, self.handle_exit)
def handle_exit(self, signum, frame):
logger.debug(f"Received signal {signum}, shutting down...")
sys.exit(0)
@api
async def generate(self, prompt, request_id): # from request body keys
"""Stream results from the pipeline."""
logger.info(f"Received: {prompt=},{request_id=}")
frontend_request = GeneralRequest(
prompt=prompt, request_id=request_id
).model_dump_json()
async for response in self.processor.processor_generate(frontend_request):
yield f"Response: {response}\n"
from components.frontend import Frontend
from components.kv_router import Router
from components.processor import Processor
Frontend.link(Processor).link(Router)
import logging
from difflib import SequenceMatcher
from typing import AsyncIterator
from components.utils import check_required_workers
from components.worker import DummyWorker
from dynamo.sdk import async_on_start, depends, dynamo_context, dynamo_endpoint, service
WorkerId = str
logger = logging.getLogger(__name__)
@service(
dynamo={
"enabled": True,
"namespace": "dynamo-demo",
},
resources={"cpu": "10", "memory": "20Gi"},
workers=1,
)
class Router:
"""
Request handler for the generate endpoint
"""
kv_cache: dict[str, str] = {}
threshold = 0.6
worker = depends(DummyWorker)
def __init__(self):
self.min_workers = 2
@async_on_start
async def async_init(self):
print("in kv router async_init")
self.runtime = dynamo_context["runtime"]
self.workers_client = (
await self.runtime.namespace("dynamo-demo")
.component("DummyWorker")
.endpoint("worker_generate")
.client()
)
await check_required_workers(self.workers_client, self.min_workers, "kv router")
print("KV Router initialized")
def _cost_function(self, request_prompt):
worker_ids = self.workers_client.endpoint_ids()
num_workers = len(worker_ids)
max_hit_rate = -1.0
for curr_id in self.kv_cache.keys():
# Estimate hit rate by string matching
hit_rate = SequenceMatcher(
None, self.kv_cache[curr_id], request_prompt
).ratio()
if hit_rate > max_hit_rate:
max_hit_rate = hit_rate
max_id = curr_id
print(f"{max_hit_rate=},{len(self.kv_cache.keys())=}")
if max_hit_rate > self.threshold:
# Found the hit rate larger than the threshold
return max_id, max_hit_rate
elif len(self.kv_cache.keys()) == num_workers:
# Cache is already full, return the max rate
return max_id, max_hit_rate
else:
# Add current request into the cache
for curr_id in worker_ids:
if curr_id not in self.kv_cache.keys():
self.kv_cache[curr_id] = request_prompt
break
return curr_id, -1
# A dummy hit rate checking endpoint
# The actual worker selection is based on custom cost function
# See details at examples/llm/components/kv_router.py
@dynamo_endpoint()
async def check_hit_rate(self, request_prompt: str) -> AsyncIterator[WorkerId]:
max_id, max_hit_rate = self._cost_function(request_prompt)
yield f"{max_id}_{max_hit_rate}"
import asyncio
import logging
import os
import socket
import sys
from components.utils import NixlMetadataStore, PrefillQueue, RemotePrefillRequest
from vllm.distributed.device_communicators.nixl import NixlMetadata
from dynamo.sdk import async_on_start, dynamo_context, dynamo_endpoint, service
logger = logging.getLogger(__name__)
@service(
dynamo={
"enabled": True,
"namespace": "dynamo-demo",
},
resources={"gpu": 1, "cpu": "10", "memory": "20Gi"},
workers=1,
)
class PrefillWorker:
def __init__(self):
self._loaded_metadata = set()
self.initialized = False
self.hostname = socket.gethostname()
self.engine_id = self.hostname
@async_on_start
async def async_init(self):
runtime = dynamo_context["runtime"]
# create dummy meta data
metadata = NixlMetadata(
engine_id=self.engine_id,
agent_metadata=[],
kv_caches_base_addr=[[]],
num_blocks=0,
)
self._metadata_store = NixlMetadataStore("dynamo-nixl", runtime)
await self._metadata_store.put(metadata.engine_id, metadata)
task = asyncio.create_task(self.prefill_queue_handler())
def prefill_queue_handler_cb(fut):
try:
fut.result()
print("prefill queue handler exited successfully")
except Exception as e:
print(f"[ERROR] prefill queue handler failed: {e!r}")
sys.exit(1)
task.add_done_callback(prefill_queue_handler_cb)
print("PrefillWorker initialized")
async def prefill_queue_handler(self):
print("Prefill queue handler entered")
prefill_queue_nats_server = os.getenv("NATS_SERVER", "nats://localhost:4222")
prefill_queue_stream_name = "DummyLLM"
print(f"Prefill queue: {prefill_queue_nats_server}:{prefill_queue_stream_name}")
self.initialized = True
# TODO: integrate prefill_queue to a dynamo endpoint
async with PrefillQueue.get_instance(
nats_server=prefill_queue_nats_server,
stream_name=prefill_queue_stream_name,
) as prefill_queue:
print("prefill queue handler started")
while True:
# TODO: this might add a small overhead to pull prefill from nats
# need to test and check how much overhead it is
prefill_request = await prefill_queue.dequeue_prefill_request()
if prefill_request is not None:
print(f"Dequeued prefill request: {prefill_request.request_id}")
async for _ in self.prefill_generate(prefill_request):
pass
async def prefill_generate(self, request: RemotePrefillRequest):
# TODO check if metadata has changed
# and reload - currently only loading once
print(f"prefill invoked {request.engine_id}, {self._loaded_metadata=}")
if request.engine_id not in self._loaded_metadata:
remote_metadata = await self._metadata_store.get(request.engine_id)
# await self.engine_client.add_remote_nixl_metadata(remote_metadata)
print(f"Received nixl metadata from host {remote_metadata.engine_id}")
self._loaded_metadata.add(remote_metadata.engine_id)
print("Prefill invoked and will read KV cache from worker and write it back")
yield "prefill invoked"
@dynamo_endpoint()
async def mock(self, req: RemotePrefillRequest):
yield f"mock_response: {req}"
import logging
from typing import Protocol
from components.kv_router import Router
from components.utils import GeneralRequest, GeneralResponse, check_required_workers
from components.worker import DummyWorker
from dynamo._core import Client
from dynamo.sdk import async_on_start, depends, dynamo_context, dynamo_endpoint, service
from dynamo.sdk.lib.dependency import DynamoClient
logger = logging.getLogger(__name__)
@service(
dynamo={
"enabled": True,
"namespace": "dynamo-demo",
},
workers=1,
)
class Processor(Protocol):
"""
vLLM pre and post processing
"""
router: DynamoClient = depends(Router)
router_mode: str
min_workers: int
worker_client: Client
def __init__(self):
self.router_mode = "kv"
self.min_workers = 2
@async_on_start
async def async_init(self):
runtime = dynamo_context["runtime"]
comp_ns, comp_name = DummyWorker.dynamo_address() # type: ignore
self.worker_client = (
await runtime.namespace(comp_ns)
.component(comp_name)
.endpoint("worker_generate")
.client()
)
await check_required_workers(
self.worker_client, self.min_workers, tag="processor"
)
async def _generate(
self,
raw_request: GeneralRequest,
):
if self.router_mode == "kv":
async for route_response in self.router.check_hit_rate(raw_request.prompt):
worker_id, prefix_hit_rate = route_response.split("_")
prefix_hit_rate = float(prefix_hit_rate)
print(
f"Worker ID: {worker_id} with estimated prefix hit rate: {prefix_hit_rate}"
)
break
if worker_id == "":
engine_generator = await self.worker_client.random(
raw_request.model_dump_json()
)
else:
engine_generator = await self.worker_client.direct(
raw_request.model_dump_json(),
int(worker_id),
)
elif self.router_mode == "random":
engine_generator = await self.worker_client.random(
raw_request.model_dump_json()
)
elif self.router_mode == "round-robin":
engine_generator = await self.worker_client.round_robin(
raw_request.model_dump_json()
)
async for resp in engine_generator:
yield GeneralResponse.model_validate_json(resp.data())
@dynamo_endpoint()
async def processor_generate(self, raw_request: GeneralRequest):
async for response in self._generate(raw_request):
yield response.model_dump_json()
import asyncio
import logging
from contextlib import asynccontextmanager
from typing import ClassVar, Optional
import msgspec
from nats.aio.client import Client as NATS
from nats.errors import Error as NatsError
from nats.js.client import JetStreamContext
from nats.js.errors import NotFoundError
from pydantic import BaseModel
from vllm.distributed.device_communicators.nixl import NixlMetadata
from dynamo._core import Client
from dynamo.runtime import DistributedRuntime
logger = logging.getLogger(__name__)
class GeneralRequest(BaseModel):
prompt: str = "user input"
request_id: str = "id_string"
class GeneralResponse(BaseModel):
worker_output: str = "generated output"
request_id: str = "id_string"
class RemotePrefillRequest(msgspec.Struct, omit_defaults=True, dict=True):
engine_id: str = "Engine ID"
request_id: str = "id_string"
class NATSQueue:
_instance: ClassVar[Optional["NATSQueue"]] = None
_lock: ClassVar[asyncio.Lock] = asyncio.Lock()
def __init__(
self,
stream_name: str = "default",
nats_server: str = "nats://localhost:4222",
dequeue_timeout: float = 1,
):
self.nats_url = nats_server
self._nc: Optional[NATS] = None
self._js: Optional[JetStreamContext] = None
# TODO: check if this is needed
# Sanitize stream_name to remove path separators
self._stream_name = stream_name.replace("/", "_").replace("\\", "_")
self._subject = f"{self._stream_name}.*"
self.dequeue_timeout = dequeue_timeout
self._subscriber: Optional[JetStreamContext.PullSubscription] = None
@classmethod
@asynccontextmanager
async def get_instance(
cls,
*,
stream_name: str = "default",
nats_server: str = "nats://localhost:4222",
dequeue_timeout: float = 1,
):
"""Get or create a singleton instance of NATSQueue"""
# TODO: check if this _lock is needed with GIL
async with cls._lock:
if cls._instance is None:
cls._instance = cls(
stream_name=stream_name,
nats_server=nats_server,
dequeue_timeout=dequeue_timeout,
)
await cls._instance.connect()
try:
yield cls._instance
except Exception:
if cls._instance:
await cls._instance.close()
cls._instance = None
raise
# TODO: check to see if this can be replaced by something like get_instance().close()
@classmethod
async def shutdown(cls):
"""Explicitly close the singleton instance if it exists"""
async with cls._lock:
if cls._instance:
await cls._instance.close()
cls._instance = None
async def connect(self):
"""Establish connection and create stream if needed"""
try:
if self._nc is None:
self._nc = NATS()
await self._nc.connect(self.nats_url)
self._js = self._nc.jetstream()
# Check if stream exists, if not create it
try:
await self._js.stream_info(self._stream_name)
except NotFoundError:
await self._js.add_stream(
name=self._stream_name, subjects=[self._subject]
)
# Create persistent subscriber
self._subscriber = await self._js.pull_subscribe(
f"{self._stream_name}.queue", durable="worker-group"
)
except NatsError as e:
await self.close()
raise ConnectionError(f"Failed to connect to NATS: {e}")
async def ensure_connection(self):
"""Ensure we have an active connection"""
if self._nc is None or self._nc.is_closed:
await self.connect()
async def close(self):
"""Close the connection when done"""
if self._nc:
await self._nc.close()
self._nc = None
self._js = None
self._subscriber = None
# TODO: is enqueue/dequeue_object a better name for a general queue?
async def enqueue_task(self, task_data: bytes) -> None:
"""
Enqueue a task using msgspec-encoded data
"""
await self.ensure_connection()
try:
await self._js.publish(f"{self._stream_name}.queue", task_data) # type: ignore
except NatsError as e:
raise RuntimeError(f"Failed to enqueue task: {e}")
async def dequeue_task(self) -> Optional[bytes]:
"""Dequeue and return a task as raw bytes, to be decoded with msgspec"""
await self.ensure_connection()
try:
msgs = await self._subscriber.fetch(1, timeout=self.dequeue_timeout) # type: ignore
if msgs:
msg = msgs[0]
await msg.ack()
return msg.data
return None
except asyncio.TimeoutError:
return None
except NatsError as e:
raise RuntimeError(f"Failed to dequeue task: {e}")
async def get_queue_size(self) -> int:
"""Get the number of messages currently in the queue"""
await self.ensure_connection()
try:
# Get consumer info to get pending messages count
consumer_info = await self._js.consumer_info( # type: ignore
self._stream_name, "worker-group"
)
# Return number of pending messages (real-time queue size)
return consumer_info.num_pending
except NatsError as e:
raise RuntimeError(f"Failed to get queue size: {e}")
class PrefillQueue(NATSQueue):
"""
A wrapper of NATSQueue for PrefillRequest.
The stream name is forced to be "prefill_queue".
"""
def __init__(
self,
stream_name="prefill_queue",
nats_server: str = "nats://localhost:4222",
dequeue_timeout: float = 1,
):
super().__init__(
stream_name=stream_name,
nats_server=nats_server,
dequeue_timeout=dequeue_timeout,
)
async def enqueue_prefill_request(
self, prefill_request: RemotePrefillRequest
) -> None:
encoded_request = msgspec.json.encode(prefill_request)
await self.enqueue_task(encoded_request)
async def dequeue_prefill_request(self) -> Optional[RemotePrefillRequest]:
encoded_request = await self.dequeue_task()
if encoded_request is not None:
prefill_request = msgspec.json.decode(
encoded_request, type=RemotePrefillRequest
)
return prefill_request
else:
return None
class NixlMetadataStore:
NIXL_METADATA_KEY = "nixl_metadata"
def __init__(self, namespace: str, runtime: DistributedRuntime) -> None:
self._namespace = namespace
# TODO Remove metadata from etcd on delete
self._stored: set[str] = set()
self._cached: dict[str, NixlMetadata] = {}
self._client = runtime.etcd_client()
if self._client is None:
raise Exception("Cannot be used with static workers")
self._key_prefix = f"{self._namespace}/{NixlMetadataStore.NIXL_METADATA_KEY}"
async def put(self, engine_id, metadata: NixlMetadata):
serialized_metadata = msgspec.msgpack.encode(metadata)
key = "/".join([self._key_prefix, engine_id])
await self._client.kv_put(key, serialized_metadata, None)
self._stored.add(engine_id)
async def get(self, engine_id) -> NixlMetadata:
try:
if engine_id in self._cached:
return self._cached[engine_id]
key = "/".join([self._key_prefix, engine_id])
key_values = await self._client.kv_get_prefix(key)
deserialized_metadata = None
for item in key_values:
deserialized_metadata = msgspec.msgpack.decode(
item["value"], type=NixlMetadata
)
break
if deserialized_metadata is None:
raise Exception("metadata not found in etcd")
self._cached[engine_id] = deserialized_metadata
# TODO watch for changes and update cache
# self._client.add_watch_callback(
# key,
# self._watch_callback,
# )
except Exception as e:
raise Exception(f"Error retrieving metadata for engine {engine_id}") from e
return deserialized_metadata
async def check_required_workers(
workers_client: Client,
required_workers: int,
on_change=True,
poll_interval=5,
tag="",
):
"""Wait until the minimum number of workers are ready."""
worker_ids = workers_client.endpoint_ids()
num_workers = len(worker_ids)
new_count = -1 # Force to print "waiting for worker" once
while num_workers < required_workers:
if (not on_change) or new_count != num_workers:
num_workers = new_count if new_count >= 0 else num_workers
print(
f" {tag} Waiting for more workers to be ready.\n"
f" Current: {num_workers},"
f" Required: {required_workers}"
)
await asyncio.sleep(poll_interval)
worker_ids = workers_client.endpoint_ids()
new_count = len(worker_ids)
print(f"Workers ready: {worker_ids}")
return worker_ids
import logging
import os
import socket
from components.utils import (
GeneralRequest,
GeneralResponse,
NixlMetadataStore,
PrefillQueue,
RemotePrefillRequest,
)
from vllm.distributed.device_communicators.nixl import NixlMetadata
from dynamo.sdk import async_on_start, dynamo_context, dynamo_endpoint, service
logger = logging.getLogger(__name__)
@service(
dynamo={
"enabled": True,
"namespace": "dynamo-demo",
},
resources={"cpu": "10", "memory": "20Gi"},
workers=1,
)
class DummyWorker:
def __init__(self):
self.hostname = socket.gethostname()
self.do_remote_prefill = True
self.model_name = "DummyLLM"
self._prefill_queue_nats_server = os.getenv(
"NATS_SERVER", "nats://localhost:4222"
)
self._prefill_queue_stream_name = self.model_name
logger.info(
f"Prefill queue: {self._prefill_queue_nats_server}:{self._prefill_queue_stream_name}"
)
@async_on_start
async def async_init(self):
runtime = dynamo_context["runtime"]
if self.do_remote_prefill:
# Create dummy Nixl meta data
metadata = NixlMetadata(
engine_id=self.hostname,
agent_metadata=[],
kv_caches_base_addr=[[]],
num_blocks=0,
)
metadata_store = NixlMetadataStore("dynamo-nixl", runtime)
await metadata_store.put(metadata.engine_id, metadata)
self.disaggregated_router = "DummyDisaggregateRouter"
logger.info("DummyWorker has been initialized")
def get_remote_prefill_request_callback(self):
# TODO: integrate prefill_queue to dynamo endpoint
async def callback(request: RemotePrefillRequest):
print(
f"enqueue request {self._prefill_queue_nats_server}, \
{self._prefill_queue_stream_name},{request.engine_id=}"
)
async with PrefillQueue.get_instance(
nats_server=self._prefill_queue_nats_server,
stream_name=self._prefill_queue_stream_name,
) as prefill_queue:
await prefill_queue.enqueue_prefill_request(request)
return callback
@dynamo_endpoint()
async def worker_generate(self, request: GeneralRequest):
# TODO: consider prefix hit when deciding prefill locally or remotely
if self.disaggregated_router is not None:
# decision = (
# absolute_prefill_length > self.max_local_prefill_length
# and queue_size < self.max_prefill_queue_size )
# Disagg router decision is based on prefill length and queue size
# Always set to True in this demo (see details at disagg_router.py)
disagg_router_decision = True
else:
# always prefill remotely if no disaggregated router is provided
disagg_router_decision = True
if self.do_remote_prefill and disagg_router_decision:
## Mimic the process of enqueue request for prefill
prefill_request = RemotePrefillRequest(
engine_id=self.hostname, request_id=request.request_id
)
callback = self.get_remote_prefill_request_callback()
await callback(prefill_request)
print(f"{self.hostname}: Worker invoked")
yield GeneralResponse(
request_id=request.request_id,
worker_output=request.prompt + "_GeneratedBy_" + self.hostname,
).model_dump_json()