- 06 May, 2025 1 commit
-
-
Graham King authored
Adding this to a Python script makes it register on the network so that `dynamo-run` can discover it and send it requests:

```
from dynamo.llm import register_llm

MODEL = "Qwen/Qwen2.5-0.5B-Instruct"
await register_llm(endpoint, MODEL, 3)
```

Full vllm example, with pre-processing in dynamo:
- `dynamo-run in=text out=dyn://dynamo.backend.generate`
- `cd lib/bindings/python/examples/hello_world`
- `python server_vllm.py`

This builds on top of the work to move the pre-processor to the ingress side. It means we can decouple Rust and Python, using NATS as the bus.

The `register_llm` call does this:
- Download the model from HF if necessary
- Load the model deployment card from the HF folder, or extract it from the GGUF
- Push the tokenizer config etc. into the NATS object store so ingress can access it from a different machine
- Publish the model deployment card to ETCD
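Not part of the commit: a rough sketch of the registration flow it describes, using plain in-memory dicts as stand-ins for the NATS object store and ETCD. The names `register_llm_sketch`, `object_store`, and `kv_store` are illustrative, not the real dynamo API.

```python
import json

# In-memory stand-ins for the NATS object store and the ETCD key-value store.
object_store: dict[str, bytes] = {}
kv_store: dict[str, str] = {}

def register_llm_sketch(endpoint: str, model: str, card_files: dict[str, bytes]) -> dict:
    """Illustrative only: mirrors the steps the commit message lists."""
    # 1. (Downloading from HF is skipped; card_files stands in for the files
    #    found in the HF folder or extracted from a GGUF.)
    # 2. Push the tokenizer config etc. into the object store so the ingress
    #    side can fetch it from a different machine.
    urls = {}
    for name, data in card_files.items():
        key = f"{model}/{name}"
        object_store[key] = data
        urls[name] = f"nats://{key}"
    # 3. Publish the model deployment card, carrying those URLs, to the
    #    key-value store so any ingress can discover it.
    card = {"endpoint": endpoint, "model": model, "files": urls}
    kv_store[f"models/{model}"] = json.dumps(card)
    return card

card = register_llm_sketch(
    "dynamo.backend.generate",
    "Qwen/Qwen2.5-0.5B-Instruct",
    {"tokenizer.json": b"{}", "config.json": b"{}"},
)
print(card["files"]["tokenizer.json"])
```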
-
- 29 Apr, 2025 1 commit
-
-
Abrar Shivani authored
Adds support for specifying default request parameters through a JSON template file that is applied across all inference requests. This enables consistent parameter settings while still allowing per-request overrides.

Changes:
- Add `--request-template` CLI flag to specify the template file path
- Integrate template support in HTTP, batch and text input modes
- Template values can be overridden by individual request parameters

Example `template.json`:
```
{
  "model": "Qwen2.5-3B-Instruct",
  "temperature": 0.7,
  "max_completion_tokens": 4096
}
```
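Not part of the commit: a minimal sketch of the override rule it describes, i.e. template values apply to every request, but any field set on an individual request wins. `apply_template` is a hypothetical name, not the real implementation (which is in Rust).

```python
import json

# Template defaults, as they would be loaded from template.json.
template = json.loads("""
{
  "model": "Qwen2.5-3B-Instruct",
  "temperature": 0.7,
  "max_completion_tokens": 4096
}
""")

def apply_template(template: dict, request: dict) -> dict:
    # Later keys win in a dict merge, so request fields override the template.
    return {**template, **request}

merged = apply_template(template, {"temperature": 0.2})
print(merged)
```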
-
- 25 Apr, 2025 1 commit
-
-
Graham King authored
This will allow an ingress-side pre-processor to see it without needing a model checkout.

Currently pre-processing is done in the worker, which has local access to the model deployment card ("MDC") files: `config.json`, `tokenizer.json` and `tokenizer_config.json`. We want to move the pre-processor to the ingress side to support KV routing. That requires the ingress side (i.e. the HTTP server), which may be on a different machine than the worker, to be able to see those three files.

To support that, this PR makes the worker upload the contents of those files to the NATS object store, and publish the MDC, with those NATS URLs, to the key-value store. The key-value store is behind an interface so that any store (NATS, etcd, Redis, etc.) can be supported; implementations for memory and NATS are provided.

Fetching the MDC from the store, doing pre-processing ingress side, and publishing a card backed by a GGUF are all left for a later commit.

Part of #743
-
- 04 Apr, 2025 1 commit
-
-
Yan Ru Pei authored
-
- 24 Mar, 2025 1 commit
-
-
Graham King authored
This lets us do:
```
dynamo-run out=llamacpp <gguf_file>
```
Previously a `--model-config <hf-repo>` was also required, to configure our tokenizer.
-
- 14 Mar, 2025 1 commit
-
-
Ryan Olson authored
-
- 09 Mar, 2025 1 commit
-
-
Hongkuan Zhou authored
Signed-off-by: Hongkuan Zhou <tedzhouhk@gmail.com>
Co-authored-by: hongkuan <hongkuanz@nvidia.com>
Co-authored-by: Piotr Tarasiewicz <ptarasiewicz@nvidia.com>
Co-authored-by: Piotr Tarasiewicz Nvidia <ptarasiewicznv@Piotrs-MacBook-Pro.local>
Co-authored-by: alec-flowers <aflowers@nvidia.com>
Co-authored-by: Neelay Shah <neelays@nvidia.com>
-
- 08 Mar, 2025 1 commit
-
-
Neelay Shah authored
Co-authored-by: Biswa Panda <biswa.panda@gmail.com>
-
- 05 Mar, 2025 1 commit
-
-
Neelay Shah authored
Co-authored-by: Graham King <grahamk@nvidia.com>
-
- 25 Feb, 2025 2 commits
-
-
Alec authored
Co-authored-by: aflowers <aflowers@nvidia.com>
-
Neelay Shah authored
Signed-off-by: Neelay Shah <neelays@nvidia.com>
Co-authored-by: Ryan McCormick <rmccormick@nvidia.com>
-
- 24 Feb, 2025 1 commit
-
-
Biswa Panda authored
-
- 21 Feb, 2025 1 commit
-
-
Graham King authored
Add support in tio for distributed components and discovery.

Node 1:
```
tio in=http out=tdr://ns/backend/mistralrs
```
Node 2:
```
tio in=tdr://ns/backend/mistralrs out=mistralrs ~/llm_models/Llama-3.2-3B-Instruct
```

This uses etcd to auto-discover the model and NATS to talk to it. You can run multiple workers on the same endpoint, and it will pick one at random each time. The `ns/backend/mistralrs` path is purely symbolic; pick anything, as long as it has three parts and matches on both nodes.
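Not part of the commit: a toy sketch of the selection behaviour described above, where a registry (etcd in the real system) maps a three-part endpoint path to its registered workers, and each request picks one at random. All names here are illustrative.

```python
import random

# Stand-in for etcd: endpoint path -> workers currently registered on it.
registry: dict[str, list[str]] = {
    "ns/backend/mistralrs": ["worker-a:9001", "worker-b:9002"],
}

def pick_worker(endpoint: str) -> str:
    """Choose one of the workers registered on the endpoint, at random."""
    workers = registry[endpoint]
    if not workers:
        raise RuntimeError(f"no workers registered on {endpoint}")
    return random.choice(workers)

print(pick_worker("ns/backend/mistralrs"))
```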
-
- 20 Feb, 2025 1 commit
-
-
Biswa Panda authored
-
- 18 Feb, 2025 1 commit
-
-
GuanLuo authored
Signed-off-by: Neelay Shah <neelays@nvidia.com>
Co-authored-by: aflowers <aflowers@nvidia.com>
Co-authored-by: Ryan McCormick <rmccormick@nvidia.com>
Co-authored-by: hongkuanz <hongkuanz@nvidia.com>
Co-authored-by: Neelay Shah <neelays@nvidia.com>
-
- 14 Feb, 2025 1 commit
-
-
Graham King authored
This allows us to run a real model.

Build:
```
cargo build --release --features mistralrs,cuda
```
Run:
```
./target/release/tio in=text out=mistralrs --model-path Llama-3.2-1B-Instruct-Q4_K_M.gguf
```

Why [mistral.rs](https://github.com/EricLBuehler/mistral.rs)?
- It has no dependencies. You don't need a container or a virtual env to get started.
- It supports CUDA, Metal (macOS) and CPU-only. Everyone can join the AI revolution.
- It starts fast and serves fast (with CUDA). That makes it fun to experiment with.
- It runs many models, not just Mistral; that's just its name.
-
- 10 Feb, 2025 1 commit
-
-
Ryan Olson authored
Signed-off-by: Ryan Olson <ryanolson@users.noreply.github.com>
Co-authored-by: Ryan McCormick <rmccormick@nvidia.com>
Co-authored-by: Neelay Shah <neelays@nvidia.com>
-
- 05 Feb, 2025 1 commit
-
-
J Wyman authored
-
- 04 Feb, 2025 1 commit
-
-
Ryan Olson authored
the journey begins
-