- 25 Feb, 2025 5 commits
-
-
Graham King authored
Add backend type `EngineConfig::StaticCore` that wraps the engine in a preprocessor (prompt templating and tokenization). Add example engine `echo_core` (`out=echo_core`), which takes and returns tokens. A nice side effect is that it echoes the full prompt template including the system prompt, whereas `echo_full` echoes only the user prompt.
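For illustration, a minimal Rust sketch of an echo-style core engine that takes tokens and returns them unchanged; the `TokenRequest`/`TokenResponse` types and the `echo_core` function shown here are hypothetical, not the repo's actual API, and the preprocessor (templating and tokenization) is assumed to have already run.
```
// Hypothetical sketch of an echo-style "core" engine that operates on tokens.
// The preprocessor (prompt templating + tokenization) is assumed to run first,
// so the engine only ever sees token ids and returns them unchanged.

/// Hypothetical request type: an already-templated, tokenized prompt.
struct TokenRequest {
    token_ids: Vec<u32>,
}

/// Hypothetical response type: the tokens produced by the engine.
struct TokenResponse {
    token_ids: Vec<u32>,
}

/// Echo engine: returns exactly the tokens it was given.
fn echo_core(request: TokenRequest) -> TokenResponse {
    TokenResponse {
        token_ids: request.token_ids,
    }
}

fn main() {
    let req = TokenRequest { token_ids: vec![1, 15043, 2] };
    let res = echo_core(req);
    println!("echoed {} tokens", res.token_ids.len());
}
```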
-
Ryan McCormick authored
Signed-off-by: Ryan McCormick <rmccormick@nvidia.com>
-
Ryan McCormick authored
-
Neelay Shah authored
Signed-off-by: Neelay Shah <neelays@nvidia.com>
Co-authored-by: Ryan McCormick <rmccormick@nvidia.com>
-
Neelay Shah authored
-
- 24 Feb, 2025 3 commits
-
-
Ryan Olson authored
What does the PR do?
- Adds an etcd method to atomically create or validate a KV entry.
- Adds integration tests to validate the behavior.
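This is not the repo's actual helper, but a sketch of the general atomic create-or-validate pattern using the `etcd-client` crate's transaction API: create the key if it does not exist yet, otherwise fetch the existing entry so the caller can compare it. The key and value below are made-up examples.
```
use etcd_client::{Client, Compare, CompareOp, Txn, TxnOp};

#[tokio::main]
async fn main() -> Result<(), etcd_client::Error> {
    let mut client = Client::connect(["localhost:2379"], None).await?;

    // Hypothetical key/value for illustration only.
    let key = "models/backend/instance-0";
    let value = "worker-a";

    // Atomic create-or-validate: if the key does not exist yet
    // (create_revision == 0), create it; otherwise read the existing
    // value so the caller can check whether it matches.
    let txn = Txn::new()
        .when(vec![Compare::create_revision(key, CompareOp::Equal, 0)])
        .and_then(vec![TxnOp::put(key, value, None)])
        .or_else(vec![TxnOp::get(key, None)]);

    let resp = client.txn(txn).await?;
    if resp.succeeded() {
        println!("created {key}");
    } else {
        println!("{key} already exists; validate its value against {value}");
    }
    Ok(())
}
```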
-
Biswa Panda authored
-
Meenakshi Sharma authored
Signed-off-by: Meenakshi Sharma <163925564+nvda-mesharma@users.noreply.github.com>
-
- 22 Feb, 2025 3 commits
-
-
Ryan Olson authored
- Minor update to DeadlineStream
- Adding tests
-
Ryan Olson authored
Enables `#[tokio::test]` via `Runtime::from_current()`. This uses the current handle as both the primary and the secondary.
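A minimal sketch of what such a constructor could look like, assuming the wrapper simply stores two Tokio handles; the `primary`/`secondary` names come from the message above, everything else here is hypothetical.
```
use tokio::runtime::Handle;

/// Hypothetical shape of the runtime wrapper: it holds a primary
/// and a secondary Tokio handle.
struct Runtime {
    primary: Handle,
    secondary: Handle,
}

impl Runtime {
    /// Build the wrapper from the runtime already driving the caller,
    /// using the same handle for both roles. Must be called from within
    /// a Tokio runtime (e.g. inside `#[tokio::test]`).
    fn from_current() -> Self {
        let handle = Handle::current();
        Runtime {
            primary: handle.clone(),
            secondary: handle,
        }
    }
}

#[tokio::main]
async fn main() {
    let rt = Runtime::from_current();
    // Both handles refer to the runtime that is already running,
    // so spawned work lands on that same runtime.
    let task = rt.primary.spawn(async { 40 + 2 });
    assert_eq!(task.await.unwrap(), 42);
    let _ = &rt.secondary;
}
```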
-
Alec authored
Co-authored-by: hongkuanz <hongkuanz@nvidia.com>
-
- 21 Feb, 2025 6 commits
-
-
Graham King authored
Add support in tio for distributed components and discovery.

Node 1:
```
tio in=http out=tdr://ns/backend/mistralrs
```

Node 2:
```
tio in=tdr://ns/backend/mistralrs out=mistralrs ~/llm_models/Llama-3.2-3B-Instruct
```

This will use etcd to auto-discover the model and NATS to talk to it. You can run multiple workers on the same endpoint and it will pick one at random each time. The parts of `ns/backend/mistralrs` are purely symbolic; pick anything, as long as the path has three parts and matches the other node.
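A small, hypothetical illustration of the three-part path convention described above (namespace/component/endpoint); the `Endpoint` struct, field names, and `parse_endpoint` function are made up for this sketch.
```
/// Hypothetical three-part endpoint path: "namespace/component/endpoint",
/// e.g. "ns/backend/mistralrs".
struct Endpoint {
    namespace: String,
    component: String,
    name: String,
}

fn parse_endpoint(path: &str) -> Option<Endpoint> {
    let parts: Vec<&str> = path.split('/').collect();
    match parts.as_slice() {
        // Accept exactly three non-empty segments.
        [ns, comp, name] if !ns.is_empty() && !comp.is_empty() && !name.is_empty() => {
            Some(Endpoint {
                namespace: ns.to_string(),
                component: comp.to_string(),
                name: name.to_string(),
            })
        }
        _ => None,
    }
}

fn main() {
    // Both nodes must agree on the same three-part path.
    let ep = parse_endpoint("ns/backend/mistralrs").expect("three parts");
    println!("{} / {} / {}", ep.namespace, ep.component, ep.name);
    assert!(parse_endpoint("only/two").is_none());
}
```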
-
Ryan Olson authored
Signed-off-by: Ryan Olson <ryanolson@users.noreply.github.com>
Co-authored-by: Ryan McCormick <rmccormick@nvidia.com>
-
Ryan McCormick authored
-
Alec authored
Co-authored-by: Sean Choi <choishsean@gmail.com>
Co-authored-by: aflowers <aflowers@nvidia.com>
-
Meenakshi Sharma authored
Signed-off-by: Meenakshi Sharma <163925564+nvda-mesharma@users.noreply.github.com>
Co-authored-by: Anant Sharma <anants@nvidia.com>
-
Piotr Marcinkiewicz authored
-
- 20 Feb, 2025 7 commits
-
-
Anant Sharma authored
-
Graham King authored
Co-authored-by: Ryan McCormick <rmccormick@nvidia.com>
-
Graham King authored
You can now run an HF repo directly:
```
tio ~/llm_models/Llama-3.2-1B-Instruct/
```
or a GGUF:
```
tio ~/llm_models/Llama-3.2-1B-Instruct-Q4_K_M.gguf
```
Also clean up kv_router so I can merge.
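A hypothetical sketch of how such a dispatch could look: treat a directory as an HF repo and a `.gguf` file as a GGUF model. The `ModelSource` enum and `classify` function are illustrative only, not tio's actual code.
```
use std::path::Path;

/// Hypothetical classification of a model path.
#[derive(Debug)]
enum ModelSource {
    HfRepo,
    Gguf,
    Unknown,
}

fn classify(path: &Path) -> ModelSource {
    if path.is_dir() {
        // A directory is treated as a Hugging Face repo checkout.
        ModelSource::HfRepo
    } else if path
        .extension()
        .map_or(false, |ext| ext.eq_ignore_ascii_case("gguf"))
    {
        // A .gguf file is treated as a single-file GGUF model.
        ModelSource::Gguf
    } else {
        ModelSource::Unknown
    }
}

fn main() {
    println!("{:?}", classify(Path::new("Llama-3.2-1B-Instruct-Q4_K_M.gguf")));
}
```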
-
Biswa Panda authored
-
Biswa Panda authored
Co-authored-by: Biswa Ranjan Panda <biswaranjanp@nvidia.com>
-
ptarasiewiczNV authored
Signed-off-by: Piotr Marcinkiewicz <piotrm@nvidia.com>
Co-authored-by: Piotr Marcinkiewicz <piotrm@nvidia.com>
Co-authored-by: Ryan McCormick <rmccormick@nvidia.com>
-
Biswa Panda authored
-
- 19 Feb, 2025 1 commit
-
-
Thomas Montfort authored
-
- 18 Feb, 2025 8 commits
-
-
ptarasiewiczNV authored
Co-authored-by: Ryan Olson <rolson@nvidia.com>
Co-authored-by: Ryan McCormick <rmccormick@nvidia.com>
-
Ryan Olson authored
-
aflowers authored
-
Ryan Olson authored
Co-authored-by: Ryan McCormick <rmccormick@nvidia.com>
-
Graham King authored
-
Ryan McCormick authored
-
GuanLuo authored
Signed-off-by: Neelay Shah <neelays@nvidia.com>
Co-authored-by: aflowers <aflowers@nvidia.com>
Co-authored-by: Ryan McCormick <rmccormick@nvidia.com>
Co-authored-by: hongkuanz <hongkuanz@nvidia.com>
Co-authored-by: Neelay Shah <neelays@nvidia.com>
-
ptarasiewiczNV authored
-
- 17 Feb, 2025 1 commit
-
-
ptarasiewiczNV authored
-
- 15 Feb, 2025 1 commit
-
-
Ryan Olson authored
-
- 14 Feb, 2025 5 commits
-
-
Neelay Shah authored
-
Graham King authored
Upgrade mistralrs to latest.
-
Graham King authored
This allows us to run a real model.

Build:
```
cargo build --release --features mistralrs,cuda
```

Run:
```
./target/release/tio in=text out=mistralrs --model-path Llama-3.2-1B-Instruct-Q4_K_M.gguf
```

Why [mistral.rs](https://github.com/EricLBuehler/mistral.rs)?
- It has no dependencies. You don't need a container or a virtual env to get started.
- It supports CUDA, Metal (macOS) and CPU-only. Everyone can join the AI revolution.
- It starts fast and serves fast (with CUDA). That makes it fun to experiment with.
- It runs many models, not just Mistral; that's just its name.
-
Blazej authored
Signed-off-by: Piotr Marcinkiewicz <piotrm@nvidia.com>
Co-authored-by: Piotr Marcinkiewicz <piotrm@nvidia.com>
Co-authored-by: Neelay Shah <neelays@nvidia.com>
-
Ryan McCormick authored
-