- 25 Feb, 2025 2 commits
GuanLuo authored
Signed-off-by: Meenakshi Sharma <163925564+nvda-mesharma@users.noreply.github.com>
Signed-off-by: Neelay Shah <neelays@nvidia.com>
Co-authored-by: Neelay Shah <neelays@nvidia.com>
Co-authored-by: Ryan Olson <ryanolson@users.noreply.github.com>
Co-authored-by: Meenakshi Sharma <163925564+nvda-mesharma@users.noreply.github.com>
Co-authored-by: Biswa Panda <biswapanda@users.noreply.github.com>
Co-authored-by: Ryan McCormick <rmccormick@nvidia.com>
-
Neelay Shah authored
Signed-off-by: Neelay Shah <neelays@nvidia.com>
Co-authored-by: Ryan McCormick <rmccormick@nvidia.com>
-
- 20 Feb, 2025 1 commit
Graham King authored
You can now run an HF repo directly:

```
tio ~/llm_models/Llama-3.2-1B-Instruct/
```

or a GGUF file:

```
tio ~/llm_models/Llama-3.2-1B-Instruct-Q4_K_M.gguf
```

Also cleaned up kv_router so it can be merged.
-
- 18 Feb, 2025 1 commit
GuanLuo authored
Signed-off-by: Neelay Shah <neelays@nvidia.com>
Co-authored-by: aflowers <aflowers@nvidia.com>
Co-authored-by: Ryan McCormick <rmccormick@nvidia.com>
Co-authored-by: hongkuanz <hongkuanz@nvidia.com>
Co-authored-by: Neelay Shah <neelays@nvidia.com>