  1. 18 Apr, 2025 1 commit
    • feat(dynamo-engine-vllm): vllm 0.8.X support (#728) · a745a980
      Graham King authored
      It's different enough that I made a new engine vllm0_8 and renamed the previous engine to vllm0_7.
      
      `dynamo-run out=vllm` now expects 0.8. This matches the container change in #690.
      
      For the older version, use `dynamo-run out=vllm0_7`.
  2. 03 Apr, 2025 1 commit
  3. 19 Mar, 2025 2 commits
    • fix: update crates metadata (#264) · 68d953f7
      Anant Sharma authored
      
      Co-authored-by: Dmitry Tokarev <dtokarev@nvidia.com>
    • chore: Don't depend on openssl (#292) · 7c3fd5c9
      Graham King authored
      This makes the Rust parts all use ring / rustls library instead of local install of openssl. It's a step on the journey to being statically linked.
      
      Pieces:
      - `tokenizers` and `mistralrs` now support rustls (mistralrs by default, tokenizers with feature flag).
      - Move shared dependencies up into workspace
      - New `rand` crate has some renames for future rust
      - Ensure the dependency doesn't creep back in by enforcing it with cargo deny.
  4. 17 Mar, 2025 1 commit
  5. 15 Mar, 2025 1 commit
    • feat(dynamo-run): Batch mode (#142) · 2cca070c
      Graham King authored
      ```
      dynamo-run in=batch:prompts.jsonl out=mistralrs ~/llm_models/Llama-3.2-3B-Instruct/
      ```
      
      The file has genai format, one entry per line:
      ```
      {"text": "the prompt"}
      {"text": ..etc
      ```
      
      The prompt is evaluated and the output written to `output.jsonl` in the
      same folder as the input.
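      As a sketch of the round trip, the input file can be produced with a few lines of Python (the file name follows the command above; the two prompts are made-up examples):

      ```python
      import json

      # Write prompts.jsonl in the genai format: one {"text": ...} object per line.
      prompts = ["What is the capital of France?", "What is 2+2?"]
      with open("prompts.jsonl", "w") as f:
          for p in prompts:
              f.write(json.dumps({"text": p}) + "\n")

      # After `dynamo-run in=batch:prompts.jsonl out=...` finishes, the results
      # land in output.jsonl next to the input and can be read back the same way.
      ```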
      
      At the end of the run various statistics are printed:
      > Ran 5 files in 8s 679ms. Tokens in: 40 (5/s). Tokens out: 346 (43/s)
      
      This is also helpful for pushing load into the system and stressing the
      various components. It is not intended for performance measurement; it's a
      batch inference tool.
  6. 14 Mar, 2025 1 commit
    • feat(dynamo-run): Various UX improvements (#168) · 1fb31d6a
      Graham King authored
      Engines mistralrs, sglang and vllm included by default. Can be disabled like this: `cargo build --no-default-features --features <add-back-what-you-want>`.
      
      Added a `--features vulkan` option, for llamacpp.
      
      Build-time message if CUDA or Metal would help but is missing. That's the best we can do:
      > warning: dynamo-run@0.1.0: CUDA not enabled, re-run with `--features cuda`
      
      Runtime message if CUDA, Metal or Vulkan are enabled:
      > 2025-03-14T21:59:26.501937Z  INFO dynamo_run: CUDA on
      
      Runtime message if they are missing:
      > 2025-03-14T22:02:37.439404Z  INFO dynamo_run: CPU mode. Rebuild with `--features cuda|metal|vulkan` for better performance
      
      Default engine message includes the available engines:
      > 2025-03-14T21:59:26.503612Z  INFO dynamo_run: Using default engine: mistralrs. Use out=<engine> to specify one of echo_core, echo_full, mistralrs, llamacpp, sglang, vllm, pystr, pytok
      
      The really important outcome is that this should now "just work":
      ```
      cargo install dynamo-run
      dynamo-run Qwen/Qwen2.5-3B-Instruct
      ```
      
      Sadly you still need `--features cuda|metal` for performance, I couldn't automate that.
  7. 13 Mar, 2025 3 commits
  8. 09 Mar, 2025 1 commit
  9. 08 Mar, 2025 1 commit
  10. 07 Mar, 2025 1 commit
    • feat: Bring-your-own engine for dynemo-run (#43) · 1b96c2c4
      Graham King authored
      1. Create `my_engine.py`
      
      ```
      import asyncio

      async def generate(request):
          def chunk(content, finish_reason=None):
              choice = {"index": 0, "delta": {"content": content, "role": "assistant"}}
              if finish_reason is not None:
                  choice["finish_reason"] = finish_reason
              return {"id": "1", "choices": [choice], "created": 1841762283,
                      "model": "Llama-3.2-1B-Instruct", "system_fingerprint": "local",
                      "object": "chat.completion.chunk"}

          # Stream the response one token at a time, OpenAI chat-completion-chunk style
          for token in ["The", " capital", " of", " France", " is", " Paris", "."]:
              yield chunk(token)
              await asyncio.sleep(0.1)
          # The final chunk has empty content and finish_reason="stop"
          yield chunk("", finish_reason="stop")
      ```
      
      2. Build
      
      ```
      cargo build --release --features python
      ```
      
      3. Run
      
      ```
      dynemo-run out=pystr:my_engine.py --name test
      ```
      
      And here's a distributed system, with your engine:
      
      - Node 1: `dynemo-run in=http out=dyn://test`
      - Node 2: `dynemo-run in=dyn://test out=pystr:my_engine.py`
  11. 05 Mar, 2025 2 commits
  12. 04 Mar, 2025 1 commit
  13. 28 Feb, 2025 2 commits
  14. 27 Feb, 2025 1 commit
  15. 26 Feb, 2025 2 commits
  16. 25 Feb, 2025 3 commits
  17. 21 Feb, 2025 1 commit
  18. 14 Feb, 2025 1 commit
    • feat: Add a mistralrs engine to tio (#178) · 2f700421
      Graham King authored
      This allows us to run a real model.
      
      Build:
      ```
      cargo build --release --features mistralrs,cuda
      ```
      
      Run:
      ```
      ./target/release/tio in=text out=mistralrs --model-path Llama-3.2-1B-Instruct-Q4_K_M.gguf
      ```
      
      Why [mistral.rs](https://github.com/EricLBuehler/mistral.rs)?
      
      - It has no dependencies. You don't need a container or a virtual env to get started.
      - It supports CUDA, Metal (macOS) and CPU-only. Everyone can join the AI revolution.
      - It starts fast and serves fast (with CUDA). That makes it fun to experiment with.
      - It runs many models, not just Mistral; that's just its name.
  19. 13 Feb, 2025 1 commit
    • feat: Add `tio` your friendly cmd line uncle to run triton-llm services (#174) · 418ae5e8
      Graham King authored
      This provides a simple example of how to write a triton-llm engine, and how to connect it to the OpenAI HTTP server.
      
      This is the tool previously called `nio` and `llmctl`.
      
      - **Inputs**: Text and HTTP.
      - **Engines**: Echo, which streams your prompt back with a slight delay.
      
      Build: `cargo build`
      
      Prerequisites: `nats-server` and `etcd` must be running locally, even though they are not yet used by `tio`.
      
      Run with text input:
      ```
      ./target/debug/tio in=text out=echo_full --model-name test
      ```
      
      Run with the triton-llm HTTP server:
      ```
      ./target/debug/tio in=http out=echo_full --http-port 8080 --model-name Echo-0B
      ```
      
      List models:
      ```
      curl localhost:8080/v1/models | jq
      ```
      
      Will output
      ```
      {
        "object": "list",
        "data": [
          {
            "id": "Echo-0B",
            "object": "object",
            "created": 1739400430,
            "owned_by": "nvidia"
          }
        ]
      }
      ```
      
      #### What's next
      
      As triton-distributed gains features `tio` will be able to grow:
      - When we get the pre-processor we can have token-in token-out engines. 
      - When we get a pull-router we can have `in=nats` and `out=nats`.
      - When we get discovery we can have dynamic engines.
  20. 11 Feb, 2025 1 commit
  21. 10 Feb, 2025 1 commit
  22. 06 Feb, 2025 1 commit
  23. 05 Feb, 2025 2 commits
  24. 04 Feb, 2025 1 commit