1. 29 Apr, 2025 1 commit
      feat: Add request template support for default inference parameters (#841) · adad2ecd
      Abrar Shivani authored
      Adds support for specifying default request parameters through a JSON template file that is applied across all inference requests. This enables consistent parameter settings while still allowing per-request overrides.
      
      Changes:
      - Add a `--request-template` CLI flag to specify the template file path
      - Integrate template support in the HTTP, batch, and text input modes
      - Template values can be overridden by individual request parameters
      - Example template.json:
      ```
      {
          "model": "Qwen2.5-3B-Instruct",
          "temperature": 0.7,
          "max_completion_tokens": 4096
      }
      ```
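      A minimal sketch of the override semantics, assuming a serde_json-based merge (the `apply_template` helper and its behavior are illustrative, not the PR's actual implementation):
      ```
      use serde_json::{json, Map, Value};

      /// Hypothetical helper: start from the template's key/value pairs,
      /// then let any key present in the individual request win.
      fn apply_template(template: &Value, request: &Value) -> Value {
          let mut merged: Map<String, Value> =
              template.as_object().cloned().unwrap_or_default();
          if let Some(req) = request.as_object() {
              for (key, value) in req {
                  merged.insert(key.clone(), value.clone()); // per-request override
              }
          }
          Value::Object(merged)
      }

      fn main() {
          let template = json!({
              "model": "Qwen2.5-3B-Instruct",
              "temperature": 0.7,
              "max_completion_tokens": 4096
          });
          // The request overrides temperature; model and max_completion_tokens
          // fall through from the template.
          let request = json!({ "temperature": 0.2 });
          println!("{}", apply_template(&template, &request));
      }
      ```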
  2. 07 Apr, 2025 1 commit
      feat(dynamo-run): Basic routing choice (#524) · ec2e7307
      Graham King authored
      As a first step towards KV routing:
      - Introduce a `--router-mode` flag in dynamo-run that currently only supports random and round-robin. Not that interesting yet.
      - Make the vllm engine publish the KV events received from our patched vllm.
      
      Now we "just" need to connect the two. Easy, right?
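      
      A minimal sketch of the two routing policies, assuming a fixed worker list (illustrative only, not the dynamo-run code):
      ```
      use std::sync::atomic::{AtomicUsize, Ordering};

      /// Illustrative router over a fixed set of workers.
      enum RouterMode {
          Random,
          RoundRobin(AtomicUsize),
      }

      impl RouterMode {
          fn pick<'a>(&self, workers: &'a [String]) -> &'a str {
              match self {
                  // Cheap pseudo-random pick; a real router would use a proper RNG.
                  RouterMode::Random => {
                      let i = std::time::SystemTime::now()
                          .duration_since(std::time::UNIX_EPOCH)
                          .unwrap()
                          .subsec_nanos() as usize;
                      &workers[i % workers.len()]
                  }
                  // Rotate through the workers in order, wrapping at the end.
                  RouterMode::RoundRobin(next) => {
                      let i = next.fetch_add(1, Ordering::Relaxed);
                      &workers[i % workers.len()]
                  }
              }
          }
      }

      fn main() {
          let workers = vec![
              "worker-0".to_string(),
              "worker-1".to_string(),
              "worker-2".to_string(),
          ];
          let rr = RouterMode::RoundRobin(AtomicUsize::new(0));
          for _ in 0..4 {
              println!("{}", rr.pick(&workers)); // worker-0, worker-1, worker-2, worker-0
          }
      }
      ```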
  3. 19 Mar, 2025 1 commit
      fix(mistralrs): Disable paged attention (#234) · fd95f37b
      Graham King authored
      Under load it sometimes drops a request: the request gets added to the batch (sequence) and immediately gets a FinishReason of Stop. Not sure why. It doesn't happen with the default (non-paged-attention) scheduler, so switch to that for now.
  4. 15 Mar, 2025 1 commit
      feat(dynamo-run): Batch mode (#142) · 2cca070c
      Graham King authored
      ```
      dynamo-run in=batch:prompts.jsonl out=mistralrs ~/llm_models/Llama-3.2-3B-Instruct/
      ```
      
      The file is in genai format, one JSON entry per line:
      ```
      {"text": "the prompt"}
      {"text": ..etc
      ```
      
      The prompt is evaluated and the output written to `output.jsonl` in the
      same folder as the input.
      
      At the end of the run various statistics are printed:
      > Ran 5 files in 8s 679ms. Tokens in: 40 (5/s). Tokens out: 346 (43/s)
      
      This is also helpful for pushing load into the system and stressing the
      various components. Not intended for performance measurement; it's a
      batch inference tool.
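      
      A rough sketch of the read/evaluate/write loop described above, assuming serde_json for parsing; `run_engine` is a hypothetical stand-in for the real engine call:
      ```
      use std::fs::File;
      use std::io::{BufRead, BufReader, Write};

      use serde::Deserialize;

      #[derive(Deserialize)]
      struct Entry {
          text: String, // one {"text": "..."} object per input line
      }

      // Hypothetical stand-in for the actual inference call.
      fn run_engine(prompt: &str) -> String {
          format!("<completion for: {prompt}>")
      }

      fn main() -> std::io::Result<()> {
          let input = BufReader::new(File::open("prompts.jsonl")?);
          // Output lands next to the input, as the commit describes.
          let mut output = File::create("output.jsonl")?;
          for line in input.lines() {
              let line = line?;
              if line.trim().is_empty() {
                  continue; // skip blank lines
              }
              let entry: Entry = serde_json::from_str(&line).expect("genai-format line");
              let completion = run_engine(&entry.text);
              // Output format here is assumed; the commit only names the file.
              writeln!(output, "{}", serde_json::json!({ "text": completion }))?;
          }
          Ok(())
      }
      ```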