1. 09 Dec, 2024 1 commit
  2. 06 Dec, 2024 3 commits
    • Enable paligemma2 (#2807) · 9f5c9a5e
      drbh authored
      * feat: support loading gemma2 as vlm text model
      
      * feat: add test for paligemma2
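      The gist of the change: PaliGemma2 pairs the PaliGemma vision stack
      with a Gemma2 text model, so the loader must dispatch on the nested
      text_config's model_type. A minimal sketch of that dispatch; the
      helper is illustrative, not TGI's actual loader.

      ```
      from transformers import AutoConfig

      def text_model_class(model_id: str):
          config = AutoConfig.from_pretrained(model_id)
          # "gemma" for paligemma, "gemma2" for paligemma2
          if config.text_config.model_type == "gemma2":
              from transformers import Gemma2ForCausalLM
              return Gemma2ForCausalLM
          from transformers import GemmaForCausalLM
          return GemmaForCausalLM
      ```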
    • Removing the experimental status from prefill chunking. · 08f6fa0b
      Nicolas Patry authored
    • Auto max prefill (#2797) · 5df80590
      Nicolas Patry authored
      * Attempt at automatic max batch prefill.
      
      * Taking into account number of shards.
      
      * Adding more cards.
      
      * Adding A100 + H100
      
      * Adding a few more cards.
      
      * Logprobs cost too much.
      
      * Better name for h100, and keep the factor of 2.
      
      * Damn inflated sparse tflops.
      
      * Typo in h100.
      
      * Updated the flops calculation (checked with fvcore); see the sizing sketch below.
      
      * chunking by default.
      
      * Fix prefix caching for chat completion since we removed logprobs.
      
      * More tests.
      
      * Dropping all the prefill logprobs.
      
      * Add a flag that enables users to get logprobs back.
      
      * Repairing prompt token counting.
      
      * Fixing a few tests.
      
      * Remove some scaffolding.
      
      * Attempting to reduce the issues (workarounds for now).
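      To make the flops bullets concrete, a hedged sketch of what flops-based
      auto-sizing can look like: estimate forward-pass flops per prefill
      token as roughly twice the parameter count, scale the card's dense (not
      sparse) TFLOPS by the shard count, and derive a token budget. The card
      table, the 2*params estimate, and the one-second budget are
      illustrative assumptions, not the constants TGI ships.

      ```
      # Dense bf16 TFLOPS (not the inflated sparse numbers).
      DENSE_TFLOPS = {
          "a100": 312.0,
          "h100": 989.0,
          "l4": 121.0,
      }

      def auto_max_prefill_tokens(card: str, num_params: float,
                                  num_shards: int, budget_s: float = 1.0) -> int:
          """Largest prefill token count that fits the flop budget."""
          flops_per_token = 2.0 * num_params  # matmul-dominated forward pass
          available = DENSE_TFLOPS[card] * 1e12 * num_shards * budget_s
          return int(available / flops_per_token)

      # e.g. an 8B model sharded over 4 L4s:
      # auto_max_prefill_tokens("l4", 8e9, 4) -> ~30,000 tokens
      ```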
  3. 04 Dec, 2024 1 commit
  4. 03 Dec, 2024 2 commits
    • Saving some VRAM. (#2790) · b57f3703
      Nicolas Patry authored
      * Saving some VRAM.
      
      - 8B on 4xL4, attention=flashdecoding. Before: 4.28GB left; after:
        4.32GB left, so ~40MB saved.

      - The effect is less visible with attention=flashinfer and n_shard=1;
        I suspect it's linked to the torch allocator.
      
      * Adding assertion.
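      For reference, the "GB left" numbers can be reproduced with
      torch.cuda.mem_get_info, which reports driver-level free/total bytes
      and therefore also sees memory held outside torch's caching allocator.
      A minimal sketch:

      ```
      import torch

      def free_vram_gb(device: int = 0) -> float:
          free, _total = torch.cuda.mem_get_info(device)
          return free / 1024**3

      before = free_vram_gb()
      # ... load the model, run warmup ...
      after = free_vram_gb()
      print(f"before={before:.2f}GB after={after:.2f}GB "
            f"saved={(after - before) * 1024:.0f}MB")
      ```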
    • Sync (most) server dependencies with Nix (#2782) · 2003d8be
      Daniël de Kok authored
      * Sync (most) server dependencies with Nix
      
      Skipped most grpcio packages because of a protobuf version
      incompatibility with the opentelemetry packages.
      
      * Add a primitive script to generate Poetry commands to sync with Nix
      
      This is not fully automated, since some of the Nix versions may not be
      resolvable by Poetry. It does, however, take most of the work out of
      doing this manually.
      
      * Upgrade eetq?
      
      * Fmt.
      
      ---------
      Co-authored-by: Nicolas Patry <patry.nicolas@protonmail.com>
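      In the spirit of that primitive script, a sketch of the generation
      step: given package versions already extracted from the Nix environment
      (the extraction is the part that stays manual), emit Poetry commands
      that pin the same versions. Package names, versions, and the skip list
      below are illustrative.

      ```
      SKIP = {"grpcio", "grpcio-status", "grpcio-tools"}  # protobuf conflict

      def poetry_sync_commands(nix_versions: dict[str, str]) -> list[str]:
          return [
              f"poetry add {name}=={version}"
              for name, version in sorted(nix_versions.items())
              if name not in SKIP
          ]

      for cmd in poetry_sync_commands({"safetensors": "0.4.5",
                                       "tokenizers": "0.20.3"}):
          print(cmd)
      ```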
  5. 02 Dec, 2024 1 commit
  6. 26 Nov, 2024 1 commit
  7. 25 Nov, 2024 1 commit
  8. 22 Nov, 2024 1 commit
  9. 21 Nov, 2024 1 commit
  10. 20 Nov, 2024 2 commits
  11. 19 Nov, 2024 3 commits
  12. 18 Nov, 2024 4 commits
  13. 17 Nov, 2024 1 commit
    • Remove vLLM dependency for CUDA (#2751) · 52e48739
      Daniël de Kok authored
      * Remove vLLM dependency for CUDA
      
      This change adds `attention-kernels` as a dependency for paged
      attention and cache reshaping. With that, we don't use vLLM
      anywhere for CUDA.
      
      Tested run (since we don't have paged attention in CI):
      
      ```
      ❯ ATTENTION=paged python -m pytest integration-tests -k "llama and awq" --release
      [...]
      5 snapshots passed.
      ```
      
      * Fix clippy warning
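      A sketch of the swap: the paged-attention and cache-reshaping ops now
      come from the standalone attention-kernels package instead of vLLM.
      The call below mirrors vLLM's reshape_and_cache signature and is an
      assumption about the package's surface, not verified API.

      ```
      try:
          import attention_kernels  # CUDA-only dependency

          def store_kv(key, value, key_cache, value_cache, slots):
              # Scatter the new K/V entries into the paged cache; the
              # argument list is assumed to follow vLLM's custom op.
              attention_kernels.reshape_and_cache(
                  key, value, key_cache, value_cache, slots, "auto", 1.0
              )
      except ImportError:
          attention_kernels = None  # non-CUDA backends take other code paths
      ```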
  14. 15 Nov, 2024 4 commits
  15. 10 Nov, 2024 1 commit
    • Add initial support for compressed-tensors checkpoints (#2732) · a7850008
      Daniël de Kok authored
      compressed-tensors is a safetensors extension for sparse, quantized
      tensors. The format is more powerful than earlier AWQ/GPTQ/FP8
      quantization because (see the illustrative config below):
      
      - Different quantizer configurations can be used for different targets.
      - The format can specify input/output quantizers in addition to weight
        quantizers.
      - Exclusions from quantization can be configured.
      
      This change adds a dependency on the `compressed-tensors` package for
      its configuration parsing and layer matching functionality.
      
      The following types of quantization are supported in this PR:
      
      - W8A16 and W4A16 INT using GPTQ-Marlin kernels.
      - W8A8 and W8A16 FP using FP8-Marlin and cutlass kernels.
      
      Support for other quantization types will be added in subsequent PRs.
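      For illustration, the rough shape of a compressed-tensors
      quantization_config as it appears in a checkpoint's config.json, shown
      here as the parsed Python dict (abridged; the values are hypothetical):

      ```
      quantization_config = {
          "quant_method": "compressed-tensors",
          "config_groups": {
              "group_0": {
                  "targets": ["Linear"],  # which modules this group matches
                  "weights": {"type": "int", "num_bits": 4, "symmetric": True},
                  "input_activations": None,  # weight-only, i.e. W4A16
              },
          },
          "ignore": ["lm_head"],  # modules excluded from quantization
      }
      ```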
  16. 04 Nov, 2024 4 commits
  17. 02 Nov, 2024 1 commit
  18. 01 Nov, 2024 1 commit
    • fix cuda graphs for qwen2-vl (#2708) · 01dacf8e
      drbh authored
      * feat: support multidimensional position ids on batch to enable cuda graphs on qwen2-vl (see the sketch below)
      
      * fix: only check model type if config exists
      
      * fix: adjust sharding and lm head logic
      
      * fix qwen2 failure on Intel CPU
      Signed-off-by: Wang, Yi A <yi.a.wang@intel.com>
      
      * fix: return correct shape logits and add streaming test
      
      * fix: remove unused import and refactor test
      
      ---------
      Signed-off-by: Wang, Yi A <yi.a.wang@intel.com>
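      The idea behind the multidimensional position ids, sketched under some
      assumptions: Qwen2-VL's mrope uses position ids of shape (3, seq_len)
      (temporal/height/width) where text-only models use (seq_len,), and
      CUDA graph replay needs fixed tensor shapes, so the batch pads both
      into one layout. The helper below is illustrative, not TGI's code.

      ```
      import torch

      def pad_position_ids(position_ids: torch.Tensor,
                           max_len: int) -> torch.Tensor:
          if position_ids.dim() == 1:     # (seq,) -> (1, seq)
              position_ids = position_ids.unsqueeze(0)
          dims, seq = position_ids.shape  # dims is 1 (text) or 3 (mrope)
          out = torch.zeros(dims, max_len, dtype=position_ids.dtype,
                            device=position_ids.device)
          out[:, :seq] = position_ids
          return out
      ```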
  19. 30 Oct, 2024 1 commit
    • Support qwen2 vl (#2689) · befd9f67
      drbh authored
      * feat: add support for qwen2 vl model
      
      * feat: fix token padding, enable warmup and process basic request
      
      * fix: improve get_position_ids, lift embed_tokens
      
      * fix: remove get_cos_sin_hack dev function
      
      * feat: add simple test chat with message and text
      
      * fix: lint test
      
      * fix: adjust positional embeddings for multi dimensional position ids
      
      * fix: update docs and lint unused vars
      
      * fix: include linted file
      
      * fix: add norm after text output
      
      * fix: format model file
      
      * fix: adjust for ruff lints
      
      * fix: remove unused rotate_half
      
      * feat: refactors and calc num features (see the sketch after this list)
      
      * fix: prefer position_ids passed from vlm causal lm and reset ids on batch
      
      * fix: adjust get_position_ids if not available and add required args to signatures
      
      * fix: adjust resize case for qwen2_vl warmup
      
      * fix: avoid qwen2 vl specific paths with qwen2
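      A hedged sketch of the "calc num features" step: Qwen2-VL's vision
      tower emits one feature per patch and then merges spatial_merge_size**2
      adjacent patches into a single image token. patch_size=14 and
      spatial_merge_size=2 are the model's usual values, assumed here; the
      helper is a sketch, not the model's exact code.

      ```
      def num_image_features(height: int, width: int,
                             patch_size: int = 14,
                             spatial_merge_size: int = 2) -> int:
          grid_h = height // patch_size
          grid_w = width // patch_size
          # one feature per patch, merge_size**2 patches per image token
          return (grid_h * grid_w) // (spatial_merge_size ** 2)

      # e.g. a 448x448 image -> (32 * 32) // 4 = 256 image tokens
      ```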
  20. 28 Oct, 2024 4 commits
  21. 25 Oct, 2024 2 commits