1. 25 Oct, 2024 1 commit
  2. 27 Sep, 2024 1 commit
      Improve support for GPUs with capability < 8 (#2575) · 5b6b74e2
      Daniël de Kok authored
      * Improve support for GPUs with capability < 8
      
      - For GPUs with a compute capability older than 8, where
        flashinfer cannot be used, fall back to flash-attn v1 + paged
        attention (see the sketch after this list).
      - Disable prefix caching when using paged attention.
      - When using flash-attn v1, pass the key/value, rather than the
        cache, since v1 cannot use block tables.
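
      A minimal sketch of the capability-based fallback described
      above; the names `use_flashinfer` and `prefix_caching` are
      illustrative, not TGI's actual API:

      ```python
      # Sketch only: pick an attention backend from the GPU's
      # compute capability, as described in this commit.
      import torch

      major, _minor = torch.cuda.get_device_capability()

      if major >= 8:
          # Ampere and newer: flashinfer with prefix caching.
          use_flashinfer = True
          prefix_caching = True
      else:
          # Older GPUs: flash-attn v1 + paged attention. v1 cannot use
          # block tables, so prefix caching is disabled and the raw
          # key/value tensors (not the KV cache) go to the kernel.
          use_flashinfer = False
          prefix_caching = False
      ```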
      
      * nix: add flash-attn-v1 to the server environment
      
      * Move disabling prefix caching into the block of exceptions
      
      * Capability as `usize`s
  3. 17 Sep, 2024 1 commit
      Move to moe-kernels package and switch to common MoE layer (#2511) · ce85efa9
      Daniël de Kok authored
      * Move to moe-kernels package and switch to common MoE layer
      
      This change introduces the new `moe-kernels` package:
      
      - Add `moe-kernels` as a dependency.
      - Introduce a `SparseMoELayer` module that can be used by MoE
        models (a routing sketch follows this list).
      - Port over Mixtral and DeepSeek.
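
      A minimal sketch of what a sparse MoE layer does, assuming a
      plain PyTorch top-k router and a dense loop over experts; the
      actual `SparseMoELayer` instead dispatches to fused kernels from
      `moe-kernels`:

      ```python
      # Sketch only: top-k routing over a set of feed-forward experts.
      import torch
      import torch.nn as nn


      class SparseMoESketch(nn.Module):
          def __init__(self, hidden: int, n_experts: int, top_k: int):
              super().__init__()
              self.top_k = top_k
              self.gate = nn.Linear(hidden, n_experts, bias=False)
              self.experts = nn.ModuleList(
                  [
                      nn.Sequential(
                          nn.Linear(hidden, 4 * hidden),
                          nn.SiLU(),
                          nn.Linear(4 * hidden, hidden),
                      )
                      for _ in range(n_experts)
                  ]
              )

          def forward(self, x: torch.Tensor) -> torch.Tensor:
              # x: [tokens, hidden]. Route each token to its top-k
              # experts and mix their outputs with renormalized
              # router weights.
              logits = self.gate(x)  # [tokens, n_experts]
              weights, idx = logits.softmax(-1).topk(self.top_k, dim=-1)
              weights = weights / weights.sum(-1, keepdim=True)
              out = torch.zeros_like(x)
              for k in range(self.top_k):
                  for e, expert in enumerate(self.experts):
                      mask = idx[:, k] == e
                      if mask.any():
                          out[mask] += weights[mask, k, None] * expert(x[mask])
              return out
      ```

      Mixtral, for instance, routes each token to 2 of its 8 experts
      with renormalized softmax weights, which is the case `top_k=2`,
      `n_experts=8` above.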
      
      * Make `cargo check` pass
      
      * Update runner
  4. 02 Sep, 2024 1 commit
  5. 29 Aug, 2024 1 commit
      nix: build Torch against MKL and various other improvements (#2469) · 4e821c00
      Daniël de Kok authored
      Updates tgi-nix input:
      
      - Move Torch closer to upstream by building against MKL.
      - Remove compute capability 8.7 from Torch (Jetson).
      - Sync nixpkgs compute capabilities with Torch (avoids
        compiling too many capabilities for MAGMA; see the sketch
        after this list).
      - Use nixpkgs configuration passed through by `tgi-nix`.
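
      A hypothetical illustration of the capability sync, using
      PyTorch's real `TORCH_CUDA_ARCH_LIST` build variable; the
      concrete list and the actual change live in the `tgi-nix` flake,
      not in Python:

      ```python
      # Hypothetical capability list shared by the Torch and MAGMA
      # builds; 8.7 (Jetson) is dropped, per this commit.
      CUDA_CAPABILITIES = ["7.5", "8.0", "8.6", "8.9", "9.0"]

      # PyTorch consumes this as TORCH_CUDA_ARCH_LIST="7.5;8.0;...";
      # deriving MAGMA's list from the same source keeps them in sync.
      arch_list = ";".join(CUDA_CAPABILITIES)
      ```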
  6. 21 Aug, 2024 2 commits
  7. 20 Aug, 2024 1 commit