1. 28 Nov, 2024 1 commit
    • Support continue final message (#2733) · d4718051
      drbh authored
      * feat: support continue_final_message param in chat request
      
      * feat: add test for continue final message
      
      * fix: bump openapi docs
      
      * fix: remove continue_final_message chat request param
      
      * fix: remove unneeded launcher args in continue test
      
      * fix: bump test output
      
      * fix: remove accidentally included guideline from rebase
      
      * fix: remove guideline tests
      
      * fix: adjust continuation tests expected text
      
      * fix: replace expected output for continue test
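
      A minimal sketch of how a client might use this feature, assuming it is exposed
      as a `continue_final_message` boolean on the OpenAI-style chat endpoint (the
      bullets above show the request surface shifted during review); the URL, port,
      model name and prompt are placeholders:

      ```python
      import requests

      # The last message has role "assistant"; continue_final_message asks the
      # server to extend it instead of opening a new turn. Hypothetical request
      # shape based on the PR title, not a vendored API reference.
      payload = {
          "model": "tgi",
          "messages": [
              {"role": "user", "content": "Write a haiku about rust."},
              {"role": "assistant", "content": "Metal slowly sleeps,"},
          ],
          "continue_final_message": True,
          "max_tokens": 32,
      }

      resp = requests.post("http://localhost:3000/v1/chat/completions", json=payload)
      print(resp.json()["choices"][0]["message"]["content"])
      ```
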
  2. 25 Nov, 2024 1 commit
  3. 22 Nov, 2024 1 commit
  4. 21 Nov, 2024 1 commit
  5. 20 Nov, 2024 1 commit
  6. 19 Nov, 2024 1 commit
    • PR 2634 CI - Fix the tool_choice format for named choice by adapting OpenAI's scheme (#2645) · 5489406c
      drbh authored
      
      
      * add OpenAI like tool_choice for named choice
      
      * add tests
      
      * fix: run linter and bump api docs
      
      * fix: consolidate changes and remove old tool type
      
      * feat: improve, simplify and rename tool choice struct; add required support and refactor
      
      * fix: simplify tool choice logic, improve tests, openapi and rust docs
      
      * fix: refactor away prepare_chat_input and improve tool grammar apply control flow
      
      * feat: update docs and add tool choice configuration section
      
      * fix: simplify naming, tool choice default and improve test
      
      * fix: adjust tool choice none logic, add test and small refactors
      
      * fix: add missing snapshot file
      
      * fix: adjust tool choice type in test
      
      * fix: adjust default when json tool choice is
      
      * fix: remove trailing space lint after rebase
      
      * fix: remove mostly mocked unit test
      
      ---------
      Co-authored-by: Linus Bierhoff <linus.bierhoff@icloud.com>
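
      The named-choice format this PR adapts is the OpenAI one, where `tool_choice`
      pins a specific function instead of using the `auto`/`none` string variants.
      A hedged sketch; the tool definition and server URL are illustrative:

      ```python
      import requests

      # OpenAI-style named tool choice: the object form forces the model to call
      # the named function; "auto", "none" and "required" remain string variants.
      payload = {
          "model": "tgi",
          "messages": [{"role": "user", "content": "What is the weather in Paris?"}],
          "tools": [{
              "type": "function",
              "function": {
                  "name": "get_weather",
                  "description": "Look up the current weather for a city.",
                  "parameters": {
                      "type": "object",
                      "properties": {"city": {"type": "string"}},
                      "required": ["city"],
                  },
              },
          }],
          "tool_choice": {"type": "function", "function": {"name": "get_weather"}},
      }

      resp = requests.post("http://localhost:3000/v1/chat/completions", json=payload)
      print(resp.json()["choices"][0]["message"]["tool_calls"])
      ```
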
  7. 18 Nov, 2024 1 commit
    • Add support for compressed-tensors w8a8 int checkpoints (#2745) · 3c9df21f
      Daniël de Kok authored
      
      
      * Add support for compressed-tensors w8a8 int checkpoints
      
      This change adds a loader for w8a8 int checkpoints. One large benefit of
      int8 support is that the corresponding cutlass matmul kernels also work on
      compute capability 7.5.
      
      Evaluation on neuralmagic/Meta-Llama-3.1-8B-Instruct-quantized.w8a8:
      
      |     Tasks     |Version|     Filter     |n-shot|        Metric         |   |Value |   |Stderr|
      |---------------|------:|----------------|-----:|-----------------------|---|-----:|---|------|
      |gsm8k_cot_llama|      3|flexible-extract|     8|exact_match            |↑  |0.8431|±  |0.0100|
      |               |       |strict-match    |     8|exact_match            |↑  |0.8393|±  |0.0101|
      |ifeval         |      4|none            |     0|inst_level_loose_acc   |↑  |0.8597|±  |   N/A|
      |               |       |none            |     0|inst_level_strict_acc  |↑  |0.8201|±  |   N/A|
      |               |       |none            |     0|prompt_level_loose_acc |↑  |0.7967|±  |0.0173|
      |               |       |none            |     0|prompt_level_strict_acc|↑  |0.7468|±  |0.0187|
      
      These results are in the same ballpark as vLLM.
      
      As usual, lots of thanks to Neural Magic/vLLM for the kernels.
      
      * Always use dynamic input quantization for w8a8 int
      
      It's far less flaky and gives better output.
      
      * Use marlin-kernels 0.3.5
      
      * Fix a typo
      Co-authored-by: drbh <david.richard.holtz@gmail.com>
      
      * Small fixes
      
      ---------
      Co-authored-by: drbh <david.richard.holtz@gmail.com>
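
      The "dynamic input quantization" bullet above refers to quantizing activations
      on the fly with per-token scales rather than static calibrated ones. A
      plain-PyTorch reference sketch of that scheme (the production path uses fused
      kernels, not this code):

      ```python
      import torch

      def dynamic_int8_quantize(x: torch.Tensor):
          # One scale per token (row) so each token uses its own dynamic range,
          # instead of a static scale calibrated offline.
          scale = x.abs().amax(dim=-1, keepdim=True).clamp(min=1e-8) / 127.0
          q = torch.clamp(torch.round(x / scale), -128, 127).to(torch.int8)
          return q, scale

      x = torch.randn(4, 16)
      q, scale = dynamic_int8_quantize(x)
      # Round-trip check: dequantized values should be close to the input.
      print((q.float() * scale - x).abs().max())
      ```
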
  8. 10 Nov, 2024 1 commit
    • Add initial support for compressed-tensors checkpoints (#2732) · a7850008
      Daniël de Kok authored
      compressed-tensors is a safetensors extension for sparse, quantized
      tensors. The format is more powerful than earlier AWQ/GPTQ/FP8
      quantization, because
      
      - Different quantizer configurations can be used for different targets.
      - The format can specify input/output quantizers in addition to weight
        quantizers.
      - Configurable exclusions for quantization.
      
      This change adds a dependency on the `compressed-tensors` package for
      its configuration parsing and layer matching functionality.
      
      The following types of quantization are supported in this PR:
      
      - W8A16 and W4A16 INT using GPTQ-Marlin kernels.
      - W8A8 and W8A16 FP using FP8-Marlin and cutlass kernels.
      
      Support for other quantization types will be added in subsequent PRs.
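
      For illustration, roughly what a compressed-tensors `quantization_config`
      looks like in a checkpoint's config.json, rendered here as a Python dict;
      the field names follow the `compressed-tensors` package, but treat this as
      a sketch rather than a canonical config:

      ```python
      # Illustrative shape of a compressed-tensors "quantization_config"; a
      # sketch, not a vendored config.
      quantization_config = {
          "quant_method": "compressed-tensors",
          "config_groups": {
              # Each group pairs quantizer settings with the layers they target.
              "group_0": {
                  "targets": ["Linear"],
                  "weights": {
                      "num_bits": 4,
                      "type": "int",
                      "symmetric": True,
                      "group_size": 128,
                  },
                  # Input/output quantizers can be given as well; None keeps the
                  # activations in the original dtype (W4A16 in this example).
                  "input_activations": None,
              },
          },
          # Configurable exclusions: matched modules are left unquantized.
          "ignore": ["lm_head"],
      }
      ```
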
  9. 01 Nov, 2024 1 commit
    • fix cuda graphs for qwen2-vl (#2708) · 01dacf8e
      drbh authored
      
      
      * feat: support multidimensional position ids on batch to enable cuda graphs on qwen2-vl
      
      * fix: only check model type if config exists
      
      * fix: adjust sharding and lm head logic
      
      * fix qwen2 failure in intel cpu
      Signed-off-by: Wang, Yi A <yi.a.wang@intel.com>
      
      * fix: return correct shape logits and add streaming test
      
      * fix: remove unused import and refactor test
      
      ---------
      Signed-off-by: Wang, Yi A <yi.a.wang@intel.com>
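
      Qwen2-VL's rotary embedding uses one temporal and two spatial position
      components per token, which is why the batch has to carry multidimensional
      position ids for CUDA graphs. A small sketch of the text-only case, where
      all three components collapse to a plain arange (image tokens get
      grid-dependent values, not shown; shapes are illustrative):

      ```python
      import torch

      def text_position_ids(seq_len: int) -> torch.Tensor:
          # (3, seq_len): temporal, height, width — identical for pure text.
          base = torch.arange(seq_len)
          return base.unsqueeze(0).expand(3, -1).clone()

      print(text_position_ids(5))
      # tensor([[0, 1, 2, 3, 4],
      #         [0, 1, 2, 3, 4],
      #         [0, 1, 2, 3, 4]])
      ```
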
  10. 30 Oct, 2024 1 commit
    • Support qwen2 vl (#2689) · befd9f67
      drbh authored
      * feat: add support for qwen2 vl model
      
      * feat: fix token padding, enable warmup and process basic request
      
      * fix: improve get_position_ids, add lift embed_tokens
      
      * fix: remove get_cos_sin_hack dev function
      
      * feat: add simple test chat with message and text
      
      * fix: lint test
      
      * fix: adjust positional embeddings for multi dimensional position ids
      
      * fix: update docs and lint unused vars
      
      * fix: include linted file
      
      * fix: add norm after text output
      
      * fix: format model file
      
      * fix: adjust for ruff lints
      
      * fix: remove unused rotate_half
      
      * feat: refactors and calc num features
      
      * fix: prefer position_ids passed from vlm causal lm and reset ids on batch
      
      * fix: adjust get_position_ids if not available and add required args to signatures
      
      * fix: adjust resize case for qwen2_vl warmup
      
      * fix: avoid qwen2 vl specific paths with qwen2
  11. 28 Oct, 2024 4 commits
  12. 26 Oct, 2024 1 commit
  13. 25 Oct, 2024 3 commits
  14. 24 Oct, 2024 2 commits
    • Add support for FP8 KV cache scales (#2628) · eab07f74
      Daniël de Kok authored
      * Add support for FP8 KV cache scales
      
      Since FP8 only has limited dynamic range, we can scale keys/values
      before storing them into the cache (and unscale them in attention). To
      avoid rescaling the cache as the absmax values change, good scales are
      usually determined per layer using calibration data and stored
      in the checkpoint.
      
      This change adds support for using key-value scales and loading them
      from checkpoints in the two most common formats:
      
      - Separate per-layer `k_scale` and `v_scale` scalars.
      - Per-layer `kv_scale` scalar (older format).
      
      Currently, scales are only used with a `float8_e4m3fn` cache.
      
      Besides adding support for key/value scales, the `fp8_quantize` function
      is also extended to support quantization with a kernel vendored from
      vLLM. This is slightly faster than the PyTorch implementation, and it
      also computes scales in FP32, potentially improving accuracy.
      
      * Update FP8 KV cache test to use checkpoint with scales
      
      * `can_scale`: check that the attention is flashinfer
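
      A reference sketch of the scaled cache write/read described above, using the
      per-layer `k_scale`/`v_scale` scalars; the real code path runs fused attention
      kernels rather than explicit casts like these:

      ```python
      import torch

      def store_kv(k, v, k_scale: float, v_scale: float):
          # Divide by the calibrated per-layer scale before the FP8 cast so the
          # values fit float8_e4m3fn's limited dynamic range.
          return ((k / k_scale).to(torch.float8_e4m3fn),
                  (v / v_scale).to(torch.float8_e4m3fn))

      def load_kv(k_fp8, v_fp8, k_scale: float, v_scale: float):
          # Unscale again in the attention dtype when reading from the cache.
          return (k_fp8.to(torch.float16) * k_scale,
                  v_fp8.to(torch.float16) * v_scale)

      k = torch.randn(2, 8, dtype=torch.float16)
      v = torch.randn(2, 8, dtype=torch.float16)
      k8, v8 = store_kv(k, v, k_scale=1.2, v_scale=0.9)
      k2, v2 = load_kv(k8, v8, 1.2, 0.9)
      print((k2 - k).abs().max())  # small quantization error
      ```
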
    • Fix Phi 3.5 MoE tests (#2684) · 14a0df3a
      Daniël de Kok authored
      PR #2682 also fixed an issue in Phi MoE, but it changes the test
      outputs a bit. Fix this.
  15. 21 Oct, 2024 1 commit
    • Test Marlin MoE with `desc_act=true` (#2622) · 7f54b733
      Daniël de Kok authored
      Update the Mixtral GPTQ test to use a model with `desc_act=true` and
      `group_size!=-1` to ensure that we are checking activation
      sorting/non-full K (with tensor parallelism). The `desc_act=false` case
      is already checked by the Mixtral AWQ test.
  16. 18 Oct, 2024 1 commit
  17. 16 Oct, 2024 1 commit
    • feat: prefill chunking (#2600) · a6a0c97e
      OlivierDehaene authored
      
      
      * wip
      
      * rollback
      
      * refactor to use prefix/postfix naming + fix all_input_ids_tensor
      
      * maybe patching vlms?
      
      * fix filter and concat
      
      * wip, no filter, no concat
      
      * current
      
      * add prepare_for_prefill
      
      * working
      
      * load tested
      
      * re-create slots
      
      * re-create slots
      
      * fix slot_filtering_indices
      
      * feedback loop
      
      * remove log
      
      * fix benchmarker
      
      * fix vlm and seq2seq
      
      * rename to cache and input lengths
      
      * fix prefill logprobs
      
      * fix launcher
      
      * fix logprobs?
      
      * idk at this point
      
      * max input length
      
      * omfg
      
      * remove debugging lines
      
      * fix tests
      
      * fix mllama
      
      * fix cargo tests
      
      * remove chunking support for paged attention
      
      * Fixing non blocked attentions
      
      * Fixing dtype + AMD, Ipex targets.
      
      * lint fix.
      
      * rename
      
      * Fix prefix_caching variable, remove defaults in server (confusing a lot
      of the time).
      
      * Add simple resolution when user specifies ATTENTION=paged.
      
      * Put back non default simple tests.
      
      * Fix env name
      
      ---------
      Co-authored-by: Nicolas Patry <patry.nicolas@protonmail.com>
  18. 10 Oct, 2024 2 commits
    • Intel CI (#2630) · 3dbdf63e
      Nicolas Patry authored
      * Intel CI ?
      
      * Let's try non sharded gemma.
      
      * Snapshot rename
      
      * Apparently container can be gone already.
    • feat: allow tool calling to respond without a tool (#2614) · e36dfaa8
      drbh authored
      
      
      * feat: process token stream before returning to client
      
      * fix: expect content in test
      
      * fix: improve comparison via ruff lint
      
      * fix: return event in all cases
      
      * fix: always send event on error, avoid unwraps, refactor and improve tests
      
      * fix: prefer no_tool over notify_error to improve response
      
      * fix: adjust chat input test for no_tool
      
      * fix: adjust test expected content
      
      ---------
      Co-authored-by: System administrator <root@ip-10-90-0-186.ec2.internal>
  19. 09 Oct, 2024 2 commits
    • AMD CI (#2589) · 43f39f68
      Nicolas Patry authored
      * Only run 1 valid test.
      
      * TRying the tailscale action quickly.
      
      * ?
      
      * bash spaces.
      
      * Remove tailscale.
      
      * More quotes.
      
      * mnt2 ?
      
      * Other name to avoid recursive directories.
      
      * Good old tmate.
      
      * Remove tmate.
      
      * Trying a few things.
      
      * Remove some stuff.
      
      * Sleep ?
      
      * Tmp
      
      * busybox
      
      * Launcher tgi
      
      * Starting hello
      
      * Busybox in python
      
      * No device.
      
      * Removing all variables ?
      
      * At some point.
      
      * Tmp
      
      * Tmp2
      
      * Device request, no container name
      
      * No device requests
      
      * Without pytest.
      
      * No pytest.
      
      * from env
      
      * Start with devices
      
      * Attempt #1
      
      * Remove stdin messing
      
      * Only 1 test, no container name
      
      * Raw tgi
      
      * Sending args.
      
      * Show pip freeze.
      
      * Start downloading with token
      
      * Giving HIP devices.
      
      * Mount volume + port forward
      
      * Without pytest.
      
      * No token
      
      * Repeated arguments
      
      * Wrong kwarg.
      
      * On 2 GPUs
      
      * Fallback to single shard CI test.
      
      * Testing
      
      * yaml
      
      * Common cache ?
      
      * Trailing slash ?
      
      * Docker volume split.
      
      * Fix docker volume
      
      * Fixing ?
      
      * ?
      
      * Try no devices ?
      
      * Flash llama on intel CPU ?
      
      * Fix nvidia ?
      
      * Temp deactivate intel, activate nvidia ?
    • nix: add black and isort to the closure (#2619) · 9ed0c85f
      Daniël de Kok authored
      To make sure that everything is formatted with the same black version
      as CI.
      
      I sometimes use isort for new files to get nicely ordered imports,
      so add it as well. Also set the isort configuration to format in a
      way that is compatible with black.
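
      The usual way to make isort compatible with black is its `black` profile;
      a sketch of the equivalent pyproject.toml setting (the commit itself
      configures this inside the nix closure):

      ```toml
      # isort's black profile aligns line length, trailing commas and
      # multi-line import style with black's output.
      [tool.isort]
      profile = "black"
      ```
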
  20. 08 Oct, 2024 1 commit
  21. 04 Oct, 2024 1 commit
    • Add basic FP8 KV cache support (#2603) · 2358c2bb
      Daniël de Kok authored
      * Add basic FP8 KV cache support
      
      This change adds rudimentary FP8 KV cache support. The support is
      enabled by passing `--kv-cache-dtype fp8_e5m2` to the launcher. Doing so
      uses this type for the KV cache. However, support is still limited:
      
      * Only the `fp8_e5m2` type is supported.
      * The KV cache layout is the same as `float16`/`bfloat16` (HND).
      * The FP8 KV cache is only supported for FlashInfer.
      * Loading of scales is not yet supported.
      
      * Fix Cargo.toml
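
      Putting the flag together with a launch, as a sketch (the model id is a
      placeholder; per the notes above this currently requires FlashInfer and
      does not load scales):

      ```bash
      # Enable the fp8_e5m2 KV cache via the launcher flag documented above.
      text-generation-launcher \
          --model-id meta-llama/Meta-Llama-3.1-8B-Instruct \
          --kv-cache-dtype fp8_e5m2
      ```
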
  22. 03 Oct, 2024 1 commit
  23. 02 Oct, 2024 2 commits
    • Unroll notify error into generate response (#2597) · d22b0c1f
      drbh authored
      * feat: unroll notify_error if no tool is chosen
      
      * fix: expect simple message when no tool is selected
      
      * fix: improve test to avoid notify_error
      
      * fix: improve docs and indicate change in expected response
      
      * fix: adjust linting in test file
    • Mllama flash version (#2585) · d18ed5cf
      Nicolas Patry authored
      * Working loading state.
      
      * Preprocessing.
      
      * Working state ? (Broke idefics1 temporarily).
      
      * Cleaner condition.
      
      * Fix idefics.
      
      * Updating config, removing TODO
      
      * Mllama
      
      * Upgrade transformers 4.45
      
      * Flashing mllama.
      
      * Starting to get there.
      
      * Working state.
      
      * Integration tests for mllama (cutting to 10 tokens because there seems
      to be instability after that, meaning the size of the batch matters).
      
      * Updating model link.
      
      * Earlier assert.
      
      * Fix vlm ?
      
      * remove log.
      
      * Force ignore all images but last.
      
      * Default dtype bfloat16.
      
      * Update integration test after switch to bf16.
      
      * Remove dead code.
      
      * Removed dead code.
      
      * Upgrade the flake to latest transformers/tokenizers
      
      * Move to hf tgi-nix
      
      * Upgrade to 0.5.0
  24. 30 Sep, 2024 2 commits
    • feat: support phi3.5 moe (#2479) · 93a7042d
      drbh authored
      
      
      * feat: support phi3.5 moe model loading
      
      * fix: prefer llama base model and improve rotary logic
      
      * feat: return reasonable generation and add integration test
      
      * fix: run lint and update docs
      
      * fix: rerun lint for openapi docs
      
      * fix: prefer do_sample false unless temp is set by user, and update chat tests
      
      * fix: small typo adjustments
      
      * fix: consolidate long rope paths
      
      * fix: revert greedy by default and test changes
      
      * Vendor configuration so that we don't have to `trust_remote_code`
      
      * Use SparseMoELayer
      
      * Add support for dense MoE
      
      * Some type annotations
      
      * Add the usual model tests
      
      * Ruff.
      
      ---------
      Co-authored-by: Daniël de Kok <me@danieldk.eu>
      Co-authored-by: Nicolas Patry <patry.nicolas@protonmail.com>
    • Add support for GPTQ-quantized MoE models using MoE Marlin (#2557) · 90a1d04a
      Daniël de Kok authored
      This change adds support for MoE models that use GPTQ quantization.
      Currently only models with the following properties are supported:
      
      - No `desc_act` with tensor parallelism, unless `group_size=-1`.
      - No asymmetric quantization.
      - No AWQ.
  25. 24 Sep, 2024 1 commit
  26. 19 Sep, 2024 1 commit
    • Stream options. (#2533) · f512021e
      Nicolas Patry authored
      * Stream options.
      
      * Fetch stuff from nix integration test for easier testing.
      
      * Adding the assert.
      
      * Only send the usage when asked for.
      
      * Update the docs.
      
      * Impure test because we need network.
      
      * develop.
      
      * Optional usage.
      
      * Fixes.
      
      * Workflow
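
      "Only send the usage when asked for" follows the OpenAI streaming scheme,
      where a `stream_options` object opts in to a final usage chunk. A hedged
      client-side sketch; URL and model are placeholders:

      ```python
      import json
      import requests

      payload = {
          "model": "tgi",
          "messages": [{"role": "user", "content": "Say hi."}],
          "stream": True,
          # Without this, no usage chunk is emitted on the stream.
          "stream_options": {"include_usage": True},
      }

      with requests.post("http://localhost:3000/v1/chat/completions",
                         json=payload, stream=True) as resp:
          for line in resp.iter_lines():
              if line and line != b"data: [DONE]":
                  chunk = json.loads(line.removeprefix(b"data: "))
                  # Usage arrives only in the final chunk when requested.
                  if chunk.get("usage"):
                      print(chunk["usage"])
      ```
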
  27. 17 Sep, 2024 1 commit
    • Move to moe-kernels package and switch to common MoE layer (#2511) · ce85efa9
      Daniël de Kok authored
      * Move to moe-kernels package and switch to common MoE layer
      
      This change introduces the new `moe-kernels` package:
      
      - Add `moe-kernels` as a dependency.
      - Introduce a `SparseMoELayer` module that can be used by MoE
        models.
      - Port over Mixtral and Deepseek.
      
      * Make `cargo check` pass
      
      * Update runner
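
      To illustrate what a sparse MoE layer computes (a generic reference sketch,
      not the actual `SparseMoELayer` API): each token is routed to its top-k
      experts and the expert outputs are combined with softmax-normalized router
      weights.

      ```python
      import torch

      def sparse_moe(x, router, experts, top_k=2):
          scores = router(x)                         # (tokens, n_experts)
          weights, idx = scores.topk(top_k, dim=-1)  # pick k experts per token
          weights = weights.softmax(dim=-1)
          out = torch.zeros_like(x)
          for k in range(top_k):
              for e, expert in enumerate(experts):
                  mask = idx[:, k] == e
                  if mask.any():
                      # Weight each expert's output by its router score.
                      out[mask] += weights[mask, k, None] * expert(x[mask])
          return out

      x = torch.randn(8, 16)
      router = torch.nn.Linear(16, 4)
      experts = [torch.nn.Sequential(torch.nn.Linear(16, 32), torch.nn.GELU(),
                                     torch.nn.Linear(32, 16)) for _ in range(4)]
      print(sparse_moe(x, router, experts).shape)  # torch.Size([8, 16])
      ```
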
  28. 16 Sep, 2024 2 commits
    • Adding a test for FD. (#2516) · 38fcafcf
      Nicolas Patry authored
      * Adding a test for FD.
      
      * Fixing flashdecoding (empty batch doesn't work).
      
      * Fixing the invalid popping.
      
      * Fixing radix with block_size > 1
      
      * Last reference.
      
      * Use an actual hash.
      
      * Update hash for slice.len() == 1
      
      * Update the locks.
      
      * Increasing docker timeout.
    • Add tests for Mixtral (#2520) · 77746552
      Daniël de Kok authored
      Disable by default because CI runners do not have enough GPUs.
  29. 11 Sep, 2024 1 commit