1. 04 Jul, 2024 1 commit
  2. 03 Jul, 2024 5 commits
  3. 02 Jul, 2024 6 commits
  4. 01 Jul, 2024 12 commits
  5. 28 Jun, 2024 1 commit
  6. 27 Jun, 2024 6 commits
  7. 25 Jun, 2024 9 commits
    • drbh · be2d3803
    • Add support for Marlin 2:4 sparsity (#2102) · f1f98e36
      Daniël de Kok authored
      This change adds support for 2:4 sparsity when using Marlin
      quantization. The 2:4 kernel is used when:
      
* the quantizer is `marlin`;
* the quantizer checkpoint format is `marlin_24`.
      
      Fixes #2098.
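The selection rule above is simple enough to state as a predicate. A minimal sketch (the function name and arguments are illustrative, not the repository's API):

```python
# Hypothetical sketch of the kernel-selection rule from the commit
# message: the 2:4 sparse Marlin kernel is used only when the quantizer
# is `marlin` AND the checkpoint format is `marlin_24`.

def use_marlin_24_kernel(quantize: str, checkpoint_format: str) -> bool:
    """Return True when the 2:4 sparse Marlin kernel applies."""
    return quantize == "marlin" and checkpoint_format == "marlin_24"
```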
    • Support AWQ quantization with bias (#2117) · 14980df2
      Daniël de Kok authored
      When the AWQ quantizer was used with a layer that uses a bias,
      the bias tensor was not correctly passed/used. Instead, the
      value `true`/`1.0` was added to the linear transformation.
      
      Correctly pass through the bias when it is not `None`.
      
      Fixes #2106.
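The fix boils down to forwarding the bias tensor itself rather than its truthiness. A minimal plain-Python sketch of the corrected behaviour (class and helper names are stand-ins, not the repository's code):

```python
# Illustrative sketch of the fix: pass the bias *tensor* through to the
# linear layer when it exists, instead of the boolean `bias is not None`
# (which previously leaked `True`/`1.0` into the computation).

class Linear:
    def __init__(self, weight, bias=None):
        self.weight = weight
        self.bias = bias  # a tensor-like value or None, never a bool

    def forward(self, x):
        # y = W x, then add the bias vector element-wise if present
        y = [sum(w * xi for w, xi in zip(row, x)) for row in self.weight]
        if self.bias is not None:
            y = [yi + b for yi, b in zip(y, self.bias)]
        return y

def get_linear(weight, bias):
    # Buggy version did: Linear(weight, bias is not None)
    # Fixed version forwards the actual bias value (or None):
    return Linear(weight, bias)
```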
    • Enable multiple LoRA adapters (#2010) · 04e1af94
      drbh authored
      
      
      * feat: first draft load multiple lora
      
      * feat: load weights within layer and refactor lora pass
      
      * fix: refactor and reduce lora math
      
      * feat: baseline impl single request multi lora support
      
      * feat: prefer lorax implementation and port loading logic
      
      * fix: prefer adapter_data and refactors
      
* feat: prefer lorax's custom punica kernels and add mlp loras
      
      * fix: adjust batch for bgmv
      
      * fix: adjust adapter_segments logic when in batch
      
      * fix: refactor and move changes to v3 proto
      
      * fix: pass model_id for all flash causal lms
      
      * fix: pass model_id for all causal and seq2seq lms
      
      * fix: add model_id to model test
      
      * feat: add lora support to mistral and refactors
      
      * feat: prefer model id in request
      
      * fix: include rust code for adapter id
      
      * feat: bump launcher and add new lora docs
      
      * feat: support base model generation and refactors
      
      * fix: rename doc to retry ci build
      
* feat: support for vlm models
      
      * fix: add adapter_data param and avoid missing layers
      
      * fix: add adapter_data param to phi and neox
      
      * fix: update all models forwards to include adapter_data
      
      * fix: add model_id to IdeficsCausalLM
      
      * Update lora.md
      
      Fixed a typo
      
      * Update lora.md
      
      Fixing spam image
      
      * fix: add lora kernel to dockerfile, support running without kernels and refactors
      
      * fix: avoid dockerfile conflict
      
      * fix: refactors and adjust flash llama lora logic
      
      * fix: skip llama test due to CI issue (temp)
      
      * fix: skip llama test CI (temp) 2
      
      * fix: revert skips and prefer updated ci token for tests
      
      * fix: refactors and helpful comments
      
      * fix: add noop in TensorParallelAdapterRowLinear too
      
      * fix: refactor and move shard_lora_weights logic
      
      * fix: exit early if no adapter_data
      
      ---------
Co-authored-by: Derek <datavistics@gmail.com>
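The commit series above lands per-request multi-LoRA batching: each request in a batch may reference a different adapter, applied as a low-rank update on top of the base weights. A rough plain-Python sketch of that idea (a loop stands in for the punica/bgmv batched kernels; all names are hypothetical):

```python
# Sketch of serving one batch where each request selects its own LoRA
# adapter: y_i = W x_i + scale * B_a(A_a x_i) for request i using
# adapter a. Requests with no adapter fall back to the base model.

def matvec(m, v):
    return [sum(w * x for w, x in zip(row, v)) for row in m]

def lora_forward(W, x_batch, adapter_ids, adapters, scale=1.0):
    """adapters maps id -> (A, B); adapter_ids[i] may be None (base)."""
    out = []
    for x, aid in zip(x_batch, adapter_ids):
        y = matvec(W, x)
        if aid is not None:
            A, B = adapters[aid]
            delta = matvec(B, matvec(A, x))  # low-rank update B(Ax)
            y = [yi + scale * di for yi, di in zip(y, delta)]
        out.append(y)
    return out
```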
    • Fix CI. (#2118) · a2a97b05
      Nicolas Patry authored
      Fix clippy.
    • Add pytest release marker (#2114) · fc9c3153
      Daniël de Kok authored
      * Add pytest release marker
      
      Annotate a test with `@pytest.mark.release` and it only gets run
      with `pytest integration-tests --release`.
      
      * Mark many models as `release` to speed up CI
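A marker like this is typically wired up in `conftest.py` with a custom command-line option plus collection-time skipping. A hedged sketch of one common pattern (the repository's actual wiring may differ):

```python
# Hypothetical conftest.py sketch for the `release` marker described
# above: tests marked @pytest.mark.release are skipped unless pytest is
# invoked with --release.

import pytest

def pytest_addoption(parser):
    parser.addoption(
        "--release", action="store_true", default=False,
        help="run tests marked as release",
    )

def pytest_collection_modifyitems(config, items):
    if config.getoption("--release"):
        return  # --release given: run everything, including release tests
    skip = pytest.mark.skip(reason="needs --release option to run")
    for item in items:
        if "release" in item.keywords:
            item.add_marker(skip)
```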
    • fix cpu and xpu issue (#2116) · e563983d
      Wang, Yi authored
      
Signed-off-by: Wang, Yi A <yi.a.wang@intel.com>
    • Removing IPEX_AVAIL. (#2115) · 9e2fdf57
      Nicolas Patry authored
      * Removing IPEX_AVAIL.
      
Chose to unify CPU and XPU under `ipex`. Most of the code is identical
except in a very few spots.

The biggest differences are the kv-cache layout and the flash_xxx.py
files. Since those files should be removed soon and factored away, we
should not need them.
      
      * Forgot a few places.
      
      * Unrelated change.
      
      * Fixing HF_TOKEN.
      
      * HF_TOKEN
    • feat: add simple tests for weights (#2092) · 3f3b7ffd
      drbh authored
      * feat: add simple tests for weights
      
      * fix: adjust types and add tests
      
      * fix: adjust so all tests pass
      
      * feat: improve weight tests
      
      * fix: add missing tests and renames
      
      * fix: tweak shapes