- 07 Aug, 2024 1 commit
-
-
almersawi authored
Co-authored-by: Islam Almersawi <islam.almersawi@openinnovation.ai>
-
- 06 Aug, 2024 3 commits
- 05 Aug, 2024 1 commit
-
-
drbh authored
* fix: attempt forward on flash attn2 to check hardware support
* fix: warn window_size_left when using flash attn 1
* fix: prefer version check over test op and avoid window_size_left if not flash attn2
* fix: improve conditional and error message
* fix: update sliding window conditional
* fix: simplify changes and revert model changes
* fix: avoid changing conditional
* fix: typo tweak
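The fix above replaces a trial forward pass with a version check before enabling sliding-window attention. A minimal sketch of that idea, with hypothetical names (`supports_sliding_window`, `resolve_window_size` are illustrative, not TGI's actual API):

```python
# Illustrative sketch: prefer an importable version check over probing
# hardware support with a trial forward pass.

def supports_sliding_window(flash_attn_version: tuple) -> bool:
    """Sliding-window attention (window_size_left) needs flash-attn >= 2."""
    return flash_attn_version >= (2, 0, 0)

def resolve_window_size(window_size_left: int, version: tuple) -> int:
    # Fall back to full attention (-1) and warn when flash-attn v1 is in use.
    if window_size_left != -1 and not supports_sliding_window(version):
        print("warning: window_size_left ignored with flash-attn v1")
        return -1
    return window_size_left
```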
-
- 01 Aug, 2024 2 commits
-
-
Daniël de Kok authored
- Always return the hidden states.
- Create the output tensor inside the `attention` and `paged_attention` functions. This removes the difference between how the output is handled between attention (output parameter) and paged attention (return value). It also removes the assumption that the attention implementation can write to an output tensor (in preparation for FlashInfer).
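A minimal sketch (not TGI's real kernels) of the interface change: both entry points now allocate and return their own output, instead of `attention` mutating a caller-provided `out` buffer while `paged_attention` returns a value.

```python
# Before: attention(q, k, v, out) wrote into a caller-owned tensor.
# After: both functions own their output, so backends that cannot write
# into an arbitrary buffer (e.g. FlashInfer) fit the same interface.

def attention(q, k, v):
    out = [0.0] * len(q)   # output allocated here, not by the caller
    # ... attention kernel fills `out` ...
    return out

def paged_attention(q, k_cache, v_cache):
    out = [0.0] * len(q)   # same convention: return value, no out-parameter
    # ... paged attention kernel fills `out` ...
    return out
```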
-
Wang, Yi authored
Signed-off-by: Wang, Yi A <yi.a.wang@intel.com>
-
- 31 Jul, 2024 2 commits
-
-
drbh authored
* MODEL_ID propagation fix
* fix: remove global model id
---------
Co-authored-by: root <root@tw031.pit.tensorwave.lan>
-
Daniël de Kok authored
The `GPTQWeightsLoader` was structured like this in pseudocode:

    if marlin:
        Set up tensors in a way that GPTQ-Marlin expects
    else:
        Set up tensors in a way that ExLlama/GPTQ/AWQ expect

However, the GPTQ-Marlin implementation details should really be in the `marlin` module. So move the former part out to a separate `GPTQMarlinWeightsLoader`.
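The shape of the refactor can be sketched as follows (class and function names are illustrative stand-ins, not TGI's actual API): the branch moves out of the loader's body and into a one-time loader selection, so each loader only knows its own tensor layout.

```python
# Hypothetical sketch: Marlin-specific tensor setup lives in its own loader
# (conceptually in the `marlin` module) instead of an if/else inside one class.

class GPTQWeightsLoader:
    def get_weights(self, name):
        # Set up tensors the way ExLlama/GPTQ/AWQ expect.
        return ("gptq", name)

class GPTQMarlinWeightsLoader:
    def get_weights(self, name):
        # Set up tensors the way GPTQ-Marlin expects.
        return ("marlin", name)

def loader_for(quantize: str, can_use_marlin: bool):
    # The marlin/non-marlin decision happens once, up front,
    # instead of inside every weight-loading call.
    if quantize == "gptq" and can_use_marlin:
        return GPTQMarlinWeightsLoader()
    return GPTQWeightsLoader()
```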
-
- 30 Jul, 2024 1 commit
-
-
Daniël de Kok authored
- Create `quantization_config` option in the model config.
- Don't store the quantizer config in tensors anymore.
-
- 29 Jul, 2024 2 commits
-
-
Erik Kaunismäki authored
* quick fix
* allow silent failure
* explicit todo that this is only short term
-
Daniël de Kok authored
-
- 26 Jul, 2024 2 commits
-
-
drbh authored
* feat: add ruff and resolve issue
* fix: update client exports and adjust after rebase
* fix: adjust syntax to avoid circular import
* fix: adjust client ruff settings
* fix: lint and refactor import check and avoid model enum as global names
* fix: improve fbgemm_gpu check and lints
* fix: update lints
* fix: prefer comparing model enum over str
* fix: adjust lints and ignore specific rules
* fix: avoid unneeded quantize check
-
Daniël de Kok authored
-
- 25 Jul, 2024 1 commit
-
-
Daniël de Kok authored
* Fix GPTQ autotune data type to be compatible with Torch 2.4.0
* Update poetry lock file
* Fix small PaliGemma logprob differences after the torch update
-
- 24 Jul, 2024 4 commits
-
-
drbh authored
* fix: refactor adapter weight loading and mapping
* feat: enable lora load from directory
* fix: adjust launcher for local lora adapters
* feat: improve weight loading and add tests
* fix: improve logging and rebase syntax issue
* fix: improve adapter merge comments and remove unused conditional
* fix: improve get_model_with_lora_adapters naming
* fix: comment typo
-
Daniël de Kok authored
The marlin.py file was getting large, split it up.
-
Wang, Yi authored
Fix use of unquantized weights in Cohere GQA loading; also enable the model on the Intel platform.
Signed-off-by: Wang, Yi A <yi.a.wang@intel.com>
-
Wang, Yi authored
* fix crash in multi-modal
* update according to review comment
* fix llava_next regression in latest main
---------
Signed-off-by: Wang, Yi A <yi.a.wang@intel.com>
-
- 23 Jul, 2024 5 commits
-
-
Daniël de Kok authored
* Add support for Llama 3 rotary embeddings
* Update transformers to 4.43
-
shaltielshmid authored
* Support passing head_dim through config
* Using `head_dim` as a fallback is necessary since it's a non-standard key in `MistralConfig` (as defined in transformers).
* Shorter diff.
---------
Co-authored-by: Nicolas Patry <patry.nicolas@protonmail.com>
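The fallback described above can be sketched in a few lines; the config class here is a stand-in for illustration, not transformers' actual `MistralConfig`:

```python
# Hypothetical config stand-in: head_dim is only set when explicitly provided,
# mirroring it being a non-standard key.
class Config:
    def __init__(self, hidden_size, num_attention_heads, head_dim=None):
        self.hidden_size = hidden_size
        self.num_attention_heads = num_attention_heads
        if head_dim is not None:
            self.head_dim = head_dim

def get_head_dim(config):
    # Prefer an explicit head_dim; otherwise fall back to the
    # conventional hidden_size // num_attention_heads derivation.
    return getattr(
        config, "head_dim", config.hidden_size // config.num_attention_heads
    )
```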
-
Daniël de Kok authored
* Add support for repacking AWQ weights for GPTQ-Marlin

So far we couldn't support AWQ because virtually all AWQ models use asymmetric quantization, which GPTQ-Marlin did not support. GPTQ-Marlin has recently added support for AWQ repacking and AWQ asymmetric quantization (zero_point=True). This change updates all GPTQ-Marlin kernels from upstream and wires up AWQ support. For now, enabling AWQ using Marlin requires running TGI with `--quantize gptq`.

* Enable Marlin for supported AWQ configurations by default

This makes the AWQ -> GPTQ repack test redundant, since we are now testing this with the regular AWQ test.
-
OlivierDehaene authored
* fix(l4): fix fp8 logic on l4
* also quant weights with single scale
* use marlin even on 89
-
Nicolas Patry authored
-
- 22 Jul, 2024 3 commits
-
-
Nicolas Patry authored
* Softcapping for gemma2.
* Less clutter.
* No access to transformers config, only config_dict here.
* 0.0 is the null value in the C++ API.
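Logit softcapping, as used by Gemma 2, squashes logits through `tanh` so they stay within ±cap. A minimal sketch, using 0.0 as the "disabled" sentinel to match the null value mentioned for the C++ API:

```python
import math

def softcap(logit: float, cap: float) -> float:
    # cap == 0.0 means softcapping is disabled (the null value).
    if cap == 0.0:
        return logit
    # Smoothly bound the logit to the interval (-cap, cap).
    return cap * math.tanh(logit / cap)
```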
-
OlivierDehaene authored
* fix(server): fix fp8 weight loading
* fixed scales loading
* update snap
* revert default dtype
-
icyboy™ authored
* Update idefics_causal_lm.py to fix syntax issues
* fix dbrx & opt model prefix bug
* Hotfix: fix use of unquantized weights in Mixtral GQA loading
-
- 21 Jul, 2024 1 commit
-
-
OlivierDehaene authored
-
- 20 Jul, 2024 1 commit
-
-
OlivierDehaene authored
* feat(fp8): add support for fbgemm
* allow loading fp8 weights directly
* update outlines
* fix makefile
* build fbgemm
* avoid circular import and fix dockerfile
* add default dtype
* refactored weights loader
* fix auto conversion
* fix quantization config parsing
* force new nccl on install
* missing get_weights implementation
* increase timeout
-
- 19 Jul, 2024 6 commits
-
-
Daniël de Kok authored
Deepseek V2 is a MoE model from Deepseek. Relevant variations compared to other models:

- Grouped top-K in expert selection.
- mscale in yarn is calculated using the `mscale` and `mscale_all_dim` configuration options.
- `mscale_all_dim` is also used in scaling attention softmax.
- Permuting of the query/key representations before applying rotary embeddings.
- Some projections cannot be sharded (`q_a_proj`, `kv_a_proj_with_mqa`), so we need weight loaders that support quantized weights. To this end, `{Weights,WeightLoader}.get_weight` was added.
- The query/key head dimensionality differs from that of the value, so we need to pad during attention.
- Heads with size 192 need an extension to our paged attention fork, and we need to ensure that the KV cache is allocated with the correct size.
- Shared experts.
-
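Grouped top-K expert selection, the first variation listed above, can be sketched in pure Python (illustrative only, not the model's actual routing code): experts are partitioned into groups, the best-scoring groups are kept, and the final top-K experts are drawn only from the surviving groups.

```python
def grouped_topk(scores, n_groups, topk_groups, topk):
    """Select `topk` experts, restricted to the `topk_groups` best groups."""
    group_size = len(scores) // n_groups
    # Score each group by its best expert.
    group_scores = [
        max(scores[g * group_size:(g + 1) * group_size]) for g in range(n_groups)
    ]
    best_groups = sorted(
        range(n_groups), key=lambda g: group_scores[g], reverse=True
    )[:topk_groups]
    # Candidate experts come only from the selected groups.
    candidates = [
        e for g in best_groups for e in range(g * group_size, (g + 1) * group_size)
    ]
    return sorted(candidates, key=lambda e: scores[e], reverse=True)[:topk]
```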
Daniël de Kok authored
-
Daniël de Kok authored
-
Daniël de Kok authored
-
Daniël de Kok authored
-
Daniël de Kok authored
* Improve the handling of quantized weights

Handling of quantized weights was split between two mechanisms:

- For quantized checkpoints, we used the new weight loader infrastructure.
- For quantization while loading (EETQ, FP8, bitsandbytes) we instead relied on conditionals in `get_linear`.

Weight loaders support context managers to selectively load particular layers with different weight loaders, which is useful for models like Idefics2 AWQ, which uses a quantized text model but unquantized vision and connector models. However, the context manager would be overridden by `get_linear`, which string-checks `quantizer`. Also, the context manager would not work with EETQ, FP8, and bitsandbytes.

This change migrates all quantizers to the weight loader infrastructure. This has several benefits:

- We can use context managers with all quantizers.
- All the implementation details move down to the quantizer layers; `get_linear` does not need to know how to handle quantized linear layers.
- All quantizer weights are strongly typed; we don't pass around raw tensors.
- We don't have to pass around the `quantizer` string everywhere.

* Exclude non-MLP layers when using FP8 quantization with Llama
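The context-manager mechanism described above can be sketched as follows. This is a hypothetical illustration, not TGI's actual `Weights` API: a default loader is temporarily swapped out for particular layers (e.g. an unquantized loader for a vision tower) and restored afterwards.

```python
import contextlib

class Weights:
    def __init__(self, loader):
        self.loader = loader

    @contextlib.contextmanager
    def use_loader(self, loader):
        # Temporarily replace the active weight loader, restoring the
        # previous one even if loading raises.
        previous, self.loader = self.loader, loader
        try:
            yield
        finally:
            self.loader = previous

    def get_weight(self, name):
        # Delegates to whichever loader is currently active.
        return (self.loader, name)
```

With all quantizers behind this interface, the override works uniformly instead of being bypassed by string checks in `get_linear`.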
-
- 18 Jul, 2024 1 commit
-
-
OlivierDehaene authored
-
- 16 Jul, 2024 3 commits
-
-
Daniël de Kok authored
Fixes #2236.
-
Daniël de Kok authored
-
Daniël de Kok authored
Fixes #2036.
-
- 15 Jul, 2024 1 commit
-
-
drbh authored
* feat: simple mistral lora integration tests
* fix: include args in docker launcher
* fix: disable cuda graphs with lora and warn
* fix: adjust docs and precommit issues
* fix: re update docs
-