- 23 Jul, 2024 5 commits
-
-
Nicolas Patry authored
* Preparing for release. * Updating docs. * Fixing token within the docker image for the launcher.
-
shaltielshmid authored
* Support passing head_dim through config * Using `head_dim` as a fallback is necessary since it's a non-standard key in MistralConfig (as defined in transformers). * Shorter diff. --------- Co-authored-by: Nicolas Patry <patry.nicolas@protonmail.com>
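A minimal sketch of the fallback logic described above, assuming a transformers-style config object (the helper name is illustrative, not the exact TGI code):

```python
# Sketch: prefer an explicit `head_dim` from the config, otherwise derive it.
# `config` is assumed to be a transformers-style MistralConfig-like object.
def resolve_head_dim(config) -> int:
    head_dim = getattr(config, "head_dim", None)
    if head_dim is not None:
        return head_dim
    # Fallback for configs where `head_dim` is not a standard key.
    return config.hidden_size // config.num_attention_heads
```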
-
Daniël de Kok authored
* Add support for repacking AWQ weights for GPTQ-Marlin So far we couldn't support AWQ because virtually all AWQ models use asymmetric quantization, which GPTQ-Marlin did not support. GPTQ-Marlin has recently added support for AWQ repacking and AWQ asymmetric quantization (zero_point=True). This change updates all GPTQ-Marlin kernels from upstream and wires up AWQ support. For now, enabling AWQ using Marlin requires running TGI with `--quantize gptq`. * Enable Marlin for supported AWQ configurations by default This makes the AWQ -> GPTQ repack test redundant, since we are now testing this with the regular AWQ test.
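For illustration, an eligibility check for serving an AWQ checkpoint through the GPTQ-Marlin kernels (enabled via `--quantize gptq` as noted above) could look roughly like this; the field names follow the usual AWQ `quantization_config` layout and the exact conditions in TGI may differ:

```python
# Sketch: decide whether an AWQ checkpoint can be served via the GPTQ-Marlin
# kernels. Field names follow the common AWQ `quantization_config` layout;
# the real TGI logic may check more conditions.
def awq_can_use_marlin(quant_config: dict, device_capability: tuple[int, int]) -> bool:
    major, _ = device_capability
    return (
        quant_config.get("quant_method") == "awq"
        and quant_config.get("bits") == 4
        and quant_config.get("zero_point", True)  # asymmetric AWQ, now supported
        and major >= 8  # Marlin kernels require Ampere or newer
    )
```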
-
OlivierDehaene authored
* fix(l4): fix fp8 logic on l4 * also quant weights with single scale * use marlin even on 89
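As a rough illustration of quantizing weights "with single scale" (per-tensor) in FP8, here is a sketch assuming PyTorch's `float8_e4m3fn` dtype; it is not the actual TGI kernel path:

```python
import torch

# Sketch: per-tensor FP8 (e4m3) quantization with a single scale.
FP8_E4M3_MAX = 448.0  # largest finite value representable in e4m3

def quantize_fp8_per_tensor(weight: torch.Tensor):
    scale = weight.abs().max().clamp(min=1e-12) / FP8_E4M3_MAX
    qweight = (weight / scale).clamp(-FP8_E4M3_MAX, FP8_E4M3_MAX).to(torch.float8_e4m3fn)
    return qweight, scale  # dequantize as qweight.to(torch.float16) * scale
```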
-
Nicolas Patry authored
-
- 22 Jul, 2024 6 commits
-
-
Adrien authored
-
Nicolas Patry authored
* Softcapping for gemma2. * Less clutter. * No access to transformers config, only config_dict here. * 0.0 is the null value in the C++ API.
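For context, logit softcapping as used by Gemma 2 squashes logits with a tanh, and a cap of 0.0 disables it (the "null value" mentioned above). A minimal sketch, not the C++/flash-attention path itself:

```python
import torch

# Sketch: logit softcapping as used by Gemma 2. A cap of 0.0 means "disabled",
# matching the null value passed through to the C++ API mentioned above.
def softcap(logits: torch.Tensor, cap: float) -> torch.Tensor:
    if cap == 0.0:
        return logits
    return cap * torch.tanh(logits / cap)
```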
-
OlivierDehaene authored
* fix(server): fix fp8 weight loading * fixed scales loading * update snap * revert default dtype
-
Adrien authored
* test new instances * improve build ci --------- Signed-off-by: Adrien <adrien@huggingface.co>
-
Erik Kaunismäki authored
Update README.md to point to the huggingface_hub inference clients instead
-
icyboy™ authored
* Update idefics_causal_lm.py to fix syntax issues * fix dbrx & opt model prefix bug * Hotfix: fix use of unquantized weights in Mixtral GQA loading
-
- 21 Jul, 2024 1 commit
-
-
OlivierDehaene authored
-
- 20 Jul, 2024 3 commits
-
-
OlivierDehaene authored
* feat(fp8): add support for fbgemm * allow loading fp8 weights directly * update outlines * fix makefile * build fbgemm * avoid circular import and fix dockerfile * add default dtype * refactored weights loader * fix auto conversion * fix quantization config parsing * force new nccl on install * missing get_weights implementation * increase timeout
-
Daniël de Kok authored
-
Adrien authored
* re-push to internal registry * fix name * debug/wip iterations (squashed) * add debug * revert tests * last reverts * another one --------- Signed-off-by: Adrien <adrien@huggingface.co>
-
- 19 Jul, 2024 9 commits
-
-
Daniël de Kok authored
Deepseek V2 is a MoE model from Deepseek. Relevant variations compared to other models: - Grouped top-K in expert selection (see the sketch below). - mscale in yarn is calculated using the `mscale` and `mscale_all_dim` configuration options. - `mscale_all_dim` is also used in scaling attention softmax. - Permuting of the query/key representations before applying rotary embeddings. - Some projections cannot be sharded (`q_a_proj`, `kv_a_proj_with_mqa`), so we need weight loaders that support quantized weights. To this end `{Weights,WeightLoader}.get_weight` was added. - The query/key head dimensionality differs from that of the value, so we need to pad during attention. - Heads of size 192 need an extension to our paged attention fork, and we need to ensure that the KV cache is allocated with the correct size. - Shared experts.
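A rough sketch of the grouped top-K routing mentioned in the first bullet: experts are partitioned into groups, the best groups are selected first, and the final top-k is taken only among experts in those groups. Shapes and names are illustrative, not the exact TGI implementation:

```python
import torch

# Sketch of grouped top-k expert selection (DeepSeek-V2 style routing).
def grouped_topk(scores: torch.Tensor, n_groups: int, topk_groups: int, topk: int):
    n_tokens, n_experts = scores.shape
    # Best score per group, then pick the top `topk_groups` groups per token.
    group_scores = scores.view(n_tokens, n_groups, -1).max(dim=-1).values
    group_idx = group_scores.topk(topk_groups, dim=-1).indices
    group_mask = torch.zeros_like(group_scores).scatter_(1, group_idx, 1.0)
    # Mask out experts that are not in a selected group, then do a regular top-k.
    expert_mask = group_mask.unsqueeze(-1).expand(
        n_tokens, n_groups, n_experts // n_groups
    ).reshape(n_tokens, n_experts)
    masked_scores = scores.masked_fill(expert_mask == 0, float("-inf"))
    weights, experts = masked_scores.topk(topk, dim=-1)
    return weights, experts
```
-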
drbh authored
* fix: adjust default tool choice * feat: improve tool choice syntax and response parsing/errors * fix: remove dev tests * feat: add ToolChoice to docs
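For reference, exercising tool choice through the OpenAI-compatible chat endpoint might look roughly like this; the URL, model name, and tool schema are placeholders, not values from the commit:

```python
import requests

# Illustrative request against TGI's OpenAI-compatible chat endpoint.
payload = {
    "model": "tgi",
    "messages": [{"role": "user", "content": "What's the weather in Paris?"}],
    "tools": [
        {
            "type": "function",
            "function": {
                "name": "get_weather",
                "description": "Look up the current weather for a city",
                "parameters": {
                    "type": "object",
                    "properties": {"city": {"type": "string"}},
                    "required": ["city"],
                },
            },
        }
    ],
    "tool_choice": "auto",  # or the name of a specific tool
}
response = requests.post("http://localhost:8080/v1/chat/completions", json=payload)
print(response.json())
```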
-
Erik Kaunismäki authored
quick fix
-
Erik Kaunismäki authored
* draft of usage stats * fix wrong link * launcher doesn't need sysinfo dep * only tokenizer class instead of whole struct * unused import * fix clippy errors * update openAPI doc * cargo fmt * fix error in passing flags to router * try again to update docs * run pre-commit locally * Update router/src/main.rs * on crash use anonymous error event * delete json_output and ngrok * more robust way of checking if is in container * more robust nvidia-smi * parse xpu more robustly * fix errors * add nvidia-smi details in docs * cargo fmt * fix clippy * should make docs check pass * Update router/src/usage_stats.rs * error reason can't be in nested json * cargo fmt --------- Co-authored-by: Hugo Larcher <hugo.larcher@huggingface.co> Co-authored-by: Erik Kaunismäki <erikkaum@Eriks-MacBook-Pro.local>
-
Daniël de Kok authored
-
Daniël de Kok authored
-
Daniël de Kok authored
-
Daniël de Kok authored
-
Daniël de Kok authored
* Improve the handling of quantized weights Handling of quantized weights was split between two mechanisms: - For quantized checkpoints, we used the new weight loader infrastructure. - For quantization while loading (EETQ, FP8, bitsandbytes) we instead relied on conditionals in `get_linear`. Weight loaders support context managers to selectively load particular layers with different weight loaders, which is useful for models like Idefics2 AWQ, which uses a quantized text model but unquantized vision and connector models. However, the context manager would be overridden by `get_linear`, which string-checks `quantizer`. Also, the context manager would not work with EETQ, FP8, and bitsandbytes. This change migrates all quantizers to the weight loader infrastructure. This has several benefits: - We can use context managers with all quantizers. - All the implementation details move down to the quantizer layers, so `get_linear` does not need to know how to handle quantizer linear layers. - All quantizer weights are strongly typed, we don't pass around raw tensors. - We don't have to pass around the `quantizer` string everywhere. * Exclude non-MLP layers when using FP8 quantization with Llama
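A rough sketch of the context-manager pattern described above, where the default weight loader can be temporarily swapped for particular submodels; class and method names are illustrative, not the exact TGI API:

```python
from contextlib import contextmanager

# Sketch: a Weights-like holder keeps the active loader, and callers can
# temporarily swap it (e.g. quantized text model vs. unquantized vision tower).
class Weights:
    def __init__(self, loader):
        self.loader = loader

    @contextmanager
    def use_loader(self, loader):
        previous = self.loader
        self.loader = loader
        try:
            yield
        finally:
            self.loader = previous

    def get_weights_col(self, prefix: str):
        # Delegate quantizer-specific processing to the active loader.
        return self.loader.load_col(self, prefix)
```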
-
- 18 Jul, 2024 1 commit
-
-
OlivierDehaene authored
-
- 16 Jul, 2024 3 commits
-
-
Daniël de Kok authored
Fixes #2236.
-
Daniël de Kok authored
-
Daniël de Kok authored
Fixes #2036.
-
- 15 Jul, 2024 3 commits
-
-
Hugo Larcher authored
Remove bitsandbytes installation when running cpu-only install
-
Erik Kaunismäki authored
* fix to not ignore HUGGINGFACE_HUB_CACHE in cache * delete printlns * delete newlines * maybe fix trailing whitespace
-
drbh authored
* feat: simple mistral lora integration tests * fix: include args in docker launcher * fix: disable cuda graphs with lora and warn * fix: adjust docs and precommit issues * fix: re update docs
-
- 12 Jul, 2024 2 commits
-
-
Daniël de Kok authored
Packing of asymmetric quantization is broken: all (q)zeros values of `0` get reset to `1`, resulting in a loss of accuracy. So we use symmetric quantization instead. To be able to distinguish models with symmetric and asymmetric quantization, a new config tensor `gptq_sym` is added. If this tensor is not present, we assume `sym=False`.
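A sketch of how a checkpoint-level marker like `gptq_sym` could be read at load time, defaulting to asymmetric when the tensor is absent; the helper is illustrative, not the exact TGI code:

```python
from safetensors import safe_open

# Sketch: read the `gptq_sym` marker from a safetensors checkpoint and default
# to sym=False when it is missing, as described above.
def read_gptq_sym(checkpoint_path: str) -> bool:
    with safe_open(checkpoint_path, framework="pt") as f:
        if "gptq_sym" in f.keys():
            return bool(f.get_tensor("gptq_sym").item())
    return False  # absent tensor: assume asymmetric quantization
```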
-
SeongBeomLEE authored
-
- 11 Jul, 2024 2 commits
-
-
drbh authored
* fix: append DONE message to chat stream * fix: update completions endpoint
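The terminator follows the usual OpenAI-style SSE convention; a minimal client-side sketch (URL and payload are placeholders):

```python
import json
import requests

# Sketch: read the SSE chat stream until the final "data: [DONE]" terminator.
payload = {"model": "tgi", "messages": [{"role": "user", "content": "Hi"}], "stream": True}
with requests.post("http://localhost:8080/v1/chat/completions", json=payload, stream=True) as r:
    for line in r.iter_lines():
        if not line:
            continue
        data = line.decode("utf-8").removeprefix("data:").strip()
        if data == "[DONE]":
            break
        chunk = json.loads(data)
        print(chunk["choices"][0]["delta"].get("content") or "", end="")
```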
-
Daniël de Kok authored
Use FP8 GPTQ-Marlin kernels to enable FP8 support on CUDA GPUs with compute capability >=8.0 and <8.9. Co-authored-by: Florian Zimmermeister <flozi00.fz@gmail.com>
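A sketch of the capability gate implied by that range (>=8.0 for the Marlin kernels, <8.9 because newer GPUs have native FP8 support); the helper name is illustrative:

```python
import torch

# Sketch: use the Marlin FP8 path only on GPUs with compute capability in
# [8.0, 8.9), i.e. Ampere-class cards without native FP8 tensor cores.
def should_use_marlin_fp8() -> bool:
    major, minor = torch.cuda.get_device_capability()
    capability = major + minor / 10
    return 8.0 <= capability < 8.9
```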
-
- 09 Jul, 2024 4 commits
-
-
Daniël de Kok authored
Quantized weights were loaded in the `Weights` class, but this was getting quite unwieldy, where every higher level method to load weights was a long conditional to cover all the different quantizers. This change moves loading of quantized weights out of the `Weights` class. This is done by defining a simple `WeightsLoader` interface that is implemented by `Exl2WeightsLoader`, `GPTQWeightsLoader`, and `MarlinWeightsLoader`. These implementations are in the quantizers' respective modules. The `Weights` class provides the low-level load operations (such as loading tensors or sharded tensors), but delegates loads that need quantizer-specific weight processing to a loader. The loaders still use the low-level functionality provided by `Weights`. I initially tried making a hierarchy where a class like `GPTQWeights` would inherit from `Weights`. But it is not very flexible (e.g. does not work well with the new weight storage mock used in tests) and the implicit indirections made the code harder to follow.
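Roughly, the split described above looks like the following; method names are simplified relative to the real `WeightsLoader` protocol:

```python
from abc import ABC, abstractmethod

# Sketch: `Weights` keeps the low-level tensor access, while quantizer-specific
# handling lives behind a small loader interface. Names are simplified.
class WeightsLoader(ABC):
    @abstractmethod
    def get_weights_col(self, weights, prefix: str):
        """Load and post-process the column-sharded weights for `prefix`."""

    @abstractmethod
    def get_weights_row(self, weights, prefix: str):
        """Load and post-process the row-sharded weights for `prefix`."""

class DefaultWeightsLoader(WeightsLoader):
    def get_weights_col(self, weights, prefix):
        # Unquantized case: just return the raw sharded tensor.
        return weights.get_sharded(f"{prefix}.weight", dim=0)

    def get_weights_row(self, weights, prefix):
        return weights.get_sharded(f"{prefix}.weight", dim=1)
```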
-
Nicolas Patry authored
* Updating the self check * Fix. * Revert the CLI. * cli. * Space. * Revert cargo update.
-
vinkamath authored
Co-authored-by: Vinayak Kamath <Vinayak.Kamath@target.com>
-
Nicolas Patry authored
-
- 08 Jul, 2024 1 commit
-
-
Guillaume LEGENDRE authored
* Update build.yaml * Update build.yaml * change to S3 cache * change to CPU Runners * remove comments
-