- 05 Mar, 2024 11 commits
-
Arthur authored
* initial commit * start cleaning * small nits * small nits * current updates * add kernels * small refactoring little step * add comments * styling * nit * nits * Style * Small changes * Push dummy mamba simple slow * nit * Use original names * Use original names and remove norm * Updates for inference params * Style and updates * nits * Match logits * Add a test * Add expected generated text * nits doc, imports and styling * style * oops * don't install kernels, invite users to install the required kernels * let us use the original packages * styling * nits * fix some copies * update doc * fix-copies * styling done * nits * fix import check * runs but wrong cuda res * mamba CUDA works :) * fix the fast path * config naming nits * conversion script is not required at this stage * finish fixing the fast path: generation makes sense now! * nit * Let's start working on the CIs * style * better style * more nits * test nit * quick fix for now * nits * nit * nit * nit * nits * update test rest * fixup * update test * nit * some fixes * nits * update test values * fix styling * nit * support peft * integration tests require torch * also add slow markers * styling * choose forward wisely * nits * update tests * fix gradient checkpointing * fixup * nit * fix doc * check copies * fix the docstring * fix some more tests * style * fix beam search * add init scheme * update * nit * fix * fixup the doc * fix the doc * fixup * tentative update but slow is no longer good * nit * should we always use float32? * nits * revert wrong changes * res in float32 * cleanup * skip fmt for now * update generation values * update test values running original model * fixup * update tests + rename inference_params to cache_params + make sure training does not use cache_params * small nits * more nits * fix final CIs * style * nit doc * I hope final doc nits * nit * 🫠 * final touch! * fix torch import * Apply suggestions from code review Co-authored-by: Lysandre Debut <hi@lysand.re> * Apply suggestions from code review * fix fix and fix * fix base model prefix! * nit * Update src/transformers/models/mamba/__init__.py * Update docs/source/en/model_doc/mamba.md Co-authored-by: Lysandre Debut <hi@lysand.re> * nit --------- Co-authored-by: Lysandre Debut <hi@lysand.re>
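The new model is exposed through the standard model classes; a minimal generation sketch (checkpoint id assumed to be `state-spaces/mamba-130m-hf`; the optional CUDA kernels only accelerate the fast path, the slow path works without them):

```python
from transformers import AutoTokenizer, MambaForCausalLM

tokenizer = AutoTokenizer.from_pretrained("state-spaces/mamba-130m-hf")
model = MambaForCausalLM.from_pretrained("state-spaces/mamba-130m-hf")

inputs = tokenizer("Hey how are you doing?", return_tensors="pt")
# generate() carries the recurrent state via cache_params (renamed from inference_params)
out = model.generate(**inputs, max_new_tokens=10)
print(tokenizer.batch_decode(out, skip_special_tokens=True))
```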
-
Joao Gante authored
-
Arthur authored
* fix udop imports * sort imports
-
Arthur authored
* style * revert with RP * nit * exact revert
-
Arthur Zucker authored
-
Arthur authored
* update * ... * nits * arf * 🧼 * beat the last guy * style everyone
-
Fanli Lin authored
* use torch_device * Update tests/pipelines/test_pipelines_text_generation.py Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com> * fix style --------- Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>
-
Joshua Lochner authored
Update starcoder2 paper link
-
Raushan Turganbay authored
* fix max_length for blip * update also min length * fixes * add a comment * Update src/transformers/models/instructblip/modeling_instructblip.py Co-authored-by: Joao Gante <joaofranciscocardosogante@gmail.com> * Update src/transformers/models/blip_2/modeling_blip_2.py Co-authored-by: Joao Gante <joaofranciscocardosogante@gmail.com> * make fixup * fix length when user passed * remove else * remove brackets --------- Co-authored-by: Joao Gante <joaofranciscocardosogante@gmail.com>
-
Ilyas Moutawwakil authored
* added exllama kernel support for awq models * doc * style * Update src/transformers/modeling_utils.py Co-authored-by: Marc Sun <57196510+SunMarc@users.noreply.github.com> * refactor * moved exllama post init to after device dispatching * bump autoawq version * added exllama test * style * configurable exllama kernels * copy exllama_config from gptq * moved exllama version check to post init * moved to quantization dockerfile --------- Co-authored-by: Marc Sun <57196510+SunMarc@users.noreply.github.com>
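Per the commit's description the kernels are configurable from the quantization config; a sketch of opting in (checkpoint id hypothetical; the config mirrors the GPTQ-style `exllama_config` mentioned above):

```python
from transformers import AutoModelForCausalLM, AwqConfig

quantization_config = AwqConfig(version="exllama")  # select the exllama kernel variant
model = AutoModelForCausalLM.from_pretrained(
    "TheBloke/Mistral-7B-Instruct-v0.1-AWQ",  # hypothetical AWQ checkpoint
    quantization_config=quantization_config,
    device_map="auto",
)
```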
-
Younes Belkada authored
fix the bitwise or issue
-
- 04 Mar, 2024 14 commits
-
njackman-2344 authored
* torchscript and trainer md es translation * corrected md es files and even corrected spelling in en md * made es corrections to trainer.md * deleted entrenamiento... title on yml * placed entrenamiento in right place
-
NielsRogge authored
* First draft * More improvements * More improvements * More fixes * Fix copies * More improvements * More fixes * More improvements * Convert checkpoint * More improvements, set up tests * Fix more tests * Add UdopModel * More improvements * Fix equivalence test * More fixes * Redesign model * Extend conversion script * Use real inputs for conversion script * Add image processor * Improve conversion script * Add UdopTokenizer * Add fast tokenizer * Add converter * Update READMEs * Add processor * Add fully fledged tokenizer * Add fast tokenizer * Use processor in conversion script * Add tokenizer tests * Fix one more test * Fix more tests * Fix tokenizer tests * Enable fast tokenizer tests * Fix more tests * Fix additional_special_tokens of fast tokenizer * Fix tokenizer tests * Fix more tests * Fix equivalence test * Rename image to pixel_values * Rename seg_data to bbox * More renamings * Remove vis_special_token * More improvements * Add docs * Fix copied from * Update slow tokenizer * Update fast tokenizer design * Make text input optional * Add first draft of processor tests * Fix more processor tests * Fix decoder_start_token_id * Fix test_initialization * Add integration test * More improvements * Improve processor, add test * Add more copied from * Add more copied from * Add more copied from * Add more copied from * Remove print statement * Update README and auto mapping * Delete files * Delete another file * Remove code * Fix test * Fix docs * Remove asserts * Add doc tests * Include UDOP in exotic model tests * Add expected tesseract decodings * Add sentencepiece * Use same design as T5 * Add UdopEncoderModel * Add UdopEncoderModel to tests * More fixes * Fix fast tokenizer * Fix one more test * Remove parallelisable attribute * Fix copies * Remove legacy file * Copy from T5Tokenizer * Fix rebase * More fixes, copy from T5 * More fixes * Fix init * Use ArthurZ/udop for tests * Make all model tests pass * Remove UdopForConditionalGeneration from auto mapping * Fix more tests * fixups * more fixups * fix the tokenizers * remove unnecessary changes * nits * nits * replace truncate_sequences_boxes with truncate_sequences for fix-copies * nit current path * add a test for input ids * ids that we should get taken from c9f7a32f57440d90ff79890270d376a1cc0acb68 * nits converting * nits * apply ruff * nits * nits * style * fix slow order of addition * fix udop fast range as well * fixup * nits * Add docstrings * Fix gradient checkpointing * Update code examples * Skip tests * Update integration test * Address comment * Make fixup * Remove extra ids from tokenizer * Skip test * Apply suggestions from code review Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com> * Update year * Address comment * Address more comments * Address comments * Add copied from * Update CI * Rename script * Update model id * Add AddedToken, skip tests * Update CI * Fix doc tests * Do not use Tesseract for the doc tests * Remove kwargs * Add original inputs * Update casting * Fix doc test * Update question * Update question * Use LayoutLMv3ImageProcessor * Update organization * Improve docs * Update forward signature * Make images optional * Remove deprecated device argument * Add comment, add add_prefix_space * More improvements * Remove kwargs --------- Co-authored-by: ArthurZucker <arthur.zucker@gmail.com> Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>
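A minimal end-to-end sketch of the new UDOP classes (checkpoint id, image path, and prompt are assumptions; the processor pairs `LayoutLMv3ImageProcessor` with the T5-based tokenizer described above):

```python
from PIL import Image
from transformers import UdopProcessor, UdopForConditionalGeneration

processor = UdopProcessor.from_pretrained("microsoft/udop-large")
model = UdopForConditionalGeneration.from_pretrained("microsoft/udop-large")

image = Image.open("document.png").convert("RGB")  # hypothetical document scan
# With apply_ocr enabled in the image processor, words and boxes are extracted automatically.
inputs = processor(images=image, text="Question answering. What is the date?", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=20)
print(processor.batch_decode(outputs, skip_special_tokens=True)[0])
```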
-
Donggeun Yu authored
* Update ms_deform_attn_cuda.cu * Update ms_deform_attn_cuda.cuh * Update modeling_deformable_detr.py * Update src/transformers/models/deformable_detr/modeling_deformable_detr.py Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com> * Update modeling_deformable_detr.py * python utils/check_copies.py --fix_and_overwrite * Fix dtype mismatch error * Update test_modeling_deformable_detr.py * Update test_modeling_deformable_detr.py * Update modeling_deformable_detr.py * Update modeling_deformable_detr.py * Support DeformableDETR with bfloat16 * Add test code * Use AT_DISPATCH_FLOATING_TYPES_AND2 * Update tests/models/deformable_detr/test_modeling_deformable_detr.py Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com> * Update tests/models/deformable_detr/test_modeling_deformable_detr.py Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com> * Fix missing require_torch_bf16 function --------- Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
-
Yoach Lacombe authored
-
Sven Schultze authored
* Fix grad_norm unserializable tensor log failure * Fix origin of grad_norm logs to be in deepspeed get_global_grad_norm()
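The failure mode is that DeepSpeed's `get_global_grad_norm()` can return a tensor, which the JSON-serialized trainer logs cannot dump; a minimal sketch of the idea behind the fix (helper name hypothetical):

```python
import torch

def to_loggable(grad_norm):
    # Tensors are not JSON-serializable; log a plain Python float instead.
    if isinstance(grad_norm, torch.Tensor):
        return grad_norm.detach().item()
    return grad_norm
```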
-
Zach Mueller authored
Fully revert atomic checkpointing
-
Nick DeGroot authored
* 🐛 Fix oneformer instance post processing when using panoptic task type * ✅ Add unit test for oneformer instance post processing panoptic bug --------- Co-authored-by: Nick DeGroot <1966472+nickthegroot@users.noreply.github.com>
-
Sean (Seok-Won) Yi authored
Fix: Fixed the previous tracking URI setting logic to prevent clashes with original MLflow code. (#29096) * Changed logic for setting the tracking URI. The previous code was calling the `mlflow.set_tracking_uri` function regardless of whether or not the environment variable `MLFLOW_TRACKING_URI` is even set. This led to clashes with the original MLflow implementation and therefore the logic was changed to only call the function when the environment variable is explicitly set. * Check if tracking URI has already been set. The previous code did not consider the possibility that the tracking URI may already be set elsewhere and was therefore (erroneously) overriding previously set tracking URIs using the environment variable. * Removed redundant parentheses. Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com> * Fix docstring to reflect library convention properly. Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com> * Fix docstring to reflect library convention properly. "Unset by default" is the correct expression rather than "Default to `None`." Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com> --------- Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
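A sketch of the guarded logic described above (standard `mlflow` calls; the actual integration code lives in the trainer's MLflow callback):

```python
import os
import mlflow

env_uri = os.environ.get("MLFLOW_TRACKING_URI")
if env_uri is not None:
    # Only override the tracking URI when the user explicitly asked for it;
    # otherwise leave any URI configured elsewhere (or mlflow's default) alone.
    mlflow.set_tracking_uri(env_uri)
```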
-
NielsRogge authored
* First commit * Improve conversion script * Convert more checkpoints * Update src/transformers/models/sam/convert_sam_original_to_hf_format.py Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com> * Rename file * More updates * Update docstring * Update script --------- Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>
-
Traun Leyden authored
-
Y4hL authored
* Add mlx support * Fix import order and use def instead of lambda * Another fix for ruff format :) * Add detecting mlx from repr, add is_mlx_array
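A minimal sketch of an `is_mlx_array`-style check (guarding the import, since `mlx` is an optional dependency available only on Apple silicon):

```python
def is_mlx_array(x):
    try:
        import mlx.core as mx
    except ImportError:
        return False
    return isinstance(x, mx.array)
```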
-
Siming Dai authored
Fix mixtral load balancing loss Co-authored-by: dingkunbo <dingkunbo@baidu.com>
-
Poedator authored
update path to hub files: need to add `tree/` to the path for files at the HF hub. See example path: `https://huggingface.co/meta-llama/Llama-2-7b-hf/tree/main`
-
Fanli Lin authored
* use require_torch_gpu * enable on XPU
-
- 01 Mar, 2024 10 commits
-
David Valente authored
* Correct zero division error in inverse sqrt scheduler * default timescale to 10_000
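For context, a sketch of the schedule's lambda after the fix (follows the inverse-sqrt schedule in `transformers`; previously `timescale` defaulted to `num_warmup_steps`, which is zero when no warmup is used, hence the division by zero):

```python
import math

def inverse_sqrt_lambda(step, num_warmup_steps, timescale=None):
    if timescale is None:
        timescale = num_warmup_steps or 10_000  # 10_000 avoids timescale == 0
    if step < num_warmup_steps:
        return step / max(1, num_warmup_steps)  # linear warmup
    shift = timescale - num_warmup_steps
    return 1.0 / math.sqrt((step + shift) / timescale)  # inverse sqrt decay
```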
-
Zach Mueller authored
* Fix deprecated arg issue * Trainer check too * Check for dict or dataclass * Simplify, make config always AcceleratorConfig * Upstream to Trainer
-
Marc Sun authored
-
Jingya HUANG authored
enable subfolder
-
amyeroberts authored
* Fix yolos processing * Add back slow marker - protects for pycocotools in slow * Slow decorator goes above copied from header
-
Sanchit Gandhi authored
* [Whisper Tok] Update integration test * make style
-
Arthur authored
* use the generation config 🫠 * fixup
-
Younes Belkada authored
* fix ESM 8bit * Apply suggestions from code review Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com> * fixup --------- Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>
-
Leon Engländer authored
* LlamaForQuestionAnswering self.transformer->self.model * fix "Copied from" string * Llama QA model: set base_model_prefix = "transformer"
-
Song Fuchang authored
Expose `offload_buffers` parameter of `accelerate` to `PreTrainedModel.from_pretrained` method (#28755)
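A usage sketch (model id hypothetical): the flag is simply forwarded to `accelerate` when weights are offloaded:

```python
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-7b-hf",  # hypothetical large checkpoint
    device_map="auto",
    offload_folder="offload",
    offload_buffers=True,  # offload buffers along with parameters
)
```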
-
- 29 Feb, 2024 5 commits
-
Lucain authored
-
NielsRogge authored
Fix issue
-
Yih-Dar authored
* more fixes * more fixes --------- Co-authored-by: ydshieh <ydshieh@users.noreply.github.com>
-
Younes Belkada authored
Update test_modeling_llama.py
-
Younes Belkada authored
fix failing tests for peft integration
-