- 31 May, 2024 1 commit
-
-
Arthur authored
* helper
* Apply suggestions from code review
* updates
* more doc

Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
-
- 30 May, 2024 4 commits
-
-
Younes Belkada authored
remove `IS_GITHUB_CI`
-
Younes Belkada authored
Replace all occurrences of `load_in_8bit` with bnb config
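The `load_in_8bit` migration above can be sketched as follows (a minimal illustration; the model identifier is a placeholder, and the actual load is shown as a comment since it requires model weights and the bitsandbytes backend):

```python
from transformers import BitsAndBytesConfig

# Deprecated style passed the flag directly:
#   AutoModelForCausalLM.from_pretrained(model_id, load_in_8bit=True)
# Current style wraps quantization options in a BitsAndBytesConfig and
# passes it via `quantization_config`:
bnb_config = BitsAndBytesConfig(load_in_8bit=True)
# model = AutoModelForCausalLM.from_pretrained(model_id, quantization_config=bnb_config)
```

Grouping the quantization options in one config object is what makes the validation added in the commit below possible.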
-
zspo authored
fix get_scheduler args
-
Younes Belkada authored
add validation for bnb config
-
- 29 May, 2024 12 commits
-
-
Yih-Dar authored
* remove
* build

Co-authored-by: ydshieh <ydshieh@users.noreply.github.com>
-
Dhruv Pai authored
* Modified test
* Added on_optimizer_step to callbacks
* Move callback after step is called
* Added on optimizer step callback
-
Joao Gante authored
add Raushan
-
Younes Belkada authored
Update overview.md
-
Yih-Dar authored
fix

Co-authored-by: ydshieh <ydshieh@users.noreply.github.com>
-
Zach Mueller authored
-
Matt authored
-
Matt authored
* Fix env.py in cases where torch is not present
* Simplify the fix (and avoid some issues)
-
Huazhong Ji authored
* Improve `transformers-cli env` reporting
* Move the line `"Using GPU in script?": "<fill in>"` into the if conditional statement
* Same option for npu
-
Lucain authored
* Fix has_file in offline mode
* Harmonize env variable for offline mode
* Switch to HF_HUB_OFFLINE
* Fix test
* Revert test_offline to test TRANSFORMERS_OFFLINE
* Add new offline test
* Merge conflicts
* Docs
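The offline-mode harmonization above amounts to preferring a single hub-wide environment variable (a sketch based on the variable names in the commit message; the older variable remains tested for backward compatibility):

```shell
# HF_HUB_OFFLINE is the harmonized, hub-wide offline switch;
# TRANSFORMERS_OFFLINE is the older, library-specific one.
export HF_HUB_OFFLINE=1
```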
-
Younes Belkada authored
* add mistral v3 conversion script
* Update src/transformers/models/mistral/convert_mistral_weights_to_hf.py
* fixup

Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>
-
Raushan Turganbay authored
* quanto latest version was refactored
* add error msg
* incorrect compare sign
* Update src/transformers/cache_utils.py

Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
-
- 28 May, 2024 22 commits
-
-
amyeroberts authored
* Deprecate models: graphormer, time_series_transformer, xlm_prophetnet, qdqbert, nat, ernie_m, tvlt, nezha, mega, jukebox, vit_hybrid, x_clip, deta, speech_to_text_2, efficientformer, realm, gptsan_japanese
* Fix up
* Fix speech2text2 imports
* Make sure message isn't indented
* Fix docstrings
* Correctly map for deprecated models from model_type
* Uncomment out
* Add back time series transformer and x-clip
* Import fix and fix-up
* Fix up with updated ruff
-
Younes Belkada authored
Update _redirects.yml
-
Younes Belkada authored
* fix flan t5 tests
* better format
-
Jonny Li authored
-
Albert Villanova del Moral authored
-
Yih-Dar authored
fix

Co-authored-by: ydshieh <ydshieh@users.noreply.github.com>
-
Younes Belkada authored
Update modeling_opt.py
-
Younes Belkada authored
add accelerate
-
Sigbjørn Skjæret authored
* Render chat template tojson filter as unicode
* ruff--
-
Younes Belkada authored
* add peft references
* Update docs/source/en/peft.md
-
Raushan Turganbay authored
* fix tests
* style
* Update tests/generation/test_utils.py

Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
-
Lysandre Debut authored
* Fix failing tokenizer tests
* Use small tokenizer
* Fix remaining reference
-
NielsRogge authored
* Update docs
* Add PaliGemma resources
* Address comment
-
Sina Taslimi authored
-
Pavel Iakubovskii authored
* Add test for multiple images
* Fix box rescaling
* [run slow] owlv2
-
Pavel Iakubovskii authored
Remove float64
-
oOraph authored
* Unit test to verify fix
* fix from_pretrained in offline mode when model is preloaded in cache
* minor: fmt

Signed-off-by: Raphael Glon <oOraph@users.noreply.github.com>
Co-authored-by: Raphael Glon <oOraph@users.noreply.github.com>
-
Hengwen Tong authored
* Remove backend checks in training_args.py
* Explicitly initialize the device

Co-authored-by: tonghengwen <tonghengwen@cambricon.com>
-
AP authored
Update quicktour.md to fix broken link

Missing '/' in attention mask link in the transformers quicktour
-
Clint Adams authored
-
Yih-Dar authored
fix

Co-authored-by: ydshieh <ydshieh@users.noreply.github.com>
-
Yih-Dar authored
use main

Co-authored-by: ydshieh <ydshieh@users.noreply.github.com>
-
- 27 May, 2024 1 commit
-
-
Yih-Dar authored
skip

Co-authored-by: ydshieh <ydshieh@users.noreply.github.com>
-