- 07 Jul, 2023 2 commits
-
Yih-Dar authored
fix

Co-authored-by: ydshieh <ydshieh@users.noreply.github.com>
-
Arthur authored
* update
* add umt5 to auto tokenizer mapping
* nits
* fixup
* fix failing torch test
-
- 06 Jul, 2023 5 commits
-
Zach Mueller authored
Fix integration
-
Yih-Dar authored
fix

Co-authored-by: ydshieh <ydshieh@users.noreply.github.com>
-
Sourab Mangrulkar authored
* update ds and fsdp ckpt logic
* refactoring
* fix 🐛
* resolve comment
* fix issue with overriding of the fsdp config set by accelerate
-
Zhao Tianyu authored
* add attention dropout, post attention dropout, post mlp dropout to gpt-neox
* fix typo
* add documentation
* fix too long line
* ran the repo style and consistency checks:
  Checking/fixing src/transformers/models/gpt_neox/configuration_gpt_neox.py src/transformers/models/gpt_neox/modeling_gpt_neox.py
  python utils/custom_init_isort.py
  python utils/sort_auto_mappings.py
  doc-builder style src/transformers docs/source --max_len 119 --path_to_docs docs/source
  python utils/check_doc_toc.py --fix_and_overwrite
  running deps_table_update
  updating src/transformers/dependency_versions_table.py
  python utils/check_copies.py
  python utils/check_table.py
  python utils/check_dummies.py
  python utils/check_repo.py
  Checking all models are included.
  Checking all models are public.
  Checking all models are properly tested.
  Checking all objects are properly documented.
  Checking all models are in at least one auto class.
  Checking all names in auto name mappings are defined.
  Checking all keys in auto name mappings are defined in `CONFIG_MAPPING_NAMES`.
  Checking all auto mappings could be imported.
  Checking all objects are equally (across frameworks) in the main __init__.
  python utils/check_inits.py
  python utils/check_config_docstrings.py
  python utils/check_config_attributes.py
  python utils/check_doctest_list.py
  python utils/update_metadata.py --check-only
  python utils/check_task_guides.py
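The new dropout knobs are exercised through the model config. A minimal sketch, assuming the merged parameter names are `attention_dropout` and `hidden_dropout` (the commit message only names the dropout sites, so the exact names are an assumption):

```python
# Minimal sketch, assuming the new GPT-NeoX dropout knobs landed as
# `attention_dropout` and `hidden_dropout`; the names are inferred from the
# commit message, not confirmed against the merged diff.
from transformers import GPTNeoXConfig, GPTNeoXForCausalLM

config = GPTNeoXConfig(
    hidden_size=64,
    num_hidden_layers=2,
    num_attention_heads=4,
    intermediate_size=128,
    attention_dropout=0.1,  # dropout on attention probabilities (assumed name)
    hidden_dropout=0.1,     # dropout after attention / MLP outputs (assumed name)
)
model = GPTNeoXForCausalLM(config)  # tiny randomly initialized model for illustration
```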
-
Yuchao Dai authored
* LlamaTokenizer should be picklable
* make fixup
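What the fix guarantees is that the tokenizer survives a pickle round trip. A minimal sketch; the checkpoint path is hypothetical:

```python
# Minimal sketch of the behaviour this fix guarantees: LlamaTokenizer
# survives a pickle round trip. The checkpoint path is hypothetical.
import pickle

from transformers import LlamaTokenizer

tokenizer = LlamaTokenizer.from_pretrained("path/to/llama-tokenizer")  # hypothetical path
restored = pickle.loads(pickle.dumps(tokenizer))
assert restored.get_vocab() == tokenizer.get_vocab()
```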
-
- 05 Jul, 2023 6 commits
-
Matt authored
* Add Nucleotide Transformer notebooks and restructure lists
* Add missing linebreak!
-
Rafael Padilla authored
-
Yih-Dar authored
* fix
* fix
* fix
* [test all] commit
* [test all] commit
* [test all] commit

---------
Co-authored-by: ydshieh <ydshieh@users.noreply.github.com>
-
Nripesh Niketan authored
* Add mps function utils
* black formatting
* format fix
* Added MPS functionality to transformers
* format fix
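The core of such a utility is a guarded check against PyTorch's MPS backend. A minimal sketch; the helper name is illustrative, though `torch.backends.mps.is_available()` is the real PyTorch API:

```python
# Minimal sketch of an MPS availability helper; the function name is
# illustrative, but torch.backends.mps.is_available() is the real check.
import torch

def is_mps_available() -> bool:
    # Older torch builds lack the mps backend entirely, hence the hasattr guard.
    return hasattr(torch.backends, "mps") and torch.backends.mps.is_available()

device = torch.device("mps") if is_mps_available() else torch.device("cpu")
```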
-
Yih-Dar authored
fix

Co-authored-by: ydshieh <ydshieh@users.noreply.github.com>
-
Yih-Dar authored
fix

Co-authored-by: ydshieh <ydshieh@users.noreply.github.com>
-
- 04 Jul, 2023 7 commits
-
Sylvain Gugger authored
* Make warning disappear for remote code in pipelines
* Make sure it works twice in a row
* No need for that
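The call path involved is loading a pipeline backed by a checkpoint that ships custom modeling code; opting in explicitly with `trust_remote_code=True` is what the warning concerns. A hedged sketch; the checkpoint name is hypothetical:

```python
# Hedged sketch of the affected call path; the checkpoint name is hypothetical.
from transformers import pipeline

pipe = pipeline(
    "text-classification",
    model="some-org/model-with-custom-code",  # hypothetical repo shipping remote code
    trust_remote_code=True,  # explicit opt-in avoids the remote-code warning
)
```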
-
Sylvain Gugger authored
* Add finetuned_from tag in the autogenerated model card
* Update name
-
Rafael Padilla authored
* include the threshold alert in warning messages
* Update doc owlvit.md to include the post_process_object_detection function with threshold
* fix
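For reference, the documented usage passes the threshold explicitly when post-processing OWL-ViT detections. A sketch following the standard OWL-ViT example; the threshold value here is arbitrary:

```python
# Sketch of OWL-ViT post-processing with an explicit threshold, following the
# standard example from the model docs; the threshold value is arbitrary.
import requests
import torch
from PIL import Image

from transformers import OwlViTForObjectDetection, OwlViTProcessor

processor = OwlViTProcessor.from_pretrained("google/owlvit-base-patch32")
model = OwlViTForObjectDetection.from_pretrained("google/owlvit-base-patch32")

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
inputs = processor(text=[["a photo of a cat"]], images=image, return_tensors="pt")
outputs = model(**inputs)

# target_sizes is (height, width); PIL's image.size is (width, height).
target_sizes = torch.tensor([image.size[::-1]])
results = processor.post_process_object_detection(
    outputs, threshold=0.2, target_sizes=target_sizes
)
```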
-
amyeroberts authored
* Sort filenames alphabetically
* Add check for order
-
Prathik Rao authored
* open llama fp16 bug fix
* bug fix
* bug fixed
* make style
* Update modeling_llama.py
* apply formatting
* Address amy's comment

---------
Co-authored-by: Prathik Rao <prathikrao@microsoft.com@orttrainingdev8.d32nl1ml4oruzj4qz3bqlggovf.px.internal.cloudapp.net>
Co-authored-by: root <root@orttrainingdev8.d32nl1ml4oruzj4qz3bqlggovf.px.internal.cloudapp.net>
-
Sanchit Gandhi authored
* Fix audio feature extractor deps
* use audio utils window over torch window
-
Shahad Mahmud authored
precompiled_charsmap checking before adding to the normalizers' list for XLNetTokenizerFast conversion. (#24618)

* precompiled_charsmap checking before adding to the normalizers' list.
* precompiled_charsmap checking for all Sentencepiece tokenizer models
* precompiled_charsmap checking for SPM tokenizer models - correct formatting
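The guard itself is simple: only add a `Precompiled` normalizer when the SentencePiece model actually ships a charsmap. A minimal sketch of the idea; the surrounding converter structure is assumed, not copied from the real converter:

```python
# Minimal sketch of the guard: only prepend a Precompiled normalizer when the
# SentencePiece model actually ships a precompiled_charsmap.
from tokenizers import normalizers

def build_normalizer(precompiled_charsmap: bytes) -> normalizers.Normalizer:
    norms = []
    if precompiled_charsmap:  # an empty charsmap would break Precompiled
        norms.append(normalizers.Precompiled(precompiled_charsmap))
    norms.append(normalizers.Replace(" ", "▁"))  # illustrative follow-up step
    return normalizers.Sequence(norms)
```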
-
- 03 Jul, 2023 7 commits
-
Joao Gante authored
-
Joao Gante authored
-
Gema Parreño authored
* fix loading dataset link
* Update examples/tensorflow/translation/run_translation.py (Co-authored-by: Matt <Rocketknight1@users.noreply.github.com>)
* Update examples/tensorflow/translation/run_translation.py (Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>)

---------
Co-authored-by: Matt <Rocketknight1@users.noreply.github.com>
Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
-
Yih-Dar authored
fix

Co-authored-by: ydshieh <ydshieh@users.noreply.github.com>
-
Eli Simhayev authored
* [Time-Series] Added blog-post to tips
* added Resources to time series models docs
* removed "with Bert"
-
Nayeon Han authored
* docs: ko: `perplexity.mdx`
* translate comment
* reference english file
* change extension
* update toctree
-
Arthur authored
* add tokenization template
* update conversion script
* update modeling code
* update
* update convert checkpoint
* update modeling
* revert changes on convert script
* new conversion script for new format
* correct position bias
* cleaning a bit
* Credit co-authors (Co-authored-by: agemagician <ahmed.elnaggar@tum.de>, Co-authored-by: stefan-it <>)
* styling
* Add doc
* fix copies
* add co author
* Other Author
* Merge branch 'main' of https://github.com/huggingface/transformers into add-umt5
* add testing
* nit
* Update docs/source/en/model_doc/umt5.mdx (Co-authored-by: Stefan Schweter <stefan@schweter.it>)
* fix t5
* actual fix?
* revert wrong changes
* remove
* update test
* more fixes
* revert some changes
* add SPIECE_UNDERLINE
* add a common example
* update
* fix copies
* revert changes on t5 conversion script
* revert bytefallback changes since there was no addition yet
* fixup
* fixup
* ignore umt5 custom testing folder
* fix readmes
* revert T5 changes
* same outputs
* fixup
* update example
* Apply suggestions from code review
* style
* draft addition of all new files
* current update
* fix attention and stuff
* finish refactoring
* auto config
* fixup
* more nits
* add umt5 to init
* use md format
* Update README.md (Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>)
* revert changes on mt5
* revert mt4 changes
* update test
* more fixes
* add to mapping
* fix-copies
* fix copies
* fix retain grad
* fix some tests
* nits
* done
* Update src/transformers/models/umt5/modeling_umt5.py (Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>)
* Update docs/source/en/model_doc/umt5.md
* Update src/transformers/models/umt5/__init__.py
* Update docs/source/en/model_doc/umt5.md (Co-authored-by: Stefan Schweter <stefan@schweter.it>)
* Update src/transformers/models/umt5/modeling_umt5.py
* update conversion script + use google checkpoints
* nits
* update test and modelling
* stash slow convert
* update fixup
* don't change slow

---------
Co-authored-by: stefan-it <>
Co-authored-by: Stefan Schweter <stefan@schweter.it>
Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
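Once merged, the new model is usable through the standard auto classes. A hedged sketch; the checkpoint name follows the google convention mentioned in the conversion notes but is an assumption here:

```python
# Hedged sketch of using the newly added UMT5 model; the checkpoint name
# follows the google/umt5-* convention referenced above but is an assumption.
from transformers import AutoTokenizer, UMT5ForConditionalGeneration

tokenizer = AutoTokenizer.from_pretrained("google/umt5-small")
model = UMT5ForConditionalGeneration.from_pretrained("google/umt5-small")

inputs = tokenizer("A <extra_id_0> walks into a bar.", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```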
-
- 01 Jul, 2023 1 commit
-
ydshieh authored
-
- 30 Jun, 2023 8 commits
-
Serge Matveenko authored
* Limit Pydantic to V1 in dependencies

Pydantic is about to release V2, which will break a lot of things. This change prevents `transformers` from being used with Pydantic V2 to avoid breakage.

* more

---------
Co-authored-by: ydshieh <ydshieh@users.noreply.github.com>
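A minimal sketch of the kind of pin this applies; the exact specifier and the shape of the dependency list in setup.py are assumptions:

```python
# Minimal sketch of the dependency pin; the exact specifier used in
# transformers' setup.py may differ.
_deps = [
    "pydantic<2",  # Pydantic V2 introduces breaking API changes
]
```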
-
Yih-Dar authored
* fix (repeated 8 times)

---------
Co-authored-by: ydshieh <ydshieh@users.noreply.github.com>
-
Stas Bekman authored
* [modeling_clip.py] improve readability
* apply to other models
* fix
-
Matt authored
* hidden layers, huh, what are they good for (absolutely nothing)
* Some tests break with 1 hidden layer, use 2
* Use 1 hidden layer in a few slow models
* Use num_hidden_layers=2 everywhere
* Slightly higher tol for groupvit
* Slightly higher tol for groupvit
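The pattern this standardizes is tiny test configs with two hidden layers: a single layer breaks some tests, and extra layers add nothing to coverage. A minimal sketch; the model choice and dimensions are illustrative, not taken from the actual test suite:

```python
# Minimal sketch of the tiny-config test pattern; model choice and dimensions
# are illustrative.
from transformers import BertConfig, BertModel

tiny_config = BertConfig(
    hidden_size=32,
    num_hidden_layers=2,  # 2 rather than 1: some tests break with a single layer
    num_attention_heads=4,
    intermediate_size=64,
)
model = BertModel(tiny_config)  # small enough for fast CI runs
```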
-
Yih-Dar authored
* fix
* fix
* fix

---------
Co-authored-by: ydshieh <ydshieh@users.noreply.github.com>
-
JB (Don) authored
* Adding warning messages to BERT for missing attention masks. These warnings appear when there are pad tokens in the input ids and no attention mask is given. The warning message should only show up once.
* Adding warning messages to BERT for missing attention masks. These warnings are shown when the pad_token_id is not None and no attention mask is given. The warning message should only show up once.
* Ran fix copies to copy over the changes to some of the other models
* Add logger.warning_once.cache_clear() to the test
* Shows warning when there are no attention masks and input_ids start/end with pad tokens
* Using warning_once() instead and fix indexing in input_ids check

---------
Co-authored-by: JB Lau <hckyn@voyager2.local>
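The check amounts to: if no attention mask was passed and the inputs begin or end with the pad token, emit a one-time warning. A minimal sketch of the idea; the helper name and placement are assumptions, though `logger.warning_once` is the real transformers logging API:

```python
# Minimal sketch of the described check; the helper name and placement are
# assumptions, but logger.warning_once is the real transformers logging API.
import torch

from transformers.utils import logging

logger = logging.get_logger(__name__)

def warn_if_padding_and_no_attention_mask(input_ids, attention_mask, pad_token_id):
    if attention_mask is not None or pad_token_id is None:
        return
    # Padding shows up at the start or end of a sequence, so only those
    # positions are inspected.
    if input_ids[:, 0].eq(pad_token_id).any() or input_ids[:, -1].eq(pad_token_id).any():
        logger.warning_once(
            "Pad tokens were detected in input_ids but no attention_mask was "
            "passed; the model may attend to padding and give unexpected results."
        )
```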
-
Jeroen Van Goey authored
* Update link to RunHouse hardware setup documentation
* Fix link to hardware setup in other location as well
-
Arthur authored
* don't add space before single letter chars that don't have a merge
* fix the fix
* fixup
* add a test
* more testing
* fixup
* hack to make sure fast is also fixed
* update switch transformers test
* revert convert slow
* Update src/transformers/models/t5/tokenization_t5.py (Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>)
* add typechecking
* quality

---------
Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
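A quick way to observe the fixed behaviour is to tokenize a lone character that has no SentencePiece merge and confirm no spurious leading-space piece appears. A hedged sketch; the exact output pieces depend on the vocabulary:

```python
# Hedged sketch of the regression being fixed: a single character with no
# SentencePiece merge should not gain a spurious leading-space piece.
# Exact output pieces depend on the vocabulary.
from transformers import T5Tokenizer

tok = T5Tokenizer.from_pretrained("t5-small")
print(tok.tokenize("Hello <extra_id_0>s"))  # the lone "s" should stay "s", not "▁s"
```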
-
- 29 Jun, 2023 4 commits
-
Sourab Mangrulkar authored
* fix push to hub for peft ckpts
* oops
-
MS Kim(tony9402) authored
* fix annotations (repeated 23 times)
-
Yih-Dar authored
* fix

---------
Co-authored-by: ydshieh <ydshieh@users.noreply.github.com>
-
Sylvain Gugger authored
* Fix ESM models buffers
* Remove modifs
* Tied weights keys are needed silly
* quality
-