"tests/models/vscode:/vscode.git/clone" did not exist on "096838836d3c7a7d6782d152a4feabd777f2693d"
- 11 Oct, 2021 1 commit
-
-
Lahfa Samy authored
Replace asserts with ValueError in src/transformers/models/electra/modeling_{electra,tf_electra}.py and all other models that had copies (#13955)

* Replace all asserts with ValueError in src/transformers/models/electra
* Reformat with black to pass the check_code_quality test
* Change some asserts to ValueError in modeling_bert & modeling_tf_albert
* Change some asserts in multiple models
* Change assertions in multiple models to ValueError in order to pass the check_code_style test and the models template test
* Black reformat
* Change some more asserts in multiple models
* Change assert to ValueError in modeling_layoutlm.py to fix a copy error in code_style_check
* Add a proper message to the ValueError in modeling_tf_albert.py
* Simplify logic in models/bert/modeling_bert.py
* Add ValueError message to models/convbert/modeling_tf_convbert.py
* Add error message for ValueError to modeling_tf_electra.py
* Simplify logic in models/tapas/modeling_tapas.py
* Simplify logic in models/electra/modeling_electra.py
* Add ValueError message in src/transformers/models/bert/modeling_tf_bert.py
* Simplify logic in src/transformers/models/rembert/modeling_rembert.py
* Simplify logic in src/transformers/models/albert/modeling_albert.py

Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
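A hedged sketch of the pattern applied throughout these files; the divisibility check below is a generic illustration, not a line from the actual diff:

```python
def validate_head_divisibility(hidden_size: int, num_attention_heads: int) -> None:
    # Before (what this PR removes): a bare assert, which `python -O` strips
    # and which raises an uninformative AssertionError:
    #     assert hidden_size % num_attention_heads == 0
    # After: an explicit, descriptive ValueError.
    if hidden_size % num_attention_heads != 0:
        raise ValueError(
            f"The hidden size ({hidden_size}) is not a multiple of the number "
            f"of attention heads ({num_attention_heads})"
        )
```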
-
- 22 Sep, 2021 1 commit
-
-
Sylvain Gugger authored
* Make gradient_checkpointing a training argument
* Update src/transformers/modeling_utils.py
* Update src/transformers/configuration_utils.py
* Fix tests
* Style
* Document gradient checkpointing as a performance feature
* Small rename
* PoC for not using the config
* Adapt BC to new PoC
* Forgot to save
* Roll out changes to all other models
* Fix typo

Co-authored-by: Stas Bekman <stas00@users.noreply.github.com>
Co-authored-by: Stas Bekman <stas@stason.org>
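A minimal usage sketch of the resulting API, assuming transformers >= 4.11 (where this change landed); the checkpoint name is illustrative:

```python
from transformers import BertForSequenceClassification, TrainingArguments

model = BertForSequenceClassification.from_pretrained("bert-base-uncased")
# Per-model switch that replaces the old config.gradient_checkpointing flag.
model.gradient_checkpointing_enable()

# Or request it at training time, as a TrainingArguments field.
args = TrainingArguments(output_dir="out", gradient_checkpointing=True)
```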
-
- 31 Aug, 2021 1 commit
-
-
Jongheon Kim authored
Set missing seq_length variable when using inputs_embeds with ALBERT & remove code duplication (#13152)

* Set seq_length variable when using inputs_embeds
* Remove code duplication
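A hedged sketch of the fix, wrapped in a hypothetical helper (`infer_seq_length` is not in the diff): derive the sequence length from whichever input was actually provided.

```python
from typing import Optional

import torch

def infer_seq_length(
    input_ids: Optional[torch.Tensor], inputs_embeds: Optional[torch.Tensor]
) -> int:
    # input_ids is (batch, seq_len); inputs_embeds is (batch, seq_len, hidden),
    # so the embedding dimension must be dropped before reading seq_len.
    if input_ids is not None:
        input_shape = input_ids.size()
    else:
        input_shape = inputs_embeds.size()[:-1]
    return input_shape[1]
```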
-
- 16 Aug, 2021 3 commits
-
-
Lysandre authored
-
Lysandre authored
-
weierstrass_walker authored
-
- 26 Jul, 2021 1 commit
-
-
Philip May authored
* Add classifier_dropout to ELECTRA
* No type annotations yet
* Add classifier_dropout to ElectraForTokenClassification
* Add classifier_dropout to BERT
* Add classifier_dropout to RoBERTa
* Add classifier_dropout to BigBird
* Add classifier_dropout to MobileBERT
* Empty commit to trigger CI
* Add classifier_dropout to Reformer
* Add classifier_dropout to ConvBERT
* Add classifier_dropout to ALBERT

Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
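A condensed sketch of the fallback pattern this PR adds to each classification head; the class name is hypothetical, and the real heads live in each model's modeling file:

```python
from torch import nn

class ClassificationHeadSketch(nn.Module):
    def __init__(self, config):
        super().__init__()
        # A dedicated classifier_dropout that falls back to the generic
        # hidden_dropout_prob when left unset (None).
        classifier_dropout = (
            config.classifier_dropout
            if getattr(config, "classifier_dropout", None) is not None
            else config.hidden_dropout_prob
        )
        self.dropout = nn.Dropout(classifier_dropout)
        self.out_proj = nn.Linear(config.hidden_size, config.num_labels)
```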
-
- 22 Jun, 2021 1 commit
-
-
Hamid Shojanazeri authored
* Register a buffer for token_type_ids, to fix the device id getting hardcoded when tracing
* Style format
* Add a persistent flag to the registered buffers to keep them out of the state_dict, addressing the backward-compatibility issue
* Add a try/except to the fix, as the persistent flag is only available from PyTorch > 1.6
* Add version check
* Only use the token_type_ids buffer when it is autogenerated, not when passed by the user
* Add comments and make the condition where token_type_ids is None use the registered buffer
* Take position embeddings out of the if block
* Handle the case where the buffer for position_ids was not registered
* Revert the changes on position_ids, fix the size of the token_type_ids buffer, and move the generation of token_type_ids to BertModel instead of Embeddings
* Revert token_type_ids handling in the None case to the previous version
* Revert changes on position_ids, adding back the if block
* Apply changes from make fix-copies, and add the version import where it is used
* Use a temp tensor for the trimmed and expanded token_type_ids buffer
* Clean up
* Support device conversion on traced models
* Rebase on latest master
* Adapt templates
* Add version import

Co-authored-by: Ubuntu <ubuntu@ip-172-31-32-81.us-west-2.compute.internal>
Co-authored-by: Lysandre <lysandre.debut@reseau.eseo.fr>
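A condensed sketch of the buffer trick, assuming PyTorch >= 1.6 for the `persistent` flag; the class is illustrative, not the real BertEmbeddings:

```python
import torch
from torch import nn

class EmbeddingsSketch(nn.Module):
    def __init__(self, max_position_embeddings: int = 512):
        super().__init__()
        self.register_buffer(
            "position_ids", torch.arange(max_position_embeddings).expand((1, -1))
        )
        # persistent=False keeps the buffer out of the state_dict, so old
        # checkpoints still load cleanly.
        self.register_buffer(
            "token_type_ids",
            torch.zeros(self.position_ids.size(), dtype=torch.long),
            persistent=False,
        )

    def forward(self, input_ids: torch.Tensor, token_type_ids=None) -> torch.Tensor:
        # Only fall back to the buffer when the caller did not pass ids; the
        # buffer already lives on the traced model's device.
        if token_type_ids is None:
            seq_length = input_ids.size(1)
            token_type_ids = self.token_type_ids[:, :seq_length].expand(
                input_ids.size(0), seq_length
            )
        return token_type_ids
```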
-
- 07 Jun, 2021 1 commit
-
-
François Lagunas authored
* Fix a bug that appears when using distillation (and potentially in other uses). During the backward pass, PyTorch complains with:

  RuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation

  This happens because the QA model code modifies the start_positions and end_positions input tensors in place, using the clamp_ function; as a consequence the teacher and the student both modify the inputs, and the backward pass fails.
* Fix the clamp_ bug in all QA models.
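A minimal sketch of the fix; the helper is hypothetical, and the real change touches each QA model's loss computation:

```python
import torch

def clamp_positions(start_positions: torch.Tensor, ignored_index: int) -> torch.Tensor:
    # Before: start_positions.clamp_(0, ignored_index) mutated the caller's
    # tensor in place, breaking autograd when teacher and student share inputs.
    # After: the out-of-place variant leaves the shared input untouched.
    return start_positions.clamp(0, ignored_index)
```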
-
- 01 Jun, 2021 1 commit
-
-
Fan Zhang authored
* Modify qa-trainer
* Fix Flax model
-
- 20 May, 2021 1 commit
-
-
Sylvain Gugger authored
* Fix regression in regression
* Add test
-
- 04 May, 2021 1 commit
-
-
abhishek thakur authored
* Add to BERT
* Review comments
* Update src/transformers/configuration_utils.py
* self.config.problem_type
* Fix style
* Update doc
* Test more problem types
* Quality
* Make fix-copies
* Remove test

Co-authored-by: abhishek thakur <abhishekkrthakur@users.noreply.github.com>
Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
Co-authored-by: Lysandre <lysandre.debut@reseau.eseo.fr>
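A hedged sketch of the loss dispatch keyed on `config.problem_type`; the standalone function is for illustration, as the real logic lives inside each `*ForSequenceClassification.forward`:

```python
import torch
from torch import nn

def compute_loss(logits, labels, problem_type: str, num_labels: int):
    if problem_type == "regression":
        return nn.MSELoss()(logits.squeeze(), labels.squeeze())
    if problem_type == "single_label_classification":
        return nn.CrossEntropyLoss()(logits.view(-1, num_labels), labels.view(-1))
    if problem_type == "multi_label_classification":
        # BCEWithLogitsLoss expects float targets, one column per label.
        return nn.BCEWithLogitsLoss()(logits, labels.float())
    raise ValueError(f"Unknown problem_type: {problem_type}")
```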
-
- 26 Apr, 2021 1 commit
-
-
Patrick von Platen authored
-
- 07 Apr, 2021 1 commit
-
-
Stas Bekman authored
* The 'warn' method is deprecated
* Fix test
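The change itself is mechanical; a minimal sketch (message text illustrative):

```python
import logging

logger = logging.getLogger(__name__)

# Before (deprecated alias on Logger): logger.warn("...")
# After:
logger.warning("This model checkpoint was saved with an older configuration")
```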
-
- 31 Mar, 2021 1 commit
-
-
Sylvain Gugger authored
* First third
* Styling and fix mistake
* Quality
* All the rest
* Treat %s and %d
* Typo
* Missing )
* Apply suggestions from code review

Co-authored-by: Lysandre Debut <lysandre@huggingface.co>
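Judging by the "Treat %s and %d" bullet, this pass migrates old-style string interpolation; a generic before/after sketch with illustrative strings:

```python
model_name, num_layers = "bert-base-uncased", 12

# Before: old-style % interpolation
print("Loading %s with %d layers" % (model_name, num_layers))

# After: an f-string
print(f"Loading {model_name} with {num_layers} layers")
```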
-
- 05 Mar, 2021 1 commit
-
-
Sylvain Gugger authored
* Fix embeddings for PyTorch 1.8
* Try with PyTorch 1.8.0
* Fix embeddings init
* Fix copies
* Typo
* More typos
-
- 03 Mar, 2021 1 commit
-
-
Sylvain Gugger authored
* Refactor checkpoint name in BERT and MobileBERT
* Add option to check copies
* Add QuestionAnswering
* Add last models
* Make black happy
-
- 19 Jan, 2021 2 commits
-
-
Sylvain Gugger authored
* Fix model templates and use less than 119 chars
* Missing new line
-
Yusuke Mori authored
Update past_key_values in GPT-2 (#9391)

* Update generation_utils, and rename some items
* Update modeling_gpt2 to avoid an error in gradient_checkpointing
* Remove 'reorder_cache' from util and add variations to XLNet, TransfoXL, GPT-2
* Change the location of '_reorder_cache' in modeling files
* Add '_reorder_cache' in modeling_ctrl
* Fix a bug of my last commit in CTRL
* Add '_reorder_cache' to GPT2DoubleHeadsModel
* Manage 'use_cache' in config of test_modeling_gpt2
* Clean up the doc string
* Update src/transformers/models/gpt2/modeling_gpt2.py
* Fix the doc string (GPT-2, CTRL)
* Improve gradient_checkpointing behavior

Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
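A sketch of the `_reorder_cache` hook these commits move into the modeling files; the index_select body matches the common pattern, so treat it as an approximation of the diff rather than the exact code:

```python
from typing import Tuple

import torch

def _reorder_cache(
    past: Tuple[Tuple[torch.Tensor, ...], ...], beam_idx: torch.Tensor
) -> Tuple[Tuple[torch.Tensor, ...], ...]:
    # During beam search, the batch dimension of every cached key/value
    # tensor must be re-shuffled to follow the surviving beams.
    return tuple(
        tuple(past_state.index_select(0, beam_idx) for past_state in layer_past)
        for layer_past in past
    )
```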
-
- 06 Jan, 2021 1 commit
-
-
Patrick von Platen authored
* Fix generation models
* Fix LED
* Fix docs
* Add is_decoder
* Fix last docstrings
* Make style
* Fix T5 cross attentions
* Correct T5
-
- 23 Dec, 2020 1 commit
-
-
Suraj Patil authored
* Add past_key_values
* Add use_cache option
* Make mask before cutting ids
* Adjust position_ids according to past_key_values
* Flatten past_key_values
* Fix positional embeds
* Fix _reorder_cache
* Set use_cache to False when not a decoder, fix attention mask init
* Add test for caching
* Add past_key_values for RoBERTa
* Fix position embeds
* Add caching test for RoBERTa
* Add doc
* Make style
* Doc, fix attention mask, test
* Small fixes
* Address Patrick's comments
* input_ids shouldn't start with the pad token
* use_cache only when decoder
* Make consistent with BERT
* Make copies consistent
* Add use_cache to encoder
* Add past_key_values to TAPAS attention
* Apply suggestions from code review
* Add attention mask in tests
* Remove "copied from longformer"
* Fix BART test
* Nit
* Simplify model outputs
* Fix doc
* Fix output ordering
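A hedged usage sketch of the new caching path for BERT used as a decoder; the token ids are arbitrary, and the kwargs follow what this PR describes:

```python
import torch
from transformers import BertConfig, BertLMHeadModel

config = BertConfig(is_decoder=True)  # caching only applies in decoder mode
model = BertLMHeadModel(config)

out = model(torch.tensor([[101, 2023]]), use_cache=True)
# Next step: feed only the newly generated token plus the cached key/values.
step = model(
    torch.tensor([[2003]]), past_key_values=out.past_key_values, use_cache=True
)
```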
-
- 02 Dec, 2020 1 commit
-
-
Patrick von Platen authored
* Fix resize tokens
* Correct mobile_bert
* Move embedding fix into modeling_utils.py
* Refactor
* Fix LM head resize
* Break lines to make Sylvain happy
* Add new tests
* Fix typo
* Improve test
* Skip BART-like for now
* Check if base_model = get(...) is necessary
* Clean files
* Fix tests
* Revert style templates
* Update templates/adding_a_new_model/cookiecutter-template-{{cookiecutter.modelname}}/modeling_{{cookiecutter.lowercase_modelname}}.py
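A usage sketch of the repaired path; the checkpoint and token names are illustrative:

```python
from transformers import BertForMaskedLM, BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertForMaskedLM.from_pretrained("bert-base-uncased")

tokenizer.add_tokens(["<new_tok>"])
# The path this PR repairs: resizing must update the input embeddings and
# the tied LM head together.
model.resize_token_embeddings(len(tokenizer))
```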
-
- 27 Nov, 2020 1 commit
-
-
Patrick von Platen authored
* Correct DPR test and BERT position fault
* Fix DPR BERT config problem
* Fix LayoutLM
* Add config to DPR as well
-
- 25 Nov, 2020 1 commit
-
-
Patrick von Platen authored
* Fix mems in XLNet
* Fix use_mems
* Fix use_mem_len
* Clean docs
* Fix TF typo
* Make XLNet TF generation work
* Fix TF test
* Refactor use_cache
* Add use_cache for missing models
* Correct use_cache in generate
* Correct use_cache in TF generate
* Fix TF
* Correct getattr typo
* Make Sylvain happy
* Change in docs as well
* Do not apply to cookiecutter statements
* Make the PyTorch model fully backward compatible
-
- 24 Nov, 2020 1 commit
-
-
zhiheng-huang authored
* Support BERT relative position embeddings
* Fix typo in README.md
* Address review comment
* Fix failing tests
* [tiny] Fix style_doc.py check by adding an empty line to configuration_bert.py
* Make fix-copies
* Fix configs of ELECTRA and ALBERT, and fix Longformer
* Remove copy statement from Longformer
* Fix ALBERT
* Fix ELECTRA
* Add BERT variant forward tests for the various position embeddings
* [tiny] Fix style for test_modeling_bert.py
* Improve docstring and remove an unnecessary dependency
* [tiny] Remove unused import
* Re-add to ALBERT, make embeddings work for ALBERT, and add a test

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
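A hedged sketch of opting in via the config, assuming the `position_embedding_type` values this PR introduces:

```python
from transformers import BertConfig, BertModel

# "relative_key" and "relative_key_query" are the new variants;
# the default remains "absolute".
config = BertConfig(position_embedding_type="relative_key")
model = BertModel(config)
```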
-
- 23 Nov, 2020 1 commit
-
-
Stas Bekman authored
* Consistent ignore keys + make private
* Style
* Rename: authorized_missing_keys => _keys_to_ignore_on_load_missing; authorized_unexpected_keys => _keys_to_ignore_on_load_unexpected
* Move public doc of the private attributes to a private comment
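A sketch of the renamed, now-private attributes on a model class; the subclass and regex patterns are illustrative:

```python
from transformers import BertPreTrainedModel

class MyBertVariant(BertPreTrainedModel):
    # Regex patterns of state_dict keys that from_pretrained should tolerate,
    # renamed and made private by this commit.
    _keys_to_ignore_on_load_missing = [r"position_ids"]
    _keys_to_ignore_on_load_unexpected = [r"pooler"]
```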
-
- 17 Nov, 2020 2 commits
-
-
Sylvain Gugger authored
* Remove old deprecated arguments
* Remove needless imports
* Fix tests

Co-authored-by: LysandreJik <lysandre.debut@reseau.eseo.fr>
-
Sylvain Gugger authored
* Put models in subfolders
* Styling
* Fix imports in tests
* More fixes in test imports
* Sneaky hidden imports
* Fix imports in doc files
* More sneaky imports
* Finish fixing tests
* Fix examples
* Fix path for copies
* More fixes for examples
* Fix dummy files
* More model import fixes
* Is this why you're unhappy, GitHub?
* Fix imports in the convert command
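A sketch of the layout change: public imports are unchanged, only the internal module paths move under `models/<name>/`:

```python
# Public import, unchanged by this PR:
from transformers import BertModel

# Internal module path after this PR
# (previously transformers.modeling_bert):
from transformers.models.bert.modeling_bert import BertModel
```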
-
- 16 Nov, 2020 1 commit
-
-
Sylvain Gugger authored
* Use the CI to identify failing tests
* Remove from all examples and tests
* More default switch
* Fixes
* More test fixes
* More fixes
* Last fixes hopefully
* Run on the real suite
* Fix slow tests
-
- 09 Nov, 2020 1 commit
-
-
Patrick von Platen authored
* Add training tests
* Correct Longformer
* Fix docs
* Fix some tests
* Fix some more training tests
* Remove ipdb
* Fix multiple edge-case model training
* Fix Funnel and ProphetNet
* Clean GPT models
* Undo renaming of ALBERT
-
- 06 Nov, 2020 1 commit
-
-
Yossi Synett authored
[All Seq2Seq models + CLM models that can be used with EncoderDecoder] Add cross-attention weights to outputs (#8071)

* Output cross-attention with decoder attention output
* Update src/transformers/modeling_bert.py
* Add cross-attention for T5 and BART as well
* Fix tests
* Correct typo in docs
* Add Sylvain's and Sam's comments
* Correct typo

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
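A hedged usage sketch for reading the new weights; checkpoints and token ids are illustrative:

```python
import torch
from transformers import EncoderDecoderModel

model = EncoderDecoderModel.from_encoder_decoder_pretrained(
    "bert-base-uncased", "bert-base-uncased"
)
out = model(
    input_ids=torch.tensor([[101, 2023, 102]]),
    decoder_input_ids=torch.tensor([[101]]),
    output_attentions=True,
)
# One tensor per decoder layer, each (batch, heads, target_len, source_len).
cross = out.cross_attentions
```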
-
- 30 Oct, 2020 1 commit
-
-
Sylvain Gugger authored
-
- 28 Oct, 2020 1 commit
-
-
Sylvain Gugger authored
-
- 26 Oct, 2020 1 commit
-
-
Sylvain Gugger authored
* Important files
* Styling them all
* Revert "Styling them all" (this reverts commit 7d029395fdae8513b8281cbc2a6c239f8093503e)
* Styling them for realsies
* Fix syntax error
* Fix benchmark_utils
* Fix modeling auto and script
* Remove new line
* More fixes
* Fix more files
* Style
* Add FSMT
* Last fixes
* Make sphinx happy
-
- 11 Oct, 2020 1 commit
-
-
Miguel Victor authored
-
- 25 Sep, 2020 1 commit
-
-
Patrick von Platen authored
* Fix multi-GPU
* Fix Longformer
* Force deletion of unnecessary layers
* Fix notifications
* Fix warning
* Fix RoBERTa
* Fix tests
* Remove hasattr
* Merge and clean authorized keys
-
- 24 Sep, 2020 1 commit
-
-
Sylvain Gugger authored
-
- 23 Sep, 2020 1 commit
-
-
Sylvain Gugger authored
* Clean up model documentation
* Formatting
* Preparation work
* Long lines
* Main work on rst files
* Clean up all config files
* Syntax fix
* Clean all tokenizers
* Work on first models
* FlauBERT
* All PyTorch models
* All models
* Long lines again
* Fixes
* More fixes
* Update docs/source/model_doc/bert.rst
* Update docs/source/model_doc/electra.rst
* Last fixes

Co-authored-by: Lysandre Debut <lysandre@huggingface.co>
-
- 07 Sep, 2020 1 commit
-
-
Lysandre Debut authored
-
- 04 Sep, 2020 1 commit
-
-
Stas Bekman authored
* Remove the implied defaults to :obj:`None`
* Fix a bug in the original
* Replace with :obj:`True`, :obj:`False`
-