- 31 Jan, 2022 1 commit
  Sylvain Gugger authored
- 24 Jan, 2022 1 commit
  Yih-Dar authored
  * fix missing import jnp
  * Fix missing jax and k=1
  Co-authored-by: ydshieh <ydshieh@users.noreply.github.com>
- 21 Jan, 2022 1 commit
  Jonas Kuball authored
  * Add missing __spec__ for transformers.models.auto
  * Moves the __spec__ test to the UnitTest class
  * Adds module_spec to all instances of _LazyModule
  * Refactors an old test from pytest to unittest
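For context on the `module_spec` bullet, a hedged sketch of the wiring inside a package `__init__.py`; the exact `_LazyModule` import path and signature vary by version and are assumed here:

```python
# Sketch (names assumed): forward the real module __spec__ into the lazy
# proxy so importlib metadata keeps working after the module is swapped out.
import sys
from transformers.file_utils import _LazyModule  # location differs by version

_import_structure = {"configuration_auto": ["AutoConfig"]}

sys.modules[__name__] = _LazyModule(
    __name__,
    globals()["__file__"],
    _import_structure,
    module_spec=__spec__,  # without this, module.__spec__ is missing on the proxy
)
```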
- 19 Jan, 2022 1 commit
  Matt authored
  * Rename compute_loss to hf_compute_loss to avoid conflicts with the new Keras method
  * make style
  * Adding deprecation warning to `compute_loss`
  * Fix sneaky reference to compute_loss
  * Replace logger.warning with warnings.warn
  * Clarifying warning and deprecation timeline
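The deprecation shim this describes, sketched with assumed class and message wording; only the `compute_loss` / `hf_compute_loss` names and the use of `warnings.warn` are confirmed by the commit:

```python
# Minimal sketch: keep the old entry point, warn, and forward to the renamed
# method, leaving compute_loss free for Keras' own built-in of the same name.
import warnings


class TFModelSketch:
    def hf_compute_loss(self, labels, logits):
        # the real loss computation now lives under the hf_ prefix
        ...

    def compute_loss(self, *args, **kwargs):
        warnings.warn(
            "compute_loss is deprecated and will be removed in a future "
            "release; use hf_compute_loss instead.",
            FutureWarning,
        )
        return self.hf_compute_loss(*args, **kwargs)
```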
- 14 Jan, 2022 1 commit
  Sylvain Gugger authored
  * Check the repo consistency in model templates test
  * Fix doc template
  * Fix docstrings
  * Fix last docstring
- 11 Jan, 2022 2 commits
  Sylvain Gugger authored
  NielsRogge authored
- 10 Jan, 2022 2 commits
  Suraj Patil authored
  * fix doc examples
  * remove double colons
  Sylvain Gugger authored
- 22 Dec, 2021 1 commit
  Sylvain Gugger authored
  * Convert all tutorials and guides
  * Convert all remaining rst to mdx
  * Track and fix bad links
- 21 Dec, 2021 2 commits
  Sylvain Gugger authored
  * Convert docstrings of all configurations and tokenizers
  * Processors and fixes
  * Last modeling files and fixes to models
  * Pipeline modules
  * Utils files
  * Data submodule
  * All the other files
  * Style
  * Missing examples
  * Style again
  * Fix copies
  * Say bye bye to rst docstrings forever
  Sylvain Gugger authored
  * Convert file_utils docstrings to Markdown
  * Test on BERT
  * Return block indent
  * Temporarily disable doc styler
  * Remove from quality checks as well
  * Remove doc styler mess
  * Remove check from circleCI
  * Fix typo
  * Let's go on all other model files
  * Add templates too
  * Styling and quality
- 17 Dec, 2021 1 commit
  Daniel Stancl authored
  * Implement head_mask for Flax BERT and other models copied from BERT
  * Remove `from jax._src.nn.functions import sigmoid`, unintentionally added by an IDE
  * Remove a no-longer-valid copy statement
  * Apply patil-suraj's suggestions from code review
  * Apply suggestions from the code review
  * Update Flax template
  * Fix a typo
  * Also update template for CausalLM modules
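A minimal sketch of what `head_mask` does inside an attention block, with shapes assumed; dropping the `jax._src` import matters because that namespace is private and unstable:

```python
# Zero out selected attention heads after the softmax: 1.0 keeps a head,
# 0.0 silences it. Shapes here are illustrative.
import jax.numpy as jnp

def apply_head_mask(attn_weights, head_mask):
    # attn_weights: (batch, num_heads, q_len, k_len); head_mask: (num_heads,)
    return attn_weights * head_mask[None, :, None, None]

weights = jnp.ones((1, 2, 3, 3)) / 3.0
masked = apply_head_mask(weights, jnp.array([1.0, 0.0]))  # second head zeroed
```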
- 16 Dec, 2021 1 commit
  Lysandre Debut authored
  * First try
  * Update instructions
- 13 Dec, 2021 2 commits
  Yih-Dar authored
  * avoid tf.tile in embeddings
  * remove more tf.tile in embeddings
  * clean
  Co-authored-by: ydshieh <ydshieh@users.noreply.github.com>
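The `tf.tile` removal pattern in miniature, with illustrative shapes; broadcasting expands the position embeddings across the batch without materializing per-example copies:

```python
import tensorflow as tf

batch, seq_len, dim = 2, 5, 8
position_embeds = tf.random.normal((1, seq_len, dim))
inputs_embeds = tf.random.normal((batch, seq_len, dim))

# before: inputs_embeds + tf.tile(position_embeds, (batch, 1, 1))
out = inputs_embeds + position_embeds  # broadcasting handles the batch dim
```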
  Yih-Dar authored
  Co-authored-by: ydshieh <ydshieh@users.noreply.github.com>
- 10 Dec, 2021 1 commit
  Yih-Dar authored
  Fix examples: 'CausalLMOutputWithCrossAttentions' object has no attribute 'last_hidden_state' (#14678)
  Co-authored-by: ydshieh <ydshieh@users.noreply.github.com>
- 30 Nov, 2021 1 commit
  Thomas Viehmann authored
  * Use the functional interface instead of instantiating a module and immediately calling it
  * Fix torch.nn.functional to nn.functional. Thank you Stas!
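The pattern in miniature; softmax is chosen as an illustrative op, not necessarily the one the commit touched:

```python
# Call the functional op directly rather than constructing a Module only to
# invoke it once and throw it away.
import torch
from torch import nn

x = torch.randn(4, 10)

# before: probs = nn.Softmax(dim=-1)(x)   # allocates a module per call
probs = nn.functional.softmax(x, dim=-1)  # functional interface, no allocation
```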
- 18 Nov, 2021 1 commit
  Sylvain Gugger authored
  * Add a post init method to all models
  * Fix tests
  * Fix last tests
  * Fix templates
  * Add comment
  * Forgot to save
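A hedged sketch of the post-init idea; names beyond `post_init` are invented for illustration:

```python
# Every model calls post_init() at the end of __init__ so weight
# initialization and final setup run once, in one consistent place.
from torch import nn


class PreTrainedSketch(nn.Module):
    def post_init(self):
        self.apply(self._init_weights)

    def _init_weights(self, module):
        if isinstance(module, nn.Linear):
            nn.init.normal_(module.weight, std=0.02)


class ToyModel(PreTrainedSketch):
    def __init__(self):
        super().__init__()
        self.dense = nn.Linear(8, 8)
        self.post_init()  # the call this commit adds to every model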
- 11 Nov, 2021 1 commit
  Suraj Patil authored
  * fix inits
  * fix embed dtype
  * fix embed dtype
  * add test to check default dtype
  * quality
  * add type conversion methods for flax models
  * more robust casting
  * cast sinusoidal positions
  * update pegasus
  * update albert
  * update test
  * make sure dtype is passed to every module
  * style
  * fix electra dense
  * fix t5
  * quality
  * add more tests
  * better name
  * use the dtype for lm head computation
  * fix albert
  * style
  * fix albert embed dtype
  * more tests
  * fix vision enc-dec
  * cleanup
  * fix embed dtype pegasus
  * fix default param test
  * doc
  * update template
  * fix final_logits_bias dtype
  * Apply suggestions from code review
  * fix doc
  * fix doc
  * add detailed docstring for dtype parameter
  * remove unnecessary import
  Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
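A sketch of what the dtype plumbing enables; the `from_pretrained(dtype=...)` argument and the casting helper match this PR's description ("add type conversion methods for flax models"), but treat the exact surface as an assumption:

```python
# Pick the computation dtype up front, or cast parameters afterwards.
import jax.numpy as jnp
from transformers import FlaxBertModel

model = FlaxBertModel.from_pretrained("bert-base-uncased", dtype=jnp.bfloat16)
model.params = model.to_bf16(model.params)  # one of the added casting methods
```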
- 09 Nov, 2021 1 commit
  Patrick von Platen authored
  * [Bert2Bert] allow bert2bert + relative embeddings
  * up
  * Update README_ko.md
  * up
  * up
- 29 Oct, 2021 1 commit
  Sylvain Gugger authored
  * Generalize problem_type to all classification models
  * Missing import
  * Deberta BC and fix tests
  * Fix template
  * Missing imports
  * Revert change to reformer test
  * Fix style
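A usage sketch; the three `problem_type` values below are the ones the library supports for classification heads, and the knob selects which loss the head applies:

```python
from transformers import AutoConfig, AutoModelForSequenceClassification

config = AutoConfig.from_pretrained(
    "bert-base-uncased",
    num_labels=3,
    problem_type="multi_label_classification",
    # alternatives: "regression", "single_label_classification"
)
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", config=config
)
```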
- 15 Oct, 2021 1 commit
  Patrick von Platen authored
  * up
  * finish
  * up
  * up
  * finish
- 12 Oct, 2021 1 commit
  Yih-Dar authored
  * Add cross attentions to TFGPT2Model
  * Add TFEncoderDecoderModel
  * Add TFBaseModelOutputWithPoolingAndCrossAttentions
  * Add cross attentions to TFBertModel
  * Fix past or past_key_values argument issue
  * Fix generation
  * Fix save and load
  * Add some checks and comments
  * Clean the code that deals with past keys/values
  * Add kwargs to processing_inputs
  * Add serving_output to TFEncoderDecoderModel
  * Some cleaning + fix use_cache value issue
  * Fix tests + add bert2bert/bert2gpt2 tests
  * Fix more tests
  * Ignore crossattention.bias when loading GPT2 weights into TFGPT2
  * Fix return_dict_in_generate in tf generation
  * Fix is_token_logit_eos_token bug in tf generation
  * Finalize the tests after fixing some bugs
  * Fix another is_token_logit_eos_token bug in tf generation
  * Add/Update docs
  * Add TFBertEncoderDecoderModelTest
  * Clean test script
  * Add TFEncoderDecoderModel to the library
  * Add cross attentions to TFRobertaModel
  * Add TFRobertaEncoderDecoderModelTest
  * make style
  * Change the way of position_ids computation
  * bug fix
  * Fix copies in tf_albert
  * Remove some copied from and apply some fix-copies
  * Remove some copied
  * Add cross attentions to some other TF models
  * Remove encoder_hidden_states from TFLayoutLMModel.call for now
  * Make style
  * Fix TFRemBertForCausalLM
  * Revert the change to longformer + Remove copies
  * Revert the change to albert and convbert + Remove copies
  * make quality
  * make style
  * Add TFRembertEncoderDecoderModelTest
  * make quality and fix-copies
  * test TFRobertaForCausalLM
  * Fixes for failed tests
  * Fixes for failed tests
  * fix more tests
  * Fixes for failed tests
  * Fix Auto mapping order
  * Fix TFRemBertEncoder return value
  * fix tf_rembert
  * Check copies are OK
  * Fix missing TFBaseModelOutputWithPastAndCrossAttentions is not defined
  * Add TFEncoderDecoderModelSaveLoadTests
  * fix tf weight loading
  * check the change of use_cache
  * Revert the change
  * Add missing test_for_causal_lm for TFRobertaModelTest
  * Try cleaning past
  * fix _reorder_cache
  * Revert some files to original versions
  * Keep as many copies as possible
  * Apply suggested changes - Use raise ValueError instead of assert
  * Move import to top
  * Fix wrong require_torch
  * Replace more assert by raise ValueError
  * Add test_pt_tf_model_equivalence (the test won't pass for now)
  * add test for loading/saving
  * finish
  * finish
  * Remove test_pt_tf_model_equivalence
  * Update tf modeling template
  * Remove pooling, added in the prev. commit, from MainLayer
  * Update tf modeling test template
  * Move inputs["use_cache"] = False to modeling_tf_utils.py
  * Fix torch.Tensor in the comment
  * fix use_cache
  * Fix missing use_cache in ElectraConfig
  * Add a note to from_pretrained
  * Fix style
  * Change test_encoder_decoder_save_load_from_encoder_decoder_from_pt
  * Fix TFMLP (in TFGPT2) activation issue
  * Fix None past_key_values value in serving_output
  * Don't call get_encoderdecoder_model in TFEncoderDecoderModelTest.test_configuration_tie until we have a TF checkpoint on Hub
  * Apply review suggestions - style for cross_attns in serving_output
  * Apply review suggestions - change assert + docstrings
  * break the error message to respect the char limit
  * deprecate the argument past
  * fix docstring style
  * Update the encoder-decoder rst file
  * fix Unknown interpreted text role "method"
  * fix typo
  Co-authored-by: ydshieh <ydshieh@users.noreply.github.com>
  Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
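A short usage sketch of the class this commit introduces; the checkpoint names are illustrative, and `from_encoder_decoder_pretrained` is assumed to mirror the existing PyTorch `EncoderDecoderModel` API:

```python
# Stitch a TF encoder to a TF decoder for seq2seq.
from transformers import TFEncoderDecoderModel

model = TFEncoderDecoderModel.from_encoder_decoder_pretrained(
    "bert-base-uncased", "gpt2"
)
```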
- 24 Sep, 2021 1 commit
  Tommy Chiang authored
  We use `torch.unique` here only to check whether all elements have the same value. Therefore, we can use `torch.unique_consecutive` instead. That function eliminates all but the first element from every consecutive group of equivalent elements: applied to `[1, 2, 2, 1]`, it returns `[1, 2, 1]`, which is enough for checking whether all elements have the same value. Since `torch.unique_consecutive` does less work, it is much faster: on my computer, 25x faster on GPU and 15x faster on CPU.
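The substitution, demonstrated:

```python
# For an "all values identical" check, the consecutive variant is equivalent
# to torch.unique but skips the global sort/dedup work.
import torch

t = torch.tensor([1, 2, 2, 1])
print(torch.unique(t))              # tensor([1, 2])
print(torch.unique_consecutive(t))  # tensor([1, 2, 1])

x = torch.tensor([7, 7, 7, 7])
assert len(torch.unique_consecutive(x)) == 1  # all elements equal
```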
- 22 Sep, 2021 1 commit
  Sylvain Gugger authored
  * Make gradient_checkpointing a training argument
  * Update src/transformers/modeling_utils.py
  * Update src/transformers/configuration_utils.py
  * Fix tests
  * Style
  * Document gradient checkpointing as a performance feature
  * Small rename
  * PoC for not using the config
  * Adapt BC to new PoC
  * Forgot to save
  * Rollout changes to all other models
  * Fix typo
  Co-authored-by: Stas Bekman <stas00@users.noreply.github.com>
  Co-authored-by: Stas Bekman <stas@stason.org>
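A sketch of the resulting surface: checkpointing becomes something you turn on per model instead of baking into the config. The method name below is the one this line of work exposes, but treat it as an assumption:

```python
from transformers import AutoModel

model = AutoModel.from_pretrained("bert-base-uncased")
model.gradient_checkpointing_enable()  # replaces config.gradient_checkpointing = True
```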
- 15 Sep, 2021 1 commit
  Bhadresh Savani authored
- 06 Sep, 2021 1 commit
  Nils Reimers authored
  * refactor GPT Config to allow dyn. properties
  * make attribute_map a class attribute
  * remove old code
  * update unit test to test config: Add test for common properties setter
  * update unit test to test config: Add test for common properties passed as parameters to __init__
  * update to black code format
  * Allow that setters are not defined for certain config classes
  * update config classes to implement attribute_map
  * bugfix lxmert config - id2labels was not defined when num_labels was set
  * update broken configs - add attribute_maps
  * update bart config
  * update black codestyle
  * update documentation on common config attributes
  * update GPTJ config to new attribute map
  * update docs on common attributes
  * gptj config: add max_position_embeddings
  * gptj config: format with black
  * update speech to text 2 config
  * format doc file to max_len 119
  * update config template
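A self-contained sketch of the `attribute_map` mechanism, simplified relative to the real `PretrainedConfig`: a class-level alias table lets model-specific names (like GPT-2's `n_embd`) back the common property names (like `hidden_size`):

```python
class ConfigSketch:
    attribute_map = {"hidden_size": "n_embd", "num_attention_heads": "n_head"}

    def __init__(self, n_embd=768, n_head=12):
        self.n_embd = n_embd
        self.n_head = n_head

    def __getattr__(self, name):
        # only called when normal lookup fails, so no recursion on n_embd
        if name in type(self).attribute_map:
            return getattr(self, type(self).attribute_map[name])
        raise AttributeError(name)

    def __setattr__(self, name, value):
        # setters go through the same alias table
        super().__setattr__(type(self).attribute_map.get(name, name), value)


cfg = ConfigSketch()
print(cfg.hidden_size)  # 768, resolved through the alias
cfg.hidden_size = 1024  # writes to n_embd
print(cfg.n_embd)       # 1024
```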
- 01 Sep, 2021 2 commits
  Patrick von Platen authored
  Jonathan Chang authored
  * Add option to add flax
  * Add flax template for __init__.py
  * Add flax template for .rst
  * Copy TF modeling template
  * Add a missing line in modeling_tf_... template
  * Update first half of modeling_flax_..
  * Update encoder flax template
  * Copy test_modeling_tf... as test_modeling_flax...
  * Replace some TF to Flax in test_modeling_flax_...
  * Replace tf to np; some functions might not work, like _assert_tensors_equal
  * Replace remaining tf to np (might not work)
  * Fix cookiecutter
  * Add Flax in to_replace_... template
  * Update transformers-cli add-new-model
  * Save generate_flax in configuration.json; this will be read by transformers-cli
  * Fix to_replace_... and cli
  * Fix replace cli
  * Fix cookiecutter name
  * Move docstring earlier to avoid not defined error
  * Fix a missing Module
  * Add encoder-decoder flax template from bart
  * Fix flax test
  * Make style
  * Fix endif
  * Fix replace all "utf-8 -> unp-8"
  * Update comment
  * Fix flax template (add missing ..._DOCSTRING)
  * Use flax_bart imports in template (was t5)
  * Fix unp
  * Update templates/adding_a_new_model/tests
  * Revert "Fix unp" (this reverts commit dc9002a41d902c4f9b07343eab1cb350c8b7fd57)
  * Remove one line of copied from to suppress CI error
  * Use generate_tensorflow_pytorch_and_flax
  * Add a missing part
  * fix typo
  * fix flax config
  * add examples for flax
  * small rename
  * correct modeling imports
  * correct auto loading
  * corrects some flax tests
  * correct small typo
  * correct as type
  * finish modif
  * correct more templates
  * final fixes
  * add file testers
  * up
  * make sure tests match template regex
  * correct pytorch
  * correct tf
  * correct more tf
  * correct imports
  * minor error
  * minor error
  * correct init
  * more fixes
  * correct more flax tests
  * correct flax test
  * more fixes
  * correct docs
  * update
  * fix
  Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
- 31 Aug, 2021 1 commit
  Jongheon Kim authored
  Set missing seq_length variable when using inputs_embeds with ALBERT & remove code duplication (#13152)
  * Set seq_length variable when using inputs_embeds
  * Remove code duplication
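The bookkeeping the fix adds, in sketch form; this is a hypothetical helper, not the library's API:

```python
# Derive seq_length from whichever of input_ids / inputs_embeds was passed.
def get_seq_length(input_ids=None, inputs_embeds=None):
    if input_ids is not None:
        return input_ids.shape[1]      # (batch, seq_len)
    if inputs_embeds is not None:
        return inputs_embeds.shape[1]  # (batch, seq_len, hidden)
    raise ValueError("You have to specify either input_ids or inputs_embeds")
```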
- 06 Aug, 2021 1 commit
  Sylvain Gugger authored
  * Initial work
  * All auto models
  * All tf auto models
  * All flax auto models
  * Tokenizers
  * Add feature extractors
  * Fix typos
  * Fix other typo
  * Use the right config
  * Remove old mapping names and update logic in AutoTokenizer
  * Update check_table
  * Fix copies and check_repo script
  * Fix last test
  * Add back name
  * clean up
  * Update template
  * Update template
  * Forgot a )
  * Use alternative to fixup
  * Fix TF model template
  * Address review comments
  * Address review comments
  * Style
- 03 Aug, 2021 1 commit
  Sylvain Gugger authored
- 21 Jul, 2021 1 commit
  Lysandre Debut authored
  * Expose get_config() on ModelTesters
  * Typo
- 08 Jul, 2021 1 commit
  Sylvain Gugger authored
  * Try to pickle transformers
  * Deal with special objs better
  * Make picklable
- 07 Jul, 2021 1 commit
  Michal Szutenberg authored
  It was only used in shift_right. After this change, the TF code is more similar to the PyTorch implementations. Also, TF graphs are optimized (one node fewer).
- 23 Jun, 2021 1 commit
  Sylvain Gugger authored
  * Add all XxxPreTrainedModel to the main init
  * Add to template
  * Add to template bis
  * Add FlaxT5
- 22 Jun, 2021 1 commit
  Hamid Shojanazeri authored
  * Registering a buffer for token_type_ids, to fix the error of the device id getting hardcoded when tracing
  * Style format
  * Adding a persistent flag to the registered buffers that prevents adding them to the state_dict and addresses the backward-compatibility issue
  * Adding a try/catch to the fix, as the persistent flag is only available from PT > 1.6
  * Adding version check
  * Added a condition to only use the token_type_ids buffer when it is autogenerated, not passed by the user
  * Adding comments and making the condition where token_type_ids are None use the registered buffer
  * Taking position embeddings out of the if block
  * Adding comments
  * Handling the case where the buffer for position_ids was not registered
  * Reverted the changes on position_ids; fixed the issue with the size of the token_type_ids buffer; moved the modification for generated token_type_ids to BertModel instead of Embeddings
  * Reverting token_type_ids to the previous version in the None case
  * Reverting changes on position_ids, adding back the if block
  * Changes added by running make fix-copies
  * Changes added by running make fix-copies, and added the version import as it was getting used
  * Changes added by running make fix-copies
  * Changes added by running make fix-copies
  * Fixing the import format
  * Fixing the import format
  * Modified to use a temp tensor for the trimmed and expanded token_type_ids buffer
  * Changes made by fix-copies after the temp tensor modifications
  * Changes made by fix-copies after the temp tensor modifications
  * Changes made by fix-copies after the temp tensor modifications
  * Clean up
  * Clean up
  * Clean up
  * Clean up
  * Nit
  * Nit
  * Nit
  * Modified to support device conversion on traced models
  * Modified to support device conversion on traced models
  * Modified to support device conversion on traced models
  * Modified to support device conversion on traced models
  * Changes based on the latest in master
  * Adapt templates
  * Add version import
  Co-authored-by: Ubuntu <ubuntu@ip-172-31-32-81.us-west-2.compute.internal>
  Co-authored-by: Lysandre <lysandre.debut@reseau.eseo.fr>
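A hedged sketch of the buffer registration described above; the class and method names are invented, but `register_buffer(..., persistent=False)` is the PyTorch API in question (available from 1.6, hence the version check the commit mentions):

```python
# token_type_ids become a non-persistent registered buffer: traced models
# stop hardcoding a device id, and state_dicts stay unchanged.
import torch
from torch import nn


class EmbeddingsSketch(nn.Module):
    def __init__(self, max_position_embeddings=512):
        super().__init__()
        self.register_buffer(
            "token_type_ids",
            torch.zeros(1, max_position_embeddings, dtype=torch.long),
            persistent=False,
        )

    def resolve_token_type_ids(self, input_ids, token_type_ids=None):
        if token_type_ids is None:
            # trim the buffer to the current length, then expand over the batch
            seq_len = input_ids.size(1)
            token_type_ids = self.token_type_ids[:, :seq_len].expand(
                input_ids.size(0), -1
            )
        return token_type_ids
```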
- 18 Jun, 2021 1 commit
  Xa9aX ツ authored
  * Moved Mish to Torch 1.9 version
  * Run black formatting
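The switch in one assertion; `torch.nn.functional.mish` ships with Torch 1.9, so the manual formulation can defer to the built-in:

```python
import torch
import torch.nn.functional as F

x = torch.randn(16)
# manual definition: x * tanh(softplus(x))
assert torch.allclose(x * torch.tanh(F.softplus(x)), F.mish(x))
```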
- 14 Jun, 2021 1 commit
  Stas Bekman authored