- 01 Jul, 2020 1 commit
Sam Shleifer authored
- 24 Jun, 2020 2 commits
Lysandre Debut authored
* Cleaning TensorFlow models: update all classes' style
* Don't average loss
Patrick von Platen authored
* fix use cache
* add bart use cache
* fix bart
* finish bart
- 22 Jun, 2020 1 commit
Joseph Liu authored
* Configure all models to use output_hidden_states as an argument passed to forward()
* Pass all tests
* Remove cast_bool_to_primitive in TF Flaubert model
* correct tf xlnet
* add pytorch test
* add tf test
* Fix broken tests
* Refactor output_hidden_states for mobilebert
* Reset and remerge to master
Co-authored-by: Joseph Liu <joseph.liu@coinflex.com>
Co-authored-by: patrickvonplaten <patrick.v.platen@gmail.com>
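For context, a minimal sketch of what this change enables, assuming a library version of this era where outputs are plain tuples (model, text, and `encode_plus` usage are illustrative):

```python
from transformers import BertModel, BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertModel.from_pretrained("bert-base-uncased")

inputs = tokenizer.encode_plus("Hello world", return_tensors="pt")
# output_hidden_states can now be requested per forward() call
outputs = model(**inputs, output_hidden_states=True)
hidden_states = outputs[-1]  # tuple: embedding output plus one tensor per layer
print(len(hidden_states))    # 13 for bert-base: 12 layers + the embeddings
```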
- 18 Jun, 2020 1 commit
Deniz authored
* resize token embeddings
* add tokens
* add t5 token method
* typo
* debugging input
* debug
* trying to set embedding tokens properly
* set embeddings for generation head too
* debugging
* enable generation
* add base method
* return logits in the main call
* reverting to generation
* revert back
* set embeddings for the bert main layer
* description
* fix conflicts
* logging
* set base model as self
* refactor
* tf_bert add method
* v0
* finalize
* final
* black
* add tests
* revert back the emb call
* comments
* add the second test
* add vocab size config
* add tf models
* add tf models, add common tests
* remove model specific embedding tests
* style
* remove files
* Update src/transformers/modeling_tf_transfo_xl.py: change the error
* adding unchanged weight test
Co-authored-by: Lysandre Debut <lysandre@huggingface.co>
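A minimal sketch of the workflow this enables for TF models (the added token strings and the masked-LM head are illustrative choices):

```python
from transformers import BertTokenizer, TFBertForMaskedLM

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = TFBertForMaskedLM.from_pretrained("bert-base-uncased")

# Grow the vocabulary, then resize the (tied) token embeddings to match
num_added = tokenizer.add_tokens(["<ent>", "<ent2>"])
model.resize_token_embeddings(len(tokenizer))
print(num_added, model.config.vocab_size)
```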
- 11 Jun, 2020 1 commit
Sylvain Gugger authored
* Support multiple choice in tf common model tests
* Add the input_embeds test
- 09 Jun, 2020 1 commit
Bharat Raghunathan authored
* DOC: Replace instances of ``config.output_attentions`` with function argument ``output_attentions``
* DOC: Apply Black formatting
* Fix errors where output_attentions was undefined
* Remove output_attentions in classes per review
* Fix regressions on tests having `output_attention`
* Fix further regressions in tests relating to `output_attentions`: ensure proper propagation of `output_attentions` as a function parameter to all model subclasses
* Fix more regressions in `test_output_attentions`
* Fix issues with BertEncoder
* Rename related variables to `output_attentions`
* fix pytorch tests
* fix bert and gpt2 tf
* Fix most TF tests for `test_output_attentions`
* Fix linter errors and more TF tests
* fix conflicts
* fix tf tests
* make style
* fix isort
* improve output_attentions
* improve tensorflow
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
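A hedged before/after sketch of the migration (model and inputs are illustrative; outputs were plain tuples in this era):

```python
from transformers import BertModel, BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertModel.from_pretrained("bert-base-uncased")
inputs = tokenizer.encode_plus("Attention, please.", return_tensors="pt")

# Before: the flag had to be baked into the model via its config, e.g.
#   BertModel.from_pretrained("bert-base-uncased", output_attentions=True)
# After: it can be requested per forward call
outputs = model(**inputs, output_attentions=True)
attentions = outputs[-1]  # tuple with one attention tensor per layer
```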
- 04 Jun, 2020 1 commit
Julien Plu authored
* Better None gradients handling
* Apply Style
* Create a loss class per task to compute its respective loss
* Add loss classes to the ALBERT TF models
* Add loss classes to the BERT TF models
* Add question answering and multiple choice to TF Camembert
* Remove prints
* Add multiple choice model to TF DistilBERT + loss computation
* Add question answering model to TF Electra + loss computation
* Add token classification, question answering and multiple choice models to TF Flaubert
* Add multiple choice model to TF Roberta + loss computation
* Add multiple choice model to TF XLM + loss computation
* Add multiple choice and question answering models to TF XLM-Roberta
* Add multiple choice model to TF XLNet + loss computation
* Remove unused parameters
* Add task loss classes
* Reorder TF imports + add new model classes
* Add new model classes
* Bugfix in TF T5 model
* Bugfix for TF T5 tests
* Bugfix in TF T5 model
* Fix TF T5 model tests
* Fix T5 tests + some renaming
* Fix inheritance issue in the AutoX tests
* Add tests for TF Flaubert and TF XLM Roberta
* Remove unused piece of code in the TF trainer
* bugfix and remove unused code
* Bugfix for TF 2.2
* Apply Style
* Divide TFSequenceClassificationAndMultipleChoiceLoss into its two respective names
* Apply style
* Mirror the PT Trainer in the TF one: fp16, optimizers and tb_writer as class parameters, and better dataset handling
* Fix TF optimization tests and apply style
* Remove useless parameter
* Bugfix and apply style
* Fix TF Trainer prediction
* Now the TF models return the loss, like their PyTorch counterparts
* Apply Style
* Ignore some tests output
* Take into account the SQuAD cls_index, p_mask and is_impossible parameters for the QuestionAnswering task models
* Fix names for SQuAD data
* Apply Style
* Fix conflicts with 2.11 release
* Fix conflicts with 2.11
* Fix wrong name
* Add better documentation for the new create_optimizer function
* Fix isort
* logging_dir: use same default as PyTorch
Co-authored-by: Julien Chaumond <chaumond@gmail.com>
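A minimal sketch of what the loss-returning TF models look like to a caller, assuming a sequence-classification head (model, text, and label are illustrative):

```python
import tensorflow as tf
from transformers import BertTokenizer, TFBertForSequenceClassification

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = TFBertForSequenceClassification.from_pretrained("bert-base-uncased")

inputs = tokenizer.encode_plus("A great movie.", return_tensors="tf")
# With labels supplied, the TF model computes and returns the task loss first,
# mirroring its PyTorch counterpart
outputs = model(inputs, labels=tf.constant([1]))
loss, logits = outputs[:2]
```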
- 14 Apr, 2020 1 commit
Patrick von Platen authored
* remove output_past from pt
* make style
* add optional input length for gpt2
* add use cache to prepare input
* save memory in gpt2
* correct gpt2 test inputs
* make past input optional for gpt2
* finish use_cache for all models
* make style
* delete modeling_gpt2 change in test file
* correct docstring
* correct `is True` statements for gpt2
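A sketch of the caching pattern this enables for GPT-2; note that in this era the keyword was `past` (later renamed `past_key_values`), and the greedy decoding loop here is illustrative:

```python
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

input_ids = tokenizer.encode("The cache", return_tensors="pt")
past = None
for _ in range(5):
    # Once `past` is set, only the newest token has to be fed back in,
    # which is where the memory and compute savings come from
    logits, past = model(input_ids, past=past)[:2]
    input_ids = torch.argmax(logits[:, -1, :], dim=-1, keepdim=True)
```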
- 06 Apr, 2020 1 commit
Patrick von Platen authored
* split beam search and no beam search test
* fix test
* clean generate tests
- 01 Apr, 2020 1 commit
Patrick von Platen authored
* change tf t5 argument naming for TF 2.2
* correct bug in testing
- 31 Mar, 2020 1 commit
Patrick von Platen authored
* add bad words list
* make style
* add bad_words_tokens
* make style
* better naming
* make style
* fix typo
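A minimal usage sketch, assuming the `bad_words_ids` argument to `generate` that this work introduced (prompt and banned words are illustrative):

```python
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

input_ids = tokenizer.encode("The weather is", return_tensors="pt")
# Token sequences listed in bad_words_ids can never be generated
bad_words_ids = [tokenizer.encode(word, add_prefix_space=True)
                 for word in ["terrible", "awful"]]
output = model.generate(input_ids, max_length=20, bad_words_ids=bad_words_ids)
print(tokenizer.decode(output[0]))
```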
- 19 Mar, 2020 1 commit
Patrick von Platen authored
* fix conflicts
* update bart max length test
* correct spelling mistakes
* implemented model specific encode function
* fix merge conflicts
* better naming
* save intermediate state -> need to rethink structure a bit
* leave tf problem as it is for now
* current version
* add layers.pop
* remove ipdb
* make style
* clean return cut decoding
* remove ipdbs
* Fix restoring layers in the decoders that don't exist
* push good intermediate solution for now
* fix conflicts
* always good to refuse to merge conflicts when rebasing
* fix small bug
* improve function calls
* remove unused file
* add correct scope behavior for t5_generate
Co-authored-by: Morgan Funtowicz <funtowiczmo@gmail.com>
- 18 Mar, 2020 1 commit
Patrick von Platen authored
Adding LM Head to Transfo-XL and a first step toward fixing the problem with Adaptive Embeddings in Transfo-XL (#3286)
* first commit
* work in progress
* make language generation task pass
* update to working version for LM
* delete print
* remove dead code
* make style
- 17 Mar, 2020 1 commit
Patrick von Platen authored
* change do_sample back
* None is a better default than a boolean
* adapt do_sample to True in test example
* make style
- 04 Mar, 2020 1 commit
patrickvonplaten authored
- 03 Mar, 2020 5 commits
Gunnlaugur Thor Briem authored
Gunnlaugur Thor Briem authored
And only run the test on TF*MainLayer classes so marked.
Gunnlaugur Thor Briem authored
When supplied by Keras deserialization, the config parameter to the initializers will be a dict. So intercept it and convert it to a PretrainedConfig object (storing it in an instance attribute so get_config can get at it) before passing it to the actual initializer. To accomplish this while repeating as little code as possible, use a class decorator on the TF*MainLayer classes.
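A simplified sketch of such a class decorator (the name and details here are illustrative, not the exact implementation):

```python
import functools
from transformers import PretrainedConfig

def keras_serializable(cls):
    initializer = cls.__init__

    @functools.wraps(initializer)
    def wrapped_init(self, config, *args, **kwargs):
        # Keras deserialization hands over a plain dict: rebuild the config object
        if isinstance(config, dict):
            config = PretrainedConfig.from_dict(config)
        self._config = config  # kept so get_config() can hand it back
        initializer(self, config, *args, **kwargs)

    cls.__init__ = wrapped_init
    return cls
```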
Gunnlaugur Thor Briem authored
Patrick von Platen authored
* add first copy-paste test to tf 2 generate
* add tf top_k_top_p_filter fn
* add generate function for TF
* implemented generate for all models except transfoXL
* make style
* change permission of test file to correct ones
* delete ipdb
* fix bug and finish simple gpt2 integration test
* clean test file
* make style
* change import style
* make style
* add decorators
* fix tf ctrl bug dim => axis in TF
* make style
* refactored test file
* take out test_torch_tf_conversion if nothing is defined
* remove useless files
* fix conflicts
* solve conflicts
* fix conflicts
* merge conflicts
* delete ipdb
* exposed top_k_top_p_filtering fns
* delete weirdly created w! file
* add comment to test tf common modeling
* fix conflicts
* make style
* merge conflicts
* make style
* change tf.tensor.shape to shape_list(tensor)
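For reference, a simplified PyTorch-style sketch of what a top-k/top-p (nucleus) filtering function does; the exposed helper in the library differs in its details:

```python
import torch
import torch.nn.functional as F

def top_k_top_p_filtering(logits, top_k=0, top_p=1.0, filter_value=-float("inf")):
    """Filter a [batch, vocab] logits tensor for top-k and/or nucleus sampling."""
    if top_k > 0:
        # Remove everything below the k-th largest logit
        kth_largest = torch.topk(logits, top_k)[0][..., -1, None]
        logits = logits.masked_fill(logits < kth_largest, filter_value)
    if top_p < 1.0:
        sorted_logits, sorted_indices = torch.sort(logits, descending=True)
        cumulative_probs = torch.cumsum(F.softmax(sorted_logits, dim=-1), dim=-1)
        # Mask tokens once cumulative probability exceeds top_p,
        # shifting right so the most probable token always survives
        sorted_mask = cumulative_probs > top_p
        sorted_mask[..., 1:] = sorted_mask[..., :-1].clone()
        sorted_mask[..., 0] = False
        mask = sorted_mask.scatter(-1, sorted_indices, sorted_mask)
        logits = logits.masked_fill(mask, filter_value)
    return logits
```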
- 02 Mar, 2020 1 commit
Julien Chaumond authored
* debug env
* Restrict TF GPU memory
* Fixup
* One more test
* rm debug logs
* Fixup
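"Restrict TF GPU memory" most likely means something along these lines, a standard TF 2.x pattern rather than necessarily the exact code used:

```python
import tensorflow as tf

# Ask TF to allocate GPU memory on demand instead of grabbing it all at once,
# so tests sharing a machine don't starve each other
for gpu in tf.config.list_physical_devices("GPU"):
    tf.config.experimental.set_memory_growth(gpu, True)
```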
- 28 Jan, 2020 1 commit
Lysandre authored
cc @julien-c @thomwolf
- 27 Jan, 2020 1 commit
Lysandre authored
cc @julien-c @thomwolf
- 06 Jan, 2020 2 commits
alberduris authored
alberduris authored
- 29 Dec, 2019 1 commit
Julien Chaumond authored
- 23 Dec, 2019 1 commit
Aymeric Augustin authored
- 22 Dec, 2019 10 commits
Aymeric Augustin authored
Aymeric Augustin authored
Aymeric Augustin authored
I suspect the wrapper classes were created in order to prevent the abstract base class (TF)CommonModelTester from being included in test discovery and run, because that would fail. I solved this by replacing the abstract base class with a mixin. The code changes are just de-indentation and automatic reformatting performed by black to use the extra line width.
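A minimal sketch of the mixin pattern described here (class and test names are illustrative):

```python
import unittest

class ModelTesterMixin:
    # Shared tests live on a mixin that is not itself a TestCase,
    # so test discovery never collects it on its own
    def test_config_is_set(self):
        self.assertIsNotNone(self.config)

class BertModelTest(ModelTesterMixin, unittest.TestCase):
    def setUp(self):
        self.config = {"hidden_size": 32}  # illustrative stand-in

if __name__ == "__main__":
    unittest.main()
```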
Aymeric Augustin authored
This construct isn't used anymore these days. Running python tests/test_foo.py puts the tests/ directory on PYTHONPATH, which isn't representative of how we run tests. Use python -m unittest tests/test_foo.py instead.
Aymeric Augustin authored
Aymeric Augustin authored
Aymeric Augustin authored
This change is mostly autogenerated with:
$ python -m autoflake --in-place --recursive --remove-all-unused-imports --ignore-init-module-imports examples templates transformers utils hubconf.py setup.py
I made minor changes in the generated diff.
Aymeric Augustin authored
This change is mostly autogenerated with:
$ python -m autoflake --in-place --recursive examples templates transformers utils hubconf.py setup.py
I made minor changes in the generated diff.
Aymeric Augustin authored
Aymeric Augustin authored
This is the result of:
$ isort --recursive examples templates transformers utils hubconf.py setup.py
- 21 Dec, 2019 1 commit
Aymeric Augustin authored
This is the result of:
$ black --line-length 119 examples templates transformers utils hubconf.py setup.py
There are a lot of fairly long lines in the project. As a consequence, I'm picking the longest widely accepted line length, 119 characters. This is also Thomas' preference, because it allows for explicit variable names that make the code easier to understand.
- 10 Dec, 2019 1 commit
thomwolf authored