- 07 Sep, 2020 9 commits
-
-
Stas Bekman authored
unittest doesn't support pytest's super-handy `@pytest.mark.parametrize`. I researched this, and there are many proposed workarounds, most of them tedious at best. If we include https://pypi.org/project/parameterized/ in the dev dependencies, it gives us a very easy way to write parameterized tests, similar to pytest's decorator, plus quite a few other styles. Example:
```
import math

from nose.tools import assert_equal
from parameterized import parameterized

@parameterized([
    (2, 2, 4),
    (2, 3, 8),
    (1, 9, 1),
    (0, 9, 0),
])
def test_pow(base, exponent, expected):
    assert_equal(math.pow(base, exponent), expected)
```
(add an extra `self` argument if the test lives inside a test class)

As a reminder, the pytest style is slightly different:
```
@pytest.mark.parametrize("test_input,expected", [("3+5", 8), ("2+4", 6), ("6*9", 42)])
def test_eval(test_input, expected):
    assert eval(test_input) == expected
```
More examples here: https://pypi.org/project/parameterized

May I suggest that this would make it much easier to write some types of tests?
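For the class-based case, the library also documents a `parameterized.expand` variant that works with plain `unittest.TestCase` subclasses; a minimal sketch:
```
import math
import unittest

from parameterized import parameterized


class TestPow(unittest.TestCase):
    # Each tuple expands into its own named test method.
    @parameterized.expand([
        (2, 2, 4),
        (2, 3, 8),
    ])
    def test_pow(self, base, exponent, expected):
        self.assertEqual(math.pow(base, exponent), expected)
```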
-
Stas Bekman authored
* [docstring] missing arg: add the missing `tie_word_embeddings` entry
* cleanup
* Update src/transformers/configuration_reformer.py

Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
-
Stas Bekman authored
there is no variable `decoder_input_ids`, but there is `input_ids` for the decoder :)
-
Julien Chaumond authored
-
Lysandre Debut authored
-
Sylvain Gugger authored
* Add warning for gradient accumulation
* Formatting
-
Julien Chaumond authored
cc @jplu
-
Boris Dayma authored
* feat: allow padding_text for any generative model
* docs(pipelines.py): correct typo
* Update src/transformers/pipelines.py
* feat: rename padding_text to prefix
* fix: cannot tokenize empty text
* fix: pass prefix arg to pipeline
* test: add prefix to text-generation pipeline
* style: fix style
* style: clean code and make the variable names more explicit
* set arg docstring to optional

Co-authored-by: Sam Shleifer <sshleifer@gmail.com>
Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
-
Sam Shleifer authored
-
- 06 Sep, 2020 1 commit
-
-
Patrick von Platen authored
-
- 05 Sep, 2020 1 commit
-
-
Steven Liu authored
* create model card for astroGPT
* Hotlink to actual image file

Co-authored-by: Julien Chaumond <chaumond@gmail.com>
-
- 04 Sep, 2020 8 commits
-
-
Naveenkhasyap authored
* Create Readme.MD for KanBERTo: KanBERTo language model readme for the Kannada language.
* Update model_cards/Naveen-k/KanBERTo/README.md

Co-authored-by: Julien Chaumond <chaumond@gmail.com>
-
Stas Bekman authored
* remove the implied defaults to :obj:`None`
* fix a bug in the original
* replace with :obj:`True`, :obj:`False`
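For context, the docstring convention in question looks roughly like this (a hypothetical argument for illustration, not one from the actual diff):
```
def generate(do_sample: bool = False):
    """
    Args:
        do_sample (:obj:`bool`, `optional`, defaults to :obj:`False`):
            Booleans are written as :obj:`True`/:obj:`False`, and a redundant
            "defaults to :obj:`None`" is dropped when an argument is simply optional.
    """
```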
-
Stas Bekman authored
-
Sam Shleifer authored
-
Sam Shleifer authored
-
Stas Bekman authored
* correct bool types: fix docstring s/int/bool/
* fix description
* fix num_labels to match reality
-
Patrick von Platen authored
-
Yih-Dar authored
* Remove hard-coded uses of float32 to fix mixed precision use in TF Distilbert
* fix style
* fix gelu dtype issue in TF Distilbert
* fix numeric overflow while using half precision
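The general pattern behind such fixes (an illustrative sketch, not the actual Distilbert code) is to derive dtypes from the inputs instead of hard-coding float32, and to avoid constants that overflow float16:
```
import tensorflow as tf

def scaled_scores(query, key):
    # Derive the dtype from the inputs rather than hard-coding tf.float32,
    # so the op also runs under a mixed/half precision policy.
    dk = tf.cast(tf.shape(key)[-1], dtype=query.dtype)
    return tf.matmul(query, key, transpose_b=True) / tf.math.sqrt(dk)

def masked_scores(scores, mask):
    # A constant like -1e9 overflows float16 (max ~65504); the dtype's own
    # minimum keeps half precision from producing inf/nan.
    neg_inf = tf.constant(scores.dtype.min, dtype=scores.dtype)
    # Keep scores where the mask is set, fill the rest with the large negative value.
    return tf.where(tf.cast(mask, tf.bool), scores, neg_inf)
```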
-
- 03 Sep, 2020 12 commits
-
-
Sam Shleifer authored
-
Sam Shleifer authored
-
krfricke authored
* move wandb/comet logger init to train() to allow parallel logging
* Setup wandb/comet loggers on first call to log()
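The underlying pattern (a sketch with made-up names, not the actual Trainer code) defers logger initialization until the first `log()` call, so each parallel worker sets up its own run:
```
class LazyLoggingTrainer:
    def __init__(self):
        self._loggers_initialized = False

    def _setup_loggers(self):
        # e.g. wandb.init(...) and/or comet_ml.Experiment(...) would go here,
        # executed in the worker process rather than at construction time.
        self._loggers_initialized = True

    def log(self, metrics: dict):
        if not self._loggers_initialized:
            self._setup_loggers()
        print(metrics)  # stand-in for forwarding metrics to the configured loggers
```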
-
Sam Shleifer authored
-
Sam Shleifer authored
Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
-
brett koonce authored
-
Stefan Engl authored
-
David Mark Nemeskey authored
-
abdullaholuk-loodos authored
The Loodos model cards had errors in the "Usage" section; these are fixed. Also, the "electra-base-turkish-uncased" model was removed from s3 and re-uploaded as "electra-base-turkish-uncased-discriminator", and its README was added. (#6921)

Co-authored-by: Abdullah Oluk <abdullaholuk123@gmail.com>
-
Julien Chaumond authored
cc @psorianom @rachelker
-
Sylvain Gugger authored
-
Antonio V Mendoza authored
Adding the LXMERT pretraining model (MultiModal languageXvision) to HuggingFace's suite of models (#5793)

* added template files for LXMERT and completed configuration_lxmert.py
* added modeling, tokenization, testing, and finishing touches for lxmert [yet to be tested]
* added model card for lxmert
* cleaning up lxmert code
* Update src/transformers/modeling_lxmert.py
* Update src/transformers/modeling_tf_lxmert.py
* Update src/transformers/modeling_tf_lxmert.py
* Update src/transformers/modeling_lxmert.py
* tested torch lxmert, changed documentation, updated outputs, and other small fixes
* Update src/transformers/convert_pytorch_checkpoint_to_tf2.py
* Update src/transformers/convert_pytorch_checkpoint_to_tf2.py
* Update src/transformers/convert_pytorch_checkpoint_to_tf2.py
* renaming, other small issues, did not change TF code in this commit
* added lxmert question answering model in pytorch
* added capability to edit number of qa labels for lxmert
* made answer optional for lxmert question answering
* add option to return hidden_states for lxmert
* changed default qa labels for lxmert
* changed config archive path
* squashing 3 commits: merged UI + testing improvements + more UI and testing
* changed some variable names for lxmert
* TF LXMERT
* Various fixes to LXMERT
* Final touches to LXMERT
* AutoTokenizer order
* Add LXMERT to index.rst and README.md
* Merge commit test fixes + Style update
* TensorFlow 2.3.0 sequential model changes variable names; remove inherited test
* Update src/transformers/modeling_tf_pytorch_utils.py
* Update docs/source/model_doc/lxmert.rst
* Update docs/source/model_doc/lxmert.rst
* Update src/transformers/modeling_tf_lxmert.py
* added suggestions
* Fixes
* Final fixes for TF model
* Fix docs

Co-authored-by: Lysandre Debut <lysandre@huggingface.co>
Co-authored-by: Lysandre <lysandre.debut@reseau.eseo.fr>
Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
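A quick usage sketch of the new model (the checkpoint name and the 36-region feature shapes follow LXMERT's usual Faster R-CNN setup, but treat the details as assumptions):
```
import torch
from transformers import LxmertModel, LxmertTokenizer

tokenizer = LxmertTokenizer.from_pretrained("unc-nlp/lxmert-base-uncased")
model = LxmertModel.from_pretrained("unc-nlp/lxmert-base-uncased")

inputs = tokenizer("What is on the table?", return_tensors="pt")
visual_feats = torch.randn(1, 36, 2048)  # region features from an external detector
visual_pos = torch.rand(1, 36, 4)        # normalized bounding-box coordinates

outputs = model(**inputs, visual_feats=visual_feats, visual_pos=visual_pos)
```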
-
- 02 Sep, 2020 9 commits
-
-
Puneetha Pai authored
-
Stas Bekman authored
Since `generate()` does:
```
num_beams = num_beams if num_beams is not None else self.config.num_beams
```
this test fails if `model.config.num_beams > 1` (which is the case in the model I'm porting). This fix makes the test setup unambiguous by passing an explicit `num_beams=1` to `generate()`. Thanks.
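A sketch of the unambiguous setup (the tiny checkpoint name is an assumption; any seq2seq model works the same way):
```
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("sshleifer/tiny-mbart")
model = AutoModelForSeq2SeqLM.from_pretrained("sshleifer/tiny-mbart")
input_ids = tokenizer("Hello world", return_tensors="pt").input_ids
# Pass num_beams explicitly so the test does not silently inherit config.num_beams.
generated = model.generate(input_ids, num_beams=1)
```
-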
Sylvain Gugger authored
* Fix output_attention -> output_attentions
* Formatting
* One unsaved file
-
Yohei Tamura authored
-
Suraj Patil authored
* add Text2TextGenerationPipeline
* remove max length warning
* remove comments
* remove input_length
* fix typo
* add tests
* use TFAutoModelForSeq2SeqLM
* doc
* typo
* add the doc below TextGenerationPipeline
* doc nit
* style
* delete comment
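Hypothetical usage of the new pipeline (the T5 checkpoint, prompt, and printed output are illustrative):
```
from transformers import pipeline

t2t = pipeline("text2text-generation", model="t5-small")
print(t2t("translate English to German: The house is wonderful."))
# e.g. [{'generated_text': 'Das Haus ist wunderbar.'}]
```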
-
Prajjwal Bhargava authored
-
Stas Bekman authored
* [doc] typos: fixed typos
* Update README.md
-
Harry Wang authored
-
Patrick von Platen authored
-