- 14 Oct, 2020 3 commits
-
-
XiaoqiJiao authored
-
Jonathan Chang authored
* Add support for gpt2 batch inferencing
* Add test
* Remove typo
Co-authored-by: patrickvonplaten <patrick.v.platen@gmail.com>
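For context, a minimal sketch of what batched GPT-2 generation looks like with this support in place; the left-padding and pad-token setup below are illustrative assumptions for the example, not details taken from the commit.

```python
# Sketch: batched GPT-2 generation (setup choices are illustrative assumptions).
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

# GPT-2 has no pad token; reuse EOS and pad on the left so each prompt's
# generation continues right after its real last token.
tokenizer.pad_token = tokenizer.eos_token
tokenizer.padding_side = "left"
model.config.pad_token_id = tokenizer.eos_token_id

prompts = ["Hello, my name is", "The weather today is"]
batch = tokenizer(prompts, return_tensors="pt", padding=True)
generated = model.generate(
    batch["input_ids"], attention_mask=batch["attention_mask"], max_length=30
)
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```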
-
Quentin Lhoest authored
* fix bert position ids in DPR convert script * style
-
- 13 Oct, 2020 10 commits
-
-
Sylvain Gugger authored
-
Sam Shleifer authored
-
François Lagunas authored
* Adding optional trial argument to model_init
Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
-
Tiger authored
-
Noam Wies authored
* use DDP no_sync when possible
* fix is_nlp_available addition mistake
* reformat trainer.py
* reformat trainer.py
* drop support for pytorch < 1.2
* return support for pytorch < 1.2
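As a rough illustration of the no_sync pattern referenced above, here is a generic PyTorch gradient-accumulation sketch, not the actual trainer.py code; the model is assumed to return an output object with a `.loss` attribute.

```python
# Generic sketch: skip gradient all-reduce on intermediate accumulation steps
# using DistributedDataParallel.no_sync() (illustrative, not the Trainer code).
import contextlib

def accumulation_step(ddp_model, batch, step, accumulation_steps, optimizer):
    is_update_step = (step + 1) % accumulation_steps == 0
    # Only synchronize gradients on the step that will call optimizer.step();
    # other steps run under no_sync() to avoid redundant all-reduces.
    context = contextlib.nullcontext() if is_update_step else ddp_model.no_sync()
    with context:
        loss = ddp_model(**batch).loss / accumulation_steps
        loss.backward()
    if is_update_step:
        optimizer.step()
        optimizer.zero_grad()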
-
Lysandre Debut authored
* Do not softmax when num_labels==1
* Update src/transformers/pipelines.py
Co-authored-by: Funtowicz Morgan <mfuntowicz@users.noreply.github.com>
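The reasoning behind that change, sketched out: softmax over a single logit always returns 1.0, so a one-label head needs different post-processing. The sigmoid branch below is an illustrative assumption, not the exact diff.

```python
import numpy as np

def postprocess(logits: np.ndarray, num_labels: int) -> np.ndarray:
    if num_labels == 1:
        # A softmax over one value is always 1.0; a sigmoid (or the raw score,
        # for regression) is the sensible alternative here.
        return 1.0 / (1.0 + np.exp(-logits))
    shifted = logits - logits.max(axis=-1, keepdims=True)
    exp = np.exp(shifted)
    return exp / exp.sum(axis=-1, keepdims=True)
```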
-
Patrick von Platen authored
* fix rag
* Update tokenizer save_pretrained
Co-authored-by: Thomas Wolf <thomwolf@users.noreply.github.com>
-
Patrick von Platen authored
Putting my name on a couple more issues to directly redirect them to me
-
Felipe Curti authored
* Add Documentation for GPT-1 Classification
* Add GPT-1 with Classification head
* Add tests for GPT-1 Classification
* Add GPT-1 For Classification to auto models
* Remove authorized missing keys, change checkpoint to openai-gpt
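A brief usage sketch for the classification head added here; the example text is arbitrary, and the classification layer is freshly initialized when loading the base openai-gpt checkpoint.

```python
# Sketch: using the GPT-1 sequence-classification head from this commit.
from transformers import OpenAIGPTForSequenceClassification, OpenAIGPTTokenizer

tokenizer = OpenAIGPTTokenizer.from_pretrained("openai-gpt")
model = OpenAIGPTForSequenceClassification.from_pretrained("openai-gpt", num_labels=2)

inputs = tokenizer("a charming and often affecting journey", return_tensors="pt")
outputs = model(**inputs)
print(outputs.logits.shape)  # torch.Size([1, 2])
```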
-
Lysandre Debut authored
-
- 12 Oct, 2020 11 commits
-
-
Sam Shleifer authored
-
Alex Combessie authored
-
Lysandre Debut authored
-
Julien Plu authored
* Fix test * fix generic text classification * fix test * Fix tests
-
sgugger authored
-
Jonathan Chang authored
Fix a bug that happens when subclassing Trainer and overriding evaluate() without calling prediction_loop()
-
Kelvin authored
Splitting large files into smaller files can often prevent the tokenizer from going out of memory in environments like Colab that do not have swap memory.
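A minimal sketch of that kind of splitting (plain Python; the shard size and naming scheme are illustrative, not from the PR):

```python
# Split a large corpus file into smaller shards so tokenization can proceed
# shard-by-shard instead of holding the whole file in memory at once.
def split_file(path: str, lines_per_shard: int = 100_000) -> None:
    shard, count = 0, 0
    out = open(f"{path}.shard-{shard}", "w", encoding="utf-8")
    with open(path, encoding="utf-8") as src:
        for line in src:
            out.write(line)
            count += 1
            if count == lines_per_shard:
                out.close()
                shard, count = shard + 1, 0
                out = open(f"{path}.shard-{shard}", "w", encoding="utf-8")
    out.close()
```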
-
AndreaSottana authored
Minor spelling corrections in docstrings. "information" is uncountable in English and has no plural.
-
fteufel authored
Added is_torch_tpu_available() to the condition for saving a model as an XLA model. The "xla_device" property of the config can also be True on a non-XLA device when loading a checkpoint that was trained on XLA before. Resolves #7695
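Roughly, the fix amounts to the guard sketched below; this is a paraphrase of the described change, not the literal Trainer code, and the import path is the one used around this release.

```python
# Sketch: only treat the model as an XLA model when a TPU/XLA runtime is
# actually available, not merely when the config says so.
from transformers.file_utils import is_torch_tpu_available

def should_save_as_xla(config) -> bool:
    # config.xla_device can remain True after loading a checkpoint trained on
    # XLA, even when running on an ordinary CPU/GPU machine.
    return getattr(config, "xla_device", False) and is_torch_tpu_available()
```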
-
Sylvain Gugger authored
-
Berowne authored
replace 'men_len' with 'mem_len' to match parameter name
-
- 11 Oct, 2020 3 commits
-
-
Miguel Victor authored
-
Sam Shleifer authored
-
Alexandr Maslov authored
-
- 10 Oct, 2020 2 commits
-
-
Andrew Kane authored
-
Sylvain Gugger authored
-
- 09 Oct, 2020 11 commits
-
-
Sylvain Gugger authored
-
Doug Blank authored
* Import integration libraries first
* isort and black happiness
* flake8 happiness
* Add a test
* Black reformat
* Ignore import order in tests
* A heavy-handed method of disabling comet for tests
* Remove comet_ml tests
* Run black on setup.py
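The import-order point matters because comet_ml hooks into the ML frameworks at import time; a one-line illustration (module names only, not the actual file):

```python
# comet_ml should be imported before torch so it can attach its hooks;
# importing it afterwards triggers a warning and can break automatic logging.
import comet_ml  # noqa: F401  (integration library first)
import torch     # noqa: F401
```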
-
sgugger authored
-
Sylvain Gugger authored
-
Sam Shleifer authored
-
Stas Bekman authored
-
sgugger authored
-
Julien Plu authored
* Fix test * Fix cardinality issue * Fix test
-
Joe Davison authored
-
Funtowicz Morgan authored
* Reintroduce clean_text call which was removed by mistake in #4723
* Added unittest for clean_text parameter on Bert tokenizer.
* Better unittest name.
* Adapt unittest to use untrained tokenizer.
* Code quality + update test
Signed-off-by: Morgan Funtowicz <funtowiczmo@gmail.com>
Co-authored-by: Lysandre <lysandre.debut@reseau.eseo.fr>
-