- 14 May, 2020 1 commit

Julien Chaumond authored
* Fix: unpin flake8 and fix cs errors
* Ok, we still need to quote those
- 07 Apr, 2020 1 commit

Patrick von Platen authored
* Improve and add features to benchmark utils
* Update benchmark style
* Remove output files
- 19 Mar, 2020 1 commit

Nitish Shirish Keskar authored
torch.cuda.empty_cache() was being called from a TF function (even when torch is unavailable). Not sure any replacement is needed if TF OOMs.
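The fix described in this commit can be sketched as follows. This is a minimal illustration, not the actual patch; the helper name is hypothetical. The idea is to check whether torch is importable before touching its CUDA cache, so a TensorFlow-only environment never triggers an ImportError:

```python
# Hypothetical sketch: only call torch.cuda.empty_cache() when torch
# is actually installed, so TF-only setups don't crash on import.
import importlib.util


def empty_cuda_cache_if_available():
    """Free cached GPU memory via torch, but only if torch is installed.

    Returns True if torch was found (and the cache was cleared when a
    CUDA device is present), False if torch is unavailable.
    """
    if importlib.util.find_spec("torch") is None:
        return False  # torch unavailable (e.g. a TF-only environment)
    import torch

    if torch.cuda.is_available():
        torch.cuda.empty_cache()
    return True
```

Guarding on `importlib.util.find_spec` avoids importing torch eagerly, which keeps the check cheap when the package is absent.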
- 17 Mar, 2020 1 commit

Thomas Wolf authored
* Memory benchmark RSS
* Have both forward pass and line-by-line memory tracing
* Cleaned up tracing
* Refactored and cleaned up API
* No f-strings yet...
* Add GPU memory logging
* Fix GPU memory monitoring
* Style and quality
* Clean up and doc
* Update with comments
* Switch to Python 3.6+
* Fix quality
- 06 Jan, 2020 2 commits

alberduris authored

alberduris authored
- 22 Dec, 2019 2 commits

Aymeric Augustin authored
Fixes flake8 warning W291 (x224).

Aymeric Augustin authored
This is the result of:
$ isort --recursive examples templates transformers utils hubconf.py setup.py
- 21 Dec, 2019 1 commit

Aymeric Augustin authored
This is the result of:
$ black --line-length 119 examples templates transformers utils hubconf.py setup.py
There are a lot of fairly long lines in the project. As a consequence, I'm picking the longest widely accepted line length, 119 characters. This is also Thomas' preference, because it allows for explicit variable names, which make the code easier to understand.
- 31 Oct, 2019 1 commit

Timothy Liu authored
- 22 Oct, 2019 2 commits

- 18 Oct, 2019 1 commit

LysandreJik authored