- 06 Jan, 2020 2 commits
alberduris authored
alberduris authored

- 22 Dec, 2019 5 commits
Aymeric Augustin authored
Aymeric Augustin authored
Aymeric Augustin authored
This change is mostly autogenerated with:
$ python -m autoflake --in-place --recursive --remove-all-unused-imports --ignore-init-module-imports examples templates transformers utils hubconf.py setup.py
I made minor changes in the generated diff.
Aymeric Augustin authored
Aymeric Augustin authored
This is the result of:
$ isort --recursive examples templates transformers utils hubconf.py setup.py

- 21 Dec, 2019 3 commits
Aymeric Augustin authored
This is the result of:
$ black --line-length 119 examples templates transformers utils hubconf.py setup.py
There are a lot of fairly long lines in the project. As a consequence, I'm picking the longest widely accepted line length, 119 characters. This is also Thomas' preference, because it allows for explicit variable names, which make the code easier to understand.
thomwolf authored
thomwolf authored

- 19 Dec, 2019 1 commit
Francesco authored
Removed duplicate XLMConfig, XLMForQuestionAnswering and XLMTokenizer from the import statement of the run_squad.py script.

- 16 Dec, 2019 1 commit
Lysandre authored

- 14 Dec, 2019 1 commit
erenup authored

- 13 Dec, 2019 2 commits

- 12 Dec, 2019 1 commit
LysandreJik authored

- 11 Dec, 2019 1 commit
Bilal Khan authored

- 10 Dec, 2019 2 commits
LysandreJik authored
Lysandre authored

- 09 Dec, 2019 1 commit
LysandreJik authored

- 05 Dec, 2019 2 commits
LysandreJik authored
Improve global visibility in the run_squad script, remove unused files, and fix issues related to XLNet.
LysandreJik authored

- 04 Dec, 2019 3 commits
LysandreJik authored
LysandreJik authored
LysandreJik authored

- 03 Dec, 2019 2 commits
LysandreJik authored
Ethan Perez authored
When evaluating, shouldn't we always use the SequentialSampler instead of DistributedSampler? Evaluation only runs on 1 GPU no matter what, so if you use the DistributedSampler with N GPUs, I think you'll only evaluate on 1/N of the evaluation set. That's at least what I'm finding when I run an older/modified version of this repo.
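The 1/N effect described in this commit message can be checked without any GPU. PyTorch's `DistributedSampler` shards indices round-robin across processes, so an evaluation loop that runs on a single process but keeps the distributed sampler only ever sees its own shard. A minimal pure-Python sketch, where `distributed_shard` and `sequential_indices` are simplified stand-ins (my names, not the library's) for the index sets produced by `DistributedSampler` and `SequentialSampler`:

```python
# Simplified stand-in for torch.utils.data.distributed.DistributedSampler:
# each rank takes every num_replicas-th index, starting at its own rank.
def distributed_shard(dataset_len, num_replicas, rank):
    return list(range(rank, dataset_len, num_replicas))

# Stand-in for torch.utils.data.SequentialSampler: every index, in order.
def sequential_indices(dataset_len):
    return list(range(dataset_len))

dataset_len, num_gpus = 1000, 4

# Evaluation runs on a single process (rank 0). Keeping the distributed
# sampler means that process only covers its own 1/N shard...
seen_distributed = distributed_shard(dataset_len, num_gpus, rank=0)
# ...while the sequential sampler covers the full evaluation set.
seen_sequential = sequential_indices(dataset_len)

print(len(seen_distributed))  # 250 -> only 1/4 of the examples
print(len(seen_sequential))   # 1000 -> the full evaluation set
```

This is why switching evaluation to `SequentialSampler`, as the commit proposes, changes the reported metrics: the distributed shard silently drops three quarters of the examples in this 4-GPU scenario.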

- 28 Nov, 2019 1 commit
Lysandre authored

- 26 Nov, 2019 1 commit
Lysandre authored

- 22 Nov, 2019 2 commits

- 18 Nov, 2019 1 commit
Kazutoshi Shinoda authored

- 15 Nov, 2019 1 commit
Xu Hongshen authored

- 14 Nov, 2019 1 commit
Rémi Louf authored

- 12 Nov, 2019 1 commit
ronakice authored

- 04 Nov, 2019 1 commit
thomwolf authored

- 20 Oct, 2019 1 commit
Pasquale Minervini authored

- 17 Oct, 2019 1 commit
William Tambellini authored
Add a speed estimate log (time per example) for evaluation to examples/run_squad.py
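A speed estimate like the one this commit adds is just total wall-clock time divided by the number of examples. A minimal sketch of the idea, where `evaluate` and its placeholder per-example work are hypothetical (the actual variable names and log format in examples/run_squad.py may differ):

```python
import time

def evaluate(examples):
    """Toy evaluation loop; the list comprehension stands in for the
    real per-example model inference."""
    start = time.time()
    results = [len(e) for e in examples]  # placeholder per-example work
    eval_time = time.time() - start
    # Speed estimate in the spirit of the commit:
    # total time and time per example.
    print(f"Evaluation done in {eval_time:.6f} secs "
          f"({eval_time / len(examples):.6f} sec per example)")
    return results

results = evaluate(["short", "slightly longer example", "mid"])
```

Logging the per-example figure rather than only the total makes runs on differently sized evaluation sets directly comparable.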

- 14 Oct, 2019 2 commits
hlums authored
Simon Layton authored