- 24 Mar, 2020 1 commit
  Julien Chaumond authored
- 02 Mar, 2020 1 commit
  Victor SANH authored
    * fix n_gpu count when no_cuda flag is activated
    * someone was left behind
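The `no_cuda` fix amounts to forcing the GPU count to zero when the flag is set, instead of trusting the hardware count. A minimal dependency-free sketch of that logic (`setup_device` is a hypothetical helper name; the actual script queries `torch.cuda` directly):

```python
def setup_device(no_cuda, cuda_available, device_count):
    """Return (device, n_gpu), honoring a --no_cuda flag.

    The bug being fixed: n_gpu was taken from the hardware count even
    when --no_cuda was set; it must be forced to 0 in that case so the
    script runs on CPU and never wraps the model for multi-GPU.
    """
    if no_cuda or not cuda_available:
        return "cpu", 0
    return "cuda", device_count
```

In the script itself, the equivalent inputs come from `torch.cuda.is_available()` and `torch.cuda.device_count()`.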
- 21 Feb, 2020 1 commit
  maximeilluin authored
    * Added CamembertForQuestionAnswering
    * fixed camembert tokenizer case
- 04 Feb, 2020 1 commit
  Yuval Pinter authored
    * pass langs parameter to certain XLM models
      Adding an argument that specifies the language the SQuAD dataset is in, so language-sensitive XLMs (e.g. `xlm-mlm-tlm-xnli15-1024`) don't default to language `0`. Allows resolution of issue #1799.
    * fixing from `make style`
    * fixing style (again)
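Mechanically, the `langs` input is a tensor shaped like `input_ids` and filled with the chosen language id, which these XLM checkpoints use to select language embeddings. A dependency-free sketch of that fill (`build_langs` is a hypothetical name; the script builds the same thing as a torch tensor):

```python
def build_langs(input_ids, lang_id):
    """Build a `langs` matrix matching input_ids' shape, filled with lang_id.

    Without such an input, language-sensitive XLM checkpoints like
    xlm-mlm-tlm-xnli15-1024 treat every token as language 0.
    """
    return [[lang_id for _ in row] for row in input_ids]
```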
- 28 Jan, 2020 1 commit
  Lysandre authored
- 17 Jan, 2020 1 commit
  jiyeon_baek authored
    Rul -> Run
- 16 Jan, 2020 1 commit
  Lysandre authored
- 08 Jan, 2020 2 commits
  Lysandre authored
  Lysandre Debut authored
- 06 Jan, 2020 2 commits
  alberduris authored
  alberduris authored
- 22 Dec, 2019 5 commits
  Aymeric Augustin authored
  Aymeric Augustin authored
  Aymeric Augustin authored
    This change is mostly autogenerated with:
        $ python -m autoflake --in-place --recursive --remove-all-unused-imports --ignore-init-module-imports examples templates transformers utils hubconf.py setup.py
    I made minor changes in the generated diff.
  Aymeric Augustin authored
  Aymeric Augustin authored
    This is the result of:
        $ isort --recursive examples templates transformers utils hubconf.py setup.py
- 21 Dec, 2019 3 commits
  Aymeric Augustin authored
    This is the result of:
        $ black --line-length 119 examples templates transformers utils hubconf.py setup.py
    There are a lot of fairly long lines in the project, so I'm picking the longest widely accepted line length: 119 characters. This is also Thomas' preference, because it allows for explicit variable names, which make the code easier to understand.
  thomwolf authored
  thomwolf authored
- 19 Dec, 2019 1 commit
  Francesco authored
    Removed duplicate XLMConfig, XLMForQuestionAnswering and XLMTokenizer from the import statement of the run_squad.py script.
- 16 Dec, 2019 1 commit
  Lysandre authored
- 14 Dec, 2019 1 commit
  erenup authored
- 13 Dec, 2019 2 commits
- 12 Dec, 2019 1 commit
  LysandreJik authored
- 11 Dec, 2019 1 commit
  Bilal Khan authored
- 10 Dec, 2019 2 commits
  LysandreJik authored
  Lysandre authored
- 09 Dec, 2019 1 commit
  LysandreJik authored
- 05 Dec, 2019 2 commits
  LysandreJik authored
    Improve global visibility on the run_squad script, remove unused files, and fix issues related to XLNet.
  LysandreJik authored
- 04 Dec, 2019 3 commits
  LysandreJik authored
  LysandreJik authored
  LysandreJik authored
- 03 Dec, 2019 2 commits
  LysandreJik authored
  Ethan Perez authored
    When evaluating, shouldn't we always use the SequentialSampler instead of DistributedSampler? Evaluation only runs on 1 GPU no matter what, so if you use the DistributedSampler with N GPUs, I think you'll only evaluate on 1/N of the evaluation set. That's at least what I'm finding when I run an older/modified version of this repo.
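The 1/N claim follows directly from how the two samplers enumerate indices. A simplified sketch (no shuffling or padding, unlike torch's real `DistributedSampler`, but the sharding idea is the same):

```python
def sequential_indices(n):
    # SequentialSampler: every example, in order.
    return list(range(n))

def distributed_indices(n, num_replicas, rank):
    # DistributedSampler, simplified: each replica sees only its 1/N shard.
    return list(range(rank, n, num_replicas))
```

With 4 replicas, rank 0 enumerates only 25 of 100 eval examples, so a single-process evaluation driven by a distributed-style sampler silently scores a quarter of the set, while the sequential sampler covers all 100.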
- 28 Nov, 2019 1 commit
  Lysandre authored
- 26 Nov, 2019 1 commit
  Lysandre authored
- 22 Nov, 2019 2 commits