- 28 Aug, 2019 (1 commit)
  - Shijie Wu authored
- 27 Aug, 2019 (1 commit)
  - thomwolf authored
- 24 Aug, 2019 (2 commits)
- 23 Aug, 2019 (3 commits)
  - Shijie Wu authored: Tokenization behaves the same as the original XLM preprocessing for most languages, except zh, ja and th; change the API to allow specifying the language in `tokenize` (see the sketch below)
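A minimal usage sketch of the changed API, assuming the pytorch-transformers `XLMTokenizer`; the checkpoint name is an assumption, not taken from the commit:

```python
from pytorch_transformers import XLMTokenizer

# Hedged sketch: `lang` selects the language-specific preprocessing
# applied inside tokenize(); the checkpoint name is illustrative.
tokenizer = XLMTokenizer.from_pretrained("xlm-mlm-enfr-1024")
en_tokens = tokenizer.tokenize("Hello, world!", lang="en")
fr_tokens = tokenizer.tokenize("Bonjour le monde !", lang="fr")
```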
  - Thomas Wolf authored: Fix distributed barrier hang
  - Thomas Wolf authored: Re-raise EnvironmentError in modeling_utils.py
- 22 Aug, 2019 (3 commits)
  - Abhishek Rao authored
  - VictorSanh authored
  - VictorSanh authored
- 21 Aug, 2019 (8 commits)
  - Abhishek Rao authored
  - Abhishek Rao authored
  - thomwolf authored
  - Lysandre authored
  - VictorSanh authored
  - Thomas Wolf authored: Add the GPT-2 large (774M parameters) model (see the sketch below)
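A minimal loading sketch, assuming the new checkpoint is published under the shortcut name `gpt2-large` like the existing GPT-2 sizes:

```python
from pytorch_transformers import GPT2LMHeadModel, GPT2Tokenizer

# The 774M-parameter model is selected by name, as with the
# smaller GPT-2 checkpoints; weights download on first use.
tokenizer = GPT2Tokenizer.from_pretrained("gpt2-large")
model = GPT2LMHeadModel.from_pretrained("gpt2-large")
```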
  - thomwolf authored
  - thomwolf authored
- 20 Aug, 2019 (22 commits)
  - Thomas Wolf authored: Add a few typo corrections, bug fixes and small improvements
  - Thomas Wolf authored: Better use of the spaCy tokenizer in the OpenAI and XLM tokenizers (see the sketch below)
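The commit message names spaCy but not the mechanics; a hedged sketch of the optional-dependency pattern such tokenizers commonly use (illustrative, not the library's exact code):

```python
# Hedged sketch: prefer spaCy when it and its English model are
# installed, otherwise fall back to plain whitespace splitting.
try:
    import spacy
    _nlp = spacy.load("en", disable=["parser", "tagger", "ner"])

    def word_tokenize(text):
        return [tok.text for tok in _nlp(text)]
except (ImportError, OSError):
    def word_tokenize(text):
        return text.split()
```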
  - Thomas Wolf authored: Re-implemented tokenize() iteratively in PreTrainedTokenizer (see the sketch below)
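A hedged sketch of what an iterative (non-recursive) tokenize() can look like: split the text on each added special token in a flat loop, then run the underlying tokenizer on the remaining pieces. Names and structure are illustrative, not the library's exact implementation:

```python
def iterative_tokenize(text, added_tokens, base_tokenize):
    """Split `text` on `added_tokens` without recursion, then apply
    `base_tokenize` to the ordinary-text pieces."""
    pieces = [text]
    for tok in added_tokens:
        next_pieces = []
        for piece in pieces:
            if piece in added_tokens:
                next_pieces.append(piece)  # already an atomic token
                continue
            parts = piece.split(tok)
            for i, part in enumerate(parts):
                if part:
                    next_pieces.append(part)
                if i < len(parts) - 1:
                    next_pieces.append(tok)  # re-insert the separator
        pieces = next_pieces
    out = []
    for piece in pieces:
        out.extend([piece] if piece in added_tokens else base_tokenize(piece))
    return out

# Example: iterative_tokenize("hi <s> there", ["<s>"], str.split)
# returns ['hi', '<s>', 'there'].
```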
  - Thomas Wolf authored
  - Thomas Wolf authored: Fix #1015 (tokenizer defaults to do_lower_case=True when loading from trained models)
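A minimal sketch of the workaround the fix makes unnecessary: pinning the casing flag explicitly when loading a cased checkpoint, instead of relying on the default (checkpoint name is illustrative):

```python
from pytorch_transformers import BertTokenizer

# Before the fix, a loaded tokenizer could silently fall back to the
# lowercasing default; passing the flag pins the behavior either way.
tokenizer = BertTokenizer.from_pretrained("bert-base-cased",
                                          do_lower_case=False)
```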
  - Thomas Wolf authored: Fix a path so that a test can run on Windows
  - Peng Qi authored
  - Thomas Wolf authored: Fix typo: configuratoin -> configuration
  - Thomas Wolf authored: Fix a multi-GPU training bug in LM finetuning
  - thomwolf authored
  - Nikolay Korolev authored
  - Guillem García Subies authored
  - Guillem García Subies authored
  - Guillem García Subies authored
  - Guillem García Subies authored
  - Guillem García Subies authored
Thomas Wolf authored
Swap of optimizer.step and scheduler.step for lm finetuning examples
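A minimal, self-contained sketch of the corrected ordering (PyTorch 1.1+ expects optimizer.step() before scheduler.step() on each iteration); the model and data here are dummies:

```python
import torch

model = torch.nn.Linear(10, 2)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=100)

for _ in range(3):
    loss = model(torch.randn(4, 10)).pow(2).mean()
    loss.backward()
    optimizer.step()       # update the weights first...
    scheduler.step()       # ...then advance the learning-rate schedule
    optimizer.zero_grad()
```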
  - thomwolf authored
  - Julien Chaumond authored
  - thomwolf authored
  - thomwolf authored
  - thomwolf authored