- 02 Sep, 2019 (7 commits)
  - LysandreJik authored
  - LysandreJik authored
  - Julien Chaumond authored: cc @n1t0 @lysandrejik @thomwolf
  - Thomas Wolf authored: Fix byte-level BPE decoding error when using added tokens
  - LysandreJik authored
  - thomwolf authored
  - Thomas Wolf authored: distillation: fix ModuleNotFoundError in token counts script
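The byte-level BPE fix above can be illustrated with a toy sketch. In GPT-2-style byte-level BPE, every byte of the input text is remapped to a printable character before tokenization, but added (special) tokens are stored as plain strings, so the decoder must not run them back through the byte table. This is a simplified illustration under those assumptions, not the library's implementation; the byte table, token names, and function names are hypothetical.

```python
# Toy byte-to-unicode table: shift each byte value by 0x100 so every byte
# becomes a printable character (a simplification of GPT-2's real table).
BYTE_TO_CHAR = {b: chr(0x100 + b) for b in range(256)}
CHAR_TO_BYTE = {c: b for b, c in BYTE_TO_CHAR.items()}

# Hypothetical added tokens; these are plain strings, never byte-encoded.
ADDED_TOKENS = {"<special1>", "<special2>"}

def byte_encode(text: str) -> str:
    """Map raw UTF-8 bytes to their placeholder characters."""
    return "".join(BYTE_TO_CHAR[b] for b in text.encode("utf-8"))

def decode(tokens: list) -> str:
    """Decode a token sequence, keeping added tokens out of the byte table."""
    out = []
    buf = []  # run of byte-level tokens awaiting byte decoding

    def flush():
        if buf:
            raw = bytes(CHAR_TO_BYTE[c] for c in "".join(buf))
            out.append(raw.decode("utf-8"))
            buf.clear()

    for tok in tokens:
        if tok in ADDED_TOKENS:
            flush()          # decode any pending byte-level run first
            out.append(tok)  # emit the added token verbatim
        else:
            buf.append(tok)
    flush()
    return "".join(out)
```

Without the separate handling, the added token's characters would be looked up in the byte table and corrupt the output, which matches the class of bug the commit message describes.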
- 01 Sep, 2019 (1 commit)
  - Thomas Wolf authored: Pruning changes so that deleted heads are kept on save/load
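The idea behind the pruning commit above can be sketched in miniature: record which attention heads have been pruned on the configuration object, so that rebuilding a model from its config reapplies the pruning, and warn when a head has already been deleted. The classes and names below (`ToyConfig`, `ToyModel`, `prune_heads`, `pruned_heads`) are illustrative assumptions, not the transformers API.

```python
import warnings

class ToyConfig:
    """Minimal config that records pruned heads per layer."""
    def __init__(self, num_layers=2, num_heads=4, pruned_heads=None):
        self.num_layers = num_layers
        self.num_heads = num_heads
        # layer index -> set of head indices already removed
        self.pruned_heads = {k: set(v) for k, v in (pruned_heads or {}).items()}

    def to_dict(self):
        """Serializable form, as would be written on save."""
        return {
            "num_layers": self.num_layers,
            "num_heads": self.num_heads,
            "pruned_heads": {k: sorted(v) for k, v in self.pruned_heads.items()},
        }

class ToyModel:
    def __init__(self, config):
        self.config = config
        # Reapply previously recorded pruning when (re)building from config.
        self.active_heads = {
            layer: set(range(config.num_heads)) - config.pruned_heads.get(layer, set())
            for layer in range(config.num_layers)
        }

    def prune_heads(self, heads_to_prune):
        """Prune heads and record them on the config; warn on repeats."""
        for layer, heads in heads_to_prune.items():
            already = set(heads) & self.config.pruned_heads.get(layer, set())
            if already:
                warnings.warn(
                    f"heads {sorted(already)} in layer {layer} are already pruned"
                )
            self.config.pruned_heads.setdefault(layer, set()).update(heads)
            self.active_heads[layer] -= set(heads)
```

Because the pruned set lives on the config rather than only on the in-memory model, a save/load round trip (serialize the config, rebuild the model from it) keeps the deleted heads deleted, which is the behavior the commit message describes.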
- 31 Aug, 2019 (20 commits)
  - LysandreJik authored
  - LysandreJik authored
  - Stefan Schweter authored
  - LysandreJik authored
  - LysandreJik authored
  - thomwolf authored
  - LysandreJik authored
  - LysandreJik authored
  - LysandreJik authored
  - LysandreJik authored
  - LysandreJik authored: Now raises a warning when a head to be deleted has already been deleted. An integration test verifying the full pipeline (config -> save model -> load model -> additional head pruning) has been added.
  - LysandreJik authored
  - LysandreJik authored
  - Lysandre authored
  - LysandreJik authored
  - LysandreJik authored
  - Julien Chaumond authored
  - Julien Chaumond authored
  - Julien Chaumond authored: See #1089 cc @thomwolf @lysandrejik Also @dhpollack
  - Julien Chaumond authored: Instead, we correctly store it on the config (regenerating the hosted config files) cc @lysandrejik
- 30 Aug, 2019 (12 commits)
  - LysandreJik authored
  - Thomas Wolf authored: Update apex fp16 implementation
  - Thomas Wolf authored: fix: hard coding for max number
  - Thomas Wolf authored: fix adding special tokens
  - Thomas Wolf authored: Shortcut to special tokens' ids - fix GPT2 & RoBERTa tokenizers - improved testing for GPT/GPT-2
  - Thomas Wolf authored: Added cleaned configuration properties for tokenizer with serialization - improve tokenization of XLM
  - Thomas Wolf authored: Torch.hub now based on AutoModels - Updating AutoModels with AutoModelWithLMHead, Sequence Classification and Question Answering
  - thomwolf authored
  - thomwolf authored
  - thomwolf authored
  - thomwolf authored
  - thomwolf authored
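The "Shortcut to special tokens' ids" commit above can be illustrated with a minimal sketch: instead of requiring callers to look the tokens up in the vocabulary by hand each time, the tokenizer exposes each configured special token's id as a property. The class below and its token defaults are illustrative, not the library's exact API.

```python
class ToyTokenizer:
    """Toy tokenizer exposing special tokens' ids as properties."""

    def __init__(self, vocab, bos_token="<s>", eos_token="</s>", pad_token="<pad>"):
        self.vocab = vocab          # token string -> integer id
        self.bos_token = bos_token
        self.eos_token = eos_token
        self.pad_token = pad_token

    def convert_tokens_to_ids(self, token):
        return self.vocab[token]

    # Shortcut properties: one vocabulary lookup, defined once.
    @property
    def bos_token_id(self):
        return self.convert_tokens_to_ids(self.bos_token)

    @property
    def eos_token_id(self):
        return self.convert_tokens_to_ids(self.eos_token)

    @property
    def pad_token_id(self):
        return self.convert_tokens_to_ids(self.pad_token)
```

Routing the properties through `convert_tokens_to_ids` keeps them correct even if the special token strings are reassigned after construction.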