30 Aug, 2019 (40 commits)

- Thomas Wolf authored: Update apex fp16 implementation
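
For readers tracking this change: mixed-precision training in the example scripts of this era went through NVIDIA's apex amp API rather than torch.cuda.amp. A minimal sketch of that pattern, assuming apex is installed and a GPU is available (the model and optimizer below are stand-ins, not the library's own code):

```python
import torch
from apex import amp

# Stand-in model and optimizer; a real script would build a transformer here.
model = torch.nn.Linear(768, 2).cuda()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

# opt_level="O1" patches torch ops to run in mixed precision where safe.
model, optimizer = amp.initialize(model, optimizer, opt_level="O1")

loss = model(torch.randn(8, 768).cuda()).sum()

# Scale the loss to avoid fp16 gradient underflow, then backprop.
with amp.scale_loss(loss, optimizer) as scaled_loss:
    scaled_loss.backward()
optimizer.step()
```
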
- Thomas Wolf authored: fix: hard coding for max number
- Thomas Wolf authored: fix adding special tokens
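
For context, the path being fixed here is the add_special_tokens API. A minimal sketch using current transformers names (the GPT-2 checkpoint is just an example):

```python
from transformers import GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")

# GPT-2 has no pad token by default; register one as a special token
# so it is never split by the BPE merges.
num_added = tokenizer.add_special_tokens({"pad_token": "[PAD]"})
print(num_added)            # 1
print(tokenizer.pad_token)  # [PAD]

# If a model is attached, its embedding matrix must be resized to match:
# model.resize_token_embeddings(len(tokenizer))
```
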
- Thomas Wolf authored: Shortcut to special tokens' ids; fix GPT2 & RoBERTa tokenizers; improved testing for GPT/GPT-2
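
The "shortcut to special tokens' ids" mentioned above is the family of *_token_id convenience properties. A quick sketch of how they read in current transformers:

```python
from transformers import GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")

# Each special-token string has a matching *_id shortcut, replacing
# manual convert_tokens_to_ids(tokenizer.bos_token) calls.
print(tokenizer.bos_token, tokenizer.bos_token_id)  # <|endoftext|> 50256
print(tokenizer.eos_token, tokenizer.eos_token_id)  # <|endoftext|> 50256
print(tokenizer.unk_token_id)                       # also 50256 for GPT-2
```
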
- Thomas Wolf authored: Added cleaned configuration properties for tokenizer with serialization; improved tokenization of XLM
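
The serialization part of this commit is the tokenizer save/load round-trip. A hedged sketch with an XLM checkpoint (the checkpoint name and output directory are illustrative):

```python
from transformers import XLMTokenizer

tokenizer = XLMTokenizer.from_pretrained("xlm-mlm-en-2048")

# save_pretrained writes the vocab files plus the tokenizer's
# configuration properties (special tokens and similar settings).
tokenizer.save_pretrained("./xlm-tokenizer")

# from_pretrained on that directory restores the same configuration.
reloaded = XLMTokenizer.from_pretrained("./xlm-tokenizer")
assert reloaded.unk_token == tokenizer.unk_token
```
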
- Thomas Wolf authored: Torch.hub now based on AutoModels; updated AutoModels with AutoModelWithLMHead, sequence classification and question answering
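
A sketch of what loading through torch.hub looks like after this change. The entry-point names ("tokenizer", "modelWithLMHead", and so on) are taken from the repository's hubconf as documented at the time, so treat them as assumptions and verify against the current hubconf:

```python
import torch

# torch.hub forwards these entry points to AutoTokenizer / AutoModelWithLMHead,
# so any checkpoint name the Auto classes understand works here.
tokenizer = torch.hub.load("huggingface/pytorch-transformers", "tokenizer", "gpt2")
model = torch.hub.load("huggingface/pytorch-transformers", "modelWithLMHead", "gpt2")

input_ids = torch.tensor([tokenizer.encode("Hello, world")])
outputs = model(input_ids)  # outputs[0] holds the language-modeling logits
```
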
- thomwolf authored
- thomwolf authored
- thomwolf authored
- thomwolf authored
- thomwolf authored
- LysandreJik authored
- LysandreJik authored
- LysandreJik authored
- thomwolf authored
- LysandreJik authored
- LysandreJik authored
- LysandreJik authored: Added multiple AutoModel classes: AutoModelWithLMHead, AutoModelForQuestionAnswering and AutoModelForSequenceClassification
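
Usage of the three new Auto classes, sketched with a BERT checkpoint (illustrative; each Auto* class reads the checkpoint's config and dispatches to the matching architecture-specific class):

```python
from transformers import (
    AutoModelForQuestionAnswering,
    AutoModelForSequenceClassification,
    AutoModelWithLMHead,
    AutoTokenizer,
)

checkpoint = "bert-base-uncased"  # any supported checkpoint works

tokenizer = AutoTokenizer.from_pretrained(checkpoint)
lm_model = AutoModelWithLMHead.from_pretrained(checkpoint)
qa_model = AutoModelForQuestionAnswering.from_pretrained(checkpoint)
cls_model = AutoModelForSequenceClassification.from_pretrained(checkpoint)
```

Later releases deprecated AutoModelWithLMHead in favor of the more specific AutoModelForCausalLM, AutoModelForMaskedLM and AutoModelForSeq2SeqLM.
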
- VictorSanh authored
- VictorSanh authored
- thomwolf authored
- Thomas Wolf authored
- thomwolf authored
- thomwolf authored
- Thomas Wolf authored: regarding pull request #1026
- Thomas Wolf authored
- thomwolf authored
- thomwolf authored
- LysandreJik authored
- LysandreJik authored
- LysandreJik authored
- Rabeeh KARIMI authored
- Rabeeh KARIMI authored
- thomwolf authored
- Thomas Wolf authored: change layernorm code to PyTorch's native layer norm
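
For reference, this swaps a hand-written layer-norm module for torch.nn.LayerNorm, which computes the same normalization. A minimal sketch (the hidden size and eps follow BERT's usual values, for illustration):

```python
import torch
from torch import nn

hidden_size = 768  # BERT-base hidden size

# Native layer norm; BERT configurations typically use eps=1e-12.
layer_norm = nn.LayerNorm(hidden_size, eps=1e-12)

x = torch.randn(2, 5, hidden_size)
y = layer_norm(x)  # normalizes over the last dimension, then applies scale/shift
print(y.shape)     # torch.Size([2, 5, 768])
```
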
- Thomas Wolf authored: loads the tokenizer for each checkpoint, to solve the reproducibility…
- Thomas Wolf authored
- thomwolf authored
- thomwolf authored