- 17 Feb, 2020 3 commits
-
-
Stefan Schweter authored
-
Thomas Wolf authored
update .gitignore to ignore .swp files created when using vim
-
Patrick von Platen authored
-
- 16 Feb, 2020 1 commit
-
-
Manuel Romero authored
I trained the model for more epochs, which improved the results. This commit updates the model's results and adds a GIF showing the model used with **transformers/pipelines**
-
- 14 Feb, 2020 14 commits
-
-
Julien Chaumond authored
-
Timo Moeller authored
* Update model performance for the correct German CoNLL-03 dataset
* Adjust text
* Adjust line spacing
-
Julien Chaumond authored
Co-Authored-By: Ilias Chalkidis <ihalk@di.uoa.gr>
-
Julien Chaumond authored
cc @yvespeirsman Co-Authored-By: Yves Peirsman <yvespeirsman@users.noreply.github.com>
-
Yves Peirsman authored
* Created model card for nlptown/bert-base-multilingual-sentiment
* Delete model card
* Created model card for bert-base-multilingual-uncased-sentiment as README
-
Julien Chaumond authored
-
Manuel Romero authored
-
Ilias Chalkidis authored
-
Manuel Romero authored
-
Julien Chaumond authored
-
Manuel Romero authored
-
Julien Chaumond authored
-
Ilias Chalkidis authored
Added a "Pre-training details" section
-
Ilias Chalkidis authored
-
- 13 Feb, 2020 5 commits
-
-
Felix MIKAELIAN authored
* add model_card
* Add tag
cc @fmikaelian Co-authored-by: Julien Chaumond <chaumond@gmail.com>
-
Severin Simmler authored
* feat: create model card
* chore: add description
* feat: stats plot
* Delete prosa-jahre.svg
* feat: years plot (again)
* chore: add more details
* fix: typos
* feat: kfold plot
* feat: kfold plot
* Rename model_cards/severinsimmler/literary-german-bert.md to model_cards/severinsimmler/literary-german-bert/README.md
* Support for linked images + add tags
cc @severinsimmler Co-authored-by: Julien Chaumond <chaumond@gmail.com>
-
Joe Davison authored
* Preserve spaces in GPT-2 tokenizers: preserves spaces after special tokens in GPT-2 and inherited (RoBERTa) tokenizers, enabling correct BPE encoding. Automatically inserts a space in front of the first token in the encode function when adding special tokens.
* Add tokenization preprocessing method
* Add framework argument to pipeline factory. Also fixes a pipeline test issue: each test input is now treated as a distinct sequence. (See the sketch below.)
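A rough sketch of the two user-facing pieces this touches, using only the standard `transformers` tokenizer and pipeline APIs; the model, text, and expected behavior comments are illustrative assumptions, not taken from the commit:

```python
# Illustrative only: round-trip text through the GPT-2 tokenizer and pass an explicit
# framework when building a pipeline. Inputs here are made-up examples.
from transformers import GPT2Tokenizer, pipeline

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
text = "Hello world, this is a test."
ids = tokenizer.encode(text, add_special_tokens=True)
print(tokenizer.decode(ids))  # with the fix, spacing should survive the round trip

# The pipeline factory accepts an explicit framework ("pt" for PyTorch, "tf" for TensorFlow).
nlp = pipeline("sentiment-analysis", framework="pt")
print(nlp("Byte-pair encoding keeps the spaces where they belong."))
```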
-
Sam Shleifer authored
-
Sam Shleifer authored
* activations.py contains a mapping from string to activation function
* resolves some `gelu` vs `gelu_new` ambiguity
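A minimal sketch of the idea behind such a registry, with assumed names (`ACT2FN`, `get_activation`) rather than a copy of the new file:

```python
# Sketch of a string-to-activation registry; names and contents are assumptions,
# not a quotation of activations.py.
import math
import torch
import torch.nn.functional as F

def gelu_new(x):
    # GPT-2-style tanh approximation of GELU, distinct from the erf-based `gelu`.
    return 0.5 * x * (1.0 + torch.tanh(math.sqrt(2.0 / math.pi) * (x + 0.044715 * torch.pow(x, 3.0))))

ACT2FN = {"relu": F.relu, "gelu": F.gelu, "gelu_new": gelu_new}

def get_activation(name):
    """Resolve an activation function from its string name."""
    if name not in ACT2FN:
        raise KeyError(f"Unknown activation {name!r}; available: {sorted(ACT2FN)}")
    return ACT2FN[name]

print(get_activation("gelu_new")(torch.tensor([1.0])))
```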
-
- 12 Feb, 2020 6 commits
-
-
Lysandre authored
-
Lysandre authored
-
Julien Chaumond authored
-
Julien Chaumond authored
-
Julien Chaumond authored
-
Manuel Romero authored
-
- 11 Feb, 2020 8 commits
-
-
Julien Chaumond authored
cc @tholor @loretoparisi @simonefrancia
-
sshleifer authored
-
sshleifer authored
-
sshleifer authored
-
Oleksiy Syvokon authored
PyTorch < 1.3 requires multiplication operands to be of the same type. This was violated when the default attention mask was used (i.e., attention_mask=None in the arguments) with BERT in decoder mode. In particular, this broke Model2Model and caused the quickstart tutorial to fail.
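A minimal sketch of the failure mode and the kind of cast that avoids it; shapes and names are simplified assumptions, not the actual patch:

```python
# On PyTorch < 1.3, multiplying tensors of different dtypes raises an error instead of
# promoting, so an integer causal mask must be cast to the float mask's dtype first.
import torch

attention_mask = torch.ones(1, 8)                              # default mask when attention_mask=None
causal_mask = torch.tril(torch.ones(8, 8, dtype=torch.uint8))  # decoder-style causal mask (integer dtype)

# combined = causal_mask * attention_mask        # fails on PyTorch < 1.3: uint8 * float
combined = causal_mask.to(dtype=attention_mask.dtype) * attention_mask  # cast, then multiply
print(combined.dtype, combined.shape)
```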
-
jiyeon authored
-
Stefan Schweter authored
* [model_cards] New German Europeana BERT models from dbmdz
* [model_cards] Update German Europeana BERT models from dbmdz
-
Funtowicz Morgan authored
Fix CircleCI cuInit error on TensorFlow >= 2.1.0.
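Not necessarily the change made here, but one common way to keep TensorFlow >= 2.1 from probing CUDA on a CPU-only CI machine is to hide the GPUs explicitly:

```python
# Assumption: the CI runner has no usable GPU, so tell TensorFlow to ignore GPU devices
# entirely instead of letting it attempt cuInit and fail.
import tensorflow as tf

tf.config.set_visible_devices([], "GPU")      # treat the machine as CPU-only
print(tf.config.get_visible_devices("GPU"))   # -> []
```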
-
- 10 Feb, 2020 3 commits
-
-
Julien Chaumond authored
-
Julien Chaumond authored
This will enable filtering on language (amongst other tags) on the website. cc @loretoparisi, @stefan-it, @HenrykBorzymowski, @marma
-
ahotrod authored
* Create README.md
* Update README.md
* Update README.md
* Update README.md
* [model_cards] Use code fences for consistency
Co-authored-by: Julien Chaumond <chaumond@gmail.com>
-