- 01 Jun, 2020 12 commits
-
-
Victor SANH authored
-
Victor SANH authored
-
Victor SANH authored
-
Victor SANH authored
-
Victor SANH authored
-
Victor SANH authored
-
Victor SANH authored
-
Mehrdad Farahani authored
-
Mehrdad Farahani authored
Readme for HooshvareLab/bert-base-parsbert-armananer-uncased
-
Mehrdad Farahani authored
Readme for HooshvareLab/bert-base-parsbert-peymaner-uncased
-
Mehrdad Farahani authored
Added mBERT results on the NER datasets.
-
Manuel Romero authored
-
- 29 May, 2020 8 commits
-
-
Patrick von Platen authored
* fix bug * add more tests
-
Patrick von Platen authored
-
Wei Fang authored
* Fix longformer attention mask casting when using apex * remove extra type casting
-
Patrick von Platen authored
* better api * improve automatic setting of global attention mask * fix longformer bug * fix global attention mask in test * fix global attn mask flatten * fix slow tests * update docstring * update docs and make more robust * improve attention mask
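A minimal sketch of the convention this commit automates (an assumption based on Longformer's usual setup, where globally-attending positions such as the first `[CLS]` token are marked with 1 and all other positions with 0; the function name here is illustrative, not the library API):

```python
# Sketch: build a global attention mask for a batch of sequences, marking
# only the first position ([CLS]) for global attention, as Longformer
# classification setups commonly do. All names here are illustrative.
def make_global_attention_mask(batch_size, seq_len):
    mask = [[0] * seq_len for _ in range(batch_size)]
    for row in mask:
        row[0] = 1  # global attention on the first token
    return mask

print(make_global_attention_mask(2, 4))  # [[1, 0, 0, 0], [1, 0, 0, 0]]
```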
-
Simon Böhm authored
Change the example code to use encode_plus, since token_type_ids weren't being set correctly.
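For a sentence pair, the token_type_ids mark which segment each token belongs to; the old example left them unset. A toy illustration of the expected layout (a hypothetical helper, not the transformers `encode_plus` API itself):

```python
# Illustrative sketch: for a [CLS] A [SEP] B [SEP] pair encoding, segment A
# (plus [CLS] and the first [SEP]) gets type id 0, segment B (plus the final
# [SEP]) gets type id 1. encode_plus returns this alongside the input ids.
def encode_pair(tokens_a, tokens_b):
    input_ids = ["[CLS]"] + tokens_a + ["[SEP]"] + tokens_b + ["[SEP]"]
    token_type_ids = [0] * (len(tokens_a) + 2) + [1] * (len(tokens_b) + 1)
    return {"input_ids": input_ids, "token_type_ids": token_type_ids}

enc = encode_pair(["how", "are", "you"], ["fine"])
print(enc["token_type_ids"])  # [0, 0, 0, 0, 0, 1, 1]
```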
-
Zhangyx authored
-
Patrick von Platen authored
* add multiple choice for longformer * add models to docs * adapt docstring * add test to longformer * add longformer for mc in init and modeling auto * fix tests
-
Iz Beltagy authored
* fix longformer model names in examples * a better name for the notebook
-
- 28 May, 2020 7 commits
-
-
flozi00 authored
* gpt2 typo * Add files via upload
-
Iz Beltagy authored
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
-
Suraj Patil authored
-
Anthony MOI authored
-
Suraj Patil authored
-
Lavanya Shukla authored
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
-
Iz Beltagy authored
* adding freeze roberta models * model cards * lint
-
- 27 May, 2020 10 commits
-
-
Patrick von Platen authored
* improve memory benchmarking * correct typo * fix current memory * check torch memory allocated * better pytorch function * add total cached gpu memory * add total gpu required * improve torch gpu usage * update memory usage * finalize memory tracing * save intermediate benchmark class * fix conflict * improve benchmark * improve benchmark * finalize * make style * improve benchmarking * correct typo * make train function more flexible * fix csv save * better repr of bytes * better print * fix __repr__ bug * finish plot script * rename plot file * delete csv and small improvements * fix in plot * fix in plot * correct usage of timeit * remove redundant line * remove redundant line * fix bug * add hf parser tests * add versioning and platform info * make style * add gpu information * ensure backward compatibility * finish adding all tests * Update src/transformers/benchmark/benchmark_args.py Co-authored-by: Lysandre Debut <lysandre@huggingface.co> * Update src/transformers/benchmark/benchmark_args_utils.py Co-authored-by: Lysandre Debut <lysandre@huggingface.co> * delete csv files * fix isort ordering * add out of memory handling * add better train memory handling Co-authored-by: Lysandre Debut <lysandre@huggingface.co>
-
Suraj Patil authored
* LongformerForSequenceClassification * better naming x=>hidden_states, fix typo in doc * Update src/transformers/modeling_longformer.py * Update src/transformers/modeling_longformer.py Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
-
Suraj Patil authored
-
Lysandre Debut authored
* per_device instead of per_gpu/error thrown when argument unknown * [docs] Restore examples.md symlink * Correct absolute links so that symlink to the doc works correctly * Update src/transformers/hf_argparser.py Co-authored-by: Julien Chaumond <chaumond@gmail.com> * Warning + reorder * Docs * Style * not for squad Co-authored-by: Julien Chaumond <chaumond@gmail.com>
-
Mehrdad Farahani authored
HooshvareLab/bert-base-parsbert-uncased
-
Patrick von Platen authored
-
Darek Kłeczek authored
Co-authored-by: kldarek <darekmail>
-
Darek Kłeczek authored
Model card for cased model
-
Sam Shleifer authored
-
Hao Tan authored
The option `--do_lower_case` is currently required by the uncased models (i.e., bert-base-uncased, bert-large-uncased). Results: BERT-base without --do_lower_case: exact 73.83, F1 82.22; with --do_lower_case: exact 81.02, F1 88.34.
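The score gap above comes from vocabulary mismatch: uncased checkpoints only contain lowercase tokens, so unlowered input falls back to unknown tokens. A toy illustration (not the transformers tokenizer, just the principle):

```python
# Illustrative sketch: an uncased model's vocabulary holds only lowercase
# entries, so without lowering, capitalized words map to [UNK] and the
# model loses their identity entirely.
vocab = {"hello", "world", "[UNK]"}

def tokenize(text, do_lower_case):
    if do_lower_case:
        text = text.lower()
    return [tok if tok in vocab else "[UNK]" for tok in text.split()]

print(tokenize("Hello World", do_lower_case=False))  # ['[UNK]', '[UNK]']
print(tokenize("Hello World", do_lower_case=True))   # ['hello', 'world']
```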
-
- 26 May, 2020 3 commits
-
-
Bayartsogt Yadamsuren authored
Uploading a Mongolian masked language model (ALBERT) to the platform. https://en.wikipedia.org/wiki/Mongolia
-
Wissam Antoun authored
* updated aubmindlab/bert-base-arabert model card * updated aubmindlab/bert-base-arabertv01 model card
-
Oleksandr Bushkovskyi authored
Add language metadata, training and evaluation corpora details. Add example output. Fix inconsistent use of quotes.
-