- 18 Jun, 2019 22 commits
  - thomwolf authored
- 11 Jun, 2019 2 commits
  - Meet Pragnesh Shah authored
  - Oliver Guhr authored
- 10 Jun, 2019 1 commit
  - jeonsworld authored
    Apply the Whole Word Masking technique, following [create_pretraining_data.py](https://github.com/google-research/bert/blob/master/create_pretraining_data.py).
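The Whole Word Masking idea can be sketched as follows: when one WordPiece of a word is selected for masking, every piece of that word is masked together. This is a minimal illustration under assumed conventions (the `create_wwm_predictions` name and its parameters are hypothetical), not the repository's implementation:

```python
import random

def create_wwm_predictions(tokens, mask_prob=0.15, rng=None):
    """Whole Word Masking sketch: WordPiece continuation tokens start with
    '##', so token indices are first grouped into whole words, and a word
    is then masked either completely or not at all."""
    rng = rng or random.Random(0)

    # Group token indices into whole words, e.g. ["un", "##able"] -> [[0, 1]].
    cand_indexes = []
    for i, token in enumerate(tokens):
        if token in ("[CLS]", "[SEP]"):
            continue
        if cand_indexes and token.startswith("##"):
            cand_indexes[-1].append(i)   # continuation piece of the current word
        else:
            cand_indexes.append([i])     # first piece of a new word

    rng.shuffle(cand_indexes)
    num_to_mask = max(1, int(round(len(tokens) * mask_prob)))

    output, masked = list(tokens), []
    for word in cand_indexes:
        if len(masked) + len(word) > num_to_mask:
            continue                     # masking this word would exceed the budget
        for idx in word:                 # mask every piece of the word together
            output[idx] = "[MASK]"
            masked.append(idx)
    return output, sorted(masked)
```

The referenced Google script additionally keeps or randomly replaces a fraction of selected tokens instead of always writing `[MASK]`; that detail is omitted here for brevity.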
- 27 May, 2019 1 commit
  - Ahmad Barqawi authored
    Fix an issue with bert-base-multilingual and add support for the uncased multilingual model.
- 22 May, 2019 1 commit
  - tguens authored
    Indentation change so that the output "nbest_predictions.json" is not empty.
- 13 May, 2019 1 commit
  - samuelbroscheit authored
- 11 May, 2019 2 commits
  - samuel.broscheit authored
  - samuel.broscheit authored
    Fix https://github.com/huggingface/pytorch-pretrained-BERT/issues/556: the number of optimization steps was computed from the number of examples, which differs from the actual size of the dataloader when an example is chunked into multiple instances. This pull request computes num_optimization_steps directly from len(data_loader).
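The reasoning behind the fix can be sketched with a small helper (the function name is hypothetical; the actual change simply reads `len(data_loader)` in the training script): once long examples are chunked into multiple instances, only the dataloader's batch count, not the raw example count, determines how many optimizer updates a run performs.

```python
import math

def num_optimization_steps(num_batches, num_epochs, gradient_accumulation_steps=1):
    """Total optimizer updates for a training run.

    num_batches should be len(data_loader): it already reflects any
    chunking of examples into multiple training instances, whereas the
    raw example count does not (the source of the bug fixed here).
    """
    updates_per_epoch = math.ceil(num_batches / gradient_accumulation_steps)
    return updates_per_epoch * num_epochs
```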
- 09 May, 2019 2 commits
  - burcturkoglu authored
  - burcturkoglu authored
- 02 May, 2019 2 commits
- 30 Apr, 2019 1 commit
  - Aneesh Pappu authored
    Small fix to remove shifting of LM labels during preprocessing of ROC Stories, as this shifting happens internally in the model.
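The internal shift this commit message refers to can be sketched as follows; this is a minimal stand-in for a language-model loss computation, not the model's actual code. Because the model drops the last prediction and the first label itself, preprocessing must pass unshifted labels, or the targets end up shifted twice.

```python
def align_lm_targets(logits, labels):
    """Next-token alignment done inside the model: the prediction at
    position t scores the token at position t + 1, so the last row of
    logits and the first label are dropped before computing the loss.
    Callers therefore pass labels that are NOT pre-shifted."""
    shift_logits = logits[:-1]   # predictions for positions 0 .. n-2
    shift_labels = labels[1:]    # gold next tokens for those positions
    return shift_logits, shift_labels
```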
- 29 Apr, 2019 1 commit
  - Mathieu Prouveur authored
- 24 Apr, 2019 1 commit
  - Mathieu Prouveur authored
- 23 Apr, 2019 1 commit
  - thomwolf authored
- 22 Apr, 2019 1 commit
  - Matthew Carrigan authored
- 21 Apr, 2019 1 commit
  - Sangwhan Moon authored