- 19 Jun, 2019 1 commit
-
-
thomwolf authored
-
- 18 Jun, 2019 28 commits
-
-
thomwolf authored (28 commits)
-
- 11 Jun, 2019 2 commits
-
-
Meet Pragnesh Shah authored
-
Oliver Guhr authored
-
- 10 Jun, 2019 1 commit
-
-
jeonsworld authored
Apply the Whole Word Masking technique, following [create_pretraining_data.py](https://github.com/google-research/bert/blob/master/create_pretraining_data.py)
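A minimal sketch of the grouping logic (the function name and sample tokens are illustrative, not the actual code in create_pretraining_data.py): WordPiece continuation pieces start with `##`, so the masking decision is made per word and applied to all of that word's pieces.

```python
import random

def whole_word_mask(tokens, mask_prob=0.15, mask_token="[MASK]"):
    """Sketch of Whole Word Masking over WordPiece tokens: a word is
    either fully masked or left intact, never partially masked."""
    # Group token indices into whole-word spans; '##' marks a
    # continuation piece belonging to the preceding word.
    word_spans = []
    for i, tok in enumerate(tokens):
        if tok in ("[CLS]", "[SEP]"):
            continue
        if tok.startswith("##") and word_spans:
            word_spans[-1].append(i)
        else:
            word_spans.append([i])

    num_to_mask = max(1, int(round(len(tokens) * mask_prob)))
    random.shuffle(word_spans)

    output = list(tokens)
    masked_positions = []
    for span in word_spans:
        if len(masked_positions) >= num_to_mask:
            break
        for i in span:  # mask every piece of the chosen word
            masked_positions.append(i)
            output[i] = mask_token
    return output, sorted(masked_positions)

tokens = ["[CLS]", "philam", "##mon", "plays", "the", "harp", "[SEP]"]
masked, positions = whole_word_mask(tokens)
```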
-
- 27 May, 2019 1 commit
-
-
Ahmad Barqawi authored
Fix an issue with bert-base-multilingual and add support for the uncased multilingual model
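A minimal usage sketch, assuming the `bert-base-multilingual-uncased` shortcut name this commit adds to pytorch-pretrained-BERT:

```python
from pytorch_pretrained_bert import BertTokenizer, BertModel

# Uncased checkpoints use a lowercased vocabulary, so the tokenizer
# should lowercase its input to match.
tokenizer = BertTokenizer.from_pretrained(
    "bert-base-multilingual-uncased", do_lower_case=True)
model = BertModel.from_pretrained("bert-base-multilingual-uncased")
```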
-
- 22 May, 2019 1 commit
-
-
tguens authored
Indentation change so that the output "nbest_predictions.json" is not empty.
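The actual diff isn't reproduced here; as a hypothetical illustration of the failure mode (all names are illustrative), a dump statement at the wrong indentation level runs before the results dict is populated, leaving the file empty:

```python
import json

predictions = {"example-1": [{"text": "answer", "probability": 0.9}]}

all_nbest = {}
for example_id, nbest in predictions.items():
    all_nbest[example_id] = nbest
    # Mis-indented inside the loop (or under a condition that never
    # fires), a dump like the one below can run too early or not at
    # all, so "nbest_predictions.json" ends up empty.

# Correct indentation: write once, after the dict is fully populated.
with open("nbest_predictions.json", "w") as f:
    json.dump(all_nbest, f, indent=2)
```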
-
- 13 May, 2019 1 commit
-
-
samuelbroscheit authored
-
- 11 May, 2019 2 commits
-
-
samuel.broscheit authored
-
Fix https://github.com/huggingface/pytorch-pretrained-BERT/issues/556
samuel.broscheit authored
The issue occurred because the number of optimization steps was computed from the example count, which differs from the actual size of the dataloader when an example is chunked into multiple instances. This pull request computes num_optimization_steps directly from len(data_loader).
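A minimal sketch of the change, with illustrative names and toy data (the real scripts read these values from command-line arguments):

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

# Toy stand-ins; the training scripts build these from the real data.
train_dataset = TensorDataset(torch.zeros(100, 8))
train_dataloader = DataLoader(train_dataset, batch_size=16)
gradient_accumulation_steps = 2
num_train_epochs = 3

# Before: steps were derived from the raw example count, which is wrong
# whenever one example is chunked into several training instances.
# After: derive them from the dataloader, which reflects the actual
# number of batches per epoch.
num_train_optimization_steps = (
    len(train_dataloader) // gradient_accumulation_steps * num_train_epochs
)
print(num_train_optimization_steps)  # ceil(100/16) = 7 batches -> 7 // 2 * 3 = 9
```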
-
- 09 May, 2019 2 commits
-
-
burcturkoglu authored
-
burcturkoglu authored
-
- 02 May, 2019 1 commit
-
-
MottoX authored
-