- 15 Apr, 2019 4 commits
  - Thomas Wolf authored
  - Thomas Wolf authored
    Extend the BertForSequenceClassification docs to mention the special CLS token.
  - Thomas Wolf authored
    Fix TSV read error on Windows
  - Thomas Wolf authored
    Added a helpful error for users with single-document corpora - fixes #452
- 12 Apr, 2019 2 commits
  - Martin Boyanov authored
  - Matthew Carrigan authored
    Added an error for users whose corpus is just one giant document.
- 11 Apr, 2019 8 commits
  - Thomas Wolf authored
    fix run_gpt2.py
  - Thomas Wolf authored
    Update README.md
  - Jie Yang authored
  - thomwolf authored
  - thomwolf authored
  - thomwolf authored
  - thomwolf authored
- 09 Apr, 2019 2 commits
  - Yaroslav Bulatov authored
    Fix for:
    ```
    04/09/2019 21:39:38 - INFO - __main__ - device: cuda n_gpu: 1, distributed training: False, 16-bits training: False
    Traceback (most recent call last):
      File "/home/ubuntu/pytorch-pretrained-BERT/examples/lm_finetuning/simple_lm_finetuning.py", line 642, in <module>
        main()
      File "/home/ubuntu/pytorch-pretrained-BERT/examples/lm_finetuning/simple_lm_finetuning.py", line 502, in main
        raise ValueError("Training is currently the only implemented execution option. Please set `do_train`.")
    ValueError: Training is currently the only implemented execution option. Please set `do_train`.
    ```
  - Benjamin Mann authored
- 07 Apr, 2019 4 commits
  - Dhanajit Brahma authored
  - Dhanajit Brahma authored
  - dhanajitb authored
    Updated `while not args.unconditional:` to `if not args.unconditional:`.
- 03 Apr, 2019 6 commits
  - Thomas Wolf authored
    Fix Language Modeling Loss
  - Thomas Wolf authored
    fix sample_doc
  - Thomas Wolf authored
    Fix cosine schedule
  - thomwolf authored
  - thomwolf authored
  - thomwolf authored
- 02 Apr, 2019 4 commits
  - Thomas Wolf authored
    Fix links in README
  - Weixin Wang authored
  - Thomas Wolf authored
    Fix typo in example code
  - Thomas Wolf authored
    Fixes to the TensorFlow conversion tool
- 01 Apr, 2019 1 commit
  - Mike Arpaia authored
- 30 Mar, 2019 2 commits
  - Weixin Wang authored
    Modify 'unambigiously' to 'unambiguously'
  - jeonsworld authored
    If `randint` returns the value of `rand_end`, `np.searchsorted` returns a `sampled_doc_index` that matches `current_idx`.
    Example: `cumsum_max = 30`, `doc_cumsum = [5, 7, 11, 19, 30]`, `doc_lengths = [5, 2, 4, 8, 11]`.
    With `current_idx = 1`: `rand_start = 7`, `rand_end = 35`, and `sentence_index = randint(7, 35) % cumsum_max`.
    If `randint` returns 35, `sentence_index` becomes 5; `np.searchsorted` then returns 1, which equals `current_idx`.
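The off-by-one described in that commit can be reproduced with a short sketch. Variable names are taken from the commit message; the use of an inclusive `randint` and `side='right'` in `searchsorted` are assumptions about the original sampling code, not quotes from the repository.

```python
import numpy as np

# Worked example from the commit message: sampling a "random" document
# can land back on the current one when randint includes rand_end.
doc_lengths = [5, 2, 4, 8, 11]
doc_cumsum = np.cumsum(doc_lengths)      # [ 5  7 11 19 30]
cumsum_max = int(doc_cumsum[-1])         # 30
current_idx = 1

# The sampler draws from [rand_start, rand_end]; Python's random.randint
# is inclusive of both endpoints, so rand_end itself can be drawn.
rand_start = int(doc_cumsum[current_idx])                      # 7
rand_end = rand_start + cumsum_max - doc_lengths[current_idx]  # 35

# Worst case: randint returns rand_end (35), and the modulo wraps around.
sentence_index = rand_end % cumsum_max                         # 35 % 30 == 5
sampled_doc_index = int(np.searchsorted(doc_cumsum, sentence_index, side="right"))

# searchsorted lands on the current document instead of a different one.
print(sampled_doc_index == current_idx)  # True
```

Excluding `rand_end` from the draw (or re-drawing when the indices collide) avoids sampling the current document.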
- 29 Mar, 2019 3 commits
  - Dhanajit Brahma authored
    Merge remote-tracking branch 'upstream/master'
  - Thomas Wolf authored
    fix lm_finetuning's link
  - Sepehr Sameni authored
- 28 Mar, 2019 2 commits
  - dhanajitb authored
    Unconditional generation works now, but with a fixed seed the sample is the same every time; n_samples > 1 still gives different samples. The start token for unconditional generation is '<|endoftext|>'.
  - Thomas Wolf authored
    Added remaining GLUE tasks to 'run_classifier.py'
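The fixed-seed behavior noted above is the usual consequence of deterministic PRNG seeding; a minimal illustration using Python's stdlib `random` (not the repository's sampling code, and the vocabulary size is only illustrative):

```python
import random

# With a fixed seed, the generator produces the same draws every run ...
random.seed(42)
first = [random.randrange(50257) for _ in range(5)]  # 50257 ~ GPT-2 vocab size

# ... so re-seeding reproduces the sample exactly.
random.seed(42)
second = [random.randrange(50257) for _ in range(5)]
print(first == second)  # True

# Drawing again without re-seeding advances the generator state, which is
# why requesting more than one sample still yields different samples.
third = [random.randrange(50257) for _ in range(5)]
print(first == third)
```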
- 27 Mar, 2019 2 commits
  - Catalin Voss authored
  - Thomas Wolf authored
    Remove padding_idx from position_embeddings and token_type_embeddings