- 13 May, 2019 1 commit
samuelbroscheit authored
- 11 May, 2019 2 commits
samuel.broscheit authored
samuel.broscheit authored
See https://github.com/huggingface/pytorch-pretrained-BERT/issues/556. The issue arose because the number of optimization steps was computed from the example count, which differs from the actual length of the dataloader when an example is chunked into multiple instances. The solution in this pull request is to compute num_optimization_steps directly from len(data_loader).
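A minimal sketch of the idea, assuming a plain PyTorch DataLoader and hypothetical values for epochs and gradient accumulation (the variable names follow the message above, not necessarily the repository's exact code):
```python
import torch
from torch.utils.data import DataLoader, TensorDataset

# Hypothetical pre-chunked training instances: 10 instances of length 4.
train_dataset = TensorDataset(torch.zeros(10, 4, dtype=torch.long))
train_dataloader = DataLoader(train_dataset, batch_size=4)

# Hypothetical hyperparameters.
num_train_epochs = 3
gradient_accumulation_steps = 2

# Derive the optimizer step count from the dataloader length (number of batches),
# not from the raw example count, so examples chunked into multiple instances
# are counted correctly.
num_optimization_steps = (
    len(train_dataloader) // gradient_accumulation_steps * num_train_epochs
)
print(num_optimization_steps)
```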
- 09 May, 2019 2 commits
burcturkoglu authored
burcturkoglu authored
- 02 May, 2019 2 commits
- 30 Apr, 2019 1 commit
Aneesh Pappu authored
Small fix to remove shifting of LM labels during preprocessing of ROC Stories, as this shifting happens internally in the model.
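For context, a minimal sketch of the shift that a GPT-style LM head typically performs internally (the shapes and the ignore_index of -1 are assumptions here, not the repository's exact code); shifting again during preprocessing would make the targets off by one:
```python
import torch
import torch.nn.functional as F

def lm_loss(lm_logits, lm_labels):
    # lm_logits: (batch, seq_len, vocab); lm_labels: (batch, seq_len), -1 = ignore.
    shift_logits = lm_logits[..., :-1, :].contiguous()  # position t predicts token t+1
    shift_labels = lm_labels[..., 1:].contiguous()       # labels are shifted inside the model
    return F.cross_entropy(
        shift_logits.view(-1, shift_logits.size(-1)),
        shift_labels.view(-1),
        ignore_index=-1,
    )

logits = torch.randn(2, 5, 100)
labels = torch.randint(0, 100, (2, 5))
print(lm_loss(logits, labels))
```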
- 29 Apr, 2019 1 commit
Mathieu Prouveur authored
- 24 Apr, 2019 1 commit
Mathieu Prouveur authored
- 23 Apr, 2019 1 commit
thomwolf authored
- 22 Apr, 2019 1 commit
Matthew Carrigan authored
- 21 Apr, 2019 1 commit
Sangwhan Moon authored
- 16 Apr, 2019 2 commits
Ben Mann authored
Abhi Sharma authored
- 15 Apr, 2019 7 commits
- 12 Apr, 2019 1 commit
Matthew Carrigan authored
error for users whose corpus is just one giant document.
- 11 Apr, 2019 2 commits
- 09 Apr, 2019 2 commits
Yaroslav Bulatov authored
Fix for:
```
04/09/2019 21:39:38 - INFO - __main__ - device: cuda n_gpu: 1, distributed training: False, 16-bits training: False
Traceback (most recent call last):
  File "/home/ubuntu/pytorch-pretrained-BERT/examples/lm_finetuning/simple_lm_finetuning.py", line 642, in <module>
    main()
  File "/home/ubuntu/pytorch-pretrained-BERT/examples/lm_finetuning/simple_lm_finetuning.py", line 502, in main
    raise ValueError("Training is currently the only implemented execution option. Please set `do_train`.")
ValueError: Training is currently the only implemented execution option. Please set `do_train`.
```
Benjamin Mann authored
- 07 Apr, 2019 2 commits
Dhanajit Brahma authored
dhanajitb authored
These lines have been updated:
```
while not args.unconditional:
if not args.unconditional:
```
- 03 Apr, 2019 1 commit
thomwolf authored
- 01 Apr, 2019 1 commit
Mike Arpaia authored
- 30 Mar, 2019 2 commits
Weixin Wang authored
Modify 'unambigiously' to 'unambiguously'
jeonsworld authored
If randint returns the value rand_end, searchsorted returns a sampled_doc_index that matches current_idx. Example: cumsum_max = 30, doc_cumsum = [5, 7, 11, 19, 30], doc_lengths = [5, 2, 4, 8, 11]. With current_idx = 1, rand_start = 7 and rand_end = 35, so sentence_index = randint(7, 35) % cumsum_max. If randint returns 35, sentence_index becomes 5, and np.searchsorted then returns 1, which is equal to current_idx.
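A runnable sketch of that off-by-one, assuming the searchsorted call uses side='right' as the example above implies; the randrange variant at the end shows one way to make the upper bound exclusive, not necessarily the exact change in this commit:
```python
import random
import numpy as np

doc_lengths = [5, 2, 4, 8, 11]
doc_cumsum = np.cumsum(doc_lengths)                  # [ 5  7 11 19 30]
cumsum_max = int(doc_cumsum[-1])                     # 30

current_idx = 1
rand_start = int(doc_cumsum[current_idx])            # 7
rand_end = rand_start + cumsum_max - doc_lengths[current_idx]  # 35

# random.randint is inclusive of both endpoints, so it can return rand_end (35);
# 35 % 30 == 5, and searchsorted then maps 5 back to document 1, i.e. the very
# document the sampler was trying to avoid.
sentence_index = 35 % cumsum_max
sampled_doc_index = int(np.searchsorted(doc_cumsum, sentence_index, side="right"))
assert sampled_doc_index == current_idx

# With an exclusive upper bound the wrap-around cannot happen.
sentence_index = random.randrange(rand_start, rand_end) % cumsum_max
sampled_doc_index = int(np.searchsorted(doc_cumsum, sentence_index, side="right"))
assert sampled_doc_index != current_idx
```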
- 28 Mar, 2019 1 commit
dhanajitb authored
The unconditional generation works now but if the seed is fixed, the sample is the same every time. n_samples > 1 will give different samples though. I am giving the start token as '<|endoftext|>' for the unconditional generation.
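A tiny illustration of the seed behaviour described above, using generic torch sampling rather than the example script itself: with a fixed seed the same sample comes back every run, while drawing several samples within one run still gives different ones.
```python
import torch

def sample_once(seed):
    torch.manual_seed(seed)
    # Draw 5 token ids from a uniform distribution over a 50257-token vocabulary.
    return torch.multinomial(torch.ones(50257), num_samples=5, replacement=True)

print(torch.equal(sample_once(42), sample_once(42)))  # True: fixed seed, identical sample

torch.manual_seed(42)
probs = torch.ones(50257)
first = torch.multinomial(probs, num_samples=5, replacement=True)
second = torch.multinomial(probs, num_samples=5, replacement=True)
print(torch.equal(first, second))  # Almost certainly False: successive draws differ
```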
- 27 Mar, 2019 2 commits
- 25 Mar, 2019 2 commits
Matthew Carrigan authored
Matthew Carrigan authored
- 21 Mar, 2019 2 commits
Matthew Carrigan authored
order.
Matthew Carrigan authored
data on disc as a memmap rather than in memory
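A minimal sketch of the memmap idea with numpy (the file name, dtype, and shapes are hypothetical, not the script's actual layout): the token ids live in a file on disk and are paged in on access instead of being held in RAM.
```python
import numpy as np

num_instances, seq_len = 100_000, 128

# Write phase: back the array with a file on disk.
input_ids = np.memmap("input_ids.memmap", dtype=np.int32, mode="w+",
                      shape=(num_instances, seq_len))
input_ids[0] = np.arange(seq_len)   # assignments go straight to the backing file
input_ids.flush()

# Read phase (e.g. inside a Dataset): reopen read-only with the same dtype/shape.
input_ids = np.memmap("input_ids.memmap", dtype=np.int32, mode="r",
                      shape=(num_instances, seq_len))
print(input_ids[0][:5])
```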