- 21 Jul, 2020 (19 commits)
  - Jannes authored
  - Jannes authored
  - Jannes authored
  - Jannes authored
  - Jannes authored
  - Jannes authored
  - Jannes authored
  - Jannes authored
  - Jannes authored
  - tuner007 authored
  - Manuel Romero authored
    Add nlp dataset used
  - Manuel Romero authored
    Add dataset used, as it is now part of the nlp package
  - Ali Hamdi Ali Fadel authored
    * Add ComVE model cards
    * Apply suggestions from code review
    Co-authored-by: Julien Chaumond <chaumond@gmail.com>
  - Aditya Soni authored
  - BatJedi authored
    * Created model card for my extreme summarization model
    * Update model_cards/yuvraj/xSumm/README.md
    Co-authored-by: Julien Chaumond <chaumond@gmail.com>
  - BatJedi authored
    * Created model card for my summarization model
    * Update model_cards/yuvraj/summarizer-cnndm/README.md
    Co-authored-by: Julien Chaumond <chaumond@gmail.com>
  - Manuel Romero authored
    Maybe the result of this query answers the question you asked a few days ago, @julien-c ;-)
  - Manuel Romero authored
  - Manuel Romero authored
- 20 Jul, 2020 (17 commits)
  - Sylvain Gugger authored
  - Qingqing Cao authored
    The DataParallel training was fixed in https://github.com/huggingface/transformers/pull/5733; this commit also fixes the evaluation, which is more convenient when the user enables both `do_train` and `do_eval`.
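    A minimal sketch of the symmetry this fix restores, assuming a plain PyTorch setup (the model and data below are placeholders, not the Trainer's actual internals): the model is wrapped for multi-GPU use once, and the same wrapped model serves both the training and the evaluation loop.

    ```python
    import torch
    import torch.nn as nn

    model = nn.Linear(4, 2)
    # Wrap for multi-GPU exactly once; the same wrapped model should then
    # drive both the training loop and the evaluation loop.
    if torch.cuda.device_count() > 1:
        model = nn.DataParallel(model)

    model.eval()
    with torch.no_grad():
        preds = model(torch.randn(8, 4))  # batch is split across GPUs when wrapped
    ```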
  - Sylvain Gugger authored
  - Sam Shleifer authored
    Huge MT speedup!
  - Sylvain Gugger authored
    * Improve doc of use_cache
    * Update src/transformers/configuration_xlnet.py
    Co-authored-by: Teven <teven.lescao@gmail.com>
  - Clement authored
  - Clement authored
  - Clement authored
  - Clement authored
  - Clement authored
  - Sam Shleifer authored
  - Stas Bekman authored
    * DataParallel fixes:
      1. Switched to a more precise check (sketched below):
         - if self.args.n_gpu > 1:
         + if isinstance(model, nn.DataParallel):
      2. Fixed tests: require the same fixup under DataParallel as in the training module.
    * Another fix
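    A short sketch of the check change quoted in the diff above; `self.args.n_gpu` is Trainer-style code from the commit message, and `unwrap` is a hypothetical helper used only for illustration:

    ```python
    import torch.nn as nn

    def unwrap(model: nn.Module) -> nn.Module:
        # Old, indirect check (from the diff above):
        #     if self.args.n_gpu > 1: ...
        # It fires even when the model was never actually wrapped. The more
        # precise check asks whether the model *is* a DataParallel wrapper:
        if isinstance(model, nn.DataParallel):
            return model.module  # attributes like .config live on the inner module
        return model
    ```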
  - Pradhy729 authored
    * Don't pass a sampler for an iterable dataset (see the sketch below)
    * Added check for test and eval dataloaders
    * Formatting
    * Cleaner if nesting
    * Added test for trainer and iterable dataset
    * Formatting for test
    * Fixed import when torch is available only
    * Added require-torch decorator to helper class
    * Moved dataset class inside unittest
    * Removed nested if and changed model in test
    * Checking torch availability for IterableDataset
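    A minimal sketch of the constraint behind the first bullet, in plain PyTorch (the class and variable names here are illustrative): `DataLoader` rejects a sampler when the dataset is iterable-style, so the sampler has to be built conditionally.

    ```python
    import torch
    from torch.utils.data import DataLoader, IterableDataset, RandomSampler

    class StreamDataset(IterableDataset):
        """Iterable-style dataset: no __len__, no random access."""
        def __iter__(self):
            return iter(torch.randn(8, 4))

    dataset = StreamDataset()
    # Passing a sampler together with an IterableDataset raises a ValueError,
    # hence the conditional check added in this change:
    sampler = None if isinstance(dataset, IterableDataset) else RandomSampler(dataset)
    loader = DataLoader(dataset, batch_size=2, sampler=sampler)
    for batch in loader:
        print(batch.shape)  # torch.Size([2, 4])
    ```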
  - Julien Chaumond authored
    cc @lhoestq @thomwolf; also cc'ing model author @nreimers.
    => Model pages now properly link to the dataset pages (and, in the future, eval results, etc.)
  - Manuel Romero authored
  - Alan deLevie authored
  - Alan deLevie authored
- 18 Jul, 2020 (4 commits)
  - Sam Shleifer authored
    Co-authored-by: Pradhy729 <49659913+Pradhy729@users.noreply.github.com>
  - Teven authored
    Slightly breaking change to the `use_cache` functionality in XLNet: if `use_cache` is True and `mem_len` is 0 or None (which is the case in the base model config), the model behaves like GPT-2 and returns mems to be used as past in generation. At training time, `use_cache` is overridden and always True.
  - Teven authored
    Slightly breaking change to the `use_cache` functionality in XLNet: if `use_cache` is True and `mem_len` is 0 or None (which is the case in the base model config), the model behaves like GPT-2 and returns mems to be used as past in generation. At training time, `use_cache` is overridden and always True.
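    A hedged sketch of the generation-time behavior described above, using the transformers API of that era; treat the exact call signature and tuple layout as assumptions (in later releases this flag on XLNet was renamed to `use_mems`):

    ```python
    import torch
    from transformers import XLNetConfig, XLNetLMHeadModel

    # Tiny random config just to keep the sketch cheap to run; mem_len=0 is
    # the base-config case discussed above.
    config = XLNetConfig(vocab_size=100, d_model=32, n_layer=2, n_head=2,
                         d_inner=64, mem_len=0)
    model = XLNetLMHeadModel(config).eval()

    input_ids = torch.tensor([[17, 42, 7]])
    with torch.no_grad():
        outputs = model(input_ids, use_cache=True)
    # With use_cache=True, `mems` is returned even though mem_len is 0, and
    # can be fed back as the past on the next generation step, GPT-2 style.
    logits, mems = outputs[0], outputs[1]
    ```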