"vscode:/vscode.git/clone" did not exist on "51fa7191b10a13f655d7ab19c7ea10e10078d668"
- 26 Jul, 2020 1 commit
-
-
Stas Bekman authored
* don't complain about missing W&B when WANDB_DISABLED=true
* reformat to elif
* typo
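A rough illustration of the W&B change above (a hedged sketch only; the helper name and the set of accepted values are assumptions, not the library's actual code):

```python
import os

def wandb_wanted_and_available() -> bool:
    """Sketch: only complain about a missing wandb install when W&B is actually wanted."""
    if os.environ.get("WANDB_DISABLED", "").lower() in ("1", "true", "yes"):
        return False  # user explicitly disabled W&B, so stay silent
    try:
        import wandb  # noqa: F401
        return True
    except ImportError:
        print("wandb is not installed; run `pip install wandb` to enable logging.")
        return False
```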
-
- 24 Jul, 2020 2 commits
-
-
Funtowicz Morgan authored
* Ensure OpenAI GPT position_ids is correctly initialized and registered as buffer at init. This will make it compatible with TorchScript export. Signed-off-by: Morgan Funtowicz <morgan@huggingface.co>
* Fix missing slice operator on the tensor data accessor. Signed-off-by: Morgan Funtowicz <morgan@huggingface.co>
* Style. Signed-off-by: Morgan Funtowicz <morgan@huggingface.co>
* Fixed BertEmbedding position_ids buffer created at forward. Signed-off-by: Morgan Funtowicz <funtowiczmo@gmail.com>
* Fixed MobileBertEmbedding position_ids buffer created at forward. Signed-off-by: Morgan Funtowicz <funtowiczmo@gmail.com>
* Fixed XLM position_ids buffer created at forward. Signed-off-by: Morgan Funtowicz <funtowiczmo@gmail.com>
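The pattern these commits apply, as a minimal hedged sketch (a toy module, not the actual transformers embedding code): register position_ids as a buffer in __init__ and only slice it in forward, so TorchScript tracing sees a fixed buffer instead of a tensor created on every call.

```python
import torch
import torch.nn as nn

class ToyEmbeddings(nn.Module):
    def __init__(self, max_position_embeddings: int = 512, hidden_size: int = 64):
        super().__init__()
        self.position_embeddings = nn.Embedding(max_position_embeddings, hidden_size)
        # Created once and registered as a buffer: TorchScript-friendly,
        # saved/loaded with the module, and moved by .to(device).
        self.register_buffer(
            "position_ids", torch.arange(max_position_embeddings).unsqueeze(0)
        )

    def forward(self, inputs_embeds: torch.Tensor) -> torch.Tensor:
        seq_length = inputs_embeds.size(1)
        position_ids = self.position_ids[:, :seq_length]  # slice, don't re-create
        return inputs_embeds + self.position_embeddings(position_ids)
```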
-
Sylvain Gugger authored
* Document TF modeling utils
* Document all model utils
-
- 23 Jul, 2020 4 commits
-
-
Sylvain Gugger authored
* Avoid unnecessary warnings when loading pretrained model
* Fix test
* Add other keys to ignore
* keys_to_ignore_at_load -> authorized_missing_keys
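A small hedged sketch of the mechanism this commit renames (the attribute name is taken from the commit message and was renamed again in later transformers releases; the pattern listed is purely illustrative):

```python
from transformers import BertPreTrainedModel

class BertVariantSketch(BertPreTrainedModel):
    # Regex patterns for checkpoint keys that may legitimately be missing;
    # from_pretrained() skips the "missing keys" warning for matches.
    authorized_missing_keys = [r"position_ids"]
```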
-
Sam Shleifer authored
-
Sylvain Gugger authored
-
Sylvain Gugger authored
* Clean up Trainer and expose customization points
* Formatting
* eval_step -> prediction_step
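One way to use the customization point named above, as a hedged sketch (the prediction_step signature shown follows later transformers releases and may differ slightly from the version introduced by this commit):

```python
from transformers import Trainer

class MyTrainer(Trainer):
    def prediction_step(self, model, inputs, prediction_loss_only, ignore_keys=None):
        # Hook for custom pre/post-processing around evaluation;
        # defer to the default behavior otherwise.
        return super().prediction_step(
            model, inputs, prediction_loss_only, ignore_keys=ignore_keys
        )
```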
-
- 22 Jul, 2020 4 commits
-
-
Sylvain Gugger authored
-
Stas Bekman authored
* minor doc fixes: correct superclass name and small grammar fixes
* correct the instance name in the error message. It appears to be `BaseTokenizer`, from looking at `from tokenizers.implementations import BaseTokenizer as BaseTokenizerFast`, and not `Tokenizer` as it currently says.
-
Sam Shleifer authored
Co-authored-by: Julien Chaumond <chaumond@gmail.com>
-
Funtowicz Morgan authored
* Attempt to fix the way squad_convert_examples_to_features pads the elements for the QA pipeline. Signed-off-by: Morgan Funtowicz <funtowiczmo@gmail.com>
* Quality. Signed-off-by: Morgan Funtowicz <funtowiczmo@gmail.com>
* Make the code easier to read and avoid multiple tests testing the same thing. Signed-off-by: Morgan Funtowicz <funtowiczmo@gmail.com>
* Add missing enum value on truncation_strategy. Signed-off-by: Morgan Funtowicz <funtowiczmo@gmail.com>
* Rethinking for the easiest fix: expose the padding strategy on squad_convert_examples_to_features. Signed-off-by: Morgan Funtowicz <funtowiczmo@gmail.com>
* Remove unused imports. Signed-off-by: Morgan Funtowicz <funtowiczmo@gmail.com>
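A hedged usage sketch of the exposed knob (the padding_strategy keyword and its "max_length" value are assumed from the commit message, the data directory is a placeholder, and the exact signature may differ across transformers versions):

```python
from transformers import AutoTokenizer
from transformers.data.processors.squad import (
    SquadV2Processor,
    squad_convert_examples_to_features,
)

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
examples = SquadV2Processor().get_dev_examples("path/to/squad")  # placeholder path

features = squad_convert_examples_to_features(
    examples=examples,
    tokenizer=tokenizer,
    max_seq_length=384,
    doc_stride=128,
    max_query_length=64,
    is_training=False,
    padding_strategy="max_length",  # the strategy this commit exposes (assumed name)
)
```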
-
- 21 Jul, 2020 2 commits
-
-
Sylvain Gugger authored
* Update doc to new model outputs
* Fix outputs in quicktour
-
Sam Shleifer authored
-
- 20 Jul, 2020 7 commits
-
-
Sylvain Gugger authored
-
Sylvain Gugger authored
* Improve doc of use_cache
* Update src/transformers/configuration_xlnet.py

Co-authored-by: Teven <teven.lescao@gmail.com>
Co-authored-by: Teven <teven.lescao@gmail.com>
-
Sam Shleifer authored
-
Stas Bekman authored
* DataParallel fixes:
  1. switched to a more precise check:
     - if self.args.n_gpu > 1:
     + if isinstance(model, nn.DataParallel):
  2. fix tests - require the same fixup under DataParallel as the training module
* another fix
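The first fix above boils down to testing the wrapper type rather than the GPU count; a minimal hedged sketch (an illustrative helper, not the Trainer's actual code):

```python
import torch.nn as nn

def unwrap_if_data_parallel(model: nn.Module) -> nn.Module:
    # More precise than checking args.n_gpu > 1: a model can run across several
    # GPUs without being wrapped in nn.DataParallel.
    if isinstance(model, nn.DataParallel):
        return model.module
    return model
```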
-
Pradhy729 authored
* Don't pass sampler for iterable dataset
* Added check for test and eval dataloaders.
* Formatting
* Don't pass sampler for iterable dataset
* Added check for test and eval dataloaders.
* Formatting
* Cleaner if nesting.
* Added test for trainer and iterable dataset
* Formatting for test
* Fixed import when torch is available only.
* Added require torch decorator to helper class
* Moved dataset class inside unittest
* Removed nested if and changed model in test
* Checking torch availability for IterableDataset
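The core idea, as a hedged sketch (an illustrative function, not the Trainer's actual dataloader builder): DataLoader rejects an explicit sampler for an IterableDataset, so only map-style datasets get one.

```python
from torch.utils.data import DataLoader, IterableDataset, RandomSampler

def build_dataloader(dataset, batch_size: int = 8) -> DataLoader:
    if isinstance(dataset, IterableDataset):
        # Passing a sampler here would raise a ValueError.
        return DataLoader(dataset, batch_size=batch_size)
    return DataLoader(dataset, batch_size=batch_size, sampler=RandomSampler(dataset))
```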
-
Alan deLevie authored
-
Alan deLevie authored
-
- 18 Jul, 2020 4 commits
-
-
Sam Shleifer authored
Co-authored-by: Pradhy729 <49659913+Pradhy729@users.noreply.github.com>
-
Teven authored
Slightly breaking change: changes functionality for `use_cache` in XLNet. If use_cache is True and mem_len is 0 or None (which is the case in the base model config), the model behaves like GPT-2 and returns mems to be used as past in generation. At training time `use_cache` is overridden and always True.
-
Teven authored
Slightly breaking change: changes functionality for `use_cache` in XLNet. If use_cache is True and mem_len is 0 or None (which is the case in the base model config), the model behaves like GPT-2 and returns mems to be used as past in generation. At training time `use_cache` is overridden and always True.
-
- 17 Jul, 2020 4 commits
-
-
Teven authored
Slightly breaking change: changes functionality for `use_cache` in XLNet. If use_cache is True and mem_len is 0 or None (which is the case in the base model config), the model behaves like GPT-2 and returns mems to be used as past in generation. At training time `use_cache` is overridden and always True.
-
Patrick von Platen authored
* fix merge rebase
* add intermediate reformer code
* save intermediate caching results
* save intermediate
* save intermediate results
* save intermediate
* upload next step
* fix generate tests
* make tests work
* add named tuple output
* Apply suggestions from code review
* fix use_cache for False case
* fix tensor to gpu
* fix tensor to gpu
* refactor
* refactor and make style
-
Sam Shleifer authored
-
- 16 Jul, 2020 3 commits
-
-
Patrick von Platen authored
-
Patrick von Platen authored
-
Patrick von Platen authored
-
- 15 Jul, 2020 1 commit
-
-
Patrick von Platen authored
* fix auto model causal lm
* leverage given functionality
* apply unused kwargs to all auto models
-
- 14 Jul, 2020 4 commits
-
-
Sam Shleifer authored
-
Gunnlaugur Thor Briem authored
-
as-stevens authored
[Reformer classification head] Implement the reformer model classification head for text classification (#5198)
* Reformer model head classification implementation for text classification
* Reformat the reformer model classification code
* PR review comments, and test case implementation for reformer classification head changes
* CI/CD reformer classification head test import error fix
* CI/CD test case implementation: added ReformerForSequenceClassification to all_model_classes
* Code formatting fixed
* Normal test cases added for reformer classification head
* Fix test cases implementation for the reformer classification head
* removed token_type_id parameter from the reformer classification head
* fixed the test case for reformer classification head
* merge conflict with master fixed
* merge conflict: changed reformer classification to accept the choice_label parameter added in latest code
* refactored the reformer classification head test code
* reformer classification head, common transform test cases fixed
* final set of the review comments: rearranging the reformer classes and docstring added to the classification forward method
* fixed the compilation error and test case fix for reformer classification head
* Apply suggestions from code review: remove unnecessary dup

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
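A tiny hedged sketch showing the new head can be instantiated (library-default config values plus a label count; an illustrative construction, not a recommended training setup):

```python
from transformers import ReformerConfig, ReformerForSequenceClassification

config = ReformerConfig(num_labels=2)               # default architecture, 2 labels
model = ReformerForSequenceClassification(config)   # the class this PR adds
print(model.config.num_labels)                      # -> 2
```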
-
Gaurav Mishra authored
Minor doc fix.
-
- 13 Jul, 2020 3 commits
-
-
Stas Bekman authored
* implement FlaubertForTokenClassification as a subclass of XLMForTokenClassification
* fix mapping order
* add the doc
* add common tests
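The subclassing pattern described above, as a rough hedged sketch (not the library's exact code): Flaubert shares the XLM architecture, so the token-classification head mostly needs to point at the Flaubert config.

```python
from transformers import FlaubertConfig, XLMForTokenClassification

class FlaubertForTokenClassificationSketch(XLMForTokenClassification):
    # Reuse the XLM implementation wholesale; only the config class changes.
    config_class = FlaubertConfig
```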
-
Patrick von Platen authored
* fix longformer global attention output
* fix multi gpu problem
* replace -10000 with 0
* better comment
* make attention output equal local and global
* Update src/transformers/modeling_longformer.py
-
Sylvain Gugger authored
* Fix Trainer in DataParallel setting
* Fix typo

Co-authored-by: Sam Shleifer <sshleifer@gmail.com>
-
- 12 Jul, 2020 1 commit
-
-
Kevin Canwen Xu authored
* Add model type check for pipelines
* Add model type check for pipelines
* rename func
* Fix the init parameters
* Fix format
* rollback unnecessary refactor
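The shape of such a check, as a hedged sketch (an illustrative helper, not the pipeline's actual implementation):

```python
def check_model_type(model, supported_classes):
    # Reject models whose class is not among those the requested task supports.
    if not isinstance(model, tuple(supported_classes)):
        raise ValueError(
            f"{model.__class__.__name__} is not supported by this pipeline; "
            f"expected one of: {', '.join(cls.__name__ for cls in supported_classes)}."
        )
```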
-