"docs/source/vscode:/vscode.git/clone" did not exist on "1367142afd363e2799e3299b9bbf14fcb5e848c0"
- 17 Feb, 2021 2 commits
-
Daniel Stancl authored
* Fix head_mask and decoder_head_mask in TFT5 models * Enable test_headmasking both for TFT5 tester and TFT5EncoderOnly tester Co-authored-by: patrickvonplaten <patrick.v.platen@gmail.com>
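A minimal sketch of what this fix enables, assuming the masks are `(num_layers, num_heads)` tensors where 1.0 keeps a head and 0.0 masks it (the checkpoint name is illustrative):
```python
import tensorflow as tf
from transformers import T5Tokenizer, TFT5ForConditionalGeneration

# Illustrative checkpoint; any T5 checkpoint with TF weights works the same way.
tokenizer = T5Tokenizer.from_pretrained("t5-small")
model = TFT5ForConditionalGeneration.from_pretrained("t5-small")

inputs = tokenizer("translate English to German: Hello", return_tensors="tf")

# One row per layer, one column per head; zero out an entry to mask that head.
head_mask = tf.ones((model.config.num_layers, model.config.num_heads))
decoder_head_mask = tf.ones((model.config.num_layers, model.config.num_heads))

outputs = model(
    input_ids=inputs["input_ids"],
    decoder_input_ids=inputs["input_ids"],  # dummy decoder inputs for the sketch
    head_mask=head_mask,
    decoder_head_mask=decoder_head_mask,
)
```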
-
Lysandre Debut authored
-
- 16 Feb, 2021 5 commits
-
Stas Bekman authored
* [trainer] fix ignored columns logger This PR fixes a confusing log entry that says: ``` The following columns in the evaluation set don't have a corresponding argument in `T5ForConditionalGeneration.forward` and have been ignored: . ``` when everything is in order. * Update src/transformers/trainer.py Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
-
Joe Davison authored
-
Sylvain Gugger authored
-
Zhang Cheng authored
-
Julien Plu authored
-
- 15 Feb, 2021 12 commits
-
Suraj Patil authored
* move old s2s scripts to legacy * add the tests back * proper rename * restore * Apply suggestions from code review Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com> Co-authored-by: Stas Bekman <stas@stason.org> Co-authored-by: Stas Bekman <stas00@users.noreply.github.com>
-
Stas Bekman authored
-
Lysandre Debut authored
Co-authored-by: Quentin Lhoest <lhoest.q@gmail.com>
-
Stas Bekman authored
* fix run_seq2seq.py; porting DeepSpeed tests to it * unrefactor * defensive programming * defensive programming 2 * port the rest of the trainer tests * style * a cleaner scripts dir finder * cleanup
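The "cleaner scripts dir finder" presumably resolves the examples scripts directory relative to the test file rather than the working directory; a hypothetical sketch of that pattern (the directory layout is an assumption):
```python
from pathlib import Path

# Hypothetical: resolve the seq2seq examples dir from the test file's location,
# so the tests work no matter where pytest is invoked from.
tests_dir = Path(__file__).resolve().parent
scripts_dir = tests_dir.parent / "examples" / "seq2seq"
```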
-
Julien Plu authored
-
Suraj Patil authored
* add tokenizer for mBART-50 * update tokenizers * make src_lang and tgt_lang optional * update tokenizer test * add setter * update docs * update conversion script * update docs * update conversion script * update tokenizer * update test * update docs * doc * address Sylvain's suggestions * fix test * fix formatting * nits
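A usage sketch for the new tokenizer based on this description; `src_lang`/`tgt_lang` are optional at load time, and the setter mentioned above should allow changing them afterwards (the language codes follow mBART-50's `xx_XX` convention):
```python
from transformers import MBart50Tokenizer

tokenizer = MBart50Tokenizer.from_pretrained(
    "facebook/mbart-large-50", src_lang="en_XX", tgt_lang="ro_RO"
)
inputs = tokenizer("UN Chief Says There Is No Plan to Stop War", return_tensors="pt")

# The setter makes the source language switchable after loading.
tokenizer.src_lang = "fr_XX"
```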
-
Julien Plu authored
* Fix template * Update Seq2Seq tests
-
Suraj Patil authored
-
Julien Plu authored
* Add check-ops script * Finish to implement check_tf_ops and start the test * Make the test mandatory only for BERT * Update tf_ops folder * Remove useless classes * Add the ONNX test for GPT2 and BART * Add an onnxruntime slow test + better opset flexibility * Fix test + apply style * fix tests * Switch min opset from 12 to 10 * Update src/transformers/file_utils.py Co-authored-by: Lysandre Debut <lysandre@huggingface.co> * Fix GPT2 * Remove extra shape_list usage * Fix GPT2 * Address Morgan's comments Co-authored-by: Lysandre Debut <lysandre@huggingface.co>
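A rough sketch of the idea behind `check_tf_ops`: collect the op types an exported graph actually uses and compare them against an allow-list for the target opset (the file name and allow-list contents here are placeholders, not the real `tf_ops` data):
```python
import onnx

# Placeholder path; in the real test the model is exported first.
model = onnx.load("model.onnx")
used_ops = {node.op_type for node in model.graph.node}

# Illustrative allow-list only; the actual supported set depends on the opset.
supported_ops = {"Add", "Gather", "MatMul", "Reshape", "Softmax"}
unsupported = sorted(used_ops - supported_ops)
if unsupported:
    raise ValueError(f"Ops not supported at this opset: {unsupported}")
```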
-
Lysandre Debut authored
-
Nicolas Patry authored
Fixes #10168
-
Sylvain Gugger authored
-
- 13 Feb, 2021 6 commits
-
Stas Bekman authored
* save fast tokenizer + add info logs * fix tests * remove the saving of fast tokenizer
-
Sylvain Gugger authored
-
Manuel Romero authored
-
Manuel Romero authored
-
Nicolas Patry authored
* Conversion from slow to fast for BPE spm vocabs contained an error. - There is only 1 test currently (tokenizers + slow) that used the modified path, and it's Reformer, which does not contain any ids modification, so the bug was silent until now. - The real issue is that the vocab variable was overwritten by SentencePieceExtractor, leading to slow-specific vocab oddities being completely ignored - The bug was reported here https://github.com/huggingface/transformers/issues/9518 - Ran the complete tokenization test suite with slow without error (`RUN_SLOW=1 pytest -sv tests/test_tokenization_*`) * Remove rebase error. * Adding the fixture.
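A contrived, self-contained illustration of the bug pattern described above (the helper names are hypothetical, not the converter's real code): a carefully built slow vocab is silently clobbered by a later re-extraction.
```python
def build_slow_vocab():
    # Stands in for the slow tokenizer's vocab, with model-specific id tweaks.
    return {"<pad>": 0, "<s>": 1, "hello": 5}

def extract_from_spm():
    # Stands in for SentencePieceExtractor's output, which lacks those tweaks.
    return {"hello": 2}

vocab = build_slow_vocab()
vocab = extract_from_spm()  # bug: overwrites the slow-specific vocab above
assert "<pad>" not in vocab  # the slow-only entries are silently gone
```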
-
Lysandre Debut authored
-
- 12 Feb, 2021 4 commits
-
Julien Chaumond authored
-
Julien Chaumond authored
* [hf_api] delete deprecated methods and tests cc @lhoestq * Update test_hf_api.py
-
Mohamed Al Salti authored
* Fix typo * apply suggestion Co-authored-by: Suraj Patil <surajp815@gmail.com>
-
Suraj Patil authored
* fix rouge metrics and task specific params * fix typo * round metrics * typo * remove task_specific_params
-
- 11 Feb, 2021 8 commits
-
Sylvain Gugger authored
* Refactor things out of main train * Store signature * Add SageMakerTrainer * Init + Copyright * Address review comments
-
Stas Bekman authored
* init devices/setup explicitly * docs + test * simplify * cleanup * cleanup * cleanup * correct the required dist setup * derive local_rank from env LOCAL_RANK
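The last bullet, deriving `local_rank` from the environment, is the standard pattern; a minimal sketch:
```python
import os

# Launchers such as torch.distributed.launch and deepspeed export LOCAL_RANK;
# -1 conventionally means "no distributed setup".
local_rank = int(os.environ.get("LOCAL_RANK", -1))
```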
-
Sylvain Gugger authored
-
Patrick von Platen authored
-
Patrick von Platen authored
-
Patrick von Platen authored
* save intermediate * finish batch the same as fairseq * add normalization * fix batched input * add better comment * Update src/transformers/models/wav2vec2/modeling_wav2vec2.py * add nice docstring * add tokenizer tests * make all slow tests pass * finish PR * correct import
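A sketch of the batched usage this commit fixes, assuming the tokenizer pads variable-length raw waveforms and applies the normalization mentioned above when configured to (the checkpoint name is the released one):
```python
import numpy as np
from transformers import Wav2Vec2Tokenizer

tokenizer = Wav2Vec2Tokenizer.from_pretrained("facebook/wav2vec2-base-960h")

# Two raw 16 kHz waveforms of different lengths; padding batches them together.
speech = [
    np.zeros(16000, dtype=np.float32),
    np.zeros(24000, dtype=np.float32),
]
inputs = tokenizer(speech, padding=True, return_tensors="pt")
```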
-
Tanmay Thakur authored
* Update: community.md, add new notebook * feat: updated grammar in notebook description * Update: Train summarizer for BlenderBotSmall
-
Qbiwan authored
* remove xnli_compute_metrics, add load_dataset, load_metric, set_seed, metric.compute, load_metric * fix * fix * fix * push * fix * everything works * fix init * fix * special treatment for sepconv1d * style * 🙏🏽 * add doc and cleanup * fix doc * fix doc again * fix doc again * Apply suggestions from code review * make style * Proposal that should work * Remove needless code * Fix test * Apply suggestions from code review * remove xnli_compute_metrics, add load_dataset, load_metric, set_seed, metric.compute, load_metric * amend README * removed data_args.task_name and replaced with task_name = "xnli"; use split function to load train and validation dataset separately; remove __post_init__; remove flag --task_name from README. * removed dict task_to_keys, use str "xnli" instead of variable task_name, change preprocess_function to use examples["premise"], examples["hypothesis"] directly, remove sentence1_key and sentence2_key, change compute_metrics function to cater only to accuracy metric, add condition for train_language is None when using datasets.load_dataset() * removed `torch.distributed.barrier()` and `import torch` as `from_pretrained` is able to do the work; amend README
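A condensed sketch of the data-loading flow the refactor describes, using `load_dataset`/`load_metric` and tokenizing `premise`/`hypothesis` directly (the model checkpoint is illustrative):
```python
from datasets import load_dataset, load_metric
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-multilingual-cased")

# Train and validation splits are loaded separately, as described above.
train_ds = load_dataset("xnli", "en", split="train")
eval_ds = load_dataset("xnli", "en", split="validation")

# Tokenize premise/hypothesis pairs directly, without a task_to_keys dict.
def preprocess(examples):
    return tokenizer(examples["premise"], examples["hypothesis"], truncation=True)

train_ds = train_ds.map(preprocess, batched=True)
metric = load_metric("accuracy")  # the script now reports accuracy only
```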
-
- 10 Feb, 2021 3 commits
-
Stas Bekman authored
* free up memory at the end of train * rework tests * consistent formatting * correction
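A minimal sketch of the "free up memory at the end of train" idea, assuming the trainer simply drops its references and flushes the CUDA cache:
```python
import gc
import torch

# Stand-in for the state a trainer might hold onto after training.
state = {"model": torch.nn.Linear(8, 8), "optimizer": None}

state.clear()                 # drop the references the trainer holds
gc.collect()                  # collect anything now unreachable
if torch.cuda.is_available():
    torch.cuda.empty_cache()  # return cached CUDA blocks to the driver
```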
-
Suraj Patil authored
* add forced logits processors * delete adjust_logits method * add forced_eos_token_id argument in config * add tests for forced logits processors * update gen utils tests * add forced option to tf generate * remove adjust_logits method from tf models * update adjust_logits for marian * delete _force_token_id_to_be_generated method * style * import warnings * pass max_length to _get_logits_processor * set forced_eos_token_id to None * set forced attributes in conf utils * typo * fix rag generate * add forced_eos_token_id in rag config * remove force_bos_token_to_be_generated from BartConfig * remove _force_token_ids_generation from FSMT * nit * fix negative constant * apply suggestions from code review
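A hedged usage sketch based on this description: `forced_eos_token_id` can be set in the config or passed to `generate()`, forcing EOS as the last token when `max_length` is reached (the checkpoint is illustrative):
```python
from transformers import BartForConditionalGeneration, BartTokenizer

tokenizer = BartTokenizer.from_pretrained("facebook/bart-large-cnn")
model = BartForConditionalGeneration.from_pretrained("facebook/bart-large-cnn")

inputs = tokenizer("PG&E scheduled the blackouts in response to ...", return_tensors="pt")

# Force the sequence to end with EOS once max_length is hit.
summary_ids = model.generate(
    inputs["input_ids"],
    max_length=20,
    forced_eos_token_id=tokenizer.eos_token_id,
)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```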
-
Julien Plu authored
* Fix test * Remove commented test * Fix name * Apply style * Fix check copies * Remove prints * Restore boolean * Fix reshape
-