  1. 03 Mar, 2021 1 commit
  2. 25 Feb, 2021 2 commits
    • Add support for ZeRO-2/3 and ZeRO-offload in fairscale (#10354) · 9d14be5c
      Sylvain Gugger authored
      
      
      * Add support for ZeRO-2/3 and ZeRO-offload in fairscale
      
      * Quality
      
      * Rework from review comments
      
      * Add doc
      
      * Apply suggestions from code review
      Co-authored-by: Stas Bekman <stas00@users.noreply.github.com>
      
      * Address review comments
      Co-authored-by: Stas Bekman <stas00@users.noreply.github.com>
    • [PretrainedFeatureExtractor] + Wav2Vec2FeatureExtractor, Wav2Vec2Processor, Wav2Vec2Tokenizer (#10324) · cb38ffcc
      Patrick von Platen authored
      
      * push to show
      
      * small improvement
      
      * small improvement
      
      * Update src/transformers/feature_extraction_utils.py
      
      * Update src/transformers/feature_extraction_utils.py
      
      * implement base
      
      * add common tests
      
      * make all tests pass for wav2vec2
      
      * make padding work & add more tests
      
      * finalize feature extractor utils
      
      * add call method to feature extraction
      
      * finalize feature processor
      
      * finish tokenizer
      
      * finish general processor design
      
      * finish tests
      
      * typo
      
      * remove bogus file
      
      * finish docstring
      
      * add docs
      
      * finish docs
      
      * small fix
      
      * correct docs
      
      * save intermediate
      
      * load changes
      
      * apply changes
      
      * apply changes to doc
      
      * change tests
      
      * apply Suraj's recommendation
      
      * final changes
      
      * Apply suggestions from code review
      
      * fix typo
      
      * fix import
      
      * correct docstring
  3. 22 Feb, 2021 1 commit
  4. 17 Feb, 2021 1 commit
  5. 11 Feb, 2021 1 commit
  6. 10 Feb, 2021 1 commit
  7. 09 Feb, 2021 1 commit
  8. 08 Feb, 2021 1 commit
  9. 02 Feb, 2021 1 commit
  10. 14 Jan, 2021 1 commit
  11. 13 Jan, 2021 1 commit
    • [trainer] deepspeed integration (#9211) · 2df34f4a
      Stas Bekman authored
      
      
      * deepspeed integration
      
      * style
      
      * add test
      
      * ds wants to do its own backward
      
      * fp16 assert
      
      * Update src/transformers/training_args.py
      Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
      
      * style
      
      * for clarity extract what args are being passed to deepspeed
      
      * introduce the concept of self.wrapped_model
      
      * s/self.wrapped_model/self.model_wrapped/
      
      * complete transition to self.wrapped_model / self.model
      
      * fix
      
      * doc
      
      * give ds its own init
      
      * add custom overrides, handle bs correctly
      
      * fix test
      
      * clean up model_init logic, fix small bug
      
      * complete fix
      
      * collapse --deepspeed_config into --deepspeed
      
      * style
      
      * start adding doc notes
      
      * style
      
      * implement hf2ds optimizer and scheduler configuration remapping
      
      * oops
      
      * call get_num_training_steps absolutely when needed
      
      * workaround broken auto-formatter
      
      * deepspeed_config arg is no longer needed - fixed in deepspeed master
      
      * use hf's fp16 args in config
      
      * clean
      
      * start on the docs
      
      * rebase cleanup
      
      * finish up --fp16
      
      * clarify the supported stages
      
      * big refactor thanks to discovering deepspeed.init_distributed
      
      * cleanup
      
      * revert fp16 part
      
      * add checkpoint-support
      
      * more init ds into integrations
      
      * extend docs
      
      * cleanup
      
      * unfix docs
      
      * clean up old code
      
      * imports
      
      * move docs
      
      * fix logic
      
      * make it clear which file it's referring to
      
      * document nodes/gpus
      
      * style
      
      * wrong format
      
      * style
      
      * deepspeed handles gradient clipping
      
      * easier to read
      
      * major doc rewrite
      
      * Apply suggestions from code review
      Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
      
      * docs
      
      * switch to AdamW optimizer
      
      * style
      
      * Apply suggestions from code review
      Co-authored-by: Lysandre Debut <lysandre@huggingface.co>
      
      * clarify doc
      Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
      Co-authored-by: Lysandre Debut <lysandre@huggingface.co>
  12. 05 Jan, 2021 1 commit
  13. 23 Dec, 2020 1 commit
    • Add caching mechanism to BERT, RoBERTa (#9183) · 88ef8893
      Suraj Patil authored
      * add past_key_values
      
      * add use_cache option
      
      * make mask before cutting ids
      
      * adjust position_ids according to past_key_values
      
      * flatten past_key_values
      
      * fix positional embeds
      
      * fix _reorder_cache
      
      * set use_cache to false when not decoder, fix attention mask init
      
      * add test for caching
      
      * add past_key_values for Roberta
      
      * fix position embeds
      
      * add caching test for roberta
      
      * add doc
      
      * make style
      
      * doc, fix attention mask, test
      
      * small fixes
      
      * address Patrick's comments
      
      * input_ids shouldn't start with pad token
      
      * use_cache only when decoder
      
      * make consistent with bert
      
      * make copies consistent
      
      * add use_cache to encoder
      
      * add past_key_values to tapas attention
      
      * apply suggestions from code review
      
      * make copies consistent
      
      * add attn mask in tests
      
      * remove copied from longformer
      
      * apply suggestions from code review
      
      * fix bart test
      
      * nit
      
      * simplify model outputs
      
      * fix doc
      
      * fix output ordering
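      Taken together, the caching commits above amount to: keep the attention key/values from previous steps, feed the model only the new tokens, and shift position_ids past the cached length. A minimal pure-Python sketch of that bookkeeping (function and argument names here are illustrative, not the actual BERT code):

      ```python
      def prepare_inputs_with_cache(input_ids, past_length=0):
          """Sketch of incremental decoding with past_key_values.

          `past_length` stands in for the sequence length already stored in the
          key/value cache. Only the tokens after the cut need to be fed, and
          position_ids must continue from where the cache left off so the
          positional embeddings line up.
          """
          new_ids = input_ids[past_length:]
          position_ids = list(range(past_length, past_length + len(new_ids)))
          return new_ids, position_ids
      ```

      On the first forward pass `past_length` is 0 and everything is fed; on each later pass only the freshly generated token goes in, with its position offset by the cache length.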
  14. 22 Dec, 2020 1 commit
  15. 16 Dec, 2020 2 commits
    • TableQuestionAnsweringPipeline (#9145) · 1c1a2ffb
      Lysandre Debut authored
      
      
      * AutoModelForTableQuestionAnswering
      
      * TableQuestionAnsweringPipeline
      
      * Apply suggestions from Patrick's code review
      Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
      
      * Sylvain and Patrick comments
      
      * Better PyTorch/TF error message
      
      * Add integration tests
      
      * Argument Handler naming
      Co-authored-by: patrickvonplaten <patrick.v.platen@gmail.com>
      
      * Fix docs to appease the documentation gods
      Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
    • [Flax] Align FlaxBertForMaskedLM with BertForMaskedLM, implement from_pretrained, init (#9054) · 640e6fe1
      Patrick von Platen authored
      
      
      * save intermediate
      
      * save intermediate
      
      * save intermediate
      
      * correct flax bert model file
      
      * new module / model naming
      
      * make style
      
      * almost finish BERT
      
      * finish roberta
      
      * make fix-copies
      
      * delete keys file
      
      * last refactor
      
      * fixes in run_mlm_flax.py
      
      * remove pooled from run_mlm_flax.py
      
      * fix gelu | gelu_new
      
      * remove Module from inits
      
      * splits
      
      * dirty print
      
      * preventing warmup_steps == 0
      
      * smaller splits
      
      * make fix-copies
      
      * dirty print
      
      * dirty print
      
      * initial_evaluation argument
      
      * declaration order fix
      
      * proper model initialization/loading
      
      * proper initialization
      
      * run_mlm_flax improvements: improper model inputs bugfix + automatic dataset splitting + tokenizers parallelism warning + avoiding warmup_steps=0 bug
      
      * removed tokenizers warning hack, fixed model re-initialization
      
      * reverted training_args.py changes
      
      * fix flax from pretrained
      
      * improve test in flax
      
      * apply Sylvain's tips
      
      * update init
      
      * make 0.3.0 compatible
      
      * revert tevens changes
      
      * revert tevens changes 2
      
      * finalize revert
      
      * fix bug
      
      * add docs
      
      * add pretrained to init
      
      * Update src/transformers/modeling_flax_utils.py
      
      * fix copies
      
      * final improvements
      Co-authored-by: TevenLeScao <teven.lescao@gmail.com>
  16. 10 Dec, 2020 1 commit
  17. 07 Dec, 2020 1 commit
  18. 23 Nov, 2020 1 commit
    • Add early stopping callback to pytorch trainer (#8581) · 8ffc01a7
      Colin Brochtrup authored
      * Add early stopping patience and minimum threshold metric must improve to prevent early stopping to pytorch trainer
      
      * Add early stopping test
      
      * Set patience counter to 0 if best metric not defined yet
      
      * Make early stopping a callback. Add callback event for updating the best metric for early stopping callback to trigger on.
      
      * Run make style
      
      * make function name sensible
      
      * Improve new argument docstring wording and hope that flaky CI test passes.
      
      * Use on_evaluation callback instead of custom. Remove some debug printing
      
      * Move early stopping arguments and state into early stopping callback
      
      * Run make style
      
      * Remove old code
      
      * Fix docs formatting. make style went rogue on me.
      
      * Remove copied attributes and fix variable
      
      * Add assertions on training arguments instead of mutating them. Move comment out of public docs.
      
      * Make separate test for early stopping callback. Add test of invalid arguments.
      
      * Run make style... I remembered before CI this time!
      
      * appease flake8
      
      * Add EarlyStoppingCallback to callback docs
      
      * Make docstring of EarlyStoppingCallback match other callbacks.
      
      * Fix typo in docs
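      The patience/threshold behavior this PR describes — reset a counter whenever the monitored metric improves by at least a threshold, stop once it fails to improve `patience` evaluations in a row — can be sketched as follows. Class and method names are illustrative, not the actual transformers.EarlyStoppingCallback API:

      ```python
      class EarlyStoppingSketch:
          """Minimal sketch of early stopping with patience and a minimum
          improvement threshold, as described in the PR above."""

          def __init__(self, patience=3, threshold=0.0, greater_is_better=True):
              self.patience = patience
              self.threshold = threshold
              self.greater_is_better = greater_is_better
              self.best_metric = None   # set on the first evaluation
              self.counter = 0          # evaluations without improvement

          def on_evaluate(self, metric):
              """Record one evaluation; return True when training should stop."""
              improved = (
                  self.best_metric is None
                  or (self.greater_is_better
                      and metric > self.best_metric + self.threshold)
                  or (not self.greater_is_better
                      and metric < self.best_metric - self.threshold)
              )
              if improved:
                  self.best_metric = metric
                  self.counter = 0  # reset patience when the metric improves
              else:
                  self.counter += 1
              return self.counter >= self.patience
      ```

      Keeping all of this state inside the callback, rather than mutating TrainingArguments, mirrors the "move early stopping arguments and state into early stopping callback" step above.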
  19. 12 Nov, 2020 1 commit
  20. 06 Nov, 2020 1 commit
  21. 03 Nov, 2020 1 commit
    • Refactoring the generate() function (#6949) · a1bbcf3f
      Patrick von Platen authored
      * first draft
      
      * show design proposition for new generate method
      
      * up
      
      * make better readable
      
      * make first version
      
      * gpt2 tests pass
      
      * make beam search for gpt2 work
      
      * add first encoder-decoder code
      
      * delete typo
      
      * make t5 work
      
      * save intermediate
      
      * make bart work with beam search
      
      * finish beam search bart / t5
      
      * add default kwargs
      
      * make more tests pass
      
      * fix no bad words sampler
      
      * some fixes and tests for all distribution processors
      
      * fix test
      
      * fix rag slow tests
      
      * merge to master
      
      * add nograd to generate
      
      * make all slow tests pass
      
      * speed up generate
      
      * fix edge case bug
      
      * small fix
      
      * correct typo
      
      * add type hints and docstrings
      
      * fix typos in tests
      
      * add beam search tests
      
      * add tests for beam scorer
      
      * fix test rag
      
      * finish beam search tests
      
      * move generation tests into separate file
      
      * fix generation tests
      
      * more tests
      
      * add aggressive generation tests
      
      * fix tests
      
      * add gpt2 sample test
      
      * add more docstring
      
      * add more docs
      
      * finish doc strings
      
      * apply some more of Sylvain's and Sam's comments
      
      * fix some typos
      
      * make fix copies
      
      * apply Lysandre's and Sylvain's comments
      
      * final corrections on examples
      
      * small fix for reformer
  22. 27 Oct, 2020 1 commit
  23. 26 Oct, 2020 3 commits
    • Doc styling (#8067) · 08f534d2
      Sylvain Gugger authored
      * Important files
      
      * Styling them all
      
      * Revert "Styling them all"
      
      This reverts commit 7d029395fdae8513b8281cbc2a6c239f8093503e.
      
      * Styling them for realsies
      
      * Fix syntax error
      
      * Fix benchmark_utils
      
      * More fixes
      
      * Fix modeling auto and script
      
      * Remove new line
      
      * Fixes
      
      * More fixes
      
      * Fix more files
      
      * Style
      
      * Add FSMT
      
      * More fixes
      
      * More fixes
      
      * More fixes
      
      * More fixes
      
      * Fixes
      
      * More fixes
      
      * More fixes
      
      * Last fixes
      
      * Make sphinx happy
    • Doc fixes in preparation for the docstyle PR (#8061) · 04a17f85
      Sylvain Gugger authored
      * Fixes in preparation for doc styling
      
      * More fixes
      
      * Better syntax
      
      * Fixes
      
      * Style
      
      * More fixes
      
      * More fixes
    • Mlflow integration callback (#8016) · c48b16b8
      noise-field authored
      * Add MLflow integration class
      
      Add integration code for MLflow in integrations.py along with the code
      that checks that MLflow is installed.
      
      * Add MLflowCallback import
      
      Add import of MLflowCallback in trainer.py
      
      * Handle model argument
      
      Allow the callback to handle model argument and store model config items as hyperparameters.
      
      * Log parameters to MLflow in batches
      
      MLflow cannot log more than a hundred parameters at once.
      Code added to split the parameters into batches of 100 items and log the batches one by one.
      
      * Fix style
      
      * Add docs on MLflow callback
      
      * Fix issue with unfinished runs
      
      The "fluent" api used in MLflow integration allows only one run to be active at any given moment. If the Trainer is disposed off and a new one is created, but the training is not finished, it will refuse to log the results when the next trainer is created.
      
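      The batching workaround this commit describes — MLflow cannot log more than a hundred parameters in one call, so split them into chunks of 100 and log chunk by chunk — can be sketched as below. The `log_batch` callable stands in for the real MLflow logging call; this is an illustration, not the actual MLflowCallback code:

      ```python
      def log_params_in_batches(params, log_batch, batch_size=100):
          """Split a flat dict of hyperparameters into chunks of at most
          `batch_size` items and hand each chunk to `log_batch` in turn."""
          items = list(params.items())
          for i in range(0, len(items), batch_size):
              log_batch(dict(items[i : i + batch_size]))
      ```

      For example, 250 parameters would be logged in three calls of 100, 100, and 50 items.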
  24. 13 Oct, 2020 1 commit
  25. 07 Oct, 2020 1 commit
    • Trainer callbacks (#7596) · 08ba4b49
      Sylvain Gugger authored
      
      
      * Initial callback proposal
      
      * Finish various callbacks
      
      * Post-rebase conflicts
      
      * Fix tests
      
      * Don't use something that's not set
      
      * Documentation
      
      * Remove unwanted print.
      
      * Document all models can work
      
      * Add tests + small fixes
      
      * Update docs/source/internal/trainer_utils.rst
      Co-authored-by: Lysandre Debut <lysandre@huggingface.co>
      
      * Address review comments
      
      * Fix TF tests
      
      * Real fix this time
      
      * This one should work
      
      * Fix typo
      
      * Really fix typo
      Co-authored-by: Lysandre Debut <lysandre@huggingface.co>
  26. 24 Sep, 2020 1 commit
  27. 23 Sep, 2020 1 commit
  28. 11 Sep, 2020 2 commits
    • Compute loss method (#7074) · 4cbd50e6
      Sylvain Gugger authored
    • Automate the lists in auto-xxx docs (#7061) · e841b75d
      Sylvain Gugger authored
      * More readable dict
      
      * More nlp -> datasets
      
      * Revert "More nlp -> datasets"
      
      This reverts commit 3cd1883d226c63c4a686fc1fed35f2cd586ebe45.
      
      * Automate the lists in auto-xxx docs
      
      * nlp -> datasets
      
      * Fix new key
  29. 09 Sep, 2020 1 commit
  30. 02 Sep, 2020 1 commit
    • [pipelines] Text2TextGenerationPipeline (#6744) · 4230d30f
      Suraj Patil authored
      * add Text2TextGenerationPipeline
      
      * remove max length warning
      
      * remove comments
      
      * remove input_length
      
      * fix typo
      
      * add tests
      
      * use TFAutoModelForSeq2SeqLM
      
      * doc
      
      * typo
      
      * add the doc below TextGenerationPipeline
      
      * doc nit
      
      * style
      
      * delete comment
  31. 01 Sep, 2020 1 commit
  32. 27 Aug, 2020 1 commit
  33. 14 Aug, 2020 1 commit
  34. 04 Aug, 2020 1 commit
  35. 03 Aug, 2020 1 commit