"vscode:/vscode.git/clone" did not exist on "bdd690a74da5283cbc893dfd79e1c7c72ec1bcfa"
  1. 13 Jan, 2021 1 commit
    • [trainer] deepspeed integration (#9211) · 2df34f4a
      Stas Bekman authored

      * deepspeed integration
      
      * style
      
      * add test
      
      * ds wants to do its own backward
      
      * fp16 assert
      
      * Update src/transformers/training_args.py
      Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
      
      * style
      
      * for clarity extract what args are being passed to deepspeed
      
      * introduce the concept of self.wrapped_model
      
      * s/self.wrapped_model/self.model_wrapped/
      
      * complete transition to self.model_wrapped / self.model
      
      * fix
      
      * doc
      
      * give ds its own init
      
      * add custom overrides, handle bs correctly
      
      * fix test
      
      * clean up model_init logic, fix small bug
      
      * complete fix
      
      * collapse --deepspeed_config into --deepspeed
      
      * style
      
      * start adding doc notes
      
      * style
      
      * implement hf2ds optimizer and scheduler configuration remapping
      
      * oops
      
      * call get_num_training_steps only when absolutely needed
      
      * workaround broken auto-formatter
      
      * deepspeed_config arg is no longer needed - fixed in deepspeed master
      
      * use hf's fp16 args in config
      
      * clean
      
      * start on the docs
      
      * rebase cleanup
      
      * finish up --fp16
      
      * clarify the supported stages
      
      * big refactor thanks to discovering deepspeed.init_distributed
      
      * cleanup
      
      * revert fp16 part
      
      * add checkpoint-support
      
      * move ds init into integrations
      
      * extend docs
      
      * cleanup
      
      * unfix docs
      
      * clean up old code
      
      * imports
      
      * move docs
      
      * fix logic
      
      * make it clear which file it's referring to
      
      * document nodes/gpus
      
      * style
      
      * wrong format
      
      * style
      
      * deepspeed handles gradient clipping
      
      * easier to read
      
      * major doc rewrite
      
      * Apply suggestions from code review
      Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
      
      * docs
      
      * switch to AdamW optimizer
      
      * style
      
      * Apply suggestions from code review
      Co-authored-by: Lysandre Debut <lysandre@huggingface.co>
      
      * clarify doc
      Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
      Co-authored-by: Lysandre Debut <lysandre@huggingface.co>
      2df34f4a
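      For orientation, a minimal sketch of what this integration looks like from user code after the PR: everything funnels through the single `deepspeed` training argument (the old --deepspeed_config flag was folded into it), and the HF fp16 settings are remapped into the DeepSpeed config. The checkpoint name, dataset, and ds_config.json contents below are hypothetical placeholders, not part of the PR:

          # Minimal sketch (not the PR's test code). Assumes DeepSpeed is
          # installed and a ds_config.json sits next to the script; run under
          # the launcher, e.g. `deepspeed train.py`.
          import torch
          from torch.utils.data import Dataset
          from transformers import AutoModelForSequenceClassification, Trainer, TrainingArguments

          class ToyDataset(Dataset):
              """Eight identical dummy examples, just enough to exercise train()."""
              def __len__(self):
                  return 8
              def __getitem__(self, i):
                  return {"input_ids": torch.tensor([101, 2023, 102]),
                          "attention_mask": torch.tensor([1, 1, 1]),
                          "labels": torch.tensor(0)}

          args = TrainingArguments(
              output_dir="out",
              fp16=True,                   # HF's fp16 args are remapped into the DS config
              deepspeed="ds_config.json",  # single entry point after the flag collapse
          )
          model = AutoModelForSequenceClassification.from_pretrained("bert-base-uncased")
          trainer = Trainer(model=model, args=args, train_dataset=ToyDataset())
          trainer.train()  # DeepSpeed runs its own backward/step; Trainer keeps the
                           # DS engine as self.model_wrapped and the raw model as self.model

      Process-group setup is delegated to deepspeed.init_distributed, which is what enabled the "big refactor" commit above.
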
  2. 07 Jan, 2021 1 commit
  3. 06 Jan, 2021 1 commit
  4. 05 Jan, 2021 2 commits
    • [PyTorch Bart] Split Bart into different models (#9343) · eef66035
      Patrick von Platen authored
      * first try
      
      * remove old template
      
      * finish bart
      
      * finish mbart
      
      * delete unnecessary line
      
      * init pegasus
      
      * save intermediate
      
      * correct pegasus
      
      * finish pegasus
      
      * remove cookie cutter leftover
      
      * add marian
      
      * finish blenderbot
      
      * replace in file
      
      * correctly split blenderbot
      
      * delete "old" folder
      
      * correct "add statement"
      
      * adapt config for tf comp
      
      * correct configs for tf
      
      * remove ipdb
      
      * fix more stuff
      
      * fix mbart
      
      * push pegasus fix
      
      * fix mbart
      
      * more fixes
      
      * fix research projects code
      
      * finish docs for bart, mbart, and marian
      
      * delete unnecessary file
      
      * correct attn typo
      
      * correct configs
      
      * remove pegasus for seq class
      
      * correct peg docs
      
      * correct peg docs
      
      * finish configs
      
      * further improve docs
      
      * add copied from statements to mbart
      
      * fix copied from in mbart
      
      * add copy statements to marian
      
      * add copied from to marian
      
      * add pegasus copied from
      
      * finish pegasus
      
      * finish copied from
      
      * Apply suggestions from code review
      
      * make style
      
      * backward comp blenderbot
      
      * apply Lysandre's and Sylvain's suggestions
      
      * apply suggestions
      
      * push last fixes
      
      * fix docs
      
      * fix tok tests
      
      * fix imports code style
      
      * fix doc
      eef66035
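      The practical effect of the split: MBart, Pegasus, Marian, and Blenderbot each get their own model classes instead of riding on Bart's. A hedged sketch with a public Pegasus checkpoint (any of the split architectures would work the same way):

          # Sketch only; the Pegasus tokenizer requires sentencepiece.
          from transformers import PegasusForConditionalGeneration, PegasusTokenizer

          tok = PegasusTokenizer.from_pretrained("google/pegasus-xsum")
          model = PegasusForConditionalGeneration.from_pretrained("google/pegasus-xsum")

          batch = tok(["The tower is 324 metres tall, about the height of an "
                       "81-storey building."], return_tensors="pt")
          summary_ids = model.generate(**batch)
          print(tok.batch_decode(summary_ids, skip_special_tokens=True))

      Note the "remove pegasus for seq class" commit: after the split, Pegasus no longer carries a sequence-classification head, so only the conditional-generation class applies here.
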
    • Yusuke Mori
  5. 04 Jan, 2021 3 commits
  6. 03 Jan, 2021 1 commit
  7. 23 Dec, 2020 1 commit
  8. 22 Dec, 2020 5 commits
  9. 21 Dec, 2020 3 commits
  10. 20 Dec, 2020 1 commit
  11. 19 Dec, 2020 1 commit
  12. 18 Dec, 2020 7 commits
  13. 17 Dec, 2020 1 commit
  14. 16 Dec, 2020 3 commits
    • Experimental support for fairscale ShardedDDP (#9139) · 9a671853
      Sylvain Gugger authored
      * Experimental support for fairscale ShardedDDP
      
      * Add import error if fairscale not available
      
      * Address review comments
      
      * Fix seq2seq trainer
      9a671853
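      A minimal sketch of the flag this commit adds (a plain boolean at this point; later releases turned it into a string option). fairscale must be installed, or Trainer raises the import error added above; everything else here is a placeholder:

          # Run under torch.distributed, e.g.
          #   python -m torch.distributed.launch --nproc_per_node=2 train.py
          from transformers import TrainingArguments

          args = TrainingArguments(
              output_dir="out",
              sharded_ddp=True,  # wrap the model in fairscale's ShardedDDP,
                                 # sharding optimizer state across ranks
          )
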
    • Sylvain Gugger
    • [Flax] Align FlaxBertForMaskedLM with BertForMaskedLM, implement from_pretrained, init (#9054) · 640e6fe1
      Patrick von Platen authored

      * save intermediate
      
      * save intermediate
      
      * save intermediate
      
      * correct flax bert model file
      
      * new module / model naming
      
      * make style
      
      * almost finish BERT
      
      * finish roberta
      
      * make fix-copies
      
      * delete keys file
      
      * last refactor
      
      * fixes in run_mlm_flax.py
      
      * remove pooled from run_mlm_flax.py
      
      * fix gelu | gelu_new
      
      * remove Module from inits
      
      * splits
      
      * dirty print
      
      * preventing warmup_steps == 0
      
      * smaller splits
      
      * make fix-copies
      
      * dirty print
      
      * dirty print
      
      * initial_evaluation argument
      
      * declaration order fix
      
      * proper model initialization/loading
      
      * proper initialization
      
      * run_mlm_flax improvements: improper model inputs bugfix + automatic dataset splitting + tokenizers parallelism warning + avoiding warmup_steps=0 bug
      
      * removed tokenizers warning hack, fixed model re-initialization
      
      * reverted training_args.py changes
      
      * fix flax from pretrained
      
      * improve test in flax
      
      * apply sylvains tips
      
      * update init
      
      * make 0.3.0 compatible
      
      * revert tevens changes
      
      * revert tevens changes 2
      
      * finalize revert
      
      * fix bug
      
      * add docs
      
      * add pretrained to init
      
      * Update src/transformers/modeling_flax_utils.py
      
      * fix copies
      
      * final improvements
      Co-authored-by: TevenLeScao <teven.lescao@gmail.com>
      640e6fe1
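      A short sketch of the from_pretrained path this commit implements: Flax weights are loaded (or converted from the PyTorch checkpoint on the hub) by name, mirroring BertForMaskedLM. The example sentence is arbitrary:

          import numpy as np
          from transformers import BertTokenizerFast, FlaxBertForMaskedLM

          tokenizer = BertTokenizerFast.from_pretrained("bert-base-uncased")
          model = FlaxBertForMaskedLM.from_pretrained("bert-base-uncased")

          inputs = tokenizer("Paris is the [MASK] of France.", return_tensors="np")
          logits = model(**inputs)[0]  # shape: (batch, seq_len, vocab_size)

          mask_pos = int(np.argmax(inputs["input_ids"][0] == tokenizer.mask_token_id))
          predicted = int(logits[0, mask_pos].argmax())
          print(tokenizer.decode([predicted]))  # expected to be something like "capital"
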
  15. 15 Dec, 2020 4 commits
  16. 11 Dec, 2020 3 commits
  17. 10 Dec, 2020 1 commit
  18. 09 Dec, 2020 1 commit