"examples/vscode:/vscode.git/clone" did not exist on "651408a077f842e76e75bfc7d02b8ac38eeb6480"
  1. 20 Jul, 2023 1 commit
  2. 17 Jul, 2023 1 commit
  3. 12 Jun, 2023 1 commit
  4. 07 Jun, 2023 1 commit
  5. 06 Jun, 2023 1 commit
  6. 22 May, 2023 1 commit
  7. 18 May, 2023 1 commit
  8. 09 May, 2023 1 commit
  9. 13 Apr, 2023 1 commit
  10. 22 Mar, 2023 1 commit
  11. 14 Mar, 2023 1 commit
  12. 22 Feb, 2023 2 commits
  13. 06 Feb, 2023 2 commits
  14. 31 Jan, 2023 1 commit
  15. 30 Jan, 2023 1 commit
  16. 23 Jan, 2023 2 commits
  17. 18 Jan, 2023 1 commit
  18. 03 Jan, 2023 1 commit
  19. 01 Dec, 2022 1 commit
  20. 16 Nov, 2022 1 commit
  21. 15 Nov, 2022 2 commits
  22. 03 Nov, 2022 1 commit
  23. 01 Nov, 2022 1 commit
  24. 10 Oct, 2022 1 commit
  25. 20 Sep, 2022 1 commit
  26. 14 Sep, 2022 1 commit
  27. 13 Sep, 2022 1 commit
  28. 07 Sep, 2022 1 commit
  29. 01 Sep, 2022 1 commit
  30. 25 Aug, 2022 1 commit
  31. 22 Aug, 2022 1 commit
  32. 18 Aug, 2022 2 commits
  33. 16 Aug, 2022 1 commit
    • Update run_translation_no_trainer.py (#18637) · 25e651a2
      zhoutang776 authored
      * Update run_translation_no_trainer.py
      
      Found an error in selecting the `no_decay` parameters, plus some small modifications for when the user continues training from a checkpoint
      
      * Fixes the `no_decay` and `resume_step` issues
      
      1. Change the `no_decay` list.
      2. If the user continues training their model from a provided checkpoint, `resume_step` is not initialized properly when `args.gradient_accumulation_steps != 1` (see the sketch below).
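
      A minimal sketch of the two points above, assuming the usual no_trainer parameter grouping and `step_{N}` checkpoint naming; the toy module, checkpoint name, and epoch length are invented for illustration and this is not the exact diff:

      ```python
      import torch

      # Toy module whose parameter names mimic the translation models' naming
      # ("layer_norm.weight" rather than "LayerNorm.weight"); illustrative only.
      class Block(torch.nn.Module):
          def __init__(self):
              super().__init__()
              self.dense = torch.nn.Linear(16, 16)
              self.layer_norm = torch.nn.LayerNorm(16)

      model = Block()
      weight_decay, gradient_accumulation_steps = 0.01, 4

      # 1. The no_decay list must contain the substrings the model's parameters
      #    actually use, otherwise layer-norm weights silently receive weight decay.
      no_decay = ["bias", "LayerNorm.weight", "layer_norm.weight"]
      optimizer_grouped_parameters = [
          {
              "params": [p for n, p in model.named_parameters() if not any(nd in n for nd in no_decay)],
              "weight_decay": weight_decay,
          },
          {
              "params": [p for n, p in model.named_parameters() if any(nd in n for nd in no_decay)],
              "weight_decay": 0.0,
          },
      ]
      optimizer = torch.optim.AdamW(optimizer_grouped_parameters, lr=5e-5)

      # 2. Checkpoint folders named step_{completed_steps} count optimizer updates,
      #    so converting back to dataloader batches when resuming must account for
      #    gradient accumulation.
      checkpoint_name, batches_per_epoch = "step_500", 1000
      resume_step = int(checkpoint_name.replace("step_", "")) * gradient_accumulation_steps
      starting_epoch = resume_step // batches_per_epoch
      resume_step -= starting_epoch * batches_per_epoch
      print(starting_epoch, resume_step)  # -> 2 0
      ```
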
  34. 08 Aug, 2022 1 commit
    • Update no_trainer.py scripts to include accelerate gradient accumulation wrapper (#18473) · a765b68a
      Rasmus Arpe Fogh Jensen authored
      * Added accelerate gradient accumulation wrapper to run_image_classification_no_trainer.py example script
      
      * make fixup changes
      
      * PR comments
      
      * Changed input to Accelerator based on PR comment, ran make fixup
      
      * Added comment explaining the sync_gradients statement
      
      * Fixed lr scheduler max steps
      
      * Changed run_clm_no_trainer.py script to use accelerate gradient accum wrapper
      
      * Fixed all scripts except wav2vec2 pretraining to use accelerate gradient accum wrapper
      
      * Added accelerate gradient accum wrapper for wav2vec2_pretraining_no_trainer.py script
      
      * make fixup and lr_scheduler step inserted back into run_qa_beam_search_no_trainer.py
      
      * Removed changes to the run_wav2vec2_pretraining_no_trainer.py script and fixed the use of a wrong constant in the qa_beam_search_no_trainer.py script (the overall pattern is sketched below)
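
      A minimal, self-contained sketch of the Accelerate gradient accumulation wrapper these scripts adopted; the toy model, synthetic data, and variable names are stand-ins for what the real scripts build from command-line args, not the scripts' own code:

      ```python
      import torch
      from accelerate import Accelerator

      # Toy stand-ins for the model, optimizer, and data of a real no_trainer script.
      model = torch.nn.Linear(16, 2)
      optimizer = torch.optim.AdamW(model.parameters(), lr=5e-5)
      dataset = [(torch.randn(16), torch.tensor(0)) for _ in range(64)]
      train_dataloader = torch.utils.data.DataLoader(dataset, batch_size=8)
      loss_fn = torch.nn.CrossEntropyLoss()

      # The wrapper: declare the accumulation steps once on the Accelerator ...
      accelerator = Accelerator(gradient_accumulation_steps=4)
      model, optimizer, train_dataloader = accelerator.prepare(model, optimizer, train_dataloader)

      completed_steps = 0
      for inputs, labels in train_dataloader:
          # ... then run every batch through accelerator.accumulate(model):
          # backward() accumulates, and optimizer.step() is skipped until the
          # accumulation boundary, so the loop body is identical on every batch.
          with accelerator.accumulate(model):
              loss = loss_fn(model(inputs), labels)
              accelerator.backward(loss)
              optimizer.step()
              optimizer.zero_grad()

          # sync_gradients is True only on batches where a real optimizer update
          # happened (the "sync_gradients statement" the commit message refers to),
          # so per-update bookkeeping such as step counters hooks in here.
          if accelerator.sync_gradients:
              completed_steps += 1

      print(completed_steps)  # 8 batches / 4 accumulation steps -> 2 optimizer updates
      ```
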
  35. 06 Aug, 2022 1 commit