1. 06 Jan, 2023 1 commit
  2. 05 Jan, 2023 2 commits
  3. 04 Jan, 2023 1 commit
  4. 03 Jan, 2023 1 commit
  5. 30 Dec, 2022 1 commit
  6. 27 Dec, 2022 1 commit
  7. 23 Dec, 2022 1 commit
  8. 30 Nov, 2022 1 commit
  9. 17 Nov, 2022 1 commit
  10. 03 Oct, 2022 1 commit
      [autoparallel] add rotor C version (#1658) · 1df98d5b
      Boyuan Yao authored
      * [autoparallel] add rotor c version
      
      * [fx] remove metainfoprop in rotor solver
      
      * [autoparallel] modify C code format
      
      * [autoparallel] remove build.py
      
      * [autoparallel] fix C extension build
      
      * [autoparallel] add C solver consistency test
      
      * [autoparallel] remove some unused imports
      
      * [autoparallel] refactor rotor solver code
      
      * [autoparallel] replace print with colossalai logger
      
      * [autoparallel] ranks fixed
  11. 27 Jul, 2022 1 commit
      [fx] add torchaudio test (#1369) · be229217
      Super Daniel authored
      * [fx] add torchaudio test
      
      * [fx] add torchaudio test
      
      * [fx] add torchaudio test
      
      * [fx] add torchaudio test
      
      * [fx] add torchaudio test
      
      * [fx] add torchaudio test
      
      * [fx] add torchaudio test
      
      * [fx] add torchaudio test and test patches
      
      * Delete ~
      
      * [fx] add patches and patches test
      
      * [fx] add patches and patches test
      
      * [fx] fix patches
      
      * [fx] fix rnn patches
      
      * [fx] fix rnn patches
      
      * [fx] fix rnn patches
      
      * [fx] fix rnn patches
      
      * [fx] merge upstream
      
      * [fx] fix import errors
  12. 09 May, 2022 1 commit
  13. 07 May, 2022 1 commit
  14. 05 May, 2022 1 commit
  15. 27 Apr, 2022 1 commit
  16. 22 Apr, 2022 2 commits
  17. 19 Apr, 2022 5 commits
  18. 12 Apr, 2022 1 commit
  19. 16 Mar, 2022 1 commit
  20. 11 Mar, 2022 6 commits
  21. 15 Feb, 2022 1 commit
  22. 13 Jan, 2022 1 commit
  23. 21 Dec, 2021 1 commit
  24. 09 Dec, 2021 1 commit
      Develop/experiments (#59) · da01c234
      Frank Lee authored
      * Add gradient accumulation, fix lr scheduler
      
      * fix FP16 optimizer and adapted torch amp with tensor parallel (#18)
      
      * fixed bugs in compatibility between torch amp and tensor parallel and performed some minor fixes
      
      * fixed trainer
      
      * Revert "fixed trainer"
      
      This reverts commit 2e0b0b76990e8d4e337add483d878c0f61cf5097.
      
      * improved consistency between trainer, engine and schedule (#23)
      Co-authored-by: 1SAA <c2h214748@gmail.com>
      
      * Split conv2d, class token, positional embedding in 2d, Fix random number in ddp
      Fix convergence in cifar10, Imagenet1000
      
      * Integrate 1d tensor parallel in Colossal-AI (#39)
      
      * fixed 1D and 2D convergence (#38)
      
      * optimized 2D operations
      
      * fixed 1D ViT convergence problem
      
      * Feature/ddp (#49)
      
      * remove redundancy func in setup (#19) (#20)
      
      * use env to control the language of doc (#24) (#25)
      
      * Support TP-compatible Torch AMP and Update trainer API (#27)
      
      * Add gradient accumulation, fix lr scheduler
      
      * fix FP16 optimizer and adapted torch amp with tensor parallel (#18)
      
      * fixed bugs in compatibility between torch amp and tensor parallel and performed some minor fixes
      
      * fixed trainer
      
      * Revert "fixed trainer"
      
      This reverts commit 2e0b0b76990e8d4e337add483d878c0f61cf5097.
      
      * improved consistency between trainer, engine and schedule (#23)
      Co-authored-by: 1SAA <c2h214748@gmail.com>
      Co-authored-by: 1SAA <c2h214748@gmail.com>
      Co-authored-by: ver217 <lhx0217@gmail.com>
      
      * add an example of ViT-B/16 and remove w_norm clipping in LAMB (#29)
      
      * add explanation for ViT example (#35) (#36)
      
      * support torch ddp
      
      * fix loss accumulation
      
      * add log for ddp
      
      * change seed
      
      * modify timing hook
      Co-authored-by: Frank Lee <somerlee.9@gmail.com>
      Co-authored-by: 1SAA <c2h214748@gmail.com>
      Co-authored-by: binmakeswell <binmakeswell@gmail.com>
      
      * Feature/pipeline (#40)
      
      * remove redundancy func in setup (#19) (#20)
      
      * use env to control the language of doc (#24) (#25)
      
      * Support TP-compatible Torch AMP and Update trainer API (#27)
      
      * Add gradient accumulation, fix lr scheduler
      
      * fix FP16 optimizer and adapted torch amp with tensor parallel (#18)
      
      * fixed bugs in compatibility between torch amp and tensor parallel and performed some minor fixes
      
      * fixed trainer
      
      * Revert "fixed trainer"
      
      This reverts commit 2e0b0b76990e8d4e337add483d878c0f61cf5097.
      
      * improved consistency between trainer, engine and schedule (#23)
      Co-authored-by: 1SAA <c2h214748@gmail.com>
      Co-authored-by: 1SAA <c2h214748@gmail.com>
      Co-authored-by: ver217 <lhx0217@gmail.com>
      
      * add an example of ViT-B/16 and remove w_norm clipping in LAMB (#29)
      
      * add explanation for ViT example (#35) (#36)
      
      * optimize communication of pipeline parallel
      
      * fix grad clip for pipeline
      Co-authored-by: Frank Lee <somerlee.9@gmail.com>
      Co-authored-by: 1SAA <c2h214748@gmail.com>
      Co-authored-by: binmakeswell <binmakeswell@gmail.com>
      
      * optimized 3d layer to fix slow computation ; tested imagenet performance with 3d; reworked lr_scheduler config definition; fixed launch args; fixed some printing issues; simplified apis of 3d layers (#51)
      
      * Update 2.5d layer code to get a similar accuracy on imagenet-1k dataset
      
      * update api for better usability (#58)
      
      update api for better usability
      Co-authored-by: 1SAA <c2h214748@gmail.com>
      Co-authored-by: ver217 <lhx0217@gmail.com>
      Co-authored-by: puck_WCR <46049915+WANG-CR@users.noreply.github.com>
      Co-authored-by: binmakeswell <binmakeswell@gmail.com>
      Co-authored-by: アマデウス <kurisusnowdeng@users.noreply.github.com>
      Co-authored-by: BoxiangW <45734921+BoxiangW@users.noreply.github.com>
  25. 18 Nov, 2021 1 commit
  26. 15 Nov, 2021 1 commit
  27. 03 Nov, 2021 1 commit
  28. 28 Oct, 2021 1 commit