1. 10 May, 2023 3 commits
  2. 09 May, 2023 1 commit
      [booster] fix no_sync method (#3709) · 6552cbf8
      Hongxin Liu authored
      * [booster] fix no_sync method
      
      * [booster] add test for ddp no_sync
      
      * [booster] fix merge
      
      * [booster] update unit test
      
      * [booster] update unit test
      
      * [booster] update unit test
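The no_sync fix above concerns skipping gradient all-reduce during gradient accumulation, in the spirit of PyTorch DDP's `no_sync()` context manager. A minimal self-contained sketch of the pattern (the `FakeDDP` class is a hypothetical stand-in, not the booster API):

```python
from contextlib import contextmanager

class FakeDDP:
    """Toy stand-in for a DDP-wrapped model: tracks whether a backward
    pass should trigger gradient synchronization (an all-reduce)."""

    def __init__(self):
        self.require_sync = True
        self.sync_count = 0

    @contextmanager
    def no_sync(self):
        # Temporarily disable gradient all-reduce, as in
        # torch.nn.parallel.DistributedDataParallel.no_sync().
        old = self.require_sync
        self.require_sync = False
        try:
            yield
        finally:
            self.require_sync = old

    def backward(self):
        # In real DDP an all-reduce would fire here when sync is required.
        if self.require_sync:
            self.sync_count += 1

model = FakeDDP()
# Accumulate gradients over 3 micro-batches, synchronize only on the last.
for step in range(4):
    if step < 3:
        with model.no_sync():
            model.backward()   # no all-reduce
    else:
        model.backward()       # gradients synchronized once
print(model.sync_count)  # -> 1
```

The unit test added in the PR checks exactly this kind of invariant: gradients are reduced once per accumulation window, not once per micro-batch.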
  3. 08 May, 2023 2 commits
  4. 06 May, 2023 4 commits
  5. 05 May, 2023 6 commits
      [booster] refactor all dp fashion plugins (#3684) · d0915f54
      Hongxin Liu authored
      * [booster] add dp plugin base
      
      * [booster] inherit dp plugin base
      
      * [booster] refactor unit tests
      [CI] Update test_sharded_optim_with_sync_bn.py (#3688) · b49020c1
      digger-yu authored
      fix spelling error in line 23:
      change "cudnn_determinstic=True" to "cudnn_deterministic=True"
      Merge pull request #3680 from digger-yu/digger-yu-patch-2 · b36e67cb
      Tong Li authored
      fix spelling errors in applications/Chat/evaluate/
      [booster] gemini plugin support shard checkpoint (#3610) · 307894f7
      jiangmingyan authored
      
      
      * gemini plugin add shard checkpoint save/load

      * gemini plugin support shard checkpoint

      * [API Refactoring] gemini plugin support shard checkpoint
      
      ---------
      Co-authored-by: luchen <luchen@luchendeMBP.lan>
      Co-authored-by: luchen <luchen@luchendeMacBook-Pro.local>
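The shard-checkpoint work above revolves around splitting one large state dict into size-capped shard files plus an index that maps each weight name to its shard. A minimal sketch of that idea (the function name and file-name pattern are illustrative, not the Gemini plugin's actual API):

```python
import json

def shard_state_dict(state_dict, max_shard_size):
    """Split a (name -> size) state dict into shards whose total size
    stays under max_shard_size, plus an index mapping each weight to
    its shard file. Sizes are plain ints here; real code would use
    tensor.numel() * tensor.element_size()."""
    shards, index = [], {}
    current, current_size = {}, 0
    for name, size in state_dict.items():
        if current and current_size + size > max_shard_size:
            shards.append(current)
            current, current_size = {}, 0
        current[name] = size
        current_size += size
    if current:
        shards.append(current)
    for i, shard in enumerate(shards):
        fname = f"model-{i + 1:05d}-of-{len(shards):05d}.bin"
        for name in shard:
            index[name] = fname
    return shards, index

weights = {"embed": 60, "layer1": 50, "layer2": 50, "head": 30}
shards, index = shard_state_dict(weights, max_shard_size=100)
print(len(shards))               # -> 3
print(json.dumps(index, indent=2))
```

On load, the index file is read first and only the shards containing the requested weights are opened, which keeps peak memory bounded by the shard size rather than the full model size.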
      [chat] PPO stage3 doc enhancement (#3679) · 0f785cb1
      Camille Zhong authored
      * Add RoBERTa for RLHF Stage 2 & 3 (test)
      
      RoBERTa for RLHF Stage 2 & 3 (still in testing)
      
      Revert "Add RoBERTa for RLHF Stage 2 & 3 (test)"
      
      This reverts commit 06741d894dcbe958acd4e10d771f22275e20e368.
      
      Add RoBERTa for RLHF stage 2 & 3
      
      1. add roberta folder under model folder
      2. add roberta option in train_reward_model.py
      3. add some test in testci
      
      Update test_ci.sh
      
      Revert "Update test_ci.sh"
      
      This reverts commit 9c7352b81766f3177d31eeec0ec178a301df966a.
      
      update roberta with coati
      
      chat ci update
      
      Revert "chat ci update"
      
      This reverts commit 17ae7ae01fa752bd3289fc39069868fde99cf846.
      
      * Update README.md
      
      Update README.md
      
      * update readme
      
      * Update test_ci.sh
      
      * update readme and add a script
      
      update readme and add a script
      
      modify readme
      
      Update README.md
      [doc] fix chat spelling error (#3671) · 6650daeb
      digger-yu authored
      * Update README.md
      
      change "huggingaface" to "huggingface"
      
      * Update README.md
      
      change "Colossa-AI" to "Colossal-AI"
  6. 04 May, 2023 3 commits
  7. 28 Apr, 2023 5 commits
  8. 27 Apr, 2023 6 commits
      update questions and readme · c4191173
      Tong Li authored
      remove unnecessary step and update readme · aa77ddae
      Tong Li authored
      [zero] Suggests a minor change to confusing variable names in the ZeRO optimizer. (#3173) · a22407cc
      YH authored
      * Fix confusing variable name in zero opt
      
      * Apply lint
      
      * Fix util func
      
      * Fix minor util func
      
      * Fix zero param optimizer name
      [chat] refactor model save/load logic (#3654) · 842768a1
      Hongxin Liu authored
      * [chat] strategy refactor unwrap model
      
      * [chat] strategy refactor save model
      
      * [chat] add docstr
      
      * [chat] refactor trainer save model
      
      * [chat] fix strategy typing
      
      * [chat] refactor trainer save model
      
      * [chat] update readme
      
      * [chat] fix unit test
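Part of the save/load refactor above is unwrapping the trained model before saving it. One common sub-step is stripping the wrapper prefix (e.g. the `module.` that DDP prepends) from state-dict keys so the checkpoint loads into the bare model. A hedged sketch (`unwrap_state_dict` is a hypothetical helper, not the coati API):

```python
def unwrap_state_dict(state_dict, prefix="module."):
    """Strip a wrapper prefix from state-dict keys so the checkpoint
    can be loaded into the unwrapped model."""
    return {
        (k[len(prefix):] if k.startswith(prefix) else k): v
        for k, v in state_dict.items()
    }

wrapped = {"module.encoder.weight": 1, "module.head.bias": 2}
print(unwrap_state_dict(wrapped))
# -> {'encoder.weight': 1, 'head.bias': 2}
```

Centralizing this in the strategy (rather than in each trainer) is what lets every trainer share one save path, which is the thrust of the refactor.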
      [chat] remove lm model class (#3653) · 6ef70114
      Hongxin Liu authored
      * [chat] refactor lora
      
      * [chat] remove lm class
      
      * [chat] refactor save model
      
      * [chat] refactor train sft
      
      * [chat] fix ci
      
      * [chat] fix ci
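The LoRA refactor above builds on the low-rank adaptation idea: the effective weight is the frozen base weight plus the product of two small trainable factors, W + B @ A, with rank r much smaller than the matrix dimensions. A tiny numeric sketch (pure Python, illustrative only):

```python
def matmul(A, B):
    # Plain-Python matrix product for small illustrative matrices.
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

def add(A, B):
    return [[x + y for x, y in zip(ra, rb)] for ra, rb in zip(A, B)]

# Frozen base weight W (2x2) and trainable low-rank factors
# B (2x1) and A (1x2): rank r = 1, so only 4 values train
# instead of 4 full-weight entries per adapted matrix.
W = [[1.0, 0.0], [0.0, 1.0]]
B = [[0.5], [1.0]]
A = [[2.0, 0.0]]
W_eff = add(W, matmul(B, A))   # W + B @ A, the LoRA-adapted weight
print(W_eff)  # -> [[2.0, 0.0], [2.0, 1.0]]
```

Because only B and A receive gradients, saving a LoRA model means saving just the small factors, which is why the lora and save-model refactors land in the same PR.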
      [Doc] enhancement on README.md for chat examples (#3646) · 8bccb72c
      Camille Zhong authored
      * Add RoBERTa for RLHF Stage 2 & 3 (test)
      
      RoBERTa for RLHF Stage 2 & 3 (still in testing)
      
      Revert "Add RoBERTa for RLHF Stage 2 & 3 (test)"
      
      This reverts commit 06741d894dcbe958acd4e10d771f22275e20e368.
      
      Add RoBERTa for RLHF stage 2 & 3
      
      1. add roberta folder under model folder
      2. add roberta option in train_reward_model.py
      3. add some test in testci
      
      Update test_ci.sh
      
      Revert "Update test_ci.sh"
      
      This reverts commit 9c7352b81766f3177d31eeec0ec178a301df966a.
      
      update roberta with coati
      
      chat ci update
      
      Revert "chat ci update"
      
      This reverts commit 17ae7ae01fa752bd3289fc39069868fde99cf846.
      
      * Update README.md
      
      Update README.md
      
      * update readme
      
      * Update test_ci.sh
  9. 26 Apr, 2023 5 commits
      [chat] refactor trainer (#3648) · 2a951955
      Hongxin Liu authored
      * [chat] ppo trainer remove useless args
      
      * [chat] update examples
      
      * [chat] update benchmark
      
      * [chat] update examples
      
      * [chat] fix sft training with wandb
      
      * [chat] polish docstr
      [chat] polish performance evaluator (#3647) · f8288315
      Hongxin Liu authored
      [gemini] accelerate inference (#3641) · 50793b35
      Hongxin Liu authored
      * [gemini] support don't scatter after inference
      
      * [chat] update colossalai strategy
      
      * [chat] fix opt benchmark
      
      * [chat] update opt benchmark
      
      * [gemini] optimize inference
      
      * [test] add gemini inference test
      
      * [chat] fix unit test ci
      
      * [chat] fix ci
      
      * [chat] fix ci
      
      * [chat] skip checkpoint test
      [booster] add low level zero plugin (#3594) · 4b3240cb
      Hongxin Liu authored
      * [booster] add low level zero plugin
      
      * [booster] fix gemini plugin test
      
      * [booster] fix precision
      
      * [booster] add low level zero plugin test
      
      * [test] fix booster plugin test oom
      
      * [test] fix booster plugin test oom
      
      * [test] fix googlenet and inception output trans
      
      * [test] fix diffuser clip vision model
      
      * [test] fix torchaudio_wav2vec2_base
      
      * [test] fix low level zero plugin test
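The low level zero plugin above implements ZeRO-style sharding, whose first step is partitioning optimizer states across ranks so each rank stores only its share. A minimal sketch of one way to balance such a partition (greedy assignment; names are illustrative, not the plugin's API):

```python
def partition_params(param_sizes, world_size):
    """Greedy ZeRO-style partition: assign each parameter's optimizer
    state to the rank currently holding the least total size, largest
    parameters first."""
    buckets = [[] for _ in range(world_size)]
    loads = [0] * world_size
    for name, size in sorted(param_sizes.items(), key=lambda kv: -kv[1]):
        rank = loads.index(min(loads))  # least-loaded rank
        buckets[rank].append(name)
        loads[rank] += size
    return buckets, loads

params = {"embed": 100, "layer1": 40, "layer2": 40, "head": 20}
buckets, loads = partition_params(params, world_size=2)
print(loads)  # -> [100, 100]
```

With ZeRO-1 each rank then updates only its bucket and broadcasts the updated parameters, cutting optimizer-state memory roughly by the world size; the OOM fixes in the PR's test commits reflect how sensitive these tests are to that memory budget.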
      [doc] Fix typo under colossalai and doc (#3618) · b9a8dff7
      digger-yu authored
      * Fixed several spelling errors under colossalai
      
      * Fix the spelling error in colossalai and docs directory
      
      * Cautiously changed spelling errors under the example folder
      
      * Update runtime_preparation_pass.py
      
      revert autograft to autograd
      
      * Update search_chunk.py
      
      utile to until
      
      * Update check_installation.py
      
      change misteach to mismatch in line 91
      
      * Update 1D_tensor_parallel.md
      
      revert to perceptron
      
      * Update 2D_tensor_parallel.md
      
      revert to perceptron in line 73
      
      * Update 2p5D_tensor_parallel.md
      
      revert to perceptron in line 71
      
      * Update 3D_tensor_parallel.md
      
      revert to perceptron in line 80
      
      * Update README.md
      
      revert to resnet in line 42
      
      * Update reorder_graph.py
      
      revert to indice in line 7
      
      * Update p2p.py
      
      revert to megatron in line 94
      
      * Update initialize.py
      
      revert to torchrun in line 198
      
      * Update routers.py
      
      change to detailed in line 63
      
      * Update routers.py
      
      change to detailed in line 146
      
      * Update README.md
      
      revert random number in line 402
  10. 24 Apr, 2023 3 commits
  11. 22 Apr, 2023 1 commit
  12. 20 Apr, 2023 1 commit