1. 28 Apr, 2023 5 commits
  2. 27 Apr, 2023 6 commits
    • update questions and readme · c4191173
      Tong Li authored
    • remove unnecessary step and update readme · aa77ddae
      Tong Li authored
    • [zero] Suggests a minor change to confusing variable names in the ZeRO optimizer. (#3173) · a22407cc
      YH authored
      * Fix confusing variable name in zero opt
      
      * Apply lint
      
      * Fix util func
      
      * Fix minor util func
      
      * Fix zero param optimizer name
    • [chat] refactor model save/load logic (#3654) · 842768a1
      Hongxin Liu authored
      * [chat] strategy refactor unwrap model
      
      * [chat] strategy refactor save model
      
      * [chat] add docstr
      
      * [chat] refactor trainer save model
      
      * [chat] fix strategy typing
      
      * [chat] refactor trainer save model
      
      * [chat] update readme
      
      * [chat] fix unit test
    • [chat] remove lm model class (#3653) · 6ef70114
      Hongxin Liu authored
      * [chat] refactor lora
      
      * [chat] remove lm class
      
      * [chat] refactor save model
      
      * [chat] refactor train sft
      
      * [chat] fix ci
      
      * [chat] fix ci
    • [Doc] enhancement on README.md for chat examples (#3646) · 8bccb72c
      Camille Zhong authored
      * Add RoBERTa for RLHF Stage 2 & 3 (test)
      
      RoBERTa for RLHF Stage 2 & 3 (still in testing)
      
      Revert "Add RoBERTa for RLHF Stage 2 & 3 (test)"
      
      This reverts commit 06741d894dcbe958acd4e10d771f22275e20e368.
      
      Add RoBERTa for RLHF stage 2 & 3
      
      1. add roberta folder under the model folder
      2. add roberta option in train_reward_model.py
      3. add some tests in test_ci
      
      Update test_ci.sh
      
      Revert "Update test_ci.sh"
      
      This reverts commit 9c7352b81766f3177d31eeec0ec178a301df966a.
      
      update roberta with coati
      
      chat ci update
      
      Revert "chat ci update"
      
      This reverts commit 17ae7ae01fa752bd3289fc39069868fde99cf846.
      
      * Update README.md
      
      Update README.md
      
      * update readme
      
      * Update test_ci.sh
  3. 26 Apr, 2023 5 commits
    • [chat] refactor trainer (#3648) · 2a951955
      Hongxin Liu authored
      * [chat] ppo trainer remove useless args
      
      * [chat] update examples
      
      * [chat] update benchmark
      
      * [chat] update examples
      
      * [chat] fix sft training with wandb
      
      * [chat] polish docstr
    • [chat] polish performance evaluator (#3647) · f8288315
      Hongxin Liu authored
    • [gemini] accelerate inference (#3641) · 50793b35
      Hongxin Liu authored
      * [gemini] support not scattering after inference
      
      * [chat] update colossalai strategy
      
      * [chat] fix opt benchmark
      
      * [chat] update opt benchmark
      
      * [gemini] optimize inference
      
      * [test] add gemini inference test
      
      * [chat] fix unit test ci
      
      * [chat] fix ci
      
      * [chat] fix ci
      
      * [chat] skip checkpoint test
    • [booster] add low level zero plugin (#3594) · 4b3240cb
      Hongxin Liu authored
      * [booster] add low level zero plugin
      
      * [booster] fix gemini plugin test
      
      * [booster] fix precision
      
      * [booster] add low level zero plugin test
      
      * [test] fix booster plugin test oom
      
      * [test] fix booster plugin test oom
      
      * [test] fix googlenet and inception output trans
      
      * [test] fix diffuser clip vision model
      
      * [test] fix torchaudio_wav2vec2_base
      
      * [test] fix low level zero plugin test
    • [doc] Fix typos under colossalai and doc (#3618) · b9a8dff7
      digger-yu authored
      * Fixed several spelling errors under colossalai

      * Fixed spelling errors in the colossalai and docs directories

      * Carefully changed spelling errors under the example folder
      
      * Update runtime_preparation_pass.py
      
      revert autograft to autograd
      
      * Update search_chunk.py
      
      utile to until
      
      * Update check_installation.py
      
      change misteach to mismatch in line 91
      
      * Update 1D_tensor_parallel.md
      
      revert to perceptron
      
      * Update 2D_tensor_parallel.md
      
      revert to perceptron in line 73
      
      * Update 2p5D_tensor_parallel.md
      
      revert to perceptron in line 71
      
      * Update 3D_tensor_parallel.md
      
      revert to perceptron in line 80
      
      * Update README.md
      
      revert to resnet in line 42
      
      * Update reorder_graph.py
      
      revert to indice in line 7
      
      * Update p2p.py
      
      revert to megatron in line 94
      
      * Update initialize.py
      
      revert to torchrun in line 198
      
      * Update routers.py
      
      change to detailed in line 63
      
      * Update routers.py
      
      change to detailed in line 146
      
      * Update README.md
      
      revert random number in line 402
  4. 24 Apr, 2023 3 commits
  5. 22 Apr, 2023 1 commit
  6. 20 Apr, 2023 3 commits
  7. 19 Apr, 2023 4 commits
  8. 18 Apr, 2023 6 commits
    • [misc] op_builder/builder.py (#3593) · d96567bb
      digger-yu authored
      Code cleanup: the source code itself is unmodified; only a few spelling errors in the comments were changed.
    • [coati] fix install cmd (#3592) · 5a79cffd
      binmakeswell authored
    • 1ec0d386
      Yuanchen authored
    • [fx] fix meta tensor registration (#3589) · dac127d0
      Hongxin Liu authored
      * [meta] fix torch 1.13.1
      
      * [meta] fix torch 2.0.0
      
      * [meta] fix torch 1.13.0
      
      * [meta] polish code
    • Update test_ci.sh · 36a519b4
      Camille Zhong authored
      update

      Update test_ci.sh

      Update run_chatgpt_examples.yml

      Update test_ci.sh

      update

      Update run_chatgpt_examples.yml

      update ci

      Update test_ci.sh

      Update run_chatgpt_examples.yml

      Update test_ci.sh

      Update run_chatgpt_examples.yml

      Update test_ci.sh

      update test ci
      
      RoBERTa for RLHF Stage 2 & 3 (still in testing)
      
      Revert "Add RoBERTa for RLHF Stage 2 & 3 (test)"
      
      This reverts commit 06741d894dcbe958acd4e10d771f22275e20e368.
      
      Add RoBERTa for RLHF stage 2 & 3
      
      1. add roberta folder under the model folder
      2. add roberta option in train_reward_model.py
      3. add some tests in test_ci
      
      Update test_ci.sh
      
      Revert "Update test_ci.sh"
      
      This reverts commit 9c7352b81766f3177d31eeec0ec178a301df966a.
      
      update roberta with coati
      
      chat ci update
      
      Revert "chat ci update"
      
      This reverts commit 17ae7ae01fa752bd3289fc39069868fde99cf846.
      
      [test]chat_update_ci

      Update test_ci.sh

      test

      Update gpt_critic.py

      Update run_chatgpt_unit_tests.yml

      update test ci

      update

      Update test_ci.sh

      update

      Update test_ci.sh

      Update run_chatgpt_examples.yml
    • [example] fix community doc (#3586) · d0fbd4b8
      digger-yu authored
      Adjusted the style of Community Examples to be consistent with other titles
  9. 17 Apr, 2023 7 commits
    • [gemini] support save state dict in shards (#3581) · f313babd
      Hongxin Liu authored
      * [gemini] support state dict shard
      
      * [gemini] add test state dict shard
      
      * [gemini] polish docstr
      
      * [gemini] fix merge
      
      * [gemini] polish code
    • fix: fix sft (#3568) · 7788e0b0
      tingfeng cao authored
    • [doc] Update .github/workflows/README.md (#3577) · 6e7e43c6
      digger-yu authored
      Code cleanup: removed two extra $ characters that appear to have been entered by mistake.
    • 6b1a39b1
      Fazzie-Maqianli authored
    • [chat] update reward model sh (#3578) · cc1eec2f
      binmakeswell authored
    • [chatgpt] Detached PPO Training (#3195) · e3551443
      csric authored

      * run the base
      
      * working on dist ppo
      
      * sync
      
      * detached trainer
      
      * update detached trainer. no maker update function
      
      * facing init problem
      
      * 1 maker 1 trainer detached run. but no model update
      
      * facing cuda problem
      
      * fix save functions
      
      * verified maker update
      
      * nothing
      
      * add ignore
      
      * analyze loss issue
      
      * remove some debug codes
      
      * facing 2m1t stuck issue
      
      * 2m1t verified
      
      * do not use torchrun
      
      * working on 2m2t
      
      * working on 2m2t
      
      * initialize strategy in ray actor env
      
      * facing actor's init order issue
      
      * facing ddp model update issue (need to unwrap DDP)
      
      * unwrap ddp actor
      
      * checking 1m2t stuck problem
      
      * nothing
      
      * set timeout for trainer choosing. It solves the stuck problem!
      
      * delete some debug output
      
      * rename to sync with upstream
      
      * rename to sync with upstream
      
      * coati rename
      
      * nothing
      
      * I am going to detach the replay buffer from the trainer and make it a Ray Actor. Two benefits: 1. support TP trainer. 2. asynchronous buffer operations
      
      * experience_maker_holder performs target-revolving _send_experience() instead of length comparison.
      
      * move code to ray subfolder
      
      * working on pipeline inference
      
      * apply comments
      
      ---------
      Co-authored-by: csric <richcsr256@gmail.com>
    • Add docstr for zero3 chunk search utils (#3572) · d329c294
      YH authored