1. 25 Jun, 2023 1 commit
    • [chat] refactor strategy class with booster api (#3987) · 153b957a
      Wenhao Chen authored
      * refactor: adapt boost API in base and naive strategies
      
      * fix: initialize plugin after setup_distributed
      
      * fix: fix save_pretrained fn
      
      * refactor: adapt boost API in DDPStrategy
      
      * to: add _post_init check
      
      * to: fix ddp backward, modify ddp dataloader and unwrap
      
      * feat: adapt boost API in ColossalAIStrategy
      
* fix: call setup_distributed before using get_current_device
      
      * fix: fix save_model and save_optimizer
      
      * test: remove save_sharded_optimizer test
      
      * style: apply formatter
      
      * fix: fix stage check and add comments
      
      * feat: allow dict type arg in strategy.prepare
      
      * to: temporarily remove lr_scheduler for testing
      
      * style: simplify init of ColossalAIStrategy
      
      * fix: fix lr_scheduler in sft and rm
      
      * style: modify comments
      
      * test: add train_prompts tests
      
      * fix: fix inference only case and use in train_prompts
      
      * test: skip failed tests in ci
      
      * style: fix CodeFactor check
      
      * fix: do not use model.to('cpu') with GeminiPlugin
      
      * test: enable colossalai_gemini tests
      
      * test: set CUDA_VISIBLE_DEVICES in ci
      
      * docs: add note
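One step in the commit above, "feat: allow dict type arg in strategy.prepare", can be sketched in plain Python. Everything below (the `prepare` function and its `setup` helper) is invented for illustration and is not the actual coati/Colossal-AI API; it only shows the general pattern of accepting either bare components or a dict naming each one:

```python
# Hypothetical sketch: a prepare() that wraps each component it is given,
# unwrapping and re-wrapping dict arguments by key so callers can pass
# either positional components or a named bundle.

def prepare(*args):
    """Wrap each component; dict args are processed value-by-value."""

    def setup(component):
        # Stand-in for the real boosting/wrapping logic.
        return ("boosted", component)

    prepared = []
    for arg in args:
        if isinstance(arg, dict):
            prepared.append({k: setup(v) for k, v in arg.items()})
        else:
            prepared.append(setup(arg))
    # Mirror the common convention of returning a single item unwrapped.
    return prepared[0] if len(prepared) == 1 else prepared

# Usage: a plain argument and a dict argument are both accepted.
model = prepare("model")                         # → ("boosted", "model")
bundle = prepare({"actor": "a", "critic": "c"})  # keys are preserved
```

The dict form is convenient in RLHF-style training, where several models (actor, critic, reward model) must be prepared together and kept distinguishable.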
  2. 28 Apr, 2023 1 commit
  3. 27 Apr, 2023 1 commit
    • [chat] refactor model save/load logic (#3654) · 842768a1
      Hongxin Liu authored
      * [chat] strategy refactor unwrap model
      
      * [chat] strategy refactor save model
      
      * [chat] add docstr
      
      * [chat] refactor trainer save model
      
      * [chat] fix strategy typing
      
      * [chat] refactor trainer save model
      
      * [chat] update readme
      
      * [chat] fix unit test
  4. 18 Apr, 2023 1 commit
  5. 03 Apr, 2023 1 commit
    • [chatgpt] add pre-trained model RoBERTa for RLHF stage 2 & 3 (#3223) · 30412866
      Camille Zhong authored
      * Add RoBERTa for RLHF Stage 2 & 3 (test)
      
      RoBERTa for RLHF Stage 2 & 3 (still in testing)
      
      * Revert "Add RoBERTa for RLHF Stage 2 & 3 (test)"
      
      This reverts commit 06741d894dcbe958acd4e10d771f22275e20e368.
      
      * Add RoBERTa for RLHF stage 2 & 3
      
      1. add roberta folder under model folder
2. add roberta option in train_reward_model.py
      3. add some test in testci
      
      * add test for reward model training
      
      * Update test_ci.sh
      
      * Revert "Update test_ci.sh"
      
      This reverts commit 9c7352b81766f3177d31eeec0ec178a301df966a.
      
      * update roberta with coati
  6. 28 Mar, 2023 1 commit
  7. 22 Mar, 2023 1 commit
  8. 20 Mar, 2023 1 commit
    • [chatgpt] Reward Model Training Process update (#3133) · 7548ca5a
      BlueRum authored
      * add normalize function to value_head in bloom rm
      
      * add normalization to value_function in gpt_rm
      
      * add normalization to value_head of opt_rm
      
      * add Anthropic/hh-rlhf dataset
      
      * Update __init__.py
      
      * Add LogExpLoss in RM training
      
      * Update __init__.py
      
      * update rm trainer to use acc as target
      
      * update example/train_rm
      
      * Update train_rm.sh
      
      * code style
      
      * Update README.md
      
      * Update README.md
      
      * add rm test to ci
      
* fix tokenizer
      
      * fix typo
      
* change batch size to avoid oom in ci
      
      * Update test_ci.sh
  9. 07 Mar, 2023 1 commit
  10. 03 Mar, 2023 1 commit
  11. 02 Mar, 2023 1 commit
  12. 22 Feb, 2023 1 commit
  13. 21 Feb, 2023 1 commit
    • [chatgpt] fix rm eval (#2829) · 3eebc4df
      BlueRum authored
* [chatgpt] fix train_rm bug with lora
      
* [chatgpt] support colossalai strategy to train rm
      
      * fix pre-commit
      
      * fix pre-commit 2
      
* [chatgpt] fix rm eval typo
      
      * fix rm eval
      
* fix pre-commit
  14. 16 Feb, 2023 1 commit
  15. 14 Feb, 2023 1 commit