1. 14 Nov, 2023 1 commit
  2. 27 Sep, 2023 1 commit
    • [chat] fix gemini strategy (#4698) · be400a09
      flybird11111 authored
      * [chat] fix gemini strategy

      * # This is a combination of 2 commits.

      [chat] fix gemini strategy

      fix

      * [chat] fix gemini strategy

      update llama2 example

      * [fix] fix gemini strategy

      * fix
      
      * Update train_prompts.py
  3. 21 Sep, 2023 1 commit
  4. 20 Sep, 2023 1 commit
    • [chat]: update rm, add wandb and fix bugs (#4471) · 7b9b8644
      Wenhao Chen authored

      * feat: modify forward fn of critic and reward model
      
      * feat: modify calc_action_log_probs
      
      * to: add wandb in sft and rm trainer
      
      * feat: update train_sft
      
      * feat: update train_rm
      
      * style: modify type annotation and add warning
      
      * feat: pass tokenizer to ppo trainer
      
      * to: modify trainer base and maker base
      
      * feat: add wandb in ppo trainer
      
      * feat: pass tokenizer to generate
      
      * test: update generate fn tests
      
      * test: update train tests
      
      * fix: remove action_mask
      
      * feat: remove unused code
      
      * fix: fix wrong ignore_index
      
      * fix: fix mock tokenizer
      
      * chore: update requirements
      
      * revert: modify make_experience
      
      * fix: fix inference
      
      * fix: add padding side
      
      * style: modify _on_learn_batch_end
      
      * test: use mock tokenizer
      
      * fix: use bf16 to avoid overflow
      
      * fix: fix workflow
      
      * [chat] fix gemini strategy
      
      * [chat] fix
      
      * sync: update colossalai strategy
      
      * fix: fix args and model dtype
      
      * fix: fix checkpoint test
      
      * fix: fix requirements
      
      * fix: fix missing import and wrong arg
      
      * fix: temporarily skip gemini test in stage 3
      
      * style: apply pre-commit
      
      * fix: temporarily skip gemini test in stage 1&2
      
      ---------
      Co-authored-by: Mingyan Jiang <1829166702@qq.com>
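Several fixes in the block above concern batched generation; "add padding side" reflects the fact that decoder-only models must be padded on the left so generation continues from real prompt tokens. A minimal sketch of the idea, using a hypothetical `pad_left` helper rather than coati's actual code:

```python
def pad_left(sequences, pad_id):
    """Left-pad variable-length token-id lists to a common length.

    Decoder-only models generate from the last position, so padding must
    sit on the left; right padding would make the model continue from pad
    tokens instead of the end of the prompt.
    """
    max_len = max(len(seq) for seq in sequences)
    return [[pad_id] * (max_len - len(seq)) + list(seq) for seq in sequences]

batch = pad_left([[5, 6], [7, 8, 9]], pad_id=0)
print(batch)  # [[0, 5, 6], [7, 8, 9]]
```

In Hugging Face terms this corresponds to constructing the tokenizer with `padding_side="left"` before calling `generate`.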
  5. 19 Sep, 2023 1 commit
  6. 02 Aug, 2023 1 commit
    • [chat] fix bugs and add unit tests (#4213) · da4f7b85
      Wenhao Chen authored
      * style: rename replay buffer
      
      Experience replay is typically used by off-policy algorithms;
      using this name in PPO may be misleading.
      
      * fix: fix wrong zero2 default arg
      
      * test: update experience tests
      
      * style: rename zero_pad fn
      
      * fix: defer init in CycledDataLoader
      
      * test: add benchmark test
      
      * style: rename internal fn of generation
      
      * style: rename internal fn of lora
      
      * fix: remove unused loss fn
      
      * fix: remove unused utils fn
      
      * refactor: remove generate_with_actor fn
      
      * fix: fix type annotation
      
      * test: add models tests
      
      * fix: skip llama due to long execution time
      
      * style: modify dataset
      
      * style: apply formatter
      
      * perf: update reward dataset
      
      * fix: fix wrong IGNORE_INDEX in sft dataset
      
      * fix: remove DataCollatorForSupervisedDataset
      
      * test: add dataset tests
      
      * style: apply formatter
      
      * style: rename test_ci to test_train
      
      * feat: add llama in inference
      
      * test: add inference tests
      
      * test: change test scripts directory
      
      * fix: update ci
      
      * fix: fix typo
      
      * fix: skip llama due to oom
      
      * fix: fix file mod
      
      * style: apply formatter
      
      * refactor: remove duplicated llama_gptq
      
      * style: apply formatter
      
      * to: update rm test
      
      * feat: add tokenizer arg
      
      * feat: add download model script
      
      * test: update train tests
      
      * fix: modify gemini load and save pretrained
      
      * test: update checkpoint io test
      
      * to: modify nproc_per_node
      
      * fix: do not remove existing dir
      
      * fix: modify save path
      
      * test: add random choice
      
      * fix: fix sft path
      
      * fix: enlarge nproc_per_node to avoid oom
      
      * fix: add num_retry
      
      * fix: make lora config of rm and critic consistent
      
      * fix: add warning about lora weights
      
      * fix: skip some gpt2 tests
      
      * fix: remove grad ckpt in rm and critic due to errors
      
      * refactor: directly use Actor in train_sft
      
      * test: add more arguments
      
      * fix: disable grad ckpt when using lora
      
      * fix: fix save_pretrained and related tests
      
      * test: enable zero2 tests
      
      * revert: remove useless fn
      
      * style: polish code
      
      * test: modify test args
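The "defer init in CycledDataLoader" fix above lends itself to a short sketch: build the underlying iterator lazily on first use rather than in `__init__` (with PyTorch dataloaders, eager iteration would spawn worker processes before distributed setup is complete). The class below is an illustrative reconstruction, not the coati source:

```python
from typing import Iterable, Iterator, Optional

class CycledDataLoader:
    """Cycles endlessly over a dataloader; iterator creation is deferred."""

    def __init__(self, dataloader: Iterable) -> None:
        self.dataloader = dataloader
        self._iter: Optional[Iterator] = None  # deferred until first next()

    def next(self):
        if self._iter is None:        # lazy init: first use, not __init__
            self._iter = iter(self.dataloader)
        try:
            return next(self._iter)
        except StopIteration:         # exhausted: wrap around and continue
            self._iter = iter(self.dataloader)
            return next(self._iter)

loader = CycledDataLoader([1, 2, 3])
print([loader.next() for _ in range(5)])  # [1, 2, 3, 1, 2]
```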
  7. 04 Jul, 2023 1 commit
    • [chat] use official transformers and fix some issues (#4117) · 3d8d5d0d
      Wenhao Chen authored
      * feat: remove on_learn_epoch fn as not used
      
      * revert: add _on_learn_epoch fn
      
      * feat: remove NaiveStrategy
      
      * test: update train_prompts tests
      
      * fix: remove prepare_llama_tokenizer_and_embedding
      
      * test: add lora arg
      
      * feat: remove roberta support in train_prompts due to runtime errs
      
      * feat: remove deberta & roberta in rm as not used
      
      * test: remove deberta and roberta tests
      
      * feat: remove deberta and roberta models as not used
      
      * fix: remove calls to roberta
      
      * fix: remove prepare_llama_tokenizer_and_embedding
      
      * chore: update transformers version
      
      * docs: update transformers version
      
      * fix: fix actor inference
      
      * fix: fix ci
      
      * feat: change llama pad token to unk
      
      * revert: revert ddp setup_distributed
      
      * fix: change llama pad token to unk
      
      * revert: undo unnecessary changes
      
      * fix: use pip to install transformers
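The "change llama pad token to unk" commits encode a common workaround: LLaMA tokenizers ship without a pad token, and aliasing `<unk>` as the pad avoids registering a new token (which would force a resize of the model's embedding matrix). A toy stand-in tokenizer illustrating the pattern, assuming the usual Hugging Face attribute names:

```python
class ToyLlamaTokenizer:
    """Stand-in with Hugging Face-style attributes: <unk> exists, pad does not."""
    def __init__(self):
        self.unk_token = "<unk>"
        self.pad_token = None

tokenizer = ToyLlamaTokenizer()
if tokenizer.pad_token is None:
    # Alias <unk> as the pad token so the vocabulary (and therefore the
    # model's embedding matrix) keeps its original size.
    tokenizer.pad_token = tokenizer.unk_token

print(tokenizer.pad_token)  # <unk>
```

With a real `transformers` tokenizer the assignment line is the same; only the class differs.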
  8. 29 Jun, 2023 2 commits
    • [chat] remove naive strategy and split colossalai strategy (#4094) · edd75a59
      Wenhao Chen authored
      * feat: remove on_learn_epoch fn as not used
      
      * revert: add _on_learn_epoch fn
      
      * to: remove the use of NaiveStrategy
      
      * test: remove NaiveStrategy tests
      
      * feat: remove NaiveStrategy
      
      * style: modify comments and params
      
      * feat: split ColossalAIStrategy into LowLevelZeroStrategy and GeminiStrategy
      
      * fix: remove naive
      
      * fix: align with modified colossal strategy
      
      * fix: fix ddp _try_init_dist arg
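The split of ColossalAIStrategy above can be pictured as two thin subclasses over a shared base, each pinning down the ZeRO configuration it represents. Class names follow the log; the base class, bodies, and stage mapping (low-level zero for stages 1-2, gemini as a stage-3-style strategy) are a hedged reconstruction:

```python
class ColossalAIBaseStrategy:
    """Hypothetical shared base; subclasses only fix the ZeRO configuration."""
    def __init__(self, stage: int):
        self.stage = stage

    def describe(self) -> str:
        return f"{type(self).__name__}(stage={self.stage})"

class LowLevelZeroStrategy(ColossalAIBaseStrategy):
    def __init__(self, stage: int = 2):
        assert stage in (1, 2), "low-level zero covers stages 1 and 2"
        super().__init__(stage)

class GeminiStrategy(ColossalAIBaseStrategy):
    def __init__(self):
        super().__init__(stage=3)  # gemini behaves like ZeRO stage 3

print(LowLevelZeroStrategy().describe())  # LowLevelZeroStrategy(stage=2)
```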
    • [chat] refactor trainer class (#4080) · b03d64d0
      Wenhao Chen authored
      * to: add SLTrainer
      
      * refactor: refactor RMTrainer and SFTTrainer
      
      * fix: fix init file
      
      * feat: remove on_learn_epoch fn as not used
      
      * fix: align with modified gemini arguments
      
      * to: add OnPolicyTrainer
      
      * revert: add _on_learn_epoch fn
      
      * refactor: refactor PPOTrainer
      
      * style: rename PPOTrainer argument
      
      * fix: align with modified PPO arguments
      
      * test: align with modified train_prompts arguments
      
      * chore: modify train_prompts
      
      * docs: align with modified arguments
      
      * fix: remove unnecessary output
      
      * fix: move dataloader to fit fn of SLTrainer
      
      * fix: move dataloader to fit fn of OnPolicyTrainer
      
      * fix: modify usage of prompt and pretrain dataloader
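The last three fixes in this commit move dataloaders out of trainer constructors and into `fit`. A compressed sketch of that shape (the class name `SLTrainer` comes from the log; the body is illustrative): one trainer instance can then be reused across datasets.

```python
class SLTrainer:
    """Skeleton supervised-learning trainer; real training steps omitted."""

    def __init__(self, model, max_epochs: int = 1):
        self.model = model
        self.max_epochs = max_epochs

    def fit(self, train_dataloader):
        # The dataloader arrives here, not in __init__, so constructing
        # the trainer does not tie it to any particular dataset.
        steps = 0
        for _ in range(self.max_epochs):
            for _batch in train_dataloader:
                steps += 1  # forward/backward/optimizer step would go here
        return steps

trainer = SLTrainer(model=None, max_epochs=2)
print(trainer.fit([1, 2, 3]))  # 6
```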
  9. 25 Jun, 2023 1 commit
    • [chat] refactor strategy class with booster api (#3987) · 153b957a
      Wenhao Chen authored
      * refactor: adapt boost API in base and naive strategies
      
      * fix: initialize plugin after setup_distributed
      
      * fix: fix save_pretrained fn
      
      * refactor: adapt boost API in DDPStrategy
      
      * to: add _post_init check
      
      * to: fix ddp backward, modify ddp dataloader and unwrap
      
      * feat: adapt boost API in ColossalAIStrategy
      
      * fix: call setup_distributed before use get_current_device
      
      * fix: fix save_model and save_optimizer
      
      * test: remove save_sharded_optimizer test
      
      * style: apply formatter
      
      * fix: fix stage check and add comments
      
      * feat: allow dict type arg in strategy.prepare
      
      * to: temporarily remove lr_scheduler for testing
      
      * style: simplify init of ColossalAIStrategy
      
      * fix: fix lr_scheduler in sft and rm
      
      * style: modify comments
      
      * test: add train_prompts tests
      
      * fix: fix inference only case and use in train_prompts
      
      * test: skip failed tests in ci
      
      * style: fix CodeFactor check
      
      * fix: do not use model.to('cpu') with GeminiPlugin
      
      * test: enable colossalai_gemini tests
      
      * test: set CUDA_VISIBLE_DEVICES in ci
      
      * docs: add note
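One line above, "allow dict type arg in strategy.prepare", suggests a small convenience worth sketching: `prepare` wraps models and optimizers for distributed execution, and accepting a dict of named objects saves callers from positional unpacking. Everything below, including `_wrap`, is a hypothetical reconstruction rather than coati's API:

```python
def _wrap(obj):
    """Placeholder for the real boost/wrap call applied to each object."""
    return ("boosted", obj)

def prepare(*items):
    """Wrap each argument; dict arguments are wrapped value-wise, so a
    caller can pass {'actor': actor, 'critic': critic} and get a dict back."""
    wrapped = [
        {name: _wrap(v) for name, v in item.items()} if isinstance(item, dict)
        else _wrap(item)
        for item in items
    ]
    return wrapped[0] if len(wrapped) == 1 else wrapped

print(prepare({"actor": "A", "critic": "C"}))
# {'actor': ('boosted', 'A'), 'critic': ('boosted', 'C')}
```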
  10. 06 May, 2023 1 commit
  11. 28 Apr, 2023 1 commit
  12. 27 Apr, 2023 2 commits
    • [chat] refactor model save/load logic (#3654) · 842768a1
      Hongxin Liu authored
      * [chat] strategy refactor unwrap model
      
      * [chat] strategy refactor save model
      
      * [chat] add docstr
      
      * [chat] refactor trainer save model
      
      * [chat] fix strategy typing
      
      * [chat] refactor trainer save model
      
      * [chat] update readme
      
      * [chat] fix unit test
    • [Doc] enhancement on README.md for chat examples (#3646) · 8bccb72c
      Camille Zhong authored
      * Add RoBERTa for RLHF Stage 2 & 3 (test)
      
      RoBERTa for RLHF Stage 2 & 3 (still in testing)
      
      Revert "Add RoBERTa for RLHF Stage 2 & 3 (test)"
      
      This reverts commit 06741d894dcbe958acd4e10d771f22275e20e368.
      
      Add RoBERTa for RLHF stage 2 & 3
      
      1. add roberta folder under model folder
      2. add roberta option in train_reward_model.py
      3. add some tests in test_ci
      
      Update test_ci.sh
      
      Revert "Update test_ci.sh"
      
      This reverts commit 9c7352b81766f3177d31eeec0ec178a301df966a.

      update roberta with coati
      
      chat ci update
      
      Revert "chat ci update"
      
      This reverts commit 17ae7ae01fa752bd3289fc39069868fde99cf846.
      
      * Update README.md
      
      * update readme
      
      * Update test_ci.sh
  13. 26 Apr, 2023 1 commit
    • [chat] refactor trainer (#3648) · 2a951955
      Hongxin Liu authored
      * [chat] ppo trainer remove useless args
      
      * [chat] update examples
      
      * [chat] update benchmark
      
      * [chat] update examples
      
      * [chat] fix sft training with wandb
      
      * [chat] polish docstr
  14. 22 Apr, 2023 1 commit
  15. 03 Apr, 2023 1 commit
    • [chatgpt] add pre-trained model RoBERTa for RLHF stage 2 & 3 (#3223) · 30412866
      Camille Zhong authored
      * Add RoBERTa for RLHF Stage 2 & 3 (test)
      
      RoBERTa for RLHF Stage 2 & 3 (still in testing)
      
      * Revert "Add RoBERTa for RLHF Stage 2 & 3 (test)"
      
      This reverts commit 06741d894dcbe958acd4e10d771f22275e20e368.
      
      * Add RoBERTa for RLHF stage 2 & 3
      
      1. add roberta folder under model folder
      2. add roberta option in train_reward_model.py
      3. add some tests in test_ci
      
      * add test for reward model training
      
      * Update test_ci.sh
      
      * Revert "Update test_ci.sh"
      
      This reverts commit 9c7352b81766f3177d31eeec0ec178a301df966a.
      
      * update roberta with coati
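The "add roberta option in train_reward_model.py" item most likely amounts to extending the script's model-choice flag. A hedged argparse fragment showing the shape; the flag name, choice list, and default below are assumptions, not the script's actual interface:

```python
import argparse

parser = argparse.ArgumentParser(description="reward model training (sketch)")
# The choice list is illustrative; the real script's options may differ.
parser.add_argument("--model",
                    choices=["gpt2", "bloom", "opt", "llama", "roberta"],
                    default="bloom")
args = parser.parse_args(["--model", "roberta"])
print(args.model)  # roberta
```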
  16. 29 Mar, 2023 2 commits
  17. 28 Mar, 2023 1 commit