1. 29 Jun, 2023 2 commits
      [chat] remove naive strategy and split colossalai strategy (#4094) · edd75a59
      Wenhao Chen authored
      * feat: remove on_learn_epoch fn as not used
      
      * revert: add _on_learn_epoch fn
      
      * to: remove the use of NaiveStrategy
      
      * test: remove NaiveStrategy tests
      
      * feat: remove NaiveStrategy
      
      * style: modify comments and params
      
      * feat: split ColossalAIStrategy into LowLevelZeroStrategy and GeminiStrategy
      
      * fix: remove naive
      
      * fix: align with modified colossal strategy
      
      * fix: fix ddp _try_init_dist arg
      edd75a59
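The split described above (one ColossalAIStrategy becoming LowLevelZeroStrategy and GeminiStrategy) can be sketched as follows. Only the class names come from the commit; the constructor arguments and method bodies here are hypothetical stand-ins, not the real coati implementation.

```python
# Hypothetical sketch: two concrete strategies instead of a single
# ColossalAIStrategy that switched behavior on its arguments.

class Strategy:
    """Minimal stand-in for coati's Strategy base class."""
    def setup_distributed(self):
        raise NotImplementedError

class LowLevelZeroStrategy(Strategy):
    """ZeRO stage 1/2 training (illustrative args)."""
    def __init__(self, stage: int = 2):
        assert stage in (1, 2), "low-level zero supports stage 1 and 2"
        self.stage = stage

    def setup_distributed(self):
        return f"zero stage {self.stage}"

class GeminiStrategy(Strategy):
    """ZeRO-3-style training via Gemini (illustrative args)."""
    def __init__(self, placement_policy: str = "cuda"):
        self.placement_policy = placement_policy

    def setup_distributed(self):
        return f"gemini ({self.placement_policy})"
```

Each strategy can then validate its own options up front, rather than one class checking which mode it was constructed in.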
      [chat] refactor trainer class (#4080) · b03d64d0
      Wenhao Chen authored
      * to: add SLTrainer
      
      * refactor: refactor RMTrainer and SFTTrainer
      
      * fix: fix init file
      
      * feat: remove on_learn_epoch fn as not used
      
      * fix: align with modified gemini arguments
      
      * to: add OnPolicyTrainer
      
      * revert: add _on_learn_epoch fn
      
      * refactor: refactor PPOTrainer
      
      * style: rename PPOTrainer argument
      
      * fix: align with modified PPO arguments
      
      * test: align with modified train_prompts arguments
      
      * chore: modify train_prompts
      
      * docs: align with modified arguments
      
      * fix: remove unnecessary output
      
      * fix: move dataloader to fit fn of SLTrainer
      
      * fix: move dataloader to fit fn of OnPolicyTrainer
      
      * fix: modify usage of prompt and pretrain dataloader
      b03d64d0
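The last three commits move the dataloader out of the trainer constructor and into `fit`. A minimal sketch of that shape, assuming a supervised-learning base class like the SLTrainer the commit names (the training step and arguments here are placeholders, not coati's real code):

```python
# Hypothetical sketch: dataloader is passed to fit(), not __init__.

class SLTrainer:
    """Stand-in for a supervised trainer (SFT / reward model)."""
    def __init__(self, model, max_epochs: int = 1):
        self.model = model
        self.max_epochs = max_epochs

    def _train_step(self, batch):
        # placeholder for the real forward/loss/backward
        return sum(batch)

    def fit(self, train_dataloader):
        # the dataloader arrives here, so one trainer instance can be
        # reused with different data
        losses = []
        for _ in range(self.max_epochs):
            for batch in train_dataloader:
                losses.append(self._train_step(batch))
        return losses
```

The same pattern applies to the OnPolicyTrainer, where the prompt and pretrain dataloaders become `fit` arguments.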
  2. 25 Jun, 2023 1 commit
      [chat] refactor strategy class with booster api (#3987) · 153b957a
      Wenhao Chen authored
      * refactor: adapt boost API in base and naive strategies
      
      * fix: initialize plugin after setup_distributed
      
      * fix: fix save_pretrained fn
      
      * refactor: adapt boost API in DDPStrategy
      
      * to: add _post_init check
      
      * to: fix ddp backward, modify ddp dataloader and unwrap
      
      * feat: adapt boost API in ColossalAIStrategy
      
      * fix: call setup_distributed before using get_current_device
      
      * fix: fix save_model and save_optimizer
      
      * test: remove save_sharded_optimizer test
      
      * style: apply formatter
      
      * fix: fix stage check and add comments
      
      * feat: allow dict type arg in strategy.prepare
      
      * to: temporarily remove lr_scheduler for testing
      
      * style: simplify init of ColossalAIStrategy
      
      * fix: fix lr_scheduler in sft and rm
      
      * style: modify comments
      
      * test: add train_prompts tests
      
      * fix: fix inference only case and use in train_prompts
      
      * test: skip failed tests in ci
      
      * style: fix CodeFactor check
      
      * fix: do not use model.to('cpu') with GeminiPlugin
      
      * test: enable colossalai_gemini tests
      
      * test: set CUDA_VISIBLE_DEVICES in ci
      
      * docs: add note
      153b957a
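One commit above allows dict-type arguments in `strategy.prepare`. A rough sketch of what such an interface might look like, assuming a boost-style call underneath (the `_boost` helper here is a stand-in for the real booster/plugin API, and the dict keys are illustrative):

```python
# Hypothetical sketch: prepare() accepts bare models or
# {"model": ..., "optimizer": ...} dicts.

def _boost(model, optimizer=None):
    # stand-in for booster.boost(model, optimizer)
    boosted_opt = ("boosted", optimizer) if optimizer else None
    return ("boosted", model), boosted_opt

def prepare(*args):
    prepared = []
    for arg in args:
        if isinstance(arg, dict):
            model, opt = _boost(arg["model"], arg.get("optimizer"))
            prepared.append({"model": model, "optimizer": opt})
        else:
            model, _ = _boost(arg)
            prepared.append(model)
    return prepared if len(prepared) > 1 else prepared[0]
```

Accepting dicts lets a caller keep a model and its optimizer paired through preparation, which matters once different models (actor, critic, reward model) carry different optimizers.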
  3. 13 Jun, 2023 1 commit
      [chat] refactor actor class (#3968) · 9d02590c
      Wenhao Chen authored
      * refactor: separate log_probs fn from Actor forward fn
      
      * refactor: separate generate fn from Actor class
      
      * feat: update unwrap_model and get_base_model
      * unwrap_model returns model not wrapped by Strategy
      * get_base_model returns HF model for Actor, Critic and RewardModel
      
      * feat: simplify Strategy.prepare
      
      * style: remove get_base_model method of Actor
      
      * perf: tokenize text in batches
      
      * refactor: move calc_action_log_probs to utils of model
      
      * test: update test with new forward fn
      
      * style: rename forward fn args
      
      * fix: do not unwrap model in save_model fn of naive strategy
      
      * test: add gemini test for train_prompts
      
      * fix: fix _set_default_generate_kwargs
      9d02590c
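The `calc_action_log_probs` utility moved into the model utils computes, for each generated token (the "action"), the log-probability it had under the model's logits. A dependency-free sketch of that computation (the real coati helper operates on torch tensors, not Python lists):

```python
import math

def calc_action_log_probs(logits, actions):
    """logits: per-step lists of raw scores; actions: chosen index per step.

    Returns log softmax(logits_t)[action_t] for each step t.
    """
    log_probs = []
    for step_logits, action in zip(logits, actions):
        # log-sum-exp computed stably by subtracting the max logit
        m = max(step_logits)
        log_z = m + math.log(sum(math.exp(x - m) for x in step_logits))
        log_probs.append(step_logits[action] - log_z)
    return log_probs
```

Separating this from the Actor's forward pass means PPO can recompute log-probs for stored rollouts without re-running generation.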
  4. 27 Apr, 2023 1 commit
      [chat] refactor model save/load logic (#3654) · 842768a1
      Hongxin Liu authored
      * [chat] strategy refactor unwrap model
      
      * [chat] strategy refactor save model
      
      * [chat] add docstr
      
      * [chat] refactor trainer save model
      
      * [chat] fix strategy typing
      
      * [chat] refactor trainer save model
      
      * [chat] update readme
      
      * [chat] fix unit test
      842768a1
  5. 26 Apr, 2023 2 commits
      [chat] refactor trainer (#3648) · 2a951955
      Hongxin Liu authored
      * [chat] ppo trainer remove useless args
      
      * [chat] update examples
      
      * [chat] update benchmark
      
      * [chat] update examples
      
      * [chat] fix sft training with wandb
      
      * [chat] polish docstr
      2a951955
      [gemini] accelerate inference (#3641) · 50793b35
      Hongxin Liu authored
      * [gemini] support don't scatter after inference
      
      * [chat] update colossalai strategy
      
      * [chat] fix opt benchmark
      
      * [chat] update opt benchmark
      
      * [gemini] optimize inference
      
      * [test] add gemini inference test
      
      * [chat] fix unit test ci
      
      * [chat] fix ci
      
      * [chat] fix ci
      
      * [chat] skip checkpoint test
      50793b35
  6. 24 Apr, 2023 1 commit
  7. 20 Apr, 2023 1 commit
  8. 18 Apr, 2023 1 commit
  9. 11 Apr, 2023 1 commit
  10. 06 Apr, 2023 2 commits
      [Chat]Add Peft support & fix the ptx bug (#3433) · 62f4e2eb
      YY Lin authored
      * Update ppo.py
      
      Fix the bug of fetching wrong batch data
      
      * Add peft model support in SFT and Prompts training
      
      In stage-1 and stage-3, peft model support is added, so the trained artifacts are only small LoRA additions instead of the whole set of model files.
      
      * Delete test_prompts.txt
      
      * Delete test_pretrained.txt
      
      * Move the peft stuff to a community folder.
      
      * Move the demo sft to community
      
      * delete dirty files
      
      * Add instructions to install peft using source
      
      * Remove Chinese comments
      
      * remove the Chinese comments
      62f4e2eb
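Why the trained artifacts shrink to "small LoRA additions": instead of a full d_out × d_in weight update per layer, LoRA stores two thin factors A (r × d_in) and B (d_out × r). A quick parameter count illustrates the saving (the dimensions below are illustrative, not taken from the PR):

```python
# Parameter counts: full weight delta vs. LoRA low-rank factors.

def full_delta_params(d_out: int, d_in: int) -> int:
    # a dense update touches every weight
    return d_out * d_in

def lora_params(d_out: int, d_in: int, r: int) -> int:
    # LoRA stores A (r x d_in) and B (d_out x r) instead
    return r * d_in + d_out * r
```

For a 4096 × 4096 projection with rank r = 8, the LoRA factors hold 65,536 parameters versus 16,777,216 for the full delta, roughly 0.4% of the size.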
      [chat] fix save_model (#3377) · 73afb635
      Dr-Corgi authored
      The function save_model should be a part of PPOTrainer.
      73afb635
  11. 05 Apr, 2023 1 commit
  12. 28 Mar, 2023 1 commit
  13. 17 Mar, 2023 1 commit
  14. 07 Mar, 2023 1 commit
  15. 17 Feb, 2023 1 commit
      [chatgpt] strategy add prepare method (#2766) · 4ee311c0
      ver217 authored
      * [chatgpt] strategy add prepare method
      
      * [chatgpt] refactor examples
      
      * [chatgpt] refactor strategy.prepare
      
      * [chatgpt] support save/load checkpoint
      
      * [chatgpt] fix unwrap actor
      
      * [chatgpt] fix unwrap actor
      4ee311c0
  16. 15 Feb, 2023 1 commit
      [chatgpt] optimize generation kwargs (#2717) · 9c0943ec
      ver217 authored
      * [chatgpt] ppo trainer use default generate args
      
      * [chatgpt] example remove generation preparing fn
      
      * [chatgpt] benchmark remove generation preparing fn
      
      * [chatgpt] fix ci
      9c0943ec
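Having the PPO trainer "use default generate args" likely amounts to filling sensible generation defaults and letting caller-supplied kwargs override them. A sketch of that merge, assuming a helper in the spirit of the `_set_default_generate_kwargs` mentioned in a later commit (the specific default keys and values here are illustrative):

```python
# Hypothetical sketch: defaults first, explicit kwargs win.

DEFAULT_GENERATE_KWARGS = {
    "max_length": 512,
    "do_sample": True,
    "temperature": 1.0,
}

def set_default_generate_kwargs(user_kwargs: dict) -> dict:
    merged = dict(DEFAULT_GENERATE_KWARGS)
    merged.update(user_kwargs)  # caller-provided values take precedence
    return merged
```

This removes the need for examples and benchmarks to ship their own generation-preparing functions, which is what the remaining bullets in this commit clean up.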
  17. 14 Feb, 2023 1 commit