1. 01 Feb, 2024 1 commit
  2. 20 Sep, 2023 1 commit
    • [chat]: update rm, add wandb and fix bugs (#4471) · 7b9b8644
      Wenhao Chen authored
      
      
      * feat: modify forward fn of critic and reward model
      
      * feat: modify calc_action_log_probs (see the sketch below)
      
      * to: add wandb in sft and rm trainer
      
      * feat: update train_sft
      
      * feat: update train_rm
      
      * style: modify type annotation and add warning
      
      * feat: pass tokenizer to ppo trainer
      
      * to: modify trainer base and maker base
      
      * feat: add wandb in ppo trainer
      
      * feat: pass tokenizer to generate
      
      * test: update generate fn tests
      
      * test: update train tests
      
      * fix: remove action_mask
      
      * feat: remove unused code
      
      * fix: fix wrong ignore_index
      
      * fix: fix mock tokenizer
      
      * chore: update requirements
      
      * revert: modify make_experience
      
      * fix: fix inference
      
      * fix: add padding side
      
      * style: modify _on_learn_batch_end
      
      * test: use mock tokenizer
      
      * fix: use bf16 to avoid overflow
      
      * fix: fix workflow
      
      * [chat] fix gemini strategy
      
      * [chat] fix
      
      * sync: update colossalai strategy
      
      * fix: fix args and model dtype
      
      * fix: fix checkpoint test
      
      * fix: fix requirements
      
      * fix: fix missing import and wrong arg
      
      * fix: temporarily skip gemini test in stage 3
      
      * style: apply pre-commit
      
      * fix: temporarily skip gemini test in stage 1&2
      
      ---------
      Co-authored-by: Mingyan Jiang <1829166702@qq.com>
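
      For the calc_action_log_probs change above, a minimal sketch of how
      per-token action log-probabilities are typically gathered from
      causal-LM logits (the signature and shift logic here are assumptions,
      not the repository's exact code):

      ```python
      import torch
      import torch.nn.functional as F

      def calc_action_log_probs(logits: torch.Tensor,
                                sequences: torch.Tensor,
                                num_actions: int) -> torch.Tensor:
          """Gather the log-probability of each generated (action) token.

          logits:      (B, S, V) causal-LM output for the full sequence
          sequences:   (B, S) prompt + generated token ids
          num_actions: number of generated tokens at the end of each row
          """
          # Logits at position t predict the token at position t + 1, so shift.
          log_probs = F.log_softmax(logits[:, :-1, :], dim=-1)
          log_probs = log_probs.gather(-1, sequences[:, 1:].unsqueeze(-1)).squeeze(-1)
          return log_probs[:, -num_actions:]  # keep only the action tokens
      ```
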
  3. 19 Sep, 2023 1 commit
  4. 29 Aug, 2023 1 commit
    • [coati] add chatglm model (#4539) · 1467e3b4
      yingliu-hpc authored
      * update configuration of chatglm and add support in coati
      
      * add unit test & update chatglm default config & fix bos index issue
      
      * remove chatglm due to oom
      
      * add dataset pkg in requirement-text
      
      * fix parameter issue in test_models
      
      * add ref in tokenize & rm unnecessary parts
      
      * separate source & target tokenization in chatglm (see the sketch below)
      
      * add unit test to chatglm
      
      * fix test dataset issue
      
      * update truncation of chatglm
      
      * fix ColossalAI version
      
      * fix ColossalAI version in test
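
      The "separate source & target tokenization" point above is the standard
      SFT masking trick: tokenize prompt and answer independently so the loss
      is computed only on answer tokens. A minimal sketch assuming a generic
      HF tokenizer (coati's ChatGLM-specific special-token and truncation
      handling differs in detail):

      ```python
      IGNORE_INDEX = -100  # the index nn.CrossEntropyLoss ignores by default

      def tokenize_sft_example(tokenizer, source: str, target: str,
                               max_length: int = 512) -> dict:
          """Tokenize prompt (source) and answer (target) separately so the
          loss is computed only on answer tokens."""
          source_ids = tokenizer.encode(source, add_special_tokens=False)
          target_ids = tokenizer.encode(target, add_special_tokens=False)
          input_ids = (source_ids + target_ids)[:max_length]
          # Mask the source span: these positions contribute nothing to the loss.
          labels = ([IGNORE_INDEX] * len(source_ids) + target_ids)[:max_length]
          return {"input_ids": input_ids, "labels": labels}
      ```
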
  5. 02 Aug, 2023 1 commit
    • [chat] fix bugs and add unit tests (#4213) · da4f7b85
      Wenhao Chen authored
      * style: rename replay buffer

      Experience replay is typically used for off-policy algorithms,
      so using this name in PPO may be misleading.
      
      * fix: fix wrong zero2 default arg
      
      * test: update experience tests
      
      * style: rename zero_pad fn
      
      * fix: defer init in CycledDataLoader (see the sketch below)
      
      * test: add benchmark test
      
      * style: rename internal fn of generation
      
      * style: rename internal fn of lora
      
      * fix: remove unused loss fn
      
      * fix: remove unused utils fn
      
      * refactor: remove generate_with_actor fn
      
      * fix: fix type annotation
      
      * test: add models tests
      
      * fix: skip llama due to long execution time
      
      * style: modify dataset
      
      * style: apply formatter
      
      * perf: update reward dataset
      
      * fix: fix wrong IGNORE_INDEX in sft dataset
      
      * fix: remove DataCollatorForSupervisedDataset
      
      * test: add dataset tests
      
      * style: apply formatter
      
      * style: rename test_ci to test_train
      
      * feat: add llama in inference
      
      * test: add inference tests
      
      * test: change test scripts directory
      
      * fix: update ci
      
      * fix: fix typo
      
      * fix: skip llama due to oom
      
      * fix: fix file mod
      
      * style: apply formatter
      
      * refactor: remove duplicated llama_gptq
      
      * style: apply formatter
      
      * to: update rm test
      
      * feat: add tokenizer arg
      
      * feat: add download model script
      
      * test: update train tests
      
      * fix: modify gemini load and save pretrained
      
      * test: update checkpoint io test
      
      * to: modify nproc_per_node
      
      * fix: do not remove existing dir
      
      * fix: modify save path
      
      * test: add random choice
      
      * fix: fix sft path
      
      * fix: enlarge nproc_per_node to avoid oom
      
      * fix: add num_retry
      
      * fix: make lora config of rm and critic consistent
      
      * fix: add warning about lora weights
      
      * fix: skip some gpt2 tests
      
      * fix: remove grad ckpt in rm and critic due to errors
      
      * refactor: directly use Actor in train_sft
      
      * test: add more arguments
      
      * fix: disable grad ckpt when using lora
      
      * fix: fix save_pretrained and related tests
      
      * test: enable zero2 tests
      
      * revert: remove useless fn
      
      * style: polish code
      
      * test: modify test args
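
      A sketch of the deferred-init fix in CycledDataLoader (the class name
      follows the commit message; the body is an assumption): the iterator
      over the DataLoader is created lazily on the first fetch, not in
      __init__, and restarted when exhausted:

      ```python
      from torch.utils.data import DataLoader

      class CycledDataLoader:
          """Endlessly yield batches from a DataLoader.

          Sketch of the deferred-init idea: the iterator is not created in
          __init__ (which may run before the distributed environment and
          samplers are fully set up) but lazily on the first fetch.
          """

          def __init__(self, dataloader: DataLoader) -> None:
              self.dataloader = dataloader
              self._iter = None  # deferred: created on first next() call

          def next(self):
              if self._iter is None:
                  self._iter = iter(self.dataloader)
              try:
                  return next(self._iter)
              except StopIteration:
                  # epoch exhausted: restart from the beginning
                  self._iter = iter(self.dataloader)
                  return next(self._iter)
      ```
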
  6. 28 Jul, 2023 1 commit
  7. 26 Jul, 2023 1 commit
  8. 13 Jun, 2023 1 commit
    • [chat] refactor actor class (#3968) · 9d02590c
      Wenhao Chen authored
      * refactor: separate log_probs fn from Actor forward fn
      
      * refactor: separate generate fn from Actor class
      
      * feat: update unwrap_model and get_base_model (see the sketch below)
      * unwrap_model returns the model not wrapped by Strategy
      * get_base_model returns the HF model for Actor, Critic and RewardModel
      
      * feat: simplify Strategy.prepare
      
      * style: remove get_base_model method of Actor
      
      * perf: tokenize text in batches
      
      * refactor: move calc_action_log_probs to utils of model
      
      * test: update test with new forward fn
      
      * style: rename forward fn args
      
      * fix: do not unwrap model in save_model fn of naive strategy
      
      * test: add gemini test for train_prompts
      
      * fix: fix _set_default_generate_kwargs
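
      A sketch of the unwrap_model / get_base_model split described above
      (the attribute names here are assumptions; the real Strategy and Actor
      classes carry their own unwrap logic):

      ```python
      import torch.nn as nn

      def unwrap_model(model: nn.Module) -> nn.Module:
          """Return the model with Strategy wrappers (e.g. DDP) peeled off;
          for coati this would be the Actor/Critic/RewardModel itself."""
          if hasattr(model, "module") and isinstance(model.module, nn.Module):
              return unwrap_model(model.module)  # recurse through wrappers
          return model

      def get_base_model(model: nn.Module) -> nn.Module:
          """Go one level further and return the underlying HF transformer
          that an Actor, Critic or RewardModel wraps (the 'model' attribute
          name is an assumption)."""
          unwrapped = unwrap_model(model)
          return getattr(unwrapped, "model", unwrapped)
      ```
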
  9. 23 May, 2023 1 commit
  10. 17 May, 2023 1 commit
  11. 26 Apr, 2023 1 commit
    • [gemini] accelerate inference (#3641) · 50793b35
      Hongxin Liu authored
      * [gemini] support skipping scatter after inference (see the sketch below)
      
      * [chat] update colossalai strategy
      
      * [chat] fix opt benchmark
      
      * [chat] update opt benchmark
      
      * [gemini] optimize inference
      
      * [test] add gemini inference test
      
      * [chat] fix unit test ci
      
      * [chat] fix ci
      
      * [chat] fix ci
      
      * [chat] skip checkpoint test
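
      The "skip scatter after inference" idea: Gemini normally re-shards
      (scatters) parameters after a forward pass, and autoregressive
      generation runs one forward per token, so re-gathering weights every
      step is wasteful. A conceptual sketch with a hypothetical flag name
      (not necessarily ColossalAI's actual API):

      ```python
      # Hypothetical illustration only: 'scatter_after_inference' is an
      # assumed flag name, not necessarily ColossalAI's actual API.
      def generate_fast(gemini_model, input_ids, **gen_kwargs):
          gemini_model.scatter_after_inference = False  # keep weights gathered
          try:
              # each per-token forward reuses the already-gathered weights
              return gemini_model.generate(input_ids, **gen_kwargs)
          finally:
              gemini_model.scatter_after_inference = True  # restore sharding
      ```
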
  12. 17 Apr, 2023 1 commit
  13. 06 Apr, 2023 1 commit
    • [Chat] fix the tokenizer "int too big to convert" error in SFT training (#3453) · 72cb4dd4
      Camille Zhong authored
      * Add RoBERTa for RLHF Stage 2 & 3 (test)
      
      RoBERTa for RLHF Stage 2 & 3 (still in testing)
      
      * Revert "Add RoBERTa for RLHF Stage 2 & 3 (test)"
      
      This reverts commit 06741d894dcbe958acd4e10d771f22275e20e368.
      
      * Add RoBERTa for RLHF stage 2 & 3
      
      1. add roberta folder under model folder
      2. add roberta option in train_reward_model.py
      3. add some tests in test_ci
      
      * Update test_ci.sh
      
      * Revert "Update test_ci.sh"
      
      This reverts commit 9c7352b81766f3177d31eeec0ec178a301df966a.
      
      * update roberta with coati
      
      * chat ci update
      
      * Revert "chat ci update"
      
      This reverts commit 17ae7ae01fa752bd3289fc39069868fde99cf846.
      
      * [Chat] fix the tokenizer "int too big to convert" error in SFT training
      
      fix the tokenizer error during SFT training using Bloom and OPT (see the sketch below)
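
      On the "int too big to convert" error: this class of failure typically
      appears when a non-Python integer scalar (e.g. a torch or numpy int64)
      reaches the tokenizer. A minimal sketch of the usual defensive fix,
      assuming that failure mode (the commit's actual change for Bloom and
      OPT may differ):

      ```python
      import torch
      from transformers import AutoTokenizer

      tokenizer = AutoTokenizer.from_pretrained("facebook/opt-125m")

      ids = torch.tensor([2, 100, 52])  # token ids as an int64 tensor

      # Casting each id to a plain Python int before it reaches the tokenizer
      # avoids the overflow/conversion error in affected versions.
      text = tokenizer.decode([int(i) for i in ids])
      print(text)
      ```
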
  14. 28 Mar, 2023 1 commit