1. 29 Mar, 2024 1 commit
    • [ColossalChat] Update RLHF V2 (#5286) · df5e9c53
      YeAnbang authored
      
      
      * Add dpo. Fix sft, ppo, lora. Refactor all
      
      * fix and tested ppo
      
      * 2nd round refactor
      
      * add ci tests
      
      * fix ci
      
      * fix ci
      
      * fix readme, style
      
      * fix readme style
      
      * fix style, fix benchmark
      
      * reproduce benchmark result, remove useless files
      
      * rename to ColossalChat
      
      * use new image
      
      * fix ci workflow
      
      * fix ci
      
      * use local model/tokenizer for ci tests
      
      * fix ci
      
      * fix ci
      
      * fix ci
      
      * fix ci timeout
      
      * fix rm progress bar. fix ci timeout
      
      * fix ci
      
      * fix ci typo
      
      * remove 3d plugin from ci temporarily
      
      * test environment
      
      * cannot save optimizer
      
      * support chat template
      
      * fix readme
      
      * fix path
      
      * test ci locally
      
      * restore build_or_pr
      
      * fix ci data path
      
      * fix benchmark
      
      * fix ci, move ci tests to 3080, disable fast tokenizer
      
      * move ci to 85
      
      * support flash attention 2
      
      * add all-in-one data preparation script. Fix colossal-llama2-chat chat template
      
      * add hardware requirements
      
      * move ci test data
      
      * fix save_model, add unwrap
      
      * fix missing bos
      
      * fix missing bos; support grad accumulation with gemini
      
      * fix ci
      
      * fix ci
      
      * fix ci
      
      * fix llama2 chat template config
      
      * debug sft
      
      * debug sft
      
      * fix colossalai version requirement
      
      * fix ci
      
      * add sanity check to prevent NaN loss
      
      * fix requirements
      
      * add dummy data generation script
      
      * add dummy data generation script
      
      * add dummy data generation script
      
      * add dummy data generation script
      
      * update readme
      
      * update readme
      
      * update readme and ignore
      
      * fix logger bug
      
      * support parallel_output
      
      * modify data preparation logic
      
      * fix tokenization
      
      * update lr
      
      * fix inference
      
      * run pre-commit
      
      ---------
      Co-authored-by: Tong Li <tong.li352711588@gmail.com>
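The "sanity check to prevent NaN loss" mentioned in the commits above can be sketched as a minimal, framework-agnostic guard that fails fast instead of letting a non-finite loss propagate into the gradients. This is an illustrative assumption, not the repo's actual code; the name `sanity_check_loss` is hypothetical.

```python
import math

def sanity_check_loss(loss: float) -> float:
    # Abort the training step early if the scalar loss is NaN or Inf,
    # instead of silently corrupting the model's weights on backward().
    if not math.isfinite(loss):
        raise RuntimeError(f"Non-finite loss detected: {loss!r}")
    return loss
```

In a real training loop the same check would be applied to the loss tensor (e.g. with `torch.isfinite`) before calling `backward()`.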
  2. 19 Sep, 2023 1 commit
  3. 02 Aug, 2023 1 commit
    • [chat] fix bugs and add unit tests (#4213) · da4f7b85
      Wenhao Chen authored
      * style: rename replay buffer
      
      Experience replay is typically used by off-policy algorithms,
      so using this name in PPO may be misleading.
      
      * fix: fix wrong zero2 default arg
      
      * test: update experience tests
      
      * style: rename zero_pad fn
      
      * fix: defer init in CycledDataLoader
      
      * test: add benchmark test
      
      * style: rename internal fn of generation
      
      * style: rename internal fn of lora
      
      * fix: remove unused loss fn
      
      * fix: remove unused utils fn
      
      * refactor: remove generate_with_actor fn
      
      * fix: fix type annotation
      
      * test: add models tests
      
      * fix: skip llama due to long execution time
      
      * style: modify dataset
      
      * style: apply formatter
      
      * perf: update reward dataset
      
      * fix: fix wrong IGNORE_INDEX in sft dataset
      
      * fix: remove DataCollatorForSupervisedDataset
      
      * test: add dataset tests
      
      * style: apply formatter
      
      * style: rename test_ci to test_train
      
      * feat: add llama in inference
      
      * test: add inference tests
      
      * test: change test scripts directory
      
      * fix: update ci
      
      * fix: fix typo
      
      * fix: skip llama due to oom
      
      * fix: fix file mod
      
      * style: apply formatter
      
      * refactor: remove duplicated llama_gptq
      
      * style: apply formatter
      
      * to: update rm test
      
      * feat: add tokenizer arg
      
      * feat: add download model script
      
      * test: update train tests
      
      * fix: modify gemini load and save pretrained
      
      * test: update checkpoint io test
      
      * to: modify nproc_per_node
      
      * fix: do not remove existing dir
      
      * fix: modify save path
      
      * test: add random choice
      
      * fix: fix sft path
      
      * fix: enlarge nproc_per_node to avoid oom
      
      * fix: add num_retry
      
      * fix: make lora config of rm and critic consistent
      
      * fix: add warning about lora weights
      
      * fix: skip some gpt2 tests
      
      * fix: remove grad ckpt in rm and critic due to errors
      
      * refactor: directly use Actor in train_sft
      
      * test: add more arguments
      
      * fix: disable grad ckpt when using lora
      
      * fix: fix save_pretrained and related tests
      
      * test: enable zero2 tests
      
      * revert: remove useless fn
      
      * style: polish code
      
      * test: modify test args
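The "fix wrong IGNORE_INDEX in sft dataset" commit above refers to the standard PyTorch convention of setting prompt-token labels to -100, the default `ignore_index` of `torch.nn.CrossEntropyLoss`, so that SFT loss is computed only on the response tokens. A minimal sketch of that masking (the helper `mask_prompt_labels` is hypothetical, not the repo's API):

```python
IGNORE_INDEX = -100  # the label value CrossEntropyLoss ignores by default

def mask_prompt_labels(input_ids: list[int], prompt_len: int) -> list[int]:
    # Copy the token ids into labels, but overwrite the prompt span with
    # IGNORE_INDEX so the loss covers only the assistant's response.
    return [IGNORE_INDEX] * prompt_len + input_ids[prompt_len:]
```

Using the wrong sentinel value here would either train on prompt tokens or silently drop response tokens from the loss, which is why the fix matters.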
  4. 28 Mar, 2023 1 commit
  5. 14 Feb, 2023 1 commit