- 06 Apr, 2023 11 commits
-
Frank Lee authored
-
jiangmingyan authored
* [checkpoint] support huggingface style sharded checkpoint

Co-authored-by: luchen <luchen@luchendeMBP.lan>
-
Fazzie-Maqianli authored
-
Frank Lee authored
* [test] added spawn decorator
* polish code
-
YY Lin authored
* Update ppo.py: fix the bug of fetching the wrong batch data
* Add PEFT model support in SFT and prompts training: in stage 1 and stage 3, PEFT model support is added, so the trained artifacts are only small LoRA additions instead of the whole set of files
* Delete test_prompts.txt
* Delete test_pretrained.txt
* Move the PEFT code to a community folder
* Move the demo SFT to community
* Delete dirty files
* Add instructions to install peft from source
* Remove Chinese comments
-
Dr-Corgi authored
The save_model function should be part of PPOTrainer.
-
kingkingofall authored
* fix stage 2
* add torch
-
Frank Lee authored
-
YH authored
-
ver217 authored
-
Camille Zhong authored
* Add RoBERTa for RLHF Stage 2 & 3 (still in testing)
* Revert "Add RoBERTa for RLHF Stage 2 & 3 (test)" (reverts commit 06741d894dcbe958acd4e10d771f22275e20e368)
* Add RoBERTa for RLHF stage 2 & 3: add a roberta folder under the model folder, add a roberta option in train_reward_model.py, and add some tests in testci
* Update test_ci.sh
* Revert "Update test_ci.sh" (reverts commit 9c7352b81766f3177d31eeec0ec178a301df966a)
* update roberta with coati
* chat ci update
* Revert "chat ci update" (reverts commit 17ae7ae01fa752bd3289fc39069868fde99cf846)
* [Chat] fix the tokenizer "int too big to convert" error in SFT training using Bloom and OPT
-
- 05 Apr, 2023 2 commits
-
Hakjin Lee authored
-
Yuanchen authored
Co-authored-by: Yuanchen Xu <yuanchen.xu00@gmail.com>
-
- 04 Apr, 2023 6 commits
-
YuliangLiu0306 authored
* [autoparallel] integrate new analyzer in module level
* unify the profiling method
* fix no codegen bug
* fix pass bug
* fix liveness test
* polish
-
ver217 authored
* [zero] update legacy import
* [zero] update examples
* [example] fix opt tutorial
* [example] fix import
-
Yuanchen authored
Co-authored-by: Yuanchen Xu <yuanchen.xu00@gmail.com>
-
Frank Lee authored
* [checkpoint] refactored the API and added safetensors support
* polish code
-
ver217 authored
* [zero] refactor low-level zero folder structure
* [zero] fix legacy zero import path
* [zero] remove useless import
* [zero] refactor gemini folder structure
* [zero] refactor legacy zero import path
* [zero] fix test import path
* [zero] fix test
* [zero] fix circular import
* [zero] update import
-
Yuanchen authored
fix SFT training for BLOOM, GPT and OPT
-
- 03 Apr, 2023 2 commits
-
Frank Lee authored
* [test] fixed gemini plugin test
* polish code
-
Camille Zhong authored
* Add RoBERTa for RLHF Stage 2 & 3 (still in testing)
* Revert "Add RoBERTa for RLHF Stage 2 & 3 (test)" (reverts commit 06741d894dcbe958acd4e10d771f22275e20e368)
* Add RoBERTa for RLHF stage 2 & 3: add a roberta folder under the model folder, add a roberta option in train_reward_model.py, and add some tests in testci
* add test for reward model training
* Update test_ci.sh
* Revert "Update test_ci.sh" (reverts commit 9c7352b81766f3177d31eeec0ec178a301df966a)
* update roberta with coati
-
- 02 Apr, 2023 1 commit
-
Chris Sundström authored
Minor changes to improve grammar and punctuation.
-
- 01 Apr, 2023 1 commit
-
Jan Roudaut authored
* [examples/images/diffusion] README.md: typo fixes
* Update README.md
* Grammar fixes
* Reformulated the "Step 3" (xformers) introduction: "to the cost" => "at the cost", and reworded pip availability
-
- 31 Mar, 2023 3 commits
-
Jan Roudaut authored
s/0.12.0/0.0.12/
-
ver217 authored
* [booster] add gemini plugin
* [booster] update docstr
* [booster] gemini plugin add coloparam convertor
* [booster] fix coloparam convertor
* [booster] fix gemini plugin device
* [booster] add gemini plugin test
* [booster] gemini plugin ignore sync bn
* [booster] skip some model
* [booster] modify test world size
* [booster] skip test
-
HELSON authored
* [moe] add checkpoint for moe models
* [hotfix] fix bugs in unit test
-
- 30 Mar, 2023 7 commits
-
YuliangLiu0306 authored
* [autoparallel] adapt autoparallel with new analyzer
* fix all node handler tests
* polish
-
アマデウス authored
-
Ofey Chan authored
-
yuxuan-lou authored
-
Andrew authored
-
YuliangLiu0306 authored
-
binmakeswell authored
* [doc] add Intel cooperation news
-
- 29 Mar, 2023 7 commits
-
Michelle authored
* [NFC] polish colossalai/engine/schedule/_pipeline_schedule.py code style
* [NFC] polish colossalai/fx/tracer/_tracer_utils.py code style

Co-authored-by: Qianran Ma <qianranm@luchentech.com>
-
Xu Kai authored
-
RichardoLuo authored
-
Ziheng Qin authored
-
Kai Wang (Victor Kai) authored
-
Sze-qq authored
Co-authored-by: siqi <siqi@siqis-MacBook-Pro.local>
-
Arsmart1 authored
-