".github/workflows/auto_compatibility_test.yml" did not exist on "65ad47c35c9166c2493d658fbcdaba8d93268ca1"
- 15 May, 2023 2 commits
-
wukong1992 authored
Co-authored-by: 纪少敏 <jishaomin@jishaomindeMBP.lan>
-
digger-yu authored
-
- 11 May, 2023 2 commits
-
digger-yu authored
* fix spelling error with examples/community/
* fix spelling error with tests/
* fix some spelling errors with tests/, colossalai/, etc.
* fix spelling errors with tests/ etc. (date: 2023.5.10)
-
digger-yu authored
* fix spelling error with examples/community/
* fix spelling error with tests/
-
- 10 May, 2023 3 commits
-
digger-yu authored
* fix spelling error with examples/community/
* fix spelling error with tests/
* fix some spelling errors with tests/, colossalai/, etc.
-
MisterLin1995 authored
Co-authored-by: jiangwen <zxl265370@antgroup.com>
-
jiangmingyan authored
* [booster] update tests for booster
* [booster] update booster tutorials (#3717), fix recursive check
-
- 09 May, 2023 1 commit
-
Hongxin Liu authored
* [booster] fix no_sync method
* [booster] add test for ddp no_sync
* [booster] fix merge
* [booster] update unit test
-
- 08 May, 2023 2 commits
-
Hongxin Liu authored
* [booster] add prepare dataloader method for plugin
* [booster] update examples and docstr
-
Hongxin Liu authored
* [example] add train vit with booster example
* [example] update readme
* [example] add train resnet with booster example
* [example] enable ci
* [example] add requirements
* [hotfix] fix analyzer init
* [example] update requirements
-
- 06 May, 2023 4 commits
-
YH authored
-
zhang-yi-chi authored
* fix gemini strategy bug
* add comment
* better solution
-
Hongxin Liu authored
-
digger-yu authored
* fix spelling error with examples/community/
* fix spelling error with example/
-
- 05 May, 2023 6 commits
-
Hongxin Liu authored
* [booster] add dp plugin base
* [booster] inherit dp plugin base
* [booster] refactor unit tests
-
digger-yu authored
fix spelling error in line 23: change "cudnn_determinstic=True" to "cudnn_deterministic=True"
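The corrected flag matches a standard PyTorch setting. A minimal sketch of how deterministic cuDNN behavior is typically enabled; whether this exact snippet mirrors the repository's script is an assumption:

```python
# Hedged sketch: enabling deterministic cuDNN behavior in PyTorch.
import torch

torch.backends.cudnn.deterministic = True  # force deterministic conv algorithms
torch.backends.cudnn.benchmark = False     # autotuning selects algorithms nondeterministically
```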
-
Tong Li authored
fix spelling error with applications/Chat/evaluate/
-
jiangmingyan authored
* gemini plugin add shard checkpoint save/load
* [API Refactoring] gemini plugin support shard checkpoint

Co-authored-by: luchen <luchen@luchendeMBP.lan>
Co-authored-by: luchen <luchen@luchendeMacBook-Pro.local>
-
Camille Zhong authored
* Add RoBERTa for RLHF Stage 2 & 3 (still in testing):
  1. add roberta folder under the model folder
  2. add roberta option in train_reward_model.py
  3. add some tests in the test CI
* update roberta with coati
* Update README.md
* update readme and add a script
* Update test_ci.sh
-
digger-yu authored
* Update README.md: change "huggingaface" to "huggingface"
* Update README.md: change "Colossa-AI" to "Colossal-AI"
-
- 04 May, 2023 3 commits
-
Hongxin Liu authored
* [chat] add opt attn kernel
* [chat] disable xformer during fwd
-
digger-yu authored
fix spelling error with generate_gpt35_answers.py
-
digger-yu authored
fix spelling error with evaluate.py
-
- 28 Apr, 2023 5 commits
-
tanitna authored
-
Tong Li authored
[Chat]: Remove unnecessary step and update documentation
-
binmakeswell authored
* [chat] set default gemini strategy
* [chat] set default zero2 strategy
-
Tong Li authored
-
Tong Li authored
-
- 27 Apr, 2023 6 commits
-
Tong Li authored
-
Tong Li authored
-
YH authored
* Fix confusing variable name in zero opt
* Apply lint
* Fix util func
* Fix minor util func
* Fix zero param optimizer name
-
Hongxin Liu authored
* [chat] strategy refactor unwrap model
* [chat] strategy refactor save model
* [chat] add docstr
* [chat] refactor trainer save model
* [chat] fix strategy typing
* [chat] update readme
* [chat] fix unit test
-
Hongxin Liu authored
* [chat] refactor lora
* [chat] remove lm class
* [chat] refactor save model
* [chat] refactor train sft
* [chat] fix ci
-
Camille Zhong authored
* Add RoBERTa for RLHF Stage 2 & 3 (still in testing):
  1. add roberta folder under the model folder
  2. add roberta option in train_reward_model.py
  3. add some tests in the test CI
* update roberta with coati
* Update README.md
* update readme
* Update test_ci.sh
-
- 26 Apr, 2023 5 commits
-
Hongxin Liu authored
* [chat] ppo trainer remove useless args
* [chat] update examples
* [chat] update benchmark
* [chat] fix sft training with wandb
* [chat] polish docstr
-
Hongxin Liu authored
-
Hongxin Liu authored
* [gemini] support don't scatter after inference
* [chat] update colossalai strategy
* [chat] fix opt benchmark
* [chat] update opt benchmark
* [gemini] optimize inference
* [test] add gemini inference test
* [chat] fix unit test ci
* [chat] fix ci
* [chat] skip checkpoint test
-
Hongxin Liu authored
* [booster] add low level zero plugin
* [booster] fix gemini plugin test
* [booster] fix precision
* [booster] add low level zero plugin test
* [test] fix booster plugin test oom
* [test] fix googlenet and inception output trans
* [test] fix diffuser clip vision model
* [test] fix torchaudio_wav2vec2_base
* [test] fix low level zero plugin test
-
digger-yu authored
* Fixed several spelling errors under colossalai
* Fixed spelling errors in the colossalai and docs directories
* Cautiously changed spelling errors under the example folder
* Update runtime_preparation_pass.py: revert autograft to autograd
* Update search_chunk.py: utile to until
* Update check_installation.py: change misteach to mismatch in line 91
* Update 1D_tensor_parallel.md: revert to perceptron
* Update 2D_tensor_parallel.md: revert to perceptron in line 73
* Update 2p5D_tensor_parallel.md: revert to perceptron in line 71
* Update 3D_tensor_parallel.md: revert to perceptron in line 80
* Update README.md: revert to resnet in line 42
* Update reorder_graph.py: revert to indice in line 7
* Update p2p.py: revert to megatron in line 94
* Update initialize.py: revert to torchrun in line 198
* Update routers.py: change to detailed in line 63
* Update routers.py: change to detailed in line 146
* Update README.md: revert random number in line 402
-
- 24 Apr, 2023 1 commit
-
Tong Li authored
[chat] fix single gpu training bug in examples/train_prompts.py
-