"vscode:/vscode.git/clone" did not exist on "02192a632e6c6f965d93ec79937f97e10e121307"
- 18 Jan, 2022 5 commits
- 17 Jan, 2022 2 commits
- 13 Jan, 2022 1 commit
  - ver217 authored
- 10 Jan, 2022 2 commits
  - BoxiangW authored
    Update the documentation of layer integration; update _log_hook.py and _operation.py
  - binmakeswell authored
- 07 Jan, 2022 5 commits
- 06 Jan, 2022 2 commits
  - Frank Lee authored
    * enable CI after PR sync
    * fixed github action
  - binmakeswell authored
- 05 Jan, 2022 1 commit
  - Jiarui Fang authored
- 04 Jan, 2022 4 commits
- 30 Dec, 2021 2 commits
  - ver217 authored
    * add pipeline shared module wrapper and update load batch
    * added model parallel process group for amp and clip grad (#86)
      * added model parallel process group for amp and clip grad
      * update amp and clip with model parallel process group
    * remove pipeline_prev/next group (#88)
    * micro batch offload
    * optimize pipeline gpu memory usage
    * pipeline can receive tensor shape (#93)
      * optimize pipeline gpu memory usage
    * fix grad accumulation step counter
    * rename classes and functions
    Co-authored-by: Frank Lee <somerlee.9@gmail.com>
  - アマデウス authored
- 29 Dec, 2021 1 commit
  - アマデウス authored
    * optimized 1d layer apis; reorganized nn.layer modules; fixed tests
    * fixed 2.5d runtime issue
    * reworked split batch, now called in trainer.schedule.load_batch
    Co-authored-by: BoxiangW <45734921+BoxiangW@users.noreply.github.com>
- 27 Dec, 2021 1 commit
  - アマデウス authored
    * integrated parallel layers for ease of building models
    * integrated 2.5d layers
    * cleaned codes and unit tests
    * added log metric by step hook; updated imagenet benchmark; fixed some bugs
    * reworked initialization; cleaned codes
    Co-authored-by: BoxiangW <45734921+BoxiangW@users.noreply.github.com>
- 21 Dec, 2021 2 commits
- 20 Dec, 2021 2 commits
- 16 Dec, 2021 2 commits
- 14 Dec, 2021 1 commit
  - Frank Lee authored
- 13 Dec, 2021 1 commit
  - Frank Lee authored
- 10 Dec, 2021 2 commits
- 09 Dec, 2021 1 commit
  - Frank Lee authored
    * Add gradient accumulation, fix lr scheduler
    * fix FP16 optimizer and adapted torch amp with tensor parallel (#18)
    * fixed bugs in compatibility between torch amp and tensor parallel and performed some minor fixes
    * fixed trainer
    * Revert "fixed trainer" (this reverts commit 2e0b0b76990e8d4e337add483d878c0f61cf5097)
    * improved consistency between trainer, engine and schedule (#23)
      Co-authored-by: 1SAA <c2h214748@gmail.com>
    * Split conv2d, class token, positional embedding in 2d; fix random number in ddp; fix convergence in cifar10, Imagenet1000
    * Integrate 1d tensor parallel in Colossal-AI (#39)
    * fixed 1D and 2D convergence (#38)
    * optimized 2D operations
    * fixed 1D ViT convergence problem
    * Feature/ddp (#49)
      * remove redundancy func in setup (#19) (#20)
      * use env to control the language of doc (#24) (#25)
      * Support TP-compatible Torch AMP and Update trainer API (#27)
        * Add gradient accumulation, fix lr scheduler
        * fix FP16 optimizer and adapted torch amp with tensor parallel (#18)
        * fixed bugs in compatibility between torch amp and tensor parallel and performed some minor fixes
        * fixed trainer
        * Revert "fixed trainer" (this reverts commit 2e0b0b76990e8d4e337add483d878c0f61cf5097)
        * improved consistency between trainer, engine and schedule (#23)
        Co-authored-by: 1SAA <c2h214748@gmail.com>
        Co-authored-by: 1SAA <c2h214748@gmail.com>
        Co-authored-by: ver217 <lhx0217@gmail.com>
      * add an example of ViT-B/16 and remove w_norm clipping in LAMB (#29)
      * add explanation for ViT example (#35) (#36)
      * support torch ddp
      * fix loss accumulation
      * add log for ddp
      * change seed
      * modify timing hook
      Co-authored-by: Frank Lee <somerlee.9@gmail.com>
      Co-authored-by: 1SAA <c2h214748@gmail.com>
      Co-authored-by: binmakeswell <binmakeswell@gmail.com>
    * Feature/pipeline (#40)
      * remove redundancy func in setup (#19) (#20)
      * use env to control the language of doc (#24) (#25)
      * Support TP-compatible Torch AMP and Update trainer API (#27)
        * Add gradient accumulation, fix lr scheduler
        * fix FP16 optimizer and adapted torch amp with tensor parallel (#18)
        * fixed bugs in compatibility between torch amp and tensor parallel and performed some minor fixes
        * fixed trainer
        * Revert "fixed trainer" (this reverts commit 2e0b0b76990e8d4e337add483d878c0f61cf5097)
        * improved consistency between trainer, engine and schedule (#23)
        Co-authored-by: 1SAA <c2h214748@gmail.com>
        Co-authored-by: 1SAA <c2h214748@gmail.com>
        Co-authored-by: ver217 <lhx0217@gmail.com>
      * add an example of ViT-B/16 and remove w_norm clipping in LAMB (#29)
      * add explanation for ViT example (#35) (#36)
      * optimize communication of pipeline parallel
      * fix grad clip for pipeline
      Co-authored-by: Frank Lee <somerlee.9@gmail.com>
      Co-authored-by: 1SAA <c2h214748@gmail.com>
      Co-authored-by: binmakeswell <binmakeswell@gmail.com>
    * optimized 3d layer to fix slow computation; tested imagenet performance with 3d; reworked lr_scheduler config definition; fixed launch args; fixed some printing issues; simplified apis of 3d layers (#51)
    * Update 2.5d layer code to get a similar accuracy on imagenet-1k dataset
    * update api for better usability (#58)
    Co-authored-by: 1SAA <c2h214748@gmail.com>
    Co-authored-by: ver217 <lhx0217@gmail.com>
    Co-authored-by: puck_WCR <46049915+WANG-CR@users.noreply.github.com>
    Co-authored-by: binmakeswell <binmakeswell@gmail.com>
    Co-authored-by: アマデウス <kurisusnowdeng@users.noreply.github.com>
    Co-authored-by: BoxiangW <45734921+BoxiangW@users.noreply.github.com>
- 02 Dec, 2021 3 commits