- 17 Mar, 2022 (2 commits)
  - Jiarui Fang authored
  - Jiarui Fang authored
- 16 Mar, 2022 (3 commits)
  - Jiarui Fang authored
  - ver217 authored
  - Jiarui Fang authored
- 15 Mar, 2022 (3 commits)
  - Jiarui Fang authored
  - Jiarui Fang authored
  - Jiarui Fang authored
- 14 Mar, 2022 (3 commits)
  - Jiarui Fang authored
  - Jiarui Fang authored
  - ver217 authored
- 11 Mar, 2022 (29 commits)
  - Jiarui Fang authored
    * place params on CPU after zero init context
    * polish code
    * bucketized CPU-GPU tensor transfer
    * found a bug in the sharded optim unittest
    * add offload unittest for ShardedOptimV2
    * polish code and make it more robust
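    The bucketized CPU-GPU transfer mentioned above is worth a sketch: rather than moving many small tensors across the PCIe bus one `.to()` call at a time, they are coalesced into flat buckets so each bucket crosses in a single copy. This is a minimal sketch assuming same-dtype tensors; `bucketed_transfer` and its bucket size are illustrative names, not ColossalAI's actual API:

    ```python
    import torch

    def bucketed_transfer(tensors, device, bucket_size_mb=32):
        """Move a list of same-dtype tensors to `device`, coalescing them
        into flat buckets so each bucket needs only one H2D/D2H copy.
        (Illustrative sketch, not the actual library implementation.)"""
        bucket, bucket_bytes, out = [], 0, []
        limit = bucket_size_mb * 1024 * 1024

        def flush():
            nonlocal bucket, bucket_bytes
            if not bucket:
                return
            # One contiguous copy for the whole bucket, then split it back.
            flat = torch.cat([t.reshape(-1) for t in bucket]).to(device)
            offset = 0
            for t in bucket:
                n = t.numel()
                out.append(flat[offset:offset + n].view(t.shape))
                offset += n
            bucket, bucket_bytes = [], 0

        for t in tensors:
            nbytes = t.numel() * t.element_size()
            if bucket and bucket_bytes + nbytes > limit:
                flush()
            bucket.append(t)
            bucket_bytes += nbytes
        flush()
        return out
    ```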
  - Frank Lee authored
    * refactored test with component func
    * fixed bug
  - Jiarui Fang authored
  - Jiarui Fang authored
    * place params on CPU after zero init context
    * polish code
  - Jiarui Fang authored
  - Jiarui Fang authored
  - ver217 authored
  - ver217 authored
  - ver217 authored
  - jiaruifang authored
  - jiaruifang authored
  - jiaruifang authored
  - jiaruifang authored
  - ver217 authored
  - jiaruifang authored
  - jiaruifang authored
  - Jiarui Fang authored
  - Jiarui Fang authored
    * add zero init context
    * add more flags for zero init context; fix bug of repeatedly converting a param to ShardedParamV2
    * polish code
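    A zero init context of this kind typically wraps module construction and converts each parameter to a sharded, CPU-resident form as it is created; guarding against double conversion is exactly the bug noted above. The sketch below is a hypothetical reconstruction: the flags and the `_is_sharded` marker are assumptions, with `ShardedParamV2` standing in for the real wrapper class.

    ```python
    import torch
    import torch.nn as nn

    class ZeroInitContext:
        """Hypothetical sketch of a zero init context: intercepts parameter
        registration during module construction and converts each param
        exactly once."""

        def __init__(self, target_device=torch.device('cpu'), shard_param=True):
            self.target_device = target_device
            self.shard_param = shard_param

        def __enter__(self):
            self._orig_register = nn.Module.register_parameter
            ctx = self

            def wrapped(module, name, param):
                ctx._orig_register(module, name, param)
                if param is None:
                    return
                # Guard against converting the same (shared) param twice --
                # the repeated-conversion bug fixed in the commit above.
                if getattr(param, '_is_sharded', False):
                    return
                param.data = param.data.to(ctx.target_device)
                if ctx.shard_param:
                    param._is_sharded = True  # real code would wrap in ShardedParamV2

            nn.Module.register_parameter = wrapped
            return self

        def __exit__(self, *exc):
            nn.Module.register_parameter = self._orig_register
            return False
    ```

    Used as `with ZeroInitContext(): model = MyModel()`, freshly built parameters land on CPU instead of GPU, so a model larger than a single device's memory can still be constructed.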
  - Jiarui Fang authored
  - Jiarui Fang authored
    * init shard param from a shape tuple
    * add more unittests for shard param
    * add set_payload method for ShardedParam
    * [zero] add sharded tensor class
    * polish code
    * add shard strategy
    * move shard and gather logic from the sharded tensor into the shard strategy
    * polish code
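    Moving the shard/gather logic out of the tensor class and into a strategy object keeps the tensor a plain payload holder and makes the partitioning scheme swappable. Below is a minimal sketch of that separation; the class and method names mirror the commit wording but are assumptions, not the actual ColossalAI API, and even division across ranks is assumed for brevity.

    ```python
    import torch
    import torch.distributed as dist

    class ShardedTensor:
        """Plain payload holder; knows nothing about how sharding is done."""
        def __init__(self, payload: torch.Tensor):
            self.payload = payload
            self.origin_shape = payload.shape
            self.is_sharded = False

        def set_payload(self, tensor: torch.Tensor):
            self.payload = tensor

    class TensorShardStrategy:
        """Owns the shard/gather logic: each rank keeps a 1/world_size slice.
        Assumes dist.init_process_group() has already been called."""
        def shard(self, t: ShardedTensor):
            if t.is_sharded:
                return
            world, rank = dist.get_world_size(), dist.get_rank()
            flat = t.payload.reshape(-1)
            chunks = flat.chunk(world)  # uneven tails omitted for brevity
            t.set_payload(chunks[rank].clone())
            t.is_sharded = True

        def gather(self, t: ShardedTensor):
            if not t.is_sharded:
                return
            world = dist.get_world_size()
            buf = [torch.empty_like(t.payload) for _ in range(world)]
            dist.all_gather(buf, t.payload)
            t.set_payload(torch.cat(buf).reshape(t.origin_shape))
            t.is_sharded = False
    ```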
  - ver217 authored
  - ver217 authored
  - Jiarui Fang authored
  - Jiarui Fang authored
    * init shard param from a shape tuple
    * add more unittests for shard param
    * add set_payload method for ShardedParam
    * [zero] add sharded tensor class
    * polish code
  - Jiarui Fang authored
    * init shard param from a shape tuple
    * add more unittests for shard param
    * add more unittests for sharded param
  - ver217 authored
  - Frank Lee authored
    * added unit test for sharded optimizer
    * refactor for elegance
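    A typical unit test for a sharded optimizer checks it against a plain one: run both on identical models and data, then compare the resulting parameters. A minimal single-process sketch of that pattern, where `sharded_optim_cls` is a hypothetical stand-in for the wrapper under test:

    ```python
    import copy
    import torch

    def test_sharded_optim_matches_dense(sharded_optim_cls, steps=4):
        """The sharded optimizer must produce the same parameters as a
        plain torch.optim.Adam given identical models and inputs."""
        torch.manual_seed(0)
        ref_model = torch.nn.Linear(16, 16)
        test_model = copy.deepcopy(ref_model)

        ref_opt = torch.optim.Adam(ref_model.parameters(), lr=1e-3)
        test_opt = sharded_optim_cls(test_model.parameters(), lr=1e-3)

        for _ in range(steps):
            x = torch.randn(8, 16)
            for model, opt in ((ref_model, ref_opt), (test_model, test_opt)):
                opt.zero_grad()
                model(x).sum().backward()
                opt.step()

        for p_ref, p_test in zip(ref_model.parameters(), test_model.parameters()):
            assert torch.allclose(p_ref, p_test, atol=1e-5)
    ```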
  - Frank Lee authored
  - Jiarui Fang authored
    * add zero1 (#209)
    * add zero1
    * add test zero1
    * update zero stage 1 develop (#212)
    * Implement naive zero3 (#240)
    * naive zero3 works well
    * add zero3 param manager
    * add TODOs in comments
    * add gather full param ctx
    * fix sub-module streams
    * add offload
    * fix bugs of hook and add unit tests
    * fix bugs of hook and add unit tests (#252)
    * add gather full param ctx
    * fix sub-module streams
    * add offload
    * fix bugs of hook and add unit tests
    * polish code and add state dict hook
    * fix bug
    * update unit test
    * refactor the reconstructed zero code
    * clip_grad supports zero3; add unit test
    * add unit test for Zero3ParameterManager
    * [WIP] initialize the shard param class
    * [WIP] Yet another sharded model implementation (#274)
    * [WIP] initialize the shard param class
    * [WIP] Yet another implementation of ShardedModel, using a better hook method
    * torch.concat -> torch.cat
    * fix test_zero_level_1.py::test_zero_level_1 unittest
    * remove the DeepSpeed implementation and refactor for the reconstructed zero module
    * polish zero dp unittests

    Co-authored-by: ver217 <lhx0217@gmail.com>
    Co-authored-by: Frank Lee <somerlee.9@gmail.com>
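    The "gather full param ctx" and hook fixes above revolve around ZeRO-3's core trick: parameters live as shards, and the full tensors are only materialized around each sub-module's forward pass. A minimal sketch of that hook pattern, reusing the hypothetical ShardedTensor/TensorShardStrategy stubs sketched earlier; real code would also swap the gathered payloads into the module's parameters.

    ```python
    import torch.nn as nn

    def register_zero3_hooks(module: nn.Module, strategy, sharded_params):
        """Gather full params just before a sub-module's forward and re-shard
        right after, so only one sub-module is fully materialized at a time.
        `sharded_params` maps each sub-module to its list of ShardedTensors."""

        def pre_forward(mod, inputs):
            for t in sharded_params[mod]:
                strategy.gather(t)   # all-gather this module's shards

        def post_forward(mod, inputs, output):
            for t in sharded_params[mod]:
                strategy.shard(t)    # drop the full copy, keep our slice
            return output

        for sub in module.modules():
            if sub in sharded_params:
                sub.register_forward_pre_hook(pre_forward)
                sub.register_forward_hook(post_forward)
    ```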