- 21 Mar, 2023 1 commit

Zihao authored
* add auto-offload feature
* polish code
* fix sync offload runtime pass bug
* add offload example
* fix offload testing bug
* fix example testing bug

- 07 Mar, 2023 1 commit

YuliangLiu0306 authored
* [hotfix] skip auto checkpointing tests
* fix test name issue

- 28 Feb, 2023 1 commit

YuliangLiu0306 authored

- 23 Feb, 2023 2 commits

YuliangLiu0306 authored

YuliangLiu0306 authored
* [autoparallel] find repeat blocks
* polish
* polish
* polish

- 22 Feb, 2023 2 commits

Boyuan Yao authored
* [autoparallel] patch meta information of torch.where
* [autoparallel] pre-commit modified

Boyuan Yao authored
* [autoparallel] tanh meta information
* [autoparallel] remove redundant code
* [autoparallel] patch meta information of torch.nn.Dropout

- 20 Feb, 2023 1 commit

Boyuan Yao authored
* [autoparallel] tensor related meta information prototype
* [autoparallel] tensor related meta information
* [autoparallel] tensor related meta information
* [autoparallel] tensor related meta information
* [autoparallel] tensor related meta information

- 17 Feb, 2023 1 commit

Boyuan Yao authored
* [autoparallel] embedding metainfo
* [autoparallel] fix function name in test_activation_metainfo
* [autoparallel] undo changes in activation metainfo and related tests

- 15 Feb, 2023 5 commits

YuliangLiu0306 authored

YuliangLiu0306 authored
* [autoparallel] add shard option
* polish

YuliangLiu0306 authored
* [autoparallel] refactor runtime pass
* add unit test
* polish

YuliangLiu0306 authored

YuliangLiu0306 authored

- 13 Feb, 2023 1 commit

Boyuan Yao authored
[autoparallel] Patch meta information of `torch.nn.functional.softmax` and `torch.nn.Softmax` (#2674)
* [autoparallel] softmax metainfo
* [autoparallel] softmax metainfo

- 10 Feb, 2023 1 commit

Boyuan Yao authored
* [autoparallel] layernorm metainfo patch
* [autoparallel] polish test

- 08 Feb, 2023 3 commits

YuliangLiu0306 authored
* [autoparallel] refactor handlers which reshape input tensors
* polish

YuliangLiu0306 authored

Boyuan Yao authored
* [autoparallel] matmul metainfo
* [auto_parallel] remove unused print
* [tests] skip test_matmul_handler when torch version is lower than 1.12.0

- 16 Jan, 2023 1 commit

YuliangLiu0306 authored

- 12 Jan, 2023 1 commit

YuliangLiu0306 authored
* [autoparallel] update binary elementwise handler
* polish

- 11 Jan, 2023 1 commit

YuliangLiu0306 authored

- 03 Jan, 2023 1 commit

YuliangLiu0306 authored

- 30 Dec, 2022 1 commit

YuliangLiu0306 authored

- 28 Dec, 2022 2 commits

YuliangLiu0306 authored
* [autoparallel] record parameter attribute in ColoTracer
* [autoparallel] fix construct_meta_info bug

Boyuan Yao authored
* [fx] metainfo class for auto parallel
* [fx] add unit test for linear metainfo
* [fx] fix bwd param for linear
* [fx] modify unit test
* [fx] modify unit test
* [fx] modify import
* [fx] modify import
* [fx] modify import
* [fx] move meta profiler to auto parallel
* [fx] add conv metainfo class
* [fx] restore profiler
* [fx] restore meta profiler
* [autoparallel] modify unit test
* [fx] modify unit test
* [autoparallel] add batchnorm metainfo class
* [autoparallel] fix batchnorm unit test function declaration
* [fx] restore profiler
* [fx] add relu metainfo class
* [fx] restore profiler
* [autoparallel] modify metainfo input
* [autoparallel] add pooling metainfo
* [autoparallel] add F.linear metainfo generator
* [autoparallel] add binary elementwise metainfo
* [fx] recover profiler
* [autoparallel] fix forward memory calculation
* [autoparallel] modify constants.py
* [autoparallel] remove redundant print
* [autoparallel] add F.conv metainfo
* [autoparallel] linear fix
* [autoparallel] memory estimation for communication actions
* [autoparallel] fix docstring
* [autoparallel] fix variables name
* [autoparallel] attach tensor to metainfo class
* [autoparallel] fix dangerous try except
* [autoparallel] attach memory cost to shape consistency node
* [autoparallel] attach shape consistency node's metainfo to the node
* [autoparallel] remove todo in shape consistency memory estimation
* [autoparallel] fix the annotation

- 27 Dec, 2022 1 commit

YuliangLiu0306 authored

- 26 Dec, 2022 2 commits

YuliangLiu0306 authored

YuliangLiu0306 authored

- 23 Dec, 2022 1 commit

YuliangLiu0306 authored
* [autoparallel] integrate_gpt_related_tests
* polish code
* polish code
* add GPT2Model into runtime test

- 20 Dec, 2022 1 commit

YuliangLiu0306 authored

- 14 Dec, 2022 1 commit

YuliangLiu0306 authored

- 12 Dec, 2022 1 commit

YuliangLiu0306 authored

- 09 Dec, 2022 1 commit

YuliangLiu0306 authored

- 08 Dec, 2022 4 commits

YuliangLiu0306 authored

YuliangLiu0306 authored

YuliangLiu0306 authored
* [autoparallel] add bias addition function class
* polish code
* polish

YuliangLiu0306 authored

- 07 Dec, 2022 1 commit

YuliangLiu0306 authored
* [autoparallel] add embedding handler
* fix bugs

- 06 Dec, 2022 1 commit

YuliangLiu0306 authored