- 15 Mar, 2023 2 commits
- 14 Mar, 2023 3 commits
- HELSON authored
  * [tests] diffuser models in model zoo
  * remove useless code
  * [tests] add diffusers to requirement-test
- YuliangLiu0306 authored
  * [DTensor] refactor dtensor with new components
  * polish
- Frank Lee authored
  * [test] added timm models to test model zoo
  * polish code
- 13 Mar, 2023 1 commit
- Xuanlei Zhao authored
  * refactor memory code
  * don't log free var memory
  * add memory align
  * update chunk target
  * update setting for new memory
  * finish test
  * update tracer
  * fix typo
  * update test
  * add unet test
  * add benchmark
  * update benchmark
  * init
  * support vit
  * move to cpu
  * add cpu benchmark
- 10 Mar, 2023 3 commits
- Super Daniel authored
  * [hotfix] meta tensor default device
  * [siu] add experimental submodules to main branch
  * [analyzer] init
  * [analyzer] readme
  * [test] add test
  * update symbolic_trace.py
  * mark skip tests
  * try except
  * init
  * fix
  * skip
  Co-authored-by: Daniel Shao <superdainiu@MININT-PVARVID.fareast.corp.microsoft.com>
  Co-authored-by: Daniel Shao <superdainiu@Daniels-Mac.local>
- Xuanlei Zhao authored
  * support vit for autochunk
  * support some new ops for vit
  * fix some bugs
  * add test for vit
- YuliangLiu0306 authored
  * [DTensor] refactor LayoutConverter for DTensor
  * polish code
  * polish docstring
- 08 Mar, 2023 2 commits
- Xuanlei Zhao authored
  * refactor memory code
  * don't log free var memory
  * add memory align
  * update chunk target
  * update setting for new memory
  * finish test
  * update tracer
  * fix typo
  * update test
- YuliangLiu0306 authored
- 07 Mar, 2023 2 commits
- YuliangLiu0306 authored
  * [hotfix] skip auto checkpointing tests
  * fix test name issue
- YuliangLiu0306 authored
  * [autoparallel] refactor sharding spec
  * rename function name
- 01 Mar, 2023 1 commit
- YuliangLiu0306 authored
  * [DTensor] implementation of dtensor
  * test layout convert
  * polish
- 28 Feb, 2023 1 commit
- YuliangLiu0306 authored
- 23 Feb, 2023 2 commits
- YuliangLiu0306 authored
- YuliangLiu0306 authored
  * [autoparallel] find repeat blocks
  * polish
- 22 Feb, 2023 2 commits
- Boyuan Yao authored
  * [autoparallel] patch meta information of torch.where
  * [autoparallel] pre-commit modified
- Boyuan Yao authored
  * [autoparallel] tanh meta information
  * [autoparallel] remove redundant code
  * [autoparallel] patch meta information of torch.nn.Dropout
- 20 Feb, 2023 1 commit
- Boyuan Yao authored
  * [autoparallel] tensor related meta information prototype
  * [autoparallel] tensor related meta information
- 17 Feb, 2023 2 commits
- HELSON authored
- Boyuan Yao authored
  * [autoparallel] embedding metainfo
  * [autoparallel] fix function name in test_activation_metainfo
  * [autoparallel] undo changes in activation metainfo and related tests
- 15 Feb, 2023 5 commits
- YuliangLiu0306 authored
- YuliangLiu0306 authored
  * [autoparallel] add shard option
  * polish
- YuliangLiu0306 authored
  * [autoparallel] refactor runtime pass
  * add unit test
  * polish
- YuliangLiu0306 authored
- YuliangLiu0306 authored
- 13 Feb, 2023 2 commits
- Boyuan Yao authored
  [autoparallel] Patch meta information of `torch.nn.functional.softmax` and `torch.nn.Softmax` (#2674)
  * [autoparallel] softmax metainfo
- HELSON authored
- 10 Feb, 2023 1 commit
- Boyuan Yao authored
  * [autoparallel] layernorm metainfo patch
  * [autoparallel] polish test
- 08 Feb, 2023 3 commits
- YuliangLiu0306 authored
  * [autoparallel] refactor handlers which reshape input tensors
  * polish
- YuliangLiu0306 authored
- Boyuan Yao authored
  * [autoparallel] matmul metainfo
  * [auto_parallel] remove unused print
  * [tests] skip test_matmul_handler when torch version is lower than 1.12.0
- 07 Feb, 2023 1 commit
- oahzxl authored
  * add alphafold benchmark
  * rename alphafold test
  * rename tests
  * rename diffuser
  * update transformer
  * update benchmark
  * update bench memory
  * update transformer benchmark
  * support diffuser
  * support unet metainfo prop
  * fix bug and simplify code
  * update linear and support some ops
  * optimize max region search, support conv
  * update unet test
  * support groupnorm and interpolate
  * update flow search
  * add fix dim in node flow
  * fix utils
  * support diffusion
  * update diffuser
  * update chunk search
  * optimize imports
  * finish autochunk
- 02 Feb, 2023 1 commit
- oahzxl authored
- 01 Feb, 2023 1 commit
- oahzxl authored
  Support multi-output chunk search. Previously we only supported single-output chunk search; the new strategy is more flexible and improves performance by a large margin. For transformers, it reduces memory by 40% compared with the previous search strategy.
  1. rewrite the search strategy to support multi-output chunk search
  2. fix many bugs
  3. update tests
- 31 Jan, 2023 1 commit
- oahzxl authored
- 30 Jan, 2023 1 commit
- Frank Lee authored
  * [workflow] only report coverage for changed files
  * polish file
- 29 Jan, 2023 2 commits