- 14 Mar, 2023 (1 commit)
  - YuliangLiu0306 authored:
    - [DTensor] refactor dtensor with new components
    - polish
- 10 Mar, 2023 (1 commit)
  - YuliangLiu0306 authored:
    - [DTensor] refactor LayoutConverter for DTensor
    - polish code
    - polish docstring
- 08 Mar, 2023 (1 commit)
  - YuliangLiu0306 authored
- 07 Mar, 2023 (2 commits)
  - YuliangLiu0306 authored:
    - [hotfix] skip auto checkpointing tests
    - fix test name issue
  - YuliangLiu0306 authored:
    - [autoparallel] refactor sharding spec
    - rename function name
- 01 Mar, 2023 (1 commit)
  - YuliangLiu0306 authored:
    - [DTensor] implementation of dtensor
    - test layout convert
    - polish
- 28 Jan, 2023 (1 commit)
  - HELSON authored:
    - [zero] add strict ddp mode for chunk init
    - [gemini] update gpt example
- 20 Jan, 2023 (1 commit)
  - HELSON authored:
    - [zero] add strict ddp mode
    - [polish] add comments for strict ddp mode
    - [zero] fix test error
- 18 Jan, 2023 (1 commit)
  - HELSON authored
- 09 Jan, 2023 (1 commit)
  - HELSON authored:
    - [gemini] polish code
    - [testing] remove code
    - [gemini] make more robust
- 26 Dec, 2022 (1 commit)
  - HELSON authored:
    - [testing] add beit model
    - [beit] fix bugs
    - [beit] fix bugs
    - [testing] fix bugs
- 06 Dec, 2022 (1 commit)
  - Jiarui Fang authored
- 24 Nov, 2022 (1 commit)
  - Jiarui Fang authored
- 23 Nov, 2022 (1 commit)
  - Genghan Zhang authored:
    - Add mix-gather
    - Add comments
    - Add comments
    - Polish comments
    - Change the global rank assumption
    - Add tests
    - Add two-step tests
    - Fix 10 and 01
    - Skip test because the number of GPUs
- 16 Nov, 2022 (1 commit)
  - Jiarui Fang authored
- 15 Nov, 2022 (1 commit)
  - Jiarui Fang authored
- 14 Nov, 2022 (1 commit)
  - Jiarui Fang authored
- 09 Nov, 2022 (1 commit)
  - Jiarui Fang authored
- 21 Oct, 2022 (1 commit)
  - YuliangLiu0306 authored:
    - [autoparallel] shard param and buffer as expected
    - fix unit test issue
- 19 Oct, 2022 (1 commit)
  - Frank Lee authored:
    - [autoparallel] handled illegal sharding strategy
    - polish code
- 18 Oct, 2022 (1 commit)
  - HELSON authored:
    - add chunk manager init function
    - fix unit tests
    - add comment
    - add flush=True
- 09 Oct, 2022 (1 commit)
  - HELSON authored
- 29 Sep, 2022 (1 commit)
  - YuliangLiu0306 authored
- 26 Sep, 2022 (2 commits)
  - Frank Lee authored:
    - [fix] fixed the collective pattern name for consistency
    - polish code
  - Jiarui Fang authored:
    - This reverts commit 5be118f4.
- 24 Sep, 2022 (1 commit)
  - HELSON authored
- 23 Sep, 2022 (1 commit)
  - YuliangLiu0306 authored:
    - [tensor] use communication autograd func
    - change all-to-all comm spec info
    - rename pattern and distinguish fwd/bwd
    - polish code
- 25 Aug, 2022 (1 commit)
  - YuliangLiu0306 authored
- 19 Aug, 2022 (1 commit)
  - YuliangLiu0306 authored:
    - [tensor] support runtime ShardingSpec apply
    - polish code
    - polish code
- 12 Aug, 2022 (2 commits)
  - YuliangLiu0306 authored:
    - [tensor] shape consistency output transform path and communication cost
    - polish code
  - Frank Lee authored:
    - [tensor] added linear implementation for the new sharding spec
    - polish code
- 10 Aug, 2022 (4 commits)
  - Jiarui Fang authored
  - Jiarui Fang authored
  - Jiarui Fang authored
  - YuliangLiu0306 authored:
    - [tensor] add shape consistency feature to support auto sharding spec transform.
    - [tensor] remove unused argument in simulator, add doc string for target pair.
- 09 Aug, 2022 (2 commits)
  - Jiarui Fang authored
  - Jiarui Fang authored
- 08 Aug, 2022 (1 commit)
  - YuliangLiu0306 authored
- 26 Jul, 2022 (1 commit)
  - HELSON authored
- 25 Jul, 2022 (1 commit)
  - HELSON authored