- 12 Dec, 2022 1 commit
Min Xu authored
[test] CI Python 3.11 tests
* fixed setup.py
* fixed ci config
* fixed ci config's Python 3.11 version
* fixed torch installs on cpu
* update pygit2 for 3.11
* we don't run benchmarks on cpu, so no need to install the benchmark reqs
* update torch install
* try to install torchvision
* numpy version for 3.11
* fix cpu test dependency installation
* pip git install cmd fix
* bypass some tests in 3.11; failures are due to packages they use not yet being updated for 3.11
Co-authored-by: Min Xu <min.xu.public@gmail.com>
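The last bullet (bypassing some tests on 3.11) is typically done with a version-gated skip marker. A minimal sketch, assuming pytest is the test runner; the marker and test names here are hypothetical, not the actual ones from the commit:

```python
import sys

import pytest

# Hypothetical marker: skip on Python 3.11+ while an upstream package
# still lacks 3.11 support, instead of failing the whole CI job.
requires_pre_311 = pytest.mark.skipif(
    sys.version_info >= (3, 11),
    reason="dependency has not been updated for Python 3.11 yet",
)


@requires_pre_311
def test_feature_using_old_dependency():
    # Placeholder body standing in for a test that imports the
    # not-yet-3.11-compatible package.
    assert 1 + 1 == 2
```

Once the dependency publishes 3.11-compatible releases, removing the marker re-enables the test with no other changes.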
- 12 Jun, 2022 1 commit
Crutcher Dunnavant authored
- 27 Oct, 2021 1 commit
Eugen Hotaj authored
Fixes #827. Co-authored-by: Eugen Hotaj <ehotaj@fb.com>
- 22 Oct, 2021 1 commit
Eugen Hotaj authored
auto_shard.py currently uses torch.fx to create a symbolic DAG of operations and linearizes that DAG into an nn.Sequential so it can later be used for model offloading. This works in most cases but runs into issues with certain eager-mode features, such as dynamic conditionals and shape-dependent computation. This PR extends auto_shard.py to first run a preprocessing step that wraps any nn.Module that cannot be traced through. It adds a test for dynamic conditionals and updates existing failing test code. Some immediate extensions to this approach are marked as TODO in the code.
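The wrapping idea can be illustrated with torch.fx directly. A minimal sketch, not the PR's actual implementation: a module with a data-dependent conditional defeats `symbolic_trace`, but a custom `Tracer` that treats such modules as leaves keeps them as single opaque `call_module` nodes in the DAG (module and tracer names below are made up for the example):

```python
import torch
import torch.fx
import torch.nn as nn


class DynamicBranch(nn.Module):
    # Data-dependent control flow: symbolic tracing cannot evaluate
    # `x.sum() > 0` on a Proxy, so torch.fx raises a TraceError here.
    def forward(self, x):
        if x.sum() > 0:
            return x * 2
        return x - 1


class Model(nn.Module):
    def __init__(self):
        super().__init__()
        self.pre = nn.Linear(4, 4)
        self.branch = DynamicBranch()

    def forward(self, x):
        return self.branch(self.pre(x))


class LeafTracer(torch.fx.Tracer):
    """Tracer that treats the given module types as opaque leaf nodes."""

    def __init__(self, leaf_types):
        super().__init__()
        self.leaf_types = leaf_types

    def is_leaf_module(self, m, module_qualified_name):
        return isinstance(m, self.leaf_types) or super().is_leaf_module(
            m, module_qualified_name
        )


# Plain symbolic_trace fails on the dynamic conditional ...
try:
    torch.fx.symbolic_trace(DynamicBranch())
    traced_ok = True
except Exception:
    traced_ok = False

# ... but tracing with the custom tracer succeeds: `branch` appears as a
# single call_module node instead of being traced through.
graph = LeafTracer((DynamicBranch,)).trace(Model())
```

The resulting graph can then be linearized as before, with the wrapped module executed eagerly at runtime.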
- 26 Jun, 2021 1 commit
Pavel Belevich authored
- 11 Jun, 2021 1 commit
anj-s authored
[Offload][feature] Add auto shard functionality to remove requirement of nn.Sequential models. (#695)
* auto wrap functionality
* lint and doc strings
* fix lint errors
* lint errors and version skips
* remove mypy checking and add conditional import
* another math.prod instance
* another import fix
* address comments
* lint errors
* address comments
* fix lint errors
* add placeholder nodes to tracker list
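Once a model has been auto-traced into an ordered list of modules, sharding for offload reduces to grouping that list into a few nn.Sequential chunks. A minimal sketch of that final step, assuming a naive even split; the helper name and splitting policy are assumptions, not fairscale's actual implementation:

```python
import torch
import torch.nn as nn


def shard_into_sequentials(modules, num_shards):
    # Hypothetical helper: group a linearized module list into
    # `num_shards` nn.Sequential shards of roughly equal length.
    size = -(-len(modules) // num_shards)  # ceiling division
    return [
        nn.Sequential(*modules[i : i + size])
        for i in range(0, len(modules), size)
    ]


# A linearized model (what auto-shard tracing would produce).
layers = [nn.Linear(8, 8), nn.ReLU(), nn.Linear(8, 8), nn.ReLU(), nn.Linear(8, 2)]
shards = shard_into_sequentials(layers, 2)

# Running the shards in order is equivalent to running the whole model;
# in an offload setting each shard would be moved to the device in turn.
y = torch.randn(1, 8)
for shard in shards:
    y = shard(y)
```

A real implementation would balance shards by parameter count rather than layer count, which is one of the natural extensions of this sketch.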