- 23 Sep, 2022 1 commit
Min Xu authored
* [fix] better handling of non-flattened params in FSDP; see the detailed comment about the backward-firing case; also minor debugging help in FSDP and a minor fix in FPW's state dict
* [feat] disallow reset_parameters by default
* [feat] add an fsdp_instances API, useful for checking wrapping from user code
* [fix] a one-line fix, but more than a day of debugging
* fixed the case of loading a combined checkpoint with empty FSDP instances
* fixed another bug around state loading: the root/non-root module's full-param caching was stale due to not resharding after forward
* [feat] support .half and .float better
* fixed a bug where gathering optimizer state lost extra keys from the original state_dict
* fixed a test failure in mixed precision
* fixed another bug affecting no_sync grad accumulation
* fixed a bug and a test in FSDP optimizer state
* fixed another corner case
* added a comment
* skip ssd offload tests
* skip the FSDP one for ssd offload

Co-authored-by: Min Xu <min.xu.public@gmail.com>
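A minimal sketch of what the instance-collection check from user code might look like; the helper below is hypothetical, not necessarily the exact fairscale API:

```python
# Hypothetical sketch: collect every FSDP-wrapped module so user code can
# verify its wrapping policy; the real fsdp_instances API may differ.
import torch.nn as nn
from fairscale.nn.data_parallel import FullyShardedDataParallel as FSDP

def fsdp_instances(module: nn.Module) -> list:
    """Return all FSDP instances nested in `module` (including itself)."""
    return [m for m in module.modules() if isinstance(m, FSDP)]

# e.g. after wrapping:  assert len(fsdp_instances(wrapped_model)) == expected
```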
- 12 Jun, 2022 1 commit
Crutcher Dunnavant authored
- 23 Feb, 2022 2 commits
- 22 Feb, 2022 1 commit
anj-s authored
* add benchmarks for fsdp
* fix lint errors
* clean up
* clean up unused flags
* add the benchmarks
* remove unused args
* fix lint errors
* fix lint errors
* update command line
* add support for multiple devices
* try full fp16 mode
* try full fp16 mode
* lint errors
* merge main
* lint errors
* lint errors
* lint error
* update intersphinx mapping for numpy
* update intersphinx mapping for numpy
* skip test
* added golden configs
* use synthetic benchmarks
* fix fn name
* fix cuda device id
* fix verify
* lint fix
- 14 Feb, 2022 1 commit
Min Xu authored
* update pytest versions
* [test] test-related changes
  - upgrade to newer pytorch versions
  - added a function to make tests more deterministic on A100 and TF32
  - fixed some tests so that they are correctly skipped on a single-GPU system
* more fixes
* formatting overly long lines
* format
* better test without triggering a warning
* fix an optim state bug with newer pytorch
  - the adam optimizer seems to return "step" as a singleton tensor in the nightly build
  - this fixes it, assuming a non-tensor value can still be loaded back by the optimizer
* improve oss.py
  - using min_loss for regression checking is a bit more reliable
  - also increased the num epochs from 10 to 12
* small oss.py fix
* Update fairscale/nn/data_parallel/fully_sharded_data_parallel.py

Co-authored-by: Min Xu <min.xu.public@gmail.com>
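A hedged sketch of the "step" fix described above; the function name is illustrative, not the repo's:

```python
# Sketch: newer Adam builds return "step" as a singleton tensor; convert it
# back to a plain number before saving, assuming the optimizer can still
# load a non-tensor value (per the commit message).
import torch

def normalize_adam_step(optim_state: dict) -> dict:
    for param_state in optim_state.get("state", {}).values():
        step = param_state.get("step")
        if isinstance(step, torch.Tensor):
            param_state["step"] = step.item()  # 0-dim tensor -> python scalar
    return optim_state
```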
- 11 Feb, 2022 1 commit
Min Xu authored
* skipping one more test
* formatting
* minor fix and copyright header
* comment

Co-authored-by: Min Xu <min.xu.public@gmail.com>
- 08 Feb, 2022 1 commit
anj-s authored
* update intersphinx mapping for numpy
* update intersphinx mapping for numpy
* update pytorch mapping and disable test
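The intersphinx change above amounts to a Sphinx `conf.py` entry along these lines (the exact URLs used in the repo are an assumption):

```python
# docs/conf.py sketch: point intersphinx at the current numpy/pytorch
# object inventories so cross-references resolve.
extensions = ["sphinx.ext.intersphinx"]

intersphinx_mapping = {
    "numpy": ("https://numpy.org/doc/stable/", None),
    "torch": ("https://pytorch.org/docs/stable/", None),
}
```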
- 13 Jan, 2022 1 commit
tmarkstrum authored
* fixed the padding size of the input tensor for reduce scatter, and fixed an error that assigned the wrong group
* Update fairscale/nn/data_parallel/fully_sharded_data_parallel.py
* added changelog
* fixed some comments
* added a unit test to ensure the reduce_scatter process group size is correct in default cases, falling back to the default process group when the reduce_scatter process group has the wrong size
* throw an error instead of rolling back to the default process group for reduce_scatter_process_group
* Revert "throw an error instead of rolling back to use default process group for reduce_scatter_process_group" (reverts commit eab5620da3b726ea55d3088ae4ca10d94dcdf4d9)
* added a check for None to avoid a unit test failure
* fixed an error to avoid unit test failures

Co-authored-by: Min Xu <24926999+min-xu-ai@users.noreply.github.com>
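A sketch of the size check this history converges on; the function name is illustrative:

```python
# Illustrative check: the reduce_scatter group must match the default
# group's world size; None means FSDP falls back to the default group.
import torch.distributed as dist

def validate_reduce_scatter_group(group) -> None:
    if group is None:
        return  # fall back to the default process group
    expected = dist.get_world_size()
    actual = dist.get_world_size(group=group)
    if actual != expected:
        raise ValueError(
            f"reduce_scatter process group has size {actual}, "
            f"expected the default world size {expected}"
        )
```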
- 07 Jan, 2022 1 commit
tmarkstrum authored
* enable reduce scatter to overlap with other operations
* fixed unit tests and added docstrings for the new FSDP parameters
* fixed more unit tests
* fixed unit tests
* avoided the pickle error on process_group_reduce_scatter
* removed an unnecessary parameter in unit tests
* removed unnecessary prints
* fixed the docstring
* skipped the test_offload unit test because it also fails on the main branch
* removed the enable_reduce_scatter_overlap API parameter
* added a docstring for the default value of the process_group_reduce_scatter parameter
* fixed a syntax bug
* fixed a bug which caused a unit test failure
* removed the all_gather member from the ProcessGroupName enum
* added more comments
* changed the default value of process_group_reduce_scatter from None to ProcessGroupName.reduce_scatter
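An illustrative sketch of the enum-based default described in the last bullet; the real fairscale definition may differ in members and location:

```python
# Illustrative only: an enum default lets FSDP create a dedicated
# reduce-scatter group (for comm/compute overlap) unless the caller
# passes an explicit process group.
from enum import Enum, auto

class ProcessGroupName(Enum):
    default = auto()
    reduce_scatter = auto()  # the all_gather member was removed per above

# FSDP(..., process_group_reduce_scatter=ProcessGroupName.reduce_scatter)
# is now the default, replacing the old default of None.
```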
- 06 Jan, 2022 1 commit
four4fish authored
* FullyShardedDataParallel: only return the full state dict on rank 0
* Add a flag to make rank-0-only behavior optional
* Add tests
* Add docs
* address comments
* update comments
* update torch nightly version
* update torchvision number for the torch nightly dependency
* add changelog
* Update CHANGELOG.md
* Update CHANGELOG.md
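A hedged usage sketch; the flag name `state_dict_on_rank_0_only` follows this PR's description but should be treated as an assumption:

```python
# Sketch: with the rank-0-only flag, only rank 0 materializes the full
# state dict, saving memory and gather traffic on the other ranks.
import torch
import torch.distributed as dist
from fairscale.nn.data_parallel import FullyShardedDataParallel as FSDP

# model = FSDP(module, state_dict_on_rank_0_only=True)  # flag name assumed
# sd = model.state_dict()  # full dict on rank 0, empty elsewhere
# if dist.get_rank() == 0:
#     torch.save(sd, "ckpt.pt")
```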
- 15 Nov, 2021 1 commit
Anupam Bhatnagar authored
* first commit
* sharded scaler hitting nan assertions
* adding a test for the sharded grad scaler without cpu offload
* ddp grad scaler and fsdp sharded grad scaler tests failing
* removing test_output
* fix the no-cpu-offload test
* changing the optimizer from OSS to SGD
* all tests passing, code cleanup pending
* code cleanup
* fix pyproject.toml
* removing .isort.cfg
* running the isort linter
* resolving isort issues
* resolving a black linter issue
* resolving mypy issues
* fix import statement
* fix mypy error
* modifying import statement
* adding a pytorch version requirement
* fixing the pytest skip-test decorator
* apply a version guard for ShardedGradScaler
* removing test_fsdp_grad_scaler
* increasing num_epochs for ShardedGradScaler so that updates are not skipped
* adding support for torch 1.8
* minor edit
* [skip ci] more torch 1.8 changes
* parametrizing the tests
* cleanup code with linters
* [skip ci] update doc string
* [skip ci] addressing some more comments
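A minimal training-loop sketch with `ShardedGradScaler`; the model, optimizer, dataloader, and loss function are assumed to be set up already:

```python
# Sketch: ShardedGradScaler is the FSDP-aware counterpart of
# torch.cuda.amp.GradScaler; it coordinates inf/nan checks across shards.
import torch
from fairscale.optim.grad_scaler import ShardedGradScaler

scaler = ShardedGradScaler()
for inputs, targets in dataloader:              # assumed to exist
    optimizer.zero_grad()
    with torch.cuda.amp.autocast():
        loss = loss_fn(model(inputs), targets)  # model wrapped in FSDP
    scaler.scale(loss).backward()               # scaled to avoid underflow
    scaler.step(optimizer)                      # skips the step on inf/nan
    scaler.update()
```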
- 12 Nov, 2021 1 commit
Anupam Bhatnagar authored
* adding pre-commit files
* applying pre-commit to all files
* adding the no-strict-optional argument to mypy in the circleci config
* fix typo
* updating python versions
* [skip ci] remove extra args
* adding python 3.9
* [skip ci] set the pre-commit version in requirements-dev.txt
* set CACHE_VERSION
* move linters from circleci to github actions
* update python version
* update python version in benchmarks_2
* moving to python 3.9.7
- 27 Oct, 2021 1 commit
anj-s authored
* remove offload dependency on fp16
* update python version for cpu tests
* run CPU tests with an updated PyTorch version
* split changes
* revert tests config
* fix lint errors
* update nightly and test PyTorch versions
* skip a failing multiprocess pipe test
* always skip test
* always skip test
* always skip test
* lint error
* skip unsupported versions
* improve skip message
* lint errors
* modify docs
* add tests
* fix test failures
* modify comments
* fix lint errors
* fix lint errors
- 06 Sep, 2021 1 commit
Min Xu authored
[cleanup] CI test updates; mypy cleanup; partial broadcast_object cleanup; pre-commit documentation (#744)

* changelog; mypy; oss cleanup
* more broadcast_object cleanup in FSDP
* one more mypy fix
* retire pytorch 1.6 from circleci, add a new nightly, add the 1.8 LTS and 1.9 stable releases
* update torch version for LTS
* minor fixes
* update cache key
* trying newer gpu VMs
* bump the cache
* update to gpu.medium, which should be 2 GPUs
* update nightly version
* add pre-commit instructions
* fixed CHANGELOG after merging
* updated to a newer nightly
* retained the older broadcast function in oss.py for older GPUs
* fixed a bug
* added a comment
* fixing a test for pytorch 1.10
* testing a fix
* Update fairscale/optim/oss.py
* Update CONTRIBUTING.md

Co-authored-by: Min Xu <min.xu.public@gmail.com>
- 12 Aug, 2021 1 commit
anj-s authored
[FSDP][feature] Support returning the original parameter names after a model has been wrapped with FSDP (#755)

* checkpoint work
* fix lint issues
* remove debug statement
* remove print
* fix lint errors
* fix lint errors
* fix lint errors
* add comments and fix lint errors
* modified comments and tests
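A heavily hedged sketch of the underlying idea (assuming `flatten_parameters=False`); the actual API added by #755 may look quite different:

```python
# Assumption-laden sketch: recover original parameter names by stripping
# the known FSDP / FlattenParamsWrapper prefixes from named_parameters().
# Only meaningful when parameters are not flattened.
def original_param_names(wrapped_model):
    names = []
    for name, _ in wrapped_model.named_parameters():
        for prefix in ("_fsdp_wrapped_module.", "_fpw_module."):
            name = name.replace(prefix, "")
        names.append(name)
    return names
```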
- 30 Jul, 2021 1 commit
Yanli Zhao authored
Move the final backward callback to the pre-backward hook of the root FSDP instance

Summary: Move the final backward callback to the pre-backward hook of the root FSDP instance, so that it is always attached to the outermost backward call and fired after all backward calls are completed. Also added flags to check that the final backward callback is fired whenever it is required. If the root FSDP is checkpointed and called multiple times in forward, a checkpoint counter is used to make sure the final backward callback is queued inside the last inner backward call as well.

Test Plan: unit tests

* reformat
* nits and unit tests
* address some comments
* replace m with self
* reformat
* nits
* remove the fired flag
* assert state on root only
* comments
* comments
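A sketch of the mechanism being described, using the internal autograd callback queue the way FSDP does (an internal PyTorch API, subject to change; state handling here is simplified):

```python
# Sketch: queue the final-backward callback from the root's pre-backward
# hook so it runs once, after the outermost backward has fully completed.
import torch

state = {"callback_queued": False}

def pre_backward_hook(*unused):
    if not state["callback_queued"]:
        torch.autograd.Variable._execution_engine.queue_callback(final_backward_callback)
        state["callback_queued"] = True

def final_backward_callback():
    state["callback_queued"] = False
    # here: reduce-scatter any remaining grads, reshard full params, etc.
```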
- 26 Jun, 2021 1 commit
Pavel Belevich authored
- 17 May, 2021 1 commit
Quentin Duval authored
* Save FSDP metadata for offline unflattening
* Complete the metadata-saving method with all the information needed to reconstruct a checkpoint offline, and implement the method that reconstructs a consolidated checkpoint from a sharded checkpoint
* Complete the metadata-saving method with all the information needed to reconstruct a checkpoint offline, and implement the method that reconstructs a consolidated checkpoint from a sharded checkpoint
* Add a unit test to show how to use the function
* Code review + improvement of the unit tests
* Code review: extract clean_path
* Make metadata saving and checkpoint consolidation work for flatten_parameters=False
* Add the new unit test file to CI
* Complete changelog and fix mypy issues
* Add support for module buffers in the consolidation of sharded checkpoints
* Better support for module buffers: save them in the metadata
* Refactoring: use a data format for the metadata that is simpler to understand (move from object-of-arrays to array-of-objects format)
* Renaming to make the code clearer
* Code review: in_temporary_directory rework and typo correction
* Renaming

Co-authored-by: Sam Shleifer <sshleifer@gmail.com>
Co-authored-by: QuentinDuval <QuentinDuval@users.noreply.github.com>
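A hedged sketch of the save/consolidate flow this PR describes; the names `local_metadata_dict` and `consolidate_shard_weights` match its intent but are assumptions here:

```python
# Sketch (names assumed): each rank saves its shard plus metadata, then a
# single offline process rebuilds the consolidated state_dict.
import torch
from fairscale.nn.data_parallel import FullyShardedDataParallel as FSDP

# Per rank, at save time:
#   torch.save({"weights": model.local_state_dict(),
#               "meta": model.local_metadata_dict()}, f"shard_{rank}.pt")

# Offline, on one process:
#   shards = [torch.load(f"shard_{r}.pt") for r in range(world_size)]
#   full_sd = FSDP.consolidate_shard_weights(
#       shard_weights=[s["weights"] for s in shards],
#       shard_metadata=[s["meta"] for s in shards],
#   )
```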
- 12 May, 2021 1 commit
anj-s authored
* rename files
* add newly renamed file
* rename and move checkpoint-activations related files
* add test files to ci list
* fix lint errors
* modify docs
* add changelog
* retain old path for now
* fix lint errors
* add another import test case
* fix merge conflict
* add missing test file
- 13 Apr, 2021 1 commit
Sam Shleifer authored
- 08 Apr, 2021 1 commit
Sam Shleifer authored
- 07 Apr, 2021 1 commit
Myle Ott authored
- 04 Apr, 2021 1 commit
Sam Shleifer authored
- 20 Mar, 2021 1 commit
Myle Ott authored
* Add a new test for weight init (fails)
* Set FSDP.compute_device so summon_full_params works before the module moves to CUDA
* Override FSDP.apply to enable custom weight init
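A short sketch of the resulting workflow: because `FSDP.apply` gathers full parameters first, standard init functions work on a wrapped model:

```python
# Sketch: FSDP overrides .apply() to run the function under full
# (unsharded) parameters, so custom weight init behaves as on a plain module.
import torch.nn as nn
from fairscale.nn.data_parallel import FullyShardedDataParallel as FSDP

def init_weights(m: nn.Module) -> None:
    if isinstance(m, nn.Linear):
        nn.init.xavier_uniform_(m.weight)

# wrapped = FSDP(my_model)     # my_model is a placeholder
# wrapped.apply(init_weights)  # works even before moving to CUDA
```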
- 09 Mar, 2021 1 commit
Myle Ott authored
- 06 Mar, 2021 1 commit
Myle Ott authored
- 04 Mar, 2021 1 commit
Sam Shleifer authored
- 03 Mar, 2021 1 commit
Min Xu authored
- 02 Mar, 2021 1 commit
Myle Ott authored
- 01 Mar, 2021 1 commit
Min Xu authored
* [chores]: CI py39 on GPU and more efficiency
* add test list files
* fix
* add test list files
* split benchmark run into 2 runs
* fix 1.8 version and balance benchmarks
* fix
* fix
* fix
* fix
* recording tests
* py39 install fix
* test again
* move tests
* reorg tests
* skip tests for torch 1.8 due to an upstream bug
* removed __init__.py from tests since it confuses pytest
* Revert "removed __init__.py from tests since it confuses pytest" (reverts commit 7e156ba33dfaa5ed052031780613ec0cb57a45b0)
* don't include __init__ in file list
* notes on __init__.py and added missing ones
* fixed mypy in a test file
* balance test runtime
* better pip install
* balance more
* pip fix
* balance
* balance more, all tests should finish within 20m now
* minor license update
* trying cu102
* more doc and addressed Ben's comments
* debugging
* debugging...
- 27 Feb, 2021 1 commit
Min Xu authored
* [fix] FSDP corner case where all params are in the children
* lint
* fix
* tradeoff
* fix doc build
* review comments
- 26 Feb, 2021 2 commits
- 24 Feb, 2021 1 commit
Myle Ott authored
- 23 Feb, 2021 1 commit
Myle Ott authored
Recent work by [Microsoft](https://arxiv.org/abs/1910.02054) and [Google](https://arxiv.org/abs/2004.13336) has shown that data parallel training can be made significantly more efficient by sharding the model parameters and optimizer state across data parallel workers. These ideas are encapsulated in the new **`FullyShardedDataParallel` (FSDP)** wrapper, which is a drop-in replacement for PyTorch's `DistributedDataParallel` (DDP) wrapper.

Compared to PyTorch DDP:
* FSDP shards parameters (FP16 + FP32) and optimizer state across data parallel GPUs
* FSDP with `reshard_after_forward=False` has the same communication cost as PyTorch DDP and is similar to ZeRO-2
* FSDP with `reshard_after_forward=True` increases total communication by 50% and is similar to ZeRO-3:
  * all-gather parameters at the start of the forward pass and the start of the backward pass
  * reduce-scatter grads at the end of the backward pass

Co-authored-by: Min Xu <24926999+min-xu-ai@users.noreply.github.com>
Co-authored-by: Sam Shleifer <sshleifer@gmail.com>
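A minimal usage sketch of the wrapper described above (process-group setup elided; `MyModel` is a placeholder):

```python
# Sketch: FSDP as a drop-in DDP replacement. reshard_after_forward=True is
# the ZeRO-3-like mode (more communication, lower peak memory).
import torch
from fairscale.nn.data_parallel import FullyShardedDataParallel as FSDP

# torch.distributed.init_process_group(backend="nccl")  # assumed done
model = FSDP(MyModel().cuda(), reshard_after_forward=True)
# Then train exactly as with DDP: forward, loss.backward(), optimizer.step()
```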