- 29 Jun, 2022 1 commit
Min Xu authored
Co-authored-by: Min Xu <min.xu.public@gmail.com>
- 12 Jun, 2022 1 commit
Crutcher Dunnavant authored
- 26 May, 2022 1 commit
Crutcher Dunnavant authored
- 02 May, 2022 1 commit
Paul Johnson authored
[FSDP] ssd_offload fixing backward path (grad_fn) for SsdFlatParameter and SsdFlatParameterView (#974)
* [FSDP] fixing backward path for SsdFlatParameter and SsdFlatParameterView when overriding .data
* Get ssd_offload unit tests passing
* [FSDP] get all test_fsdp_offload tests passing w/ ssd_offload on
* Update changelog
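For context on why overriding `.data` can break the backward path: swapping a parameter's `.data` replaces its storage without autograd's knowledge, so views created earlier still alias the old storage. A minimal PyTorch sketch of the pitfall (illustrative only, not the fairscale fix):

```python
import torch

# A flat parameter and a view into it, similar to what a flatten-params
# wrapper hands back to the wrapped module.
flat = torch.nn.Parameter(torch.zeros(4))
view = flat[:2]          # shares the original storage; carries a grad_fn chain

# Swapping .data points `flat` at new storage without autograd noticing;
# `view` is left stranded on the OLD storage.
flat.data = torch.ones(4)
print(flat)              # all ones: the parameter now uses the new storage
print(view)              # still zeros: the view never sees the swap
```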
- 26 Apr, 2022 1 commit
Min Xu authored
Co-authored-by: Min Xu <min.xu.public@gmail.com>
- 06 Apr, 2022 1 commit
Paul Johnson authored
Improvements to ssd_offload to support pickling/unpickling SsdTensorHandle (and derived classes) (#964)
Verified that FSDP-wrapped models using ssd_offload checkpoint save and restore correctly.
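For illustration, pickling a handle whose payload lives on SSD typically means serializing only the file location and metadata, not the in-memory tensor. A hypothetical sketch of that pattern (the class and attribute names here are invented, not fairscale's API):

```python
import pickle

class DiskTensorHandle:
    """Hypothetical stand-in for an SSD-backed tensor handle: the payload
    lives in a file; only its location and shape need to survive pickling."""

    def __init__(self, filename: str, shape: tuple):
        self.filename = filename
        self.shape = shape
        self._cached = None  # in-memory tensor, populated on demand

    def __getstate__(self):
        # Serialize metadata only; the tensor data stays in the SSD file,
        # so checkpoints remain small and restore re-reads from disk.
        state = self.__dict__.copy()
        state["_cached"] = None
        return state

    def __setstate__(self, state):
        self.__dict__.update(state)

handle = DiskTensorHandle("params.bin", (1024,))
restored = pickle.loads(pickle.dumps(handle))  # round-trips metadata only
```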
- 14 Feb, 2022 1 commit
Min Xu authored
* update pytest versions
* [test] test related changes
  - upgrade to newer pytorch versions
  - added function to make tests more deterministic on A100 and TF32
  - fixed some tests so that they are correctly skipped on a single GPU system
* more fixes
* formatting overly long lines
* format
* better test without triggering a warning
* fix an optim state bug with newer pytorch
  - the adam optimizer seems to return "step" as a singleton tensor now in the nightly build
  - this fixes it, assuming a non-tensor value can still be loaded back by the optimizer
* improve oss.py
  - using min_loss for regression checking is a bit more reliable
  - also increased the num epochs from 10 to 12
* small oss.py fix
* Update fairscale/nn/data_parallel/fully_sharded_data_parallel.py

Co-authored-by: Min Xu <min.xu.public@gmail.com>
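The optim-state fix above is a classic version-compatibility shim: newer PyTorch builds store Adam's `step` as a 0-dim tensor, while older checkpoints hold a plain int. A hedged sketch of the kind of normalization involved (not fairscale's exact code):

```python
import torch

def normalize_adam_step(state_value):
    # Newer PyTorch stores Adam's "step" as a singleton tensor; older
    # checkpoints store a plain int. Coerce to a plain number so the
    # optimizer state loads back under either build.
    if isinstance(state_value, torch.Tensor):
        return state_value.item()
    return state_value

# e.g. applied over a loaded optimizer state dict:
# for param_state in optim_state["state"].values():
#     param_state["step"] = normalize_adam_step(param_state["step"])
```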
- 28 Jan, 2022 1 commit
Min Xu authored
* [feat] add CosFace paper's LMCL to MEVO
  - added baseline algorithm to the reference kernel
  - added MEVO version of LMCL
  - added unit test to verify it is correct with respect to the reference as well as its memory usage
* updated changelog

Co-authored-by: Min Xu <min.xu.public@gmail.com>
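For readers unfamiliar with LMCL: CosFace's large margin cosine loss subtracts a fixed margin `m` from the target class's cosine logit before a scaled softmax. A hedged sketch of the baseline formulation (not the MEVO kernel; the function name and defaults are illustrative):

```python
import torch
import torch.nn.functional as F

def lmcl_loss(features: torch.Tensor, weight: torch.Tensor,
              labels: torch.Tensor, s: float = 30.0, m: float = 0.35) -> torch.Tensor:
    """Baseline large margin cosine loss from the CosFace paper; `s` and `m`
    are the scale and margin hyperparameters. Illustrative sketch only."""
    # Cosine similarities between L2-normalized features (N, D) and
    # class weights (C, D) -> logits of shape (N, C).
    cos = F.linear(F.normalize(features), F.normalize(weight))
    # Subtract the margin from the target class's cosine only.
    one_hot = F.one_hot(labels, num_classes=cos.size(1)).to(cos.dtype)
    return F.cross_entropy(s * (cos - m * one_hot), labels)
```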
- 07 Jan, 2022 1 commit
tmarkstrum authored
* enable reduce scatter overlap with other operations
* fixed unit tests and added docstrings for the new parameters for fsdp
* fixed more unit tests
* fixed unit tests
* avoided the pickle error on process_group_reduce_scatter
* removed an unnecessary parameter in unit tests
* removed unnecessary prints
* fixed the docstring
* skipped the test_offload unit test because this unit test failed in the main branch
* removed the enable_reduce_scatter_overlap API parameter
* added docstring for the default value of the process_group_reduce_scatter parameter
* fixed a syntax bug
* fixed a bug which caused a unit test failure
* removed the all_gather in the ProcessGroupName enum
* added more comments
* changed the default value of process_group_reduce_scatter from None to ProcessGroupName.reduce_scatter
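The idea behind a dedicated reduce-scatter process group is that each group gets its own communicator, so its collectives can run concurrently with those issued on the default group. A minimal sketch, assuming a NCCL backend and a `torchrun` launch (this shows the general mechanism, not fairscale's internals):

```python
import torch
import torch.distributed as dist

# Minimal sketch, assuming `torchrun --nproc_per_node=<N>` and NCCL.
dist.init_process_group("nccl")
rank, world = dist.get_rank(), dist.get_world_size()
torch.cuda.set_device(rank)

# A dedicated group gives reduce-scatter its own communicator, so it can
# overlap with all-gathers issued on the default (WORLD) group.
rs_group = dist.new_group(ranks=list(range(world)))

grads = [torch.ones(4, device="cuda") for _ in range(world)]  # one chunk per rank
out = torch.empty(4, device="cuda")
dist.reduce_scatter(out, grads, group=rs_group)  # can overlap WORLD collectives
```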
- 05 Jan, 2022 1 commit
Paul Johnson authored
* Enabling ssd_offload training and test via tests/nn/data_parallel/test_fsdp_offload.py
* Removed unused classes: SsdBuffer, SsdTensorHandleView, SsdParameter, SsdTensor
* Enhance test coverage of test_ssd_offloading_train_flatten_params_wrapper
* Modifications from PR #887 review comments
* Update Changelog
- 13 Dec, 2021 1 commit
Min Xu authored
- During eval, we will fall back to just the output projection without fusing
- added unit test to ensure the shape is correct
- 12 Nov, 2021 1 commit
Anupam Bhatnagar authored
* adding pre-commit files
* applying pre-commit to all files
* adding no-strict-optional argument to mypy in circle ci config
* fix typo
* updating python versions
* [skip ci] remove extra args
* adding python 3.9
* [skip ci] set pre-commit version in requirements-dev.txt
* set CACHE_VERSION
* move linters from circleci to github actions
* update python version
* update python version in benchmarks_2
* moving to python 3.9.7
- 08 Nov, 2021 2 commits
anj-s authored
* update release notes
* initial commit
* lint cleanup etc.
* helper functions; lint errors
* lint errors
* lint errors
* add back the boolean for named_parameters
* address comments and fix lint
* remove unused functions and class
* remove unused state
Benjamin Lefaudeux authored
Add SlowMo Distributed Data Parallel for clusters with slow interconnects
Co-authored-by: Vinayak Tantia <tantia.vinayak1@gmail.com>
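SlowMo (Wang et al., 2019) layers a slow outer-momentum update on top of infrequent exact averaging, so replicas communicate every `tau` local steps instead of every step. A heavily hedged sketch of one outer step as described in the paper (this is not the fairscale SlowMoDistributedDataParallel API, and the hyperparameter defaults are illustrative):

```python
import torch
import torch.distributed as dist

@torch.no_grad()
def slowmo_outer_step(params, anchors, slow_mom, base_lr,
                      slow_lr=1.0, beta=0.5):
    """One SlowMo outer step, run every `tau` local optimizer steps.
    `anchors` holds each parameter's value from the previous outer step."""
    for p, x0, u in zip(params, anchors, slow_mom):
        # Exact-average the drifted replicas.
        dist.all_reduce(p.data)
        p.data /= dist.get_world_size()
        # Slow momentum: u <- beta * u + (x0 - x_avg) / base_lr
        u.mul_(beta).add_(x0 - p.data, alpha=1.0 / base_lr)
        # Slow update from the anchor: x <- x0 - slow_lr * base_lr * u
        p.data.copy_(x0 - slow_lr * base_lr * u)
        x0.copy_(p.data)  # new anchor for the next outer step
```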
- 05 Nov, 2021 1 commit
Min Xu authored
* [feat] MEVO kernel
  - initial import from min/softmax and min/testing branches
  - need to rename and further cleanup
* only test with newer pytorch
* renamed and added comments and code cleanup
* rename and reduce test memory
* testing
* minor fixing
* fixing
* more fix
* changelog
* more 1.7 and 1.8 paper cuts
* remove dead code
* addressed Benjamin's comments
* addressed more comments

Co-authored-by: Min Xu <min.xu.public@gmail.com>
- 01 Nov, 2021 1 commit
anj-s authored
* add doc strings
* add lower level SSD APIs and tests
* add the test to the list to be run
* remove unused imports
* more doc string changes
* fix lint errors
- 27 Oct, 2021 1 commit
Eugen Hotaj authored
Fixes #827.
Co-authored-by: Eugen Hotaj <ehotaj@fb.com>
- 22 Oct, 2021 1 commit
Eugen Hotaj authored
auto_shard.py currently uses torch.fx to create a symbolic DAG of operations and linearizes that DAG into an nn.Sequential so it can later be used for model offloading. This works in most cases but runs into issues with certain eager-mode features, such as dynamic conditionals and shape-dependent computation. This PR extends auto_shard.py to first run a preprocessing step which wraps any nn.Module that cannot be traced through. It adds a test for dynamic conditionals and updates existing failing test code. Some immediate extensions to this approach are marked as TODO in the code.
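As a sketch of the wrapping idea: torch.fx lets a custom Tracer declare modules as leaves, so data-dependent control flow inside them is never symbolically traced. A minimal sketch, assuming the untraceable modules are identified by type (illustrative, not auto_shard.py's actual mechanism):

```python
import torch
import torch.fx

class LeafWrappingTracer(torch.fx.Tracer):
    """Treat selected module types as opaque leaf calls instead of tracing
    into their (possibly dynamic) forward."""

    def __init__(self, leaf_types):
        super().__init__()
        self.leaf_types = leaf_types

    def is_leaf_module(self, m, module_qualified_name):
        # Leaf modules become single call_module nodes in the graph, so
        # their eager-mode control flow is never traced through.
        return isinstance(m, self.leaf_types) or super().is_leaf_module(m, module_qualified_name)

class Dynamic(torch.nn.Module):
    def forward(self, x):
        return x.relu() if x.sum() > 0 else x  # data-dependent branch: untraceable

model = torch.nn.Sequential(torch.nn.Linear(4, 4), Dynamic())
graph = LeafWrappingTracer(leaf_types=(Dynamic,)).trace(model)
gm = torch.fx.GraphModule(model, graph)  # Dynamic appears as one opaque node
print(gm.graph)
```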
- 21 Oct, 2021 1 commit
anj-s authored
* update python version for cpu tests
* run CPU tests with updated PyTorch version
* update nightly and test PyTorch versions
* skip failing multiprocess pipe test
* always skip test
* always skip test
* always skip test
* lint error
* skip unsupported versions
* improve skip message
* lint errors
- 12 Sep, 2021 1 commit
Darryl Barnhart authored
* [fix] FSDP intra-backwards gradient accumulation.
  Ensure gradient reduction accumulates into the unsharded gradient tensor within a backwards pass. This matters when an FSDP module is called multiple times within a forward pass, and reduction is _not_ deferred using activation checkpoint forward counters, bucketing or some other mechanism. Closes #780
* [refactor] Remove forward counters. Comments.
  Removed forward counters from the activation checkpointing utility, now that FSDP does not require them for correct operation. Added a more detailed comment about memory usage behaviour with gradient reduction.
* [refactor] Delete deprecated forward counter usage.
* [refactor] Add state assertion at the end of the pre-backward hook.
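The invariant behind the fix: when a module's backward fires more than once within a single pass, each newly reduced gradient must be accumulated into whatever gradient is already present rather than overwriting it. A hedged sketch of that invariant (the hook below is hypothetical, not fairscale's code):

```python
import torch

def on_gradient_reduced(param: torch.nn.Parameter, reduced: torch.Tensor) -> None:
    # Hypothetical post-reduction hook: if this parameter already received a
    # gradient earlier in the same backward pass (e.g. the module was called
    # twice in forward), accumulate instead of overwriting.
    if param.grad is None:
        param.grad = reduced
    else:
        param.grad += reduced
```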
- 28 Jun, 2021 1 commit
Mehdi Mirzazadeh authored
* fixing bug in setting dependencies in partition handler
* modifying unit test so it requires the fix
* black
- 26 Jun, 2021 1 commit
Pavel Belevich authored
- 25 Jun, 2021 2 commits
Mehdi Mirzazadeh authored
Mehdi Mirzazadeh authored
* Preparing pipeline for newer versions of pytorch
* updated error message
- 22 Jun, 2021 1 commit
Pavel Belevich authored
* Update torch to 1.9.0.dev20210614+cu102
* Update config.yml
* Update config.yml
* Update setup.py
* Update config.yml
* Update config.yml
* Update config.yml
* Update config.yml
- 11 Jun, 2021 1 commit
anj-s authored
[Offload][feature] Add auto shard functionality to remove requirement of nn.Sequential models. (#695)
* auto wrap functionality
* lint and doc strings
* fix lint errors
* lint errors and version skips
* remove mypy checking and add conditional import
* another math.prod instance
* another import fix
* address comments
* lint errors
* address comments
* fix lint errors
* add placeholder nodes to tracker list
- 27 May, 2021 1 commit
msbaines authored
This change also ensures that we calculate running_{mean,var} correctly when wrapped.
- 14 May, 2021 1 commit
msbaines authored
- 07 May, 2021 1 commit
msbaines authored
* [feat] experimental.nn.SyncBatchNorm: initial commit

Fast/simple re-implementation of SyncBatchNorm. When profiling SSL Vision, I was seeing a majority of cycles spent in SyncBatchNorm. With this change, I see a 10% to 20% speedup on the model I was profiling.

When running benchmarks/experimental/sync_batchnorm.py on 8 x V100, I get a 6x speedup:

<class 'torch.nn.modules.batchnorm.BatchNorm2d'>
Elapsed time is 0.08709120750427246
Elapsed time is 0.12632274627685547
Elapsed time is 0.14095258712768555
Elapsed time is 0.16529417037963867
Elapsed time is 0.1419970989227295
Elapsed time is 0.15166854858398438
Elapsed time is 0.12000870704650879
Elapsed time is 0.17534875869750977

<class 'torch.nn.modules.batchnorm.SyncBatchNorm'>
Elapsed time is 2.5087168216705322
Elapsed time is 2.497001886367798
Elapsed time is 2.5204885005950928
Elapsed time is 2.526789903640747
Elapsed time is 2.5080230236053467
Elapsed time is 2.524489641189575
Elapsed time is 2.513214588165283
Elapsed time is 2.5359973907470703

<class 'fairscale.experimental.nn.sync_batchnorm.SyncBatchNorm'>
Elapsed time is 0.4126114845275879
Elapsed time is 0.39051294326782227
Elapsed time is 0.40685415267944336
Elapsed time is 0.4159870147705078
Elapsed time is 0.42383885383605957
Elapsed time is 0.4080159664154053
Elapsed time is 0.41202712059020996
Elapsed time is 0.42400121688842773
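The core of any SyncBatchNorm is sharing per-GPU sufficient statistics so every rank normalizes with global batch statistics; packing count, sum, and sum-of-squares into one tensor keeps communication to a single all-reduce per layer. A simplified forward-pass sketch, assuming NCHW input and an initialized process group (not the fairscale kernel itself):

```python
import torch
import torch.distributed as dist

def sync_batch_norm_forward(x, weight, bias, eps=1e-5):
    """Simplified SyncBatchNorm forward for NCHW input; illustrative only."""
    C = x.size(1)
    # Local sufficient statistics over all dims except channels.
    count = torch.tensor([x.numel() / C], device=x.device)
    total = x.sum(dim=(0, 2, 3))
    total_sq = (x * x).sum(dim=(0, 2, 3))
    # One fused all-reduce of [count, sum, sum_sq] per layer.
    packed = torch.cat([count, total, total_sq])
    dist.all_reduce(packed)
    count, total, total_sq = packed[0], packed[1:1 + C], packed[1 + C:]
    mean = total / count
    var = total_sq / count - mean * mean  # E[x^2] - E[x]^2
    x_hat = (x - mean[None, :, None, None]) * torch.rsqrt(var + eps)[None, :, None, None]
    return x_hat * weight[None, :, None, None] + bias[None, :, None, None]
```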
- 28 Apr, 2021 1 commit
Mehdi Mirzazadeh authored
* adding auto graph generation for distributed pipeline
* ignore trace.py for mypy for now, since it needs pytorch 1.8
* fixing tests
* simplifying graph api
* remove unused debug utilities
* use inspect to find argument lists
* use sharded linear layer
* flake8
* comment
* polishing
* polishing
- 15 Apr, 2021 1 commit
anj-s authored
[fix] Revert change that removed the option to run OffloadModel without activation checkpointing. (#608)
* revert change made
* add tests and revert sync shard changes
* add tests
* remove file checked in by error
* inline var
* fix lint errors
* add checkpoint activation
* fix mypy
* use a bigger model
* modify tests for now
* resolve conflicts

Co-authored-by: Anjali Sridhar <anj@devfair0443.h2.fair>
- 13 Apr, 2021 1 commit
Mehdi Mirzazadeh authored
Replacing the multi-process pipe implementation with a more flexible one.
Initial implementation of proposal pytorch/pytorch#55256
- 31 Mar, 2021 2 commits
- 29 Mar, 2021 1 commit
msbaines authored
- 28 Mar, 2021 1 commit
msbaines authored
- 19 Mar, 2021 2 commits
- 04 Mar, 2021 1 commit
Siddharth Goyal authored
* Fix ampnet unit test by adding delegate object
* Remove comments
- 01 Mar, 2021 1 commit
Min Xu authored
* [chores]: CI py39 on GPU and more efficiency
* add test list files
* fix
* add test list files
* split benchmark run into 2 runs
* fix 1.8 version and balance benchmarks
* fix
* fix
* fix
* fix
* recording tests
* py39 install fix
* test again
* move tests
* reorg tests
* skip tests for torch 1.8 due to an upstream bug
* removed __init__.py from tests since it confuses pytest
* Revert "removed __init__.py from tests since it confuses pytest"
  This reverts commit 7e156ba33dfaa5ed052031780613ec0cb57a45b0.
* don't include __init__ in file list
* notes on __init__.py and added missing ones
* fixed mypy in a test file
* balance test runtime
* better pip install
* balance more
* pip fix
* balance
* balance more, all tests should finish within 20m now
* minor license update
* trying cu102
* more doc and addressed Ben's comments
* debugging
* debugging...