- 15 May, 2022 1 commit
John Reese authored
Summary: Applies new import merging and sorting from µsort v1.0. When merging imports, µsort will make a best-effort attempt to move associated comments to match merged elements, but there are known limitations due to the dynamic nature of Python and developer tooling. These changes should not produce any dangerous runtime changes, but may require touch-ups to satisfy linters and other tooling. Note that µsort uses case-insensitive, lexicographical sorting, which results in a different ordering compared to isort. This provides a more consistent sorting order, matching the case-insensitive order used when sorting import statements by module name, and ensures that "frog", "FROG", and "Frog" always sort next to each other. For details on µsort's sorting and merging semantics, see the user guide: https://usort.readthedocs.io/en/stable/guide.html#sorting Reviewed By: lisroach Differential Revision: D36402205 fbshipit-source-id: a4efc688d02da80c6e96685aa8eb00411615a366
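The ordering behavior called out above can be illustrated with plain Python; this is only a sketch of case-insensitive, lexicographical sorting, not µsort itself:

```python
# Minimal illustration (not µsort): a case-insensitive sort key keeps
# "frog", "FROG", and "Frog" next to each other, unlike the default sort.
modules = ["FROG", "apple", "frog", "Banana", "Frog"]

print(sorted(modules))                    # ['Banana', 'FROG', 'Frog', 'apple', 'frog']
print(sorted(modules, key=str.casefold))  # ['apple', 'Banana', 'FROG', 'frog', 'Frog']
```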

- 12 May, 2022 1 commit
John Reese authored
Summary: Applies the black-fbsource codemod with the new build of pyfmt. paintitblack Reviewed By: lisroach Differential Revision: D36324783 fbshipit-source-id: 280c09e88257e5e569ab729691165d8dedd767bc

- 16 Mar, 2022 1 commit
Ananth Subramaniam authored
Reviewed By: kazhang Differential Revision: D34669519 fbshipit-source-id: 8cfee968104c823a55960f2730d8e888ac1f298e

- 08 Mar, 2022 1 commit
Ananth Subramaniam authored
Reviewed By: tangbinh Differential Revision: D34669294 fbshipit-source-id: c87bc1d4c589518f7c9fc21e6dfe27b03e700b6d

- 23 Feb, 2022 1 commit
Binh Tang authored
Summary: We proactively remove references to the deprecated DDP accelerator to prepare for the breaking changes following the release of PyTorch Lightning 1.6 (see T112240890). Differential Revision: D34295318 fbshipit-source-id: 7b2245ca9c7c2900f510722b33af8d8eeda49919
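For context, a hedged sketch of the migration this prepares for; the exact Trainer arguments below are assumptions based on the Lightning 1.6 deprecation cycle, not code from this diff:

```python
from pytorch_lightning import Trainer

# Before (deprecated): DDP selected through the accelerator value
# trainer = Trainer(accelerator="ddp", gpus=2)

# After: DDP selected through the strategy argument
trainer = Trainer(strategy="ddp", accelerator="gpu", devices=2)
```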

- 29 Dec, 2021 1 commit
Yanghan Wang authored
Summary: DDPPlugin has been renamed to DDPStrategy (as part of https://github.com/PyTorchLightning/pytorch-lightning/issues/10549), causing OSS CI to fail. Simply skip the import to unblock CI, since the DDP feature is not used in the test. Reviewed By: kazhang Differential Revision: D33351636 fbshipit-source-id: 7a1881c8cd48d9ff17edd41137d27a976103fdde
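A rough sketch of the kind of import guard this describes (illustrative only, not the exact diff):

```python
# Tolerate the DDPPlugin -> DDPStrategy rename so test imports succeed on
# newer Lightning releases; DDP itself is not exercised in the test.
try:
    from pytorch_lightning.plugins import DDPPlugin  # name used before the rename
except ImportError:
    DDPPlugin = None  # renamed to DDPStrategy in newer releases; skip it here
```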

- 18 Nov, 2021 1 commit
Ananth Subramaniam authored
Summary:
### New commit log messages
fa0ed17f8 remove deprecated train_loop (#10482)

Reviewed By: kandluis Differential Revision: D32454980 fbshipit-source-id: a35237dde06cc9ddac5373b75992ce88a6771c76

- 28 Oct, 2021 1 commit
Kai Zhang authored
Summary: In the quantization callback, we prepare the model with the FX quantization API and only use the prepared model in training. However, when training with DDP, the parameters in the original model still require grad, causing an unused-parameters RuntimeError. Previously, the Lightning trainer trained the model with the find_unused_parameters flag, but if a user manually disables it, they hit the runtime error. In this diff, the parameters in the original model are frozen. We could consider deleting the original model after preparation to save memory, but we might have to make some assumptions about the Lightning module structure, for example that `.model` is the original model, so that we could `delattr(pl_module, "model")`. Reviewed By: wat3rBro Differential Revision: D31902368 fbshipit-source-id: 56eabb6b2296278529dd2b94d6aa4c9ec9e9ca6b
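A minimal sketch of the freezing step described above; the function name is illustrative and `.model` is only the example attribute mentioned in the summary, not the actual callback code:

```python
def freeze_origin_model(pl_module):
    # Only the FX-prepared copy is trained; freeze the original module's
    # parameters so DDP does not report them as unused and raise.
    for param in pl_module.model.parameters():  # assumes `.model` holds the original model
        param.requires_grad = False
```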

- 06 Oct, 2021 1 commit
Supriya Rao authored
Summary: Pull Request resolved: https://github.com/facebookresearch/d2go/pull/124 Update callsites from torch.quantization to torch.ao.quantization Reviewed By: z-a-f, jerryzh168 Differential Revision: D31286125 fbshipit-source-id: ef24ca87d8db398c65bb5b89f035afe0423a5685
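An illustrative before/after of the namespace migration; the imported symbol is just an example, not necessarily one touched by this change:

```python
# Old callsite:
# from torch.quantization import get_default_qconfig
# New callsite:
from torch.ao.quantization import get_default_qconfig
```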

- 12 May, 2021 1 commit
Luis Perez authored
Synchronize PyTorchLightning/pytorch-lightning (revision 7b283e3c@master) to github/third-party/PyTorchLightning/pytorch-lightning

Summary:
# Manual
- remove fixmes in `model_checkpoint.py`, `parameter_monitor.py`, `test_quantization.py`, and `speed_monitor.py` now that `Trainer` is properly annotated.
- update `test_quantization.py` to use `trainer.train_loop.global_step` instead of `trainer.global_step`, which is now read-only.
- update `loop_callback.py` to read `batch_idx` from `train_loop` (it is no longer available directly).

# Automatic
### New commit log messages
7b283e3c Bugfix/Multiple dataloaders (#7433)
d7c44cc6 Docs: sync chlog 1.3.1 (#7478)
fdf50a5e Mark certain Trainer APIs as protected (#7420)
ad9118f0 remove trainer hidden state | sanity refactor [1 / n] (#7437)
4a1134db Log epoch metrics before firing the `on_evaluation_end` hook (#7272)
b65ae794 Automatically check `DataModule.has_{setup,teardown,prepare_data}` [2/2] (#7238)
8660d8cf [pre-commit.ci] pre-commit autoupdate (#7475)
f6fe715e Fix Sphinx argument deprecation (#7464)

Reviewed By: shuyingsunshine21 Differential Revision: D28353491 fbshipit-source-id: 98b87d99e2f09b47b07270858fcbdb5d5299730b
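The manual `global_step`/`batch_idx` touch-ups amount to something like the following; a sketch only, assuming a `Trainer` instance, not the actual test code:

```python
def advance_global_step(trainer, step):
    # was: trainer.global_step = step -- now a read-only property on Trainer
    trainer.train_loop.global_step = step

def current_batch_idx(trainer):
    # was: trainer.batch_idx -- read it from the train loop instead
    return trainer.train_loop.batch_idx
```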

- 28 Apr, 2021 1 commit
Ananth Subramaniam authored
Synchronize PyTorchLightning/pytorch-lightning (revision 7fe8d184@master) to github/third-party/PyTorchLightning/pytorch-lightning

Summary:
### New commit log messages
7fe8d184 Do not `shuffle` in `LightningDataModule.from_datasets` for `IterableDataset` (#7053)
bab72255 [fix] Add barriers before and after setup hook is run (#7202)
f920ba29 [bugfix] Metric not logged properly in manual optimization (#7228)
e147127c [feat] Add better support for predict + ddp 2/3 (#7215)
ca6c87ff Add back `clip_gradients(model)` (#7231)
3b36d81c Fixed `num_sanity_val_steps` affecting reproducibility of training data shuffling (#7014)
5cf9afa1 Add fairscale install msg for Sharded Plugins (#7213)
52a5cee0 Set smarter default for DDP sharded for performance optimization (#6937)
dd5ec75e Deprecate save_function from model checkpoint callback (#7201)
ac7d6a35 Fix `NeptuneLogger.log_text(step=None)` (#7194)
6be0a859 Update teardown for TPU acc (#7211)
bc3f08b0 [fix] Add barrier to accelerator's teardown (#6814)
68eac4d9 Enforce Lightning module as source of truth for automatic optimization (#7130)
44d775fc Update Error message for ProfileConnector (#7204)
31fcd7d0 Deprecate write_predictions on the LightningModule (#7066)
591b9cee make bug_report_model minimal (#7191)
b3fe8366 Move metrics_to_scalars to a dedicated utilities file (#7180)
f58865aa Properly set `LightningModule.device` after model replacement (#7188)
8439aead Update FairScale on CI (#7017)
92af3632 Fix `lr_finder` suggesting too high learning rates (#7076)
d534e53e add missing predict docs (#7150)

Reviewed By: kazhang Differential Revision: D28032962 fbshipit-source-id: 18cd01e8ecc13fe25f0890ac0f4b20c3c3e1fed3

- 09 Apr, 2021 1 commit
Ananth Subramaniam authored
Summary: `checkpoint_callback` now only accepts boolean values: https://github.com/PyTorchLightning/pytorch-lightning/blob/19e67d18c472c3a03dec4dd9bfcef031e9ca8719/pytorch_lightning/trainer/connectors/callback_connector.py#L65-L73 Reviewed By: shuyingsunshine21 Differential Revision: D27682178 fbshipit-source-id: 9e863aad7a23a76dee8ae5df9f5a78e7a94bfe8a
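A hedged example of the resulting usage (not taken from the diff): pass a bool for `checkpoint_callback` and register any custom checkpointing via `callbacks`:

```python
from pytorch_lightning import Trainer
from pytorch_lightning.callbacks import ModelCheckpoint

# checkpoint_callback now only toggles checkpointing on/off; the callback
# instance itself goes in the callbacks list.
trainer = Trainer(checkpoint_callback=True, callbacks=[ModelCheckpoint()])
```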

- 31 Mar, 2021 1 commit
Kai Zhang authored
Reviewed By: newstzpz Differential Revision: D27255960 fbshipit-source-id: 1699ff23d2bc610dffc0215a90a7c1c17e3783c3

- 30 Mar, 2021 1 commit
Sam Tsai authored
Summary: Separate unit tests into individual folders based on functionality. Reviewed By: wat3rBro Differential Revision: D27132567 fbshipit-source-id: 9a8200be530ca14c7ef42191d59795b05b9800cc

- 20 Mar, 2021 1 commit
Yanghan Wang authored
Summary: Note that d2go.tests is not a library for OSS; move the utils code to d2go.utils.testing. Reviewed By: zhanghang1989 Differential Revision: D26706933 fbshipit-source-id: 85767b66bbb6c67db05e11823beb4840220b2aa3

- 03 Mar, 2021 1 commit
Kai Zhang authored
Summary: As titled. Make a copy of the quantization callback to unblock D2Go OSS. Reviewed By: zhanghang1989 Differential Revision: D26735525 fbshipit-source-id: 12b77f04cfa1361e856b26ea218a262da1fadd88