- 28 Feb, 2023 1 commit
Yanghan Wang authored
Reviewed By: mattcyu1

Differential Revision: D43557002

fbshipit-source-id: b929875f479b215b3e6034a03d8bea3e4cb3c2f8
-
- 24 Feb, 2023 1 commit
Matthew Yu authored
Summary:
Pull Request resolved: https://github.com/facebookresearch/d2go/pull/482

We should avoid using interleaving during save if we are calling save on only one process:
```
if comm.is_main_process():
    save()
```
This is because interleave calls comm.synchronize() and so would wait indefinitely. This diff updates the FSDP checkpointer to use save(interleave=False) when running on one process.

Reviewed By: wat3rBro, YanjunChen329

Differential Revision: D43526328

fbshipit-source-id: 672993a87af627aca090384b0c218798bd42fcde
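A minimal sketch of the deadlock and the fix described above. The `interleave` keyword mirrors the commit's save(interleave=False); the surrounding helper names are illustrative stand-ins, not the actual d2go FSDPCheckpointer API:
```
import torch
import torch.distributed as dist


def is_main_process() -> bool:
    # Rank 0, or the only process when torch.distributed is not initialized.
    return not dist.is_initialized() or dist.get_rank() == 0


def save_checkpoint(state: dict, path: str, interleave: bool = True) -> None:
    if interleave:
        # Every rank must reach this barrier; calling save() from only one
        # process would therefore block forever.
        dist.barrier()
    if is_main_process():
        torch.save(state, path)


# Caller-side pattern from the commit: only the main process saves,
# so interleaving must be disabled to avoid the deadlock.
def maybe_save(state: dict, path: str) -> None:
    if is_main_process():
        save_checkpoint(state, path, interleave=False)
```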
-
- 23 Feb, 2023 2 commits
Yanghan Wang authored
Summary:
Pull Request resolved: https://github.com/facebookresearch/d2go/pull/481
X-link: https://github.com/facebookresearch/mobile-vision/pull/139

Also support specifying the number of concurrent writers (concurrency) for interleaving.

Reviewed By: mattcyu1

Differential Revision: D43522445

fbshipit-source-id: 790a8527c6b42c9098ef82c4fc01ec1a528e2418
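A rough sketch of what rank-interleaved writing with a concurrency limit can look like: ranks write in waves of at most `max_concurrency`, with a barrier between waves so only that many ranks touch storage at once. The function and parameter names are hypothetical, not the mobile-vision/d2go API:
```
import os

import torch
import torch.distributed as dist


def interleaved_save(state: dict, out_dir: str, max_concurrency: int = 4) -> None:
    # Write per-rank shards in groups of at most `max_concurrency` ranks.
    rank = dist.get_rank()
    world_size = dist.get_world_size()
    os.makedirs(out_dir, exist_ok=True)
    num_waves = (world_size + max_concurrency - 1) // max_concurrency
    for wave in range(num_waves):
        if rank // max_concurrency == wave:
            torch.save(state, os.path.join(out_dir, f"rank{rank}.pth"))
        # All ranks wait for the current wave to finish before the next starts.
        dist.barrier()
```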
-
Matthew Yu authored
Summary:
Pull Request resolved: https://github.com/facebookresearch/d2go/pull/479
Pull Request resolved: https://github.com/facebookresearch/d2go/pull/467
Pull Request resolved: https://github.com/facebookresearch/d2go/pull/466

This allows an internal solution to be plugged in in a generic fashion, rather than relying on training patterns (FSDP or not).

Reviewed By: wat3rBro

Differential Revision: D42983444

fbshipit-source-id: a70bf0d25737d9cbbf22e3368363d3fdec57b8b5
-
- 17 Feb, 2023 1 commit
Anthony Chen authored
Summary:
X-link: https://github.com/facebookresearch/mobile-vision/pull/138
Pull Request resolved: https://github.com/facebookresearch/d2go/pull/477

Interleave FSDP checkpointing to avoid the excessive reading/writing patterns that may cause manifold quota-exceeded errors.

Reviewed By: wat3rBro

Differential Revision: D43266742

fbshipit-source-id: 85549c3b10413e0ffad2f3ec8e198d8c77486478
-
- 13 Jan, 2023 3 commits
Anthony Chen authored
Summary:
Pull Request resolved: https://github.com/facebookresearch/d2go/pull/446

## Design

Following D41861308, local checkpoints need to be converted to global ones before being loaded and used in non-FSDP-wrapped models. This diff implements that conversion at the d2go checkpointer level to allow automatic conversion with minimal user interference and no new config key.

In the previous diff, `FSDPWrapper` has 2 loading modes and 2 saving modes: it uses `load_local_state_dict` to determine whether the ckpt we want to load is local or global, and uses `use_local_state_dict` to decide whether to save new ckpts as local or global. Thus, there are 4 combinations of loading/saving modes:
1. load local + save local
2. load local + save global
3. load global + save local
4. load global + save global

The local-to-global checkpoint conversion maps to mode 2: load local + save global. Thus, when the checkpointer is in mode 2, it automatically saves the model to a global ckpt right after it loads the local ckpt. Because this happens at the checkpointer level, normal training/eval can resume after ckpt conversion. This gives users a consistent and seamless experience with normal training/eval, while also providing a separate ckpt conversion feature via eval-only.

## Usage

Suppose we want to convert the local checkpoint `/tmp/model_final`; the user can run the same training command with extra args: `MODEL.WEIGHTS=/tmp/model_final` and `FSDP.USE_LOCAL_STATE_DICT=False`

Wiki: https://www.internalfb.com/intern/wiki/Mobile_Vision/Detectron2Go/D2Go_Tutorials/Diffusion_Pipeline/Diffusion_Model_Inference/#using-checkpoints-traine

Reviewed By: wat3rBro

Differential Revision: D41926662

fbshipit-source-id: 18a62607a79b0e917d929e9ea85ac1658fb895ca
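A condensed, hypothetical sketch of the mode-2 flow (load local, then immediately re-save a global checkpoint); the class, flags, and file layout here are illustrative stand-ins for the real FSDPWrapper/checkpointer logic, not the d2go implementation:
```
import os

import torch
import torch.distributed as dist


class ConvertingCheckpointer:
    # `load_local` / `save_local` mirror the load_local_state_dict /
    # use_local_state_dict flags described in the summary.
    def __init__(self, model, load_local: bool, save_local: bool):
        self.model = model
        self.load_local = load_local
        self.save_local = save_local

    def resume_or_load(self, path: str) -> None:
        if self.load_local:
            # Local checkpoints are a directory of per-rank shards.
            shard = torch.load(os.path.join(path, f"rank{dist.get_rank()}.pth"))
            self.model.load_state_dict(shard)
        else:
            self.model.load_state_dict(torch.load(path))
        if self.load_local and not self.save_local:
            # Mode 2 (load local + save global): convert right after loading,
            # then let normal training/eval resume.
            torch.save(self.model.state_dict(), path.rstrip("/") + "_global.pth")
```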
-
Anthony Chen authored
Summary:
Pull Request resolved: https://github.com/facebookresearch/d2go/pull/457

## Context

The PyTorch FSDP (Fully Sharded Data Parallel) backend supports two checkpointing modes. The first is full_state_dict mode, where each FSDP worker summons parameters from the other workers to produce a global state dict that can be loaded by non-FSDP models. This is the desired mode for checkpointing because the checkpoint structure and key names follow the default convention. It's already supported in D39228316 (https://github.com/facebookresearch/d2go/commit/02625ff83207b836df349eadc4a61eb3d4a5810c).

However, when the model is too large to fit into a single GPU's memory, this approach fails because a worker's GPU can't hold all the summoned parameters during checkpoint saving. The rescue is the second checkpointing mode: local_state_dict. This mode saves the sharded parameters in each GPU process locally. It can only be loaded by FSDP-wrapped models with the same distributed training settings (i.e. number of processes), but it removes the need for summoning parameters and greatly reduces peak GPU memory during training.

This diff enables local state dict checkpointing in d2go.

## API

This diff supports both **saving** local state and **loading** a state dict that is locally sharded. Whether to save local state is controlled by `FSDP.USE_LOCAL_STATE`. If `FSDP.USE_LOCAL_STATE=True` and we want to save `output/model_0000001.pth` as in the old pattern, the local checkpoints will be saved as:
```
- output
  - model_0000001
    - rank0.pth
    - rank1.pth
    - rank2.pth
    - rank3.pth
```
Whether to load local state, on the other hand, is controlled by the path of the checkpoint to load. If the path is a file, e.g. `output/model_final.pth`, the file is loaded as a full state dict by all GPU processes as before. If the path is a directory, e.g. `output/model_final`, the checkpointer attempts to load `output/model_final/rankX.pth` for rank X. This API design enables all combinations of loading local/full states and saving local/full states.

## Conversion to full state dict [Temporary]

Conversion from local state dict to full state dict is needed during an e2e workflow. This will be implemented in another diff.

Reviewed By: wat3rBro

Differential Revision: D41861308

fbshipit-source-id: 2e01b601683d06b46f0c5517c6cff30bbcffa8f7
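For readers unfamiliar with the two modes, a hedged sketch of how they map onto the PyTorch FSDP state-dict API; the per-rank file layout follows the summary above, while the wrapper function itself is illustrative rather than the d2go checkpointer:
```
import os

import torch
import torch.distributed as dist
from torch.distributed.fsdp import FullStateDictConfig, StateDictType
from torch.distributed.fsdp import FullyShardedDataParallel as FSDP


def save_fsdp_checkpoint(model: FSDP, output: str, use_local_state: bool) -> None:
    rank = dist.get_rank()
    if use_local_state:
        # local_state_dict: each rank dumps only its own shard, e.g.
        # output/model_0000001/rank0.pth, rank1.pth, ...
        with FSDP.state_dict_type(model, StateDictType.LOCAL_STATE_DICT):
            shard = model.state_dict()
        os.makedirs(output, exist_ok=True)
        torch.save(shard, os.path.join(output, f"rank{rank}.pth"))
    else:
        # full_state_dict: parameters are summoned to rank 0 (offloaded to CPU
        # to limit peak GPU memory) and written as a single file.
        cfg = FullStateDictConfig(offload_to_cpu=True, rank0_only=True)
        with FSDP.state_dict_type(model, StateDictType.FULL_STATE_DICT, cfg):
            full = model.state_dict()
        if rank == 0:
            torch.save(full, output + ".pth")
```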
-
Anthony Chen authored
Summary:
Pull Request resolved: https://github.com/facebookresearch/d2go/pull/440

Move FSDP wrapping to `runner.build_model` by rewriting it as a modeling hook.

**Motivation**

When a model is too large to run inference on a single GPU, it requires using FSDP with the local checkpointing mode to save peak GPU memory. However, in the eval_pytorch workflow (train_net with eval-only), models are evaluated without being wrapped by FSDP, which may cause OOM errors for the reasons above. Thus, it is better practice to wrap the model with FSDP during `runner.build_model(cfg)`, so evaluation runs in the same FSDP setting as training. This diff moves FSDP wrapping to `runner.build_model(cfg)` by rewriting it as a modeling hook.

**API changes**
* Users need to append `"FSDPModelingHook"` to `MODEL.MODELING_HOOKS` to enable FSDP.
* `FSDP.ALGORITHM` can only be `full` or `grad_optim`.

**Note**

It is not possible to unwrap an FSDP model back into the normal model, so `FSDPModelingHook.unapply()` can't be implemented.

Reviewed By: wat3rBro

Differential Revision: D41416917

fbshipit-source-id: f3fc72d574cc6ccbe0d238e48c575926ba5b4d06
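A minimal sketch of a modeling hook that applies FSDP, assuming an apply()/unapply() hook interface like the one referenced above; the class body and config handling are simplified stand-ins for the real d2go hook:
```
import torch.nn as nn
from torch.distributed.fsdp import FullyShardedDataParallel as FSDP


class FSDPModelingHook:
    # Simplified stand-in: the real hook is enabled by appending
    # "FSDPModelingHook" to MODEL.MODELING_HOOKS and reads cfg.FSDP.*
    # (e.g. FSDP.ALGORITHM = "full" or "grad_optim").
    def __init__(self, cfg):
        self.cfg = cfg

    def apply(self, model: nn.Module) -> nn.Module:
        # Wrap during build_model so eval-only runs see the same FSDP
        # setting as training.
        return FSDP(model)

    def unapply(self, model: nn.Module) -> nn.Module:
        # An FSDP-wrapped model cannot be unwrapped back into the original
        # module, so unapply is intentionally unsupported.
        raise NotImplementedError("FSDP wrapping cannot be undone")
```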
-
- 09 Dec, 2022 1 commit
Mircea Cimpoi authored
Summary:
Pull Request resolved: https://github.com/facebookresearch/d2go/pull/436

Renaming `model_ema.py` to `ema.py` (as `modeling` is already in the folder name). Fixing dependencies after the rename.

Reviewed By: wat3rBro

Differential Revision: D41685115

fbshipit-source-id: 006999a020a901ea8be4b71e072d688bd36cdce2
-
- 17 Nov, 2022 1 commit
Anthony Chen authored
Summary:
Pull Request resolved: https://github.com/facebookresearch/d2go/pull/396

Integrate PyTorch FSDP, which supports two sharding modes:
1. gradient + optimizer sharding;
2. full model sharding (params + gradient + optimizer).

This feature is enabled in the train_net.py code path.

Sources
* Integration follows this tutorial: https://pytorch.org/tutorials/intermediate/FSDP_tutorial.html

API changes
* Add new config keys to support the new feature. Refer to mobile-vision/d2go/d2go/trainer/fsdp.py for the full list of config options.
* Add `FSDPCheckpointer` as a subclass of `QATCheckpointer` to support the special loading/saving logic for FSDP models.

Reviewed By: wat3rBro

Differential Revision: D39228316

fbshipit-source-id: 342ecb3bcbce748453c3fba2d6e1b7b7e478473c
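For reference, the two sharding modes above correspond roughly to PyTorch FSDP sharding strategies as sketched below; this is a simplified illustration, not the d2go/trainer/fsdp.py implementation (consult that file for the actual config keys):
```
import torch.nn as nn
from torch.distributed.fsdp import FullyShardedDataParallel as FSDP
from torch.distributed.fsdp import ShardingStrategy


def wrap_with_fsdp(model: nn.Module, mode: str) -> FSDP:
    if mode == "grad_optim":
        # Mode 1: shard only gradients + optimizer state; keep full params.
        strategy = ShardingStrategy.SHARD_GRAD_OP
    elif mode == "full":
        # Mode 2: shard params + gradients + optimizer state.
        strategy = ShardingStrategy.FULL_SHARD
    else:
        raise ValueError(f"unknown FSDP mode: {mode}")
    return FSDP(model, sharding_strategy=strategy)
```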
-