1. 16 Feb, 2023 2 commits
  2. 14 Feb, 2023 3 commits
    • Add NUMA binding · 07ddd262
      Fei Sun authored
      Summary:
      Pull Request resolved: https://github.com/facebookresearch/d2go/pull/472
      
      Add NUMA binding to d2go. It distributes the GPUs evenly across the CPU sockets so that CPU traffic and GPU-to-CPU traffic are balanced. It helps diffusion model training, but it is a general technique that can be applied to all models. We still want to enable it manually in each case, until we are confident that it gives better performance and can make it the default.
      
      NUMA binding is based on jspark1105's work D42827082. Full credit goes to him.
      
      This diff does not enable the feature.
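      As a rough illustration of the NUMA binding idea above, a minimal sketch could look like the following. This is hypothetical, not the code in this diff: `bind_rank_to_numa_node`, the even CPU split per node, and the round-robin GPU-to-socket assignment are all assumptions.
      ```python
      # Hypothetical sketch of NUMA binding, not the implementation in this diff.
      # Assumes CPUs are split evenly across NUMA nodes and GPUs are assigned
      # round-robin to sockets so CPU<->GPU traffic stays balanced per socket.
      import os

      def bind_rank_to_numa_node(local_rank: int, num_numa_nodes: int = 2) -> None:
          cpus = sorted(os.sched_getaffinity(0))        # CPUs visible to this process
          per_node = len(cpus) // num_numa_nodes
          node = local_rank % num_numa_nodes            # distribute GPUs equally over sockets
          os.sched_setaffinity(0, set(cpus[node * per_node:(node + 1) * per_node]))
      ```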
      
      Reviewed By: newstzpz
      
      Differential Revision: D43036817
      
      fbshipit-source-id: fe67fd656ed3980f04bc81909cae7ba2527346fd
    • Add option to use fused adamw optimizer · 8bb24bb0
      Fei Sun authored
      Summary:
      Pull Request resolved: https://github.com/facebookresearch/d2go/pull/471
      
      AdamW recently added an option to use a fused implementation. It may give better performance than the `foreach` argument. However, we cannot enable it by default, since it requires all parameters to be on CUDA and may have other restrictions. So, enable it on a per-project basis.
      
      On DALLE2, it is about 23ms faster.
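      For reference, the underlying PyTorch option looks roughly like this (d2go's config plumbing is not shown here; the model is just a placeholder). The fused path requires the parameters to be CUDA tensors:
      ```python
      import torch

      # Build the optimizer with the fused kernel; all parameters must live on CUDA.
      model = torch.nn.Linear(8, 8).cuda()
      optimizer = torch.optim.AdamW(model.parameters(), lr=1e-3, fused=True)
      ```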
      
      Reviewed By: newstzpz
      
      Differential Revision: D43027327
      
      fbshipit-source-id: 82c6855116094e86386ad2edeea3a74f9e555174
    • Ignore modules · 7ef9d897
      Fei Sun authored
      Summary:
      Pull Request resolved: https://github.com/facebookresearch/d2go/pull/470
      
      Enable ignoring modules in FSDP. Ignored modules will not be wrapped by FSDP. This is useful in the diffusion model, where the CLIP model is not trained, so it is OK to keep a separate copy on each GPU. It reduces the CLIP execution time from 63ms to 48ms (a 15ms reduction), mostly because CLIP is a CPU-bound module and FSDP injects some code into each wrapped block. It also reduces the FSDP all-gather time before the CLIP execution from 56ms to 7ms (a 49ms reduction).
      
      In total, this change may reduce the CLIP runtime from 119ms to 64ms (a 63ms reduction).
      
      This feature is controlled by this flag:
          IGNORED_MODULES: ["clip_model"]
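      For context, this roughly corresponds to PyTorch FSDP's `ignored_modules` argument. The helper below is a hedged sketch, not d2go's actual wiring: `wrap_ignoring` is a made-up name, and it assumes the distributed process group is already initialized.
      ```python
      from torch.distributed.fsdp import FullyShardedDataParallel as FSDP

      def wrap_ignoring(model, ignored_module_names):
          # Collect the submodules to leave out of FSDP, e.g. ["clip_model"],
          # then wrap the rest of the model as usual.
          ignored = [getattr(model, name) for name in ignored_module_names]
          return FSDP(model, ignored_modules=ignored)
      ```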
      
      Reviewed By: newstzpz
      
      Differential Revision: D42910383
      
      fbshipit-source-id: dc4c12254d45ac45d88329feb63a26ec4ae04aef
  3. 05 Feb, 2023 1 commit
    • Fix TB train visualization · c4c512ce
      Maayan Frid-Adar authored
      Summary:
      Pull Request resolved: https://github.com/facebookresearch/d2go/pull/465
      
      Training visualization was effectively activated only for the first training iterations when TRAIN_LOADER_VIS_MAX_IMAGES and TRAIN_LOADER_VIS_WRITE_PERIOD were set to be > 0, because MAX_IMAGES was used both as the number of samples to log per write and as the total number of samples allowed to be logged overall. So after the first log to TB it dropped to 0 and visualization was not activated for later training steps (ignoring WRITE_PERIOD).
      
      I've added a TRAIN_LOADER_VIS_MAX_BATCH_IMAGES parameter that sets the number of samples to visualize in each write period, up to the overall maximum defined by TRAIN_LOADER_VIS_MAX_IMAGES.
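      A hedged sketch of the intended accounting after this change (the function and variable names are illustrative, not the actual d2go visualization hook):
      ```python
      def num_images_to_log(step, logged_total, write_period, max_images, max_batch_images):
          # Log up to max_batch_images every write_period steps, stopping once the
          # overall budget of max_images has been reached.
          if write_period <= 0 or max_images <= 0 or logged_total >= max_images:
              return 0
          if step % write_period != 0:
              return 0
          return min(max_batch_images, max_images - logged_total)
      ```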
      
      Reviewed By: tglik
      
      Differential Revision: D42832903
      
      fbshipit-source-id: 02a0d9aa4ea6d0ee725120916d26b77843a3e8ab
  4. 04 Feb, 2023 1 commit
  5. 03 Feb, 2023 1 commit
  6. 01 Feb, 2023 2 commits
    • missing keys in _convert_to_d2 · c5bf9222
      Licheng Yu authored
      Summary:
      Pull Request resolved: https://github.com/facebookresearch/d2go/pull/462
      
      Fix errors in `_convert_to_d2`: sometimes the keys are missing, in which case we don't need to remove them.
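      A minimal illustration of the pattern (the key names below are hypothetical, not the ones handled in `_convert_to_d2`):
      ```python
      state_dict = {"backbone.weight": 0}
      for key in ("extra_key_a", "extra_key_b"):  # hypothetical keys that may be absent
          state_dict.pop(key, None)               # drop only if present; no KeyError otherwise
      ```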
      
      
      Reviewed By: newstzpz
      
      Differential Revision: D42929485
      
      fbshipit-source-id: 8584879df5a07cbe5a864b4f170eef3d5f34dd6c
    • Allow specifying extra lightning trainer params via `_DEFAULTS_` in yaml · 6940fa9c
      Yanghan Wang authored
      Summary:
      Pull Request resolved: https://github.com/facebookresearch/d2go/pull/461
      
      There is a need to extend trainer parameters that are not in (or conflict with) the base d2go config; this diff adds a way to inject those configs without touching the base d2go config (see the sketch after this list).
      - In `get_trainer_params`, it simply checks the `LIGHTNING_TRAINER` key and uses whatever configs are under it.
      - Adds `GeneralizedRCNNTaskNoDefaultConfig`, which allows specifying the default config for `GeneralizedRCNNTask` via a yaml file (also makes some prerequisite changes).
      - (next diff) Users can add their own config updater by registering it in `CONFIG_UPDATER_REGISTRY`.
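      A hedged sketch of the idea: anything under the `LIGHTNING_TRAINER` section is forwarded to the Trainer as extra kwargs. The merge helper below is illustrative rather than d2go's real `get_trainer_params`.
      ```python
      import pytorch_lightning as pl

      def build_trainer(base_params: dict, cfg) -> pl.Trainer:
          params = dict(base_params)
          # Configs under LIGHTNING_TRAINER override the base defaults.
          params.update(dict(cfg.get("LIGHTNING_TRAINER", {})))
          return pl.Trainer(**params)
      ```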
      
      Differential Revision: D42928992
      
      fbshipit-source-id: f2a1d8a3f2bec9908bb1af03928611d963b92c0e
  7. 23 Jan, 2023 1 commit
  8. 16 Jan, 2023 1 commit
  9. 14 Jan, 2023 1 commit
  10. 13 Jan, 2023 4 commits
    • Make AMP compatible with FSDP · abf0ca0c
      Anthony Chen authored
      Summary:
      Pull Request resolved: https://github.com/facebookresearch/d2go/pull/458
      
      Make AMP compatible with FSDP. FSDP does not depend on the torch AMP module and implements its own MixedPrecision module. This MixedPrecision module directly saves an additional copy of the weights in lower precision and runs these tensors in mixed precision training. This is very different from AMP, which automatically casts tensors to lower precision upon tensor operations.
      
      This diff solves some compatibility bugs between AMP and FSDP with 2 changes:
      1. Use "never_wrap_policy" as the default dummy auto-wrap policy.
      FSDP Mixed Precision doesn't work with batchnorm layers, because FSDP and other resources like NVIDIA apex strongly discourage running batchnorm in lower precision: https://github.com/pytorch/pytorch/issues/75478. We need to use some auto-wrap policy so that FSDP can bypass batchnorm layers when constructing mixed precision.
      2. Wrap FSDPWrapper.forward() with autocast() (sketched below).
      FSDP Mixed Precision uses lower-precision tensors in computation, which could raise type mismatch errors when amp.autocast() is not enabled, such as in eval. Thus, we wrap FSDP forward() with autocast().
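      A hedged sketch of change 2 (an illustrative class, not d2go's `FSDPWrapper`):
      ```python
      import torch

      class AutocastForwardWrapper(torch.nn.Module):
          """Run the wrapped module's forward under autocast so FSDP's low-precision
          parameter shards don't cause dtype mismatches outside of training."""

          def __init__(self, wrapped: torch.nn.Module, dtype: torch.dtype = torch.float16):
              super().__init__()
              self.wrapped = wrapped
              self.dtype = dtype

          def forward(self, *args, **kwargs):
              with torch.autocast("cuda", dtype=self.dtype):
                  return self.wrapped(*args, **kwargs)
      ```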
      
      Reviewed By: wat3rBro
      
      Differential Revision: D41328834
      
      fbshipit-source-id: 18cf94c4ad8d9422ffd3bb335873cd29ac987ae9
    • Convert local checkpoint to global one automatically in d2go FSDP checkpointer · 5ad2d57e
      Anthony Chen authored
      Summary:
      Pull Request resolved: https://github.com/facebookresearch/d2go/pull/446
      
      ## Design
      Following D41861308, local checkpoints need to be converted to global ones before being loaded and used in non-FSDP-wrapped models. This diff implements such conversion at the d2go checkpointer level to allow automatic conversion with minimal user intervention and no new config key.
      
      In the previous diff, `FSDPWrapper` has 2 loading modes and 2 saving modes: it uses `load_local_state_dict` to determine whether the ckpt we want to load is local or global, and uses `use_local_state_dict` to decide whether to save new ckpts as local or global. Thus, there are 4 combinations of loading/saving modes:
      1. load local + save local
      2. load local + save global
      3. load global + save local
      4. load global + save global
      
      The local-to-global checkpoint conversion maps to mode 2: load local + save global. Thus, when the checkpointer is in mode 2, it automatically saves the model to a global ckpt right after it loads the local ckpt. Because this happens at the checkpointer level, normal training/eval can resume after ckpt conversion. This gives users a consistent and seamless experience with normal training/eval, while also providing a separate ckpt conversion feature via eval-only.
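      A hedged sketch of the trigger described above (the names are illustrative; the real logic lives inside the d2go FSDP checkpointer):
      ```python
      def maybe_convert_after_load(load_local_state_dict: bool,
                                   use_local_state_dict: bool,
                                   checkpointer) -> None:
          # Mode 2 (load local + save global): re-save the just-loaded model as a
          # global checkpoint so it can be consumed by non-FSDP models.
          if load_local_state_dict and not use_local_state_dict:
              checkpointer.save("model_final_converted")  # hypothetical output name
      ```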
      
      ## Usage
      Suppose we want to convert the local checkpoint `/tmp/model_final`; the user can run the same training command with the extra args `MODEL.WEIGHTS=/tmp/model_final` and `FSDP.USE_LOCAL_STATE_DICT=False`.
      
      Wiki: https://www.internalfb.com/intern/wiki/Mobile_Vision/Detectron2Go/D2Go_Tutorials/Diffusion_Pipeline/Diffusion_Model_Inference/#using-checkpoints-traine
      
      Reviewed By: wat3rBro
      
      Differential Revision: D41926662
      
      fbshipit-source-id: 18a62607a79b0e917d929e9ea85ac1658fb895ca
    • Support local state dict checkpointing for FSDP · eea6339f
      Anthony Chen authored
      Summary:
      Pull Request resolved: https://github.com/facebookresearch/d2go/pull/457
      
      ## Context:
      
      The PyTorch FSDP (Fully Sharded Data Parallel) backend supports two checkpointing modes. The first one is full_state_dict mode, where each FSDP worker summons parameters from other workers to produce a global state dict that can be loaded by non-FSDP models. This is the desired mode for checkpointing because the checkpoint structure and key names follow the default convention. It's already supported in D39228316 (https://github.com/facebookresearch/d2go/commit/02625ff83207b836df349eadc4a61eb3d4a5810c).
      
      However, when the model is too large to fit into a single GPU's memory, this approach fails because a worker's GPU can't hold all the summoned parameters during checkpoint saving. The remedy is to use the second checkpointing mode: local_state_dict. This mode saves the sharded parameters of each GPU process locally. It can only be loaded by FSDP-wrapped models with the same distributed training settings (i.e. number of processes), but it removes the need to summon parameters and greatly reduces peak GPU memory during training.
      
      This diff enables local state dict checkpointing in d2go.
      
      ## API:
      
      This diff supports both **saving** local state and **loading** state dict that is locally sharded. Whether to save local state is controlled by `FSDP.USE_LOCAL_STATE`. If `FSDP.USE_LOCAL_STATE=True` and we want to save `output/model_0000001.pth` as in the old pattern, the local checkpoints will be saved as:
      ```
      - output
          - model_0000001
              - rank0.pth
              - rank1.pth
              - rank2.pth
              - rank3.pth
      ```
      Whether to load local state, on the other hand, is controlled by the path of the checkpoint to load. If the path is a file, e.g. `output/model_final.pth`, the file will be loaded as a full state dict by all GPU processes as before. If the path is a directory, e.g. `output/model_final`, the checkpointer will attempt to load `output/model_final/rankX.pth` for rank X.
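      A hedged sketch of that path convention (an illustrative helper, not the actual checkpointer code):
      ```python
      import os

      def resolve_checkpoint_path(path: str, rank: int) -> str:
          # A directory means locally sharded checkpoints, one file per rank;
          # a plain file is a full (global) state dict loaded by every process.
          if os.path.isdir(path):
              return os.path.join(path, f"rank{rank}.pth")
          return path
      ```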
      
      This API design enables the full combinations of loading local/full states and saving local/full states.
      
      ## Conversion to full state dict [Temporary]
      
      Conversion from local state dict to full state dict is needed in an e2e workflow. This will be implemented in another diff.
      
      Reviewed By: wat3rBro
      
      Differential Revision: D41861308
      
      fbshipit-source-id: 2e01b601683d06b46f0c5517c6cff30bbcffa8f7
    • Rewrite FSDP wrapping as modeling hook · dc6fac12
      Anthony Chen authored
      Summary:
      Pull Request resolved: https://github.com/facebookresearch/d2go/pull/440
      
      Move FSDP wrapping to runner.build_model by rewriting it as a modeling hook
      
      **Motivation**
      When a model is too large to run inference on a single GPU, it requires using FSDP with the local checkpointing mode to save peak GPU memory. However, in the eval_pytorch workflow (train_net with eval-only), models are evaluated without being wrapped by FSDP. This may cause OOM errors for the reasons above. Thus, it may be better practice to wrap the model with FSDP during `runner.build_model(cfg)`, so evaluation can also be run in the same FSDP setting as in training.
      
      This diff moves FSDP wrapping to `runner.build_model(cfg)` by rewriting it as a modeling hook.
      
      **API changes**
      * Users need to append `"FSDPModelingHook"` to `MODEL.MODELING_HOOKS` to enable FSDP.
      * `FSDP.ALGORITHM` can only be `full` or `grad_optim`
      
      **Note**
      It's not possible to unwrap an FSDP model back to the normal model, so FSDPModelingHook.unapply() can't be implemented (see the sketch below).
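      A hedged sketch of the hook shape implied above. This is illustrative only: d2go's real `FSDPModelingHook` and its config handling are not reproduced, and the FSDP wrapping assumes an initialized process group.
      ```python
      from torch.distributed.fsdp import FullyShardedDataParallel as FSDP

      class FSDPModelingHookSketch:
          def apply(self, model):
              # Wrap the freshly built model so runner.build_model returns an FSDP model
              # and eval runs under the same FSDP setting as training.
              return FSDP(model)

          def unapply(self, model):
              # As noted above, an FSDP model cannot be unwrapped back to the original.
              raise NotImplementedError("FSDP-wrapped models cannot be unwrapped")
      ```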
      
      Reviewed By: wat3rBro
      
      Differential Revision: D41416917
      
      fbshipit-source-id: f3fc72d574cc6ccbe0d238e48c575926ba5b4d06
  11. 10 Jan, 2023 2 commits
  12. 05 Jan, 2023 3 commits
  13. 04 Jan, 2023 1 commit
    • upgrade pytorch-lightning version to 1.8.6 · 9e93852d
      Yanghan Wang authored
      Summary:
      Pull Request resolved: https://github.com/facebookresearch/d2go/pull/453
      
      Previous diffs updated the LRScheduler to the public version (e.g. https://github.com/facebookresearch/detectron2/pull/4709), which also requires a newer version of pytorch-lightning. This diff upgrades the lightning version to 1.8.6 and fixes some deprecated call sites from older lightning versions.
      - `deepcopy` seems to be supported now, so remove `_deepcopy` (accessing the `trainer` attribute is now not allowed when it is `None`).
      - `dataloader_idx` is removed from `on_train_batch_start`.
      - Stop using `_accelerator_connector` (the AcceleratorConnector doesn't have those attributes anymore).
      - Deprecated `on_pretrain_routine_end` -> `on_fit_start`.
      
      Reviewed By: YanjunChen329
      
      Differential Revision: D42319019
      
      fbshipit-source-id: ba46abbd98da96783e15d187a361fda47dc7d4d6
  14. 20 Dec, 2022 1 commit
  15. 19 Dec, 2022 4 commits
    • add check to `import_runner` · fb41d071
      Yanghan Wang authored
      Summary:
      Pull Request resolved: https://github.com/facebookresearch/d2go/pull/450
      
      Move the checking logic into `import_runner`; simplify `_is_lightning_task`.
      
      Reviewed By: mcimpoi
      
      Differential Revision: D42105853
      
      fbshipit-source-id: 5fd51865a01f2cbac38aaedcac49207c26172ab9
    • temp_new_allowed · 7bed2910
      Haroun Habeeb authored
      Summary:
      Pull Request resolved: https://github.com/facebookresearch/d2go/pull/438
      
      Adding new fields to a config is only allowed if `new_allowed=True`. yacs `CfgNode` provides a `set_new_allowed(value: bool)` function.
      
      We create a context manager, analogous to `temp_defrost` but for `new_allowed`, to make use of it, and add a unit test for it (see the sketch below).
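      A minimal sketch of such a context manager, assuming the yacs `set_new_allowed`/`is_new_allowed` API (d2go's actual implementation may differ):
      ```python
      from contextlib import contextmanager

      @contextmanager
      def temp_new_allowed(cfg, new_allowed: bool = True):
          old = cfg.is_new_allowed()
          cfg.set_new_allowed(new_allowed)
          try:
              yield cfg          # new fields may be added inside the `with` block
          finally:
              cfg.set_new_allowed(old)
      ```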
      
      Reviewed By: yanglinfang, newstzpz, wat3rBro
      
      Differential Revision: D41748992
      
      fbshipit-source-id: 71d048511476001ca96e6b36dde4d177b11268d7
    • separate TestNetOutput and TrainNetOutput · e2537c82
      Yanghan Wang authored
      Summary:
      Pull Request resolved: https://github.com/facebookresearch/d2go/pull/449
      
      separate TestNetOutput and TrainNetOutput
      - update d2go binaries
      - update operators / workflows
      
      Reviewed By: mcimpoi
      
      Differential Revision: D42103714
      
      fbshipit-source-id: 53f318c79d7339fb6fcfc3486e8b9cf249a598bf
    • Fix WeightedSampler to also work with adhoc datasets · ab49d0b6
      Anton Rigner authored
      Summary:
      Pull Request resolved: https://github.com/facebookresearch/d2go/pull/437
      
      # Problem
      - We use `TRAIN_CATEGORIES` to override the classes for convenient experimentation, so we don't have to re-map the JSON file.
      - But it's not possible to use the WeightedTrainingSampler with specified repeat factors (`DATASETS.TRAIN_REPEAT_FACTOR`) when also overriding the classes to use for training (ad-hoc datasets), because the underlying dataset name doesn't match the dataset names specified in the `TRAIN_REPEAT_FACTOR` pairs (a mapping between <dataset_name, repeat_factor>).
      
      # Fix
      
      - Update the dataset names for the REPEAT_FACTORS mapping as well, if the `WeightedTrainingSampler` is enabled and ad-hoc datasets are used (sketched below).
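      A hedged sketch of the remapping (the helper and the name-mapping argument are illustrative, not the d2go code):
      ```python
      def remap_repeat_factors(repeat_factors, adhoc_name_mapping):
          # repeat_factors: pairs like [["my_dataset_train", 2.0]] from DATASETS.TRAIN_REPEAT_FACTOR;
          # adhoc_name_mapping: original dataset name -> ad-hoc (category-filtered) dataset name.
          return [[adhoc_name_mapping.get(name, name), factor] for name, factor in repeat_factors]
      ```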
      
      Reviewed By: wat3rBro
      
      Differential Revision: D41765638
      
      fbshipit-source-id: 51dad484e4d715d2de900b5d0b7c7caa19903fb7
  16. 16 Dec, 2022 1 commit
  17. 12 Dec, 2022 2 commits
  18. 09 Dec, 2022 1 commit
  19. 08 Dec, 2022 1 commit
  20. 30 Nov, 2022 6 commits
    • support caching tuples · dece58ba
      Matthew Yu authored
      Summary:
      Pull Request resolved: https://github.com/facebookresearch/d2go/pull/432
      
      We support caching of tuples since they behave similarly to lists.
      
      Reviewed By: XiaoliangDai
      
      Differential Revision: D41483876
      
      fbshipit-source-id: 9d741074f8e2335ddd737ae3f1bdb288910f5564
    • algorithm · 150db2d1
      Matthew Yu authored
      Summary:
      Pull Request resolved: https://github.com/facebookresearch/d2go/pull/431
      
      Add a generic domain adaptation algorithm. This algorithm:
      * gets domain0 data out of the dataloader
      * runs the domain0 data through the model and saves the target layer output
      * gets domain1 data out of the dataloader
      * runs the domain1 data through the model and saves the target layer output
      * runs the domain adaptation loss on the domain0 and domain1 outputs
      * combines the losses with those of the model training iteration
      
      This diff adds `get_preprocess_domain0_input` and `get_preprocess_domain1_input` to the distillation helper. These are functions the user can use to convert the dataloader output into something the model can consume (e.g., pull the domain0 or domain1 key out of a dataloader that returns a dict). A rough sketch of the flow follows.
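      A hedged sketch of this flow (the function and its signature are illustrative; in the real algorithm the target-layer outputs are captured via cached layers and the loss wiring is more involved):
      ```python
      def domain_adaptation_step(model, da_loss, batch, task_losses,
                                 get_preprocess_domain0_input, get_preprocess_domain1_input):
          out0 = model(get_preprocess_domain0_input(batch))  # forward pass on domain0 data
          out1 = model(get_preprocess_domain1_input(batch))  # forward pass on domain1 data
          losses = dict(task_losses)                         # losses from the model training iteration
          losses["da_loss"] = da_loss(out0, out1)            # domain adaptation loss on the two outputs
          return losses
      ```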
      
      Differential Revision: D40970724
      
      fbshipit-source-id: fff050fbe864654fa6cb0df927f6843855ec1c14
    • support registering layer losses to model · c4860c5b
      Matthew Yu authored
      Summary:
      Pull Request resolved: https://github.com/facebookresearch/d2go/pull/430
      
      We add losses in distillation by instantiating them in the distillation algorithm's init and then running them during the forward pass.
      
      However this has some issues:
      * the losses are not registered as modules in the model, since we organize them as a list of `LayerLossMetadata` => this means that things like AMP do not behave as expected
      * the losses are not on the same device as the rest of the model, since they are potentially created after the model is moved to a new device
      
      This diff solves both of these issues with a helper function that registers the losses on the model and moves them to the model's device. `register_layer_losses_and_to_device` takes a `List[LayerLossMetadata]` as input, moves the losses to the same device as the model, and then registers them on the model.
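      A hedged sketch of what the helper does (the `LayerLossMetadata` fields used below, `loss` and `name`, are assumptions):
      ```python
      def register_layer_losses_and_to_device_sketch(model, layer_losses):
          device = next(model.parameters()).device
          for ll in layer_losses:
              loss_module = ll.loss.to(device)                  # move the loss onto the model's device
              # Register the loss so AMP, .to(), and optimizers see it as part of the model.
              model.add_module(ll.name.replace(".", "_") + "_loss", loss_module)
          return model
      ```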
      
      Differential Revision: D41296932
      
      fbshipit-source-id: ae7ae0847bce1b5cc481d838b9cae69cea424f25
    • support ignoring teacher · 909de50d
      Matthew Yu authored
      Summary:
      Pull Request resolved: https://github.com/facebookresearch/d2go/pull/429
      
      Add a teacher type called `no_teacher`, which can be specified by the user when they want to ignore the teacher (e.g., domain adaptation). Building the teacher then just returns a no-op (`nn.Identity`).
      
      Differential Revision: D40971788
      
      fbshipit-source-id: fc49ac44224c92806a7be253eefb8454305814eb
    • add an augmentation to pad image to square. · c2d7dbab
      Peizhao Zhang authored
      Summary:
      Pull Request resolved: https://github.com/facebookresearch/d2go/pull/428
      
      Add an augmentation to pad an image to a square.
      * For example, an image with shape (10, 7, 3) will become (10, 10, 3), padded with the value specified by `pad_value` (a numpy sketch follows).
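      A hedged numpy sketch of the behavior, not the actual d2go `Augmentation` class (the bottom/right padding placement is an assumption):
      ```python
      import numpy as np

      def pad_to_square(image: np.ndarray, pad_value: float = 0.0) -> np.ndarray:
          h, w = image.shape[:2]
          size = max(h, w)
          # Pad bottom/right so (10, 7, 3) becomes (10, 10, 3), filled with pad_value.
          return np.pad(image, ((0, size - h), (0, size - w), (0, 0)),
                        constant_values=pad_value)
      ```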
      
      Reviewed By: tax313, wat3rBro
      
      Differential Revision: D41545182
      
      fbshipit-source-id: 6d5fd9d16984a9904d44f22386920cdf130edda7
    • set cache in recorded layers · 30ac5858
      Matthew Yu authored
      Summary:
      Pull Request resolved: https://github.com/facebookresearch/d2go/pull/433
      
      Distillation uses a module called `CachedLayer` to record the outputs of a layer to an internal dict. This dict is typically initialized by the object itself and any value is overwritten every time the model runs.
      
      However, sometimes we need more than one recorded output from the layer (e.g., domain adaptation => we run the model on real, then synthetic data and need to use both outputs).
      
      This diff adds a helper to externally set the cache dict of a model. In other words, we can run `set_cache_dict` on a model to change the dict used by all `CachedLayer`s in the model. This allows us to run the model and record some outputs, then change the cache dict and rerun the model to save different outputs.
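      A hedged usage sketch (the `set_cache_dict` call follows the description above; the wrapping function and batch names are illustrative):
      ```python
      def run_and_collect(model, set_cache_dict, real_batch, synthetic_batch):
          real_cache, synthetic_cache = {}, {}
          set_cache_dict(model, real_cache)        # CachedLayers now record into real_cache
          model(real_batch)
          set_cache_dict(model, synthetic_cache)   # swap the dict; earlier outputs stay untouched
          model(synthetic_batch)
          return real_cache, synthetic_cache
      ```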
      
      Differential Revision: D40970577
      
      fbshipit-source-id: 49cb851af49ae193d0c8ac9218e02fdaf4e6587b
  21. 28 Nov, 2022 1 commit