1. 31 Mar, 2023 1 commit
  2. 30 Mar, 2023 4 commits
    • David Yan's avatar
      Update AIInfraCheckpointer to use the new gather/scatter functions for EMA and optimizer states · fe8680c1
      David Yan authored
      Summary:
      Pull Request resolved: https://github.com/facebookresearch/d2go/pull/520
      
      - Move gather/scatter functions to their own util function
      - Make changes to onboard AIInfraCheckpointer to the gather/scatter functions for optimizer and ema state
      - Add a test for fsdp checkpointer and ai infra checkpointer
      
      Reviewed By: YanjunChen329
      
      Differential Revision: D44400633
      
      fbshipit-source-id: bcfe3e0a4fbf53f91a83e88f74c4538699a50293
      fe8680c1
    • David Yan's avatar
      Save and load model EMA state for sharded state dicts in FSDPCheckpointer · e7652751
      David Yan authored
      Summary:
      Pull Request resolved: https://github.com/facebookresearch/d2go/pull/519
      
      Prior to this, the FSDP checkpointer did not save an EMA state matching the model state when the model used a sharded state dict. This diff adds that functionality.
      
      Reviewed By: YanjunChen329
      
      Differential Revision: D44270790
      
      fbshipit-source-id: f522765ad56e8279f355c43a19f26c3b6bcf01e3
      e7652751
    • Mircea Cimpoi's avatar
      Run zoomer profiling · 67267821
      Mircea Cimpoi authored
      Summary:
      Pull Request resolved: https://github.com/facebookresearch/d2go/pull/518
      
      Enable profiling for eval step only, not on every eval (which can be called during training)
      
      Reviewed By: frabu6
      
      Differential Revision: D44535915
      
      fbshipit-source-id: 4497a3f74f5d751277df9ed41bc9bf21056341c4
      67267821
    • Anton Rigner's avatar
      Read metadata for actual dataset in Visualizer, if available · c4b2d09c
      Anton Rigner authored
      Summary:
      Pull Request resolved: https://github.com/facebookresearch/d2go/pull/516
      
      # Context
      
      D2Go allows training with more than one dataset, and as long as the categories are consistent, the category IDs do not have to match between the annotations of two different datasets.
      
      The data is still loaded correctly by the data loader, and training works as expected.
      
      # Problem
      
      However, I observed odd mislabeling issues in the Visualizer output for TensorBoard. Originally I thought this was a data/conversion issue, but upon inspecting the logs I saw that the data is loaded correctly. See the example below.
      
      {F924075931}
      
      "Plant" labelled as "Refrigerator", "Floor" labelled as "Lamp"
      
      {F924078113}
      
      ... but the loaded annotations don't actually contain any samples of "Refrigerator".
      
      The reason is that the Visualizer always loads the metadata (and thus the labels) from the first train dataset, but the category order can differ between datasets while the training run is still perfectly valid.
      
      # Fix
      
      If a dataset name is associated with the data to visualize, use it to fetch the metadata (and thus the correct labels); otherwise fall back to the first dataset (the current behavior).
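      
      A minimal sketch of the lookup (the helper name is illustrative; the actual Visualizer code differs):
      
      ```
      from detectron2.data import MetadataCatalog
      
      
      def get_metadata_for_visualization(cfg, dataset_name=None):
          # Use the metadata of the dataset the sample actually came from, when known ...
          if dataset_name is not None:
              return MetadataCatalog.get(dataset_name)
          # ... otherwise keep the previous behavior: metadata of the first train dataset.
          return MetadataCatalog.get(cfg.DATASETS.TRAIN[0])
      ```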
      
      Reviewed By: wat3rBro
      
      Differential Revision: D44495363
      
      Privacy Context Container: L1127277
      
      fbshipit-source-id: 37b940d393aa794cd2f39aabdc66c6d23abd8000
      c4b2d09c
  3. 26 Mar, 2023 1 commit
  4. 24 Mar, 2023 2 commits
    • David Yan's avatar
      Add tests for sharded_state_dict and fix compatibility problems · 46606a02
      David Yan authored
      Summary:
      Pull Request resolved: https://github.com/facebookresearch/d2go/pull/511
      
      Add tests for sharded_state_dict integration in AIF Checkpointer
      
      Fix compatibility problems including:
      1. small API errors of flatten_sharded_optim_state_dict
      2. deprecate model.use_local_state_dict and model.load_local_state_dict
      3. fix auto conversion for local_state_dict
      4. fix T148056077: add metadata to differentiate between local_state_dict and sharded_state_dict when loading a directory with FSDPCheckpointer
      
      Reviewed By: YanjunChen329
      
      Differential Revision: D44160045
      
      fbshipit-source-id: f607b7076d0e49b9407f9adfbc8ecfe439c3b0c9
      46606a02
    • David Yan's avatar
      Add support for FSDP SHARDED_STATE_DICT in D2Go · fbc1c2e8
      David Yan authored
      Summary:
      Pull Request resolved: https://github.com/facebookresearch/d2go/pull/512
      
      Currently, when saving and loading checkpoints for FSDP-wrapped modules, we are saving and loading using `StateDictType.LOCAL_STATE_DICT`, where the state_dict becomes essentially a single flat tensor under the `_flat_param` key (or some other layer-specific key for flat weights). This means that
      1. It's impossible to load weights directly from checkpoints, for example in notebooks
      2. Converting from a local to a global checkpoint requires running a special workflow (https://fburl.com/code/6yqa4ldb) that occupies the same number of GPUs as was used during training
      
      This diff adds an option, `FSDP.STATE_DICT_TYPE`, which selects the type of state dict to save (local, sharded, or full). In sharded mode with AIF checkpointing, state dicts can be loaded locally within minutes using any number of GPUs, in notebooks and elsewhere.
      
      Note: for backwards compatibility, `CFG.FSDP.use_local_state_dict` and `CFG.FSDP.load_local_state_dict` still need to work when the new config parameter (`CFG.FSDP.state_dict_type`) is not set. They are also used to signify that local/sharded state dicts need to be converted to a full state dict when loading. This functionality can be deprecated once everyone migrates to AIF checkpointing with sharded dicts.
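      
      For illustration, a rough sketch of how such a config value could map onto PyTorch's FSDP state-dict API (the actual D2Go plumbing lives in the FSDP wrapper and checkpointer):
      
      ```
      from torch.distributed.fsdp import FullyShardedDataParallel as FSDP, StateDictType
      
      _STATE_DICT_TYPES = {
          "local": StateDictType.LOCAL_STATE_DICT,
          "sharded": StateDictType.SHARDED_STATE_DICT,
          "full": StateDictType.FULL_STATE_DICT,
      }
      
      
      def get_state_dict(fsdp_model, state_dict_type: str):
          # Select how the FSDP-wrapped model materializes its state dict.
          with FSDP.state_dict_type(fsdp_model, _STATE_DICT_TYPES[state_dict_type]):
              return fsdp_model.state_dict()
      ```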
      
      Reviewed By: YanjunChen329
      
      Differential Revision: D43840887
      
      fbshipit-source-id: d112f7b7ad97ba82fd5bf1da986b95ad7fc61c42
      fbc1c2e8
  5. 23 Mar, 2023 1 commit
    • Mik Vyatskov's avatar
      Redirect prints to logging module · d912e9f8
      Mik Vyatskov authored
      Summary:
      Pull Request resolved: https://github.com/facebookresearch/d2go/pull/509
      
      The print function is used all over the place, and it is not realistic to enforce that everyone avoid it. This diff therefore improves the debuggability of code written with prints by redirecting them to the logging module.
      
      Additionally, call the logger setup from `setup_after_launch` to make sure logging settings are applied in every spawned process.
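      
      A minimal sketch of the redirection idea, assuming a simple `builtins.print` override (not necessarily the exact implementation in this diff):
      
      ```
      import builtins
      import logging
      
      logger = logging.getLogger(__name__)
      _original_print = builtins.print
      
      
      def _print_to_logger(*args, **kwargs):
          # Preserve normal behavior when an explicit stream is requested (file=...).
          if "file" in kwargs:
              return _original_print(*args, **kwargs)
          # Route everything else through the logging module (end/flush are ignored here).
          logger.info(kwargs.get("sep", " ").join(str(a) for a in args))
      
      
      def redirect_print_to_logging():
          builtins.print = _print_to_logger
      ```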
      
      Reviewed By: frabu6, wat3rBro
      
      Differential Revision: D44280241
      
      fbshipit-source-id: 713400ac2b2edacef3c7a99067cbb1e684c3c5ad
      d912e9f8
  6. 22 Mar, 2023 2 commits
  7. 21 Mar, 2023 2 commits
  8. 16 Mar, 2023 1 commit
  9. 11 Mar, 2023 1 commit
  10. 09 Mar, 2023 1 commit
  11. 08 Mar, 2023 1 commit
  12. 06 Mar, 2023 1 commit
    • Alan Lin's avatar
      Add export registry for FCOS · 8cb50233
      Alan Lin authored
      Summary:
      Pull Request resolved: https://github.com/facebookresearch/d2go/pull/491
      
      As titled. Although FCOS usually requires no customized export methods, we found that our internal MUI platform requires the exported model to follow certain protocols. To avoid mixing that logic up with external code, this adds an export function registry that lets us bypass the default path.
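      
      For reference, a generic sketch of the registry pattern being added; the registry and function names below are illustrative, not the actual D2Go identifiers:
      
      ```
      from detectron2.utils.registry import Registry
      
      EXPORT_FUNC_REGISTRY = Registry("EXPORT_FUNC")  # hypothetical registry name
      
      
      @EXPORT_FUNC_REGISTRY.register()
      def fcos_export(model, inputs):
          # FCOS usually needs no custom export logic, so the default entry can simply
          # run the model; internal platforms can register their own variant instead.
          return model(inputs)
      ```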
      
      Reviewed By: wat3rBro
      
      Differential Revision: D43800839
      
      fbshipit-source-id: 41c8ebb10610ec92d17461211315c15908277b28
      8cb50233
  13. 05 Mar, 2023 3 commits
    • Fei Sun's avatar
      Move EMA to after backward. · a7dc757c
      Fei Sun authored
      Summary:
      Pull Request resolved: https://github.com/facebookresearch/d2go/pull/494
      
      Currently the EMA computation happens in the after-step hook, on the critical path where no other work is available, which increases the training iteration time. This diff moves the EMA computation to after the backward pass but before the optimizer step. This way, most of the CPU-side EMA computation time can be hidden, since at that point the CPU is waiting for the GPU to finish the backward anyway. This change may completely hide the EMA CPU time; it reduces the EMA time from 20ms to 4ms, where the 4ms is GPU time.
      
      However, with this change the EMA reads the weights from the previous iteration (since it runs before the optimizer step). Since we train for many epochs, a one-iteration difference should not be significant.
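      
      Illustrative ordering only (the real change lives in D2Go's trainer/hooks; the loss computation here is a stand-in):
      
      ```
      import torch
      import torch.nn.functional as F
      
      
      def train_step(model, ema_params, optimizer, batch, targets, decay=0.999):
          loss = F.mse_loss(model(batch), targets)  # stand-in loss
          loss.backward()
          # EMA update moved here: the CPU-side EMA bookkeeping overlaps with the GPU
          # still executing the backward pass. It therefore sees the weights from
          # *before* this iteration's optimizer step.
          with torch.no_grad():
              for ema_p, p in zip(ema_params, model.parameters()):
                  ema_p.mul_(decay).add_(p, alpha=1.0 - decay)
          optimizer.step()
          optimizer.zero_grad()
      ```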
      
      Reviewed By: tglik
      
      Differential Revision: D43527552
      
      fbshipit-source-id: 1faa9d910b20cae0fc77da541bc0ad176bce18a8
      a7dc757c
    • Fei Sun's avatar
      Prefetch forward · 5f1ef548
      Fei Sun authored
      Summary:
      Pull Request resolved: https://github.com/facebookresearch/d2go/pull/492
      
      Enable prefetching of the FSDP all-gathers. Forward prefetch may or may not improve performance; its effectiveness depends on other FSDP options, such as ZeRO-2 vs. ZeRO-3 and HSDP vs. FSDP. An HPO sweep is needed to find the best configuration.
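      
      A minimal sketch of what enabling the option amounts to in PyTorch FSDP terms (the D2Go config wiring is omitted):
      
      ```
      import torch.nn as nn
      from torch.distributed.fsdp import FullyShardedDataParallel as FSDP
      
      
      def wrap_with_prefetch(module: nn.Module) -> FSDP:
          # forward_prefetch issues the next layer's all-gather before the current
          # layer's forward finishes, trading memory for less exposed communication.
          return FSDP(module, forward_prefetch=True)
      ```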
      
      Reviewed By: wat3rBro
      
      Differential Revision: D43027253
      
      fbshipit-source-id: cbf1b4bcf5b0b8301b5b9547e3c22b1f0ffc7590
      5f1ef548
    • Fei Sun's avatar
      Use LERP to implement EMA · 255313d8
      Fei Sun authored
      Summary:
      Pull Request resolved: https://github.com/facebookresearch/d2go/pull/493
      
      Currently the EMA implementation first does the multiplication and then the addition, which requires two round trips to HBM. With the lerp operator, one kernel can do both. This change uses lerp to compute the EMA instead, reducing the GPU EMA computation time by 40%.
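      
      A standalone sketch of the before/after formulation (not the D2Go EMA code itself):
      
      ```
      import torch
      
      decay = 0.999
      ema, new = torch.randn(1024), torch.randn(1024)
      
      # Two kernels, two HBM round trips: decay * ema + (1 - decay) * new
      ema_two_step = ema.clone().mul_(decay).add_(new, alpha=1.0 - decay)
      
      # Single fused kernel computing the same update
      ema_lerp = ema.clone().lerp_(new, 1.0 - decay)
      
      assert torch.allclose(ema_two_step, ema_lerp)
      ```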
      
      Reviewed By: newstzpz
      
      Differential Revision: D43525938
      
      fbshipit-source-id: ca1e14453bdfda958d3c412a52ff48efa65b3dd4
      255313d8
  14. 02 Mar, 2023 1 commit
    • Anthony Chen's avatar
      add correctness test for ai infra checkpointer with FSDP local mode in d2go runner · fd0cbb8f
      Anthony Chen authored
      Summary:
      X-link: https://github.com/facebookresearch/mobile-vision/pull/141
      
      Pull Request resolved: https://github.com/facebookresearch/d2go/pull/484
      
      This diff adds a correctness test for ai infra checkpointer + FSDP local mode within a d2go runner. It verifies that ai infra checkpointer saves the exact same model as the old checkpointer, and that we can convert between ai infra checkpoints (local) and fsdp checkpoints (local + global) seamlessly.
      
      Note: adapted from mattcyu1's script D43492498.
      
      ## Testing
      
      Testing is done by saving with both the AI Infra and FSDP checkpointers and comparing the state dicts produced. Here are the steps:
      
      1. Build the model. Save a local ckpt using the FSDP checkpointer and another local ckpt using the AIInfra checkpointer
      2. Reset the model. Load local ckpt using the FSDP checkpointer and convert it to global ckpt
      3. Reset the model. Load local ckpt using the AIInfra checkpointer and re-save it as global ckpt using the FSDP checkpointer
      4. Compare the two global state dicts
      
      ## Others
      
      1. Add a launch decorator for d2go.distributed worker using the one in `fbcode/mobile-vision/mobile_cv/mobile_cv/torch/utils_pytorch/distributed_helper.py`
      
      2. Remove `ema_state.load_state_dict()` from loading. This is needed because the AI Infra checkpointer loads the state dict in place before `ema_state.load_state_dict()` is called. Since the loading is in place, both ema_state and state_dict['ema_state'] point to the same tensors, so calling `ema_state.load_state_dict()` clears ema_state, effectively freeing the tensors and causing it to return an empty dict.
      Solution: Don't call `ema_state.load_state_dict()` because the state is already loaded. More info: https://www.internalfb.com/intern/wiki/Components_in_AI/Checkpoint/Getting_Started/Input_Output_Contract/#load
      
      Reviewed By: xunnanxu
      
      Differential Revision: D43423572
      
      fbshipit-source-id: 8c4a47917670ea1205f952540d1e4cb9fc9232c0
      fd0cbb8f
  15. 01 Mar, 2023 2 commits
  16. 28 Feb, 2023 1 commit
  17. 25 Feb, 2023 1 commit
  18. 24 Feb, 2023 1 commit
    • Matthew Yu's avatar
      turn off interleaving if only saving on rank0 · 3111ae59
      Matthew Yu authored
      Summary:
      Pull Request resolved: https://github.com/facebookresearch/d2go/pull/482
      
      We should avoid using interleaving during save if we are calling save on one process:
      ```
      if comm.is_main_process():
        save()
      ```
      This is because interleaving calls comm.synchronize(), so the lone saving process will just wait indefinitely.
      
      This diff updates the FSDP checkpointer to use save(interleave=False) when running on one process.
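      
      A simplified sketch of the resulting call pattern (checkpointer API reduced to the `interleave` argument mentioned above):
      
      ```
      from detectron2.utils import comm
      
      
      def save_checkpoint(checkpointer, name):
          # Only the main process saves, so interleaving (which synchronizes across
          # ranks) must be turned off to avoid waiting on ranks that never call save().
          if comm.is_main_process():
              checkpointer.save(name, interleave=False)
      ```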
      
      Reviewed By: wat3rBro, YanjunChen329
      
      Differential Revision: D43526328
      
      fbshipit-source-id: 672993a87af627aca090384b0c218798bd42fcde
      3111ae59
  19. 23 Feb, 2023 2 commits
  20. 17 Feb, 2023 2 commits
  21. 16 Feb, 2023 4 commits
    • Anthony Chen's avatar
      Add an option to specify the period of metric gathering and writing in Trainer · 6f43a43a
      Anthony Chen authored
      Summary:
      X-link: https://github.com/fairinternal/detectron2/pull/591
      
      Pull Request resolved: https://github.com/facebookresearch/d2go/pull/469
      
      X-link: https://github.com/facebookresearch/detectron2/pull/4785
      
      Add an option to specify the period of metric gathering and writing in Trainer.
      
      This feature is needed to optimize training speed for large-scale training jobs such as generative AI. The all_gather call in metric writing at every iteration is time-consuming when hundreds of GPUs are used, taking ~10% of the total training time. With this feature we can set the metric gathering period to match cfg.WRITER_PERIOD=20, reducing training time while keeping the logged metrics the same for users.
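      
      Illustrative config only; `GATHER_METRIC_PERIOD` below is a placeholder name for the new option, not necessarily the key introduced in this diff:
      
      ```
      from detectron2.config import CfgNode
      
      cfg = CfgNode()
      cfg.WRITER_PERIOD = 20          # existing: how often writers flush metrics
      cfg.GATHER_METRIC_PERIOD = 20   # hypothetical key: how often metrics are all-gathered
      ```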
      
      Reviewed By: miqueljubert, wat3rBro
      
      Differential Revision: D43098985
      
      Privacy Context Container: 2011691122555468
      
      fbshipit-source-id: 63c93a7331aa63badce5125e5240d2d5f7e61b74
      6f43a43a
    • Sudarshan Raghunathan's avatar
      Add reply files to d2go training processes · f0f55cdc
      Sudarshan Raghunathan authored
      Summary:
      This diff contains a minimal set of changes to support returning reply files to MAST.
      
      There are three parts:
      1. First, we have a try..except in the main function to catch all the "catchable" Python exceptions. Exceptions from C++ code or segfaults will not be handled here.
      2. Each exception is then written to a per-process JSON reply file.
      3. At the end, all per-process files are stat-ed and the earliest file is copied to a location specified by MAST.
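      
      A rough sketch of the per-process reply file idea (file layout and MAST integration details are assumptions, not the exact implementation):
      
      ```
      import json
      import os
      import traceback
      
      
      def run_with_reply_file(main_fn, reply_dir, rank):
          try:
              main_fn()
          except Exception as e:  # C++ errors and segfaults are not caught here
              reply = {
                  "rank": rank,
                  "error": repr(e),
                  "traceback": traceback.format_exc(),
              }
              os.makedirs(reply_dir, exist_ok=True)
              with open(os.path.join(reply_dir, f"reply_{rank}.json"), "w") as f:
                  json.dump(reply, f)
              raise
      ```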
      
      # Limitations
      1. This only works when local processes are launched using multiprocessing (which is the default)
      2. If an error happens in C++ code, it will likely not be caught in Python and the reply file might not contain the correct logs
      
      Differential Revision: D43097683
      
      fbshipit-source-id: 0eaf4f19f6199a9c77f2ce4c7d2bbc2a2078be99
      f0f55cdc
    • Tao Xu's avatar
      fix the issue of tensorboard visualization · b21607b1
      Tao Xu authored
      Summary:
      Pull Request resolved: https://github.com/facebookresearch/d2go/pull/473
      
      As shown in the attached image and TB visualization, some of our jobs fail to save their results to TensorBoard.
      There should be messages between the circled lines of the screenshot if the images were added to TensorBoard.
      One possible reason is that the TensorBoard visualization evaluator is only added on the rank-0 GPU, so it may fail to fetch any data when evaluating a diffusion model that only runs one batch of inference during validation.
      To resolve this, we add the visualization evaluator on all ranks, gather their results, and finally add the results with the largest batch size to TensorBoard for visualization.
      
      The screenshot is from f410204704 (https://www.internalfb.com/manifold/explorer/mobile_vision_workflows/tree/workflows/xutao/20230211/latest_train/dalle2_decoder.SIULDLpgix/e2e_train/log.txt)
      
      Refactored default_runner.py to add a new function, _create_evaluators, that creates all evaluators. This way we do not need to override the whole _do_test function in runners that add the visualization evaluator on all ranks.
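      
      A simplified sketch of the gathering step, assuming detectron2-style comm utilities (the real evaluator does more bookkeeping):
      
      ```
      from detectron2.utils import comm
      
      
      def gather_and_pick_largest(local_images):
          # Every rank runs the visualization evaluator; rank 0 collects the results
          # and keeps the batch with the most images for the TensorBoard writer.
          all_results = comm.gather(local_images, dst=0)
          if not comm.is_main_process():
              return None
          return max(all_results, key=len) if all_results else None
      ```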
      
      (Note: this ignores all push blocking failures!)
      
      Reviewed By: YanjunChen329
      
      Differential Revision: D43263543
      
      fbshipit-source-id: eca2259277584819dcc5400d47fa4fb142f2ed9b
      b21607b1
    • Yanghan Wang's avatar
      add type annotations to preserve return type · 31197c3e
      Yanghan Wang authored
      Summary:
      X-link: https://github.com/facebookresearch/mobile-vision/pull/137
      
      Pull Request resolved: https://github.com/facebookresearch/d2go/pull/475
      
      Reviewed By: YanjunChen329
      
      Differential Revision: D42148563
      
      fbshipit-source-id: 76b794988bda7f773a734838c79d2de087d7ce94
      31197c3e
  22. 14 Feb, 2023 3 commits
    • Fei Sun's avatar
      Add NUMA binding · 07ddd262
      Fei Sun authored
      Summary:
      Pull Request resolved: https://github.com/facebookresearch/d2go/pull/472
      
      Add NUMA binding to D2Go. It distributes the GPUs evenly across the CPU sockets so that CPU traffic and GPU-to-CPU traffic are balanced. It helps diffusion model training, but it is a general technique that can be applied to all models. We still want to enable it manually in each case, though, until we are confident enough in the performance gains to make it the default.
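      
      A very rough sketch of the binding idea, assuming GPUs are split evenly across sockets (the actual D2Go implementation is based on the referenced work and handles topology discovery more carefully):
      
      ```
      import os
      
      
      def bind_to_numa_node(local_rank: int, num_numa_nodes: int = 2):
          # Pin this process to the CPUs of one NUMA node, chosen by local rank,
          # so each GPU's host-side work stays on the socket closest to it.
          cpus = sorted(os.sched_getaffinity(0))
          per_node = max(1, len(cpus) // num_numa_nodes)
          node = local_rank % num_numa_nodes
          os.sched_setaffinity(0, cpus[node * per_node:(node + 1) * per_node])
      ```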
      
      NUMA binding is based on jspark1105's work D42827082. Full credit goes to him.
      
      This diff does not enable the feature.
      
      Reviewed By: newstzpz
      
      Differential Revision: D43036817
      
      fbshipit-source-id: fe67fd656ed3980f04bc81909cae7ba2527346fd
      07ddd262
    • Fei Sun's avatar
      Add option to use fused adamw optimizer · 8bb24bb0
      Fei Sun authored
      Summary:
      Pull Request resolved: https://github.com/facebookresearch/d2go/pull/471
      
      AdamW recently added an option to use a fused implementation. It may give better performance than the foreach argument. However, we cannot enable it by default, since it requires all parameters to be on CUDA and may have other restrictions, so we enable it on a per-project basis.
      
      On DALLE2, it results in about a 23ms speedup.
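      
      A minimal example of opting into the fused kernel (parameter values are illustrative):
      
      ```
      import torch
      
      model = torch.nn.Linear(1024, 1024).cuda()
      # fused=True requires every parameter to live on CUDA
      optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4, fused=True)
      ```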
      
      Reviewed By: newstzpz
      
      Differential Revision: D43027327
      
      fbshipit-source-id: 82c6855116094e86386ad2edeea3a74f9e555174
      8bb24bb0
    • Fei Sun's avatar
      Ignore modules · 7ef9d897
      Fei Sun authored
      Summary:
      Pull Request resolved: https://github.com/facebookresearch/d2go/pull/470
      
      Enable ignoring modules in FSDP. Ignored modules are not wrapped by FSDP. This is useful for the diffusion model, where the CLIP model is not trained, so it is fine to keep a separate copy on each GPU. Ignoring CLIP reduces its execution time from 63ms to 48ms (a 15ms reduction), mostly because it is a CPU-bound module and FSDP injects extra code into each wrapped block. In addition, it reduces the FSDP all-gather time before the CLIP execution from 56ms to 7ms (a 49ms reduction).
      
      In total, this change may reduce the CLIP-related runtime from 119ms to 55ms (a 64ms reduction).
      
      This feature is controlled by this flag:
          IGNORED_MODULES: ["clip_model"]
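      
      A sketch of what the flag maps to in PyTorch FSDP terms (config plumbing omitted; resolving the module by attribute name is an assumption for illustration):
      
      ```
      from torch.distributed.fsdp import FullyShardedDataParallel as FSDP
      
      
      def wrap_ignoring(model, ignored_module_names):
          # Ignored submodules keep a full replica per GPU and skip FSDP's hooks
          # and all-gathers entirely.
          ignored = [getattr(model, name) for name in ignored_module_names]
          return FSDP(model, ignored_modules=ignored)
      
      # e.g. wrap_ignoring(model, ["clip_model"]) for the IGNORED_MODULES example above
      ```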
      
      Reviewed By: newstzpz
      
      Differential Revision: D42910383
      
      fbshipit-source-id: dc4c12254d45ac45d88329feb63a26ec4ae04aef
      7ef9d897
  23. 05 Feb, 2023 1 commit
    • Maayan Frid-Adar's avatar
      Fix TB train visualization · c4c512ce
      Maayan Frid-Adar authored
      Summary:
      Pull Request resolved: https://github.com/facebookresearch/d2go/pull/465
      
      Training visualization was effectively active only for the first training iterations when TRAIN_LOADER_VIS_MAX_IMAGES and TRAIN_LOADER_VIS_WRITE_PERIOD were set to > 0, because MAX_IMAGES served both as the number of samples to log per write and as the overall budget of samples to load. After the first log to TB it dropped to 0, so visualization was never activated for later training steps (ignoring WRITE_PERIOD).
      
      I've added a TRAIN_LOADER_VIS_MAX_BATCH_IMAGES parameter to set the number of samples to visualize each write period, up to the overall maximum defined by TRAIN_LOADER_VIS_MAX_IMAGES.
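      
      Illustrative values only (the exact config path is omitted); the point is that the per-write count is now separate from the overall budget:
      
      ```
      from detectron2.config import CfgNode
      
      vis_cfg = CfgNode()
      vis_cfg.TRAIN_LOADER_VIS_MAX_IMAGES = 64        # overall budget of images to log
      vis_cfg.TRAIN_LOADER_VIS_WRITE_PERIOD = 200     # log every N training iterations
      vis_cfg.TRAIN_LOADER_VIS_MAX_BATCH_IMAGES = 4   # new: images logged per write period
      ```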
      
      Reviewed By: tglik
      
      Differential Revision: D42832903
      
      fbshipit-source-id: 02a0d9aa4ea6d0ee725120916d26b77843a3e8ab
      c4c512ce
  24. 04 Feb, 2023 1 commit