1. 24 May, 2023 1 commit
  2. 22 May, 2023 1 commit
      Add a GPU memory snapshot profiler in d2go · 20e18edc
      Anthony Chen authored
      Summary:
      Pull Request resolved: https://github.com/facebookresearch/d2go/pull/542
      
      ## Overview
      Add an option to enable the GPU memory snapshot profiler in d2go. The profiler is natively supported by PyTorch and records the stack traces associated with all CUDA memory allocation/free events, allowing users to understand which parts of the code contribute to the memory bottleneck. It also provides a powerful interactive web tool that visualizes memory utilization over time:
      {F978609840}
      Each colored block represents an allocated CUDA memory block. Users can click on a block to see the Python stack trace that allocated it.
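
      For orientation, the underlying PyTorch API can be driven directly as below. This is a hedged sketch, not the d2go hook itself: the underscore-prefixed functions are PyTorch internals and may change between releases, and a CUDA build is assumed.
      ```
      import torch

      # Assumes a CUDA-enabled PyTorch build; _record_memory_history and
      # _dump_snapshot are internal APIs and may change between releases.
      torch.cuda.memory._record_memory_history(max_entries=100_000)

      # ... run the CUDA workload to be profiled ...

      # The pickle can be dropped into the interactive viewer at
      # https://pytorch.org/memory_viz to browse the timeline described above.
      torch.cuda.memory._dump_snapshot("snapshot.pickle")
      torch.cuda.memory._record_memory_history(enabled=None)  # stop recording
      ```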
      
      ## d2go integration
      This diff integrates the profiler as a hook controlled by the config key `USE_MEMORY_PROFILER`. The profiler writes snapshots and web tools to the output directory. Logging can happen at three points: at the start of training, during training, and on OOM. Please read the docstring of `D2GoGpuMemorySnapshot` for more information.
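
      A hedged sketch of what enabling the hook looks like in a config; only the `USE_MEMORY_PROFILER` key name comes from this diff, the rest is illustrative yacs usage.
      ```
      from yacs.config import CfgNode

      cfg = CfgNode()
      cfg.USE_MEMORY_PROFILER = True  # key name from this diff
      # Assumption: snapshots and the web tool land under the output
      # directory, per the description above.
      cfg.OUTPUT_DIR = "./output"
      ```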
      
      Reviewed By: tglik, jaconey
      
      Differential Revision: D45673764
      
      fbshipit-source-id: 8900484a2266d94421fe3ee7a85a4dea3a9f6b72
  3. 19 May, 2023 1 commit
      another implementation of log_interval · 876c6756
      Yanghan Wang authored
      Summary:
      Pull Request resolved: https://github.com/facebookresearch/d2go/pull/543
      
      The previous implementation:
      > the problem is the ContextDecorator somehow swallows the exception in the wrapped function and just returns None.
      
      This diff adds a test under which the previous implementation fails:
      ```
      ======================================================================
      FAIL: test_log_interval_error_prop (d2go.tests.fb.test_utils_logging.TestUtilsLogging)
      Make sure the log_interval can handle error propagation.
      ----------------------------------------------------------------------
      Traceback (most recent call last):
        File "/data/sandcastle/boxes/fbsource/buck-out/v2/gen/fbcode/ef4169ac7f95fb74/mobile-vision/d2go/tests/__init_tests__/init_tests#link-tree/d2go/tests/fb/test_utils_logging.py", line 152, in test_log_interval_error_prop
          foo(-1)
      AssertionError: ValueError not raised
      
      ----------------------------------------------------------------------
      Ran 1 test in 0.098s
      ```
      
      The new version is easier to understand and no longer swallows the exception.
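
      For illustration, here is a minimal reproduction of the failure mode next to an exception-transparent alternative. Neither is the actual d2go `log_interval`; this is a sketch of the two designs discussed above.
      ```
      import functools
      import logging
      import time
      from contextlib import ContextDecorator


      class log_interval_buggy(ContextDecorator):
          """Reproduces the bug: __exit__ returning True suppresses the
          exception, so the decorated function silently returns None."""

          def __enter__(self):
              self.start = time.perf_counter()
              return self

          def __exit__(self, exc_type, exc, tb):
              logging.info("took %.3fs", time.perf_counter() - self.start)
              return True  # bug: swallows any exception raised in the body


      def log_interval(func):
          """Exception-transparent alternative: log the interval in a
          finally block and let errors propagate."""

          @functools.wraps(func)
          def wrapper(*args, **kwargs):
              start = time.perf_counter()
              try:
                  return func(*args, **kwargs)
              finally:
                  logging.info(
                      "%s took %.3fs", func.__name__, time.perf_counter() - start
                  )

          return wrapper


      @log_interval
      def foo(x):
          if x < 0:
              raise ValueError("x must be non-negative")


      # foo(-1) now raises ValueError; decorated with log_interval_buggy()
      # it would silently return None, which is what the new test catches.
      ```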
      
      Reviewed By: jaconey
      
      Differential Revision: D46009938
      
      fbshipit-source-id: 6b632deb513ab47c4d760f796bf49fc45eae3005
  4. 18 May, 2023 1 commit
  5. 16 May, 2023 1 commit
  6. 12 May, 2023 1 commit
  7. 10 May, 2023 1 commit
  8. 08 May, 2023 1 commit
      Quantize FBS model with 16bit FX Quantization · e3642005
      Jiaxu Zhu authored
      Summary:
      Pull Request resolved: https://github.com/facebookresearch/d2go/pull/531
      
      As the title says, enable mixed-precision FX quantization for the FBS model.
      
      This diff includes:
      1. Add `custom_prepare_fx` to the FBS d2go model to enable FX quantization.
      2. Add two new d2go config params, `QUANTIZATION.ACT_BITS`/`QUANTIZATION.WEIGHTS`.
      3. Add `backend_config`/`qconfig_mapping` to d2go convert function calls.
      4. Add an example FBS FX QAT config.
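
      For orientation, a generic FX prepare/convert flow in PyTorch is sketched below; the FBS model, its `custom_prepare_fx` hook, and the 16-bit `backend_config`/`qconfig_mapping` objects from this diff are not reproduced, so defaults stand in for them.
      ```
      import torch
      from torch.ao.quantization import get_default_qconfig_mapping
      from torch.ao.quantization.quantize_fx import convert_fx, prepare_fx

      # Toy stand-in for the FBS model.
      model = torch.nn.Sequential(torch.nn.Linear(8, 8), torch.nn.ReLU()).eval()
      example_inputs = (torch.randn(1, 8),)

      # The diff threads custom backend_config/qconfig_mapping objects
      # through these same calls; the fbgemm defaults are used for brevity.
      qconfig_mapping = get_default_qconfig_mapping("fbgemm")
      prepared = prepare_fx(model, qconfig_mapping, example_inputs)
      prepared(*example_inputs)  # calibration pass
      quantized = convert_fx(prepared)
      ```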
      
      Reviewed By: ayushidalmia
      
      Differential Revision: D45252545
      
      fbshipit-source-id: 813b192fcdd66c17629490b8908ce8cd8534506a
  9. 07 May, 2023 1 commit
  10. 02 May, 2023 1 commit
      Use FSDP.STATE_DICT_TYPE = SHARDED_STATE_DICT by default · 5ecbb174
      Anthony Chen authored
      Summary:
      Pull Request resolved: https://github.com/facebookresearch/d2go/pull/535
      
      Use `FSDP.STATE_DICT_TYPE = SHARDED_STATE_DICT` for FSDP checkpointing by default. `FSDP.USE_LOCAL_STATE_DICT` will be deprecated in the future.
      
      # Note
      After this change, setting `FSDP.USE_LOCAL_STATE_DICT` in a config is no longer picked up by the code; it is superseded by the default value of `FSDP.STATE_DICT_TYPE`.
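
      Illustratively, the config-level change amounts to the following (a hedged sketch; the exact value spelling in d2go configs is an assumption):
      ```
      # Hedged sketch; the value spelling is an assumption.
      cfg.FSDP.STATE_DICT_TYPE = "SHARDED_STATE_DICT"  # new default
      # cfg.FSDP.USE_LOCAL_STATE_DICT = True  # deprecated: no longer consulted
      ```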
      
      Reviewed By: tglik
      
      Differential Revision: D45413143
      
      fbshipit-source-id: e7bc2d5dc04ac09004cb89353333be020a9c80b5
  11. 01 May, 2023 3 commits
  12. 21 Apr, 2023 1 commit
      enable the diffusion visualization evaluators to run on multiple datasets · c7bd7dfe
      Tao Xu authored
      Summary:
      Pull Request resolved: https://github.com/facebookresearch/d2go/pull/527
      
      - Add `model.reset_generation_counter()` to enable the diffusion visualization evaluators to run on multiple test datasets.
        - Before this fix, the visualization evaluators would only run on the 1st test dataset: `self.generation_counter` drops below 0 after the 1st test dataset is processed, so the evaluators skip all remaining test sets (see the sketch after this list).
      - Use DDIM for the upsampler by default for better results.
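
      A minimal sketch of the reset pattern (illustration only; the actual d2go evaluator and model classes differ):
      ```
      class DiffusionModelSketch:
          """Illustrates the generation-counter reset; not the d2go code."""

          def __init__(self, max_generations: int) -> None:
              self.max_generations = max_generations
              self.generation_counter = max_generations

          def maybe_generate_visualization(self) -> bool:
              # Before the fix the counter was never reset, so once it was
              # exhausted on the first test dataset, every later dataset
              # was skipped by this check.
              if self.generation_counter <= 0:
                  return False
              self.generation_counter -= 1
              return True

          def reset_generation_counter(self) -> None:
              # Called before evaluating each test dataset.
              self.generation_counter = self.max_generations
      ```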
      
      Reviewed By: zechenghe
      
      Differential Revision: D45058672
      
      fbshipit-source-id: 2f7919bf6ecd2e5f6f242ce3e7891cb3dc8d6af4
  13. 20 Apr, 2023 2 commits
  14. 18 Apr, 2023 1 commit
  15. 11 Apr, 2023 2 commits
  16. 05 Apr, 2023 2 commits
      Setup root logger once & on import time · abdeafb0
      Mik Vyatskov authored
      Summary:
      Pull Request resolved: https://github.com/facebookresearch/d2go/pull/523
      
      To avoid setting it up multiple times, add a `run_once()` decorator.
      
      Additionally, make sure logging is configured for dataloading workers, which have a different entry point, by moving the logging setup to import time. Currently, when a dataloader worker is created using the spawn method of the multiprocessing module, a new Python interpreter is started, with all modules imported anew and with the entry point set to the specified method. This means the entry point of the training framework is skipped, together with the logging setup.
      
      With this change, logging is configured at import time: when a dataloading process is created, train_net is not invoked as an entry point, but it is still imported in the child process, so logging is still configured even though the training main never runs.
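
      A minimal sketch of the pattern, assuming a decorator-factory shape for `run_once()` (the actual d2go helper may differ):
      ```
      import functools
      import logging


      def run_once():
          """Make the wrapped function a no-op after its first call."""

          def deco(func):
              @functools.wraps(func)
              def wrapper(*args, **kwargs):
                  if not wrapper._has_run:
                      wrapper._has_run = True
                      return func(*args, **kwargs)

              wrapper._has_run = False
              return wrapper

          return deco


      @run_once()
      def _setup_root_logger():
          logging.basicConfig(level=logging.INFO)


      # Executed at import time, so spawned dataloader workers, which
      # re-import this module but never run the trainer's entry point,
      # still configure logging exactly once.
      _setup_root_logger()
      ```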
      
      Reviewed By: miqueljubert
      
      Differential Revision: D44641142
      
      fbshipit-source-id: 06ea85363d965b31d7f9ade3c2615ed9db67470b
      change default FSDP strategy to grad_optim (ZERO2) · 35affd74
      Anthony Chen authored
      Summary:
      Pull Request resolved: https://github.com/facebookresearch/d2go/pull/522
      
      Change d2go's default FSDP sharding strategy to grad_optim, which corresponds to `ShardingStrategy.SHARD_GRAD_OP` in the FSDP API, or ZeRO-2 in the literature. grad_optim has been shown to offer the best tradeoff between memory utilization and training speed for mid-sized models.
      
      `FSDP.ALGORITHM = ""` came from the previous design, where it indicated that FSDP is not used; it no longer works.
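
      In PyTorch FSDP terms, the new default corresponds to the following (a hedged sketch assuming an initialized process group, e.g. via torchrun, and a CUDA device, as in training):
      ```
      import torch
      from torch.distributed.fsdp import FullyShardedDataParallel as FSDP
      from torch.distributed.fsdp import ShardingStrategy


      def wrap_with_grad_optim(model: torch.nn.Module) -> FSDP:
          # grad_optim == ShardingStrategy.SHARD_GRAD_OP (ZeRO-2):
          # gradients and optimizer state are sharded across ranks, while
          # parameters stay unsharded between forward and backward passes.
          return FSDP(model, sharding_strategy=ShardingStrategy.SHARD_GRAD_OP)
      ```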
      
      Reviewed By: tglik
      
      Differential Revision: D44657184
      
      fbshipit-source-id: 3888eea5f2b5042269e69453f3cdd8db7cf1581c
  17. 03 Apr, 2023 1 commit
  18. 31 Mar, 2023 2 commits
  19. 30 Mar, 2023 4 commits
      Update AIInfraCheckpointer to use the new gather/scatter functions for EMA and optimizer states · fe8680c1
      David Yan authored
      Summary:
      Pull Request resolved: https://github.com/facebookresearch/d2go/pull/520
      
      - Move the gather/scatter functions into their own util module
      - Onboard AIInfraCheckpointer to the gather/scatter functions for optimizer and EMA state
      - Add a test for the FSDP checkpointer and the AI Infra checkpointer
      
      Reviewed By: YanjunChen329
      
      Differential Revision: D44400633
      
      fbshipit-source-id: bcfe3e0a4fbf53f91a83e88f74c4538699a50293
      Save and load model EMA state for sharded state dicts in FSDPCheckpointer · e7652751
      David Yan authored
      Summary:
      Pull Request resolved: https://github.com/facebookresearch/d2go/pull/519
      
      Prior to this, the FSDP checkpointer did not save an EMA state that matched the model state when the model used a sharded state dict. This diff adds that functionality.
      
      Reviewed By: YanjunChen329
      
      Differential Revision: D44270790
      
      fbshipit-source-id: f522765ad56e8279f355c43a19f26c3b6bcf01e3
      Run zoomer profiling · 67267821
      Mircea Cimpoi authored
      Summary:
      Pull Request resolved: https://github.com/facebookresearch/d2go/pull/518
      
      Enable profiling only for the eval step, not on every eval call (eval can also be invoked during training).
      
      Reviewed By: frabu6
      
      Differential Revision: D44535915
      
      fbshipit-source-id: 4497a3f74f5d751277df9ed41bc9bf21056341c4
      Read metadata for actual dataset in Visualizer, if available · c4b2d09c
      Anton Rigner authored
      Summary:
      Pull Request resolved: https://github.com/facebookresearch/d2go/pull/516
      
      # Context
      
      D2go allows training with more than one dataset, and as long as the categories are consistent, the category IDs do not have to correspond between the annotations of two different datasets.
      
      The data is still loaded correctly by the data loader, and training works as expected.
      
      # Problem
      
      However, I observed odd mislabeling issues in the Visualizer for TensorBoard. Originally I thought this was a data/conversion issue, but upon inspecting the logs I saw that the data is loaded correctly. See the example below.
      
      {F924075931}
      
      "Plant" labelled as "Refrigerator", "Floor" labelled as "Lamp"
      
      {F924078113}
      
      ... but the loaded annotations don't actually contain any samples of "Refrigerator".
      
      The reason is that the Visualizer always loads the metadata (and thus the labels) from the first training dataset, but the category order may differ between datasets while still being a valid training run.
      
      # Fix
      
      If a dataset name is associated with the data to visualize, use it to fetch the metadata (and thus the correct labels); otherwise, default to the first dataset (the current behavior).
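
      A hedged sketch of that fallback in detectron2 terms; `get_visualizer_metadata` is a hypothetical helper, not the actual d2go function:
      ```
      from detectron2.data import MetadataCatalog


      def get_visualizer_metadata(cfg, dataset_name=None):
          if dataset_name is not None:
              # Use the metadata (and labels) of the dataset actually being
              # visualized, so category IDs resolve to the right names.
              return MetadataCatalog.get(dataset_name)
          # Previous behavior: fall back to the first training dataset.
          return MetadataCatalog.get(cfg.DATASETS.TRAIN[0])
      ```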
      
      Reviewed By: wat3rBro
      
      Differential Revision: D44495363
      
      Privacy Context Container: L1127277
      
      fbshipit-source-id: 37b940d393aa794cd2f39aabdc66c6d23abd8000
  20. 26 Mar, 2023 1 commit
  21. 24 Mar, 2023 2 commits
      Add tests for sharded_state_dict and fix compatibility problems · 46606a02
      David Yan authored
      Summary:
      Pull Request resolved: https://github.com/facebookresearch/d2go/pull/511
      
      Add tests for the sharded_state_dict integration in the AIF Checkpointer.
      
      Fix compatibility problems, including:
      1. Fix small API errors in `flatten_sharded_optim_state_dict`.
      2. Deprecate `model.use_local_state_dict` and `model.load_local_state_dict`.
      3. Fix the auto conversion for local_state_dict.
      4. Fix T148056077: add metadata to differentiate between local_state_dict and sharded_state_dict when loading a directory with FSDPCheckpointer.
      
      Reviewed By: YanjunChen329
      
      Differential Revision: D44160045
      
      fbshipit-source-id: f607b7076d0e49b9407f9adfbc8ecfe439c3b0c9
      Add support for FSDP SHARDED_STATE_DICT in D2Go · fbc1c2e8
      David Yan authored
      Summary:
      Pull Request resolved: https://github.com/facebookresearch/d2go/pull/512
      
      Currently, when saving and loading checkpoints for FSDP-wrapped modules, we save and load using `StateDictType.LOCAL_STATE_DICT`, where the state_dict is essentially a single flat tensor under the `_flat_param` key (or another layer-specific key for flat weights). This means that:
      1. It's impossible to load weights directly from checkpoints, for example in notebooks.
      2. Converting from a local to a global checkpoint requires running a special workflow (https://fburl.com/code/6yqa4ldb) that occupies the same number of GPUs as were used during training.
      
      This diff adds an option, `FSDP.STATE_DICT_TYPE`, which allows selecting the type of state dict to save (local, sharded, or full). In sharded mode, with AIF checkpointing, we gain the ability to load state dicts locally in minutes, with any number of GPUs, in notebooks and elsewhere.
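
      For reference, retrieving a sharded state dict through the PyTorch FSDP API looks roughly like this (a hedged sketch; it assumes an FSDP-wrapped module in an initialized process group, and the d2go checkpointer drives this internally):
      ```
      from torch.distributed.fsdp import FullyShardedDataParallel as FSDP
      from torch.distributed.fsdp import ShardedStateDictConfig, StateDictType


      def get_sharded_state_dict(fsdp_model: FSDP) -> dict:
          with FSDP.state_dict_type(
              fsdp_model,
              StateDictType.SHARDED_STATE_DICT,
              ShardedStateDictConfig(offload_to_cpu=True),
          ):
              # Each rank returns only its own shards, so checkpoints can
              # later be loaded or resharded with any number of GPUs.
              return fsdp_model.state_dict()
      ```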
      
      Note: for backwards compatibility, `CFG.FSDP.use_local_state_dict` and `CFG.FSDP.load_local_state_dict` still need to work when the new config parameter (`CFG.FSDP.state_dict_type`) is not set. They are also used to signify that local/sharded state dicts need to be converted to a full state dict when loading. This functionality can be deprecated once everyone migrates to AIF checkpointing with sharded dicts.
      
      Reviewed By: YanjunChen329
      
      Differential Revision: D43840887
      
      fbshipit-source-id: d112f7b7ad97ba82fd5bf1da986b95ad7fc61c42
  22. 23 Mar, 2023 1 commit
      Redirect prints to logging module · d912e9f8
      Mik Vyatskov authored
      Summary:
      Pull Request resolved: https://github.com/facebookresearch/d2go/pull/509
      
      The print function is used all over the place, and it's not realistic to forbid its use for everyone. This diff therefore improves the debuggability of code written with prints by redirecting them to the logging module.
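
      A minimal sketch of one way to do this (the d2go implementation may differ): a file-like object that forwards writes to a logger, installed as `sys.stdout`.
      ```
      import logging
      import sys


      class LoggingWriter:
          """File-like object that forwards print output to a logger."""

          def __init__(self, logger, level=logging.INFO):
              self.logger = logger
              self.level = level

          def write(self, message):
              message = message.strip()
              if message:  # skip the bare newlines that print() emits
                  self.logger.log(self.level, message)

          def flush(self):
              pass  # required by the file-like protocol; nothing to flush


      sys.stdout = LoggingWriter(logging.getLogger("stdout"))
      print("this line now goes through the logging module")
      ```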
      
      Additionally, call the logger setup from `setup_after_launch` to make sure logging settings are applied in every spawned process.
      
      Reviewed By: frabu6, wat3rBro
      
      Differential Revision: D44280241
      
      fbshipit-source-id: 713400ac2b2edacef3c7a99067cbb1e684c3c5ad
  23. 22 Mar, 2023 2 commits
  24. 21 Mar, 2023 2 commits
  25. 16 Mar, 2023 1 commit
  26. 11 Mar, 2023 1 commit
  27. 09 Mar, 2023 1 commit
  28. 08 Mar, 2023 1 commit