1. 07 Apr, 2022 1 commit
    • add metal GPU to d2go export · 6b4dbb31
      Owen Wang authored
      Summary: Allow the string name of the export type to indicate which mobile optimization backend the user wants to trigger.
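      A hedged sketch of how an export-type string could select the mobile optimization backend; the backend names and the `@` separator used here are assumptions for illustration, not d2go's actual naming.

```python
# Hypothetical sketch: map an export/predictor-type string to a mobile
# optimization backend. The "@backend" suffix convention is an assumption.

def parse_mobile_backend(predictor_type: str) -> str:
    """Return the mobile optimization backend encoded in the type string."""
    known_backends = ("metal", "vulkan", "cpu")
    for backend in known_backends:
        if predictor_type.endswith("@" + backend):
            return backend
    return "cpu"  # assumed default when no backend is specified

print(parse_mobile_backend("torchscript_mobile@metal"))  # metal
```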
      
      Reviewed By: wat3rBro
      
      Differential Revision: D35375928
      
      fbshipit-source-id: dc3f91564681344e1d43862423ab5dc63b6644d3
  2. 05 Apr, 2022 2 commits
    • support do_postprocess when tracing rcnn model in D2 style · 647a3fdf
      Yanghan Wang authored
      Summary:
      Pull Request resolved: https://github.com/facebookresearch/d2go/pull/200
      
      Currently, when exporting the RCNN model, we call it with `self.model.inference(inputs, do_postprocess=False)[0]`, so the output of the exported model is not post-processed, e.g. the mask is in its squared shape. This diff adds an option to include post-processing in the exported model.
      
      Worth noting: since the input is a single tensor, the post-process doesn't resize the output to the original resolution, and we can't apply the post-process a second time in the Predictor's PostProcessFunc to resize it further, so an assertion is added to raise an error in that case. This is fine for most production use cases, where the input is not resized.
      
      Set `RCNN_EXPORT.INCLUDE_POSTPROCESS` to `True` to enable this.
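      A minimal, hypothetical sketch of the idea (class and method names here are assumptions, not d2go's actual export code): the wrapper optionally resizes the raw squared outputs to the traced input resolution.

```python
# Sketch: wrap an RCNN for tracing so the exported module optionally runs
# post-processing. DummyRCNN stands in for GeneralizedRCNN.
import torch

class DummyRCNN(torch.nn.Module):
    """Stand-in model whose inference skips post-processing."""
    def inference(self, inputs, do_postprocess=False):
        # one 28x28 mask per image, mimicking the squared ROI output
        return [{"pred_masks": torch.zeros(1, 28, 28)} for _ in inputs]

class RCNNTracingWrapper(torch.nn.Module):
    """Hypothetical wrapper; the real d2go export code differs."""
    def __init__(self, model, include_postprocess):
        super().__init__()
        self.model = model
        self.include_postprocess = include_postprocess

    def forward(self, image):
        results = self.model.inference([{"image": image}], do_postprocess=False)[0]
        if self.include_postprocess:
            # resize to the *input* tensor resolution; the original image
            # resolution is unknown here, so a second resize later must error
            h, w = image.shape[-2:]
            results["pred_masks"] = torch.nn.functional.interpolate(
                results["pred_masks"][None].float(), size=(h, w)
            )[0]
        return results

wrapped = RCNNTracingWrapper(DummyRCNN(), include_postprocess=True)
out = wrapped(torch.zeros(3, 64, 64))
print(out["pred_masks"].shape)  # torch.Size([1, 64, 64])
```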
      
      Reviewed By: tglik
      
      Differential Revision: D34904058
      
      fbshipit-source-id: 65f120eadc9747e9918d26ce0bd7dd265931cfb5
    • refactor create_fake_detection_data_loader · 312c6b62
      Yanghan Wang authored
      Summary:
      Pull Request resolved: https://github.com/facebookresearch/d2go/pull/199
      
      - `create_fake_detection_data_loader` currently doesn't take `cfg` as input; sometimes we need to test augmentations that require a more complicated cfg.
      - The name is misleading, so rename it to `create_detection_data_loader_on_toy_dataset`.
      - width/height previously referred to the resized size; change them to the size of the data source (image files) and use `cfg` to control the resized size.
      
      Update V3:
      In V2 there were some test failures: V2 builds the data loader (via the GeneralizedRCNN runner) using the actual test config, whereas before this diff it used the default config plus a dataset-name change. In V3 we use the test's runner instead of the default runner for consistency. This revealed some real bugs that were never tested before.
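      A toy sketch of the refactored factory's contract, with `cfg` as a plain dict standing in for the real config object; everything beyond the function name `create_detection_data_loader_on_toy_dataset` is an assumption.

```python
# Sketch: the factory now takes `cfg`, and width/height describe the on-disk
# toy images (the data source) while cfg controls the resized size.

def create_detection_data_loader_on_toy_dataset(cfg, width, height, is_train=True):
    """width/height: size of the generated toy image files; the resized
    size would come from cfg (e.g. an INPUT.MIN_SIZE_* style key)."""
    dataset = [
        {"file_name": f"toy_{i}.jpg", "width": width, "height": height}
        for i in range(cfg["num_images"])
    ]
    # real code would build a runner-specific mapper/loader from cfg here
    return dataset

loader = create_detection_data_loader_on_toy_dataset({"num_images": 2}, 32, 64)
```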
      
      Reviewed By: omkar-fb
      
      Differential Revision: D35238890
      
      fbshipit-source-id: 28a6037374e74f452f91b494bd455b38d3a48433
  3. 24 Mar, 2022 2 commits
  4. 16 Mar, 2022 2 commits
  5. 08 Mar, 2022 2 commits
  6. 05 Mar, 2022 1 commit
  7. 04 Mar, 2022 4 commits
  8. 25 Feb, 2022 1 commit
  9. 23 Feb, 2022 2 commits
  10. 14 Jan, 2022 1 commit
  11. 13 Jan, 2022 1 commit
    • Add support for custom training step via meta_arch · b6e244d2
      Tsahi Glik authored
      Summary:
      Add support in the default Lightning task for running a custom training step from the meta arch, if one exists.
      The goal is to allow a custom training step without needing to inherit from the default Lightning task class and override it. This lets us use a single Lightning task while still allowing users to customize the training step. In the long run this will be further encapsulated in a modeling hook, making it more modular and composable with other custom code.
      
      This change is a follow up from discussion in  https://fburl.com/diff/yqlsypys
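      The delegation idea can be sketched as follows (class and method names are assumptions; the real task integrates with Lightning):

```python
# Sketch: the default task delegates to a training step defined on the
# meta-arch when present, instead of requiring a task subclass.

class DefaultTask:
    def __init__(self, model):
        self.model = model

    def training_step(self, batch, batch_idx):
        # delegate if the meta-arch provides its own training step
        custom_step = getattr(self.model, "custom_training_step", None)
        if callable(custom_step):
            return custom_step(batch, batch_idx)
        return self.default_training_step(batch, batch_idx)

    def default_training_step(self, batch, batch_idx):
        return {"loss": sum(batch)}

class MyMetaArch:
    def custom_training_step(self, batch, batch_idx):
        return {"loss": 2 * sum(batch)}

task = DefaultTask(MyMetaArch())
print(task.training_step([1, 2, 3], 0))  # {'loss': 12}
```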
      
      Reviewed By: wat3rBro
      
      Differential Revision: D33534624
      
      fbshipit-source-id: 560f06da03f218e77ad46832be9d741417882c56
  12. 12 Jan, 2022 1 commit
  13. 08 Jan, 2022 1 commit
  14. 30 Dec, 2021 1 commit
  15. 29 Dec, 2021 2 commits
  16. 22 Dec, 2021 1 commit
    • registry and copy keys for extended coco load · bfd78461
      Sam Tsai authored
      Summary:
      1. Add a registry for coco injection to allow easier overriding of coco injections.
      2. Coco loading is currently limited to certain keys. Add an option to copy additional keys into the outputs.
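      A sketch of both additions with assumed names (`COCO_INJECTION_REGISTRY`, `extra_keys`); the real d2go registry and loader APIs differ.

```python
# Sketch: a registry so coco injection can be overridden, plus an option to
# copy otherwise-dropped annotation keys into the loaded outputs.

COCO_INJECTION_REGISTRY = {}

def register_coco_injection(name):
    def deco(fn):
        COCO_INJECTION_REGISTRY[name] = fn
        return fn
    return deco

@register_coco_injection("default")
def load_coco(annotations, extra_keys=()):
    outputs = []
    for ann in annotations:
        out = {"bbox": ann["bbox"], "category_id": ann["category_id"]}
        for key in extra_keys:  # copy additional keys on request
            out[key] = ann[key]
        outputs.append(out)
    return outputs

anns = [{"bbox": [0, 0, 1, 1], "category_id": 3, "track_id": 7}]
loaded = COCO_INJECTION_REGISTRY["default"](anns, extra_keys=("track_id",))
```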
      
      Reviewed By: zhanghang1989
      
      Differential Revision: D33132517
      
      fbshipit-source-id: 57ac4994a66f9c75457cada7e85fb15da4818f3e
  17. 18 Nov, 2021 1 commit
    • remove deprecated train_loop (#10482) · bb49d171
      Ananth Subramaniam authored
      Summary:
      ### New commit log messages
        fa0ed17f8 remove deprecated train_loop (#10482)
      
      Reviewed By: kandluis
      
      Differential Revision: D32454980
      
      fbshipit-source-id: a35237dde06cc9ddac5373b75992ce88a6771c76
  18. 08 Nov, 2021 1 commit
    • rename @legacy to @c2_ops · 95ab768e
      Yanghan Wang authored
      Reviewed By: sstsai-adl
      
      Differential Revision: D32216605
      
      fbshipit-source-id: bebee1edae85e940c7dcc6a64dbe341a2fde36a2
  19. 28 Oct, 2021 1 commit
    • Fix unused param in QAT training · 8b03f9aa
      Kai Zhang authored
      Summary:
      In the quantization callback, we prepare the model with the FX quantization API and only use the prepared model in training.
      However, when training with DDP, the parameters of the original model still require grad, causing an unused-parameters RuntimeError.
      Previously, the Lightning trainer trained the model with the find_unused_parameters flag, but if a user manually disables it, they hit the runtime error.
      
      In this diff, the parameters of the original model are frozen. We could consider deleting the original model after preparation to save memory, but we might have to make assumptions about the Lightning module structure, for example that `.model` is the original model, so that we could `delattr(pl_module, "model")`.
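      A minimal sketch of the fix, using a `Linear` as a stand-in for both the original and the FX-prepared model; the callback and attribute names are assumptions.

```python
# Sketch: after preparation, freeze the original module's parameters so DDP
# does not flag them as unused when find_unused_parameters is disabled.
import torch

class QuantizationCallback:
    """Hypothetical callback; the real d2go/Lightning classes differ."""
    def on_fit_start(self, pl_module):
        # the prepared copy (stand-in for prepare_fx output) is what trains
        pl_module.prepared = torch.nn.Linear(4, 2)
        # freeze the untrained original so its grads are never expected
        for param in pl_module.model.parameters():
            param.requires_grad = False

class PLModule:
    def __init__(self):
        self.model = torch.nn.Linear(4, 2)

mod = PLModule()
QuantizationCallback().on_fit_start(mod)
```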
      
      Reviewed By: wat3rBro
      
      Differential Revision: D31902368
      
      fbshipit-source-id: 56eabb6b2296278529dd2b94d6aa4c9ec9e9ca6b
  20. 26 Oct, 2021 3 commits
    • support multi-base for config re-route · 39054767
      Yanghan Wang authored
      Summary: as title
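      Assuming "multi-base" refers to `_BASE_`-style config inheritance, a toy sketch of accepting a list of bases merged in order (the real config loader is more involved; all names besides `_BASE_` are assumptions):

```python
# Sketch: _BASE_ may be a single config name or a list; bases are merged in
# order, then the child's own keys are applied on top.

def load_with_bases(name, configs):
    cfg = {}
    node = configs[name]
    bases = node.get("_BASE_", [])
    if isinstance(bases, str):
        bases = [bases]  # single-base form still works
    for base in bases:   # merge each base in order
        cfg.update(load_with_bases(base, configs))
    cfg.update({k: v for k, v in node.items() if k != "_BASE_"})
    return cfg

configs = {
    "base_a": {"lr": 0.1, "depth": 50},
    "base_b": {"lr": 0.02},
    "child": {"_BASE_": ["base_a", "base_b"], "max_iter": 90000},
}
print(load_with_bases("child", configs))  # {'lr': 0.02, 'depth': 50, 'max_iter': 90000}
```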
      
      Reviewed By: Cysu
      
      Differential Revision: D31901433
      
      fbshipit-source-id: 1749527c04c392c830e1a49bca8313ddf903d7b1
    • move fcos into meta_arch · 421960b3
      Yanghan Wang authored
      Summary: FCOS is registered only because of an import inside `get_default_cfg`; if users don't call it (e.g. when using their own runner), they may find that the meta-arch is not registered.
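      The registration-by-import side effect can be sketched like this (registry plumbing simplified; actual d2go builds on detectron2's registry):

```python
# Sketch: registration happens as an import side effect, so a meta-arch is
# only visible after the module defining it has been imported.

META_ARCH_REGISTRY = {}

def register_meta_arch(cls):
    META_ARCH_REGISTRY[cls.__name__] = cls
    return cls

# d2go/modeling/meta_arch/fcos.py would contain something like:
@register_meta_arch
class FCOS:
    pass

# Any code path importing the meta_arch package now sees the registration,
# instead of only callers of get_default_cfg.
assert "FCOS" in META_ARCH_REGISTRY
```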
      
      Reviewed By: ppwwyyxx
      
      Differential Revision: D31920026
      
      fbshipit-source-id: 59eeeb3d1bf30d6b08463c2814930b1cadd7d549
    • populate meta-arch registry when importing d2go · cc7973c2
      Yanghan Wang authored
      Summary:
      Pull Request resolved: https://github.com/facebookresearch/d2go/pull/130
      
      We want to make sure that after importing `d2go.modeling`, all meta-archs are registered.
      
      Reviewed By: Maninae
      
      Differential Revision: D31904303
      
      fbshipit-source-id: 3f32b65b764b2458e2fb9c4e0bbd99824b37ecfc
  21. 22 Oct, 2021 1 commit
  22. 20 Oct, 2021 2 commits
    • Supported learnable qat. · f6ce583e
      Peizhao Zhang authored
      Summary:
      Supported learnable qat.
      * Added a config key `QUANTIZATION.QAT.FAKE_QUANT_METHOD` to specify the QAT method (`default` or `learnable`).
      * Added a config key `QUANTIZATION.QAT.ENABLE_LEARNABLE_OBSERVER_ITER` to specify the start iteration for learnable observers (before that, static observers are used).
      * Custom quantization code needs to call `d2go.utils.qat_utils.get_qat_qconfig()` to get the proper qconfig for learnable QAT. An exception is raised if the QAT method is learnable but no learnable observers are used in the model.
      * Automatically set the weight decay for scale/zero_point to 0 in the optimizer.
      * The way to use learnable QAT: enable static observers -> enable fake quant -> enable learnable observers -> freeze bn.
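      The enabling sequence above can be sketched as a pure function of the iteration; the threshold names and semantics are assumptions based on the bullet points, not d2go's actual schedule code.

```python
# Sketch: which QAT controls are active at a given iteration, following
# static observers -> fake quant -> learnable observers -> freeze bn.

def qat_phase(it, enable_observer_iter, enable_fake_quant_iter,
              enable_learnable_observer_iter, freeze_bn_iter):
    """Return the active QAT controls at iteration `it`."""
    return {
        "static_observer": enable_observer_iter <= it < enable_learnable_observer_iter,
        "fake_quant": it >= enable_fake_quant_iter,
        "learnable_observer": it >= enable_learnable_observer_iter,
        "freeze_bn": it >= freeze_bn_iter,
    }

# at iteration 1500: fake quant and learnable observers on, bn not yet frozen
print(qat_phase(1500, 1000, 1200, 1400, 2000))
```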
      
      Differential Revision: D31370822
      
      fbshipit-source-id: a5a5044a539d0d7fe1cc6b36e6821fc411ce752a
    • Refactored qat related code. · ef9c20cc
      Peizhao Zhang authored
      Summary:
      Refactored qat related code.
      * Moved `_prepare_model_for_qat` related code to a function.
      * Moved `_setup_non_qat_to_qat_state_dict_map` related code to a function.
      * Moved QATHook related code to the quantization file and implemented as a class.
      
      Differential Revision: D31370819
      
      fbshipit-source-id: 836550b2c8d68cd93a84d5877ad9cef6f0f0eb39
  23. 15 Oct, 2021 2 commits
    • Supported specifying customized parameter groups from model. · 87ce583c
      Peizhao Zhang authored
      Summary:
      Supported specifying customized parameter groups from model.
      * Allow the model to specify customized parameter groups by implementing a function `model.get_optimizer_param_groups(cfg)`.
      * Supported models wrapped in DDP.
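      A sketch of how the optimizer builder could honor the model hook, with a stand-in for DDP unwrapping; only `get_optimizer_param_groups(cfg)` comes from the summary, the rest is assumed.

```python
# Sketch: the builder unwraps DDP, then defers to the model's own
# get_optimizer_param_groups hook when it exists.

class FakeDDP:
    """Stand-in for DistributedDataParallel, which wraps the model in .module."""
    def __init__(self, module):
        self.module = module

class MyModel:
    def get_optimizer_param_groups(self, cfg):
        return [{"params": ["backbone.w"], "lr": cfg["lr"] * 0.1},
                {"params": ["head.w"], "lr": cfg["lr"]}]

def get_param_groups(model, cfg):
    if isinstance(model, FakeDDP):  # unwrap DDP first
        model = model.module
    if hasattr(model, "get_optimizer_param_groups"):
        return model.get_optimizer_param_groups(cfg)
    return [{"params": ["all"], "lr": cfg["lr"]}]  # default behavior

groups = get_param_groups(FakeDDP(MyModel()), {"lr": 1.0})
```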
      
      Reviewed By: zhanghang1989
      
      Differential Revision: D31289315
      
      fbshipit-source-id: c91ba8014508e9fd5f172601b9c1c83c188338fd
    • Refactor for get_optimizer_param_groups. · 2dc3bc02
      Peizhao Zhang authored
      Summary:
      Refactor for get_optimizer_param_groups.
      * Split `get_default_optimizer_params()` into multiple functions:
        * `get_optimizer_param_groups_default()`
        * `get_optimizer_param_groups_lr()`
        * `get_optimizer_param_groups_weight_decay()`
      * Regroup the parameters to create the minimal number of groups.
      * Print all parameter groups when the optimizer is created.
          Param group 0: {amsgrad: False, betas: (0.9, 0.999), eps: 1e-08, lr: 10.0, params: 1, weight_decay: 1.0}
          Param group 1: {amsgrad: False, betas: (0.9, 0.999), eps: 1e-08, lr: 1.0, params: 1, weight_decay: 1.0}
          Param group 2: {amsgrad: False, betas: (0.9, 0.999), eps: 1e-08, lr: 1.0, params: 2, weight_decay: 0.0}
      * Add some unit tests.
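      The regrouping step can be sketched as keying each parameter by its effective hyperparameters, so parameters with identical settings share one group; names here are assumptions.

```python
# Sketch: merge per-parameter (lr, weight_decay) settings into the minimal
# number of optimizer param groups, as in the printout above.
from collections import defaultdict

def regroup(per_param_settings):
    """per_param_settings: list of (param_name, lr, weight_decay)."""
    groups = defaultdict(list)
    for name, lr, wd in per_param_settings:
        groups[(lr, wd)].append(name)  # identical settings share a group
    return [{"params": names, "lr": lr, "weight_decay": wd}
            for (lr, wd), names in groups.items()]

settings = [("backbone.w", 10.0, 1.0),
            ("head.w", 1.0, 1.0),
            ("scale", 1.0, 0.0),        # scale/zero_point get weight_decay 0
            ("zero_point", 1.0, 0.0)]
groups = regroup(settings)
print(len(groups))  # 3 groups, matching the printout above
```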
      
      Reviewed By: zhanghang1989
      
      Differential Revision: D31287783
      
      fbshipit-source-id: e87df0ae0e67343bb2130db945d8faced44d7411
  24. 06 Oct, 2021 1 commit
  25. 24 Sep, 2021 2 commits
  26. 15 Sep, 2021 1 commit