1. 08 Nov, 2021 1 commit
      rename @legacy to @c2_ops · 95ab768e
      Yanghan Wang authored
      Reviewed By: sstsai-adl
      
      Differential Revision: D32216605
      
      fbshipit-source-id: bebee1edae85e940c7dcc6a64dbe341a2fde36a2
  2. 28 Oct, 2021 1 commit
      Fix unused param in QAT training · 8b03f9aa
      Kai Zhang authored
      Summary:
      In the quantization callback, we prepare the model with the FX quantization API and only use the prepared model in training.
      However, when training with DDP, the parameters in the original model still require grad, causing an unused-parameters RuntimeError.
      Previously, the Lightning trainer trained the model with the find_unused_param flag, but if users manually disable it, they will get the runtime error.
      
      In this diff, the parameters in the original model are frozen. We could consider deleting the original model after preparation to save memory, but we might have to make some assumptions about the Lightning module structure, for example that `.model` is the original model, so that we could `delattr(pl_module, "model")`.
      
      Reviewed By: wat3rBro
      
      Differential Revision: D31902368
      
      fbshipit-source-id: 56eabb6b2296278529dd2b94d6aa4c9ec9e9ca6b
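The freezing step this commit describes can be sketched schematically. `_Param` and `_Module` below are minimal stand-ins for `torch.nn` objects, and the `pl_module.model` / `pl_module.prepared` attribute names are hypothetical, not the real d2go callback API:

```python
class _Param:
    """Minimal stand-in for torch.nn.Parameter."""
    def __init__(self):
        self.requires_grad = True

class _Module:
    """Minimal stand-in for a torch.nn.Module holding parameters."""
    def __init__(self, n):
        self._params = [_Param() for _ in range(n)]
    def parameters(self):
        return iter(self._params)

def freeze_original_model(pl_module):
    # Only the FX-prepared copy should contribute gradients; freezing the
    # original model's parameters avoids DDP's unused-parameter RuntimeError
    # when find_unused_parameters is disabled.
    for p in pl_module.model.parameters():
        p.requires_grad = False

class _PLModule:
    def __init__(self):
        self.model = _Module(3)     # original model (unused in training)
        self.prepared = _Module(3)  # FX-prepared copy used in training

pl_module = _PLModule()
freeze_original_model(pl_module)
print(all(not p.requires_grad for p in pl_module.model.parameters()))  # True
print(all(p.requires_grad for p in pl_module.prepared.parameters()))   # True
```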
  3. 26 Oct, 2021 3 commits
      support multi-base for config re-route · 39054767
      Yanghan Wang authored
      Summary: as title
      
      Reviewed By: Cysu
      
      Differential Revision: D31901433
      
      fbshipit-source-id: 1749527c04c392c830e1a49bca8313ddf903d7b1
      move fcos into meta_arch · 421960b3
      Yanghan Wang authored
      Summary: FCOS is registered only because we import it from `get_default_cfg`; if users don't call it (e.g. when using their own runner), they may find that the meta-arch is not registered.
      
      Reviewed By: ppwwyyxx
      
      Differential Revision: D31920026
      
      fbshipit-source-id: 59eeeb3d1bf30d6b08463c2814930b1cadd7d549
      populate meta-arch registry when importing d2go · cc7973c2
      Yanghan Wang authored
      Summary:
      Pull Request resolved: https://github.com/facebookresearch/d2go/pull/130
      
      We want to make sure that after importing `d2go.modeling`, all the meta-archs are registered.
      
      Reviewed By: Maninae
      
      Differential Revision: D31904303
      
      fbshipit-source-id: 3f32b65b764b2458e2fb9c4e0bbd99824b37ecfc
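The pattern behind these two commits is a registry populated as a side effect of importing the package, so architectures stay available even when users never call `get_default_cfg()`. A minimal sketch, with illustrative names rather than the actual d2go/detectron2 registry API:

```python
# Registry populated at import time: registering is a decorator side effect.
META_ARCH_REGISTRY = {}

def register_meta_arch(name):
    def deco(cls):
        META_ARCH_REGISTRY[name] = cls
        return cls
    return deco

# In the real code the decorator sits next to the model definition, and the
# package __init__ imports every module that uses it, e.g.:
#   from . import fcos  # noqa: F401  -- imported only for the side effect

@register_meta_arch("FCOS")
class FCOS:
    pass

print(sorted(META_ARCH_REGISTRY))  # ['FCOS']
```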
  4. 22 Oct, 2021 1 commit
  5. 20 Oct, 2021 2 commits
      Supported learnable qat. · f6ce583e
      Peizhao Zhang authored
      Summary:
      Supported learnable qat.
      * Added a config key `QUANTIZATION.QAT.FAKE_QUANT_METHOD` to specify the qat method (`default` or `learnable`).
      * Added a config key `QUANTIZATION.QAT.ENABLE_LEARNABLE_OBSERVER_ITER` to specify the start iteration for learnable observers (before that, static observers are used).
      * Custom quantization code needs to call `d2go.utils.qat_utils.get_qat_qconfig()` to get the proper qconfig for learnable qat. An exception is raised if the qat method is learnable but no learnable observers are used in the model.
      * Set the weight decay for scale/zero_point to 0 in the optimizer automatically.
      * The order of steps to use learnable qat: enable static observers -> enable fake quant -> enable learnable observers -> freeze bn.
      
      Differential Revision: D31370822
      
      fbshipit-source-id: a5a5044a539d0d7fe1cc6b36e6821fc411ce752a
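A hypothetical config fragment wiring up the two new keys described above; the key names come from the summary, while the values shown are placeholder guesses, not documented defaults:

```yaml
QUANTIZATION:
  QAT:
    FAKE_QUANT_METHOD: learnable        # "default" or "learnable"
    ENABLE_LEARNABLE_OBSERVER_ITER: 1000  # static observers before this iteration
```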
      Refactored qat related code. · ef9c20cc
      Peizhao Zhang authored
      Summary:
      Refactored qat related code.
      * Moved `_prepare_model_for_qat` related code to a function.
      * Moved `_setup_non_qat_to_qat_state_dict_map` related code to a function.
      * Moved QATHook related code to the quantization file and implemented as a class.
      
      Differential Revision: D31370819
      
      fbshipit-source-id: 836550b2c8d68cd93a84d5877ad9cef6f0f0eb39
  6. 15 Oct, 2021 2 commits
      Supported specifying customized parameter groups from model. · 87ce583c
      Peizhao Zhang authored
      Summary:
      Supported specifying customized parameter groups from the model.
      * Allow the model to specify customized parameter groups by implementing a function `model.get_optimizer_param_groups(cfg)`.
      * Supported models wrapped in DDP.
      
      Reviewed By: zhanghang1989
      
      Differential Revision: D31289315
      
      fbshipit-source-id: c91ba8014508e9fd5f172601b9c1c83c188338fd
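The hook above can be sketched as follows: if the model implements `get_optimizer_param_groups(cfg)` the builder uses it, unwrapping a DDP-wrapped model via its `.module` attribute first. Helper names and the `DDP`/`Model` classes are illustrative stand-ins, not the exact d2go signatures:

```python
def maybe_unwrap_ddp(model):
    # DistributedDataParallel stores the user model on .module
    return getattr(model, "module", model)

def get_param_groups(cfg, model):
    model = maybe_unwrap_ddp(model)
    if hasattr(model, "get_optimizer_param_groups"):
        return model.get_optimizer_param_groups(cfg)
    # fall back to a single default group
    return [{"params": list(model.parameters()), "lr": cfg["lr"]}]

class Model:
    def parameters(self):
        return ["w", "b"]
    def get_optimizer_param_groups(self, cfg):
        # e.g. no weight decay on the bias
        return [
            {"params": ["w"], "lr": cfg["lr"], "weight_decay": 1e-4},
            {"params": ["b"], "lr": cfg["lr"], "weight_decay": 0.0},
        ]

class DDP:  # stand-in for torch.nn.parallel.DistributedDataParallel
    def __init__(self, module):
        self.module = module

groups = get_param_groups({"lr": 0.1}, DDP(Model()))
print(len(groups))  # 2
```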
      Refactor for get_optimizer_param_groups. · 2dc3bc02
      Peizhao Zhang authored
      Summary:
      Refactor for get_optimizer_param_groups.
      * Split `get_default_optimizer_params()` into multiple functions:
        * `get_optimizer_param_groups_default()`
        * `get_optimizer_param_groups_lr()`
        * `get_optimizer_param_groups_weight_decay()`
      * Regroup the parameters to create the minimal number of groups.
      * Print all parameter groups when the optimizer is created, for example:
          Param group 0: {amsgrad: False, betas: (0.9, 0.999), eps: 1e-08, lr: 10.0, params: 1, weight_decay: 1.0}
          Param group 1: {amsgrad: False, betas: (0.9, 0.999), eps: 1e-08, lr: 1.0, params: 1, weight_decay: 1.0}
          Param group 2: {amsgrad: False, betas: (0.9, 0.999), eps: 1e-08, lr: 1.0, params: 2, weight_decay: 0.0}
      * Add some unit tests.
      
      Reviewed By: zhanghang1989
      
      Differential Revision: D31287783
      
      fbshipit-source-id: e87df0ae0e67343bb2130db945d8faced44d7411
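The regrouping step can be sketched like this: each parameter first gets its own options (lr and weight-decay overrides), then parameters whose options coincide are merged so the optimizer sees the minimal number of groups, matching the printout above. Purely illustrative, not the d2go implementation:

```python
from collections import defaultdict

def regroup(per_param_opts):
    """per_param_opts: list of (param_name, {"lr": ..., "weight_decay": ...})."""
    buckets = defaultdict(list)
    for name, opts in per_param_opts:
        # parameters with identical options land in the same bucket
        key = tuple(sorted(opts.items()))
        buckets[key].append(name)
    return [dict(key, params=params) for key, params in buckets.items()]

opts = [
    ("backbone.w", {"lr": 10.0, "weight_decay": 1.0}),
    ("head.w",     {"lr": 1.0,  "weight_decay": 1.0}),
    ("head.bias",  {"lr": 1.0,  "weight_decay": 0.0}),
    ("head.bn",    {"lr": 1.0,  "weight_decay": 0.0}),
]
groups = regroup(opts)
print(len(groups))  # 3 groups, as in the example output above
```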
  7. 06 Oct, 2021 1 commit
  8. 24 Sep, 2021 2 commits
  9. 15 Sep, 2021 1 commit
  10. 09 Sep, 2021 1 commit
  11. 31 Aug, 2021 1 commit
      enable (fake) inference for bolt exported model · e62c0e4c
      Yanghan Wang authored
      Summary:
      Enable the inference for boltnn (via running torchscript).
      - merge rcnn's boltnn test with other export types.
      - misc fixes.
      
      Differential Revision: D30610386
      
      fbshipit-source-id: 7b78136f8ca640b5fc179cb47e3218e709418d71
  12. 18 Aug, 2021 2 commits
      torch batch boundary CE loss · 7ae35eec
      Siddharth Shah authored
      Summary:
      A batched torch version allows us to avoid the CPU <--> GPU copy, which
      saves ~200ms per iteration. The new version of generating the boundary
      weight mask produces identical masks.
      
      Reviewed By: wat3rBro
      
      Differential Revision: D30176412
      
      fbshipit-source-id: 877f4c6337e7870d3bafd8eb9157ac166ddd588a
      Add multi-tensor optimizer version for SGD · 918abe42
      Valentin Andrei authored
      Summary:
      Added a multi-tensor optimizer implementation for SGD, from `torch.optim._multi_tensor`. It can potentially provide a ~5% QPS improvement by using the `foreach` API to speed up the optimizer step.
      
      Using it is optional: in the configuration file, specify `SGD_MT` in the `SOLVER.OPTIMIZER` setting.
      
      Reviewed By: zhanghang1989
      
      Differential Revision: D30377761
      
      fbshipit-source-id: 06107f1b91e9807c1db5d1b0ca6be09fcbb13e67
  13. 17 Aug, 2021 1 commit
  14. 16 Aug, 2021 1 commit
  15. 27 Jun, 2021 1 commit
  16. 25 Jun, 2021 1 commit
      use src dataset name instead of the derived class name · d4aedb83
      Sam Tsai authored
      Summary: "@ [0-9]classes" is appended to a dataset name to mark it as a derived version of the original dataset, and the derived name is saved in the config. When reloading the config, the derived name would be used as the source instead of the original one. Add a check to remove the derived suffix.
      
      Reviewed By: wat3rBro
      
      Differential Revision: D29315132
      
      fbshipit-source-id: 0cc204d305d2da6c9f1817aaf631270bd874f90d
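The suffix-stripping check can be sketched with a regex; the exact pattern (`@<N>classes` at the end of the name) is inferred from the summary above, not taken from the d2go source:

```python
import re

# derived datasets are named like "coco_train@2classes"
_DERIVED_SUFFIX = re.compile(r"@\d+classes$")

def strip_derived_suffix(dataset_name):
    """Return the original source dataset name for a derived dataset."""
    return _DERIVED_SUFFIX.sub("", dataset_name)

print(strip_derived_suffix("coco_train@2classes"))  # coco_train
print(strip_derived_suffix("coco_train"))           # coco_train (unchanged)
```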
  17. 21 Jun, 2021 1 commit
      additional flop counting using fvcore's flop counter · bc9d5070
      Yuxin Wu authored
      Summary:
      1. Save 3 versions of the flop count, using both mobile_cv's flop counter and fvcore's flop counter.
      2. Print only a simple short table in the terminal, but save the others to files.
      
      The `print_flops` function does not seem to be used anywhere, so this diff just replaces it.
      
      TODO: enable this feature automatically for train/eval workflows in the next diff
      
      Reviewed By: zhanghang1989
      
      Differential Revision: D29182412
      
      fbshipit-source-id: bfa1dfad41b99fcda06b96c4732237b5e753f1bb
  18. 16 Jun, 2021 1 commit
      add check/filter for invalid bounding boxes · 692a4fb3
      Sam Tsai authored
      Summary: Check for invalid bounding boxes and remove them from the dataset.
      
      Reviewed By: wat3rBro
      
      Differential Revision: D28902711
      
      fbshipit-source-id: 1f017d6ccf5c959059bcb94a09ddd81de868feed
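A minimal sketch of such a check, assuming XYXY boxes: drop annotations whose boxes are degenerate (non-positive width or height) or fall outside the image. The thresholds and box format are assumptions, not the actual d2go filter:

```python
def filter_invalid_boxes(annotations, img_w, img_h):
    keep = []
    for ann in annotations:
        x0, y0, x1, y1 = ann["bbox"]
        if x1 <= x0 or y1 <= y0:
            continue  # degenerate box (zero/negative width or height)
        if x0 < 0 or y0 < 0 or x1 > img_w or y1 > img_h:
            continue  # out of image bounds
        keep.append(ann)
    return keep

anns = [
    {"bbox": [10, 10, 50, 50]},  # valid
    {"bbox": [30, 30, 30, 60]},  # zero width -> dropped
    {"bbox": [-5, 0, 20, 20]},   # negative coordinate -> dropped
]
print(len(filter_invalid_boxes(anns, 100, 100)))  # 1
```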
  19. 14 Jun, 2021 1 commit
  20. 01 Jun, 2021 1 commit
      misc update to config utils · 81ab967f
      Yanghan Wang authored
      Summary:
      Pull Request resolved: https://github.com/facebookresearch/d2go/pull/77
      
      - Reimplement `get_cfg_diff_table` by reusing other utils
      - Add a `reorder` option for `flatten_config_dict`
      - Remove the legacy BC support for `ARCH_DEF`, including `str_wrap_fbnet_arch_def` and customized `merge_from_other_cfg`.
      - Move `temp_defrost` from `utils.py` to `config.py`; this way there's no more namespace forwarding in `utils.py`
      - Merge `test_config_utils.py` and `test_configs.py`
      
      Reviewed By: zhanghang1989
      
      Differential Revision: D28734493
      
      fbshipit-source-id: 925f5944cf0e9019e4c54462e851ea16a5c94b8c
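The behavior of `flatten_config_dict` with the new `reorder` option can be sketched as follows: nested config dicts are flattened into dotted keys, optionally sorted. This is inferred from the summary, not copied from d2go's config module:

```python
def flatten_config_dict(d, reorder=True, prefix=""):
    """Flatten a nested config dict into {"A.B.C": value} form."""
    out = {}
    for k, v in d.items():
        key = f"{prefix}.{k}" if prefix else str(k)
        if isinstance(v, dict):
            # recurse; sorting (if any) is applied once at the top level
            out.update(flatten_config_dict(v, reorder=False, prefix=key))
        else:
            out[key] = v
    if reorder:
        out = dict(sorted(out.items()))
    return out

cfg = {"SOLVER": {"BASE_LR": 0.02, "MAX_ITER": 90000}, "MODEL": {"DEVICE": "cuda"}}
print(list(flatten_config_dict(cfg)))
# ['MODEL.DEVICE', 'SOLVER.BASE_LR', 'SOLVER.MAX_ITER']
```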
  21. 25 May, 2021 2 commits
  22. 22 May, 2021 1 commit
  23. 21 May, 2021 2 commits
      Enable inference config in export step · 90aff5da
      Sanjeev Kumar authored
      Summary:
      - Enable SDK inference config specification in the export step. This adds the SDK configuration as part of the model file during export. The SDK config can be specified as inference_config.yaml and is zipped together with the torchscript model. The main goal of the SDK configuration is to control the model's inference behavior alongside the model itself.
      - SDK inference config design doc: https://docs.google.com/document/d/1j5qx8IrnFg1DJFzTnu4W8WmXFYJ-AgCDfSQHb2ACJsk/edit
      - One click fblearner pipeline is in next diff on the stack
      
      Differential Revision: D27881742
      
      fbshipit-source-id: 34a3ab7a88f456b74841cf671ea1b3f678cdb733
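Since a TorchScript file is itself a zip archive, one way to ship the SDK config "with" the model is to add `inference_config.yaml` as an extra entry in an archive. The stdlib sketch below demonstrates only the bundling idea; the real exporter may instead use `torch.jit.save`'s `_extra_files` argument, and the entry names here are made up:

```python
import io
import zipfile

def bundle(model_bytes, config_yaml):
    """Zip the serialized model together with its inference config."""
    buf = io.BytesIO()
    with zipfile.ZipFile(buf, "w") as zf:
        zf.writestr("model.ts", model_bytes)
        zf.writestr("inference_config.yaml", config_yaml)
    return buf.getvalue()

blob = bundle(b"\x00fake-torchscript-bytes", "score_threshold: 0.5\n")
with zipfile.ZipFile(io.BytesIO(blob)) as zf:
    print(zf.namelist())  # ['model.ts', 'inference_config.yaml']
```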
      adding bounding box only options · 27bef8e3
      Sam Tsai authored
      Summary: Add an option to change only the bounding boxes; everything else remains the same.
      
      Differential Revision: D28339388
      
      fbshipit-source-id: 7a6d4c5153cf10c473992119f4c684e0b9159b44
  24. 17 May, 2021 1 commit
      add dataset visualization · 536e9d25
      Kai Zhang authored
      Summary: Add dataset visualization so that we can visualize test results in Tensorboard.
      
      Reviewed By: zhanghang1989
      
      Differential Revision: D28457363
      
      fbshipit-source-id: 4c2fd9dce349c6fb9e1cec51c9138cf0abb45d7b
  25. 12 May, 2021 1 commit
      Synchronize PyTorchLightning/pytorch-lightning (revision 7b283e3c@master) to... · 0848c589
      Luis Perez authored
      Synchronize PyTorchLightning/pytorch-lightning (revision 7b283e3c@master) to github/third-party/PyTorchLightning/pytorch-lightning
      
      Summary:
      # Manual
      - remove fixme's in `model_checkpoint.py`, `parameter_monitor.py`, `test_quantization.py`, and `speed_monitor.py` now that `Trainer` is properly annotated.
      - update `test_quantization.py` to use `trainer.train_loop.global_step` instead of `trainer.global_step`, which is read-only.
      - update `loop_callback.py` to read `batch_idx` from `train_loop` (it is no longer available on the trainer).
      
      # Automatic
      ### New commit log messages
        7b283e3c Bugfix/Multiple dataloaders (#7433)
        d7c44cc6 Docs: sync chlog 1.3.1 (#7478)
        fdf50a5e Mark certain Trainer APIs as protected (#7420)
        ad9118f0 remove trainer hidden state | sanity refactor [1 / n] (#7437)
        4a1134db Log epoch metrics before firing the `on_evaluation_end` hook (#7272)
        b65ae794 Automatically check `DataModule.has_{setup,teardown,prepare_data}` [2/2] (#7238)
        8660d8cf [pre-commit.ci] pre-commit autoupdate (#7475)
        f6fe715e Fix Sphinx argument deprecation (#7464)
      
      Reviewed By: shuyingsunshine21
      
      Differential Revision: D28353491
      
      fbshipit-source-id: 98b87d99e2f09b47b07270858fcbdb5d5299730b
  26. 07 May, 2021 1 commit
  27. 05 May, 2021 1 commit
      add enlarge bounding box manipulation · e1961ad4
      Sam Tsai authored
      Summary: Add a bounding box manipulation tool to pad bounding box data.
      
      Reviewed By: newstzpz
      
      Differential Revision: D28082071
      
      fbshipit-source-id: f168cae48672c4fa5c4ec98697c57ed7833787ab
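The "enlarge bounding box" manipulation can be sketched as padding each XYXY box by a ratio around its center, clamped to the image. The `ratio` parameter name and clamping behavior are illustrative assumptions, not the tool's actual interface:

```python
def enlarge_box(box, ratio, img_w, img_h):
    """Pad an XYXY box by `ratio` of its size, clamped to the image."""
    x0, y0, x1, y1 = box
    pad_w = (x1 - x0) * ratio / 2.0
    pad_h = (y1 - y0) * ratio / 2.0
    return [
        max(0.0, x0 - pad_w),
        max(0.0, y0 - pad_h),
        min(float(img_w), x1 + pad_w),
        min(float(img_h), y1 + pad_h),
    ]

print(enlarge_box([10, 10, 30, 30], ratio=0.5, img_w=100, img_h=100))
# [5.0, 5.0, 35.0, 35.0]
```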
  28. 04 May, 2021 1 commit
  29. 30 Apr, 2021 1 commit
      add keypoints metadata registry · 77ebe09f
      Sam Tsai authored
      Summary:
      1. Add a keypoint metadata registry for registering different keypoint metadata
      2. Add option to inject_coco_dataset for adding keypoint metadata
      
      Reviewed By: newstzpz
      
      Differential Revision: D27730541
      
      fbshipit-source-id: c6ba97f60664fce4dcbb0de80222df7490bc6d5d
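The two pieces described above can be sketched together: a registry keyed by name for keypoint metadata, and an inject-style helper that attaches the metadata to a dataset registration. All names here are illustrative, not the exact d2go API:

```python
KEYPOINT_METADATA_REGISTRY = {}

def register_keypoint_metadata(name):
    def deco(fn):
        KEYPOINT_METADATA_REGISTRY[name] = fn
        return fn
    return deco

@register_keypoint_metadata("person")
def person_keypoints():
    # truncated example metadata; real keypoint lists are longer
    return {
        "keypoint_names": ["nose", "left_eye", "right_eye"],
        "keypoint_flip_map": [("left_eye", "right_eye")],
    }

def inject_coco_dataset(name, image_root, json_file, keypoint_metadata=None):
    metadata = {"image_root": image_root, "json_file": json_file}
    if keypoint_metadata is not None:
        # look up the registered metadata by name and merge it in
        metadata.update(KEYPOINT_METADATA_REGISTRY[keypoint_metadata]())
    return metadata

meta = inject_coco_dataset("my_ds", "/data/imgs", "/data/anno.json",
                           keypoint_metadata="person")
print("keypoint_names" in meta)  # True
```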
  30. 28 Apr, 2021 1 commit
      Synchronize PyTorchLightning/pytorch-lightning (revision 7fe8d184@master) to... · a95c7983
      Ananth Subramaniam authored
      Synchronize PyTorchLightning/pytorch-lightning (revision 7fe8d184@master) to github/third-party/PyTorchLightning/pytorch-lightning
      
      Summary:
      ### New commit log messages
        7fe8d184 Do not `shuffle` in `LightningDataModule.from_datasets` for `IterableDataset` (#7053)
        bab72255 [fix] Add barriers before and after setup hook is run (#7202)
        f920ba29 [bugfix] Metric not logged properly in manual optimization (#7228)
        e147127c [feat] Add better support for predict + ddp 2/3 (#7215)
        ca6c87ff Add back `clip_gradients(model)` (#7231)
        3b36d81c Fixed `num_sanity_val_steps` affecting reproducibility of training data shuffling (#7014)
        5cf9afa1 Add fairscale install msg for Sharded Plugins (#7213)
        52a5cee0 Set smarter default for DDP sharded for performance optimization (#6937)
        dd5ec75e Deprecate save_function from model checkpoint callback (#7201)
        ac7d6a35 Fix `NeptuneLogger.log_text(step=None)` (#7194)
        6be0a859 Update teardown for TPU acc (#7211)
        bc3f08b0 [fix] Add barrier to accelerator's teardown (#6814)
        68eac4d9 Enforce Lightning module as source of truth for automatic optimization (#7130)
        44d775fc Update Error message for ProfileConnector (#7204)
        31fcd7d0 Deprecate write_predictions on the LightningModule (#7066)
        591b9cee make bug_report_model minimal (#7191)
        b3fe8366 Move metrics_to_scalars to a dedicated utilities file (#7180)
        f58865aa Properly set `LightningModule.device` after model replacement (#7188)
        8439aead Update FairScale on CI (#7017)
        92af3632 Fix `lr_finder` suggesting too high learning rates (#7076)
        d534e53e add missing predict docs (#7150)
      
      Reviewed By: kazhang
      
      Differential Revision: D28032962
      
      fbshipit-source-id: 18cd01e8ecc13fe25f0890ac0f4b20c3c3e1fed3
  31. 21 Apr, 2021 1 commit
  32. 20 Apr, 2021 1 commit