1. 31 Aug, 2021 2 commits
  2. 30 Aug, 2021 2 commits
    • Refactor Preprocessors, Add CoordConv · ad2973b2
      Jake Popham authored
      Summary:
      Refactors the `MODEL.REGRESSOR.PREPROCESSORS` usage to allow for multiple preprocessors, and adds a new `ADD_COORD_CHANNELS` preprocessor.
      
      Note: `MODEL.FBNET_V2.STEM_IN_CHANNELS` should be modified in your config to reflect the preprocessors that are enabled. Specifically, `ADD_COORD_CHANNELS` increases the input channels by 2, while `SPLIT_AND_CONCAT` decreases them by a factor of the chunk size (typically 2). See the included `quick_pupil_3d_*` configs for examples.
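      A minimal sketch of what an `ADD_COORD_CHANNELS`-style preprocessor does (an illustration only, not the actual d2go implementation; the function name is hypothetical):

```python
import torch

def add_coord_channels(x):
    """Append two normalized coordinate channels (CoordConv-style).

    x: (N, C, H, W) tensor -> (N, C + 2, H, W), which is why
    STEM_IN_CHANNELS must grow by 2 when this preprocessor is enabled.
    """
    n, _, h, w = x.shape
    ys = torch.linspace(-1.0, 1.0, h, device=x.device, dtype=x.dtype)
    xs = torch.linspace(-1.0, 1.0, w, device=x.device, dtype=x.dtype)
    grid_y, grid_x = torch.meshgrid(ys, xs, indexing="ij")
    coords = torch.stack([grid_x, grid_y]).unsqueeze(0).expand(n, -1, -1, -1)
    return torch.cat([x, coords], dim=1)
```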
      
      Differential Revision: D30459924
      
      fbshipit-source-id: dd8e3293a416a1a556e091cecc058a1be5288cc0
    • Support customized subclass selection · a11cb507
      Xiaoliang Dai authored
      Summary: Support customized subclass selection. Only the selected gestures are used for model training.
      
      Reviewed By: sanjeevk42
      
      Differential Revision: D30205443
      
      fbshipit-source-id: 36337893aa5d06bb8be5d5587da11398b246b02e
  3. 27 Aug, 2021 1 commit
    • Remove social_eye reference from public docstring · 5e521841
      Jake Popham authored
      Summary: d2go/modeling/misc.py is open source, and references an internal code path in its docstring. This diff removes that reference.
      
      Reviewed By: wat3rBro
      
      Differential Revision: D30578876
      
      fbshipit-source-id: b255af215e6c096f62f17f65c5f72a0ab95458a9
  4. 25 Aug, 2021 2 commits
    • only log evaluation metric on rank 0 · 567a9a80
      Kai Zhang authored
      Summary: All metrics should already have been reduced to rank 0 by the dataset evaluator.
      
      Reviewed By: wat3rBro
      
      Differential Revision: D30389938
      
      fbshipit-source-id: f8dfb6f1f17635c2fb98391780fdefe90c630054
    • fix two-stage DF-DETR · aea87f6c
      Zhicheng Yan authored
      Summary:
      Pull Request resolved: https://github.com/facebookresearch/d2go/pull/106
      
      # 2-stage DF-DETR
      
      DF-DETR supports 2-stage detection. In the 1st stage, we detect class-agnostic boxes using the feature pyramid (a.k.a. `memory` in the code) computed by the encoder.
      
      The current implementation has a few flaws:
      - In `setcriterion.py`, when computing the loss for the encoder's 1st-stage predictions, `num_boxes` should be reduced across GPUs and clamped to a positive value; otherwise the loss normalization divides by zero and produces NaNs when `num_boxes` is zero (e.g. when the cropped input image contains no box annotations).
      - In `gen_encoder_output_proposals()`, we manually fill in `float("inf")` at invalid spatial positions outside the actual image size. However, nothing guarantees those positions won't be selected as top-scoring positions, and `float("inf")` can easily drive the affected parameters to NaN.
      - `class_embed` for the encoder should have 1 channel rather than num_class channels, because we only need to predict the probability of a box being foreground.
      
      This diff fixes the issues above.
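      The `num_boxes` fix can be sketched as follows (a hypothetical helper, for illustration; the real change lives in `setcriterion.py`):

```python
import torch
import torch.distributed as dist

def normalize_num_boxes(num_boxes, device="cpu"):
    """Average num_boxes across workers, then clamp so the loss
    normalizer can never be zero (the divide-by-zero NaN fix)."""
    t = torch.as_tensor([float(num_boxes)], device=device)
    if dist.is_available() and dist.is_initialized():
        dist.all_reduce(t)
        t = t / dist.get_world_size()
    return torch.clamp(t, min=1.0).item()
```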
      
      # Gradient blocking in decoder
      
      Currently, the gradient of the reference points is blocked at each decoding layer to improve numerical stability during training.
      This diff adds an option `MODEL.DETR.DECODER_BLOCK_GRAD`; when it is False, we do NOT block the gradient. Empirically, we find this leads to better box AP.
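      The gradient-blocking option can be sketched with a toy decoder loop (not the actual DF-DETR code):

```python
import torch

def decode(reference_points, layers, block_grad=True):
    """When block_grad is True (the original behavior), reference points
    are detached at every decoder layer; when False (the new
    MODEL.DETR.DECODER_BLOCK_GRAD = False setting), gradients flow
    through the iterative refinement."""
    for layer in layers:
        if block_grad:
            # treat the current reference points as constants, so the
            # refinement step is not back-propagated through
            reference_points = reference_points.detach()
        reference_points = layer(reference_points)
    return reference_points
```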
      
      Reviewed By: zhanghang1989
      
      Differential Revision: D30325396
      
      fbshipit-source-id: 7d7add1e05888adda6e46cc6886117170daa22d4
  5. 24 Aug, 2021 1 commit
  6. 20 Aug, 2021 1 commit
    • remove interface of export_predictor · 7992f913
      Yanghan Wang authored
      Summary: `export_predictor` is no longer customizable; all customization is done via `prepare_for_export` and `ModelExportMethod`.
      
      Reviewed By: zhanghang1989
      
      Differential Revision: D28083607
      
      fbshipit-source-id: e584fff185912ca3e985194b741860276f0943df
  7. 18 Aug, 2021 2 commits
    • torch batch boundary CE loss · 7ae35eec
      Siddharth Shah authored
      Summary:
      A batched torch version lets us avoid the CPU <--> GPU copy, saving ~200ms per iteration. The new way of generating the boundary weight mask produces identical masks.
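      One way such a batched, GPU-resident boundary mask can be computed (a hedged sketch; the actual diff may use a different formulation): mark a pixel as boundary when its local neighborhood contains more than one label value.

```python
import torch
import torch.nn.functional as F

def boundary_weight_mask(labels, kernel_size=3):
    """Sketch of a batched boundary mask: compare each pixel's local max
    and min label; they differ exactly where labels change. All ops stay
    on the same device, so no CPU <--> GPU copy is needed.

    labels: (N, H, W) integer label map -> (N, H, W) bool mask.
    """
    lab = labels.float().unsqueeze(1)  # (N, 1, H, W)
    pad = kernel_size // 2
    local_max = F.max_pool2d(lab, kernel_size, stride=1, padding=pad)
    # min-pool via negated max-pool
    local_min = -F.max_pool2d(-lab, kernel_size, stride=1, padding=pad)
    return (local_max != local_min).squeeze(1)
```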
      
      Reviewed By: wat3rBro
      
      Differential Revision: D30176412
      
      fbshipit-source-id: 877f4c6337e7870d3bafd8eb9157ac166ddd588a
    • Add multi-tensor optimizer version for SGD · 918abe42
      Valentin Andrei authored
      Summary:
      Added a multi-tensor optimizer implementation for SGD, from `torch.optim._multi_tensor`. It can potentially provide a ~5% QPS improvement by using the `foreach` API to speed up the optimizer step.

      Using it is optional: specify `SGD_MT` in the `SOLVER.OPTIMIZER` setting of the configuration file.
      
      Reviewed By: zhanghang1989
      
      Differential Revision: D30377761
      
      fbshipit-source-id: 06107f1b91e9807c1db5d1b0ca6be09fcbb13e67
  8. 17 Aug, 2021 1 commit
  9. 16 Aug, 2021 2 commits
  10. 13 Aug, 2021 1 commit
    • Reduce number of parameter groups to make optimizer more efficient · 737d099b
      Valentin Andrei authored
      Summary:
      `torch.optim._multi_tensor` provides faster optimizer implementations, as it uses the `foreach` APIs. We can enable it by changing `OPTIMIZER: "ADAMW"` to `OPTIMIZER: "ADAMW_MT"` in the config file.
      
      To benefit from the speedup, we need to reduce the number of parameter groups, as suggested in this post: https://fb.workplace.com/groups/1405155842844877/permalink/4971600462867046/
      
      The current implementation uses one parameter group per parameter, which is not optimal. The proposed change groups parameters by their learning-rate and weight-decay combinations.
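      The grouping idea can be sketched as follows (hypothetical helper and override convention, for illustration only; the real logic lives in `get_default_optimizer_params`):

```python
from collections import defaultdict

def group_params(named_params, base_lr, base_wd, overrides=None):
    """Bucket parameters by their (lr, weight_decay) combination instead
    of creating one param group per parameter.

    `overrides` maps name substrings to (lr, wd) pairs -- a hypothetical
    convention used here only to illustrate the grouping.
    """
    overrides = overrides or {}
    buckets = defaultdict(list)
    for name, param in named_params:
        lr, wd = base_lr, base_wd
        for key, (o_lr, o_wd) in overrides.items():
            if key in name:
                lr, wd = o_lr, o_wd
        buckets[(lr, wd)].append(param)
    # one param group per distinct (lr, wd) pair
    return [
        {"params": params, "lr": lr, "weight_decay": wd}
        for (lr, wd), params in buckets.items()
    ]
```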
      
      Reviewed By: zhanghang1989
      
      Differential Revision: D30272112
      
      fbshipit-source-id: d8d24298a59b52c2fc2930f7d614a0c6380a432f
  11. 11 Aug, 2021 3 commits
  12. 06 Aug, 2021 2 commits
  13. 05 Aug, 2021 2 commits
    • Clarifying the use of do_test function · 610d2d03
      Abduallah Mohamed authored
      Summary: The `do_test` method might be used to perform testing outside the training process. One might expect it to load the model weights before testing, as the `do_train` method does, but it does not. This diff adds a comment clarifying this.
      
      Reviewed By: ppwwyyxx
      
      Differential Revision: D29082338
      
      fbshipit-source-id: 6ec7d7f7f243503414fa904f4eb8856e62e9ed6d
    • avoid warnings of NCCL · 30d5ca55
      Yuxin Wu authored
      Summary:
      Pull Request resolved: https://github.com/facebookresearch/detectron2/pull/3322
      
      avoid warnings like the following:
      ```
      [W ProcessGroupNCCL.cpp:1569] Rank 0 using best-guess GPU 0 to perform barrier as devices used by
      this process are currently unknown. This can potentially cause a hang if this rank to GPU mapping is
      incorrect. Specify device_ids in barrier() to force use of a particular device.
      ```
      
      This may also fix the hang in https://github.com/facebookresearch/detectron2/issues/3319
      
      Reviewed By: vaibhava0
      
      Differential Revision: D30077957
      
      fbshipit-source-id: b8827e66c5eecc06b650acde2e7ff44106327f69
  14. 04 Aug, 2021 1 commit
  15. 03 Aug, 2021 3 commits
  16. 01 Aug, 2021 1 commit
    • stabilize deformable DETR training · a4f06b88
      Zhicheng Yan authored
      Summary:
      Deformable DETR training can be unstable due to iterative box refinement in the transformer decoder. To stabilize training, this diff introduces two changes:
      - Remove the unnecessary use of inverse sigmoid.
      It can be avoided entirely when box refinement is turned on.
      - In the `DeformableTransformer` class, detach `init_reference_out` before passing it into the decoder to update `memory` and compute the per-decoder-layer reference points.
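      For context, the inverse-sigmoid helper in question typically looks like this (a common DETR-style sketch, not necessarily the exact code being removed); its large magnitudes near 0 and 1 are one source of the instability:

```python
import torch

def inverse_sigmoid(x, eps=1e-5):
    """Map box coordinates from (0, 1) back to logit space.
    Clamping keeps the log finite near the endpoints."""
    x = x.clamp(min=eps, max=1 - eps)
    return torch.log(x / (1 - x))
```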
      
      Reviewed By: zhanghang1989
      
      Differential Revision: D29903599
      
      fbshipit-source-id: a374ba161be0d7bcdfb42553044c4c6700e92623
  17. 29 Jul, 2021 1 commit
  18. 21 Jul, 2021 1 commit
    • fix bug in valid_bbox check · b4d9aad9
      Xi Yin authored
      Summary: If the height/width is None, the original version crashes, so this diff adds an additional check to bypass the issue.
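      A sketch of the guarded check (hypothetical function name and signature, for illustration only):

```python
def is_valid_bbox(bbox, height=None, width=None):
    """Validate an (x0, y0, x1, y1) box; if the image height/width is
    unknown (None), skip the bounds test instead of crashing."""
    x0, y0, x1, y1 = bbox
    if x1 <= x0 or y1 <= y0:
        return False  # degenerate box
    if height is None or width is None:
        return True  # cannot verify bounds without the image size
    return 0 <= x0 and 0 <= y0 and x1 <= width and y1 <= height
```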
      
      Reviewed By: ppwwyyxx
      
      Differential Revision: D29807853
      
      fbshipit-source-id: b2b1a7edb52b7911da79a11329d4cf93f343c279
  19. 14 Jul, 2021 1 commit
  20. 09 Jul, 2021 2 commits
    • Add tests for exporter / boltnn export via torch delegate · d0c38c43
      Mircea Cimpoi authored
      Summary:
      Adds a test for the previous diff.
      The BoltNN backend is only supported on-device, so this test only checks that the conversion takes place and the output file is present.
      
      Differential Revision: D29589245
      
      fbshipit-source-id: ba66a733295304531d177086ce6459a50cfbaa07
    • Add BoltNN conversion to d2go exporter · ecf832da
      Mircea Cimpoi authored
      Summary:
      Added predictor_type `boltnn_int8` to export to BoltNN via the torch delegate.

      - `int8` needs to be in the name; otherwise, post-training quantization won't happen.
      
      ```
      cfg.QUANTIZATION.BACKEND = "qnnpack"
      # cfg.QUANTIZATION.CUSTOM_QSCHEME = "per_tensor_affine"
      ```
      
      It seems that setting `QUANTIZATION.CUSTOM_QSCHEME` to `per_tensor_affine` is not needed; it is likely covered by the `qnnpack` backend.
      
      Reviewed By: wat3rBro
      
      Differential Revision: D29106043
      
      fbshipit-source-id: 865ac5af86919fe7b4530b48433a1bd11e295bf4
  21. 08 Jul, 2021 3 commits
    • fix a bug in D2GoDatasetMapper · abf2f327
      Zhicheng Yan authored
      Summary:
      Pull Request resolved: https://github.com/facebookresearch/d2go/pull/101
      
      In `D2GoDatasetMapper`, when a crop transform is applied to the image, `inputs` should be updated to use the cropped image before the other transforms are applied later.
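      The ordering fix can be illustrated with toy transforms (hypothetical classes that only mimic detectron2's `apply_image` convention):

```python
class Crop:
    """Toy stand-in for a crop transform (hypothetical)."""
    def apply_image(self, img):
        return img[:2]

class Scale:
    """Toy stand-in for a later transform (hypothetical)."""
    def apply_image(self, img):
        return [p * 2 for p in img]

def apply_transforms(image, crop_tfm, other_tfms):
    # The bug was that later transforms saw the ORIGINAL image; the fix
    # is to feed the cropped image into every subsequent transform.
    if crop_tfm is not None:
        image = crop_tfm.apply_image(image)
    for tfm in other_tfms:
        image = tfm.apply_image(image)
    return image
```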
      
      Reviewed By: zhanghang1989
      
      Differential Revision: D29551488
      
      fbshipit-source-id: 48917ffc91c8a80286d61ba3ae8391541ec2c930
    • remove redundant build_optimizer() · b1e2cc56
      Zhicheng Yan authored
      Summary:
      Pull Request resolved: https://github.com/facebookresearch/d2go/pull/96
      
      In `DETRRunner`, the `build_optimizer` method customized the following logic, which is redundant with the parent-class implementation and can be removed:
      - Discounting the LR for certain modules, such as those named `reference_points`, `backbone`, and `sampling_offsets`.
        - This can be achieved with `SOLVER.LR_MULTIPLIER_OVERWRITE` after we update `get_default_optimizer_params` in `mobile-vision/d2go/d2go/optimizer/build.py`.
      - Full-model gradient clipping.
        - This is also implemented in `mobile-vision/d2go/d2go/optimizer/build.py`.

      It also had a minor issue:
      - It ignored `SOLVER.WEIGHT_DECAY_NORM`, which can set a different weight decay for the affine parameters in the norm modules.
      
      Reviewed By: zhanghang1989
      
      Differential Revision: D29420642
      
      fbshipit-source-id: deeb9348c9d282231c540dde6161acedd8e3a119
    • fix extended coco load missing comma · 4f3f3401
      Sam Tsai authored
      Summary: Fix a missing comma in the extended COCO loader, which caused the `bbox_mode` and `keypoints` fields to be ignored.
      
      Reviewed By: zhanghang1989
      
      Differential Revision: D29608815
      
      fbshipit-source-id: 8c737df1dfef7f88494f7de25e06b0c37742ac30
  22. 07 Jul, 2021 1 commit
  23. 06 Jul, 2021 1 commit
    • Add the fields which will be used in point-based modeling. · 80c18641
      Cheng-Yang Fu authored
      Summary:
      Add the fields which will be used in point-based modeling.
      - `point_coords`: the coordinates of the points in the image.
      - `point_labels`: indicates whether each point is foreground or background.
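      For illustration, an annotation carrying the two new fields might look like this (hypothetical values and layout):

```python
# Hypothetical annotation dict showing the two new point fields
ann = {
    "bbox": [10.0, 20.0, 50.0, 60.0],
    "point_coords": [[12.5, 24.0], [30.0, 41.0]],  # (x, y) in image pixels
    "point_labels": [1, 0],  # 1 = foreground point, 0 = background point
}
```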
      
      Differential Revision: D29532103
      
      fbshipit-source-id: 9af6c9b049e1d05fd0d77909b09de1feec391ce9
  24. 02 Jul, 2021 1 commit
    • revert D29048363 · e69e0ffe
      Zhicheng Yan authored
      Summary:
      In D29048363 (https://github.com/facebookresearch/d2go/commit/c480d4e4e213a850cced7758f7b62c20caad8820) we moved the detaching of `reference_points` earlier in the hope of allowing more gradient flow to update the weights in `self.bbox_embed`.
      In this diff, we revert that change, as i) it does not improve box AP and ii) it may potentially cause unstable optimization when iterative box refinement is turned on.
      
      Reviewed By: zhanghang1989
      
      Differential Revision: D29530735
      
      fbshipit-source-id: 3217c863343836e129d53e07c0eedb2db8164fe6
  25. 01 Jul, 2021 1 commit
  26. 30 Jun, 2021 1 commit
    • Remove redundant quant/dequant in GeneralizedRCNN · 2ff49517
      Jerry Zhang authored
      Summary: Removed the quant/dequant pair between the backbone and the proposal generator, and the pair between `roi_box_conv` and the following avg_pool.
      
      Reviewed By: wat3rBro
      
      Differential Revision: D29383036
      
      fbshipit-source-id: ef07b3d1997b1fc7f92bcd9201523e9071510a8b