"tests/vscode:/vscode.git/clone" did not exist on "2ca4f0eefa0f6b83f4448e1c10c3624ae5d24dee"
  1. 21 Sep, 2021 2 commits
  2. 20 Sep, 2021 2 commits
    • merge internal data build files · 07c4e54c
      Yanghan Wang authored
      Reviewed By: ppwwyyxx
      
      Differential Revision: D31035247
      
      fbshipit-source-id: 7340e6f6bb813e284416e37060d0d511c5c79e03
    • Check if new_ds_name registered to MetadataCatalog before removing · f4fcff31
      Shiyu Dong authored
      Summary:
      As the title says: sometimes `new_ds_name` is not registered, so calling remove() crashes the program. This adds a check.
      A side effect is that if the name is not registered, the get() method will register it first and then remove() will remove it from the registry.
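
      A minimal sketch of the guard, assuming detectron2's `MetadataCatalog` API (`new_ds_name` here is a hypothetical dataset name for illustration):
      ```python
      from detectron2.data import MetadataCatalog

      new_ds_name = "my_new_dataset"  # hypothetical name for illustration

      # Remove the metadata entry only if it is actually registered; calling
      # get() first would register the name as a side effect.
      if new_ds_name in MetadataCatalog.list():
          MetadataCatalog.remove(new_ds_name)
      ```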
      
      Reviewed By: ppwwyyxx
      
      Differential Revision: D31049303
      
      fbshipit-source-id: 149168fb89fd3b661b60717ff2aafa7a9bd52849
  3. 18 Sep, 2021 2 commits
  4. 15 Sep, 2021 2 commits
  5. 10 Sep, 2021 1 commit
  6. 09 Sep, 2021 1 commit
  7. 08 Sep, 2021 1 commit
  8. 02 Sep, 2021 2 commits
    • Increase limit on number of detections per image in {COCO,LVIS}Evaluator · 2fb273ab
      Lydia Chan authored
      Summary:
      ## Context
      - The current limit on the number of detections per image (`K`) in LVIS is 300.
      - Implementing AP_pool/AP_fixed requires removing this default limit on `K`.
      - [Literature](https://arxiv.org/pdf/2102.01066.pdf) has shown that increasing `K` correlates with AP gains.
      
      ## This Diff
      - Changed the limit on the number of detections per image (`K`) to be customizable for LVIS and COCO through `TEST.DETECTIONS_PER_IMAGE` in the config
         - For COCO:
             - Maintain the default `max_dets_per_image` to be [1, 10, 100] as from [COCOEval](https://www.internalfb.com/code/fbsource/[88bb57c3054a]/fbcode/deeplearning/projects/cocoApi/PythonAPI/pycocotools/cocoeval.py?lines=28-29)
              - Allow users to input a custom integer for `TEST.DETECTIONS_PER_IMAGE` in the config, and use [1, 10, `TEST.DETECTIONS_PER_IMAGE`] for COCOEval
         - For LVIS:
             - Maintain the default `max_dets_per_image` to be 300 as from [LVISEval](https://www.internalfb.com/code/fbsource/[f6b86d023721]/fbcode/deeplearning/projects/lvisApi/lvis/eval.py?lines=528-529)
             - Allow users to input a custom integer for `TEST.DETECTIONS_PER_IMAGE` in the config, and use this in LVISEval
      - Added `COCOevalMaxDets` for evaluating AP with the custom limit on the number of detections per image (since the default `COCOeval` uses 100 as the per-image detection limit when evaluating AP); a sketch follows below
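
      A hedged sketch of how the custom limit could be threaded into pycocotools' `COCOeval` (the helper name below is illustrative; only `params.maxDets` is the real API):
      ```python
      from pycocotools.cocoeval import COCOeval

      def build_coco_eval(coco_gt, coco_dt, detections_per_image=100):
          # Keep the default [1, 10, 100] thresholds, replacing the last
          # entry with the configured TEST.DETECTIONS_PER_IMAGE value.
          coco_eval = COCOeval(coco_gt, coco_dt, iouType="bbox")
          coco_eval.params.maxDets = [1, 10, detections_per_image]
          return coco_eval
      ```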
      
      ## Inference Runs using this Diff
      - Performed inference using `K = {300, 1000, 10000, 100000}`
      - Launched fblearner flows for object detector baseline models with N1055536 (LVIS) and N1055756 (COCO)
        - Recorded [results of running inference](https://docs.google.com/spreadsheets/d/1rgdjN2KvxcYfKCkGUC4tMw0XQJ5oZL0dwjOIh84YRg8/edit?usp=sharing)
      
      Reviewed By: ppwwyyxx
      
      Differential Revision: D30077359
      
      fbshipit-source-id: 372eb5e0d7c228fb77fe23bf80d53597ec66287b
    • clamp reference point max to 1.0 to avoid NaN in regressed bbox · 0a38f8c8
      Zhicheng Yan authored
      Summary:
      When training DF-DETR with a swin-transformer backbone, which uses a large size_divisibility of 224 (= 32 * 7) and thus potentially more zero-padding, we find the regressed boxes can contain NaN values and fail the assertion here (https://fburl.com/code/p27ztcce).
      
      This issue might have two potential causes.
      - Fix 1. In the DF-DETR encoder, the reference points prepared by `get_reference_points()` can contain normalized x, y coordinates larger than 1 due to rounding issues during mask interpolation across feature scales (specific examples can be given upon request). Thus, we clamp the max of the x, y coordinates to 1.0 (see the sketch after this list).
      
      - Fix 2. The MLP used in the bbox_embed heads contains 3 FC layers, which might be too many. We introduce an argument `BBOX_EMBED_NUM_LAYERS` that lets users configure the number of FC layers. This change is backward-compatible.
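
      A minimal sketch of Fix 1, clamping the normalized reference-point coordinates:
      ```python
      import torch

      def clamp_reference_points(ref_points: torch.Tensor) -> torch.Tensor:
          # Rounding during mask interpolation across feature scales can push
          # the normalized x, y coordinates slightly above 1, which later
          # yields NaNs in the regressed boxes; clamp the max to 1.0.
          return ref_points.clamp(max=1.0)
      ```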
      
      Reviewed By: zhanghang1989
      
      Differential Revision: D30661167
      
      fbshipit-source-id: c7e94983bf1ec07426fdf1b9d363e5163637f21a
  9. 31 Aug, 2021 2 commits
  10. 30 Aug, 2021 2 commits
    • Refactor Preprocessors, Add CoordConv · ad2973b2
      Jake Popham authored
      Summary:
      Refactors the `MODEL.REGRESSOR.PREPROCESSORS` usage to allow for multiple preprocessors, and adds a new `ADD_COORD_CHANNELS` preprocessor.
      
      Note: `MODEL.FBNET_V2.STEM_IN_CHANNELS` should be modified in your config to reflect the preprocessors that are enabled. Specifically, `ADD_COORD_CHANNELS` increases the input channels by 2, while `SPLIT_AND_CONCAT` decreases by a factor of the chunk size (typically 2). See the included `quick_pupil_3d_*` configs as an example.
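
      A minimal sketch of what an `ADD_COORD_CHANNELS`-style preprocessor does (the helper below is illustrative, not the actual d2go implementation):
      ```python
      import torch

      def add_coord_channels(x: torch.Tensor) -> torch.Tensor:
          # Append normalized x/y coordinate channels (CoordConv-style).
          # Input (N, C, H, W) becomes (N, C + 2, H, W), which is why
          # MODEL.FBNET_V2.STEM_IN_CHANNELS must grow by 2 when enabled.
          n, _, h, w = x.shape
          ys = torch.linspace(-1, 1, h, device=x.device).view(1, 1, h, 1).expand(n, 1, h, w)
          xs = torch.linspace(-1, 1, w, device=x.device).view(1, 1, 1, w).expand(n, 1, h, w)
          return torch.cat([x, xs, ys], dim=1)
      ```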
      
      Differential Revision: D30459924
      
      fbshipit-source-id: dd8e3293a416a1a556e091cecc058a1be5288cc0
    • Support customized subclass selection · a11cb507
      Xiaoliang Dai authored
      Summary: Support customized subclass selection. Only the selected gestures are used for model training.
      
      Reviewed By: sanjeevk42
      
      Differential Revision: D30205443
      
      fbshipit-source-id: 36337893aa5d06bb8be5d5587da11398b246b02e
  11. 27 Aug, 2021 1 commit
    • Remove social_eye reference from public docstring · 5e521841
      Jake Popham authored
      Summary: d2go/modeling/misc.py is open source, and references an internal code path in its docstring. This diff removes that reference.
      
      Reviewed By: wat3rBro
      
      Differential Revision: D30578876
      
      fbshipit-source-id: b255af215e6c096f62f17f65c5f72a0ab95458a9
  12. 25 Aug, 2021 2 commits
    • only log evaluation metric on rank 0 · 567a9a80
      Kai Zhang authored
      Summary: All metrics should already have been reduced onto rank 0 by the dataset evaluator.
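
      A sketch of the guard, assuming detectron2's `comm` utilities:
      ```python
      import logging
      from detectron2.utils import comm

      logger = logging.getLogger(__name__)

      def log_eval_metrics(results: dict) -> None:
          # Metrics were already reduced onto rank 0 by the dataset
          # evaluator, so only the main process needs to log them.
          if comm.is_main_process():
              logger.info("evaluation metrics: %s", results)
      ```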
      
      Reviewed By: wat3rBro
      
      Differential Revision: D30389938
      
      fbshipit-source-id: f8dfb6f1f17635c2fb98391780fdefe90c630054
    • fix two-stage DF-DETR · aea87f6c
      Zhicheng Yan authored
      Summary:
      Pull Request resolved: https://github.com/facebookresearch/d2go/pull/106
      
      # 2-stage DF-DETR
      
      DF-DETR supports 2-stage detection. In the 1st stage, we detect class-agnostic boxes using the feature pyramid (a.k.a. `memory` in the code) computed by the encoder.
      
      The current implementation has a few flaws:
      - In `setcriterion.py`, when computing the loss for the encoder's 1st-stage predictions, `num_boxes` should be reduced across GPUs and clamped to a positive value to avoid a divide-by-zero bug. The current implementation leads to a divide-by-zero NaN issue when `num_boxes` is zero (e.g. no box annotations in the cropped input image); see the sketch below.
      - In `gen_encoder_output_proposals()`, it manually fills in `float("inf")` at invalid spatial positions outside of the actual image size. However, it is not guaranteed that those positions won't be selected as top-scored positions. `float("inf")` can easily cause the affected parameters to be updated to NaN values.
      - The `class_embed` for the encoder should have 1 channel rather than num_class channels, because we only need to predict the probability of a box being foreground.
      
      This diff fixes the issues above.
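
      A sketch of the first fix, following the loss-normalization pattern from the original DETR codebase:
      ```python
      import torch
      import torch.distributed as dist

      def normalize_num_boxes(num_boxes: int, device: torch.device) -> float:
          # Average the box count across GPUs, then clamp to at least 1 so
          # the per-box loss normalization never divides by zero (e.g. when
          # a cropped image has no box annotations).
          t = torch.as_tensor([float(num_boxes)], device=device)
          if dist.is_available() and dist.is_initialized():
              dist.all_reduce(t)
              t = t / dist.get_world_size()
          return torch.clamp(t, min=1).item()
      ```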
      
      # Gradient blocking in decoder
      
      Currently, the gradient of the reference points is blocked at each decoding layer to improve numerical stability during training.
      This diff adds an option `MODEL.DETR.DECODER_BLOCK_GRAD`; when False, we do NOT block the gradient. Empirically, we find this leads to better box AP.
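
      A minimal sketch of the option (only the config key is from the diff; the helper is illustrative):
      ```python
      import torch

      def maybe_block_reference_grad(ref: torch.Tensor, block_grad: bool) -> torch.Tensor:
          # MODEL.DETR.DECODER_BLOCK_GRAD = True: stop gradients from flowing
          # through the refined reference points between decoder layers (more
          # stable). False: keep the gradient, which empirically improves AP.
          return ref.detach() if block_grad else ref
      ```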
      
      Reviewed By: zhanghang1989
      
      Differential Revision: D30325396
      
      fbshipit-source-id: 7d7add1e05888adda6e46cc6886117170daa22d4
  13. 24 Aug, 2021 1 commit
  14. 20 Aug, 2021 1 commit
    • remove interface of export_predictor · 7992f913
      Yanghan Wang authored
      Summary: `export_predictor` is no longer customizable; all customization is done via `prepare_for_export` and `ModelExportMethod`.
      
      Reviewed By: zhanghang1989
      
      Differential Revision: D28083607
      
      fbshipit-source-id: e584fff185912ca3e985194b741860276f0943df
  15. 18 Aug, 2021 2 commits
    • torch batch boundary CE loss · 7ae35eec
      Siddharth Shah authored
      Summary:
      A batched torch version allows us to avoid the CPU <--> GPU copy, saving ~200ms per iteration. This new version of generating the boundary weight mask produces identical masks.
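
      One common way to build a boundary weight mask entirely on the GPU is erosion via max-pooling; this is a hedged sketch of the general technique, not necessarily the exact implementation in this diff:
      ```python
      import torch
      import torch.nn.functional as F

      def boundary_mask(masks: torch.Tensor, width: int = 3) -> torch.Tensor:
          # masks: (N, H, W) binary. A pixel is on the boundary if it is
          # foreground but within `width` pixels of background (erosion done
          # with max-pooling on the inverted mask, so no CPU round-trip).
          m = masks.float().unsqueeze(1)  # (N, 1, H, W)
          eroded = 1.0 - F.max_pool2d(1.0 - m, kernel_size=width, stride=1, padding=width // 2)
          return (m - eroded).clamp(min=0).squeeze(1)
      ```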
      
      Reviewed By: wat3rBro
      
      Differential Revision: D30176412
      
      fbshipit-source-id: 877f4c6337e7870d3bafd8eb9157ac166ddd588a
    • Add multi-tensor optimizer version for SGD · 918abe42
      Valentin Andrei authored
      Summary:
      Added a multi-tensor optimizer implementation for SGD, from `torch.optim._multi_tensor`. It can potentially provide a ~5% QPS improvement by using the `foreach` API to speed up the optimizer step.
      
      Using it is optional: specify `SGD_MT` in the `SOLVER.OPTIMIZER` setting of the configuration file.
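
      A sketch of how the optimizer class might be selected from the config (`torch.optim._multi_tensor` was a private API in the PyTorch versions of that era):
      ```python
      import torch
      from torch.optim import _multi_tensor  # private API, PyTorch ~1.9

      def build_sgd(params, cfg):
          # Choose the foreach-based SGD when SOLVER.OPTIMIZER is "SGD_MT";
          # it batches per-parameter ops into fused multi-tensor kernels.
          opt_cls = _multi_tensor.SGD if cfg.SOLVER.OPTIMIZER == "SGD_MT" else torch.optim.SGD
          return opt_cls(params, lr=cfg.SOLVER.BASE_LR, momentum=cfg.SOLVER.MOMENTUM)
      ```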
      
      Reviewed By: zhanghang1989
      
      Differential Revision: D30377761
      
      fbshipit-source-id: 06107f1b91e9807c1db5d1b0ca6be09fcbb13e67
  16. 17 Aug, 2021 1 commit
  17. 16 Aug, 2021 2 commits
  18. 13 Aug, 2021 1 commit
    • Reduce number of parameter groups to make optimizer more efficient · 737d099b
      Valentin Andrei authored
      Summary:
      `torch.optim._multi_tensor` provides faster optimizer implementations because it uses the foreach APIs. We can enable it by changing `OPTIMIZER: "ADAMW"` to `OPTIMIZER: "ADAMW_MT"` in the config file.
      
      In order to profit from the speedup, we need to reduce the number of parameter groups as suggested in this post: https://fb.workplace.com/groups/1405155842844877/permalink/4971600462867046/
      
      The current implementation uses one parameter group per parameter, which is not optimal. The proposed change groups parameters by their learning rate and weight decay combinations, as sketched below.
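
      A sketch of the grouping idea: collapse per-parameter groups into one group per unique (learning rate, weight decay) pair:
      ```python
      from collections import defaultdict

      def regroup_parameters(per_param_groups):
          # Merge single-parameter groups that share lr and weight_decay, so
          # multi-tensor optimizers can batch updates across each group.
          merged = defaultdict(list)
          for g in per_param_groups:
              merged[(g["lr"], g["weight_decay"])].extend(g["params"])
          return [
              {"params": params, "lr": lr, "weight_decay": wd}
              for (lr, wd), params in merged.items()
          ]
      ```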
      
      Reviewed By: zhanghang1989
      
      Differential Revision: D30272112
      
      fbshipit-source-id: d8d24298a59b52c2fc2930f7d614a0c6380a432f
  19. 11 Aug, 2021 3 commits
  20. 06 Aug, 2021 2 commits
  21. 05 Aug, 2021 2 commits
    • Clarifying the use of do_test function · 610d2d03
      Abduallah Mohamed authored
      Summary: The `do_test` method can be used to perform testing outside the training process. One might think it loads the model weights before testing, similar to the `do_train` method, but it does not. This diff adds a comment that clears up this confusion.
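
      In other words, weights must be loaded explicitly before calling `do_test`; a sketch using detectron2's checkpointer (`model`, `cfg`, and `do_test` come from the usual runner setup):
      ```python
      from detectron2.checkpoint import DetectionCheckpointer

      # do_test does NOT load weights itself (unlike do_train's resume
      # logic), so load the checkpoint explicitly first.
      DetectionCheckpointer(model).load(cfg.MODEL.WEIGHTS)
      results = do_test(cfg, model)
      ```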
      
      Reviewed By: ppwwyyxx
      
      Differential Revision: D29082338
      
      fbshipit-source-id: 6ec7d7f7f243503414fa904f4eb8856e62e9ed6d
    • avoid warnings of NCCL · 30d5ca55
      Yuxin Wu authored
      Summary:
      Pull Request resolved: https://github.com/facebookresearch/detectron2/pull/3322
      
      avoid warnings like the following:
      ```
      [W ProcessGroupNCCL.cpp:1569] Rank 0 using best-guess GPU 0 to perform barrier as devices used by
      this process are currently unknown. This can potentially cause a hang if this rank to GPU mapping is
      incorrect. Specify device_ids in barrier() to force use of a particular device.
      ```
      
      This may also fix the hang in https://github.com/facebookresearch/detectron2/issues/3319.
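
      The change amounts to giving the barrier an explicit device id; a sketch:
      ```python
      import torch
      import torch.distributed as dist

      # Passing device_ids tells NCCL which GPU this rank uses, silencing
      # the best-guess warning and avoiding a hang from a wrong rank->GPU
      # mapping.
      dist.barrier(device_ids=[torch.cuda.current_device()])
      ```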
      
      Reviewed By: vaibhava0
      
      Differential Revision: D30077957
      
      fbshipit-source-id: b8827e66c5eecc06b650acde2e7ff44106327f69
  22. 04 Aug, 2021 1 commit
  23. 03 Aug, 2021 3 commits
  24. 01 Aug, 2021 1 commit
    • stabilize deformable DETR training · a4f06b88
      Zhicheng Yan authored
      Summary:
      Deformable DETR training can be unstable due to the iterative box refinement in the transformer decoder. To stabilize training, we introduce two changes:
      - Remove the unnecessary use of inverse sigmoid.
      It is possible to completely avoid using inverse sigmoid when box refinement is turned on.
      - In the `DeformableTransformer` class, detach `init_reference_out` before passing it into the decoder to update memory and compute per-decoder-layer reference points (see the sketch below).
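
      A minimal sketch of the second change (names follow the Deformable DETR reference implementation):
      ```python
      import torch

      def detach_initial_reference(init_reference_out: torch.Tensor) -> torch.Tensor:
          # Detach the encoder-produced initial reference points before the
          # decoder uses them, so gradients from the decoder's iterative box
          # refinement cannot flow back and destabilize training.
          return init_reference_out.detach()
      ```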
      
      Reviewed By: zhanghang1989
      
      Differential Revision: D29903599
      
      fbshipit-source-id: a374ba161be0d7bcdfb42553044c4c6700e92623