- 04 Jan, 2024 1 commit
-
-
generatedunixname89002005287564 authored
Summary: Pull Request resolved: https://github.com/facebookresearch/d2go/pull/645 Reviewed By: zsol Differential Revision: D52536030 fbshipit-source-id: e6d0004c5bea81b5dab0ff69a1e9f6df4929b952
-
- 17 Jun, 2022 1 commit
-
-
Yanghan Wang authored
Summary: Pull Request resolved: https://github.com/facebookresearch/d2go/pull/298 Reviewed By: tglik, newstzpz Differential Revision: D37152248 fbshipit-source-id: 58a6899c5f6465f36961a2ebf60a64f20509cec2
-
- 14 Jun, 2022 1 commit
-
-
Yanghan Wang authored
Summary: Pull Request resolved: https://github.com/facebookresearch/d2go/pull/293 In order to pass the runner through the workflow by "runner name" instead of as a runner instance, we need to make sure `get_default_cfg` is not an instance method. It can be either a staticmethod or a classmethod; I chose classmethod for better inheritance. Codemodded using the following script:
```
#!/usr/bin/env python3
import json
import os
import subprocess

# search fbsource for definitions of get_default_cfg (fbgs is an internal code-search tool)
result = subprocess.check_output("fbgs --json 'def get_default_cfg('", shell=True)
fbgs = json.loads(result)
fbsource_root = os.path.expanduser("~")


def _indent(s):
    return len(s) - len(s.lstrip())


def resolve_instance_method(content):
    lines = content.split("\n")
    for idx, line in enumerate(lines):
        if "def get_default_cfg(self" in line:
            indent = _indent(line)
            # find the enclosing class
            for j in range(idx, 0, -1):
                if lines[j].startswith(" " * (indent - 4) + "class "):
                    class_line = lines[j]
                    break
            else:
                raise RuntimeError("Can't find class")
            print("class_line: ", class_line)
            if "Runner" in class_line:
                # check that self is not used in the method body
                for j in range(idx + 1, len(lines)):
                    if _indent(lines[j]) < indent:
                        break
                    assert "self" not in lines[j], (j, lines[j])
                # update the content
                assert "def get_default_cfg(self)" in line
                lines[idx] = lines[idx].replace(
                    "def get_default_cfg(self)", "def get_default_cfg(cls)"
                )
                lines.insert(idx, " " * indent + "@classmethod")
                return "\n".join(lines)
    return content


def resolve_static_method(content):
    lines = content.split("\n")
    for idx, line in enumerate(lines):
        if "def get_default_cfg()" in line:
            indent = _indent(line)
            # find the enclosing class
            for j in range(idx, 0, -1):
                if "class " in lines[j]:
                    class_line = lines[j]
                    break
            else:
                print("[WARNING] Can't find class!!!")
                continue
            if "Runner" in class_line:
                # find the staticmethod decorator
                for j in range(idx, 0, -1):
                    if lines[j] == " " * indent + "@staticmethod":
                        staticmethod_line_idx = j
                        break
                else:
                    raise RuntimeError("Can't find staticmethod")
                # update the content
                lines[idx] = lines[idx].replace(
                    "def get_default_cfg()", "def get_default_cfg(cls)"
                )
                lines[staticmethod_line_idx] = " " * indent + "@classmethod"
                return "\n".join(lines)
    return content


for result in fbgs["results"]:
    filename = os.path.join(fbsource_root, result["file_name"])
    print(f"processing: {filename}")
    with open(filename) as f:
        content = f.read()
    orig_content = content
    while True:
        old_content = content
        content = resolve_instance_method(content)
        content = resolve_static_method(content)
        if content == old_content:
            break
    if content != orig_content:
        print("Updating ...")
        with open(filename, "w") as f:
            f.write(content)
```
Reviewed By: tglik Differential Revision: D37059264 fbshipit-source-id: b09d5518f4232de95d8313621468905cf10a731c
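For context, a minimal sketch (with hypothetical class names, not the actual d2go runners) of why a classmethod is friendlier to inheritance here than a staticmethod:
```
class BaseRunner:
    @classmethod
    def get_default_cfg(cls):
        # hypothetical config; d2go uses a CfgNode, a plain dict keeps this sketch runnable
        return {"MODEL_NAME": "base", "LR": 0.02}


class MyRunner(BaseRunner):
    @classmethod
    def get_default_cfg(cls):
        # classmethod overrides compose naturally via super(); a staticmethod
        # would have no handle on the class hierarchy
        cfg = super().get_default_cfg()
        cfg["MODEL_NAME"] = "my_model"
        return cfg


# a workflow can now resolve a runner by name and still get the right defaults
runner_cls = MyRunner
print(runner_cls.get_default_cfg())  # {'MODEL_NAME': 'my_model', 'LR': 0.02}
```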
-
- 12 May, 2022 1 commit
-
-
John Reese authored
Summary: Applies the black-fbsource codemod with the new build of pyfmt. paintitblack Reviewed By: lisroach Differential Revision: D36324783 fbshipit-source-id: 280c09e88257e5e569ab729691165d8dedd767bc
-
- 26 Apr, 2022 1 commit
-
-
Yanghan Wang authored
Reviewed By: tglik Differential Revision: D35910666 fbshipit-source-id: 8225aca6696484dcc78e91ce50b936e1bee086d1
-
- 19 Apr, 2022 1 commit
-
-
Lisa Roach authored
Summary: Pull Request resolved: https://github.com/facebookresearch/d2go/pull/212 Applies new import merging and sorting from µsort v1.0. When merging imports, µsort will make a best effort to move associated comments to match merged elements, but there are known limitations due to the dynamic nature of Python and developer tooling. These changes should not produce any dangerous runtime changes, but may require touch-ups to satisfy linters and other tooling.

Note that µsort uses case-insensitive, lexicographical sorting, which results in a different ordering compared to isort. This provides a more consistent sorting order, matching the case-insensitive order used when sorting import statements by module name, and ensures that "frog", "FROG", and "Frog" always sort next to each other.

For details on µsort's sorting and merging semantics, see the user guide: https://usort.readthedocs.io/en/stable/guide.html#sorting

Reviewed By: jreese, wat3rBro Differential Revision: D35559673 fbshipit-source-id: feeae2465ac2b62c44a0e92dc566e9a386567c9d
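To make the ordering difference concrete, a small generic Python example (not µsort itself) contrasting case-sensitive and case-insensitive string sorting:
```
modules = ["FROG", "frog", "Frog", "ant", "Zebra"]

# plain string sort is case-sensitive: uppercase names group before lowercase ones
print(sorted(modules))                    # ['FROG', 'Frog', 'Zebra', 'ant', 'frog']

# case-insensitive ordering keeps "frog", "FROG", and "Frog" next to each other
print(sorted(modules, key=str.casefold))  # ['ant', 'FROG', 'frog', 'Frog', 'Zebra']
```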
-
- 07 Feb, 2022 1 commit
-
-
Hang Zhang authored
Summary: Pull Request resolved: https://github.com/facebookresearch/d2go/pull/169 Make d2go DETR exportable (torchscript compatible). Move mask generation to preprocessing. Reviewed By: sstsai-adl Differential Revision: D33798073 fbshipit-source-id: d629b0c9cbdb67060982be717c7138a0e7e9adbc
-
- 02 Feb, 2022 1 commit
-
-
Steven Troxler authored
Summary: Convert type comments in fbcode/mobile-vision. Produced by running:
```
python -m libcst.tool codemod convert_type_comments.ConvertTypeComment fbcode/mobile-vision
```
from fbsource. See https://fb.workplace.com/groups/pythonfoundation/permalink/3106231549690303/ Reviewed By: grievejia Differential Revision: D33897026 fbshipit-source-id: e7666555e47a9abc769975f6db6b2e6eda792d72
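For readers unfamiliar with the codemod: it rewrites PEP 484 type comments into inline annotations. A hypothetical before/after (not taken from the actual diff):
```
import numpy as np

# before: type information lives in a comment
def resize(image, scale=1.0):
    # type: (np.ndarray, float) -> np.ndarray
    ...

# after: the codemod moves it into the signature
def resize(image: np.ndarray, scale: float = 1.0) -> np.ndarray:
    ...
```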
-
- 27 Jan, 2022 3 commits
-
-
Hang Zhang authored
Summary: As in the title Reviewed By: XiaoliangDai Differential Revision: D33413849 fbshipit-source-id: b891849c175edc7b8916bff2fcc40c76c4658f14
-
Hang Zhang authored
Summary: Learnable query doesn't improve the results, but it helps DETR with reference points in D33420993 Reviewed By: XiaoliangDai Differential Revision: D33401417 fbshipit-source-id: 5296f2f969c04df18df292d61a7cf57107bc9b74
-
Hang Zhang authored
Summary: Add the DETR_MODEL_REGISTRY registry to better support different variants of DETR (in later diffs). Reviewed By: newstzpz Differential Revision: D32874194 fbshipit-source-id: f8e9a61417ec66bec9f2d98631260a2f4e2af4cf
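As context, a minimal sketch of how such a registry is typically used in detectron2-style code (class and builder names here are illustrative, not the exact d2go implementation):
```
from detectron2.utils.registry import Registry

DETR_MODEL_REGISTRY = Registry("DETR_MODEL")  # maps a string name to a model class


@DETR_MODEL_REGISTRY.register()
class DETR:
    ...


@DETR_MODEL_REGISTRY.register()
class DeformableDETR:
    ...


def build_detr_model(name, *args, **kwargs):
    # variants can now be selected from config by name
    return DETR_MODEL_REGISTRY.get(name)(*args, **kwargs)
```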
-
- 28 Nov, 2021 1 commit
-
-
Hang Zhang authored
Summary: Experimental models from Xiaoliang [D31749820]. Pretrained weights:
- fbnet_vit_tiny_v3_lepe: n/a
- fbnet_deit_v0: f298782311

Reviewed By: XiaoliangDai Differential Revision: D32054949 fbshipit-source-id: 7c2aa0679a545ed814ba1db421408a5f9a59a2c8
-
- 15 Sep, 2021 1 commit
-
-
Valentin Andrei authored
Reviewed By: stephenyan1231 Differential Revision: D30827134 fbshipit-source-id: e0fcb3b5f62d52283c08870dc9062c2086faf163
-
- 09 Sep, 2021 1 commit
-
-
Yanghan Wang authored
Summary: https://fb.workplace.com/groups/pythonfoundation/posts/2990917737888352 Remove `mobile-vision` from the opt-out list, leaving `mobile-vision/SNPE` opted out because of 3rd-party code. `arc lint --take BLACK --apply-patches --paths-cmd 'hg files mobile-vision'` allow-large-files Reviewed By: sstsai-adl Differential Revision: D30721093 fbshipit-source-id: 9e5c16d988b315b93a28038443ecfb92efd18ef8
-
- 02 Sep, 2021 1 commit
-
-
Zhicheng Yan authored
Summary: When training DF-DETR with a swin-transformer backbone, which uses a large size_divisibility of 224 (= 32 * 7) and therefore potentially more zero-padding, we find the regressed boxes can contain NaN values and fail the assertion here (https://fburl.com/code/p27ztcce). There are two potential causes, addressed by two fixes:
- Fix 1. In the DF-DETR encoder, the reference points prepared by `get_reference_points()` can contain normalized x,y coordinates larger than 1 due to rounding issues when interpolating the mask across feature scales (specific examples can be given upon request LoL). Thus, we clamp the max of the x,y coordinates to 1.0 (see the sketch below).
- Fix 2. The MLP used in the bbox_embed heads contains 3 FC layers, which might be too many. We introduce an argument `BBOX_EMBED_NUM_LAYERS` so users can configure the number of FC layers. This change is backward-compatible.

Reviewed By: zhanghang1989 Differential Revision: D30661167 fbshipit-source-id: c7e94983bf1ec07426fdf1b9d363e5163637f21a
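A rough sketch of what Fix 1 amounts to, assuming reference points normalized to [0, 1] by the valid (un-padded) region; function and variable names are illustrative, not the exact d2go code:
```
import torch


def normalized_reference_points(h, w, valid_h, valid_w):
    # cell centers normalized by the *valid* (un-padded) region, as in deformable
    # attention; interpolating/rounding the valid mask across feature scales can
    # make valid_h / valid_w slightly too small, pushing coordinates past 1.0
    ref_y, ref_x = torch.meshgrid(
        torch.linspace(0.5, h - 0.5, h),
        torch.linspace(0.5, w - 0.5, w),
        indexing="ij",
    )
    ref_x = ref_x / valid_w
    ref_y = ref_y / valid_h
    ref = torch.stack((ref_x, ref_y), dim=-1)
    return ref.clamp(max=1.0)  # Fix 1: keep normalized coordinates in [0, 1]
```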
-
- 25 Aug, 2021 1 commit
-
-
Zhicheng Yan authored
Summary: Pull Request resolved: https://github.com/facebookresearch/d2go/pull/106

# 2-stage DF-DETR
DF-DETR supports 2-stage detection. In the 1st stage, we detect class-agnostic boxes using the feature pyramid (a.k.a. `memory` in the code) computed by the encoder. The current implementation has a few flaws:
- In `setcriterion.py`, when computing the loss for the encoder's 1st-stage predictions, `num_boxes` should be reduced across GPUs and clamped to a positive integer to avoid a divide-by-zero bug. The current implementation leads to a divide-by-zero NaN when `num_boxes` is zero (e.g. no box annotation in the cropped input image); see the sketch below.
- In `gen_encoder_output_proposals()`, we manually fill in `float("inf")` at invalid spatial positions outside the actual image size. However, it is not guaranteed that those positions won't be selected as top-scored positions, and `float("inf")` can easily cause the affected parameters to be updated to NaN values.
- `class_embed` for the encoder should have 1 channel rather than num_class channels, because we only need to predict the probability of a box being foreground.

This diff fixes the issues above.

# Gradient blocking in decoder
Currently, the gradient of the reference point is blocked at each decoding layer to improve numerical stability during training. This diff adds an option `MODEL.DETR.DECODER_BLOCK_GRAD`. When False, we do NOT block the gradient. Empirically, we find this leads to better box AP.

Reviewed By: zhanghang1989 Differential Revision: D30325396 fbshipit-source-id: 7d7add1e05888adda6e46cc6886117170daa22d4
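A hedged sketch of the `num_boxes` fix, assuming a standard DETR-style criterion running under torch.distributed (the helper name is illustrative):
```
import torch
import torch.distributed as dist


def normalize_num_boxes(num_boxes_local, device):
    # sum the box count across GPUs so every rank divides by the same value
    num_boxes = torch.as_tensor([num_boxes_local], dtype=torch.float, device=device)
    if dist.is_available() and dist.is_initialized():
        dist.all_reduce(num_boxes)
        num_boxes = num_boxes / dist.get_world_size()
    # clamp to >= 1 so images without annotations cannot trigger divide-by-zero NaNs
    return torch.clamp(num_boxes, min=1).item()
```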
-
- 11 Aug, 2021 1 commit
-
-
Valentin Andrei authored
Reviewed By: stephenyan1231 Differential Revision: D30225977 fbshipit-source-id: 479b96acc7f90a8ee2373ab44112e21086e9d1d2
-
- 03 Aug, 2021 1 commit
-
-
Hang Zhang authored
Summary: Pull Request resolved: https://github.com/facebookresearch/d2go/pull/105 exploring deformable attention in transformer Reviewed By: bichenwu09 Differential Revision: D29093714 fbshipit-source-id: dd691754d9e439661e2eddecb3a1e7cefc8fe568
-
- 01 Aug, 2021 1 commit
-
-
Zhicheng Yan authored
Summary: Deformable DETR training can be unstable due to iterative box refinement in the transformer decoder. To stabilize the training, this diff introduces two changes:
- Remove the unnecessary use of inverse sigmoid. It is possible to completely avoid using inverse sigmoid when box refinement is turned on.
- In the `DeformableTransformer` class, detach `init_reference_out` before passing it into the decoder to update memory and compute per-decoder-layer reference points.

Reviewed By: zhanghang1989 Differential Revision: D29903599 fbshipit-source-id: a374ba161be0d7bcdfb42553044c4c6700e92623
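A toy illustration of what detaching the initial reference does (stand-in modules, not the actual `DeformableTransformer` code): the refinement loss can no longer push gradients into the module that produced the initial reference.
```
import torch

# stand-ins: a "reference point head" and a "decoder refinement step"
ref_head = torch.nn.Linear(8, 2)
refine = torch.nn.Linear(2, 2)

query = torch.randn(4, 8)
init_reference = ref_head(query).sigmoid()

# detaching the initial reference means the refinement loss only updates `refine`,
# not `ref_head`, which is the stabilization described above
refined = refine(init_reference.detach())
refined.sum().backward()

print(ref_head.weight.grad)            # None: gradient blocked by the detach
print(refine.weight.grad is not None)  # True: the refinement step still trains
```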
-
- 29 Jul, 2021 1 commit
-
-
Hang Zhang authored
Summary: Add a new backbone. Experimental results: https://fburl.com/7fyecmrc Reviewed By: bichenwu09 Differential Revision: D26877909 fbshipit-source-id: ba3f97a1e4d84bec22d6a345f1fca06c741010cc
-
- 08 Jul, 2021 1 commit
-
-
Zhicheng Yan authored
Summary: Pull Request resolved: https://github.com/facebookresearch/d2go/pull/96 In `DETRRunner`, the method `build_optimizer` customized the following logic, which is redundant with the parent class implementation and can be removed:
- Discounting the LR for certain modules, such as those named `reference_points`, `backbone`, and `sampling_offsets` (sketched below).
  - This can be achieved with `SOLVER.LR_MULTIPLIER_OVERWRITE` after we update `get_default_optimizer_params` in `mobile-vision/d2go/d2go/optimizer/build.py`.
- Full-model gradient clipping.
  - This is also implemented in `mobile-vision/d2go/d2go/optimizer/build.py`.

It also has a minor issue: it ignores `SOLVER.WEIGHT_DECAY_NORM`, which can set a different weight decay for affine parameters in the norm modules.

Reviewed By: zhanghang1989 Differential Revision: D29420642 fbshipit-source-id: deeb9348c9d282231c540dde6161acedd8e3a119
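For context, the per-module LR discount that the customized `build_optimizer` implemented boils down to standard PyTorch parameter groups; a generic sketch under that assumption (not the d2go implementation, and the multiplier value is illustrative):
```
import torch


def build_discounted_optimizer(model, base_lr=2e-4, discount=0.1):
    discounted, regular = [], []
    for name, param in model.named_parameters():
        if not param.requires_grad:
            continue
        # modules such as the backbone, reference_points, or sampling_offsets
        # train with a smaller learning rate in Deformable DETR recipes
        if any(k in name for k in ("backbone", "reference_points", "sampling_offsets")):
            discounted.append(param)
        else:
            regular.append(param)
    return torch.optim.AdamW(
        [
            {"params": regular, "lr": base_lr},
            {"params": discounted, "lr": base_lr * discount},
        ]
    )
```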
-
- 02 Jul, 2021 1 commit
-
-
Zhicheng Yan authored
Summary: In D29048363 (https://github.com/facebookresearch/d2go/commit/c480d4e4e213a850cced7758f7b62c20caad8820) we moved the detaching of `reference_points` earlier in the hope of allowing more gradient flow to update the weights in `self.bbox_embed`. In this diff, we revert the change because i) it does not improve box AP and ii) it may potentially cause unstable optimization when iterative box refinement is turned on. Reviewed By: zhanghang1989 Differential Revision: D29530735 fbshipit-source-id: 3217c863343836e129d53e07c0eedb2db8164fe6
-
- 30 Jun, 2021 1 commit
-
-
Zhicheng Yan authored
Summary: Pull Request resolved: https://github.com/facebookresearch/d2go/pull/97 Major changes:
- Fix a bug in the `inference()` function.
- Refactor to remove redundant code shared between `SetCriterion` and `FocalLossSetCriterion`.

Reviewed By: zhanghang1989 Differential Revision: D29481067 fbshipit-source-id: 64788f1ff331177db964eb36d380430799d1d2f2
-
- 24 Jun, 2021 1 commit
-
-
Zhicheng Yan authored
Summary: Major changes:
- As described in detail in appendix A.4 of the Deformable DETR paper (https://arxiv.org/abs/2010.04159), gradient back-propagation is blocked at inverse_sigmoid(bounding box x/y/w/h from the last decoder layer). This can be implemented by detaching the tensor from the compute graph in PyTorch. However, we currently detach the wrong tensor, preventing updates to the layers that predict delta x/y/w/h. Fix this bug.
- Add more comments annotating data types and tensor shapes in the code. This should NOT affect the actual implementation.

Reviewed By: zhanghang1989 Differential Revision: D29048363 fbshipit-source-id: c5b5e89793c86d530b077a7b999769881f441b69
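For reference, `inverse_sigmoid` here is the usual Deformable DETR helper that maps normalized box coordinates back to logit space; a generic sketch of it is below (the fix above concerns where `.detach()` is applied around it, not the function itself):
```
import torch


def inverse_sigmoid(x, eps=1e-5):
    # maps values in (0, 1) back to logits, clamped for numerical stability
    x = x.clamp(min=0, max=1)
    x1 = x.clamp(min=eps)
    x2 = (1 - x).clamp(min=eps)
    return torch.log(x1 / x2)
```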
-
- 20 Jun, 2021 1 commit
-
-
Albert Pumarola authored
Summary: Add create and train unit tests to OSS runner Reviewed By: zhanghang1989 Differential Revision: D29254417 fbshipit-source-id: f7c52b90b2bc7afa83a204895be149664c675e52
-
- 12 Jun, 2021 1 commit
-
-
Zhicheng Yan authored
Summary: Major changes:
- Add a new runner `EgoDETRRunner` which inherits from the existing `DETRRunner` in the D2GO repo (see https://github.com/facebookresearch/d2go/commit/62c21f252ad314961cf0157ee8f37cc4f7835e1d).
- Add a new data mapper `EgoDETRDatasetMapper` which has a custom crop transform generator and supports generic data augmentation.

Reviewed By: zhanghang1989 Differential Revision: D28895225 fbshipit-source-id: 4181ff8fce81df22a01d355fdff7e81e83d69e64
-
- 06 Apr, 2021 1 commit
-
-
Hang Zhang authored
Summary: TorchVision recently upgraded to version 0.10.0, which breaks the version check in DETR. Reviewed By: wat3rBro Differential Revision: D27575085 fbshipit-source-id: 75f459fe7a711161e908609fcf2f2d28a01a6c74
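The underlying issue (an assumption about the upstream DETR code, which compares versions by float-parsing the first few characters of `torchvision.__version__`): "0.10.0" truncates to "0.1", so the check wrongly treats 0.10 as older than 0.7. A small sketch of the failure and a safer comparison:
```
import torchvision

# naive check in the style that breaks: "0.10.0"[:3] == "0.1" -> 0.1 < 0.7
naive_is_old = float(torchvision.__version__[:3]) < 0.7

# safer: compare the parsed (major, minor) tuple instead of a truncated string
major, minor = (int(x) for x in torchvision.__version__.split(".")[:2])
is_old = (major, minor) < (0, 7)

print(naive_is_old, is_old)  # on torchvision 0.10.0: True, False
```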
-
- 03 Mar, 2021 1 commit
-
-
facebook-github-bot authored
fbshipit-source-id: f4a8ba78691d8cf46e003ef0bd2e95f170932778
-