- 09 Jul, 2021 2 commits
-
-
Mircea Cimpoi authored
Summary: Adding a test for the previous diff. The BoltNN backend is only supported on-device, so this test only checks that the conversion takes place and the output file is present. Differential Revision: D29589245 fbshipit-source-id: ba66a733295304531d177086ce6459a50cfbaa07
-
Mircea Cimpoi authored
Summary: Added predictor_type `boltnn_int8` to export to BoltNN via the torch delegate. - `int8` needs to be in the name, otherwise post-training quantization won't happen; ``` cfg.QUANTIZATION.BACKEND = "qnnpack" // cfg.QUANTIZATION.CUSTOM_QSCHEME = "per_tensor_affine" ``` It seems that `QUANTIZATION.CUSTOM_QSCHEME per_tensor_affine` is not needed - likely already covered by "qnnpack". Reviewed By: wat3rBro Differential Revision: D29106043 fbshipit-source-id: 865ac5af86919fe7b4530b48433a1bd11e295bf4
-
- 08 Jul, 2021 3 commits
-
-
Zhicheng Yan authored
Summary: Pull Request resolved: https://github.com/facebookresearch/d2go/pull/101 In D2GoDatasetMapper, when the crop transform is applied to the image, "inputs" should be updated to use the cropped image before the other transforms are applied. Reviewed By: zhanghang1989 Differential Revision: D29551488 fbshipit-source-id: 48917ffc91c8a80286d61ba3ae8391541ec2c930
-
Zhicheng Yan authored
Summary: Pull Request resolved: https://github.com/facebookresearch/d2go/pull/96 In `DETRRunner`, the method `build_optimizer` customized the following logic, which is redundant with the parent class implementation and can be removed. - Discounting the LR for certain modules, such as those named `reference_points`, `backbone`, and `sampling_offsets`. - This can be achieved with `SOLVER.LR_MULTIPLIER_OVERWRITE` after we update `get_default_optimizer_params` in `mobile-vision/d2go/d2go/optimizer/build.py`. - Full-model gradient clipping - This is also implemented in `mobile-vision/d2go/d2go/optimizer/build.py` It also has a minor issue - It ignores `SOLVER.WEIGHT_DECAY_NORM`, which can set a different weight decay for affine parameters in the norm modules. Reviewed By: zhanghang1989 Differential Revision: D29420642 fbshipit-source-id: deeb9348c9d282231c540dde6161acedd8e3a119
-
Sam Tsai authored
Summary: Fix a missing comma in the extended COCO loader, which caused the bbox_mode and keypoints fields to be ignored. Reviewed By: zhanghang1989 Differential Revision: D29608815 fbshipit-source-id: 8c737df1dfef7f88494f7de25e06b0c37742ac30
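The bug class fixed above is a classic Python pitfall: a missing comma between adjacent string literals silently concatenates them, so a field list quietly loses entries. A minimal illustration (the field names come from the commit message; the list itself is hypothetical):

```python
# Implicit string concatenation: omitting the comma between two adjacent
# string literals merges them into a single string instead of raising an error.
fields_buggy = ["iscrowd", "bbox", "bbox_mode" "keypoints"]   # missing comma!
fields_fixed = ["iscrowd", "bbox", "bbox_mode", "keypoints"]

print(fields_buggy)  # ['iscrowd', 'bbox', 'bbox_modekeypoints']

# Any membership check now silently skips both fields:
assert "bbox_mode" not in fields_buggy
assert "keypoints" not in fields_buggy
assert "bbox_mode" in fields_fixed
```

Linters such as flake8 flag this pattern (implicit string concatenation inside a collection) precisely because it fails silently at runtime.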
-
- 07 Jul, 2021 1 commit
-
-
Daniel Li (AI) authored
Summary: Set find_unused_parameters according to DDP_FIND_UNUSED_PARAMETERS with DDPPlugin Reviewed By: kazhang Differential Revision: D29567013 fbshipit-source-id: f3ffac566a2ff046f55e692b3b24f9531913d4d4
-
- 06 Jul, 2021 1 commit
-
-
Cheng-Yang Fu authored
Summary: Add the fields which will be used in point-based modeling. - `point_coords`: the coordinates of the points in the image. - `point_labels`: whether each point is foreground or background. Differential Revision: D29532103 fbshipit-source-id: 9af6c9b049e1d05fd0d77909b09de1feec391ce9
-
- 02 Jul, 2021 1 commit
-
-
Zhicheng Yan authored
Summary: In D29048363 (https://github.com/facebookresearch/d2go/commit/c480d4e4e213a850cced7758f7b62c20caad8820) we moved the detaching of `reference_points` earlier in the hope of allowing more gradient flow to update the weights in `self.bbox_embed`. In this diff, we revert that change as i) it does not improve box AP and ii) it may potentially cause unstable optimization when iterative box refinement is turned on. Reviewed By: zhanghang1989 Differential Revision: D29530735 fbshipit-source-id: 3217c863343836e129d53e07c0eedb2db8164fe6
-
- 01 Jul, 2021 1 commit
-
-
Yanghan Wang authored
Summary: Pull Request resolved: https://github.com/facebookresearch/d2go/pull/98 https://github.com/facebookresearch/d2go/issues/60#issuecomment-863149605 #stamp2ship Reviewed By: zhanghang1989 Differential Revision: D29495802 fbshipit-source-id: 0dc63b1ee1f7c8c0a694c39ce41ab77c25109c60
-
- 30 Jun, 2021 3 commits
-
-
Jerry Zhang authored
Summary: Removed quant/dequant between backbone and proposal generator, and roi_box_conv and the following avg_pool Reviewed By: wat3rBro Differential Revision: D29383036 fbshipit-source-id: ef07b3d1997b1fc7f92bcd9201523e9071510a8b
-
Zhicheng Yan authored
Summary: Pull Request resolved: https://github.com/facebookresearch/d2go/pull/97 Major changes - Fix a bug within `inference()` function - Refactor code to remove redundant code between `SetCriterion` and `FocalLossSetCriterion`. Reviewed By: zhanghang1989 Differential Revision: D29481067 fbshipit-source-id: 64788f1ff331177db964eb36d380430799d1d2f2
-
Kai Zhang authored
Summary: "fb" -> "fn" Reviewed By: ananthsub Differential Revision: D29480559 fbshipit-source-id: 78a0cd3ddd25df2c877514d4a5c0c29c248267a2
-
- 29 Jun, 2021 3 commits
-
-
Arman Kapbasov authored
Summary: Updated the load_from_checkpoint method call inside lightning_task.py to include the extra 'strict' keyword parameter. Reviewed By: kazhang Differential Revision: D29446372 fbshipit-source-id: b14bc13db551f0876ca78d3ea164cfb08e71a757
-
Kai Zhang authored
Summary: A Lightning task for training StyleGAN2. Reviewed By: tax313 Differential Revision: D28922408 fbshipit-source-id: bdc9e7370de1b7b7ca9086bc6c0acbe66810d5f8
-
Kai Zhang authored
Summary: This diff introduces the D2Go GANs Lightning task for migrating D2Go's GANsRunner to a Lightning-based workflow. The Lightning task works directly with the D2Go e2e workflow. Reviewed By: tax313 Differential Revision: D28165835 fbshipit-source-id: 4d9d679e188f9d5f9a46f01f7d34a8f30c3e170b
-
- 27 Jun, 2021 2 commits
-
-
Kai Zhang authored
Summary: Currently we move the EMA weights to the expected device right after loading them from the checkpoint. However, by the time the on_load_checkpoint hook is called, the current GPU device has not been assigned yet. This could leave the EMA weights on cuda:0 while the model is on cuda:1. This diff moves the EMA weights to the device in `on_pretrain_routine_end` instead. Reviewed By: zhanghang1989 Differential Revision: D28429843 fbshipit-source-id: d864fb3687eb6958872300c5ec0af7ce90591f83
-
Yuxin Wu authored
Reviewed By: zhanghang1989 Differential Revision: D29379832 fbshipit-source-id: 9283a8796a1dbee81b51611407c22f7d5a2069dc
-
- 26 Jun, 2021 1 commit
-
-
Kai Zhang authored
Summary: # Context In the post-training quantization callback, we make a deepcopy of the Lightning module before validation starts and prepare the copy with the FX quantization API. The callback keeps the prepared model inside it. # The problem By the second time we run the validation epoch, we try to make a copy of the Lightning module, which has a reference to the trainer, which has a reference to the quantization callback, which has a prepared model, which is not deepcopyable. # Mitigation Delete the trainer reference before making the deepcopy. We're already doing that in stl/callbacks/quantization, but the changes were not ported into D2Go. Reviewed By: zhanghang1989 Differential Revision: D29409085 fbshipit-source-id: 24550124181673b2e567b2a04563bcdfb440e145
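The mitigation can be illustrated without Lightning: drop the non-deepcopyable back-reference before copying, then restore it. Here a `threading.Lock` stands in for the trainer/prepared-model object that cannot be deep-copied (the `Module` class and its attributes are hypothetical):

```python
import copy
import threading

class Module:
    """Stand-in for a LightningModule holding a trainer back-reference."""
    def __init__(self):
        self.weights = [1.0, 2.0]
        self.trainer = threading.Lock()  # locks are not deepcopyable

m = Module()
try:
    copy.deepcopy(m)  # fails: the reference chain reaches an uncopyable object
except TypeError:
    pass

trainer, m.trainer = m.trainer, None  # sever the back-reference
clone = copy.deepcopy(m)              # now the copy succeeds
m.trainer = trainer                   # restore it on the original
assert clone.weights == m.weights and clone.trainer is None
```

The same pattern (temporarily `None`-ing a cyclic or uncopyable attribute around `deepcopy`) is a common workaround whenever an object graph contains framework back-references.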
-
- 25 Jun, 2021 3 commits
-
-
Haricharan Lakshman authored
Summary: Convert the batchnorm layers whose names match the specified regular expressions to FrozenBatchNorm2d. If the module is itself a batchnorm instance and matches the regexes, return a new FrozenBatchNorm2d module. Otherwise, in-place convert the matching batchnorm child modules to FrozenBatchNorm2d and return the main module. Reviewed By: ppwwyyxx Differential Revision: D29286500 fbshipit-source-id: 3a20f5eeff59ddff50c42fe297eedf0ce2b909bc
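A frozen batchnorm applies a fixed affine transform computed from the stored statistics; nothing updates during training. A dependency-free sketch of the folding (function name and eps default are illustrative, not detectron2's API):

```python
import math

def frozen_bn_fold(weight, bias, running_mean, running_var, eps=1e-5):
    """Fold frozen BN statistics into a per-channel scale and shift:
    y = (x - mean) / sqrt(var + eps) * weight + bias  ==  x * scale + shift
    """
    scale = [w / math.sqrt(v + eps) for w, v in zip(weight, running_var)]
    shift = [b - m * s for b, m, s in zip(bias, running_mean, scale)]
    return scale, shift

scale, shift = frozen_bn_fold([1.0], [0.0], [2.0], [4.0])
x = 6.0
y = x * scale[0] + shift[0]
# matches the unfolded form (x - mean) / sqrt(var + eps) * weight + bias
assert abs(y - (6.0 - 2.0) / math.sqrt(4.0 + 1e-5)) < 1e-9
```

Because the fold reduces BN to a per-channel affine, FrozenBatchNorm2d behaves identically in train and eval modes, which is the point of using it for fine-tuning.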
-
Luming Ma authored
Summary: Some annotations use the XYXY_ABS bbox mode, so many images were incorrectly filtered out under the assumption of XYWH_ABS mode. This diff reads bbox_mode from the annotation and converts the bbox to XYWH_ABS before checking for invalid bboxes. Differential Revision: D29365700 fbshipit-source-id: 355346b6826f401f504691090631997e169ead4a
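The fix can be sketched in plain Python (these helpers are illustrative, not detectron2's `BoxMode` API): XYXY_ABS stores two corners, XYWH_ABS stores the top-left corner plus size, so a validity check that assumes the wrong mode rejects perfectly good boxes.

```python
def xyxy_to_xywh(box):
    """Convert [x0, y0, x1, y1] corner format to [x, y, w, h]."""
    x0, y0, x1, y1 = box
    return [x0, y0, x1 - x0, y1 - y0]

def is_valid_xywh(box):
    """A bbox is valid when it has strictly positive width and height."""
    _, _, w, h = box
    return w > 0 and h > 0

ann = {"bbox": [10, 20, 30, 60], "bbox_mode": "XYXY_ABS"}
box = ann["bbox"]
if ann["bbox_mode"] == "XYXY_ABS":   # read the mode before validating
    box = xyxy_to_xywh(box)
assert box == [10, 20, 20, 40] and is_valid_xywh(box)
```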
-
Sam Tsai authored
Summary: A suffix matching "@ [0-9]classes" is appended to dataset names to mark that they are derived from an original dataset, and the derived name is saved in the config. When reloading the config, the derived name would be used as the source instead of the original one. Add a check to remove the derived suffix. Reviewed By: wat3rBro Differential Revision: D29315132 fbshipit-source-id: 0cc204d305d2da6c9f1817aaf631270bd874f90d
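A hedged sketch of the suffix stripping; the helper name and the exact pattern are assumptions (the commit message quotes `"@ [0-9]classes"`, which this regex generalizes to multi-digit counts):

```python
import re

# Hypothetical helper: strip a derived-dataset suffix such as "@ 5classes"
# so the original dataset name is used as the source when reloading a config.
_DERIVED_SUFFIX = re.compile(r"@\s*[0-9]+\s*classes$")

def strip_derived_suffix(name: str) -> str:
    return _DERIVED_SUFFIX.sub("", name).rstrip()

assert strip_derived_suffix("coco_train@ 5classes") == "coco_train"
assert strip_derived_suffix("coco_train") == "coco_train"  # no-op otherwise
```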
-
- 24 Jun, 2021 1 commit
-
-
Zhicheng Yan authored
Summary: Major changes - As described in detail in appendix A.4 of the deformable DETR paper (https://arxiv.org/abs/2010.04159), gradient back-propagation is blocked at inverse_sigmoid(bounding box x/y/w/h from the last decoder layer). This can be implemented by detaching the tensor from the compute graph in pytorch. However, we currently detach the wrong tensor, preventing updates to the layers that predict delta x/y/w/h. Fix this bug. - Add more comments annotating data types and tensor shapes in the code. This should NOT affect the actual implementation. Reviewed By: zhanghang1989 Differential Revision: D29048363 fbshipit-source-id: c5b5e89793c86d530b077a7b999769881f441b69
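The detach point matters because iterative box refinement operates in inverse-sigmoid (logit) space: the refined box is sigmoid(inverse_sigmoid(reference) + delta). A math-only sketch of that transform (the gradient-blocking `detach` itself requires a real autograd tensor and is omitted; `delta` here is a made-up value):

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def inverse_sigmoid(p, eps=1e-5):
    """Map a probability in (0, 1) back to logit space; clamp for stability."""
    p = min(max(p, eps), 1.0 - eps)
    return math.log(p / (1.0 - p))

# Refinement adds a predicted delta in logit space, then maps back to (0, 1):
ref = 0.3        # normalized box coordinate from the previous decoder layer
delta = 0.4      # hypothetical predicted offset
refined = sigmoid(inverse_sigmoid(ref) + delta)
assert 0.0 < refined < 1.0
assert abs(sigmoid(inverse_sigmoid(0.3)) - 0.3) < 1e-9  # round-trip holds
```

Detaching at the inverse_sigmoid output stops gradients from flowing through the reference box while still letting the delta-prediction layers learn; detaching the wrong tensor cuts off exactly those layers, which is the bug the diff fixes.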
-
- 23 Jun, 2021 1 commit
-
-
Yanghan Wang authored
Summary: Pull Request resolved: https://github.com/facebookresearch/d2go/pull/90 Reviewed By: zhanghang1989 Differential Revision: D29279123 fbshipit-source-id: d94cea65bd439d54fd14afded0dba066799cedca
-
- 21 Jun, 2021 1 commit
-
-
Yuxin Wu authored
Summary: 1. save 3 versions of the flop count, using both mobile_cv's flop counter and fvcore's flop counter 2. print only a simple short table in the terminal, but save the others to files The `print_flops` function seems unused anywhere, so this diff just replaces it. TODO: enable this feature automatically for train/eval workflows in the next diff Reviewed By: zhanghang1989 Differential Revision: D29182412 fbshipit-source-id: bfa1dfad41b99fcda06b96c4732237b5e753f1bb
-
- 20 Jun, 2021 1 commit
-
-
Albert Pumarola authored
Summary: Add create and train unit tests to OSS runner Reviewed By: zhanghang1989 Differential Revision: D29254417 fbshipit-source-id: f7c52b90b2bc7afa83a204895be149664c675e52
-
- 19 Jun, 2021 2 commits
-
-
Yanghan Wang authored
Reviewed By: leitian Differential Revision: D28363172 fbshipit-source-id: e69a71e6525dc9b76171b0cdc5f55ee8d188d6cc
-
Fu-Chen Chen authored
Summary: The dict `record` might not have the keys `"width"` or `"height"`. This diff checks whether `"width"` and `"height"` are in `record` before getting the values. Reviewed By: sstsai-adl Differential Revision: D29243341 fbshipit-source-id: a1e0e343dd1afcced834c3732e64bb6f372fbd1a
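The defensive pattern from the commit, sketched generically (the helper name is hypothetical; the key names come from the commit message):

```python
def get_image_size(record):
    """Return (width, height) only when both keys are present, else None."""
    if "width" in record and "height" in record:
        return record["width"], record["height"]
    return None

assert get_image_size({"width": 640, "height": 480}) == (640, 480)
assert get_image_size({"file_name": "img.jpg"}) is None  # keys may be absent
```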
-
- 16 Jun, 2021 4 commits
-
-
Luis Perez authored
Synchronize PyTorchLightning/pytorch-lightning (revision f7459f53@master) to github/third-party/PyTorchLightning/pytorch-lightning Summary: ## OSS Note these issues are being solved in OSS here: https://github.com/PyTorchLightning/pytorch-lightning/pull/7994/files# ## Manual - `speed_monitor.py` - `Result.unpack_batch_size` has been removed, moved to new implementation. - `fully_sharded.py` - There was a refactor for plugins, so updated corresponding function to keep reduced memory usage. - `hive_writing_classy.py`, `hive_writing_faim.py`, `hive_writing_xrayvideo.py` - Same as `speed_monitor.py`. - [Temporary] Uncommented misconfiguration exception. See https://github.com/PyTorchLightning/pytorch-lightning/pull/7882#pullrequestreview-683282719. - Update `TestModel` to detach appropriately. - Manually `detach` metrics stored in ResultStore. ## Automatic ### New commit log messages f7459f53 DeepSpeed Infinity Update (#7234) 03e7bdf8 Improve `LightningModule` hook tests (#7944) 3a0ed02b Properly handle parent modules w/ parameters in `BaseFinetuning` callback (#7931) ce93d8bc Handle errors due to uninitailized parameters (#7642) cca0e753 remove parsing comments (#7958) 898fb56b added on_test_start() documentation (#7962) 22d82661 Seed all workers when using DDP (#7942) 436fc53c Improve `LightningDataModule` hook test and fix `dataloader_idx` argument (#7941) 6b7b4047 deprecate hpc_load() and integrate it with restore() (#7955) 20a5e09e fix myst-parser warning blocking docs ci (#7967) f15ea601 update chlog + legacy chpt (#7954) 59d0c656 Add dataclass support to `apply_to_collection` (#7935) cdd01f32 LightningCLI support for argument links applied on instantiation (#7895) 6856cced Remove rank_zero_only on DataModule prepare_data (#7945) 96433d03 IPU Integration 5/5 (#7867) 42c7f272 refactor checkpoint loading for training type plugins (#7928) ac4eb0a0 `is_overridden` improvements (#7918) 9e932f4d Delete `on_after_backward` unused argument (#7925) 8b738693 Deprecate the 
default `EarlyStopping` callback monitor value (#7907) c1eac483 split `restore_training_state` into logical parts [2 / 2] (#7900) d209b689 split `restore_training_state` into logical parts [1 / 2] (#7901) 111287b4 add pre-commit hooks (#7906) 839019a3 Remove legacy teardown check in train loop (#7917) b45a89a2 Clean-up after logger connector redesign 2/2 (#7631) 07b69231 Remove fn check for ipu output (#7915) 580a3b5e Remove dead code (#7910) df812398 Clean-up after logger connector redesign 1/2 (#7909) ec4f8856 Enable logger connector re-design (#7891) 15be9865 add logger to __all__ (#6854) 6fee9262 Deprecate `LightningDataModule` lifecycle properties (#7657) 764d2c77 refactor CheckpointConnector.restore_weights (#7862) 7f4ef6d1 Fix logs overwriting issue for remote fs (#7889) c310ce66 Logger connector re-design `_Metadata.reduce_fx` fixes. (#7890) b214442e New logger connector code (#7882) Reviewed By: yifuwang Differential Revision: D29105294 fbshipit-source-id: 990b2a4a7333908d676de193f5ec930cb50b8a19
-
Kai Zhang authored
Summary: This diff logs D2Go model instantiation events to the table scuba_caffe2_pytorch_usage_stats, so that we can track model usage in fblearner, bento, local scripts, etc. Reviewed By: zhanghang1989 Differential Revision: D28986723 fbshipit-source-id: 3e865354e5884c9e82bd1b08819cc10d349f93bd
-
Sam Tsai authored
Summary: 1. Circular pattern segmentation points 2. Use circular pattern for kp patterns Reviewed By: wat3rBro Differential Revision: D29069224 fbshipit-source-id: c4c01d6d93de5abbdfceae07f1cd48fb56e05f57
-
Sam Tsai authored
Summary: Checks for invalid bounding boxes and excludes them from being included. Reviewed By: wat3rBro Differential Revision: D28902711 fbshipit-source-id: 1f017d6ccf5c959059bcb94a09ddd81de868feed
-
- 15 Jun, 2021 1 commit
-
-
Kai Zhang authored
Summary: As titled. Reviewed By: zhanghang1989 Differential Revision: D29075952 fbshipit-source-id: 6ef3dc35cd436c1fffb031ea59f20ca23afc5368
-
- 14 Jun, 2021 1 commit
-
-
Yanghan Wang authored
Summary: Pull Request resolved: https://github.com/facebookresearch/d2go/pull/83 - Implement `prepare_for_export` for `SemanticSegmentor` - Add a unit test verifying numerical matching Reviewed By: zhanghang1989 Differential Revision: D29088421 fbshipit-source-id: ccb86ac4b4b90a63eeebdbf76b2bf31c1da65a8b
-
- 12 Jun, 2021 1 commit
-
-
Zhicheng Yan authored
Summary: Major changes - Add a new runner `EgoDETRRunner` which inherits from the existing `DETRRunner` in the D2Go repo. - Add a new data mapper `EgoDETRDatasetMapper` which has a custom crop transform generator and supports generic data augmentation. Reviewed By: zhanghang1989 Differential Revision: D28895225 fbshipit-source-id: 4181ff8fce81df22a01d355fdff7e81e83d69e64
-
- 09 Jun, 2021 2 commits
-
-
Yanghan Wang authored
Summary: EZ Reviewed By: zhanghang1989 Differential Revision: D29000628 fbshipit-source-id: f954214dfe3a989fc145663f8bb1870812e78ce7
-
Sam Tsai authored
Summary: Use all training datasets for export instead of just the first. This supports use cases where there are only a few images per JSON but many JSON files. Since calibration used only the first dataset, it was limited by the number of images in a single dataset. Reviewed By: ppwwyyxx Differential Revision: D28902673 fbshipit-source-id: f80146b02d2d1bc04703fbb21ef410f5e26ba64c
-
- 07 Jun, 2021 1 commit
-
-
Kai Zhang authored
Summary: Detectron2 and D2Go use custom samplers, so we don't need Lightning to add a distributed sampler. Reviewed By: ananthsub Differential Revision: D28921092 fbshipit-source-id: ec8f310d0590ed92227935b979d59a06d7fb7a69
-
- 01 Jun, 2021 2 commits
-
-
Yanghan Wang authored
Summary: Pull Request resolved: https://github.com/facebookresearch/d2go/pull/77 - Reimplement `get_cfg_diff_table` by reusing other utils - Adding `reorder` option for `flatten_config_dict` - Remove the legacy BC support for `ARCH_DEF`, including `str_wrap_fbnet_arch_def` and customized `merge_from_other_cfg`. - Move `temp_defrost` from `utils.py` to `config.py`, this way there's no more namespace forwarding for `utils.py` - Merge `test_config_utils.py` and `test_configs.py` Reviewed By: zhanghang1989 Differential Revision: D28734493 fbshipit-source-id: 925f5944cf0e9019e4c54462e851ea16a5c94b8c
-
Yanghan Wang authored
Reviewed By: sanjeevk42 Differential Revision: D28346869 fbshipit-source-id: b226acf5ee5d90be4ea183dc7de92133db4d5717
-
- 27 May, 2021 1 commit
-
-
Tao Xu authored
Summary: Add an option to set the number of test images. Thus, during finetuning, we can set a small number of test images (for visualization purposes only) to save evaluation time. Reviewed By: leehomyc Differential Revision: D28720086 fbshipit-source-id: 8085be6a0f4f8742784e3dafe255716f3ae02acb
-