- 08 Nov, 2021 1 commit
-
-
Yanghan Wang authored
Reviewed By: sstsai-adl Differential Revision: D32216605 fbshipit-source-id: bebee1edae85e940c7dcc6a64dbe341a2fde36a2
-
- 28 Oct, 2021 1 commit
-
-
Kai Zhang authored
Summary: In the quantization callback, we prepare the model with the FX quantization API and only use the prepared model in training. However, when training with DDP, the parameters in the original model still require grad, causing an unused-parameters RuntimeError. Previously, the Lightning trainer trained the model with the find_unused_param flag, but if users manually disable it, they get the runtime error. In this diff, the parameters in the original model are frozen. We could consider deleting the original model after preparation to save memory, but we might have to make some assumptions about the Lightning module structure, for example that `.model` is the original model, so that we could `delattr(pl_module, "model")`. Reviewed By: wat3rBro Differential Revision: D31902368 fbshipit-source-id: 56eabb6b2296278529dd2b94d6aa4c9ec9e9ca6b
-
- 26 Oct, 2021 3 commits
-
-
Yanghan Wang authored
Summary: as title Reviewed By: Cysu Differential Revision: D31901433 fbshipit-source-id: 1749527c04c392c830e1a49bca8313ddf903d7b1
-
Yanghan Wang authored
Summary: FCOS is registered only because of an import made by `get_default_cfg`; if users don't call it (e.g. when using their own runner), they may find that the meta-arch is not registered. Reviewed By: ppwwyyxx Differential Revision: D31920026 fbshipit-source-id: 59eeeb3d1bf30d6b08463c2814930b1cadd7d549
-
Yanghan Wang authored
Summary: Pull Request resolved: https://github.com/facebookresearch/d2go/pull/130 We want to make sure that after importing `d2go.modeling` all the meta-archs are registered. Reviewed By: Maninae Differential Revision: D31904303 fbshipit-source-id: 3f32b65b764b2458e2fb9c4e0bbd99824b37ecfc
-
- 22 Oct, 2021 1 commit
-
-
Yuxin Wu authored
Summary: this utility function was added in D30272112 (https://github.com/facebookresearch/d2go/commit/737d099b0a8b0fb1f548435e73f95e1252442827) and is useful to all D2 users as well Differential Revision: D31833523 fbshipit-source-id: 0adfc612adb8b448fa7f3dbec1b1278c309554c5
-
- 20 Oct, 2021 2 commits
-
-
Peizhao Zhang authored
Summary: Supported learnable QAT. * Added a config key `QUANTIZATION.QAT.FAKE_QUANT_METHOD` to specify the QAT method (`default` or `learnable`). * Added a config key `QUANTIZATION.QAT.ENABLE_LEARNABLE_OBSERVER_ITER` to specify the start iteration for learnable observers (before that, static observers are used). * Custom quantization code needs to call `d2go.utils.qat_utils.get_qat_qconfig()` to get the proper qconfig for learnable QAT. An exception is raised if the QAT method is learnable but no learnable observers are used in the model. * Set the weight decay for scale/zero_point to 0 in the optimizer automatically. * The way to use learnable QAT: enable static observers -> enable fake quant -> enable learnable observers -> freeze bn. Differential Revision: D31370822 fbshipit-source-id: a5a5044a539d0d7fe1cc6b36e6821fc411ce752a
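A hypothetical config fragment combining the two new keys described above (the values are illustrative, not defaults from the source):

```yaml
QUANTIZATION:
  QAT:
    FAKE_QUANT_METHOD: learnable          # or "default" for static fake quant
    ENABLE_LEARNABLE_OBSERVER_ITER: 1000  # static observers before this iteration
```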
-
Peizhao Zhang authored
Summary: Refactored qat related code. * Moved `_prepare_model_for_qat` related code to a function. * Moved `_setup_non_qat_to_qat_state_dict_map` related code to a function. * Moved QATHook related code to the quantization file and implemented as a class. Differential Revision: D31370819 fbshipit-source-id: 836550b2c8d68cd93a84d5877ad9cef6f0f0eb39
-
- 15 Oct, 2021 2 commits
-
-
Peizhao Zhang authored
Summary: Supported specifying customized parameter groups from model. * Allow model to specify customized parameter groups by implementing a function `model.get_optimizer_param_groups(cfg)` * Supported model with ddp. Reviewed By: zhanghang1989 Differential Revision: D31289315 fbshipit-source-id: c91ba8014508e9fd5f172601b9c1c83c188338fd
-
Peizhao Zhang authored
Summary: Refactor for get_optimizer_param_groups. * Split `get_default_optimizer_params()` into multiple functions: * `get_optimizer_param_groups_default()` * `get_optimizer_param_groups_lr()` * `get_optimizer_param_groups_weight_decay()` * Regroup the parameters to create the minimal number of groups. * Print all parameter groups when the optimizer is created. Param group 0: {amsgrad: False, betas: (0.9, 0.999), eps: 1e-08, lr: 10.0, params: 1, weight_decay: 1.0} Param group 1: {amsgrad: False, betas: (0.9, 0.999), eps: 1e-08, lr: 1.0, params: 1, weight_decay: 1.0} Param group 2: {amsgrad: False, betas: (0.9, 0.999), eps: 1e-08, lr: 1.0, params: 2, weight_decay: 0.0} * Add some unit tests. Reviewed By: zhanghang1989 Differential Revision: D31287783 fbshipit-source-id: e87df0ae0e67343bb2130db945d8faced44d7411
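The regrouping step can be sketched as follows — a pure-Python illustration (the function name and input shape are assumptions, not the actual d2go API) that buckets parameters by their (lr, weight_decay) settings so the optimizer gets the minimal number of groups:

```python
from collections import defaultdict


def regroup_param_groups(per_param_settings):
    """Merge per-parameter settings into the minimal number of optimizer
    param groups.

    `per_param_settings` maps parameter name -> {"lr": ..., "weight_decay": ...}.
    Parameters sharing identical (lr, weight_decay) land in one group.
    """
    buckets = defaultdict(list)
    for name, opts in per_param_settings.items():
        buckets[(opts["lr"], opts["weight_decay"])].append(name)
    return [
        {"params": sorted(names), "lr": lr, "weight_decay": wd}
        for (lr, wd), names in sorted(buckets.items())
    ]
```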
-
- 06 Oct, 2021 1 commit
-
-
Supriya Rao authored
Summary: Pull Request resolved: https://github.com/facebookresearch/d2go/pull/124 Update callsites from torch.quantization to torch.ao.quantization Reviewed By: z-a-f, jerryzh168 Differential Revision: D31286125 fbshipit-source-id: ef24ca87d8db398c65bb5b89f035afe0423a5685
-
- 24 Sep, 2021 2 commits
-
-
Hang Zhang authored
Summary: Pull Request resolved: https://github.com/facebookresearch/d2go/pull/117 Fix GitHub CI failure due to the missing COCO dataset. It was caused by D31134064 (https://github.com/facebookresearch/d2go/commit/f018d4a7ceef437d8fc3ca8b2bba4b7321917e06) Reviewed By: mattcyu1, wat3rBro Differential Revision: D31179666 fbshipit-source-id: fe25129d167afcdcb577e5c8d82f3432ba939ca9
-
Yanghan Wang authored
Reviewed By: zhanghang1989 Differential Revision: D31134064 fbshipit-source-id: 825ca14477243a53f84b8521f4430a2b080324bd
-
- 15 Sep, 2021 1 commit
-
-
Valentin Andrei authored
Reviewed By: stephenyan1231, zhanghang1989 Differential Revision: D30903817 fbshipit-source-id: 578e6b02a1bd59b1bd841399fc60111d320ae9aa
-
- 09 Sep, 2021 1 commit
-
-
Yanghan Wang authored
Summary: https://fb.workplace.com/groups/pythonfoundation/posts/2990917737888352 Remove `mobile-vision` from opt-out list; leaving `mobile-vision/SNPE` opted out because of 3rd-party code. arc lint --take BLACK --apply-patches --paths-cmd 'hg files mobile-vision' allow-large-files Reviewed By: sstsai-adl Differential Revision: D30721093 fbshipit-source-id: 9e5c16d988b315b93a28038443ecfb92efd18ef8
-
- 31 Aug, 2021 1 commit
-
-
Yanghan Wang authored
Summary: Enable the inference for boltnn (via running torchscript). - merge rcnn's boltnn test with other export types. - misc fixes. Differential Revision: D30610386 fbshipit-source-id: 7b78136f8ca640b5fc179cb47e3218e709418d71
-
- 18 Aug, 2021 2 commits
-
-
Siddharth Shah authored
Summary: A batched torch version allows us to avoid a CPU <--> GPU copy, which saves us ~200 ms per iteration. This new version of generating the boundary weight mask produces identical masks. Reviewed By: wat3rBro Differential Revision: D30176412 fbshipit-source-id: 877f4c6337e7870d3bafd8eb9157ac166ddd588a
-
Valentin Andrei authored
Summary: Added multi-tensor optimizer implementation for SGD, from `torch.optim._multi_tensor`. It can potentially provide ~5% QPS improvement by using `foreach` API to speed up the optimizer step. Using it is optional, from the configuration file, by specifying `SGD_MT` in the `SOLVER.OPTIMIZER` setting. Reviewed By: zhanghang1989 Differential Revision: D30377761 fbshipit-source-id: 06107f1b91e9807c1db5d1b0ca6be09fcbb13e67
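Opting into the multi-tensor optimizer described above would look like the following hypothetical config fragment:

```yaml
SOLVER:
  OPTIMIZER: SGD_MT   # multi-tensor SGD via the foreach API; "SGD" keeps the default
```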
-
- 17 Aug, 2021 1 commit
-
-
Siddharth Shah authored
Summary: The uint8 cast means that the floating point non_bd_weight is never assigned Reviewed By: wat3rBro Differential Revision: D30176377 fbshipit-source-id: 013602bb4313393f220ee0f1510bf1ff83bd56fc
-
- 16 Aug, 2021 1 commit
-
-
Hang Zhang authored
Summary: Add FBNAS toolkit for HPO in D2Go Reviewed By: newstzpz Differential Revision: D28672821 fbshipit-source-id: 6a378af2bb43ef6cb556d4158fd1b0d3e363e956
-
- 27 Jun, 2021 1 commit
-
-
Yuxin Wu authored
Reviewed By: zhanghang1989 Differential Revision: D29379832 fbshipit-source-id: 9283a8796a1dbee81b51611407c22f7d5a2069dc
-
- 25 Jun, 2021 1 commit
-
-
Sam Tsai authored
Summary: "@ [0-9]classes" is appended to datasets to mark whether it is a derived class of the original one and saved as a config. When reloading the config, the derived class name will be used as the source instead of the original source. Adding a check to remove the derived suffix. Reviewed By: wat3rBro Differential Revision: D29315132 fbshipit-source-id: 0cc204d305d2da6c9f1817aaf631270bd874f90d
-
- 21 Jun, 2021 1 commit
-
-
Yuxin Wu authored
Summary: 1. Save three versions of the flop count, using both mobile_cv's and fvcore's flop counters 2. Print only a simple short table in the terminal, and save the others to files. The `print_flops` function seems unused anywhere, so this diff simply replaces it. TODO: enable this feature automatically for train/eval workflows in the next diff Reviewed By: zhanghang1989 Differential Revision: D29182412 fbshipit-source-id: bfa1dfad41b99fcda06b96c4732237b5e753f1bb
-
- 16 Jun, 2021 1 commit
-
-
Sam Tsai authored
Summary: Checks for invalid bounding boxes and excludes them from being included. Reviewed By: wat3rBro Differential Revision: D28902711 fbshipit-source-id: 1f017d6ccf5c959059bcb94a09ddd81de868feed
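The check can be sketched as a simple filter over XYXY boxes; this is an illustration of the idea, not the actual d2go implementation (which may also validate against image bounds):

```python
def filter_valid_boxes(boxes):
    """Keep only XYXY boxes with strictly positive width and height.

    `boxes` is a list of [x1, y1, x2, y2]; degenerate or inverted
    boxes are dropped.
    """
    return [b for b in boxes if b[0] < b[2] and b[1] < b[3]]
```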
-
- 14 Jun, 2021 1 commit
-
-
Yanghan Wang authored
Summary: Pull Request resolved: https://github.com/facebookresearch/d2go/pull/83 - Implement `prepare_for_export` for `SemanticSegmentor` - Add unit test comparing numerical matching Reviewed By: zhanghang1989 Differential Revision: D29088421 fbshipit-source-id: ccb86ac4b4b90a63eeebdbf76b2bf31c1da65a8b
-
- 01 Jun, 2021 1 commit
-
-
Yanghan Wang authored
Summary: Pull Request resolved: https://github.com/facebookresearch/d2go/pull/77 - Reimplement `get_cfg_diff_table` by reusing other utils - Adding `reorder` option for `flatten_config_dict` - Remove the legacy BC support for `ARCH_DEF`, including `str_wrap_fbnet_arch_def` and customized `merge_from_other_cfg`. - Move `temp_defrost` from `utils.py` to `config.py`, this way there's no more namespace forwarding for `utils.py` - Merge `test_config_utils.py` and `test_configs.py` Reviewed By: zhanghang1989 Differential Revision: D28734493 fbshipit-source-id: 925f5944cf0e9019e4c54462e851ea16a5c94b8c
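A minimal sketch of what `flatten_config_dict` with a `reorder` option might look like (the real d2go utility may differ in signature and edge-case handling):

```python
def flatten_config_dict(cfg, reorder=True):
    """Flatten a nested config dict into {"A.B.C": value} form.

    With reorder=True the flattened keys are sorted alphabetically,
    which makes diff tables stable across runs.
    """
    out = {}

    def _flatten(prefix, d):
        for k, v in d.items():
            key = f"{prefix}.{k}" if prefix else str(k)
            if isinstance(v, dict):
                _flatten(key, v)   # recurse into nested sections
            else:
                out[key] = v

    _flatten("", cfg)
    return dict(sorted(out.items())) if reorder else out
```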
-
- 25 May, 2021 2 commits
-
-
Yanghan Wang authored
Summary: Pull Request resolved: https://github.com/facebookresearch/d2go/pull/75 Refactor the base test case - make test_dir valid throughout the test (rather than under local context), so individual test can load back the export model - refactor the `custom_setup_test` for easier override. - move parameterized into base class to avoid copying naming function Reviewed By: zhanghang1989 Differential Revision: D28651067 fbshipit-source-id: c59a311564f6114039e20ed3a23e5dd9c84f4ae4
-
Kai Zhang authored
Summary: Currently when launching a training flow, we read the number of processes from resources.num_gpus. To be backward compatible with the existing D2Go training config, this diff reads dist_config.num_processes_per_machine instead. Reviewed By: wat3rBro Differential Revision: D28630334 fbshipit-source-id: 3c684cd56e5d2e247c7b82e1d1eeff0f39e59ee4
-
- 22 May, 2021 1 commit
-
-
Yanghan Wang authored
Differential Revision: D27881742 (https://github.com/facebookresearch/d2go/commit/90aff5daf608473dd312b300db8615326fa40a37) Original commit changeset: 34a3ab7a88f4 fbshipit-source-id: 42c03b4f2b69c656b26774a4665b84b832262650
-
- 21 May, 2021 2 commits
-
-
Sanjeev Kumar authored
Summary: - Enable SDK inference config specification in the export step. This enables adding the SDK configuration as part of the model file in the export step. The SDK config can be specified as inference_config.yaml and is zipped together with the torchscript model. The main goal of the SDK configuration is to control the model's inference behavior alongside the model. - SDK inference config design doc: https://docs.google.com/document/d/1j5qx8IrnFg1DJFzTnu4W8WmXFYJ-AgCDfSQHb2ACJsk/edit - The one-click fblearner pipeline is in the next diff on the stack Differential Revision: D27881742 fbshipit-source-id: 34a3ab7a88f456b74841cf671ea1b3f678cdb733
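The bundling step can be sketched with the standard library; the archive member names (`model.pt`, `inference_config.yaml`) are assumptions about the layout, not taken from the source:

```python
import io
import zipfile


def bundle_model_with_config(model_bytes, config_yaml):
    """Zip a serialized torchscript model together with its
    inference_config.yaml so the SDK config ships with the model."""
    buf = io.BytesIO()
    with zipfile.ZipFile(buf, "w") as zf:
        zf.writestr("model.pt", model_bytes)
        zf.writestr("inference_config.yaml", config_yaml)
    return buf.getvalue()
```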
-
Sam Tsai authored
Summary: Add an option to change only the bounding boxes, leaving everything else unchanged. Differential Revision: D28339388 fbshipit-source-id: 7a6d4c5153cf10c473992119f4c684e0b9159b44
-
- 17 May, 2021 1 commit
-
-
Kai Zhang authored
Summary: Add dataset visualization so that we can visualize test results in Tensorboard. Reviewed By: zhanghang1989 Differential Revision: D28457363 fbshipit-source-id: 4c2fd9dce349c6fb9e1cec51c9138cf0abb45d7b
-
- 12 May, 2021 1 commit
-
-
Luis Perez authored
Synchronize PyTorchLightning/pytorch-lightning (revision 7b283e3c@master) to github/third-party/PyTorchLightning/pytorch-lightning Summary: # Manual - remove fixme's in `model_checkpoint.py`, `parameter_monitor.py`, `test_quantization.py`, and `speed_monitor.py` now that `Trainer` is properly annotated. - update `test_quantization.py` to `trainer.train_loop.global_step` instead of `trainer.global_step` which is a read-only. - update `loop_callback.py` to read from `train_loop` for `batch_idx` (which is no longer available). # Automatic ### New commit log messages 7b283e3c Bugfix/Multiple dataloaders (#7433) d7c44cc6 Docs: sync chlog 1.3.1 (#7478) fdf50a5e Mark certain Trainer APIs as protected (#7420) ad9118f0 remove trainer hidden state | sanity refactor [1 / n] (#7437) 4a1134db Log epoch metrics before firing the `on_evaluation_end` hook (#7272) b65ae794 Automatically check `DataModule.has_{setup,teardown,prepare_data}` [2/2] (#7238) 8660d8cf [pre-commit.ci] pre-commit autoupdate (#7475) f6fe715e Fix Sphinx argument deprecation (#7464) Reviewed By: shuyingsunshine21 Differential Revision: D28353491 fbshipit-source-id: 98b87d99e2f09b47b07270858fcbdb5d5299730b
-
- 07 May, 2021 1 commit
-
-
Hang Zhang authored
Summary: Pull Request resolved: https://github.com/facebookresearch/d2go/pull/59 * We have an internal dependency: ``` d2go/export/logfiledb.py", line 8, in <module> from mobile_cv.torch.utils_caffe2.ws_utils import ScopedWS ModuleNotFoundError: No module named 'mobile_cv.torch' ``` This causes the failure of the unittest on GitHub https://github.com/facebookresearch/d2go/pull/58/checks?check_run_id=2471727763 * use Python 3.8 because of another unittest failure on GitHub CI ``` from typing import final ImportError: cannot import name 'final' from 'typing' (/usr/share/miniconda/lib/python3.7/typing.py) ``` Reviewed By: wat3rBro Differential Revision: D28109444 fbshipit-source-id: 95e9774bdaa94f622267aeaac06d7448f37a103f
-
- 05 May, 2021 1 commit
-
-
Sam Tsai authored
Summary: Add a bounding-box manipulation tool to pad bounding box data. Reviewed By: newstzpz Differential Revision: D28082071 fbshipit-source-id: f168cae48672c4fa5c4ec98697c57ed7833787ab
-
- 04 May, 2021 1 commit
-
-
Yanghan Wang authored
Reviewed By: newstzpz Differential Revision: D27747996 fbshipit-source-id: 6ae3b89c3944098828e246e5a4a89209b8e171a1
-
- 30 Apr, 2021 1 commit
-
-
Sam Tsai authored
Summary: 1. Add a keypoint metadata registry for registering different keypoint metadata 2. Add option to inject_coco_dataset for adding keypoint metadata Reviewed By: newstzpz Differential Revision: D27730541 fbshipit-source-id: c6ba97f60664fce4dcbb0de80222df7490bc6d5d
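The registry pattern described above might look like this minimal sketch; the registry name, decorator, and metadata fields are illustrative, not the actual d2go API:

```python
# Maps a metadata name to a factory returning the keypoint metadata dict.
KEYPOINT_METADATA_REGISTRY = {}


def register_keypoint_metadata(name):
    """Decorator registering a keypoint-metadata factory under `name`."""
    def deco(fn):
        KEYPOINT_METADATA_REGISTRY[name] = fn
        return fn
    return deco


@register_keypoint_metadata("person_3kp")
def person_3kp_metadata():
    # Hypothetical 3-keypoint layout used only for illustration.
    return {
        "keypoint_names": ["nose", "left_eye", "right_eye"],
        "keypoint_flip_map": [("left_eye", "right_eye")],
    }


def get_keypoint_metadata(name):
    """Look up and build the registered metadata for a dataset."""
    return KEYPOINT_METADATA_REGISTRY[name]()
```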
-
- 28 Apr, 2021 1 commit
-
-
Ananth Subramaniam authored
Synchronize PyTorchLightning/pytorch-lightning (revision 7fe8d184@master) to github/third-party/PyTorchLightning/pytorch-lightning Summary: ### New commit log messages 7fe8d184 Do not `shuffle` in `LightningDataModule.from_datasets` for `IterableDataset` (#7053) bab72255 [fix] Add barriers before and after setup hook is run (#7202) f920ba29 [bugfix] Metric not logged properly in manual optimization (#7228) e147127c [feat] Add better support for predict + ddp 2/3 (#7215) ca6c87ff Add back `clip_gradients(model)` (#7231) 3b36d81c Fixed `num_sanity_val_steps` affecting reproducibility of training data shuffling (#7014) 5cf9afa1 Add fairscale install msg for Sharded Plugins (#7213) 52a5cee0 Set smarter default for DDP sharded for performance optimization (#6937) dd5ec75e Deprecate save_function from model checkpoint callback (#7201) ac7d6a35 Fix `NeptuneLogger.log_text(step=None)` (#7194) 6be0a859 Update teardown for TPU acc (#7211) bc3f08b0 [fix] Add barrier to accelerator's teardown (#6814) 68eac4d9 Enforce Lightning module as source of truth for automatic optimization (#7130) 44d775fc Update Error message for ProfileConnector (#7204) 31fcd7d0 Deprecate write_predictions on the LightningModule (#7066) 591b9cee make bug_report_model minimal (#7191) b3fe8366 Move metrics_to_scalars to a dedicated utilities file (#7180) f58865aa Properly set `LightningModule.device` after model replacement (#7188) 8439aead Update FairScale on CI (#7017) 92af3632 Fix `lr_finder` suggesting too high learning rates (#7076) d534e53e add missing predict docs (#7150) Reviewed By: kazhang Differential Revision: D28032962 fbshipit-source-id: 18cd01e8ecc13fe25f0890ac0f4b20c3c3e1fed3
-
- 21 Apr, 2021 1 commit
-
-
Kai Zhang authored
Summary: Pull Request resolved: https://github.com/facebookresearch/d2go/pull/46 As titled. The test is flaky because the tensorboard logger might still be writing to temporary folder when we tear down the folder. Reviewed By: ananthsub Differential Revision: D27844504 fbshipit-source-id: 3987f9ec3cd05b2f193e75cd4d85109a46f4ee71
-
- 20 Apr, 2021 1 commit
-
-
Kai Zhang authored
Summary: Pull Request resolved: https://github.com/facebookresearch/d2go/pull/49 Reviewed By: wat3rBro Differential Revision: D27875007 fbshipit-source-id: 2f61a4a3de29f3583a54adc914ee5a7eb605a823
-