- 05 Apr, 2022 1 commit
-
-
Yanghan Wang authored
Summary: Pull Request resolved: https://github.com/facebookresearch/d2go/pull/199 - `create_fake_detection_data_loader` currently doesn't take `cfg` as input, but sometimes we need to test augmentations that require a more complicated cfg. - The name is also misleading, so rename it to `create_detection_data_loader_on_toy_dataset`. - width/height previously meant the resized size; change them to mean the size of the data source (image files) and use `cfg` to control the resized size. Update V3: V2 had some test failures because it built the data loader (via the GeneralizedRCNN runner) using the actual test config instead of the default config used before this diff, plus the dataset name change. In V3 we use the test's runner instead of the default runner for consistency. This reveals some real bugs that weren't tested before. Reviewed By: omkar-fb Differential Revision: D35238890 fbshipit-source-id: 28a6037374e74f452f91b494bd455b38d3a48433
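A hedged usage sketch of the renamed helper; the import path, argument order, and `is_train` parameter are assumptions based on the description above, not the verified API:

```python
# Hypothetical import path; the helper is introduced by this diff.
from d2go.utils.testing.data_loader_helper import (
    create_detection_data_loader_on_toy_dataset,
)
from d2go.runner import GeneralizedRCNNRunner

runner = GeneralizedRCNNRunner()
cfg = runner.get_default_cfg()
# width/height now describe the toy image files on disk; cfg controls the
# resized size (e.g. via cfg.INPUT.MIN_SIZE_TEST).
data_loader = create_detection_data_loader_on_toy_dataset(
    cfg, height=60, width=80, is_train=False
)
```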
-
- 24 Mar, 2022 2 commits
-
-
Yanghan Wang authored
Summary: Pull Request resolved: https://github.com/facebookresearch/d2go/pull/191 When exporting a model to torchscript (using `MODEL.DEVICE = "cpu"`), mean/std are constants instead of model parameters. Therefore, after casting the torchscript to CUDA, the mean/std remain on CPU, which causes problems when running inference on GPU. The fix is to export the model with `MODEL.DEVICE = "cuda"`. However, D2Go internally uses "cpu" during export by default (via cli: https://fburl.com/code/4mpk153i, via workflow: https://fburl.com/code/zcj5ud4u). For the CLI, users can manually set `--device`, but for the workflow it's hard to do so. Furthermore, it's hard to support a mixed model using a single `--device` option. So this diff adds special handling in the RCNN's `default_prepare_for_export` to bypass the `--device` option. Reviewed By: zhanghang1989 Differential Revision: D35097613 fbshipit-source-id: df9f44f49af1f0fd4baf3d7ccae6c31e341f3ef6
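A minimal sketch of the workaround, assuming a detectron2-style `CfgNode` and a simplified hook; the real `default_prepare_for_export` signature differs:

```python
import torch


def prepare_rcnn_for_export(cfg, model):
    # Pin the export device instead of trusting the CLI --device option, so
    # mean/std are not baked into the torchscript as CPU constants.
    export_cfg = cfg.clone()
    export_cfg.defrost()
    export_cfg.MODEL.DEVICE = "cuda" if torch.cuda.is_available() else "cpu"
    export_cfg.freeze()
    return export_cfg, model.to(export_cfg.MODEL.DEVICE)
```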
-
Yanghan Wang authored
Summary: Pull Request resolved: https://github.com/facebookresearch/d2go/pull/192 Nowadays Lightning initializes the process group when using the ddp strategy. Since `TestLightningTrainNet` trains with the ddp strategy (https://fburl.com/code/a9yp0kzy), the process group ends up initialized after running the test. However, other tests also set up ddp and thus expect a non-initialized process group. This is not a problem on sandcastle since the tests run separately, but in the OSS env the tests run together, so the error happens (eg. https://github.com/facebookresearch/d2go/runs/5668912203?check_suite_focus=true). This diff adds a cleanup step in `TestLightningTrainNet`. Reviewed By: tglik Differential Revision: D35099944 fbshipit-source-id: f5b42b2a87d4efd9aa0ed97e6bd2140d80ab9522
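The cleanup step amounts to something like this sketch (the exact test hook used is an assumption):

```python
import unittest

import torch.distributed as dist


class TestLightningTrainNet(unittest.TestCase):
    def tearDown(self):
        # Destroy the default process group that Lightning's ddp strategy
        # initialized, so subsequent tests see a non-initialized state.
        if dist.is_available() and dist.is_initialized():
            dist.destroy_process_group()
```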
-
- 16 Mar, 2022 2 commits
-
-
Yanghan Wang authored
Summary: D33301363 changed the signature of `update_cfg` from `update_cfg(cfg, *updaters)` to `update_cfg(cfg, updaters, new_allowed)`, but the call sites were not updated. Eg. https://www.internalfb.com/code/fbsource/[9e071979a62ba7fd3d7a71dee1f0809815cbaa43]/fbcode/fblearner/flow/projects/mobile_vision/detectron2go/core/workflow.py?lines=221-225, where the `merge_from_list_updater(e2e_train.overwrite_opts),` is then not used. For the fix: - Since there are a lot of call sites for `update_cfg`, it's better to keep the original signature. - ~~The `new_allowed` can actually be passed to each individual updater instead of `update_cfg`; this also gives finer control.~~ - Override `merge_from_list` to make it respect `new_allowed`. - Preserve `new_allowed` for all nodes (not only the root) in the FLOW Future calls. Reviewed By: zhanghang1989 Differential Revision: D34840001 fbshipit-source-id: 14aff6bec75a8b53d4109e6cd73d2494f68863b4
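An illustrative sketch of the retained signature, under the assumption that each updater is a callable returning the updated cfg; `new_allowed` is handled inside the individual updaters (via the overridden `merge_from_list`) rather than here:

```python
def update_cfg(cfg, *updaters):
    # Keep the original variadic signature so existing call sites keep working.
    for updater in updaters:
        cfg = updater(cfg)
    return cfg

# e.g. cfg = update_cfg(cfg, merge_from_list_updater(e2e_train.overwrite_opts))
```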
-
Ananth Subramaniam authored
Reviewed By: kazhang Differential Revision: D34669519 fbshipit-source-id: 8cfee968104c823a55960f2730d8e888ac1f298e
-
- 08 Mar, 2022 2 commits
-
-
Yanghan Wang authored
Summary: Pull Request resolved: https://github.com/facebookresearch/d2go/pull/187 Reviewed By: ananthsub, zhanghang1989 Differential Revision: D34650467 fbshipit-source-id: b9518e5dd673b709320b87e57a43d053eca3aabe
-
Ananth Subramaniam authored
Reviewed By: tangbinh Differential Revision: D34669294 fbshipit-source-id: c87bc1d4c589518f7c9fc21e6dfe27b03e700b6d
-
- 05 Mar, 2022 1 commit
-
-
Ananth Subramaniam authored
Summary: Pull Request resolved: https://github.com/facebookresearch/d2go/pull/188 Reviewed By: tangbinh, wat3rBro Differential Revision: D34658350 fbshipit-source-id: 36e8c1e8c5dab97990b1d9a5b1a58667e0e3c455
-
- 04 Mar, 2022 4 commits
-
-
Binh Tang authored
Summary: ### New commit log messages - [7e2f9fbad Refactor codebase to use `trainer.loggers` over `trainer.logger` when needed (#11920)](https://github.com/PyTorchLightning/pytorch-lightning/pull/11920) Reviewed By: edward-io Differential Revision: D34583686 fbshipit-source-id: 98e557b761555c24ff296fff3ec6881d141fa777
-
Yanghan Wang authored
Summary: Pull Request resolved: https://github.com/facebookresearch/d2go/pull/185 The `DiskCachedDatasetFromList` was originally in `d2go/data/utils.py`, so the class was declared by default, and the cleanup call (https://fburl.com/code/cu7hswhx) always ran even when the feature was not enabled. This diff moves it to a new place and delays the import, so the cleanup won't run when the feature is disabled. Reviewed By: tglik Differential Revision: D34601363 fbshipit-source-id: 734bb9b2c7957d7437ad40c4bfe60a441ec2f23a
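The delayed-import pattern looks roughly like this sketch; the module path `d2go.data.disk_cache` and the builder function are assumptions:

```python
def build_dataset_from_list(cfg, lst):
    if cfg.D2GO_DATA.DATASETS.DISK_CACHE.ENABLED:
        # Deferred import: the disk-cache module (and its cleanup hook) is
        # only pulled in when the feature is actually enabled.
        from d2go.data.disk_cache import DiskCachedDatasetFromList

        return DiskCachedDatasetFromList(lst)
    from detectron2.data.common import DatasetFromList

    return DatasetFromList(lst)
```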
-
Sam Tsai authored
Summary: Add option for controlling empty annotation filtering. Reviewed By: zhanghang1989 Differential Revision: D34365265 fbshipit-source-id: 261c6879636f19138de781098f47dee4909de9e7
-
Sam Tsai authored
Summary: Pull Request resolved: https://github.com/facebookresearch/d2go/pull/179 Refactored extended coco to fix lint errors and simplify error reporting. Differential Revision: D34365252 fbshipit-source-id: 8bf221eba5b8c5e63ddcf5ca19d7486726aff797
-
- 25 Feb, 2022 1 commit
-
-
Yanghan Wang authored
Summary: # TLDR: To use this feature, set `D2GO_DATA.DATASETS.DISK_CACHE.ENABLED` to `True`. To support larger datasets, one idea is to offload the DatasetFromList from RAM to disk to avoid OOM. `DiskCachedDatasetFromList` is a drop-in replacement for `DatasetFromList`: during `__init__` it puts the serialized list onto disk and only stores the mapping in RAM (the mapping could be represented by a list of addresses or even just a single number, eg. when every N items are grouped together and N is a fixed number), then `__getitem__` reads data from disk and deserializes the element. Some more details: - Originally the RAM cost is `O(s*G*N)`, where `s` is the average data size, `G` is #GPUs, and `N` is the dataset size. When diskcache is enabled, depending on the type of mapping, the final RAM cost is constant or O(N) with a very small coefficient; the final disk cost is `O(s*N)`. - RAM usage peaks at the preparing stage at `O(s*N)`; if this becomes a bottleneck, we probably need to think about modifying the data loading function (registered in DatasetCatalog). We also change the data loading function to only run on the local master process, otherwise RAM would peak at `O(s*G*N)` if all processes load data at the same time. - The time overhead of initialization is linear in the dataset size, capped by disk I/O speed and the performance of the diskcache library. Benchmarks show it can handle at least 1GB per minute if writing in chunks (much worse if not), which should be fine in most use cases. - There's also a bit of time overhead when reading the data, but this is usually negligible compared with reading files from external storage like manifold. It's not very easy to integrate this into D2/D2Go cleanly without patching the code; several approaches: - Integrate into D2 directly (modifying D2's `DatasetFromList` and `get_detection_dataset_dicts`): might be the cleanest way, but D2 doesn't depend on `diskcache` and this is a bit experimental right now. - D2Go uses its own version of [_train_loader_from_config](https://fburl.com/code/0gig5tj2) that wraps the returned `dataset`. It has two issues: 1) it's hard to make the underlying `get_detection_dataset_dicts` only run on the local master, partly because building the sampler uses `comm.shared_random_seed()`, so things can easily go out-of-sync; 2) it needs some duplicated code for the test loader. - Pass new arguments along the way; this requires touching D2's code as well, and we would need to carry the new arguments in a lot of places. Lots of TODOs: - Automatically enable this when the dataset is larger than a certain threshold (need to figure out how to do this on multiple GPUs; some communication is needed if only the local master is reading the dataset). - Better cleanups. - Figure out the best way of integrating this (patching is a bit hacky) into D2/D2Go. - Run more benchmarks. - Add unit tests (maybe also enable integration tests using 2 nodes 2 GPUs for distributed settings). Reviewed By: sstsai-adl Differential Revision: D27451187 fbshipit-source-id: 7d329e1a3c3f9ec1fb9ada0298a52a33f2730e15
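A minimal sketch of the idea (not d2go's actual implementation), using the `diskcache` library named above; the class name, serialization format, and cache layout here are assumptions:

```python
import pickle

import diskcache


class DiskCachedListSketch:
    """Drop-in-style replacement for DatasetFromList that keeps data on disk."""

    def __init__(self, lst, cache_dir="/tmp/disk_cache_sketch"):
        self._cache = diskcache.Cache(cache_dir)
        self._size = len(lst)
        for i, item in enumerate(lst):
            # Serialize each element to the disk-backed cache; only the
            # integer index mapping stays in RAM.
            self._cache[i] = pickle.dumps(item)

    def __len__(self):
        return self._size

    def __getitem__(self, idx):
        # Read back from disk and deserialize on access.
        return pickle.loads(self._cache[idx])
```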
-
- 23 Feb, 2022 2 commits
-
-
Binh Tang authored
Summary: We proactively remove references to the deprecated DDP accelerator to prepare for the breaking changes following the release of PyTorch Lightning 1.6 (see T112240890). Differential Revision: D34295318 fbshipit-source-id: 7b2245ca9c7c2900f510722b33af8d8eeda49919
-
Sam Tsai authored
Summary: Pull Request resolved: https://github.com/facebookresearch/mobile-vision/pull/61 Pull Request resolved: https://github.com/facebookresearch/d2go/pull/177 Adhoc datasets currently use the default register functions. Changed to check whether the dataset is registered in a lookup table for injected coco, and to use that instead. Differential Revision: D33489049 fbshipit-source-id: bcb12bba49749a875ea80ae61f4eecc4a5d1e31a
-
- 14 Jan, 2022 1 commit
-
-
Sam Tsai authored
Summary: Pull Request resolved: https://github.com/facebookresearch/d2go/pull/160 If the returned object of visualize_train_input is a dictionary, use the keys as tag suffixes and the values as separate output images. Reviewed By: zhanghang1989, wat3rBro Differential Revision: D33468573 fbshipit-source-id: b0a47ba312ff59700534e917c62af1dfa83dd5be
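A hedged sketch of the dispatch; the helper function itself is illustrative, with `writer` assumed to be a tensorboard-style `SummaryWriter`:

```python
def write_train_input_visualizations(writer, tag, result, step):
    if isinstance(result, dict):
        # Dict return value: each key becomes the tag suffix and each value
        # is written as its own output image.
        for suffix, image in result.items():
            writer.add_image(f"{tag}/{suffix}", image, step)
    else:
        writer.add_image(tag, result, step)
```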
-
- 13 Jan, 2022 1 commit
-
-
Tsahi Glik authored
Summary: Add support in the default lightning task to run a custom training step from the Meta Arch if one exists. The goal is to allow a custom training step without the need to inherit from the default lightning task class and override it. This allows us to use a single lightning task while still letting users customize the training step. In the long run this will be further encapsulated in a modeling hook, making it more modular and composable with other custom code. This change is a follow-up to the discussion in https://fburl.com/diff/yqlsypys Reviewed By: wat3rBro Differential Revision: D33534624 fbshipit-source-id: 560f06da03f218e77ad46832be9d741417882c56
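The delegation could look like this sketch; the hook name `custom_training_step` and the fallback helper are assumptions, not the verified API:

```python
def training_step(self, batch, batch_idx):
    model = self.model
    if hasattr(model, "custom_training_step"):
        # The meta-arch supplies its own training step; the default lightning
        # task delegates to it instead of requiring a subclass override.
        return model.custom_training_step(batch, batch_idx)
    return self._default_training_step(batch, batch_idx)
```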
-
- 12 Jan, 2022 1 commit
-
-
Yanghan Wang authored
Summary: Pull Request resolved: https://github.com/facebookresearch/d2go/pull/163 Make quantizing FPN work. Note that this is not a proper fix; the proper fix might be making pytorch pick up D2's Conv2d, and we need to revert this diff once that is supported. Differential Revision: D33523917 fbshipit-source-id: 3d00f540a9fcb75a34125c244d86263d517a359f
-
- 08 Jan, 2022 1 commit
-
-
Sam Tsai authored
Summary: Pull Request resolved: https://github.com/facebookresearch/d2go/pull/158 Add unit tests for visualization wrapper and dataloader visualization wrapper. Reviewed By: zhanghang1989, wat3rBro Differential Revision: D33457734 fbshipit-source-id: e5f946ae4ee711a0914d8ac65b96cac40e7ab13b
-
- 30 Dec, 2021 1 commit
-
-
Yanghan Wang authored
Summary: Pull Request resolved: https://github.com/facebookresearch/d2go/pull/152 Reviewed By: zhanghang1989 Differential Revision: D31591900 fbshipit-source-id: 6ee8124419d535caf03532eda4f729e707b6dda7
-
- 29 Dec, 2021 2 commits
-
-
Yanghan Wang authored
Summary: Pull Request resolved: https://github.com/facebookresearch/d2go/pull/154 Reviewed By: zhanghang1989 Differential Revision: D33352204 fbshipit-source-id: e1a9ac6eb2574dfe6931435275e27c9508f66352
-
Yanghan Wang authored
Summary: DDPPlugin has been renamed to DDPStrategy (as part of https://github.com/PyTorchLightning/pytorch-lightning/issues/10549), causing OSS CI to fail. Simply skip the import to unblock CI, since the DDP feature is not used in the tests. Reviewed By: kazhang Differential Revision: D33351636 fbshipit-source-id: 7a1881c8cd48d9ff17edd41137d27a976103fdde
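The import skip is essentially a guard like this sketch (the fallback value is an assumption):

```python
try:
    from pytorch_lightning.plugins import DDPPlugin
except ImportError:
    # Renamed to DDPStrategy in newer Lightning versions; the tests here
    # don't exercise DDP, so a missing symbol is tolerated rather than
    # failing at import time.
    DDPPlugin = None
```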
-
- 22 Dec, 2021 1 commit
-
-
Sam Tsai authored
Summary: 1. Add a registry for coco injection to allow easier overriding of coco injections. 2. Coco loading is currently limited to certain keys; add an option to allow copying certain keys from the outputs. Reviewed By: zhanghang1989 Differential Revision: D33132517 fbshipit-source-id: 57ac4994a66f9c75457cada7e85fb15da4818f3e
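An illustrative registry sketch using detectron2's `Registry` helper; the registry and function names here are hypothetical:

```python
from detectron2.utils.registry import Registry

COCO_INJECTION_REGISTRY = Registry("COCO_INJECTION")


@COCO_INJECTION_REGISTRY.register()
def my_coco_injection(dataset_name, metadata):
    # Custom logic for registering an injected COCO-style dataset; downstream
    # projects override the default by registering their own function.
    ...
```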
-
- 18 Nov, 2021 1 commit
-
-
Ananth Subramaniam authored
Summary: ### New commit log messages fa0ed17f8 remove deprecated train_loop (#10482) Reviewed By: kandluis Differential Revision: D32454980 fbshipit-source-id: a35237dde06cc9ddac5373b75992ce88a6771c76
-
- 08 Nov, 2021 1 commit
-
-
Yanghan Wang authored
Reviewed By: sstsai-adl Differential Revision: D32216605 fbshipit-source-id: bebee1edae85e940c7dcc6a64dbe341a2fde36a2
-
- 28 Oct, 2021 1 commit
-
-
Kai Zhang authored
Summary: In the quantization callback, we prepare the model with the FX quantization API and only use the prepared model in training. However, when training with DDP, the parameters in the original model still require grad, causing an unused-parameters RuntimeError. Previously, the Lightning trainer trained the model with the find_unused_param flag, but if users manually disable it, they will get the runtime error. In this diff, the parameters in the original model are frozen. We could consider deleting the original model after preparation to save memory, but we might have to make some assumptions about the Lightning module structure, for example that `.model` is the original model, so that we could `delattr(pl_module, "model")`. Reviewed By: wat3rBro Differential Revision: D31902368 fbshipit-source-id: 56eabb6b2296278529dd2b94d6aa4c9ec9e9ca6b
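The freezing itself is roughly a one-liner per parameter, as in this sketch (assuming `.model` holds the original model, per the caveat above):

```python
def freeze_origin_model(pl_module):
    # Freeze the unprepared model so DDP doesn't flag its parameters as
    # unused when only the FX-prepared copy participates in training.
    for param in pl_module.model.parameters():
        param.requires_grad = False
```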
-
- 26 Oct, 2021 3 commits
-
-
Yanghan Wang authored
Summary: as title Reviewed By: Cysu Differential Revision: D31901433 fbshipit-source-id: 1749527c04c392c830e1a49bca8313ddf903d7b1
-
Yanghan Wang authored
Summary: FCOS is registered only because of an import made inside `get_default_cfg`; if users don't call it (eg. when using their own runner), they might find that the meta-arch is not registered. Reviewed By: ppwwyyxx Differential Revision: D31920026 fbshipit-source-id: 59eeeb3d1bf30d6b08463c2814930b1cadd7d549
-
Yanghan Wang authored
Summary: Pull Request resolved: https://github.com/facebookresearch/d2go/pull/130 We want to make sure that after importing `d2go.modeling`, all meta-archs are registered. Reviewed By: Maninae Differential Revision: D31904303 fbshipit-source-id: 3f32b65b764b2458e2fb9c4e0bbd99824b37ecfc
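A hedged sketch of what the guarantee enables: a registry lookup without ever calling `get_default_cfg` ("FCOS" is the example from the previous entry):

```python
import d2go.modeling  # noqa: F401  # import side effect registers meta-archs

from detectron2.modeling import META_ARCH_REGISTRY

# After the import above, the lookup should succeed even for custom runners
# that never call get_default_cfg.
fcos_class = META_ARCH_REGISTRY.get("FCOS")
```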
-
- 22 Oct, 2021 1 commit
-
-
Yuxin Wu authored
Summary: this utility function was added in D30272112 (https://github.com/facebookresearch/d2go/commit/737d099b0a8b0fb1f548435e73f95e1252442827) and is useful to all D2 users as well Differential Revision: D31833523 fbshipit-source-id: 0adfc612adb8b448fa7f3dbec1b1278c309554c5
-
- 20 Oct, 2021 2 commits
-
-
Peizhao Zhang authored
Summary: Supported learnable QAT. * Added a config key `QUANTIZATION.QAT.FAKE_QUANT_METHOD` to specify the QAT method (`default` or `learnable`). * Added a config key `QUANTIZATION.QAT.ENABLE_LEARNABLE_OBSERVER_ITER` to specify the start iteration for learnable observers (before that it uses static observers). * Custom quantization code needs to call `d2go.utils.qat_utils.get_qat_qconfig()` to get the proper qconfig for learnable QAT. An exception will be raised if the QAT method is learnable but no learnable observers are used in the model. * Set the weight decay for scale/zero_point to 0 in the optimizer automatically. * The way to use learnable QAT: enable static observers -> enable fake quant -> enable learnable observers -> freeze bn. Differential Revision: D31370822 fbshipit-source-id: a5a5044a539d0d7fe1cc6b36e6821fc411ce752a
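Putting the new keys together, a hedged usage sketch; the default-cfg access path, the iteration number, and the exact `get_qat_qconfig` signature are assumptions:

```python
from d2go.runner import GeneralizedRCNNRunner
from d2go.utils.qat_utils import get_qat_qconfig  # signature is an assumption

cfg = GeneralizedRCNNRunner().get_default_cfg()
cfg.QUANTIZATION.QAT.FAKE_QUANT_METHOD = "learnable"
# Use static observers for the first 2000 iterations, then switch to learnable.
cfg.QUANTIZATION.QAT.ENABLE_LEARNABLE_OBSERVER_ITER = 2000
qconfig = get_qat_qconfig(cfg)
```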
-
Peizhao Zhang authored
Summary: Refactored qat related code. * Moved `_prepare_model_for_qat` related code to a function. * Moved `_setup_non_qat_to_qat_state_dict_map` related code to a function. * Moved QATHook related code to the quantization file and implemented as a class. Differential Revision: D31370819 fbshipit-source-id: 836550b2c8d68cd93a84d5877ad9cef6f0f0eb39
-
- 15 Oct, 2021 2 commits
-
-
Peizhao Zhang authored
Summary: Supported specifying customized parameter groups from the model. * Allow the model to specify customized parameter groups by implementing a function `model.get_optimizer_param_groups(cfg)`. * Supported models wrapped in ddp. Reviewed By: zhanghang1989 Differential Revision: D31289315 fbshipit-source-id: c91ba8014508e9fd5f172601b9c1c83c188338fd
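A sketch of the model-side hook; the grouping policy shown (exempting biases from weight decay) is just an example, not the required behavior:

```python
import torch


class MyModel(torch.nn.Module):
    def get_optimizer_param_groups(self, cfg):
        # Example policy: exempt biases from weight decay.
        decay, no_decay = [], []
        for name, param in self.named_parameters():
            (no_decay if name.endswith("bias") else decay).append(param)
        return [
            {"params": decay},
            {"params": no_decay, "weight_decay": 0.0},
        ]
```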
-
Peizhao Zhang authored
Summary: Refactor for get_optimizer_param_groups. * Split `get_default_optimizer_params()` into multiple functions: `get_optimizer_param_groups_default()`, `get_optimizer_param_groups_lr()`, `get_optimizer_param_groups_weight_decay()`. * Regroup the parameters to create the minimal number of groups. * Print all parameter groups when the optimizer is created, eg.: Param group 0: {amsgrad: False, betas: (0.9, 0.999), eps: 1e-08, lr: 10.0, params: 1, weight_decay: 1.0} Param group 1: {amsgrad: False, betas: (0.9, 0.999), eps: 1e-08, lr: 1.0, params: 1, weight_decay: 1.0} Param group 2: {amsgrad: False, betas: (0.9, 0.999), eps: 1e-08, lr: 1.0, params: 2, weight_decay: 0.0} * Add some unit tests. Reviewed By: zhanghang1989 Differential Revision: D31287783 fbshipit-source-id: e87df0ae0e67343bb2130db945d8faced44d7411
-
- 06 Oct, 2021 1 commit
-
-
Supriya Rao authored
Summary: Pull Request resolved: https://github.com/facebookresearch/d2go/pull/124 Update callsites from torch.quantization to torch.ao.quantization Reviewed By: z-a-f, jerryzh168 Differential Revision: D31286125 fbshipit-source-id: ef24ca87d8db398c65bb5b89f035afe0423a5685
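The migration is a pure import-path change; the API surface stays the same (shown here with one representative symbol):

```python
# before
from torch.quantization import get_default_qconfig

# after (torch >= 1.10)
from torch.ao.quantization import get_default_qconfig
```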
-
- 24 Sep, 2021 2 commits
-
-
Hang Zhang authored
Summary: Pull Request resolved: https://github.com/facebookresearch/d2go/pull/117 Fix github CI failure due to lack of the coco dataset. It was caused by D31134064 (https://github.com/facebookresearch/d2go/commit/f018d4a7ceef437d8fc3ca8b2bba4b7321917e06) Reviewed By: mattcyu1, wat3rBro Differential Revision: D31179666 fbshipit-source-id: fe25129d167afcdcb577e5c8d82f3432ba939ca9
-
Yanghan Wang authored
Reviewed By: zhanghang1989 Differential Revision: D31134064 fbshipit-source-id: 825ca14477243a53f84b8521f4430a2b080324bd
-
- 15 Sep, 2021 1 commit
-
-
Valentin Andrei authored
Reviewed By: stephenyan1231, zhanghang1989 Differential Revision: D30903817 fbshipit-source-id: 578e6b02a1bd59b1bd841399fc60111d320ae9aa
-
- 09 Sep, 2021 1 commit
-
-
Yanghan Wang authored
Summary: https://fb.workplace.com/groups/pythonfoundation/posts/2990917737888352 Remove `mobile-vision` from opt-out list; leaving `mobile-vision/SNPE` opted out because of 3rd-party code. arc lint --take BLACK --apply-patches --paths-cmd 'hg files mobile-vision' allow-large-files Reviewed By: sstsai-adl Differential Revision: D30721093 fbshipit-source-id: 9e5c16d988b315b93a28038443ecfb92efd18ef8
-
- 31 Aug, 2021 1 commit
-
-
Yanghan Wang authored
Summary: Enable the inference for boltnn (via running torchscript). - merge rcnn's boltnn test with other export types. - misc fixes. Differential Revision: D30610386 fbshipit-source-id: 7b78136f8ca640b5fc179cb47e3218e709418d71
-