- 22 Mar, 2022 1 commit
-
-
Owen Wang authored
Summary: Detectron2Go's Visualizer and sem_seg_evaluation are now updated with customization entry points for reading semantic segmentation masks. By default, PIL-readable PNG images are expected; however, some projects' datasets use .npy files, and this customization allows providing an alternate Visualizer and evaluation function to read them. Reviewed By: newstzpz Differential Revision: D33434948 fbshipit-source-id: 42af16d6708ffc5b2c03ec8507757313e23c8204
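The customization entry point described above can be pictured as a module-level hook that a project replaces before evaluation. This is an illustrative sketch only; the names (`mask_loader`, `read_sem_seg_mask`, `set_mask_loader`) are hypothetical, not d2go's actual API:

```python
# Illustrative sketch of a customization entry point for reading
# semantic-segmentation masks; names are hypothetical, not d2go's API.

def default_mask_loader(path):
    # Default behavior: expect a PIL-readable PNG image.
    return ("pil_png", path)

def npy_mask_loader(path):
    # A project-specific loader for .npy mask files.
    return ("npy", path)

# Module-level hook that a project can replace.
mask_loader = default_mask_loader

def set_mask_loader(fn):
    # Projects call this once at setup time to swap the loader.
    global mask_loader
    mask_loader = fn

def read_sem_seg_mask(path):
    # Visualizer / sem_seg_evaluation call through the hook, so
    # overriding `mask_loader` changes how masks are read everywhere.
    return mask_loader(path)
```

A project with .npy masks would call `set_mask_loader(npy_mask_loader)` before running evaluation; all downstream readers then pick up the new behavior without further changes.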
-
- 21 Mar, 2022 1 commit
-
-
Hang Zhang authored
Summary: rm TARGET in gitignore Reviewed By: newstzpz Differential Revision: D35014854 fbshipit-source-id: 4a28f797bd5277eb58df6921f3ae9b7debb65f71
-
- 18 Mar, 2022 1 commit
-
-
Owen Wang authored
Summary: Add documentation on the pre and post processing functions for segmentation. Reviewed By: XiaoliangDai Differential Revision: D34882165 fbshipit-source-id: 375c62d0ad632a40b6557065b3362e333df8c55f
-
- 17 Mar, 2022 1 commit
-
-
Yanghan Wang authored
Summary: - remove the `None` support for `merge_from_list` - fix logging when initializing diskcache - don't inherit `_FakeListObj` from `list`, so looping over it raises an error. Reviewed By: sstsai-adl Differential Revision: D34952714 fbshipit-source-id: f636e408b79ed77904f257f189fcef216cb2efbc
-
- 16 Mar, 2022 4 commits
-
-
Tsahi Glik authored
Summary: Pull Request resolved: https://github.com/facebookresearch/d2go/pull/190 Currently there is some fragmentation in export around how convert logic is applied in the various modes: `prepare_for_quant_convert` is only called in non-eager modes, and the logic in eager mode is not customizable. This diff unifies the `prepare_for_quant_convert` code path across all quantization modes. It also renames `_non_qat_to_qat_state_dict_map`, which is used by the QAT checkpointer, to the public var `non_qat_to_qat_state_dict_map`, and allows models to populate it with a custom mapping. This is useful in cases where the param mapping between the non-QAT model and the QAT model cannot be inferred definitively (see note in https://fburl.com/code/9rx172ht) and has ambiguity that can only be resolved by the model logic. Reviewed By: wat3rBro Differential Revision: D34741217 fbshipit-source-id: 38edfec64200ec986ffe4f3d47f527cb6a3fb5e9
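The custom-mapping idea can be sketched as a plain key rename applied when loading a non-QAT checkpoint into a QAT model. This is a hypothetical helper illustrating the pattern, not the actual checkpointer code:

```python
def remap_state_dict(state_dict, non_qat_to_qat_map):
    """Rename checkpoint keys using a model-provided mapping.

    Keys present in the map are renamed; all other keys pass through
    unchanged. Ambiguous correspondences that cannot be inferred
    automatically are resolved by the explicit, model-populated map.
    """
    return {non_qat_to_qat_map.get(k, k): v for k, v in state_dict.items()}
```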
-
Yanghan Wang authored
Summary: D33301363 changes the signature of `update_cfg` from `update_cfg(cfg, *updaters)` to `update_cfg(cfg, updaters, new_allowed)`, while the call sites were not updated. E.g. in https://www.internalfb.com/code/fbsource/[9e071979a62ba7fd3d7a71dee1f0809815cbaa43]/fbcode/fblearner/flow/projects/mobile_vision/detectron2go/core/workflow.py?lines=221-225, the `merge_from_list_updater(e2e_train.overwrite_opts)` is then not used. For the fix: - Since there are a lot of call sites for `update_cfg`, it's better to keep the original signature. - ~~The `new_allowed` could actually be passed to each individual updater instead of `update_cfg`, which would also give finer control.~~ - Override `merge_from_list` to make it respect `new_allowed`. - Preserve `new_allowed` for all nodes (not only the root) in the FLOW Future calls. Reviewed By: zhanghang1989 Differential Revision: D34840001 fbshipit-source-id: 14aff6bec75a8b53d4109e6cd73d2494f68863b4
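The `new_allowed` semantics can be sketched on a toy dict-backed config (illustrative only; the real change overrides yacs-style `CfgNode.merge_from_list`):

```python
def merge_from_list(cfg, opts, new_allowed=False):
    """Merge [key1, val1, key2, val2, ...] overrides into `cfg`.

    Unknown keys raise unless `new_allowed` is True, mirroring the
    semantics the diff gives the overridden `merge_from_list`.
    """
    assert len(opts) % 2 == 0, "opts must be key/value pairs"
    for key, value in zip(opts[::2], opts[1::2]):
        if key not in cfg and not new_allowed:
            raise KeyError(f"Non-existent config key: {key}")
        cfg[key] = value
    return cfg
```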
-
Chengjiang Long authored
Summary: Dataloader: rewrote the data loader via build_stream_dataset_reader with the DATASET_DEFINITION of "peopleai_face_eng_inference_results". User Calibration Model (initial version): nn.Sequential(nn.Conv1d(72, 128, 1), nn.BatchNorm1d(128), nn.ReLU(), nn.Flatten(), nn.Linear(128, 72)) Differential Revision: D34202009 fbshipit-source-id: 55a2c579e463ed19eac38b5dd12e11c09cbccc11
-
Ananth Subramaniam authored
Reviewed By: kazhang Differential Revision: D34669519 fbshipit-source-id: 8cfee968104c823a55960f2730d8e888ac1f298e
-
- 10 Mar, 2022 1 commit
-
-
Haroun Habeeb authored
Summary: see https://fb.workplace.com/notes/3006074566389155 ---- did the integration test not catch this? Reviewed By: ananthsub, tangbinh Differential Revision: D34665501 fbshipit-source-id: ff2cbfa9462f131455dce46a0c413c4c69105f48
-
- 09 Mar, 2022 1 commit
-
-
Binh Tang authored
Summary: Pull Request resolved: https://github.com/facebookresearch/d2go/pull/189 X-link: https://github.com/facebookresearch/recipes/pull/14 X-link: https://github.com/facebookresearch/ReAgent/pull/616 ### New commit log messages - [9b011606f Add callout items to the Docs landing page (#12196)](https://github.com/PyTorchLightning/pytorch-lightning/pull/12196) Reviewed By: edward-io Differential Revision: D34687261 fbshipit-source-id: 3ef6be5169a855582384f9097a962d2261625882
-
- 08 Mar, 2022 2 commits
-
-
Yanghan Wang authored
Summary: Pull Request resolved: https://github.com/facebookresearch/d2go/pull/187 Reviewed By: ananthsub, zhanghang1989 Differential Revision: D34650467 fbshipit-source-id: b9518e5dd673b709320b87e57a43d053eca3aabe
-
Ananth Subramaniam authored
Reviewed By: tangbinh Differential Revision: D34669294 fbshipit-source-id: c87bc1d4c589518f7c9fc21e6dfe27b03e700b6d
-
- 07 Mar, 2022 1 commit
-
-
Yanghan Wang authored
Summary: Pull Request resolved: https://github.com/facebookresearch/d2go/pull/186 Reviewed By: zhanghang1989 Differential Revision: D34648894 fbshipit-source-id: 51e4ea978b84a81f7e4dd91800b75a916da08faa
-
- 05 Mar, 2022 2 commits
-
-
Yanghan Wang authored
Summary: fix D34540275 (https://github.com/facebookresearch/d2go/commit/d8bdc633ec66e6ce73076d027f8e777791c2e067) Reviewed By: tglik Differential Revision: D34662745 fbshipit-source-id: 6fd67db041fab6f5810763702e4cc3f16a08c5df
-
Ananth Subramaniam authored
Summary: Pull Request resolved: https://github.com/facebookresearch/d2go/pull/188 Reviewed By: tangbinh, wat3rBro Differential Revision: D34658350 fbshipit-source-id: 36e8c1e8c5dab97990b1d9a5b1a58667e0e3c455
-
- 04 Mar, 2022 4 commits
-
-
Binh Tang authored
Summary: ### New commit log messages - [7e2f9fbad Refactor codebase to use `trainer.loggers` over `trainer.logger` when needed (#11920)](https://github.com/PyTorchLightning/pytorch-lightning/pull/11920) Reviewed By: edward-io Differential Revision: D34583686 fbshipit-source-id: 98e557b761555c24ff296fff3ec6881d141fa777
-
Yanghan Wang authored
Summary: Pull Request resolved: https://github.com/facebookresearch/d2go/pull/185 The `DiskCachedDatasetFromList` was originally in `d2go/data/utils.py`, so the class was declared by default. Therefore the cleanup call (https://fburl.com/code/cu7hswhx) was always invoked, even when the feature is not enabled. This diff moves it to a new place and delays the import, so the cleanup won't run. Reviewed By: tglik Differential Revision: D34601363 fbshipit-source-id: 734bb9b2c7957d7437ad40c4bfe60a441ec2f23a
-
Sam Tsai authored
Summary: Add option for controlling empty annotation filtering. Reviewed By: zhanghang1989 Differential Revision: D34365265 fbshipit-source-id: 261c6879636f19138de781098f47dee4909de9e7
-
Sam Tsai authored
Summary: Pull Request resolved: https://github.com/facebookresearch/d2go/pull/179 Refactored extended coco to fix lint errors and simplify error reporting. Differential Revision: D34365252 fbshipit-source-id: 8bf221eba5b8c5e63ddcf5ca19d7486726aff797
-
- 03 Mar, 2022 1 commit
-
-
Tsahi Glik authored
Summary: Add support in d2go.distributed for the `env://` init method. Use the environment variables specified in https://pytorch.org/docs/stable/distributed.html#environment-variable-initialization to initialize the distributed params. Also change the train_net CLI function signature to accept an args list instead of only using `sys.argv`, to allow calling this function from the AIEnv launcher. Differential Revision: D34540275 fbshipit-source-id: 7f718aed4c010b0ac8347d43b5ca5b401210756c
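The `env://` init method consumes the standard PyTorch environment variables (`MASTER_ADDR`, `MASTER_PORT`, `RANK`, `WORLD_SIZE`). A minimal sketch of collecting and validating them, with the actual `torch.distributed` call left as a comment (the helper name `read_dist_env` is illustrative):

```python
import os

REQUIRED_ENV_VARS = ("MASTER_ADDR", "MASTER_PORT", "RANK", "WORLD_SIZE")

def read_dist_env():
    """Collect the env vars that init_method='env://' consumes."""
    missing = [v for v in REQUIRED_ENV_VARS if v not in os.environ]
    if missing:
        raise RuntimeError(f"missing distributed env vars: {missing}")
    return {
        "master_addr": os.environ["MASTER_ADDR"],
        "master_port": int(os.environ["MASTER_PORT"]),
        "rank": int(os.environ["RANK"]),
        "world_size": int(os.environ["WORLD_SIZE"]),
    }

# With these variables set, initialization is just:
#   torch.distributed.init_process_group(backend="nccl", init_method="env://")
# (env:// reads MASTER_ADDR/MASTER_PORT/RANK/WORLD_SIZE automatically.)
```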
-
- 01 Mar, 2022 1 commit
-
-
Tong Xiao authored
Summary: `Detectron2GoRunner` defaults to triggering an evaluation right after the last iteration in the `runner.do_train` method. This might sometimes be unnecessary, because there is a `runner.do_test` at the end of training anyway. It can also have side effects: for example, it causes the training and test data loaders to be present at the same time, which led to an OOM issue in our use case. In this diff, we add an option `eval_after_train` to the `EvalHook` to allow users to disable the evaluation after the last training iteration. Reviewed By: wat3rBro Differential Revision: D34295685 fbshipit-source-id: 3612eb649bb50145346c56c072ae9ca91cb199f5
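The option can be pictured as a flag checked by the hook at the end of training. This is a simplified sketch of the idea, not d2go's actual `EvalHook`:

```python
class EvalHook:
    """Runs `eval_fn` periodically; optionally also after the last iteration."""

    def __init__(self, eval_fn, eval_period, eval_after_train=True):
        self._eval_fn = eval_fn
        self._period = eval_period
        self._eval_after_train = eval_after_train

    def after_step(self, iteration):
        # Periodic evaluation during training.
        if self._period > 0 and (iteration + 1) % self._period == 0:
            self._eval_fn()

    def after_train(self):
        # Skipping this final eval avoids holding the train and test
        # data loaders at once, right before do_test runs anyway.
        if self._eval_after_train:
            self._eval_fn()
```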
-
- 28 Feb, 2022 2 commits
-
-
Yanghan Wang authored
Summary: Pull Request resolved: https://github.com/facebookresearch/d2go/pull/184 Reviewed By: zhanghang1989 Differential Revision: D34529248 fbshipit-source-id: f77882dae7de336da77ac9bb7c35cfc1e8d541af
-
Yanghan Wang authored
Summary: Pull Request resolved: https://github.com/facebookresearch/d2go/pull/183 Reviewed By: zhanghang1989 Differential Revision: D34492204 fbshipit-source-id: 7fd459172e83a5015ca9eee0e2018ce8b22c3096
-
- 25 Feb, 2022 1 commit
-
-
Yanghan Wang authored
Summary: # TLDR: To use this feature, set `D2GO_DATA.DATASETS.DISK_CACHE.ENABLED` to `True`. To support larger datasets, one idea is to offload the DatasetFromList from RAM to disk to avoid OOM. `DiskCachedDatasetFromList` is a drop-in replacement for `DatasetFromList`: during `__init__`, it puts the serialized list onto disk and only stores the mapping in RAM (the mapping could be represented by a list of addresses or even just a single number, e.g. when every N items are grouped together and N is a fixed number); then `__getitem__` reads data from disk and deserializes the element. Some more details: - Originally the RAM cost is `O(s*G*N)` where `s` is the average data size, `G` is #GPUs, and `N` is the dataset size. When diskcache is enabled, depending on the type of mapping, the final RAM cost is constant or O(N) with a very small coefficient; the final disk cost is `O(s*N)`. - The RAM usage peaks at the preparing stage at a cost of `O(s*N)`; if this becomes a bottleneck, we probably need to think about modifying the data loading function (registered in DatasetCatalog). We also change the data loading function to only run on the local master process, otherwise RAM would peak at `O(s*G*N)` if all processes load data at the same time. - The time overhead of initialization is linear in the dataset size; this is capped by disk I/O speed and the performance of the diskcache library. Benchmarks show it can handle at least 1GB per minute if writing in chunks (much worse if not), which should be fine in most use cases. - There's also a bit of time overhead when reading the data, but this is usually negligible compared with reading files from external storage like Manifold.
It's not very easy to integrate this into D2/D2Go cleanly without patching the code; several approaches: - Integrate into D2 directly (modifying D2's `DatasetFromList` and `get_detection_dataset_dicts`): might be the cleanest way, but D2 doesn't depend on `diskcache` and this is a bit experimental right now. - Have D2Go use its own version of [_train_loader_from_config](https://fburl.com/code/0gig5tj2) that wraps the returned `dataset`. It has two issues: 1) it's hard to make the underlying `get_detection_dataset_dicts` only run on the local master, partly because building the sampler uses `comm.shared_random_seed()`, so things can easily go out-of-sync; 2) it needs some duplicated code for the test loader. - Pass new arguments along the way; this requires touching D2's code as well, and we would need to carry new arguments in a lot of places. Lots of TODOs: - Automatically enable this when the dataset is larger than a certain threshold (need to figure out how to do this on multiple GPUs; some communication is needed if only the local master is reading the dataset). - Better cleanups. - Figure out the best way of integrating this (patching is a bit hacky) into D2/D2Go.
- Run more benchmarks. - Add unit tests (maybe also enable integration tests using 2 nodes / 2 GPUs for distributed settings). Reviewed By: sstsai-adl Differential Revision: D27451187 fbshipit-source-id: 7d329e1a3c3f9ec1fb9ada0298a52a33f2730e15
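The offloading idea above — serialize each element to disk in `__init__`, keep only lightweight addresses in RAM, and deserialize on `__getitem__` — can be sketched with stdlib pickle. The real implementation uses the `diskcache` library and chunked writes; this is an illustration of the access pattern only, with a hypothetical class name:

```python
import os
import pickle
import tempfile

class DiskCachedListSketch:
    """List-like container that keeps serialized items on disk."""

    def __init__(self, items):
        self._dir = tempfile.mkdtemp(prefix="disk_cache_")
        self._paths = []  # the only per-item state kept in RAM
        for i, item in enumerate(items):
            path = os.path.join(self._dir, f"{i}.pkl")
            with open(path, "wb") as f:
                pickle.dump(item, f)
            self._paths.append(path)

    def __len__(self):
        return len(self._paths)

    def __getitem__(self, idx):
        # Per-access RAM stays small; total disk cost is O(s*N),
        # matching the analysis in the summary above.
        with open(self._paths[idx], "rb") as f:
            return pickle.load(f)
```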
-
- 24 Feb, 2022 1 commit
-
-
Yanghan Wang authored
Summary: It's possible to have `lib` directories under `mobile-vision/d2go/{d2go,projects}`; exclude them from `.gitignore`. Reviewed By: zhanghang1989 Differential Revision: D34288538 fbshipit-source-id: 7094cdf4f52263fbf6ff6707d487bc3328fbbd8b
-
- 23 Feb, 2022 3 commits
-
-
Binh Tang authored
Summary: We proactively remove references to the deprecated DDP accelerator to prepare for the breaking changes following the release of PyTorch Lightning 1.6 (see T112240890). Differential Revision: D34295318 fbshipit-source-id: 7b2245ca9c7c2900f510722b33af8d8eeda49919
-
Sam Tsai authored
Summary: Pull Request resolved: https://github.com/facebookresearch/mobile-vision/pull/61 Pull Request resolved: https://github.com/facebookresearch/d2go/pull/177 Adhoc datasets currently use the default register functions. Changed to check whether the dataset was registered in a lookup table for injected COCO datasets, and just use that instead. Differential Revision: D33489049 fbshipit-source-id: bcb12bba49749a875ea80ae61f4eecc4a5d1e31a
-
Sam Tsai authored
Summary: Pull Request resolved: https://github.com/facebookresearch/d2go/pull/180 The distributed backend is deprecated. Switching to "use_ddp" instead. Reviewed By: kazhang Differential Revision: D34394993 fbshipit-source-id: a5bfb22f8952d20c9a8d86322cd740534c25c689
-
- 14 Feb, 2022 1 commit
-
-
Tugrul Savran authored
Summary: Currently, the exporter method takes a compare_accuracy parameter and, after all the compute (exporting etc.), raises an exception if it is set to True. This looks like an antipattern and wastes compute. Therefore, I am proposing to raise the exception at the very beginning of the method call, to let the client know in advance that this argument's functionality isn't implemented yet. NOTE: We might also choose to get rid of the entire parameter. I am open to suggestions. Differential Revision: D34186578 fbshipit-source-id: d7fbe7589dfe2d2f688b870885ca61e6829c9329
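The proposed fail-fast pattern — validate unsupported arguments before any expensive work — looks like this (function name and return value are hypothetical, not the actual exporter):

```python
def export_model(model, compare_accuracy=False):
    """Export `model`, validating arguments up front.

    Raising before the expensive export avoids wasting compute on a
    path that would only fail at the very end.
    """
    if compare_accuracy:
        raise NotImplementedError(
            "compare_accuracy is not implemented yet; pass False"
        )
    # ... expensive export work would happen here ...
    return {"exported": True}
```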
-
- 11 Feb, 2022 1 commit
-
-
Yanghan Wang authored
Reviewed By: Maninae Differential Revision: D34097529 fbshipit-source-id: e3c860bb2374e694fd6ae54651a479c2398b2462
-
- 10 Feb, 2022 1 commit
-
-
Yanghan Wang authored
Summary: Pull Request resolved: https://github.com/facebookresearch/d2go/pull/175 D33833203 adds an `is_qat` argument to the fuser method; more details in https://fb.workplace.com/groups/2322282031156145/permalink/5026297484087906/. As a result, MV's `fuse_utils.fuse_model` becomes two functions: the original one is for non-QAT; a new one, `fuse_utils.fuse_model_qat`, is for QAT. For D2Go, in most cases `is_qat` can be inferred from `cfg.QUANTIZATION.QAT.ENABLED`, therefore we can extend `fuse_model` to also take `is_qat` as a parameter and set it accordingly. This diff updates all the call sites, which are covered by unit tests. Those call sites include: - default quantization APIs in d2go/modeling/quantization.py - customized quantization APIs from individual meta-archs - the unit tests themselves Reviewed By: tglik, jerryzh168 Differential Revision: D34112650 fbshipit-source-id: 026c309f603bee71d887e39aa4efee6477db731b
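The `is_qat` plumbing described above can be sketched as a thin wrapper that infers the flag from the config when not given explicitly. This is a simplified illustration with a flat dict standing in for the config; it is not the real `fuse_utils` signature:

```python
def fuse_model(model, is_qat=None, cfg=None):
    """Dispatch to QAT or non-QAT fusion, inferring `is_qat` from cfg.

    An explicit `is_qat` wins; otherwise the flag follows the
    QUANTIZATION.QAT.ENABLED setting (here a flat dict key for brevity).
    """
    if is_qat is None:
        is_qat = bool(cfg and cfg.get("QUANTIZATION.QAT.ENABLED", False))
    # Stand-ins for fuse_utils.fuse_model_qat / fuse_utils.fuse_model.
    return ("fuse_qat" if is_qat else "fuse_non_qat", model)
```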
-
- 07 Feb, 2022 1 commit
-
-
Hang Zhang authored
Summary: Pull Request resolved: https://github.com/facebookresearch/d2go/pull/169 Make d2go DETR exportable (torchscript compatible). Move mask generation to preprocessing. Reviewed By: sstsai-adl Differential Revision: D33798073 fbshipit-source-id: d629b0c9cbdb67060982be717c7138a0e7e9adbc
-
- 03 Feb, 2022 1 commit
-
-
Ning Li (Seattle) authored
Summary: ### New commit log messages - [115a5d08e Decouple utilities from `LightningLoggerBase` (#11484)](https://github.com/PyTorchLightning/pytorch-lightning/pull/11484) Reviewed By: tangbinh, wat3rBro Differential Revision: D33960185 fbshipit-source-id: 6be72ad49f8433be6f238b36aa82d3f1b655e6f0
-
- 02 Feb, 2022 1 commit
-
-
Steven Troxler authored
Summary: Convert type comments in fbcode/mobile-vision Produced by running: ``` python -m libcst.tool codemod convert_type_comments.ConvertTypeComment fbcode/mobile-vision ``` from fbsource. See https://fb.workplace.com/groups/pythonfoundation/permalink/3106231549690303/ Reviewed By: grievejia Differential Revision: D33897026 fbshipit-source-id: e7666555e47a9abc769975f6db6b2e6eda792d72
-
- 29 Jan, 2022 1 commit
-
-
Tsahi Glik authored
Summary: Pull Request resolved: https://github.com/facebookresearch/d2go/pull/168 Add a hook in the D2Go config for custom parsing so we can support custom objects in the D2Go config, like the search space objects. Then add SuperNet custom config processing to parse the search space from arch_def when supernet is enabled, so it can be used in D2Go SuperNet training. This is an alternative approach to D33191150: in this approach we parse the entire architecture as a search space, which avoids the limitations of parsing only the dynamic-blocks parts. Reviewed By: zhanghang1989 Differential Revision: D33793423 fbshipit-source-id: 8acf5c5afb3c5c0005bdb0ca16847026e1b45e2c
-
- 27 Jan, 2022 3 commits
-
-
Hang Zhang authored
Summary: As in the title. Reviewed By: XiaoliangDai Differential Revision: D33413849 fbshipit-source-id: b891849c175edc7b8916bff2fcc40c76c4658f14
-
Hang Zhang authored
Summary: Learnable queries don't improve the results, but they help DETR with reference points in D33420993. Reviewed By: XiaoliangDai Differential Revision: D33401417 fbshipit-source-id: 5296f2f969c04df18df292d61a7cf57107bc9b74
-
Hang Zhang authored
Summary: Add a DETR_MODEL_REGISTRY to better support different variants of DETR (in a later diff). Reviewed By: newstzpz Differential Revision: D32874194 fbshipit-source-id: f8e9a61417ec66bec9f2d98631260a2f4e2af4cf
-
- 20 Jan, 2022 1 commit
-
-
Sam Tsai authored
Summary: Pull Request resolved: https://github.com/facebookresearch/d2go/pull/166 Pickling of transform functions seems to have changed in December (did not dig into it), breaking the support for this augmentation. The error happens when training with multiple dataloaders. Using partial functions instead. Differential Revision: D33665177 fbshipit-source-id: 4dfd41b92f3a6fea549b6e7a79bf0bf14a3cceaa
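The fix leans on `functools.partial` objects being picklable where lambdas are not, which matters once transforms cross process boundaries in multi-worker data loaders. A minimal demonstration (the `scale` function is illustrative):

```python
import functools
import pickle

def scale(value, factor):
    # Module-level function: pickle can serialize it by qualified name.
    return value * factor

# A partial over a module-level function pickles cleanly...
double = functools.partial(scale, factor=2)
restored = pickle.loads(pickle.dumps(double))

# ...while a lambda capturing the same logic cannot be pickled,
# because pickle serializes functions by qualified name and the
# name '<lambda>' cannot be looked up in the module.
double_lambda = lambda v: scale(v, 2)
```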
-
- 18 Jan, 2022 1 commit
-
-
Miquel Jubert Hermoso authored
Summary: The type signature of create_runner is not accurate; we expect Lightning runners to follow DefaultTask. Also change setup.py to not import directly, which together with this change was causing circular dependencies. Reviewed By: wat3rBro Differential Revision: D32792069 fbshipit-source-id: 0fbb55eb269dd681dbc8df49d71c9635f56293b8
-