- 05 Jan, 2023 1 commit
-
-
Yanghan Wang authored
Summary: Pull Request resolved: https://github.com/facebookresearch/d2go/pull/455 The test can be flaky due to numerical mismatch when using `self.assertEqual`, e.g. https://www.internalfb.com/intern/testinfra/diagnostics/1688850007977704.562950031998292.1672749571/
```
Traceback (most recent call last):
  File "/data/sandcastle/boxes/eden-trunk-hg-fbcode-fbsource/buck-out/v2/gen/fbcode/104a4d5c3a690252/mobile-vision/d2go/tests/__modeling_test_modeling_distillation__/modeling_test_modeling_distillation#link-tree/d2go/tests/modeling/test_modeling_distillation.py", line 674, in test_da_train
    self.assertEqual(
AssertionError: {'rea[14 chars]2894], grad_fn=<MulBackward0>), 'synthetic': t[85 chars]d0>)} != {'rea[14 chars]2894]), 'synthetic': tensor([1.4532]), 'add': [13 chars]64])}
- {'add': tensor([18.0064], grad_fn=<MulBackward0>),
-  'real': tensor([0.2894], grad_fn=<MulBackward0>),
-  'synthetic': tensor([1.4532], grad_fn=<MulBackward0>)}
+ {'add': tensor([18.0064]),
+  'real': tensor([0.2894]),
+  'synthetic': tensor([1.4532])}
```
Change to use `torch.testing.assert_close` instead for tensor comparison. Reviewed By: YanjunChen329 Differential Revision: D42352509 fbshipit-source-id: 8a647685d1347a9bd493f2faed7e066eb9159e14
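As an illustration of the replacement pattern, here is a minimal sketch using placeholder values rather than the real test outputs; `assert_close` compares tensor values within tolerances instead of requiring exact equality:
```python
import torch
import torch.testing

# Minimal sketch with placeholder values: assert_close compares tensor values
# with rtol/atol tolerances, so a result that drifts by a tiny float error
# still matches the reference, where a strict equality check would be flaky.
expected = {"real": torch.tensor([0.2894]), "synthetic": torch.tensor([1.4532])}
actual = {k: v * (1.0 + 1e-7) for k, v in expected.items()}  # tiny numerical drift

torch.testing.assert_close(actual, expected)
```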
-
- 04 Jan, 2023 1 commit
-
-
Yanghan Wang authored
Summary: Pull Request resolved: https://github.com/facebookresearch/d2go/pull/453 Previous diffs updated the LRScheduler to the public version (e.g. https://github.com/facebookresearch/detectron2/pull/4709), which also requires a newer version of pytorch-lightning. This diff upgrades the lightning version to 1.8.6 and fixes call sites that relied on APIs from older lightning versions:
- `deepcopy` seems to be supported now, so remove `_deepcopy` (it is no longer allowed to access the `trainer` attribute when it is `None`).
- `dataloader_idx` is removed from `on_train_batch_start`.
- Stop using `_accelerator_connector` (the AcceleratorConnector no longer has those attributes).
- Replace the deprecated `on_pretrain_routine_end` with `on_fit_start`.
Reviewed By: YanjunChen329 Differential Revision: D42319019 fbshipit-source-id: ba46abbd98da96783e15d187a361fda47dc7d4d6
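A hedged sketch of what the hook migration looks like on a LightningModule under lightning 1.8 (the actual d2go call sites may differ):
```python
import pytorch_lightning as pl

class Task(pl.LightningModule):
    # Sketch of the lightning 1.8-era hook signatures referenced above; not the
    # actual d2go task class.
    def on_fit_start(self) -> None:
        # replaces the deprecated on_pretrain_routine_end hook
        pass

    def on_train_batch_start(self, batch, batch_idx):
        # the trailing dataloader_idx argument was removed
        pass
```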
-
- 19 Dec, 2022 1 commit
-
-
Haroun Habeeb authored
Summary: Pull Request resolved: https://github.com/facebookresearch/d2go/pull/438 Adding new fields to a config is only allowed if `new_allowed=True`. yacs `CfgNode` provides a `set_new_allowed(value: bool)` function. We create a context manager for it, analogous to `temp_defrost` but toggling `new_allowed`, and add a unit test for it. Reviewed By: yanglinfang, newstzpz, wat3rBro Differential Revision: D41748992 fbshipit-source-id: 71d048511476001ca96e6b36dde4d177b11268d7
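A minimal sketch of such a context manager, assuming a hypothetical name `temp_new_allowed` (the actual helper in d2go may be named differently):
```python
from contextlib import contextmanager
from yacs.config import CfgNode

# Hedged sketch: mirrors temp_defrost but toggles new_allowed instead of frozen.
@contextmanager
def temp_new_allowed(cfg: CfgNode, new_allowed: bool = True):
    old = cfg.is_new_allowed()
    cfg.set_new_allowed(new_allowed)
    try:
        yield cfg
    finally:
        cfg.set_new_allowed(old)

# usage: add a field that does not exist in the default config
# with temp_new_allowed(cfg):
#     cfg.MY_NEW_KEY = 1
```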
-
- 09 Dec, 2022 1 commit
-
-
Mircea Cimpoi authored
Summary: Pull Request resolved: https://github.com/facebookresearch/d2go/pull/436 Renaming `model_ema.py` to `ema.py` (as `modeling` is already in the folder name). Fixing dependencies after the rename. Reviewed By: wat3rBro Differential Revision: D41685115 fbshipit-source-id: 006999a020a901ea8be4b71e072d688bd36cdce2
-
- 08 Dec, 2022 1 commit
-
-
Siddharth Shah authored
Summary: Pull Request resolved: https://github.com/facebookresearch/d2go/pull/439 As title Reviewed By: mattcyu1 Differential Revision: D41759804 fbshipit-source-id: 929efa960be570f0fe8543600e012d1bf037ab3b
-
- 30 Nov, 2022 6 commits
-
-
Matthew Yu authored
Summary: Pull Request resolved: https://github.com/facebookresearch/d2go/pull/432 We support caching of tuples since they behave similarly to lists Reviewed By: XiaoliangDai Differential Revision: D41483876 fbshipit-source-id: 9d741074f8e2335ddd737ae3f1bdb288910f5564
-
Matthew Yu authored
Summary: Pull Request resolved: https://github.com/facebookresearch/d2go/pull/431 Add a generic domain adaptation algorithm. This algorithm:
* gets domain0 data out of the dataloader
* runs domain0 data through the model and saves the target layer output
* gets domain1 data out of the dataloader
* runs domain1 data through the model and saves the target layer output
* runs the domain adaptation loss on the domain0 and domain1 outputs
* combines the losses in the model training iteration

This diff adds `get_preprocess_domain0_input` and `get_preprocess_domain1_input` to the distillation helper. These are functions that the user can use to convert the dataloader output to something that will be used by the model (e.g., pull the domain0 or domain1 key out of a dataloader that returns a dict). Differential Revision: D40970724 fbshipit-source-id: fff050fbe864654fa6cb0df927f6843855ec1c14
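A hedged sketch of what the two preprocess hooks might return when the dataloader yields a dict; the "real"/"synthetic" key names are assumptions for illustration, and in d2go these would be methods on the distillation helper:
```python
# Hedged sketch, not the actual d2go helper methods.
def get_preprocess_domain0_input():
    # pull the domain0 batch out of a dataloader that returns a dict
    return lambda batch: batch["real"]

def get_preprocess_domain1_input():
    # pull the domain1 batch out of the same dict
    return lambda batch: batch["synthetic"]
```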
-
Matthew Yu authored
Summary: Pull Request resolved: https://github.com/facebookresearch/d2go/pull/430 We add losses in distillation by instantiating them in the distillation algorithm's init and then running them during the forward pass. However, this has some issues:
* the losses are not registered as modules in the model, since we organize them as a list of `LayerLossMetadata` => things like AMP do not behave as expected
* the losses are not on the same device as the rest of the model, since they are potentially created after the model is moved to a new device

This diff solves both of these issues with a helper function that registers the losses on the model and moves them to the model's device. `register_layer_losses_and_to_device` takes a `List[LayerLossMetadata]` as input, moves the losses to the same device as the model, and then registers these losses on the model. Differential Revision: D41296932 fbshipit-source-id: ae7ae0847bce1b5cc481d838b9cae69cea424f25
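A minimal sketch of what such a helper does, assuming `LayerLossMetadata` carries a `name` and a `loss` module (the real d2go implementation may differ):
```python
import torch.nn as nn

# Hedged sketch, not the actual d2go implementation.
def register_layer_losses_and_to_device(model: nn.Module, layer_losses):
    device = next(model.parameters()).device
    for ll in layer_losses:
        ll.loss = ll.loss.to(device)        # keep the loss on the model's device
        model.add_module(ll.name, ll.loss)  # register so AMP, .to(), etc. see it
    return layer_losses
```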
-
Matthew Yu authored
Summary: Pull Request resolved: https://github.com/facebookresearch/d2go/pull/429 Add a teacher type called `no_teacher` which the user can specify when the teacher should be ignored (e.g., domain adaptation). Building the teacher then just returns a no-op (`nn.Identity`). Differential Revision: D40971788 fbshipit-source-id: fc49ac44224c92806a7be253eefb8454305814eb
-
Peizhao Zhang authored
Summary: Pull Request resolved: https://github.com/facebookresearch/d2go/pull/428 Add an augmentation to pad an image to a square.
* For example, an image with shape (10, 7, 3) becomes (10, 10, 3), padded with the value specified by `pad_value`.
Reviewed By: tax313, wat3rBro Differential Revision: D41545182 fbshipit-source-id: 6d5fd9d16984a9904d44f22386920cdf130edda7
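A minimal sketch of the padding behavior, not the actual d2go Transform class:
```python
import numpy as np

# Hedged sketch: pad the shorter spatial side so the image becomes square.
def pad_to_square(img: np.ndarray, pad_value: float = 0.0) -> np.ndarray:
    h, w = img.shape[:2]
    size = max(h, w)
    pad_width = [(0, size - h), (0, size - w)] + [(0, 0)] * (img.ndim - 2)
    return np.pad(img, pad_width, mode="constant", constant_values=pad_value)

# e.g. an image of shape (10, 7, 3) becomes (10, 10, 3)
```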
-
Matthew Yu authored
Summary: Pull Request resolved: https://github.com/facebookresearch/d2go/pull/433 Distillation uses a module called `CachedLayer` to record the outputs of a layer to an internal dict. This dict is typically initialized by the object itself, and any value is overwritten every time the model runs. However, sometimes we need more than one recorded output per layer (e.g., domain adaptation => we run the model on real, then synthetic data and need both outputs). This diff adds a helper to externally set the cache dict of a model. In other words, we can run `set_cache_dict` on a model to change the dict used by all `CachedLayer`s in the model. This allows us to run the model and record some outputs, then change the cache dict and rerun the model to save different outputs. Differential Revision: D40970577 fbshipit-source-id: 49cb851af49ae193d0c8ac9218e02fdaf4e6587b
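A hedged usage sketch of `set_cache_dict`; the model and batch variables are placeholders:
```python
# Hedged usage sketch: `set_cache_dict` is the helper named above; `model`,
# `real_batch`, and `synthetic_batch` are placeholders.
real_cache, synthetic_cache = {}, {}

set_cache_dict(model, real_cache)
model(real_batch)        # CachedLayer outputs are recorded into real_cache

set_cache_dict(model, synthetic_cache)
model(synthetic_batch)   # the same layers now write into synthetic_cache
```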
-
- 22 Nov, 2022 1 commit
-
-
Matthew Yu authored
Summary: Pull Request resolved: https://github.com/facebookresearch/d2go/pull/421 Add some reasonable defaults when running knowledge distillation:
* get_default_kd_image_classification_layer_losses => returns a cross-entropy loss between the output of the student classification layer and the teacher output (this is what the ImageNet distillation uses)
* DefaultLossCombiner => simple function to multiply the losses by some weights

Unsure if these should go in `distillation.py` or a separate place (e.g., defaults or classification). Reviewed By: chihyaoma Differential Revision: D40330718 fbshipit-source-id: 5887566d88e3a96d01aca133c51041126b2692cc
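A minimal sketch of a loss combiner like `DefaultLossCombiner` (the actual d2go signature may differ):
```python
# Hedged sketch: multiply each named loss by a configured weight.
class DefaultLossCombiner:
    def __init__(self, name_weight: dict):
        self.name_weight = name_weight

    def __call__(self, loss_dict: dict) -> dict:
        return {k: v * self.name_weight[k] for k, v in loss_dict.items()}

# usage: combiner = DefaultLossCombiner({"student_loss": 1.0, "kd_loss": 0.5})
```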
-
- 19 Nov, 2022 1 commit
-
-
Matthew Yu authored
Summary: Pull Request resolved: https://github.com/facebookresearch/d2go/pull/420 Adds knowledge distillation as a generic algorithm that can be used by various projects. In eval, the algorithm just returns the result of the student model. In training, the algorithm feeds the input into both the student and the teacher model. The user provides a list of `LayerLossMetadata` that specifies the layers and the losses run on those layers. The algorithm uses dynamic mixin to record the outputs of the relevant layers and computes the losses after both models have run. We provide student and teacher preprocessing as a placeholder until we support a more generic dataloader that can provide different inputs to the student and teacher (e.g., as of now, if you want to provide the teacher with a larger input, the dataloader should return the large input and the student preprocessing can downsample it).

We add the following functions as part of the user-customizable distillation helper:
* get_teacher => return a teacher that can be used directly by the KD algorithm
* get_layer_losses => return a list of `LayerLossMetadata` that specifies the layers and losses
* get_preprocess_student_input => manipulate the output of the dataloader before passing it to the student
* get_preprocess_teacher_input => manipulate the output of the dataloader before passing it to the teacher
* get_combine_losses => since we may want to weight the student and distillation losses, return a function that can manipulate the loss_dict

Reviewed By: chihyaoma Differential Revision: D40326412 fbshipit-source-id: 2fb0e818a7d5b120d62fb7aba314ff96cc7e10c5
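A hedged illustration of the preprocessing and loss-combining hooks, written as standalone functions rather than the actual helper methods; the 2x downsample and the "kd_" loss-name prefix are assumptions:
```python
import torch.nn.functional as F

def preprocess_teacher_input(batch):
    return batch  # teacher sees the full-resolution dataloader output

def preprocess_student_input(batch):
    # student sees a downsampled copy of the same batch (NCHW tensor assumed)
    return F.interpolate(batch, scale_factor=0.5, mode="bilinear", align_corners=False)

def combine_losses(loss_dict, distill_weight=0.5):
    # down-weight distillation losses relative to the student's own losses
    return {k: (distill_weight * v if k.startswith("kd_") else v) for k, v in loss_dict.items()}
```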
-
- 17 Nov, 2022 2 commits
-
-
Matthew Yu authored
Summary: Pull Request resolved: https://github.com/facebookresearch/d2go/pull/419 This diff adds a metadata class `LayerLossMetadata` to help keep track of the losses we want to compute over layers. The class contains the type of loss, the loss name, and the layer names. This diff also adds a helper function that iterates over a list of `LayerLossMetadata` and returns a dict containing the results. Reviewed By: chihyaoma Differential Revision: D40286564 fbshipit-source-id: b269dc63cc90a437ca279379d759c3106016327c
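A hedged sketch of the metadata class and the helper described above; the field and function names are assumptions based on the summary:
```python
from dataclasses import dataclass
import torch.nn as nn

@dataclass
class LayerLossMetadata:
    loss: nn.Module   # loss module to run
    name: str         # key used in the returned loss dict
    layer0: str       # first layer (e.g. student) whose cached output is used
    layer1: str       # second layer (e.g. teacher) whose cached output is used

def compute_layer_losses(layer_losses, cache0, cache1):
    # cache0/cache1 map layer names to cached outputs recorded during forward
    return {ll.name: ll.loss(cache0[ll.layer0], cache1[ll.layer1]) for ll in layer_losses}
```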
-
Matthew Yu authored
Summary: Pull Request resolved: https://github.com/facebookresearch/d2go/pull/418 This diff adds a function that can be used to add `CachedLayer`s to a model. The function iterates over named modules and dynamically mixes `CachedLayer` into the target modules. This diff also adds a function to remove the cached layers. Reviewed By: Minione Differential Revision: D40285806 fbshipit-source-id: 3137d19927d8fb9ec924a77c9085aea29fe94d5e
-
- 16 Nov, 2022 2 commits
-
-
Matthew Yu authored
Summary: Pull Request resolved: https://github.com/facebookresearch/d2go/pull/417 This diff adds a layer `CachedLayer` which is meant to be used with dynamic mixin. This layer runs the original module and clones the output into a dictionary provided by the user. The main use case is distillation, where we dynamically mix these layers into the layers on which the user wants to compute various losses. See subsequent diffs for the integration with distillation. Reviewed By: Minione Differential Revision: D40285573 fbshipit-source-id: 2058deff8b96f63aebd1e9b9933a5352b5197111
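A minimal sketch of the caching idea, not the exact d2go `CachedLayer`; the commented usage shows what "dynamic mixin" boils down to:
```python
import torch
import torch.nn as nn

class CachedLayer(nn.Module):
    # `cache` (a dict) and `label` (this layer's name) are attached to the
    # instance when the layer is mixed in.
    def forward(self, *args, **kwargs):
        output = super().forward(*args, **kwargs)  # the wrapped module's forward
        self.cache[self.label] = output.clone() if torch.is_tensor(output) else output
        return output

# Example of mixing it in dynamically (module path is a placeholder):
# conv = model.backbone.stem.conv
# conv.__class__ = type("CachedConv2d", (CachedLayer, type(conv)), {})
# conv.cache, conv.label = {}, "backbone.stem.conv"
```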
-
Matthew Yu authored
Summary: Pull Request resolved: https://github.com/facebookresearch/d2go/pull/416 Distillation assumes the teacher model has an attribute "device". Sometimes this attribute is actually a property (e.g., GeneralizedRCNN) but there is no guarantee that it exists. We add a helper function to move the model to the device and add this attribute if needed. Reviewed By: chihyaoma Differential Revision: D40283954 fbshipit-source-id: 42921653eac8a79499e22edac29aa6aeac016e8a
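A minimal sketch of such a helper; the name `set_device` is an assumption and the real d2go function may differ:
```python
import torch
import torch.nn as nn

def set_device(model: nn.Module, device: str) -> nn.Module:
    # move the model, then make sure a `device` attribute exists afterwards
    device = torch.device(device)
    model = model.to(device)
    if not hasattr(model, "device"):  # e.g. GeneralizedRCNN already exposes it as a property
        model.device = device
    return model
```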
-
- 15 Nov, 2022 1 commit
-
-
Matthew Yu authored
Summary: Pull Request resolved: https://github.com/facebookresearch/d2go/pull/415 The user can build a teacher by providing a trained config. However, this model may have been trained on GPU while the user wants to load it on CPU; this diff supports that use case by letting the user specify `cfg.DISTILLATION.TEACHER.DEVICE` as an override. Reviewed By: sstsai-adl Differential Revision: D40125236 fbshipit-source-id: f1fd797a155e12b31bb7fcbc5e4997ee8eb23539
-
- 09 Nov, 2022 1 commit
-
-
Sam Tsai authored
Summary: Pull Request resolved: https://github.com/facebookresearch/d2go/pull/413 Switch to using nearest-pixel interpolation when warping and add a unit test. Reviewed By: wat3rBro Differential Revision: D41042506 fbshipit-source-id: 92b817f21e862422428a0d0df969ec9e037f99fb
-
- 08 Nov, 2022 1 commit
-
-
Sam Tsai authored
Summary: Pull Request resolved: https://github.com/facebookresearch/d2go/pull/410
1. Fix bounding box coordinate warping
2. Add an apply_segmentation warning (will follow up in another fix)
Reviewed By: wat3rBro Differential Revision: D41013775 fbshipit-source-id: 3652b04c1622fe35fa9893dc22350f7d59b37c6e
-
- 03 Nov, 2022 2 commits
-
-
Yanghan Wang authored
Summary: Pull Request resolved: https://github.com/facebookresearch/d2go/pull/405
- Use the non-hacky way (added in D40818736, https://github.com/facebookresearch/detectron2/pull/4626) to customize the offloaded backend for DatasetFromList.
- In `D2Go`, switch to using `SharedList` (added in D40789062, https://github.com/facebookresearch/mobile-vision/pull/120) by default to save RAM, and optionally use `DiskCachedList` to save even more RAM.

Local benchmarking results (using a ~2.4 GiB dataset) in dev mode:

| RAM usage (RES, SHR) | No-dataset | Naive | NumpySerializedList | SharedList | DiskCachedList |
| -- | -- | -- | -- | -- | -- |
| Master GPU worker | 8.0g, 2.8g | 21.4g, 2.8g | 11.6g, 2.8g | 11.5g, 5.2g | -- |
| Non-master GPU worker | 7.5g, 2.8g | 21.0g, 2.8g | 11.5g, 2.8g | 8.0g, 2.8g | -- |
| Per data loader worker | 2.0g, 1.0g | 14.0g, 1.0g | 4.4g, 1.0g | 2.1g, 1.0g | -- |

- The memory usage (RES, SHR) is taken from the `top` command. `RES` is the total memory used per process; `SHR` shows how much of `RES` can be shared.
- Experiments are done using 2 GPUs and 2 data loader workers per GPU, so there are 6 processes in total; the **numbers are per-process**.
- `No-dataset`: the same job run with a tiny dataset (only 4.47 MiB after serialization); since its RAM usage is negligible, it shows the floor RAM usage.
- The other experiments use a dataset of **2413.57 MiB** after serialization.
- `Naive`: the vanilla version, if we don't offload the dataset to other storage.
- `NumpySerializedList`: this optimization was added a long time ago in D19896490. I recall that the RAM was indeed shared across data loader workers, but there seems to have been a regression: now basically every process has a copy of the data.
- `SharedList`: enabled in this diff. It shows that only the master GPU worker needs extra RAM. It's interesting that it uses 3.5 GB more RAM than the other ranks, while the data itself is 2.4 GB. I'm not sure whether this is overhead of the storage itself or overhead caused by sharing it with other processes; since the non-master GPU worker with `NumpySerializedList` also uses 11.5g of RAM, we probably don't need to worry too much about it.
- `DiskCachedList`: didn't benchmark; it should have no extra RAM usage.

Using the above numbers for a typical 8-GPU, 4-worker training, and assuming the OS and other programs take 20-30 GB of RAM, the current training uses `11.6g * 8 + 4.4g * 8*4 = 233.6g` of RAM, on the edge of causing OOM on a 256 GB machine. This aligns with our experience that it supports a ~2 GB dataset. After the change, the training will use only `(11.5g * 7 + 8.0g) + 2.1g * 8*4 = 155.7g` of RAM, which gives much more headroom, so we can train with a much larger dataset (e.g. 20 GB) or use more data loader workers (e.g. 8 workers). Reviewed By: sstsai-adl Differential Revision: D40819959 fbshipit-source-id: fbdc9d2d1d440e14ae8496be65979a09f3ed3638
-
Yanghan Wang authored
Summary: Pull Request resolved: https://github.com/facebookresearch/d2go/pull/409 `assert_close` is preferred over `assert_allclose`: https://github.com/pytorch/pytorch/issues/61844 `assert_allclose` was removed yesterday in https://github.com/pytorch/pytorch/pull/87974, causing tests to fail, e.g. https://github.com/facebookresearch/d2go/actions/runs/3389194553/jobs/5632021291 Reviewed By: sstsai-adl Differential Revision: D41000306 fbshipit-source-id: 7bd1cb9d5edf0a4609a909e2283df411bcabdf13
-
- 28 Oct, 2022 2 commits
-
-
Yanghan Wang authored
Summary: Pull Request resolved: https://github.com/facebookresearch/d2go/pull/404 `get_default_cfg` is now a class method since the stack of D37294926 (https://github.com/facebookresearch/d2go/commit/b077a2c13845d4ef8481979d64345368864fe5ff); this diff updates call sites found with biggrep, replacing "Runner().get_default_cfg" with "Runner.get_default_cfg". Reviewed By: itomatik Differential Revision: D40707898 fbshipit-source-id: 2b56545769d930d34dad8814d5bfeba4c54224fd
-
Sam Tsai authored
Summary: Pull Request resolved: https://github.com/facebookresearch/d2go/pull/392
1. Move scale adjustment to a separate function and expose an option to disable it
2. Add an option to keep the original image instead of creating a square image
Reviewed By: wat3rBro Differential Revision: D40403705 fbshipit-source-id: 6c35a9a1fe3ef868e5f0b2204874fd028776e26a
-
- 26 Oct, 2022 4 commits
-
-
Yanghan Wang authored
Summary: Pull Request resolved: https://github.com/facebookresearch/d2go/pull/400 Before this diff, the bootstrap couldn't handle the following grammar: https://www.internalfb.com/code/fbsource/[010f09214704]/fbcode/mobile-vision/d2go/tests/modeling/test_modeling_distillation.py?lines=231-236 The fix is to apply the truncating trick recursively. Reviewed By: itomatik Differential Revision: D40701375 fbshipit-source-id: 946b6be47aa4b879e2e247b879a0d8b9ac13822b
-
Yanghan Wang authored
Summary: Pull Request resolved: https://github.com/facebookresearch/d2go/pull/389 Reviewed By: itomatik Differential Revision: D39631903 fbshipit-source-id: 1668a8b06260d02b40208b3dda3cbade0a12bc16
-
Sam Tsai authored
Summary: Pull Request resolved: https://github.com/facebookresearch/d2go/pull/393 Add a Gaussian blur augmentation. Reviewed By: tglik Differential Revision: D40404772 fbshipit-source-id: d04774cc8aa9dff00f2b85e9c7feb1b8709edc9e
-
Sam Tsai authored
Summary: Pull Request resolved: https://github.com/facebookresearch/d2go/pull/391
1. Add a unit test for the affine augmentation
2. Fix off-by-one affine scaling (note: this changes augmentation behavior)
Reviewed By: tglik Differential Revision: D40374538 fbshipit-source-id: ea037195b9a7dc1b4e254bf35216a8dac610bf29
-
- 20 Oct, 2022 1 commit
-
-
Yanghan Wang authored
Summary: Pull Request resolved: https://github.com/facebookresearch/d2go/pull/394 Reviewed By: wrlife Differential Revision: D40533013 fbshipit-source-id: c4c0b08b8afb0c5c622a945bd2ef9c3e682f3039
-
- 06 Oct, 2022 1 commit
-
-
Zhanibek Datbayev authored
Summary: Pull Request resolved: https://github.com/facebookresearch/d2go/pull/383 Our optimizer tests have become flaky because they often time out:
* https://www.internalfb.com/intern/test/281475048520501?ref_report_id=0
* https://www.internalfb.com/intern/test/281475048520502?ref_report_id=0

This diff splits the tests that run multiple optimizers through training. It also reduces the number of iterations and the number of datapoints for evaluation. At the moment we aren't really verifying the end result value, so this reduction shouldn't matter. Reviewed By: tglik Differential Revision: D40124949 fbshipit-source-id: 5d8f309106dd5f1829f291784d36768dab2e9eca
-
- 03 Oct, 2022 1 commit
-
-
Francisc Bungiu authored
Summary: Pull Request resolved: https://github.com/facebookresearch/d2go/pull/378 Some hooks need access to cfg to be initialized correctly. Pass cfg down to the hook registration method. Reviewed By: ertrue, miqueljubert Differential Revision: D39303862 fbshipit-source-id: 931c356c7045f95fc0af5b20c7782ea4d1aff138
-
- 01 Oct, 2022 1 commit
-
-
Yanghan Wang authored
Summary: Pull Request resolved: https://github.com/facebookresearch/d2go/pull/379 This diff ~~~prototypes~~~ implements replacing the `custom_convert_fx` API with a callback. Reviewed By: LiamZhuuu Differential Revision: D39859228 fbshipit-source-id: 34719d1758c4afa7e47930c12d3443813d3f4546
-
- 28 Sep, 2022 1 commit
-
-
Matthew Yu authored
Summary: Pull Request resolved: https://github.com/facebookresearch/d2go/pull/371 In a previous iteration of this diff, we specified the teacher model in the same config as the student model, something like:
```
# config.py
MODEL:
  FBNET_V2:
    ...
DISTILLATION:
  TEACHER:
    MODEL:
      FBNET_V2:
        ...
    WEIGHTS: /path/to/teacher/weights
    ...
```
This leads to some oddities in the code, like needing a default config that adds all the required keys for the distillation teacher model. In this diff, we just let the user supply a teacher config (and optionally a runner_name and overwrite opts) and use the supplied runner to build the model:
```
# new_config.py
MODEL:
  FBNET_V2:
    ...
DISTILLATION:
  TEACHER:
    CONFIG_FNAME: /path/to/teacher/config
    RUNNER_NAME: ...
```
This should make it very easy to specify the teacher, as the user can potentially just reuse the trained_config generated by d2go. Reviewed By: newstzpz Differential Revision: D37640041 fbshipit-source-id: 088a636c96f98279c9a04e32d1674f703451aec3
-
- 31 Aug, 2022 2 commits
-
-
Peizhao Zhang authored
Summary: Pull Request resolved: https://github.com/facebookresearch/d2go/pull/355 Switch to using `inference_on_dataset_with_checkpointing` in the default runner. Reviewed By: HarounH Differential Revision: D37215292 fbshipit-source-id: c006784ce0b31700bcbb1f79c303fd791f1561ff
-
Peizhao Zhang authored
Summary: Pull Request resolved: https://github.com/facebookresearch/d2go/pull/354 Allow skipping inference when running evaluation.
* `inference_on_dataset_with_checkpointing` works like `inference_on_dataset` in d2 but allows skipping the inference step if the evaluator has cached the results.
* If the evaluator has a function `could_skip_process` and it returns True, inference is skipped and only `evaluator.reset()` and `evaluator.evaluate()` are called.
Reviewed By: wat3rBro Differential Revision: D37213004 fbshipit-source-id: d12cc480589ff04fd8dbb42b22633ab34bc4bf63
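A hedged sketch of the skip logic, not the actual d2go implementation:
```python
from detectron2.evaluation import inference_on_dataset

def inference_on_dataset_with_checkpointing(model, data_loader, evaluator):
    could_skip = getattr(evaluator, "could_skip_process", None)
    if could_skip is not None and could_skip():
        # evaluator already has cached results: skip the inference pass
        evaluator.reset()
        return evaluator.evaluate()
    return inference_on_dataset(model, data_loader, evaluator)  # d2's normal path
```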
-
- 12 Aug, 2022 1 commit
-
-
Pascual Martinez Gomez authored
Summary: Pull Request resolved: https://github.com/facebookresearch/d2go/pull/359 Currently, D2Go is missing the Adam optimizer. This diff addresses the gap. Reviewed By: tglik, asanakoy Differential Revision: D38492151 fbshipit-source-id: 27791c23c73942b7a466f2ca91f6b3631733ba16
-
- 27 Jul, 2022 2 commits
-
-
Mircea Cimpoi authored
Summary: Pull Request resolved: https://github.com/facebookresearch/d2go/pull/344 We need access to the modeling hooks in EMA, e.g. when building the trainer. Reviewed By: wat3rBro Differential Revision: D37997773 fbshipit-source-id: bf4372cd310605fa35aa70f0604b084b047001d8
-
Mircea Cimpoi authored
Summary: Pull Request resolved: https://github.com/facebookresearch/d2go/pull/348 Add a test case to ensure that loading from a config in eval_only mode is covered. Reviewed By: wat3rBro Differential Revision: D38001319 fbshipit-source-id: e6a2edb5001ae87606a3bf48e1355037aee0f9a0
-
- 26 Jul, 2022 1 commit
-
-
Vasilis Vryniotis authored
Summary: Pull Request resolved: https://github.com/facebookresearch/d2go/pull/322 TorchVision has recently added the AugMix augmentation. This diff adds support for this transform to D2Go. Reviewed By: newstzpz Differential Revision: D37578243 fbshipit-source-id: b793715ccb24a3bd999a40c51d8c9a75f22110a3
-
- 19 Jul, 2022 1 commit
-
-
Yanghan Wang authored
Summary: Pull Request resolved: https://github.com/facebookresearch/d2go/pull/339 Add `is_qat` to the lightning codepath. Reviewed By: jerryzh168 Differential Revision: D37937336 fbshipit-source-id: 68debe57c7f7dcf8647fad6ab9e34eff2aaa851c
-