"docs/vscode:/vscode.git/clone" did not exist on "f00cd6efbd00b0273f58c393a617415b5d1d410e"
- 03 Nov, 2022 2 commits
-
-
Yanghan Wang authored
Summary: Pull Request resolved: https://github.com/facebookresearch/d2go/pull/405

- Use the non-hacky way (added in D40818736, https://github.com/facebookresearch/detectron2/pull/4626) to customize the offloaded backend for DatasetFromList.
- In `D2Go`, switch to `SharedList` (added in D40789062, https://github.com/facebookresearch/mobile-vision/pull/120) by default to save RAM, and optionally use `DiskCachedList` to save even more.

Local benchmarking results (using a ~2.4 GiB dataset) in dev mode:

| RAM usage (RES, SHR) | No-dataset | Naive | NumpySerializedList | SharedList | DiskCachedList |
| -- | -- | -- | -- | -- | -- |
| Master GPU worker | 8.0g, 2.8g | 21.4g, 2.8g | 11.6g, 2.8g | 11.5g, 5.2g | -- |
| Non-master GPU worker | 7.5g, 2.8g | 21.0g, 2.8g | 11.5g, 2.8g | 8.0g, 2.8g | -- |
| Per data loader worker | 2.0g, 1.0g | 14.0g, 1.0g | 4.4g, 1.0g | 2.1g, 1.0g | -- |

- The memory usage (RES, SHR) is taken from the `top` command. `RES` is the total memory used per process; `SHR` shows how much of `RES` can be shared.
- Experiments use 2 GPUs and 2 data loader workers per GPU, so there are 6 processes in total; the **numbers are per-process**.
- `No-dataset`: the same job run with a tiny dataset (only 4.47 MiB after serialization); since its RAM usage should be negligible, it shows the floor RAM usage.
- The other experiments use a dataset of **2413.57 MiB** after serialization.
- `Naive`: the vanilla version, without offloading the dataset to other storage.
- `NumpySerializedList`: this optimization was added a long time ago in D19896490. I recall that the RAM was indeed shared across data loader workers, but there seems to have been a regression; now basically all the processes hold a copy of the data.
- `SharedList`: enabled in this diff. It shows that only the master GPU process needs extra RAM. It's interesting that it uses 3.5 GB more RAM than the other ranks, while the data itself is 2.4 GB. I'm not sure whether that is overhead of the storage itself or overhead caused by sharing it with other processes; since a non-master GPU using `NumpySerializedList` also uses 11.5g of RAM, we probably don't need to worry too much about it.
- `DiskCachedList`: not benchmarked; should add no extra RAM usage.

Using the numbers above for a typical 8-GPU, 4-worker training, and assuming the OS and other programs take 20-30 GB of RAM, the current training uses `11.6g * 8 + 4.4g * 8*4 = 233.6g` of RAM, on the edge of causing OOM on a 256 GB machine. This aligns with our experience that it supports a ~2 GB dataset. After the change, the training uses only `(11.5g * 7 + 8.0g) + 2.1g * 8*4 = 155.7g` of RAM, which leaves much more head room; we can thus train with a much larger dataset (e.g. 20 GB) or use more DL workers (e.g. 8 workers).

Reviewed By: sstsai-adl Differential Revision: D40819959 fbshipit-source-id: fbdc9d2d1d440e14ae8496be65979a09f3ed3638
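For illustration, here is a minimal sketch of the serialize-once idea behind `NumpySerializedList`; the `SharedList` and `DiskCachedList` classes in mobile-vision build on the same idea with shared memory or disk as the backing storage, and their exact APIs are not reproduced here.

```python
import pickle
import numpy as np

# Sketch only: each dataset dict is pickled into one flat uint8 numpy buffer, so
# forked data loader workers can share the pages instead of copying a Python list.
class SerializedList:
    def __init__(self, lst):
        serialized = [np.frombuffer(pickle.dumps(x, protocol=-1), dtype=np.uint8) for x in lst]
        self._addr = np.cumsum([len(x) for x in serialized])
        self._data = np.concatenate(serialized)

    def __len__(self):
        return len(self._addr)

    def __getitem__(self, idx):
        start = 0 if idx == 0 else self._addr[idx - 1]
        end = self._addr[idx]
        return pickle.loads(self._data[start:end].tobytes())
```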
-
Yanghan Wang authored
Summary: Pull Request resolved: https://github.com/facebookresearch/d2go/pull/409 `assert_close` is preferred over `assert_allclose`: https://github.com/pytorch/pytorch/issues/61844 `assert_allclose` was removed yesterday in https://github.com/pytorch/pytorch/pull/87974, causing tests to fail, e.g. https://github.com/facebookresearch/d2go/actions/runs/3389194553/jobs/5632021291 Reviewed By: sstsai-adl Differential Revision: D41000306 fbshipit-source-id: 7bd1cb9d5edf0a4609a909e2283df411bcabdf13
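A minimal before/after for the migration (the tensors are illustrative):

```python
import torch

actual = torch.tensor([1.0, 2.0, 3.0])
expected = torch.tensor([1.0, 2.0, 3.0 + 1e-9])

torch.testing.assert_close(actual, expected)        # preferred API
# torch.testing.assert_allclose(actual, expected)   # removed upstream in pytorch/pytorch#87974
```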
-
- 28 Oct, 2022 2 commits
-
-
Yanghan Wang authored
Summary: Pull Request resolved: https://github.com/facebookresearch/d2go/pull/404 `get_default_cfg` is now a class method since the stack of D37294926 (https://github.com/facebookresearch/d2go/commit/b077a2c13845d4ef8481979d64345368864fe5ff); this diff updates call sites using biggrep to replace "Runner().get_default_cfg" with "Runner.get_default_cfg". Reviewed By: itomatik Differential Revision: D40707898 fbshipit-source-id: 2b56545769d930d34dad8814d5bfeba4c54224fd
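An illustrative call-site change (the runner class here is just an example):

```python
from d2go.runner import GeneralizedRCNNRunner  # assumption: any d2go runner class works the same way

cfg = GeneralizedRCNNRunner().get_default_cfg()  # before: required an instance
cfg = GeneralizedRCNNRunner.get_default_cfg()    # after: classmethod, no instance needed
```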
-
Sam Tsai authored
Summary: Pull Request resolved: https://github.com/facebookresearch/d2go/pull/392
1. Move scale adjustment to a separate function and expose an option to disable it.
2. Add an option to keep the original image instead of creating a square image.
Reviewed By: wat3rBro Differential Revision: D40403705 fbshipit-source-id: 6c35a9a1fe3ef868e5f0b2204874fd028776e26a
-
- 26 Oct, 2022 4 commits
-
-
Yanghan Wang authored
Summary: Pull Request resolved: https://github.com/facebookresearch/d2go/pull/400 Before this diff, the bootstrap couldn't handle the following grammar: https://www.internalfb.com/code/fbsource/[010f09214704]/fbcode/mobile-vision/d2go/tests/modeling/test_modeling_distillation.py?lines=231-236 The fix is to recursively apply the truncating trick. Reviewed By: itomatik Differential Revision: D40701375 fbshipit-source-id: 946b6be47aa4b879e2e247b879a0d8b9ac13822b
-
Yanghan Wang authored
Summary: Pull Request resolved: https://github.com/facebookresearch/d2go/pull/389 Reviewed By: itomatik Differential Revision: D39631903 fbshipit-source-id: 1668a8b06260d02b40208b3dda3cbade0a12bc16
-
Sam Tsai authored
Summary: Pull Request resolved: https://github.com/facebookresearch/d2go/pull/393 Add Gaussian blur augmentation. Reviewed By: tglik Differential Revision: D40404772 fbshipit-source-id: d04774cc8aa9dff00f2b85e9c7feb1b8709edc9e
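For reference, a Gaussian blur transform as available in torchvision; whether d2go wraps this exact transform isn't stated in the summary:

```python
import torch
from torchvision import transforms

blur = transforms.GaussianBlur(kernel_size=5, sigma=(0.1, 2.0))
img = torch.rand(3, 224, 224)  # works on tensors or PIL images
blurred = blur(img)
```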
-
Sam Tsai authored
Summary: Pull Request resolved: https://github.com/facebookresearch/d2go/pull/391 1. Add a unit test for affine augmentation. 2. Fix off-by-one affine scaling (Note: this changes augmentation behavior). Reviewed By: tglik Differential Revision: D40374538 fbshipit-source-id: ea037195b9a7dc1b4e254bf35216a8dac610bf29
-
- 20 Oct, 2022 1 commit
-
-
Yanghan Wang authored
Summary: Pull Request resolved: https://github.com/facebookresearch/d2go/pull/394 Reviewed By: wrlife Differential Revision: D40533013 fbshipit-source-id: c4c0b08b8afb0c5c622a945bd2ef9c3e682f3039
-
- 06 Oct, 2022 1 commit
-
-
Zhanibek Datbayev authored
Summary: Pull Request resolved: https://github.com/facebookresearch/d2go/pull/383 Our optimizer tests have become flaky due to often timing out: * https://www.internalfb.com/intern/test/281475048520501?ref_report_id=0 * https://www.internalfb.com/intern/test/281475048520502?ref_report_id=0 {F778290241}{F778290240} This diff splits up the tests that run multiple optimizers through training. It also reduces the number of iterations and the number of datapoints for evaluation. At the moment we aren't really verifying the end result value, so I assume this reduction shouldn't matter. Reviewed By: tglik Differential Revision: D40124949 fbshipit-source-id: 5d8f309106dd5f1829f291784d36768dab2e9eca
-
- 03 Oct, 2022 1 commit
-
-
Francisc Bungiu authored
Summary: Pull Request resolved: https://github.com/facebookresearch/d2go/pull/378 Some hooks need access to cfg to be initialized correctly. Pass cfg down to the hook registration method. Reviewed By: ertrue, miqueljubert Differential Revision: D39303862 fbshipit-source-id: 931c356c7045f95fc0af5b20c7782ea4d1aff138
-
- 01 Oct, 2022 1 commit
-
-
Yanghan Wang authored
Summary: Pull Request resolved: https://github.com/facebookresearch/d2go/pull/379 This diff ~~prototypes~~ implements replacing the `custom_convert_fx` API with a callback. Reviewed By: LiamZhuuu Differential Revision: D39859228 fbshipit-source-id: 34719d1758c4afa7e47930c12d3443813d3f4546
-
- 28 Sep, 2022 1 commit
-
-
Matthew Yu authored
Summary: Pull Request resolved: https://github.com/facebookresearch/d2go/pull/371 In a previous iteration of this diff, we were specifying the teacher model in the same config as the student model, something like:
```
# config.py
MODEL:
  FBNET_V2:
    ...
DISTILLATION:
  TEACHER:
    MODEL:
      FBNET_V2:
        ...
    WEIGHTS: /path/to/teacher/weights
    ...
```
This leads to some oddities in the code, such as needing a default config that adds all the required keys for the distillation teacher model. In this diff, we just let the user supply a teacher config (and optionally runner_name and overwrite opts) and use the supplied runner to build the model:
```
# new_config.py
MODEL:
  FBNET_V2:
    ...
DISTILLATION:
  TEACHER:
    CONFIG_FNAME: /path/to/teacher/config
    RUNNER_NAME: ...
```
This should make it very easy to specify the teacher, as the user can potentially just reuse the trained_config generated in d2go. Reviewed By: newstzpz Differential Revision: D37640041 fbshipit-source-id: 088a636c96f98279c9a04e32d1674f703451aec3
-
- 31 Aug, 2022 2 commits
-
-
Peizhao Zhang authored
Summary: Pull Request resolved: https://github.com/facebookresearch/d2go/pull/355 Switch to using `inference_on_dataset_with_checkpointing` in the default runner. Reviewed By: HarounH Differential Revision: D37215292 fbshipit-source-id: c006784ce0b31700bcbb1f79c303fd791f1561ff
-
Peizhao Zhang authored
Summary: Pull Request resolved: https://github.com/facebookresearch/d2go/pull/354 Allow skipping inference when running evaluation. * `inference_on_dataset_with_checkpointing` works similarly to `inference_on_dataset` in d2 but allows skipping the inference step if the evaluator has cached the results. * If the evaluator has a `could_skip_process` function that returns True, inference is skipped and only `evaluator.reset()` and `evaluator.evaluate()` are called. Reviewed By: wat3rBro Differential Revision: D37213004 fbshipit-source-id: d12cc480589ff04fd8dbb42b22633ab34bc4bf63
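A minimal sketch of the skip logic described above; the exact d2go signature may differ:

```python
from detectron2.evaluation import inference_on_dataset

def inference_on_dataset_with_checkpointing(model, data_loader, evaluator):
    # If the evaluator already has cached results, skip the expensive inference loop.
    could_skip = getattr(evaluator, "could_skip_process", None)
    if could_skip is not None and could_skip():
        evaluator.reset()
        return evaluator.evaluate()
    # Otherwise fall back to the usual detectron2 evaluation flow.
    return inference_on_dataset(model, data_loader, evaluator)
```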
-
- 12 Aug, 2022 1 commit
-
-
Pascual Martinez Gomez authored
Summary: Pull Request resolved: https://github.com/facebookresearch/d2go/pull/359 Currently, D2Go is missing the Adam optimizer. This diff addresses the gap. Reviewed By: tglik, asanakoy Differential Revision: D38492151 fbshipit-source-id: 27791c23c73942b7a466f2ca91f6b3631733ba16
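Illustrative only: the underlying PyTorch optimizer being exposed; the d2go config keys are not shown in the summary:

```python
import torch

model = torch.nn.Linear(4, 2)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3, betas=(0.9, 0.999), weight_decay=0.0)
```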
-
- 27 Jul, 2022 2 commits
-
-
Mircea Cimpoi authored
Summary: Pull Request resolved: https://github.com/facebookresearch/d2go/pull/344 We need access to the modeling hooks in EMA, e.g. when building the trainer. Reviewed By: wat3rBro Differential Revision: D37997773 fbshipit-source-id: bf4372cd310605fa35aa70f0604b084b047001d8
-
Mircea Cimpoi authored
Summary: Pull Request resolved: https://github.com/facebookresearch/d2go/pull/348 Add testcase to ensure loading from config in eval_only is covered. Reviewed By: wat3rBro Differential Revision: D38001319 fbshipit-source-id: e6a2edb5001ae87606a3bf48e1355037aee0f9a0
-
- 26 Jul, 2022 1 commit
-
-
Vasilis Vryniotis authored
Summary: Pull Request resolved: https://github.com/facebookresearch/d2go/pull/322 TorchVision has recently added the AugMix augmentation. This diff adds support for this transform to D2Go. Reviewed By: newstzpz Differential Revision: D37578243 fbshipit-source-id: b793715ccb24a3bd999a40c51d8c9a75f22110a3
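For reference, the upstream transform being wrapped; the d2go config keys for enabling it are not shown in the summary:

```python
import torch
from torchvision import transforms

augmix = transforms.AugMix(severity=3, mixture_width=3)
img = torch.randint(0, 256, (3, 224, 224), dtype=torch.uint8)  # AugMix expects uint8 tensors or PIL images
augmented = augmix(img)
```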
-
- 19 Jul, 2022 1 commit
-
-
Yanghan Wang authored
Summary: Pull Request resolved: https://github.com/facebookresearch/d2go/pull/339 add is_qat to lightning codepath Reviewed By: jerryzh168 Differential Revision: D37937336 fbshipit-source-id: 68debe57c7f7dcf8647fad6ab9e34eff2aaa851c
-
- 14 Jul, 2022 1 commit
-
-
Yanghan Wang authored
Summary: Pull Request resolved: https://github.com/facebookresearch/d2go/pull/335 Manually remove `example_input` from eager-mode-only `prepare_for_quant`. Reviewed By: jerryzh168 Differential Revision: D37838155 fbshipit-source-id: 2d98e0264fc0c40dcf1b6f28f7fc635c52acd75e
-
- 13 Jul, 2022 1 commit
-
-
Yanghan Wang authored
Summary: Pull Request resolved: https://github.com/facebookresearch/d2go/pull/330 - Re-enable `test_qat`. - Remove `example_input` from `DetMetaArchForTest`, since its `custom_prepare_fx` just creates a tensor for avgpool. Reviewed By: jerryzh168 Differential Revision: D37793260 fbshipit-source-id: ec7a825c61292d9c6d792f910a957c1c27832336
-
- 08 Jul, 2022 1 commit
-
-
Yanghan Wang authored
Summary: Pull Request resolved: https://github.com/facebookresearch/d2go/pull/325 `prepare_for_quant_convert` is a confusing name because it only does `convert`; there's no "prepare" in it. It's also FX-only, because eager mode always calls `torch.quantization.convert` and there's no way to customize it. So just call it `custom_convert_fx`. The new name is also unique in fbcode, making it easy to codemod later on. This diff simply does the renaming via biggrep + replace. Reviewed By: jerryzh168 Differential Revision: D37676717 fbshipit-source-id: e7d05eaafddc383dd432986267c945c8ebf94df4
-
- 06 Jul, 2022 1 commit
-
-
Mircea Cimpoi authored
Summary: Pull Request resolved: https://github.com/facebookresearch/d2go/pull/324 Fixes a bug introduced in D37600026 (https://github.com/facebookresearch/d2go/commit/1a18ba3420e3c823accf731a11c0a91dc3babd85): the imports were not updated after moving the modeling hook to registry/builtin.py. Differential Revision: D37646330 fbshipit-source-id: cb763d65e7bbfd07eea6eff61727a42a6fcfbc88
-
- 02 Jul, 2022 1 commit
-
-
Jerry Zhang authored
Summary: Pull Request resolved: https://github.com/facebookresearch/d2go/pull/321 Following up on the BC-breaking change in FX graph mode quantization (https://github.com/pytorch/pytorch/pull/76496), which added example_inputs to prepare_fx and prepare_qat_fx, this fixes the related call sites in mobile-vision and exposes the extra example_inputs argument in some APIs. Reviewed By: wat3rBro Differential Revision: D37163018 fbshipit-source-id: 9f0bb56659345d174a39b6d3bb4408caa553b88d
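A minimal sketch of the post-change FX call shape (plain PyTorch, not d2go's wrappers; the exact qconfig helpers vary by PyTorch version):

```python
import torch
from torch.ao.quantization import get_default_qconfig_mapping
from torch.ao.quantization.quantize_fx import convert_fx, prepare_fx

model = torch.nn.Sequential(torch.nn.Conv2d(3, 8, 3), torch.nn.ReLU()).eval()
example_inputs = (torch.randn(1, 3, 32, 32),)

# prepare_fx now takes example_inputs in addition to the qconfig mapping.
prepared = prepare_fx(model, get_default_qconfig_mapping("fbgemm"), example_inputs)
prepared(*example_inputs)  # calibration pass
quantized = convert_fx(prepared)
```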
-
- 29 Jun, 2022 1 commit
-
-
Yanghan Wang authored
Summary: Pull Request resolved: https://github.com/facebookresearch/d2go/pull/318 Reviewed By: mcimpoi Differential Revision: D37501246 fbshipit-source-id: 6dbe5dcbaf7454f451d4a3bb3fa2d856cc87d5cc
-
- 24 Jun, 2022 1 commit
-
-
Yanghan Wang authored
Summary: Pull Request resolved: https://github.com/facebookresearch/d2go/pull/312 As discussed, we decided not to use a runner instance outside of `main`; previous diffs already solved the prerequisites, and this diff mainly does the renaming.
- Use the runner name (str) in the fblearner ML pipeline.
- Use the runner name (str) in the FBL operator, MAST, and the binary operator.
- Use the runner class as the interface of `main`; it can be either the name of the class (str) or the actual class. The main usage should be the `str` form, so that the class import happens inside `main`. But it's also a common use case to import the runner class and call `main` directly, e.g. for ad-hoc scripts or tests; supporting the actual class makes it easier to modify code for those cases (e.g. a local test class may not have an importable name, so a runner name isn't feasible).
Reviewed By: newstzpz Differential Revision: D37060338 fbshipit-source-id: 879852d41902b87d6db6cb9d7b3e8dc55dc4b976
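A minimal sketch of accepting either a runner name (str) or an actual class, as described above; the helper name is illustrative, not d2go's exact API:

```python
import importlib
from typing import Type, Union

def resolve_runner_class(runner: Union[str, Type]) -> Type:
    # If given a dotted name, import the module and look up the class; otherwise pass through.
    if isinstance(runner, str):
        module_name, _, class_name = runner.rpartition(".")
        runner = getattr(importlib.import_module(module_name), class_name)
    return runner

# Both forms work; the str form delays the import until inside `main`:
# RunnerCls = resolve_runner_class("d2go.runner.GeneralizedRCNNRunner")
# RunnerCls = resolve_runner_class(GeneralizedRCNNRunner)
```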
-
- 21 Jun, 2022 1 commit
-
-
Andrew Or authored
Summary: Pull Request resolved: https://github.com/facebookresearch/d2go/pull/308 D37088095 made BC breaking changes in PyTorch, but missed fixing a few callsites. This is the second diff to fix the affected tests. Reviewed By: jerryzh168, wat3rBro Differential Revision: D37282853 fbshipit-source-id: 945c3628f53cefd2fbc2526630d1567a2b1cee25
-
- 20 Jun, 2022 1 commit
-
-
Yanghan Wang authored
Summary: Pull Request resolved: https://github.com/facebookresearch/d2go/pull/305 One benefit of having separate registries for D2's and D2Go's meta-archs is that there's no need to patch D2's original meta-archs; we can just register new meta-archs in D2Go directly. This diff removes `patch_d2_meta_arch` and makes things simpler. Reviewed By: mcimpoi Differential Revision: D37246483 fbshipit-source-id: c8b7adef1fa7a5ff2f89c376c7e3b39bec8f19ee
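For illustration, registering a new meta-arch directly instead of patching detectron2's built-ins; the class below is purely hypothetical:

```python
import torch.nn as nn
from detectron2.modeling import META_ARCH_REGISTRY

@META_ARCH_REGISTRY.register()
class MyTinyMetaArch(nn.Module):
    def __init__(self, cfg):
        super().__init__()
        self.backbone = nn.Conv2d(3, 8, 3)

    def forward(self, batched_inputs):
        # Toy forward: run the backbone on each image in the batch.
        return [{"features": self.backbone(x["image"].float()[None])} for x in batched_inputs]

# Selected via cfg.MODEL.META_ARCHITECTURE = "MyTinyMetaArch"
```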
-
- 17 Jun, 2022 1 commit
-
-
Yanghan Wang authored
Summary: Pull Request resolved: https://github.com/facebookresearch/d2go/pull/298 Reviewed By: tglik, newstzpz Differential Revision: D37152248 fbshipit-source-id: 58a6899c5f6465f36961a2ebf60a64f20509cec2
-
- 16 Jun, 2022 1 commit
-
-
Matthew Yu authored
Summary: Pull Request resolved: https://github.com/facebookresearch/d2go/pull/299 This implements the first iteration of generalized distillation in D2Go. The functionality is separated into the following:

=> Adding distillation functionality without the user changing their meta architecture: class DistillationModelingHook
* This is an implementation detail that we hide from the user.
* We use a dynamic mixin to add functionality to the user's model. In this way, the original (student) model retains all attributes but the mixin class overrides the forward (and provides more functionality like teacher updates).
* Teacher building currently only supports loading a torchscript model; pytorch compatibility comes in later diffs.

=> Implementing distillation methods: class DistillationAlgorithm
* The user can use a default algorithm (e.g., LabelDistillation) or create their own. This is where we specify the overridden forward function of the model and any other distillation requirements (e.g., updating the weights of the teacher model).
* The basic LabelDistillation allows a user to use a teacher model during training to relabel the gt.

=> User customization: class DistillationHelper
* This is what we expect the user to customize. As an example, the user probably needs to write their own pseudo_labeler to take batched_inputs and relabel them with the teacher.

Both DistillationHelper and DistillationAlgorithm use a registry so that users can add their customization in their own code and use these customizations by specifying them in the config. Reviewed By: newstzpz Differential Revision: D36708227 fbshipit-source-id: bc427c5d42d0c7ff4d839bf10782efac24dea107
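A minimal sketch of the dynamic-mixin idea (names here are illustrative, not d2go's actual implementation): the student instance keeps all of its attributes, but its class is swapped at runtime so the mixin can override `forward` and consult a pseudo labeler built around the teacher.

```python
import torch
import torch.nn as nn

class DistillationMixin:
    def forward(self, batched_inputs):
        # Relabel the inputs with the user-supplied pseudo labeler (wrapping the teacher),
        # then run the student's original forward; super() resolves to the student's class.
        with torch.no_grad():
            batched_inputs = self._pseudo_labeler(batched_inputs)
        return super().forward(batched_inputs)

def apply_distillation_mixin(model: nn.Module, pseudo_labeler) -> nn.Module:
    # Swap the instance's class for a subclass with the mixin first in the MRO;
    # the student object is modified in place and keeps all existing attributes.
    model.__class__ = type(
        "Distillation" + type(model).__name__, (DistillationMixin, type(model)), {}
    )
    model._pseudo_labeler = pseudo_labeler
    return model
```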
-
- 10 Jun, 2022 1 commit
-
-
John Reese authored
Summary: pyfmt now specifies a target Python version of 3.8 when formatting with black. With this new config, black adds trailing commas to all multiline function calls. This applies the new formatting as part of rolling out the linttool-integration for pyfmt. paintitblack Reviewed By: zertosh, lisroach Differential Revision: D37084377 fbshipit-source-id: 781a1b883a381a172e54d6e447137657977876b4
-
- 09 Jun, 2022 1 commit
-
-
Yanghan Wang authored
Summary: Pull Request resolved: https://github.com/facebookresearch/d2go/pull/274 X-link: https://github.com/facebookresearch/mobile-vision/pull/76 TL;DR: this diff consolidates the `distributed_helper` of `mobile_cv`; it (together with `mobile_cv`'s `comm` module) should be the go-to library for dealing with DDP. D2Go's `distributed` is now built on top of `mobile_cv`'s `distributed_helper`. Reviewed By: newstzpz Differential Revision: D36787336 fbshipit-source-id: 640c9dcff5eec534e7894c75cfdf0a12d21c297e
-
- 05 Jun, 2022 2 commits
-
-
Peizhao Zhang authored
Summary: Pull Request resolved: https://github.com/facebookresearch/d2go/pull/277 Support cfg_diff for mismatched keys. Reviewed By: wat3rBro Differential Revision: D36737254 fbshipit-source-id: a1c189c92a24f3c109d9a427f135e53876b91624
-
Peizhao Zhang authored
Summary: Pull Request resolved: https://github.com/facebookresearch/d2go/pull/259 Added randaug and trivialaug (RandAugment and TrivialAugment) to d2go. * Only applied to images. Reviewed By: wat3rBro Differential Revision: D36605753 fbshipit-source-id: f72ea6f3de65ab115e4c6612343e93a092cbd7ea
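For reference, the corresponding upstream torchvision transforms; the d2go config keys for enabling them are not shown in the summary:

```python
import torch
from torchvision import transforms

randaug = transforms.RandAugment(num_ops=2, magnitude=9)
trivialaug = transforms.TrivialAugmentWide()

img = torch.randint(0, 256, (3, 224, 224), dtype=torch.uint8)  # both expect uint8 tensors or PIL images
out = trivialaug(randaug(img))
```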
-
- 02 Jun, 2022 2 commits
-
-
Miquel Jubert Hermoso authored
Summary: Pull Request resolved: https://github.com/facebookresearch/d2go/pull/238 *This diff is part of a stack which has the goal of "buckifying" D2Go core and enabling autodeps and other tooling. The last diff in the stack introduces the TARGETS. The earlier diffs in the stack resolve circular dependencies and other issues which prevent the buckification from occurring.* Following the comments in an abandoned diff, split the export code into two files, each with its corresponding dependencies: exporter and api. api.py contains the components that have few dependencies, so it can be imported basically anywhere without circular dependencies. exporter.py contains the utilities that are used for export operations, for example in the exporter binary. Reviewed By: mcimpoi Differential Revision: D36166603 fbshipit-source-id: 25ded0b3925464c05be4048472a4c2ddcdb17ecf
-
Yanghan Wang authored
Summary: Pull Request resolved: https://github.com/facebookresearch/d2go/pull/267 https://www.internalfb.com/intern/test/281475036363610?ref_report_id=0 is flaky; this is caused by running multiple tests at the same time, and cleanup is not handled well in that case. Reviewed By: tglik Differential Revision: D36787035 fbshipit-source-id: 6a478318fe011af936dd10fa564519c8c0615ed3
-
- 28 May, 2022 1 commit
-
-
Yanghan Wang authored
Summary: Pull Request resolved: https://github.com/facebookresearch/d2go/pull/264 Reviewed By: tglik Differential Revision: D36427439 fbshipit-source-id: 3502d61ccb3c3f67d7ccfc1f55777c74f6c3b970
-
- 27 May, 2022 1 commit
-
-
Yanghan Wang authored
Summary: Pull Request resolved: https://github.com/facebookresearch/d2go/pull/263 Re-use the code in D36651241 Reviewed By: tglik Differential Revision: D36682041 fbshipit-source-id: 1bf30afbbb69d10fc11f9161e29319f180c65e2f
-
- 25 May, 2022 1 commit
-
-
Yanghan Wang authored
Summary: Pull Request resolved: https://github.com/facebookresearch/d2go/pull/261 X-link: https://github.com/facebookresearch/mobile-vision/pull/71 `is_oss` and `fb_overwritable` are also needed in `mobile_cv`, move them from d2go. Reviewed By: zhanghang1989 Differential Revision: D36655821 fbshipit-source-id: 421c4d22d4c4620678908fe13d6e47ab39604ae7
-