"vscode:/vscode.git/clone" did not exist on "522726228ad9ddf59a35af47acf5186a8eca7708"
- 31 Oct, 2022 1 commit
-
-
Francisc Bungiu authored
Summary: Pull Request resolved: https://github.com/facebookresearch/d2go/pull/403 `cfg.SOLVER.AMP.ENABLED` enables mixed precision, but this only works for V100 GPUs. For A100s, the equivalent is to enable TF32. Reviewed By: tglik Differential Revision: D40675242 fbshipit-source-id: 5cc3d12cd3d7ec76665e0907ecc87fc5f64d73f0
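For reference, enabling TF32 on A100s amounts to flipping standard PyTorch backend flags; a minimal sketch (the config key this diff adds is not shown here):
```
import torch

# Allow TF32 tensor cores for matmuls and cuDNN convolutions on Ampere
# (A100) GPUs; these are the standard PyTorch flags for TF32.
torch.backends.cuda.matmul.allow_tf32 = True
torch.backends.cudnn.allow_tf32 = True
```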
-
- 28 Oct, 2022 2 commits
-
-
Yanghan Wang authored
Summary: Pull Request resolved: https://github.com/facebookresearch/d2go/pull/404 `get_default_cfg` is now a class method since the stack of D37294926 (https://github.com/facebookresearch/d2go/commit/b077a2c13845d4ef8481979d64345368864fe5ff); this diff updates call sites (found via biggrep) to replace "Runner().get_default_cfg" with "Runner.get_default_cfg". Reviewed By: itomatik Differential Revision: D40707898 fbshipit-source-id: 2b56545769d930d34dad8814d5bfeba4c54224fd
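The call-site change is mechanical; a minimal sketch of why it works, using a stand-in `Runner` (not the actual d2go class):
```
class Runner:
    @classmethod
    def get_default_cfg(cls):
        # A classmethod is callable on the class itself, so no instance
        # needs to be constructed just to fetch the default config.
        return {"SOLVER": {"AMP": {"ENABLED": False}}}

cfg_old = Runner().get_default_cfg()  # old call site: needless instantiation
cfg_new = Runner.get_default_cfg()    # updated call site
```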
-
Sam Tsai authored
Summary: Pull Request resolved: https://github.com/facebookresearch/d2go/pull/392 1. Moved scale adjustment to a separate function and exposed the option to disable it. 2. Added an option to keep the original image instead of creating a square image. Reviewed By: wat3rBro Differential Revision: D40403705 fbshipit-source-id: 6c35a9a1fe3ef868e5f0b2204874fd028776e26a
-
- 27 Oct, 2022 2 commits
-
-
Tsahi Glik authored
Summary: Pull Request resolved: https://github.com/facebookresearch/d2go/pull/401 As a follow-up to D40001329 (https://github.com/facebookresearch/d2go/commit/69bf820c64cd0ffb6a84f465199c9134814cf58e): the export runs the main func without launching distributed workers, so it needs to set the shared context explicitly. Reviewed By: wat3rBro Differential Revision: D40708631 fbshipit-source-id: 7689a45dff383ba2cce01d33d3be95d612269fbe
-
Yanghan Wang authored
Summary: Pull Request resolved: https://github.com/facebookresearch/d2go/pull/402 Print out the optimizer for easier debugging. Reviewed By: newstzpz Differential Revision: D40701959 fbshipit-source-id: 7b610e8f5771409632ae056cb9d34138b331adbc
-
- 26 Oct, 2022 5 commits
-
-
Matthew Yu authored
Summary: Pull Request resolved: https://github.com/facebookresearch/d2go/pull/399 Freezing the model before running quantization causes an issue with loading a saved checkpoint because fusing does not support FrozenBatchNorm2d (which means the checkpoint could have a fused weight conv.bn.weight whereas the model would have an unfused weight bn.weight). The longer-term solution is to add FrozenBatchNorm2d to the fusing support, but there are some subtle issues there that will take some time to fix:
* need to move FrozenBatchNorm2d out of D2 and into the mobile_cv lib
* the current fuser has options to add new bn ops (e.g., FrozenBatchNorm2d), which we use with ops like SyncBN, but this is currently only tested with inference, so we need to write some additional checks on training
The swap will make freezing compatible with QAT and should still work with standard models. One subtle potential issue is that the current BN swap assumes that BN is a leaf node. If a user runs QAT without fusing BN, the BN will no longer be a leaf node, as it will obtain an activation_post_process module in order to record the output. The result is that BN will not be frozen in this specific instance. This should not occur, as BN is usually fused. A small adjustment to the BN swap would be to swap the BN regardless of whether it is a leaf node (but we have to check whether the activation_post_process module is retained). Another long-term consideration is moving both freezing and quant to modeling hooks so the user can decide the order. Reviewed By: wat3rBro Differential Revision: D40496052 fbshipit-source-id: 0d7e467b833821f7952cd2fce459ae1f76e1fa3b
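A minimal sketch of the kind of BN swap described above, assuming detectron2's FrozenBatchNorm2d and the "BN must be a leaf node" caveat; this is illustrative, not the diff's actual implementation:
```
import torch.nn as nn
from detectron2.layers import FrozenBatchNorm2d

def freeze_bn_if_leaf(model: nn.Module) -> None:
    # Replace leaf BatchNorm2d children with FrozenBatchNorm2d. A BN that
    # acquired an activation_post_process submodule (QAT without fusing)
    # is no longer a leaf and would be skipped -- the caveat noted above.
    for name, child in model.named_children():
        if isinstance(child, nn.BatchNorm2d) and not list(child.children()):
            frozen = FrozenBatchNorm2d(child.num_features)
            frozen.load_state_dict(child.state_dict(), strict=False)
            setattr(model, name, frozen)
        else:
            freeze_bn_if_leaf(child)
```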
-
Yanghan Wang authored
Summary: Pull Request resolved: https://github.com/facebookresearch/d2go/pull/400 Before this diff, the bootstrap couldn't handle the following grammar: https://www.internalfb.com/code/fbsource/[010f09214704]/fbcode/mobile-vision/d2go/tests/modeling/test_modeling_distillation.py?lines=231-236 The fix is to apply the truncating trick recursively. Reviewed By: itomatik Differential Revision: D40701375 fbshipit-source-id: 946b6be47aa4b879e2e247b879a0d8b9ac13822b
-
Yanghan Wang authored
Summary: Pull Request resolved: https://github.com/facebookresearch/d2go/pull/389 Reviewed By: itomatik Differential Revision: D39631903 fbshipit-source-id: 1668a8b06260d02b40208b3dda3cbade0a12bc16
-
Sam Tsai authored
Summary: Pull Request resolved: https://github.com/facebookresearch/d2go/pull/393 Add gaussian blur augmentation. Reviewed By: tglik Differential Revision: D40404772 fbshipit-source-id: d04774cc8aa9dff00f2b85e9c7feb1b8709edc9e
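For illustration only (the op name and config added by this diff aren't shown here), a Gaussian-blur augmentation along these lines using torchvision:
```
import torch
from torchvision import transforms

# GaussianBlur samples sigma uniformly from the given range on every call,
# which is the usual way the blur strength is randomized during training.
blur = transforms.GaussianBlur(kernel_size=5, sigma=(0.1, 2.0))
blurred = blur(torch.rand(3, 224, 224))
```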
-
Sam Tsai authored
Summary: Pull Request resolved: https://github.com/facebookresearch/d2go/pull/391 1. Add a unit test for affine augmentation. 2. Fix off-by-one affine scaling (note: this changes augmentation behavior). Reviewed By: tglik Differential Revision: D40374538 fbshipit-source-id: ea037195b9a7dc1b4e254bf35216a8dac610bf29
-
- 24 Oct, 2022 1 commit
-
-
Matteo Presutto authored
Summary: Pull Request resolved: https://github.com/facebookresearch/d2go/pull/395 Having a module as an instance variable prevents multiprocessing from spawning children using Pickle and creates a deadlock, making Motion Blur incompatible with multiprocessing. Reviewed By: wat3rBro Differential Revision: D40411481 LaMa Project: L1082110 fbshipit-source-id: b3e44b438367100044b948b5ea616c5c6fd41d3d
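A common fix for this class of bug, sketched here with a hypothetical `MotionBlurAug` (names and kernel logic are illustrative, not the diff's code): build the module lazily inside the worker instead of storing it on the object that gets pickled at spawn time:
```
import torch

class MotionBlurAug:
    """Hypothetical augmentation that stays picklable: the nn.Module is
    created lazily in the worker process, not stored before spawning."""

    def __init__(self, kernel_size: int = 7):
        self.kernel_size = kernel_size
        self._conv = None  # not built until first use inside the worker

    def __call__(self, img: torch.Tensor) -> torch.Tensor:
        if self._conv is None:
            # Depthwise conv standing in for a real motion-blur kernel.
            self._conv = torch.nn.Conv2d(
                3, 3, self.kernel_size, padding="same", groups=3, bias=False
            )
        return self._conv(img.unsqueeze(0)).squeeze(0)
```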
-
- 23 Oct, 2022 1 commit
-
-
Tsahi Glik authored
Summary: X-link: https://github.com/facebookresearch/mobile-vision/pull/116 Pull Request resolved: https://github.com/facebookresearch/d2go/pull/398 D2Go doesn't have a per-node initialization API, only per-worker initialization that happens in each subprocess. Some projects (like IOBT) need a way to do shared initialization before spawning all the workers in subprocesses and to pass this initialized shared context to the workers. This diff adds an API to create a shared context object before launching workers; the runners inside the workers then use this shared context after launch. Reviewed By: wat3rBro Differential Revision: D40001329 fbshipit-source-id: 231a4e7e4da7b5db50849176c58b104c4565306a
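The intended flow, as a hedged sketch — `SharedContext`, `build_shared_context`, and `worker_main` are hypothetical names, not the actual d2go API:
```
from dataclasses import dataclass

@dataclass
class SharedContext:
    # Anything that must be initialized exactly once per node, before
    # the distributed workers are spawned.
    asset_path: str

def build_shared_context() -> SharedContext:
    # Per-node initialization: runs once in the parent process.
    return SharedContext(asset_path="/tmp/shared_assets")

def worker_main(shared: SharedContext) -> None:
    # Per-worker initialization: runs in every spawned subprocess and
    # receives the already-built shared context.
    print(f"worker using {shared.asset_path}")

if __name__ == "__main__":
    ctx = build_shared_context()
    worker_main(ctx)  # real launch code would pass ctx to every worker
```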
-
- 20 Oct, 2022 2 commits
-
-
Yanghan Wang authored
Summary: Pull Request resolved: https://github.com/facebookresearch/d2go/pull/394 Reviewed By: wrlife Differential Revision: D40533013 fbshipit-source-id: c4c0b08b8afb0c5c622a945bd2ef9c3e682f3039
-
Zecheng He authored
Summary: Pull Request resolved: https://github.com/facebookresearch/d2go/pull/384 Random resize augmentation: randomly pick a scale from the shape_list and resize to that scale. Example config:
D2GO_DATA:
  AUG_OPS:
    TRAIN: [ RandomResizeOp::{"shape_list": [[224, 224], [256, 256], [320, 320]]} ]
Reviewed By: XiaoliangDai Differential Revision: D40230332 fbshipit-source-id: 60a48f85240aef673033d48db4662899dc90bef4
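A minimal sketch of what such an op does, using plain PyTorch resizing (illustrative, not the actual RandomResizeOp implementation):
```
import random
import torch
import torch.nn.functional as F

def random_resize(img: torch.Tensor, shape_list) -> torch.Tensor:
    # Pick one target (H, W) uniformly from shape_list and resize to it.
    h, w = random.choice(shape_list)
    return F.interpolate(
        img.unsqueeze(0), size=(h, w), mode="bilinear", align_corners=False
    ).squeeze(0)

out = random_resize(
    torch.rand(3, 480, 640), [[224, 224], [256, 256], [320, 320]]
)
```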
-
- 14 Oct, 2022 1 commit
-
-
Jiaxu Zhu authored
Summary: Pull Request resolved: https://github.com/facebookresearch/d2go/pull/386 Pull Request resolved: https://github.com/facebookresearch/d2go/pull/382 Pull Request resolved: https://github.com/facebookresearch/d2go/pull/364 As title: when `QUANTIZATION.BACKEND` is set to `turing`, call `odai.transforms` APIs instead of OSS quantization. Also updated `image_classification` as an example of getting a Turing-quantized model via `custom_prepare_fx/custom_convert_fx`. Reviewed By: wat3rBro Differential Revision: D40282390 fbshipit-source-id: 7d6509969cfe8537153e1d59f21967eeb7801fd1
-
- 06 Oct, 2022 1 commit
-
-
Zhanibek Datbayev authored
Summary: Pull Request resolved: https://github.com/facebookresearch/d2go/pull/383 Our optimizer tests have become flaky due to often timing out:
* https://www.internalfb.com/intern/test/281475048520501?ref_report_id=0
* https://www.internalfb.com/intern/test/281475048520502?ref_report_id=0
{F778290241}{F778290240} This diff splits up the tests that run multiple optimizers through training. Also reduced the number of iterations and the number of datapoints for evaluation. At the moment we aren't really verifying the end-result values, so I assume this reduction shouldn't matter. Reviewed By: tglik Differential Revision: D40124949 fbshipit-source-id: 5d8f309106dd5f1829f291784d36768dab2e9eca
-
- 05 Oct, 2022 2 commits
-
-
Yanghan Wang authored
Summary: X-link: https://github.com/facebookresearch/mobile-vision/pull/110 Pull Request resolved: https://github.com/facebookresearch/d2go/pull/380
- remove `alias`
- only annotate differing implementations with `fb_overwrite`
- fix lint
Reviewed By: itomatik Differential Revision: D39981383 fbshipit-source-id: 9739b7026510b3f1a2e69fe1de5b3f721759a209
-
Artsiom Sanakoyeu authored
Summary: Pull Request resolved: https://github.com/facebookresearch/d2go/pull/381 Introduce an extra parameter, SOLVER.AMP.PRECISION, which can be used to control mixed-precision training when the Lightning backend is used. The previous value `precision: "mixed"` was wrong and the training failed (see screenshot below). {F777576618} I had to make AMP.PRECISION a string and make sure that it works with two values: "float16" and "bfloat16". Before feeding it to the Trainer we convert the "float16" string to the integer value 16. Such a workaround was unavoidable because D2Go's config value cannot be both int and str at the same time. Reviewed By: wat3rBro Differential Revision: D40035367 fbshipit-source-id: ed4f615ab29a2258164cbe179a9adba11559d804
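A sketch of the described conversion before constructing the Lightning Trainer; mapping "bfloat16" to Lightning's "bf16" flag is an assumption here:
```
import pytorch_lightning as pl

def to_lightning_precision(amp_precision: str):
    # D2Go stores the precision as a string; Lightning expects the integer
    # 16 for fp16 AMP and the string "bf16" for bfloat16.
    if amp_precision == "float16":
        return 16
    if amp_precision == "bfloat16":
        return "bf16"
    raise ValueError(f"unsupported AMP precision: {amp_precision}")

trainer = pl.Trainer(precision=to_lightning_precision("float16"))
```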
-
- 03 Oct, 2022 1 commit
-
-
Francisc Bungiu authored
Summary: Pull Request resolved: https://github.com/facebookresearch/d2go/pull/378 Some hooks need access to cfg to be initialized correctly. Pass cfg down to the hook registration method. Reviewed By: ertrue, miqueljubert Differential Revision: D39303862 fbshipit-source-id: 931c356c7045f95fc0af5b20c7782ea4d1aff138
-
- 01 Oct, 2022 2 commits
-
-
Peizhao Zhang authored
Summary: Pull Request resolved: https://github.com/facebookresearch/d2go/pull/376 Make sure the returned result is consistent for ResultCache:
* When gather = True, it will always return a list.
Reviewed By: itomatik Differential Revision: D39874274 fbshipit-source-id: ce042261432cdce5544d10b2d4b88f5e2d0d1b68
-
Yanghan Wang authored
Summary: Pull Request resolved: https://github.com/facebookresearch/d2go/pull/379 This diff ~~prototypes~~ implements replacing the `custom_convert_fx` API with a callback. Reviewed By: LiamZhuuu Differential Revision: D39859228 fbshipit-source-id: 34719d1758c4afa7e47930c12d3443813d3f4546
-
- 29 Sep, 2022 1 commit
-
-
Peizhao Zhang authored
Summary: Pull Request resolved: https://github.com/facebookresearch/d2go/pull/377 Only automatically rescale the LR for SGD optimizers.
* It seems that only SGD needs LR scaling, so we change the default to not scale the LR automatically. This will work better for newly added optimizers (like Adam).
Reviewed By: itomatik, lg-zhang Differential Revision: D39899434 fbshipit-source-id: d6eebc5b07d4489b401c1fc3cea00f5a060fe19d
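A sketch of the gating described, under the linear-scaling assumption that the LR grows with the number of workers (the actual d2go scaling logic is not shown here):
```
import torch

def maybe_scale_lr(optimizer: torch.optim.Optimizer, base_lr: float,
                   world_size: int) -> float:
    # Apply the linear LR scaling rule only for SGD; adaptive optimizers
    # such as Adam keep their base learning rate untouched.
    if isinstance(optimizer, torch.optim.SGD):
        return base_lr * world_size
    return base_lr
```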
-
- 28 Sep, 2022 4 commits
-
-
Yanghan Wang authored
Summary: Pull Request resolved: https://github.com/facebookresearch/d2go/pull/375 `patch_d2_meta_arch` had already been removed in D37246483 (https://github.com/facebookresearch/d2go/commit/d1518ff66205089115df56010e8eafcf94efb08d), so `beginner.ipynb` also needs updating. Reviewed By: itomatik Differential Revision: D39861148 fbshipit-source-id: bac80efbff1f99a023d604a66c3667bc94f8c6f4
-
Artsiom Sanakoyeu authored
Summary: Pull Request resolved: https://github.com/facebookresearch/d2go/pull/374 AMP mixed-precision training is implemented for the native d2go Runner, but not for Lightning Tasks. Now we pass the SOLVER.AMP* and SOLVER.CLIP_GRADIENTS* params to the Lightning Trainer as well. Reviewed By: wat3rBro Differential Revision: D39798007 fbshipit-source-id: e48560a91d37c21c56d953eed141876d8c759329
-
Matthew Yu authored
Summary: Pull Request resolved: https://github.com/facebookresearch/d2go/pull/371 In a previous iteration of this diff, we were specifying the teacher model in the same config as the student model, something like:
```
# config.py
MODEL:
  FBNET_V2: ...
DISTILLATION:
  TEACHER:
    MODEL:
      FBNET_V2: ...
    WEIGHTS: /path/to/teacher/weights
    ...
```
This leads to some oddities in the code, like needing a default config that adds all the required keys for the distillation teacher model. In this diff, we just let the user supply a teacher config (and optionally runner_name and overwrite opts) and use the supplied runner to build the model:
```
# new_config.py
MODEL:
  FBNET_V2: ...
DISTILLATION:
  TEACHER:
    CONFIG_FNAME: /path/to/teacher/config
    RUNNER_NAME: ...
```
This should make it very easy to specify the teacher, as the user could potentially just reuse the trained_config generated in d2go. Reviewed By: newstzpz Differential Revision: D37640041 fbshipit-source-id: 088a636c96f98279c9a04e32d1674f703451aec3
-
Zhanibek Datbayev authored
Summary: X-link: https://github.com/facebookresearch/mobile-vision/pull/108 Pull Request resolved: https://github.com/facebookresearch/d2go/pull/373 Assert that methods replaced by fb_overwritable are also annotated with the corresponding decorator. Reviewed By: wat3rBro Differential Revision: D39674347 fbshipit-source-id: a4cf007419760aa07d929ab1cf819ba54b11b9da
-
- 22 Sep, 2022 1 commit
-
-
Dark Knight authored
Summary: This diff reverts D38436675 (https://github.com/facebookresearch/d2go/commit/d98f74aa467a6576c9905accbf4a2b6279599f9c). D38436675 (https://github.com/facebookresearch/d2go/commit/d98f74aa467a6576c9905accbf4a2b6279599f9c) has been identified as causing the following test or build failures:
Tests affected:
- https://www.internalfb.com/intern/test/844425001950025/
- https://www.internalfb.com/intern/test/844425001950027/
Here's the Multisect link: https://www.internalfb.com/intern/testinfra/multisect/1259258 Here are the tasks that are relevant to this breakage: T120995919: 51 tests started failing for oncall d2go in the last 2 weeks. We're generating a revert to back out the changes in this diff; please note the backout may land if someone accepts it. Reviewed By: wat3rBro Differential Revision: D39594147 fbshipit-source-id: 56c489bb9feea2d60a2a5f0e89941ed7c0f3f675
-
- 21 Sep, 2022 1 commit
-
-
Yanghan Wang authored
Summary: Pull Request resolved: https://github.com/facebookresearch/d2go/pull/372 Reviewed By: itomatik Differential Revision: D39628884 fbshipit-source-id: bb1d5d77eeb965dff675c17a8fbc36e4da4e25cd
-
- 15 Sep, 2022 2 commits
-
-
Amy Kolesnichenko authored
Summary: Pull Request resolved: https://github.com/facebookresearch/d2go/pull/369 Pull Request resolved: https://github.com/facebookresearch/d2go/pull/368 Adding image subsampling in evaluation to avoid saving sequential frames to TensorBoard for video-like data. Reviewed By: wat3rBro Differential Revision: D39233843 fbshipit-source-id: a49bbfe1b9114c11565ed04db1f2f186675ea9ed
-
Jiaxu Zhu authored
Summary: Pull Request resolved: https://github.com/facebookresearch/d2go/pull/364 As title: when `QUANTIZATION.BACKEND` is set to `turing`, call `odai.transforms` APIs instead of OSS quantization. Also updated `image_classification` as an example of getting a Turing-quantized model via `custom_prepare_fx/custom_convert_fx`. Reviewed By: wat3rBro Differential Revision: D38436675 fbshipit-source-id: e4f0e02290512bce8b18c2369a67ed9b0f116825
-
- 10 Sep, 2022 1 commit
-
-
Yanghan Wang authored
Summary: Pull Request resolved: https://github.com/facebookresearch/d2go/pull/367 EZ Reviewed By: xiecong Differential Revision: D39407416 fbshipit-source-id: d0e6fa09ff926780e98c210bfce955e6b8eec7f6
-
- 08 Sep, 2022 1 commit
-
-
Yanghan Wang authored
Summary: Pull Request resolved: https://github.com/facebookresearch/d2go/pull/366 Users often just use the default value without knowing they need to change this config, and get low accuracy after PTQ. Therefore, increase the default value. Reviewed By: miqueljubert Differential Revision: D39331680 fbshipit-source-id: 6b05773c3c4cd48e6298d341d77a0e373d5439f6
-
- 31 Aug, 2022 2 commits
-
-
Peizhao Zhang authored
Summary: Pull Request resolved: https://github.com/facebookresearch/d2go/pull/355 Switch to using inference_on_dataset_with_checkpointing in the default runner. Reviewed By: HarounH Differential Revision: D37215292 fbshipit-source-id: c006784ce0b31700bcbb1f79c303fd791f1561ff
-
Peizhao Zhang authored
Summary: Pull Request resolved: https://github.com/facebookresearch/d2go/pull/354 Allow skipping inference when running evaluation.
* `inference_on_dataset_with_checkpointing` works similarly to `inference_on_dataset` in d2 but allows skipping the inference step if the evaluator has cached the results.
* If the evaluator has a function `could_skip_process` and it returns True, inference will be skipped and only `evaluator.reset()` and `evaluator.evaluate()` are called.
Reviewed By: wat3rBro Differential Revision: D37213004 fbshipit-source-id: d12cc480589ff04fd8dbb42b22633ab34bc4bf63
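A minimal sketch of that control flow (the d2-style loop is simplified; only `could_skip_process`, `reset`, `process`, and `evaluate` come from the description above):
```
def inference_on_dataset_with_checkpointing(model, data_loader, evaluator):
    # Skip the expensive forward passes entirely when the evaluator
    # reports that it already holds cached results.
    could_skip = getattr(evaluator, "could_skip_process", None)
    evaluator.reset()
    if could_skip is not None and could_skip():
        return evaluator.evaluate()
    # Otherwise run the usual inference loop.
    for inputs in data_loader:
        evaluator.process(inputs, model(inputs))
    return evaluator.evaluate()
```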
-
- 23 Aug, 2022 1 commit
-
-
Simon Hollis authored
Summary: Pull Request resolved: https://github.com/facebookresearch/d2go/pull/362 X-link: https://github.com/facebookresearch/detectron2/pull/4491 Recently landed D35518556 (https://github.com/facebookresearch/d2go/commit/1ffc801bf5a1d4fe926b815ba93f21632f0980f9) / GitHub: 36a65a0907d90ed591479b2ebaa8b61cfa0b4ef0 throws an exception with older versions of PyTorch due to a missing library import. This has been reported by multiple members of the PyTorch community at https://github.com/facebookresearch/detectron2/commit/36a65a0907d90ed591479b2ebaa8b61cfa0b4ef0 This change uses `try/except` to check for the libraries and sets flags on presence/absence to later guard code that would use them. Reviewed By: wat3rBro Differential Revision: D38879134 fbshipit-source-id: 72f5a7a8d350eb82be87567f006368bf207f5a74
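The guard pattern in question, as a generic sketch (the specific library probed by the actual change isn't named here):
```
# Probe an optional, version-dependent import once; code that needs it
# checks the flag instead of importing unconditionally.
try:
    from torch.fx import symbolic_trace
    HAS_TORCH_FX = True
except ImportError:
    HAS_TORCH_FX = False

def trace_if_available(model):
    if not HAS_TORCH_FX:
        return model  # older PyTorch: fall back to the eager model
    return symbolic_trace(model)
```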
-
- 20 Aug, 2022 1 commit
-
-
Xiaofang Wang authored
Summary: Pull Request resolved: https://github.com/facebookresearch/d2go/pull/358 Avoid calling scheduler.step() after the last training iteration is done Reviewed By: wat3rBro Differential Revision: D38605135 fbshipit-source-id: 87a55309bf6d1f7e598b567cc2372b00b8885c7c
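A sketch of the guard in a standard PyTorch loop (illustrative, not the d2go trainer code):
```
import torch

model = torch.nn.Linear(4, 2)
opt = torch.optim.SGD(model.parameters(), lr=0.1)
sched = torch.optim.lr_scheduler.StepLR(opt, step_size=10)

max_iter = 100
for it in range(max_iter):
    opt.zero_grad()
    loss = model(torch.randn(8, 4)).sum()
    loss.backward()
    opt.step()
    # Do not step the scheduler after the final iteration, otherwise the
    # LR advances one step beyond the intended schedule.
    if it + 1 < max_iter:
        sched.step()
```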
-
- 18 Aug, 2022 1 commit
-
-
Simon Hollis authored
Enable torch tracing by changing assertions in d2go forwards to allow the torch.fx.proxy.Proxy type. Summary: X-link: https://github.com/facebookresearch/detectron2/pull/4227 Pull Request resolved: https://github.com/facebookresearch/d2go/pull/241 Torch FX tracing propagates a type of `torch.fx.proxy.Proxy` through the graph. Existing type assertions in the d2go code base trigger during torch FX tracing, causing tracing to fail. This adds a check for FX tracing in progress and a helper function `assert_fx_safe()` that can be used in place of a standard assertion. The helper only applies the assertion when not tracing, making d2go's assertion tests compatible with FX tracing. Reviewed By: wat3rBro Differential Revision: D35518556 fbshipit-source-id: a9b5d3d580518ca74948544973ae89f8b9de3282
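The idea behind such a helper, as a minimal sketch (the real `assert_fx_safe()` may differ; this version only illustrates the Proxy check):
```
from torch.fx.proxy import Proxy

def assert_fx_safe(condition, message: str) -> None:
    # During symbolic tracing, conditions derived from traced tensors are
    # Proxy objects whose truth value cannot be evaluated, so the assert
    # is skipped to let tracing proceed.
    if isinstance(condition, Proxy):
        return
    assert condition, message
```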
-
- 12 Aug, 2022 1 commit
-
-
Pascual Martinez Gomez authored
Summary: Pull Request resolved: https://github.com/facebookresearch/d2go/pull/359 Currently, D2Go is missing the Adam optimizer. This diff addresses the gap. Reviewed By: tglik, asanakoy Differential Revision: D38492151 fbshipit-source-id: 27791c23c73942b7a466f2ca91f6b3631733ba16
-
- 10 Aug, 2022 1 commit
-
-
Xiaoliang Dai authored
Summary: Pull Request resolved: https://github.com/facebookresearch/d2go/pull/352 Reviewed By: newstzpz Differential Revision: D37872639 fbshipit-source-id: 61acdaa669bc541dcb715af1172926efb53c0b2b
-
- 09 Aug, 2022 1 commit
-
-
Mik Vyatskov authored
Summary: Pull Request resolved: https://github.com/facebookresearch/d2go/pull/357 This change makes it possible to unpickle TrainNetOutput, which currently cannot be unpickled because it is part of the main module, and the main module can differ for the binary that is unpickling this dataclass. Reviewed By: miqueljubert Differential Revision: D38536040 fbshipit-source-id: 856594251b2eca7630d69c7917bc4746859dab9f
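Why this matters, in a small sketch: pickle records a class by its module path, so a dataclass defined in `__main__` can only be unpickled by a binary whose `__main__` happens to define it; moving it into an importable module (the `train_output.py` name and fields below are hypothetical) fixes that:
```
# train_output.py -- importable by every binary, unlike __main__
from dataclasses import dataclass

@dataclass
class TrainNetOutput:
    # pickle stores "train_output.TrainNetOutput" as the class reference,
    # so any process that can import this module can unpickle the object.
    accuracy: float
    model_path: str
```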
-