- 19 Jul, 2022 1 commit
Yanghan Wang authored
Summary: Pull Request resolved: https://github.com/facebookresearch/d2go/pull/339

Add `is_qat` to the Lightning codepath.

Reviewed By: jerryzh168
Differential Revision: D37937336
fbshipit-source-id: 68debe57c7f7dcf8647fad6ab9e34eff2aaa851c
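For context, a minimal sketch (not the actual d2go code; the helper name and qconfig choices are assumptions) of how an `is_qat` flag can switch between post-training and QAT preparation on the FX codepath:

```python
# Hypothetical sketch only: how an `is_qat` flag might select between PTQ and QAT
# preparation with FX graph mode quantization. `prepare_with_flag` is a made-up helper.
import torch
from torch.ao.quantization import get_default_qat_qconfig, get_default_qconfig
from torch.ao.quantization.quantize_fx import prepare_fx, prepare_qat_fx


def prepare_with_flag(model: torch.nn.Module, is_qat: bool, example_inputs):
    if is_qat:
        qconfig_dict = {"": get_default_qat_qconfig("fbgemm")}
        # QAT preparation inserts fake-quantize modules and keeps the model trainable
        return prepare_qat_fx(model.train(), qconfig_dict, example_inputs=example_inputs)
    qconfig_dict = {"": get_default_qconfig("fbgemm")}
    # PTQ preparation inserts observers; the model should be in eval mode
    return prepare_fx(model.eval(), qconfig_dict, example_inputs=example_inputs)
```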
-
- 13 Jul, 2022 1 commit
Yanghan Wang authored
Summary: Pull Request resolved: https://github.com/facebookresearch/d2go/pull/330

- Re-enable `test_qat`.
- Remove `example_input` from `DetMetaArchForTest`, since its `custom_prepare_fx` just creates a tensor for the avgpool.

Reviewed By: jerryzh168
Differential Revision: D37793260
fbshipit-source-id: ec7a825c61292d9c6d792f910a957c1c27832336
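For illustration, a hypothetical sketch of a test meta-arch whose prepare hook builds its own avgpool input tensor, which is why no external `example_input` is needed; the class and hook names follow the commit message, but the signature and shapes are assumptions:

```python
# Hypothetical sketch: a test meta-arch whose custom FX-prepare hook creates its own
# example tensor for the avgpool, so callers no longer need to pass `example_input`.
import torch
from torch.ao.quantization import get_default_qconfig
from torch.ao.quantization.quantize_fx import prepare_fx


class DetMetaArchForTestSketch(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.avgpool = torch.nn.AdaptiveAvgPool2d(1)

    def forward(self, x):
        return self.avgpool(x)

    def custom_prepare_fx(self, cfg=None):  # signature is an assumption
        # just create a tensor for the avgpool input internally
        example_inputs = (torch.randn(1, 8, 4, 4),)
        qconfig_dict = {"": get_default_qconfig("fbgemm")}
        return prepare_fx(self, qconfig_dict, example_inputs=example_inputs)
```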
-
- 08 Jul, 2022 1 commit
Yanghan Wang authored
Summary: Pull Request resolved: https://github.com/facebookresearch/d2go/pull/325

`prepare_for_quant_convert` is a confusing name because it only does `convert`; there is no "prepare" in it. It is also FX-only, because eager mode always calls `torch.quantization.convert` and there is no way to customize it. So just call this `custom_convert_fx`. The new name is also unique in fbcode, which makes it easy to codemod later on. This diff simply does the renaming via biggrep + replace.

Reviewed By: jerryzh168
Differential Revision: D37676717
fbshipit-source-id: e7d05eaafddc383dd432986267c945c8ebf94df4
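To illustrate the renamed hook, a minimal sketch of how a convert step can dispatch to it; the hook name comes from the diff, but the arguments and the fallback behaviour are assumptions:

```python
# Hypothetical sketch: dispatch FX conversion through the renamed hook when a model
# defines it, otherwise fall back to the stock convert_fx. The `cfg` argument is made up.
from torch.ao.quantization.quantize_fx import convert_fx


def convert_prepared_model(prepared_model, cfg=None):
    if hasattr(prepared_model, "custom_convert_fx"):
        # FX-only customization point, formerly named `prepare_for_quant_convert`
        return prepared_model.custom_convert_fx(cfg)
    return convert_fx(prepared_model)
```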
-
- 02 Jul, 2022 1 commit
Jerry Zhang authored
Summary: Pull Request resolved: https://github.com/facebookresearch/d2go/pull/321

Following up on the BC-breaking change from FX graph mode quantization (https://github.com/pytorch/pytorch/pull/76496), which added `example_inputs` to `prepare_fx` and `prepare_qat_fx`, this fixes the related mobile-vision callsites and exposes an extra `example_inputs` argument in some APIs.

Reviewed By: wat3rBro
Differential Revision: D37163018
fbshipit-source-id: 9f0bb56659345d174a39b6d3bb4408caa553b88d
-
- 21 May, 2022 1 commit
Jerry Zhang authored
Summary: X-link: https://github.com/pytorch/pytorch/pull/77608
X-link: https://github.com/pytorch/fx2trt/pull/76
Pull Request resolved: https://github.com/facebookresearch/d2go/pull/249
X-link: https://github.com/fairinternal/ClassyVision/pull/104
X-link: https://github.com/pytorch/benchmark/pull/916
X-link: https://github.com/facebookresearch/ClassyVision/pull/791
X-link: https://github.com/facebookresearch/mobile-vision/pull/68

FX Graph Mode Quantization needs to know whether an fx node is a floating-point Tensor before it can decide whether to insert an observer/fake_quantize module, since we only insert observer/fake_quantize modules for floating-point Tensors. Currently we support this with some hacks, e.g. rules like NON_OBSERVABLE_ARG_DICT (https://github.com/pytorch/pytorch/blob/master/torch/ao/quantization/fx/utils.py#L496), but this approach is fragile and we do not plan to maintain it long term in the PyTorch code base.

As we discussed in the design review, we need to ask users to provide sample args and sample keyword args so that we can infer the type in a more robust way. This PR starts by changing the prepare_fx and prepare_qat_fx APIs to require example arguments through `example_inputs`. Note that this API does not support kwargs. Kwargs would make https://github.com/pytorch/pytorch/pull/76496#discussion_r861230047 (comment) simpler, but that case is rare, and even then it can be worked around with positional arguments. Since torch.jit.trace (https://pytorch.org/docs/stable/generated/torch.jit.trace.html) and ShapeProp (https://github.com/pytorch/pytorch/blob/master/torch/fx/passes/shape_prop.py#L140) also take only a single positional argument, we use a single `example_inputs` argument for now. If needed, the API can be extended with an optional `example_kwargs`, e.g. when `forward` takes many arguments and it makes more sense to pass them by keyword.

BC-breaking note.

Before:
```python
m = resnet18(...)
m = prepare_fx(m, qconfig_dict)
# or
m = prepare_qat_fx(m, qconfig_dict)
```

After:
```python
m = resnet18(...)
m = prepare_fx(m, qconfig_dict, example_inputs=(torch.randn(1, 3, 224, 224),))
# or
m = prepare_qat_fx(m, qconfig_dict, example_inputs=(torch.randn(1, 3, 224, 224),))
```

Reviewed By: vkuzo, andrewor14
Differential Revision: D35984526
fbshipit-source-id: 706c8df71722c9aa5082a6491734f0144f0dd670
-
- 15 May, 2022 1 commit
John Reese authored
Summary: Applies new import merging and sorting from µsort v1.0.

When merging imports, µsort will make a best effort to move associated comments to match merged elements, but there are known limitations due to the dynamic nature of Python and developer tooling. These changes should not produce any dangerous runtime changes, but may require touch-ups to satisfy linters and other tooling.

Note that µsort uses case-insensitive, lexicographical sorting, which results in a different ordering compared to isort. This provides a more consistent sorting order, matching the case-insensitive order used when sorting import statements by module name, and ensures that "frog", "FROG", and "Frog" always sort next to each other.

For details on µsort's sorting and merging semantics, see the user guide: https://usort.readthedocs.io/en/stable/guide.html#sorting

Reviewed By: lisroach
Differential Revision: D36402205
fbshipit-source-id: a4efc688d02da80c6e96685aa8eb00411615a366
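A small standalone illustration (not part of the diff) of the difference between plain lexicographical and case-insensitive ordering described above:

```python
# Standalone illustration (not from the diff): case-insensitive ordering keeps
# differently-cased names adjacent, unlike plain ASCII/lexicographical ordering.
names = ["frog", "Zebra", "FROG", "ant", "Frog"]

print(sorted(names))
# ['FROG', 'Frog', 'Zebra', 'ant', 'frog']  -- uppercase sorts before lowercase

print(sorted(names, key=str.lower))
# ['ant', 'frog', 'FROG', 'Frog', 'Zebra']  -- "frog"/"FROG"/"Frog" stay together
```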
-
- 28 Feb, 2022 1 commit
Yanghan Wang authored
Summary: Pull Request resolved: https://github.com/facebookresearch/d2go/pull/184

Reviewed By: zhanghang1989
Differential Revision: D34529248
fbshipit-source-id: f77882dae7de336da77ac9bb7c35cfc1e8d541af
-
- 28 Oct, 2021 1 commit
Kai Zhang authored
Summary: In the quantization callback, we prepare the model with the FX quantization API and only use the prepared model for training. However, when training in DDP, the parameters in the original model still require grad, causing an unused-parameters RuntimeError. Previously, the Lightning trainer trained the model with the find_unused_param flag, but if users manually disable it, they hit the runtime error. In this diff, the parameters in the original model are frozen.

We could consider deleting the original model after preparation to save memory, but that would require assumptions about the Lightning module structure, for example that `.model` is the original model, so that we could `delattr(pl_module, "model")`.

Reviewed By: wat3rBro
Differential Revision: D31902368
fbshipit-source-id: 56eabb6b2296278529dd2b94d6aa4c9ec9e9ca6b
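A minimal sketch of the mitigation described above; the `.model` attribute is an assumption about the Lightning module structure, as noted in the commit message:

```python
# Hypothetical sketch of the fix: after the callback prepares a separate copy for
# training, freeze the original module's parameters so DDP does not flag them as unused.
def freeze_original_model(pl_module):
    # `.model` being the original (un-prepared) model is an assumption
    for param in pl_module.model.parameters():
        param.requires_grad = False
```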
-
- 06 Oct, 2021 1 commit
Supriya Rao authored
Summary: Pull Request resolved: https://github.com/facebookresearch/d2go/pull/124

Update callsites from torch.quantization to torch.ao.quantization.

Reviewed By: z-a-f, jerryzh168
Differential Revision: D31286125
fbshipit-source-id: ef24ca87d8db398c65bb5b89f035afe0423a5685
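An example of the kind of callsite change this involves (the specific imports are chosen for illustration, not taken from the diff):

```python
# Illustrative callsite update for the torch.quantization -> torch.ao.quantization move.
# Before the rename:
#   from torch.quantization import get_default_qconfig
#   from torch.quantization.quantize_fx import convert_fx, prepare_fx
# After the rename:
from torch.ao.quantization import get_default_qconfig
from torch.ao.quantization.quantize_fx import convert_fx, prepare_fx
```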
-
- 09 Sep, 2021 1 commit
Yanghan Wang authored
Summary: https://fb.workplace.com/groups/pythonfoundation/posts/2990917737888352

Remove `mobile-vision` from the opt-out list, leaving `mobile-vision/SNPE` opted out because of 3rd-party code.

arc lint --take BLACK --apply-patches --paths-cmd 'hg files mobile-vision' allow-large-files

Reviewed By: sstsai-adl
Differential Revision: D30721093
fbshipit-source-id: 9e5c16d988b315b93a28038443ecfb92efd18ef8
-
- 30 Jun, 2021 1 commit
Kai Zhang authored
Summary: "fb" -> "fn" Reviewed By: ananthsub Differential Revision: D29480559 fbshipit-source-id: 78a0cd3ddd25df2c877514d4a5c0c29c248267a2
-
- 26 Jun, 2021 1 commit
Kai Zhang authored
Summary:
# Context
In the post-training quantization callback, we make a deepcopy of the Lightning module before validation starts and prepare the copy with the FX quantization API. The callback keeps the prepared model inside it.

# The problem
By the second time we run the validation epoch, we try to make a copy of the Lightning module, which has a reference to the trainer, which has a reference to the quantization callback, which has a prepared model, which is not deepcopyable.

# Mitigation
Delete the trainer before making a deepcopy. We're already doing that in stl/callbacks/quantization, but the changes were not ported into D2Go (https://github.com/facebookresearch/d2go/commit/4169abc18ec539a24081b179fcbbc5a5754d102b).

Reviewed By: zhanghang1989
Differential Revision: D29409085
fbshipit-source-id: 24550124181673b2e567b2a04563bcdfb440e145
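A minimal sketch of the general technique behind the mitigation; how Lightning actually stores the trainer reference, and what the d2go code does exactly, are assumptions here:

```python
# Hypothetical sketch: temporarily drop the attribute that holds the non-deepcopyable
# object (the trainer back-reference reaching callback -> prepared model) before copying.
import copy


def deepcopy_without(obj, attr="trainer"):
    saved = getattr(obj, attr, None)
    if saved is not None:
        setattr(obj, attr, None)       # break the cycle: module -> trainer -> callback -> prepared model
    try:
        return copy.deepcopy(obj)
    finally:
        if saved is not None:
            setattr(obj, attr, saved)  # restore the reference afterwards
```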
-
- 17 Apr, 2021 2 commits
Kai Zhang authored
Summary: Delegate the FX quantization callback's customization to the model.

Reviewed By: wat3rBro
Differential Revision: D27669212
fbshipit-source-id: 2715546cf03134896da6f95ecddaf8503ff95d0b
-
Kai Zhang authored
Summary: As per title; sanity-test the E2E QAT workflow on the Lightning Trainer.

- Add `post_training_opts`. This is required to use `all_steps_qat.json` with Lightning. We don't actually support `post_training_opts` in this diff, though; that is left as part of T83437359.
- Update the .yaml to specify the quantizable modules.
- Update `lightning_train_net.py` to use the QuantizationAwareTraining callback.

Reviewed By: kandluis
Differential Revision: D26304879
fbshipit-source-id: 948bef4817d385d8a0969e4990d7f17ecd6994b7
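A minimal sketch of the wiring in the last bullet; the function and argument names are made up, and the QuantizationAwareTraining callback itself comes from the quantization callback code referenced elsewhere in this log:

```python
# Hypothetical sketch: attach a QAT callback to the Lightning Trainer, roughly as
# lightning_train_net.py does with the QuantizationAwareTraining callback.
import pytorch_lightning as pl


def train_with_qat(task, qat_callback, max_epochs=1):
    # `task` is the Lightning module; `qat_callback` is the QAT callback instance
    trainer = pl.Trainer(max_epochs=max_epochs, callbacks=[qat_callback])
    trainer.fit(task)
    return task
```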
-
- 31 Mar, 2021 1 commit
Kai Zhang authored
Reviewed By: newstzpz
Differential Revision: D27255960
fbshipit-source-id: 1699ff23d2bc610dffc0215a90a7c1c17e3783c3
-
- 03 Mar, 2021 1 commit
Kai Zhang authored
Summary: As titled. Make a copy of the quantization callback to unblock D2Go OSS.

Reviewed By: zhanghang1989
Differential Revision: D26735525
fbshipit-source-id: 12b77f04cfa1361e856b26ea218a262da1fadd88
-