- 17 Apr, 2021 2 commits
Kai Zhang authored
Summary: Delegate FX quantization callback's customization to model. Reviewed By: wat3rBro Differential Revision: D27669212 fbshipit-source-id: 2715546cf03134896da6f95ecddaf8503ff95d0b
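The delegation pattern this diff describes can be sketched as follows. This is a simplified stand-in, not the actual d2go API: the hook name `customize_quantization` and the callback shape are assumptions for illustration.

```python
class QuantizationCallback:
    """Sketch: a quantization callback that delegates customization to the model."""

    def prepare(self, model):
        # If the model defines its own customization hook, delegate to it;
        # otherwise fall back to the callback's default behavior.
        if hasattr(model, "customize_quantization"):
            return model.customize_quantization()
        return self._default_prepare(model)

    def _default_prepare(self, model):
        return "default"


class CustomModel:
    """A model that opts into customizing its own quantization."""
    def customize_quantization(self):
        return "custom"


class PlainModel:
    """A model with no customization hook; gets the default path."""
```

With this shape, per-model quantization tweaks live next to the model definition instead of accumulating inside the callback.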
Kai Zhang authored
Summary: As per title; sanity-tests the E2E QAT workflow on the Lightning Trainer.
- Add `post_training_opts`. This is required to use `all_steps_qat.json` with Lightning. We don't actually support `post_training_opts` in this diff; that is left as part of T83437359.
- Update the .yaml to specify the quantizable modules.
- Update `lightning_train_net.py` to use the QuantizationAwareTraining callback.
Reviewed By: kandluis Differential Revision: D26304879 fbshipit-source-id: 948bef4817d385d8a0969e4990d7f17ecd6994b7
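A minimal sketch of what wiring a QAT callback into a Lightning-style training loop looks like. The `QuantizationAwareTraining` and `Trainer` classes below are simplified stand-ins for illustration, not the real Lightning or d2go implementations.

```python
class QuantizationAwareTraining:
    """Stand-in QAT callback: prepare fake-quant before training, convert after."""

    def __init__(self):
        self.prepared = False
        self.converted = False

    def on_fit_start(self, model):
        # In the real workflow this would insert fake-quant observers into the model.
        self.prepared = True

    def on_fit_end(self, model):
        # In the real workflow this would convert the observed model to int8.
        self.converted = True


class Trainer:
    """Stand-in trainer that invokes callbacks around the training loop."""

    def __init__(self, callbacks):
        self.callbacks = callbacks

    def fit(self, model):
        for cb in self.callbacks:
            cb.on_fit_start(model)
        # ... training steps would run here ...
        for cb in self.callbacks:
            cb.on_fit_end(model)
```

The point of the callback design is that QAT stays out of the training-loop code itself; the trainer only needs the generic hook points.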
- 15 Apr, 2021 1 commit
Alexander Pivovarov authored
Summary: Fix typos in exporter Pull Request resolved: https://github.com/facebookresearch/d2go/pull/45 Reviewed By: wat3rBro Differential Revision: D27779963 Pulled By: zhanghang1989 fbshipit-source-id: bcf7922afe6d4cccc074615069538eb5a6098b98
- 09 Apr, 2021 1 commit
Ananth Subramaniam authored
Summary: Before: this test assumed exactly 2 checkpoints were stored: `last.ckpt` and `FINAL_MODEL_CKPT`. Now: this test asserts that at least these 2 checkpoints are stored. If the config specifies `save_top_k=-1`, for instance, we'd save more checkpoints, causing the test to fail. Since this test only loads the last and the final outputs, I'm changing the behavior to assert that these checkpoints must be saved, ignoring any other checkpoint files that may be generated. Reviewed By: kazhang Differential Revision: D27671284 fbshipit-source-id: 0419fb46856d048e7b6eba3ff1dc65b7280a9a90
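The relaxed assertion described above might look roughly like this; the exact checkpoint filenames are assumptions for illustration, not the real test's constants.

```python
import os

# Hypothetical filenames standing in for last.ckpt and FINAL_MODEL_CKPT.
REQUIRED_CHECKPOINTS = ("last.ckpt", "model_final.pth")


def assert_required_checkpoints(output_dir, required=REQUIRED_CHECKPOINTS):
    """Assert the required checkpoints exist, while ignoring any extras
    (e.g. the per-epoch files produced when save_top_k=-1)."""
    files = set(os.listdir(output_dir))
    missing = [name for name in required if name not in files]
    assert not missing, f"missing checkpoints: {missing}"
```

Checking "at least these files" rather than "exactly these files" keeps the test stable under config changes that only add checkpoints.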
- 24 Mar, 2021 2 commits
Kai Zhang authored
Summary: Evaluate the predictor generated by the previous step. This diff modifies `lightning_train_net` to reuse the evaluation logic by adding a `predictor_path` param. It also makes the Lightning training backend depend on `cfg.MODEL.DEVICE`, so that in the evaluate_predictor step users can set the backend by changing the model device. This is useful for evaluating an int8 quantized model. Reviewed By: newstzpz Differential Revision: D27150609 fbshipit-source-id: fb72da3e81db932c0fa479350150720143e09a3e
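The device-driven backend choice might look roughly like the following; the mapping and function name are illustrative assumptions, not the actual d2go logic.

```python
def infer_eval_backend(device: str) -> str:
    """Pick an evaluation backend from cfg.MODEL.DEVICE (illustrative mapping).

    Setting the device to CPU lets an int8-quantized model be evaluated on a
    quantized CPU path, while CUDA devices take the default GPU path.
    """
    if device.startswith("cuda"):
        return "gpu"
    # int8 quantized inference typically runs on CPU engines.
    return "quantized_cpu"
```

The benefit is that no extra flag is needed: changing `MODEL.DEVICE` in the config is enough to switch the evaluation backend.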
Kai Zhang authored
Summary: Given that the ways to create a D2Go runner and a Lightning task are different, `get_class` was introduced so that in applications we could do: ``` if is Lightning: task_cls = get_class(classname) task = task_cls(cfg) else: runner = create_runner(classname) ``` It turns out that we would need to do that in many places: workflows, binaries. This diff reverts `get_class` and returns the class from `create_runner` if the class is a Lightning module. Reviewed By: newstzpz Differential Revision: D26676595 fbshipit-source-id: c3ce2016d09fe073af4c2dd9f98eea4e59ca621b
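The unified `create_runner` behavior described in this diff can be sketched as follows. `LightningTask`, `D2GoRunner`, and the registry are stand-ins for illustration; only the return-class-vs-return-instance split reflects the diff's description.

```python
class LightningTask:
    """Stand-in for a Lightning module base class."""
    def __init__(self, cfg):
        self.cfg = cfg


class D2GoRunner:
    """Stand-in for a D2Go runner."""


# Hypothetical name-to-class registry.
REGISTRY = {"lightning_task": LightningTask, "d2go_runner": D2GoRunner}


def create_runner(classname):
    """Return the class itself for Lightning tasks (the caller instantiates
    it with cfg); instantiate and return the runner otherwise."""
    cls = REGISTRY[classname]
    if issubclass(cls, LightningTask):
        return cls
    return cls()
```

Callers then branch once at the call site instead of repeating the `get_class` / `create_runner` distinction in every workflow and binary.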
- 17 Mar, 2021 1 commit
Hang Zhang authored
Summary: Pull Request resolved: https://github.com/facebookresearch/d2go/pull/24 Reviewed By: wat3rBro Differential Revision: D27127642 Pulled By: zhanghang1989 fbshipit-source-id: 18bc3c2fa05232cacc778925db6b7dcea99b108c
- 09 Mar, 2021 1 commit
Yanghan Wang authored
Reviewed By: newstzpz Differential Revision: D26072333 fbshipit-source-id: 6727b34458d410e904045aa58f81c3e09111882a
- 07 Mar, 2021 1 commit
Hang Zhang authored
Summary: fixes https://github.com/facebookresearch/d2go/issues/9 Pull Request resolved: https://github.com/facebookresearch/d2go/pull/13 Reviewed By: wat3rBro Differential Revision: D26870048 Pulled By: zhanghang1989 fbshipit-source-id: 29298bca7a59aad214976aaa37461e3d316132d8
- 04 Mar, 2021 1 commit
RangiLyu authored
Summary: Change depoyment to deployment in README.md. Change datasest to datasets in tools/exporter.py. Pull Request resolved: https://github.com/facebookresearch/d2go/pull/7 Reviewed By: newstzpz Differential Revision: D26821039 Pulled By: zhanghang1989 fbshipit-source-id: 5056d15c877c4b3d771d33267139e73f1527da21
- 03 Mar, 2021 2 commits
Kai Zhang authored
Summary: As titled. The OSS version only uses PyTorch Lightning, while the internal version leverages some extra features (e.g. Manifold integration, every_n_step checkpointing). This diff splits train_net.main into smaller functions so that they can be shared across the OSS and internal versions. Reviewed By: zhanghang1989 Differential Revision: D26752701 fbshipit-source-id: 7f68e2a81e78193e117517a0ff668ab14b76ea65
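The decomposition described here might look like the following; the function names and bodies are assumptions for illustration, not the actual train_net code.

```python
def build_trainer(cfg):
    # Shared between OSS and internal: construct the trainer from the config.
    return {"cfg": cfg}


def do_train(trainer, model):
    # Shared training entry point.
    return "trained"


def do_test(trainer, model):
    # Shared evaluation entry point.
    return "tested"


def main(cfg, model):
    # The OSS entry point composes the shared pieces; an internal entry point
    # can compose the same functions with extra features (e.g. custom
    # checkpointing) layered on top.
    trainer = build_trainer(cfg)
    do_train(trainer, model)
    return do_test(trainer, model)
```

Splitting `main` this way means the internal variant reimplements only its wrapper, not the train/test logic.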
facebook-github-bot authored
fbshipit-source-id: f4a8ba78691d8cf46e003ef0bd2e95f170932778