- 17 Apr, 2021 2 commits
-
-
Kai Zhang authored
Summary: Delegate FX quantization callback's customization to model. Reviewed By: wat3rBro Differential Revision: D27669212 fbshipit-source-id: 2715546cf03134896da6f95ecddaf8503ff95d0b
-
Kai Zhang authored
Summary: As per title; sanity-tests the E2E QAT workflow on the Lightning Trainer. - add `post_training_opts`. This is required to use `all_steps_qat.json` with Lightning. We don't actually support the post_training_opts themselves in this diff; that is left to T83437359. - Update the .yaml to specify the quantizable modules. - Update `lightning_train_net.py` to use the QuantizationAwareTraining callback. Reviewed By: kandluis Differential Revision: D26304879 fbshipit-source-id: 948bef4817d385d8a0969e4990d7f17ecd6994b7
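A minimal sketch of the wiring this diff sets up, assuming a `QuantizationAwareTraining` Lightning callback; the import path, constructor arguments, and the `task`/`datamodule` objects are assumptions for illustration, not the exact d2go API:

```python
import pytorch_lightning as pl
from d2go.runner.callbacks.quantization import QuantizationAwareTraining  # path assumed

# the callback inserts observers / fake-quant modules during fit
qat = QuantizationAwareTraining()
trainer = pl.Trainer(max_steps=1000, callbacks=[qat])
trainer.fit(task, datamodule=datamodule)  # `task` and `datamodule` built from cfg elsewhere
```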
-
- 15 Apr, 2021 3 commits
-
-
Yanghan Wang authored
Reviewed By: zhanghang1989 Differential Revision: D27710199 fbshipit-source-id: 178a28972dcc06350e99263f4b38f284cf10c890
-
Yanghan Wang authored
Reviewed By: zhanghang1989 Differential Revision: D27783989 fbshipit-source-id: f05c11e396a2f62366721b365929b29f05d5bc02
-
Alexander Pivovarov authored
Summary: Fix typos in exporter Pull Request resolved: https://github.com/facebookresearch/d2go/pull/45 Reviewed By: wat3rBro Differential Revision: D27779963 Pulled By: zhanghang1989 fbshipit-source-id: bcf7922afe6d4cccc074615069538eb5a6098b98
-
- 14 Apr, 2021 2 commits
-
-
Ananth Subramaniam authored
Synchronize PyTorchLightning/pytorch-lightning (revision 0b843848@master) to github/third-party/PyTorchLightning/pytorch-lightning

Summary:

### New commit log messages

## [UnReleased] - 2021-MM-DD

### Added

- Added more explicit exception message when trying to execute `trainer.test()` or `trainer.validate()` with `fast_dev_run=True` ([#6667](https://github.com/PyTorchLightning/pytorch-lightning/pull/6667))
- Added `LightningCLI` class to provide simple reproducibility with minimum boilerplate training CLI ([#4492](https://github.com/PyTorchLightning/pytorch-lightning/pull/4492))
- Trigger a warning when a non-metric value logged with multiple processes hasn't been reduced ([#6417](https://github.com/PyTorchLightning/pytorch-lightning/pull/6417))
- Added `gradient_clip_algorithm` argument to Trainer for gradient clipping by value ([#6123](https://github.com/PyTorchLightning/pytorch-lightning/pull/6123))
- Added a way to print to terminal without breaking up the progress bar ([#5470](https://github.com/PyTorchLightning/pytorch-lightning/pull/5470))
- Added support to checkpoint after training steps in the `ModelCheckpoint` callback ([#6146](https://github.com/PyTorchLightning/pytorch-lightning/pull/6146))
- Added `checkpoint` parameter to the callback's `on_save_checkpoint` hook ([#6072](https://github.com/PyTorchLightning/pytorch-lightning/pull/6072))
- Added `RunningStage.SANITY_CHECKING` ([#4945](https://github.com/PyTorchLightning/pytorch-lightning/pull/4945))
- Added `TrainerState.{FITTING,VALIDATING,TESTING,PREDICTING,TUNING}` ([#4945](https://github.com/PyTorchLightning/pytorch-lightning/pull/4945))
- Added `Trainer.validate()` method to perform one evaluation epoch over the validation set ([#4948](https://github.com/PyTorchLightning/pytorch-lightning/pull/4948))
- Added `LightningEnvironment` for Lightning-specific DDP ([#5915](https://github.com/PyTorchLightning/pytorch-lightning/pull/5915))
- Added `teardown()` hook to `LightningDataModule` ([#4673](https://github.com/PyTorchLightning/pytorch-lightning/pull/4673))
- Added `auto_insert_metric_name` parameter to `ModelCheckpoint` ([#6277](https://github.com/PyTorchLightning/pytorch-lightning/pull/6277))
- Added arg to `self.log` that enables users to give custom names when dealing with multiple dataloaders ([#6274](https://github.com/PyTorchLightning/pytorch-lightning/pull/6274))
- Added `teardown` method to `BaseProfiler` to enable subclasses to define post-profiling steps outside of `__del__` ([#6370](https://github.com/PyTorchLightning/pytorch-lightning/pull/6370))
- Added `setup` method to `BaseProfiler` to enable subclasses to define pre-profiling steps for every process ([#6633](https://github.com/PyTorchLightning/pytorch-lightning/pull/6633))
- Added a no-return warning to `predict` ([#6139](https://github.com/PyTorchLightning/pytorch-lightning/pull/6139))
- Added `Trainer.predict` config validation ([#6543](https://github.com/PyTorchLightning/pytorch-lightning/pull/6543))
- Added `AbstractProfiler` interface ([#6621](https://github.com/PyTorchLightning/pytorch-lightning/pull/6621))
- Added support for including module names for forward in the autograd trace of `PyTorchProfiler` ([#6349](https://github.com/PyTorchLightning/pytorch-lightning/pull/6349))
- Added support for the PyTorch 1.8.1 autograd profiler ([#6618](https://github.com/PyTorchLightning/pytorch-lightning/pull/6618))
- Added `outputs` parameter to the callback's `on_validation_epoch_end` and `on_test_epoch_end` hooks ([#6120](https://github.com/PyTorchLightning/pytorch-lightning/pull/6120))
- Added `configure_sharded_model` hook ([#6679](https://github.com/PyTorchLightning/pytorch-lightning/pull/6679))
- Added support for `precision=64`, enabling training with double precision ([#6595](https://github.com/PyTorchLightning/pytorch-lightning/pull/6595))
- Added support for DDP communication hooks ([#6736](https://github.com/PyTorchLightning/pytorch-lightning/issues/6736))
- Added `artifact_location` argument to `MLFlowLogger`, which will be passed to the `MlflowClient.create_experiment` call ([#6677](https://github.com/PyTorchLightning/pytorch-lightning/pull/6677))
- Added `model` parameter to precision plugins' `clip_gradients` signature ([#6764](https://github.com/PyTorchLightning/pytorch-lightning/pull/6764))

### Changed

- Renamed `pytorch_lightning.callbacks.swa` to `pytorch_lightning.callbacks.stochastic_weight_avg` ([#6259](https://github.com/PyTorchLightning/pytorch-lightning/pull/6259))
- Refactored `RunningStage` and `TrainerState` usage ([#4945](https://github.com/PyTorchLightning/pytorch-lightning/pull/4945))
- Changed `trainer.evaluating` to return `True` if validating or testing ([#4945](https://github.com/PyTorchLightning/pytorch-lightning/pull/4945))
- Changed the `setup()` and `teardown()` stage argument to take any of `{fit,validate,test,predict}` ([#6386](https://github.com/PyTorchLightning/pytorch-lightning/pull/6386))
- Changed profilers to save separate report files per state and rank ([#6621](https://github.com/PyTorchLightning/pytorch-lightning/pull/6621))
- Changed `PyTorchProfiler` to use `torch.autograd.profiler.record_function` to record functions ([#6349](https://github.com/PyTorchLightning/pytorch-lightning/pull/6349))

### Deprecated

- `period` has been deprecated in favor of `every_n_val_epochs` in the `ModelCheckpoint` callback ([#6146](https://github.com/PyTorchLightning/pytorch-lightning/pull/6146))
- Deprecated `trainer.running_sanity_check` in favor of `trainer.sanity_checking` ([#4945](https://github.com/PyTorchLightning/pytorch-lightning/pull/4945))
- Deprecated `Profiler(output_filename)` in favor of `dirpath` and `filename` ([#6621](https://github.com/PyTorchLightning/pytorch-lightning/pull/6621))
- Deprecated `PytorchProfiler(profiled_functions)` in favor of `record_functions` ([#6349](https://github.com/PyTorchLightning/pytorch-lightning/pull/6349))
- Deprecated metrics in favor of `torchmetrics` ([#6505](https://github.com/PyTorchLightning/pytorch-lightning/pull/6505), [#6530](https://github.com/PyTorchLightning/pytorch-lightning/pull/6530), [#6540](https://github.com/PyTorchLightning/pytorch-lightning/pull/6540), [#6547](https://github.com/PyTorchLightning/pytorch-lightning/pull/6547), [#6515](https://github.com/PyTorchLightning/pytorch-lightning/pull/6515), [#6572](https://github.com/PyTorchLightning/pytorch-lightning/pull/6572), [#6573](https://github.com/PyTorchLightning/pytorch-lightning/pull/6573), [#6584](https://github.com/PyTorchLightning/pytorch-lightning/pull/6584), [#6636](https://github.com/PyTorchLightning/pytorch-lightning/pull/6636), [#6637](https://github.com/PyTorchLightning/pytorch-lightning/pull/6637), [#6649](https://github.com/PyTorchLightning/pytorch-lightning/pull/6649), [#6659](https://github.com/PyTorchLightning/pytorch-lightning/pull/6659))

### Removed

- Removed support for passing a bool value to the `profiler` argument of Trainer ([#6164](https://github.com/PyTorchLightning/pytorch-lightning/pull/6164))
- Removed the no-return warning from val/test step ([#6139](https://github.com/PyTorchLightning/pytorch-lightning/pull/6139))
- Removed passing a `ModelCheckpoint` instance to `Trainer(checkpoint_callback)` ([#6166](https://github.com/PyTorchLightning/pytorch-lightning/pull/6166))
- Removed deprecated Trainer arguments `enable_pl_optimizer` and `automatic_optimization` ([#6163](https://github.com/PyTorchLightning/pytorch-lightning/pull/6163))
- Removed deprecated metrics ([#6161](https://github.com/PyTorchLightning/pytorch-lightning/pull/6161))
  * from `pytorch_lightning.metrics.functional.classification` removed `to_onehot`, `to_categorical`, `get_num_classes`, `roc`, `multiclass_roc`, `average_precision`, `precision_recall_curve`, `multiclass_precision_recall_curve`
  * from `pytorch_lightning.metrics.functional.reduction` removed `reduce`, `class_reduce`
- Removed deprecated `ModelCheckpoint` arguments `prefix`, `mode="auto"` ([#6162](https://github.com/PyTorchLightning/pytorch-lightning/pull/6162))
- Removed `mode='auto'` from `EarlyStopping` ([#6167](https://github.com/PyTorchLightning/pytorch-lightning/pull/6167))
- Removed legacy references for magic keys in the `Result` object ([#6016](https://github.com/PyTorchLightning/pytorch-lightning/pull/6016))
- Removed deprecated `LightningModule` `hparams` setter ([#6207](https://github.com/PyTorchLightning/pytorch-lightning/pull/6207))
- Removed legacy code to log or include metrics in the progress bar by returning them in a dict with the `"log"/"progress_bar"` magic keys; use `self.log` instead ([#6734](https://github.com/PyTorchLightning/pytorch-lightning/pull/6734))
- Removed `optimizer_idx` argument from `training_step` in manual optimization ([#6093](https://github.com/PyTorchLightning/pytorch-lightning/pull/6093))

### Fixed

- Set better defaults for `rank_zero_only.rank` when training is launched with SLURM and torchelastic ([#6802](https://github.com/PyTorchLightning/pytorch-lightning/pull/6802/))
- Made the `Plugin.reduce` method more consistent across all plugins to reflect a mean-reduction by default ([#6011](https://github.com/PyTorchLightning/pytorch-lightning/pull/6011))
- Move the lightning module to the correct device type when using `LightningDistributedWrapper` ([#6070](https://github.com/PyTorchLightning/pytorch-lightning/pull/6070))
- Do not print top-k verbose log with `ModelCheckpoint(monitor=None)` ([#6109](https://github.com/PyTorchLightning/pytorch-lightning/pull/6109))
- Fixed csv extension check ([#6436](https://github.com/PyTorchLightning/pytorch-lightning/pull/6436))
- Fixed `ModelCheckpoint(monitor=None, save_last=True)` not saving checkpoints ([#6136](https://github.com/PyTorchLightning/pytorch-lightning/pull/6136))
- Fixed `ModelCheckpoint(save_top_k=0, save_last=True)` not saving the `last` checkpoint ([#6136](https://github.com/PyTorchLightning/pytorch-lightning/pull/6136))
- Fixed `.teardown(stage='fit')` getting called during `trainer.test` ([#6386](https://github.com/PyTorchLightning/pytorch-lightning/pull/6386))
- Fixed `.on_fit_{start,end}()` getting called during `trainer.test` ([#6386](https://github.com/PyTorchLightning/pytorch-lightning/pull/6386))
- Fixed `LightningModule.all_gather` on CPU tensors ([#6416](https://github.com/PyTorchLightning/pytorch-lightning/pull/6416))
- Fixed torch distributed not being available in the setup hook for DDP ([#6506](https://github.com/PyTorchLightning/pytorch-lightning/pull/6506))
- Fixed `EarlyStopping` logic when the `min_epochs` or `min_steps` requirement is not met ([#6705](https://github.com/PyTorchLightning/pytorch-lightning/pull/6705))

## [1.2.7] - 2021-04-06

### Fixed

- Fixed a bug with omegaconf and `xm.save` ([#6741](https://github.com/PyTorchLightning/pytorch-lightning/pull/6741))
- Fixed an issue with `IterableDataset` when `__len__` is not defined ([#6828](https://github.com/PyTorchLightning/pytorch-lightning/pull/6828))
- Sanitize `None` params during pruning ([#6836](https://github.com/PyTorchLightning/pytorch-lightning/pull/6836))
- Enforce an epoch scheduler interval when using SWA ([#6588](https://github.com/PyTorchLightning/pytorch-lightning/pull/6588))
- Fixed TPU Colab hang issue, post training ([#6816](https://github.com/PyTorchLightning/pytorch-lightning/pull/6816))
- Fixed a bug where `TensorBoardLogger` would give a warning and not log correctly to a symbolic link `save_dir` ([#6730](https://github.com/PyTorchLightning/pytorch-lightning/pull/6730))

## [1.2.6] - 2021-03-30

### Changed

- Changed the behavior of `on_epoch_start` to run at the beginning of validation and test epochs ([#6498](https://github.com/PyTorchLightning/pytorch-lightning/pull/6498))

### Removed

- Removed legacy code to include `step` dictionary returns in `callback_metrics`; use `self.log_dict` instead ([#6682](https://github.com/PyTorchLightning/pytorch-lightning/pull/6682))

### Fixed

- Fixed `DummyLogger.log_hyperparams` raising a `TypeError` when running with `fast_dev_run=True` ([#6398](https://github.com/PyTorchLightning/pytorch-lightning/pull/6398))
- Fixed error on TPUs when there was no `ModelCheckpoint` ([#6654](https://github.com/PyTorchLightning/pytorch-lightning/pull/6654))
- Fixed `trainer.test` freeze on TPUs ([#6654](https://github.com/PyTorchLightning/pytorch-lightning/pull/6654))
- Fixed a bug where gradients were disabled after calling `Trainer.predict` ([#6657](https://github.com/PyTorchLightning/pytorch-lightning/pull/6657))
- Fixed a bug where no TPUs were detected in a TPU pod environment ([#6719](https://github.com/PyTorchLightning/pytorch-lightning/pull/6719))

## [1.2.5] - 2021-03-23

### Changed

- Updated gradient clipping for the TPU accelerator ([#6576](https://github.com/PyTorchLightning/pytorch-lightning/pull/6576))
- Refactored setup to be typing friendly ([#6590](https://github.com/PyTorchLightning/pytorch-lightning/pull/6590))

### Fixed

- Fixed a bug where `all_gather` would not work correctly with `tpu_cores=8` ([#6587](https://github.com/PyTorchLightning/pytorch-lightning/pull/6587))
- Fixed comparing required versions ([#6434](https://github.com/PyTorchLightning/pytorch-lightning/pull/6434))
- Fixed duplicate logs appearing in the console when using the Python logging module ([#6275](https://github.com/PyTorchLightning/pytorch-lightning/pull/6275))
- Added autocast in validation, test and predict modes for native AMP ([#6565](https://github.com/PyTorchLightning/pytorch-lightning/pull/6565))

Reviewed By: shuyingsunshine21 Differential Revision: D27528929 fbshipit-source-id: 311c88f71461c2c79bbf185e28d7a6d683ccc26f
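A few of the additions above in use (Lightning ~1.3 API; a sketch only, with `model` assumed to be a `LightningModule` defined elsewhere):

```python
import pytorch_lightning as pl
from pytorch_lightning.callbacks import ModelCheckpoint

# `every_n_val_epochs` replaces the now-deprecated `period` argument
ckpt = ModelCheckpoint(monitor="val_loss", every_n_val_epochs=2)

trainer = pl.Trainer(
    gradient_clip_val=0.5,
    gradient_clip_algorithm="value",  # new: clip by value instead of by norm
    callbacks=[ckpt],
)
trainer.validate(model)  # new: one evaluation epoch over the validation set
```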
-
Yanghan Wang authored
Summary: Pull Request resolved: https://github.com/facebookresearch/d2go/pull/43 - move some of `test_meta_arch_rcnn.py` to oss - `create_fake_detection_data_loader` doesn't give correct resolution, fix it - set pooler resolution for faster test Reviewed By: zhanghang1989 Differential Revision: D27726476 fbshipit-source-id: 063c0a01e95df10f91b58b0692da0546e21c115c
-
- 13 Apr, 2021 2 commits
-
-
Yanghan Wang authored
Summary: - store the expiration in metadata when loading data - use the before_train hook to rebuild the data loader when the expiration condition is met. Reviewed By: zisting Differential Revision: D27683164 fbshipit-source-id: e3e3c6c15eee7c02c7a1bfed5f4d4d0e67d61a4f
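A hedged sketch of that pattern using a detectron2-style hook; everything except `HookBase` is illustrative (the actual metadata field names and rebuild call are not shown in this summary):

```python
import time
from detectron2.engine import HookBase

class RebuildExpiredLoaderHook(HookBase):
    """Illustrative only: rebuild the train loader once its data has expired."""

    def __init__(self, build_loader_fn):
        self._build_loader_fn = build_loader_fn

    def before_train(self):
        # expiration timestamp assumed stored in the loader's metadata at load time
        meta = getattr(self.trainer.data_loader, "metadata", {})
        if meta.get("expiration_ts", float("inf")) <= time.time():
            self.trainer.data_loader = self._build_loader_fn()
```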
-
Sam Tsai authored
Summary: 1. Add changes to support variations of datasets 2. Fix runner to support torchscript export Reviewed By: wat3rBro Differential Revision: D26871461 fbshipit-source-id: ec46f7e0d2c14c9b802aec22d78b2a089e962a2f
-
- 09 Apr, 2021 2 commits
-
-
Ananth Subramaniam authored
Summary: `checkpoint_callback` now only accepts boolean values: https://github.com/PyTorchLightning/pytorch-lightning/blob/19e67d18c472c3a03dec4dd9bfcef031e9ca8719/pytorch_lightning/trainer/connectors/callback_connector.py#L65-L73 Reviewed By: shuyingsunshine21 Differential Revision: D27682178 fbshipit-source-id: 9e863aad7a23a76dee8ae5df9f5a78e7a94bfe8a
-
Ananth Subramaniam authored
Summary: Before: this test would assume only 2 checkpoints were stored: `last.ckpt`, and `FINAL_MODEL_CKPT` Now: this test asserts that at least these 2 checkpoints are stored. In case the config specifies `save_top_k=-1` for instance, we'd save more checkpoints, causing this test to fail Since this test is only loading the last and the final outputs, I'm changing the behavior to assert that these checkpoints must be saved and ignoring other checkpoint files that could be generated. Reviewed By: kazhang Differential Revision: D27671284 fbshipit-source-id: 0419fb46856d048e7b6eba3ff1dc65b7280a9a90
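The relaxed check boils down to a subset assertion rather than an exact-set one (sketch; the final checkpoint filename is an assumption):

```python
import os

def assert_required_checkpoints(output_dir: str, final_ckpt: str = "model_final.ckpt"):
    saved = set(os.listdir(output_dir))
    required = {"last.ckpt", final_ckpt}
    # extra files (e.g. produced by save_top_k=-1) are now tolerated
    assert required <= saved, f"missing checkpoints: {required - saved}"
```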
-
- 08 Apr, 2021 1 commit
-
-
Yanghan Wang authored
Summary: fbgs (code search) for register_uri_image_loader, UniversalResourceLoader, and _IMAGE_LOADER_REGISTRY returns no results other than this file Reviewed By: newstzpz Differential Revision: D27639902 fbshipit-source-id: 52e3bb77dbb547334938b8537d6e1c173405d12d
-
- 06 Apr, 2021 2 commits
-
-
Hang Zhang authored
Summary: we need to load the [config file](https://github.com/facebookresearch/d2go/blob/master/tests/misc/test_configs.py#L35) during the unittest. `get_resource_path` needs to use `d2go.tests` https://fburl.com/whn41ma0 The test fails at https://github.com/facebookresearch/d2go/runs/2272258836 Pull Request resolved: https://github.com/facebookresearch/d2go/pull/35 Reviewed By: wat3rBro Differential Revision: D27578008 Pulled By: zhanghang1989 fbshipit-source-id: 5fa24c7f74bc3e59ffee98d57f02a2a558c2a4b0
-
Hang Zhang authored
Summary: TorchVision recently upgraded its version to 0.10.0, which causes issues in the version check in detr. Reviewed By: wat3rBro Differential Revision: D27575085 fbshipit-source-id: 75f459fe7a711161e908609fcf2f2d28a01a6c74
-
- 05 Apr, 2021 2 commits
-
-
Owen Wang authored
Summary: The prediction count evaluator needs to gather its state before computing metrics; otherwise, when parallelized across N GPUs, we only get metrics computed from 1/N of the dataset, increasing our eval signal's variance. Reviewed By: wat3rBro Differential Revision: D27416864 fbshipit-source-id: b2c5334cd5a38bebcd06c6ace1627a6b71645fdd
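The shape of the fix, sketched with detectron2's `comm` utilities (the evaluator internals here are illustrative, not the actual class):

```python
from detectron2.utils import comm

class PredictionCountEvaluatorSketch:
    def __init__(self):
        self.counts = []

    def process(self, inputs, outputs):
        self.counts.extend(len(o["instances"]) for o in outputs)

    def evaluate(self):
        # gather per-GPU state onto the main process before computing metrics,
        # so the result covers the whole dataset rather than 1/N of it
        all_counts = comm.gather(self.counts, dst=0)
        if not comm.is_main_process():
            return {}
        merged = [c for per_rank in all_counts for c in per_rank]
        return {"predictions_per_image": sum(merged) / max(len(merged), 1)}
```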
-
Sam Tsai authored
Summary: Add typing to transform. Reviewed By: wat3rBro Differential Revision: D27145140 fbshipit-source-id: 8556427b421bf91a05692a590db175c68c4d6890
-
- 03 Apr, 2021 2 commits
-
-
Peizhao Zhang authored
Summary: Make data and evaluation visualization optional. * could return None. Reviewed By: zhanghang1989, wat3rBro Differential Revision: D27316632 fbshipit-source-id: 2a85db4815cbf3407a20a74c125dcd52d75167fa
-
Peizhao Zhang authored
Summary: Format changes. * [Option] + [Shift] + [F] Reviewed By: mattcyu1, zhanghang1989, wat3rBro Differential Revision: D27316555 fbshipit-source-id: 0fc3396eb34d964478cb3551dc73b47412089ccb
-
- 02 Apr, 2021 1 commit
-
-
Yanghan Wang authored
Summary: #Facebook: `build_d2go_train_loader` will replace `runner.build_detection_train_loader`. Currently we call `build_d2go_train_loader` from `runner.build_detection_train_loader` since some runners have their own implementation; we will resolve those cases and then remove the `runner.build_detection_train_loader` API. Currently `build_d2go_train_loader` uses `_MAPPED_TRAIN_LOADER_BUILDER_REGISTRY` to support different versions between OSS and FB; not sure if this is a good pattern or not, please comment on the diff if you have a better idea. Reviewed By: zhanghang1989 Differential Revision: D27505681 fbshipit-source-id: b5caf7280a88c2ebccb498097c0b7af51c966fc6
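A sketch of the registry pattern in question, built on detectron2's `Registry`; the registry name comes from the summary, while the builder name, key selection, and signature are assumptions:

```python
from detectron2.utils.registry import Registry

_MAPPED_TRAIN_LOADER_BUILDER_REGISTRY = Registry("MAPPED_TRAIN_LOADER_BUILDER")

@_MAPPED_TRAIN_LOADER_BUILDER_REGISTRY.register()
def oss_mapped_train_loader(cfg, mapper):
    ...  # OSS implementation; an FB-internal variant would register a different builder

def build_d2go_train_loader(cfg, mapper=None):
    # the key selecting the builder is hypothetical
    builder = _MAPPED_TRAIN_LOADER_BUILDER_REGISTRY.get("oss_mapped_train_loader")
    return builder(cfg, mapper)
```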
-
- 31 Mar, 2021 3 commits
-
-
Kai Zhang authored
Reviewed By: newstzpz Differential Revision: D27255960 fbshipit-source-id: 1699ff23d2bc610dffc0215a90a7c1c17e3783c3
-
Sam Tsai authored
Summary: Fixing unit test that was not listed due to rebase error. Reviewed By: newstzpz, wat3rBro Differential Revision: D27456322 fbshipit-source-id: 519c5c086adfb19104ed99234f4f476eb34a79bc
-
Tao Xu authored
Summary: Train a pix2pix model on the paired dataset. During inference, it can translate a source image into the target image. Reviewed By: newstzpz Differential Revision: D27371290 fbshipit-source-id: 3141bc6d9e4fe0013f6ea3de3cf998163d286168
-
- 30 Mar, 2021 3 commits
-
-
Sam Tsai authored
Summary: Separate unit tests into individual folder based on functionality. Reviewed By: wat3rBro Differential Revision: D27132567 fbshipit-source-id: 9a8200be530ca14c7ef42191d59795b05b9800cc
-
Hang Zhang authored
Summary: fixes https://github.com/facebookresearch/d2go/issues/27 Pull Request resolved: https://github.com/facebookresearch/d2go/pull/28 Reviewed By: newstzpz Differential Revision: D27214440 Pulled By: zhanghang1989 fbshipit-source-id: da538ad1e29faa9c36065db89138b1cc97045a28
-
Kapil Krishnakumar authored
Summary: On datasets that don't contain the dataset name / mapping, initialization using the parent visualizer class breaks. Split this into its own function so that the functionality can be overridden in a subclass. Reviewed By: wat3rBro Differential Revision: D27412314 fbshipit-source-id: a91db47615b14ba982285ce819901b8db27e5693
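Sketched intent of the split (class and method names hypothetical): isolate the metadata lookup so a subclass can supply metadata for datasets without a registered name/mapping:

```python
from detectron2.data import MetadataCatalog

class VisualizerWrapperSketch:
    def __init__(self, cfg, dataset_name):
        # previously inlined in __init__, which broke on unregistered datasets
        self.metadata = self._get_metadata(dataset_name)

    def _get_metadata(self, dataset_name):
        return MetadataCatalog.get(dataset_name)

class NoNameDatasetVisualizer(VisualizerWrapperSketch):
    def _get_metadata(self, dataset_name):
        # override: supply metadata for datasets lacking a name / mapping
        return MetadataCatalog.get("__placeholder__").set(thing_classes=["obj"])
```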
-
- 29 Mar, 2021 5 commits
-
-
TannerGilbert authored
Summary: In 9d238344, the test utils were moved to the core library, but the import for the create_fake_detection_data_loader inside the d2go_beginner.ipynb wasn't updated. Pull Request resolved: https://github.com/facebookresearch/d2go/pull/29 Reviewed By: newstzpz Differential Revision: D27239846 Pulled By: zhanghang1989 fbshipit-source-id: e39df32746b1d1081026f9969bda84e73ac7df55
-
Yanghan Wang authored
Summary: all utils code is moved to d2go.utils.testing Reviewed By: newstzpz Differential Revision: D27209943 fbshipit-source-id: 6c5cb14858155a8ed13478d65ee8e02ef74616d7
-
Sanjeev Kumar authored
Summary: - Added support for running evaluation for models where the number of subclasses in the model output is less than the number of subclasses in the annotated dataset Reviewed By: vivekn Differential Revision: D27090466 fbshipit-source-id: 704c438c1bbca333648c0477c412bf3ed79f04e7
-
Yanghan Wang authored
Summary: Add `build_auto_stream_train_loader` Reviewed By: newstzpz Differential Revision: D27343030 fbshipit-source-id: a79d3ed1ac41fc159d10bb6ff1db74549b645a1c
-
Yanghan Wang authored
Summary: The default mapper may load "file_name" and "sem_seg_file_name" from `dataset_dict`, when prefetching them from manifold, we no longer need to load them because they're already fetched. This diff adds two more fields for holding those pre-fetched data, and make the mapper work in both cases. Reviewed By: newstzpz Differential Revision: D26972340 fbshipit-source-id: 63f6dc809d321e149aa5adf9f92c3ace07cbf2a7
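A hedged sketch of the mapper change: prefer pre-fetched bytes when present, and fall back to reading "file_name" otherwise. The `image_bytes` field name is an assumption for illustration:

```python
import io
import numpy as np
from PIL import Image
from detectron2.data.detection_utils import read_image

def load_image(dataset_dict, format="BGR"):
    # prefer pre-fetched bytes when present (field name assumed for illustration)
    if "image_bytes" in dataset_dict:
        img = np.asarray(Image.open(io.BytesIO(dataset_dict["image_bytes"])).convert("RGB"))
        return img[:, :, ::-1] if format == "BGR" else img
    # otherwise fall back to reading from "file_name" as before
    return read_image(dataset_dict["file_name"], format=format)
```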
-
- 24 Mar, 2021 3 commits
-
-
Kai Zhang authored
Summary: Evaluate the predictor generated by the previous step. This diff modifies lightning_train_net to reuse the evaluation logic by adding a `predictor_path` param. This diff also makes the Lightning training backend depend on `cfg.MODEL.DEVICE` so that in the evaluate_predictor step, the user can set the backend by changing the model device. This is useful for evaluating int8 quantized models. Reviewed By: newstzpz Differential Revision: D27150609 fbshipit-source-id: fb72da3e81db932c0fa479350150720143e09a3e
-
Kai Zhang authored
Summary: As titled. Reviewed By: newstzpz Differential Revision: D27074737 fbshipit-source-id: 72f2535fc730a37f5ea8f58aaff88005c28ffc5b
-
Kai Zhang authored
Summary: Given that the ways to create a D2Go runner and a Lightning task are different, get_class was introduced so that in an application we could do:
```
if is Lightning:
    task_cls = get_class(classname)
    task = task_cls(cfg)
else:
    runner = create_runner(classname)
```
It turns out that we would need to do that in many places: workflow, binaries. This diff reverts `get_class` and returns the class from `create_runner` if the class is a Lightning module. Reviewed By: newstzpz Differential Revision: D26676595 fbshipit-source-id: c3ce2016d09fe073af4c2dd9f98eea4e59ca621b
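After this diff the branching collapses into a single call (sketch; the exact return convention is inferred from the summary):

```python
result = create_runner(classname)
# for a Lightning module, create_runner returns the task class itself
task_or_runner = result(cfg) if isinstance(result, type) else result
```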
-
- 21 Mar, 2021 1 commit
-
-
Tao Xu authored
Summary: Prepare the launch script for IG, which supports registering new datasets for GANs on the fly Reviewed By: newstzpz Differential Revision: D27211763 fbshipit-source-id: f79978ceae246ab4f27a8083d25dd50c62dcefab
-
- 20 Mar, 2021 1 commit
-
-
Yanghan Wang authored
Summary: Note: d2go.tests is not a library for oss; move the utils code to d2go.utils.testing Reviewed By: zhanghang1989 Differential Revision: D26706933 fbshipit-source-id: 85767b66bbb6c67db05e11823beb4840220b2aa3
-
- 18 Mar, 2021 2 commits
-
-
Ananth Subramaniam authored
Summary: `checkpoint_callback` is being phased out. Initially, it was a special way to configure checkpoints, but it makes more sense for those callbacks to be included in the general `callbacks` trainer argument. In 1.2.X, `checkpoint_callback` is expected to be a boolean value only. If `checkpoint_callback=False` **and** an instance of `ModelCheckpoint` is passed in the trainer's `callbacks` arguments, Lightning raises a [misconfiguration error](https://github.com/PyTorchLightning/pytorch-lightning/blob/2f6ce1ae7fff34d16d3707571f6a9a7b0fb0c50a/pytorch_lightning/trainer/connectors/callback_connector.py#L66-L70) Reviewed By: newstzpz Differential Revision: D27139315 fbshipit-source-id: 07ad5ea520583a2e46a9cb2a938f98968265c932
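The implied migration is the standard Lightning one: keep `checkpoint_callback` boolean and move the `ModelCheckpoint` instance into `callbacks` (a minimal sketch):

```python
from pytorch_lightning import Trainer
from pytorch_lightning.callbacks import ModelCheckpoint

# before (now raises a misconfiguration error):
#   Trainer(checkpoint_callback=ModelCheckpoint(dirpath="ckpts"))
trainer = Trainer(
    checkpoint_callback=True,                      # boolean only in 1.2.x
    callbacks=[ModelCheckpoint(dirpath="ckpts")],  # checkpointing configured here
)
```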
-
Owen Wang authored
Summary: Add an option to specify a custom subclass id mapping. This allows for flexibility when training models that need different outputs. Reviewed By: sanjeevk42 Differential Revision: D26826986 fbshipit-source-id: 9dba4f0f2f2afebd2f152ddd9aebd46cf4c86a0d
-
- 17 Mar, 2021 2 commits
-
-
Hang Zhang authored
Summary: Pull Request resolved: https://github.com/facebookresearch/d2go/pull/24 Reviewed By: wat3rBro Differential Revision: D27127642 Pulled By: zhanghang1989 fbshipit-source-id: 18bc3c2fa05232cacc778925db6b7dcea99b108c
-
Kai Zhang authored
Summary: As titled. The logs will be stored in scuba_caffe2_pytorch_usage_stats and can be queried like
```
WITH events AS (
    SELECT DISTINCT
        workflow_run_id,
        REGEXP_EXTRACT(event, 'D2Go.Runner\.([a-zA-Z0-9_]*)', 1) AS runner_name,
        ds
    FROM scuba_caffe2_pytorch_usage_stats
    WHERE
        ds BETWEEN '$START_DATE$' AND '$END_DATE$'
        AND event LIKE '%D2Go.Runner%'
        AND flow_is_test = 0
        AND flow_is_local_run = 0
        AND workflow_run_id > 0
)
SELECT
    COUNT(1) AS total_runs,
    runner_name,
    ds
FROM events
GROUP BY ds, runner_name
ORDER BY total_runs DESC
```
Reviewed By: colin2328 Differential Revision: D26032225 fbshipit-source-id: ab1e06f3b1af200baf530506be9b3894ddf77126
-
- 16 Mar, 2021 1 commit
-
-
Sam Tsai authored
Summary: Extend conversion to support ids beyond cocotext format where ids are strings. Reviewed By: newstzpz Differential Revision: D27018211 fbshipit-source-id: 7282fd4b9a7e9cd19323235ed1a3c3e7b33cb6b4
-