- 30 Apr, 2021 1 commit
Sam Tsai authored
Summary:
1. Add a keypoint metadata registry for registering different keypoint metadata.
2. Add an option to `inject_coco_dataset` for adding keypoint metadata.

Reviewed By: newstzpz Differential Revision: D27730541 fbshipit-source-id: c6ba97f60664fce4dcbb0de80222df7490bc6d5d
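For illustration, a minimal sketch of how such a keypoint-metadata registry could be used, assuming detectron2's generic `Registry`; the registry name and metadata fields here are illustrative, not the exact ones this diff adds:

```python
from detectron2.utils.registry import Registry

# Hypothetical registry name; the diff's actual identifier may differ.
KEYPOINT_METADATA_REGISTRY = Registry("KEYPOINT_METADATA")


@KEYPOINT_METADATA_REGISTRY.register()
def person_keypoint_metadata():
    # Each registered function returns the metadata for one keypoint schema.
    return {
        "keypoint_names": ["nose", "left_eye", "right_eye"],
        "keypoint_flip_map": [("left_eye", "right_eye")],
    }


# When injecting a COCO dataset, the metadata is then looked up by name:
metadata = KEYPOINT_METADATA_REGISTRY.get("person_keypoint_metadata")()
```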
- 29 Apr, 2021 2 commits
Yanghan Wang authored
Reviewed By: zhanghang1989 Differential Revision: D28083131 fbshipit-source-id: 8bad642800d3923db3f42047d1b1d85625c01bd9
Yanghan Wang authored
Reviewed By: zhanghang1989 Differential Revision: D28081681 fbshipit-source-id: 3722f5db668c36c4f23c3fd0c10657a3cf14ad3c
- 28 Apr, 2021 2 commits
Hang Zhang authored
Summary: PointRend mask doesn't work for quantization. Add a patch to disable it. Reviewed By: wat3rBro Differential Revision: D27800349 fbshipit-source-id: ae0268ee78b000245ebdb2edbfc679a62c85a59a
Ananth Subramaniam authored
Synchronize PyTorchLightning/pytorch-lightning (revision 7fe8d184@master) to github/third-party/PyTorchLightning/pytorch-lightning

Summary:
### New commit log messages
7fe8d184 Do not `shuffle` in `LightningDataModule.from_datasets` for `IterableDataset` (#7053)
bab72255 [fix] Add barriers before and after setup hook is run (#7202)
f920ba29 [bugfix] Metric not logged properly in manual optimization (#7228)
e147127c [feat] Add better support for predict + ddp 2/3 (#7215)
ca6c87ff Add back `clip_gradients(model)` (#7231)
3b36d81c Fixed `num_sanity_val_steps` affecting reproducibility of training data shuffling (#7014)
5cf9afa1 Add fairscale install msg for Sharded Plugins (#7213)
52a5cee0 Set smarter default for DDP sharded for performance optimization (#6937)
dd5ec75e Deprecate save_function from model checkpoint callback (#7201)
ac7d6a35 Fix `NeptuneLogger.log_text(step=None)` (#7194)
6be0a859 Update teardown for TPU acc (#7211)
bc3f08b0 [fix] Add barrier to accelerator's teardown (#6814)
68eac4d9 Enforce Lightning module as source of truth for automatic optimization (#7130)
44d775fc Update Error message for ProfileConnector (#7204)
31fcd7d0 Deprecate write_predictions on the LightningModule (#7066)
591b9cee make bug_report_model minimal (#7191)
b3fe8366 Move metrics_to_scalars to a dedicated utilities file (#7180)
f58865aa Properly set `LightningModule.device` after model replacement (#7188)
8439aead Update FairScale on CI (#7017)
92af3632 Fix `lr_finder` suggesting too high learning rates (#7076)
d534e53e add missing predict docs (#7150)

Reviewed By: kazhang Differential Revision: D28032962 fbshipit-source-id: 18cd01e8ecc13fe25f0890ac0f4b20c3c3e1fed3
- 27 Apr, 2021 1 commit
Jacob Szwejbka authored
Summary:
Pull Request resolved: https://github.com/facebookresearch/d2go/pull/54

This arg is being deprecated; its use case was really only for modules that use functions besides `forward` for inference. The new plan is to optimize every function. Since this script was just created, I'm hoping I can edit it without throwing things out of whack.

Reviewed By: wat3rBro Differential Revision: D27954176 fbshipit-source-id: fbe178fcc0404e5d2524d8edb4052e2cd17f43ba
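For context, "optimize every function" maps onto PyTorch's mobile optimizer roughly as follows; this is a sketch of the public API involved, not the internal script being edited:

```python
import torch
from torch.utils.mobile_optimizer import optimize_for_mobile


class Model(torch.nn.Module):
    def forward(self, x):
        return x + 1

    @torch.jit.export  # a non-forward method used for inference
    def preprocess(self, x):
        return x * 2


scripted = torch.jit.script(Model())
# preserved_methods keeps exported non-forward methods in the optimized module.
optimized = optimize_for_mobile(scripted, preserved_methods=["preprocess"])
```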
- 23 Apr, 2021 3 commits
Yanghan Wang authored
Summary: Customization via `export_predictor` is now deprecated; this diff moves the functionality to `FacegenExportMethod`. Reviewed By: danthe3rd Differential Revision: D27935492 fbshipit-source-id: 4f35cff7f3709eff290edefce570cfeea47e687d
Yanghan Wang authored
Summary:
This diff cleans up the process of exporting RCNN to predictor by tracing.

- Implement a new `D2TorchscriptTracingExport` which utilizes D2's `TracingAdapter` (https://github.com/facebookresearch/d2go/commit/d86ecc92eb97f14fcd97d626185f61c6817351e4). It's capable of handling more complicated input/output data structures, for example the `MultiDictInMultiDictOut` in the unit test. Some duplicated serialization code can also be removed.
- Later on we'll move `DefaultTorchscriptExport` to `mobile_cv.predictor`, which doesn't have a D2 dependency, while keeping `D2TorchscriptTracingExport` in D2Go as a more advanced version.
- Using `D2TorchscriptTracingExport` we can simplify `prepare_for_export` quite a bit and remove hacky code.

Reviewed By: zhanghang1989 Differential Revision: D27931029 fbshipit-source-id: 4a8d5e5ee3f10e29d98fca63e0e1c68bbda22745
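A rough sketch of what tracing-based export with detectron2's `TracingAdapter` looks like; the model construction and input format below are illustrative:

```python
import torch
from detectron2.export import TracingAdapter


def export_by_tracing(model, sample_inputs):
    # sample_inputs uses the model's native format, e.g.
    # [{"image": torch.rand(3, 320, 320)}] for an RCNN.
    adapter = TracingAdapter(model, sample_inputs)
    # The adapter flattens nested dict/list inputs and outputs into tensor
    # tuples that torch.jit.trace can handle.
    traced = torch.jit.trace(adapter, adapter.flattened_inputs)
    # outputs_schema rebuilds the original (possibly nested) output structure
    # from the traced module's flat tensor outputs.
    return traced, adapter.outputs_schema
```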
Yanghan Wang authored
Reviewed By: zhanghang1989 Differential Revision: D27916281 fbshipit-source-id: 7ea01e99e9c2a9b19992f458abc786713ba5a4cd
- 22 Apr, 2021 1 commit
Yanghan Wang authored
Reviewed By: zhanghang1989 Differential Revision: D27805428 fbshipit-source-id: c588bdb456e606ca333c2f99eb5c3668edddcbfa
- 21 Apr, 2021 4 commits
Yanghan Wang authored
Reviewed By: zhanghang1989 Differential Revision: D27898376 fbshipit-source-id: 87549b0cc24bd38f114977503f4eba97e9166ab8
Kai Zhang authored
Summary: Pull Request resolved: https://github.com/facebookresearch/d2go/pull/46 As titled. The test is flaky because the tensorboard logger might still be writing to the temporary folder when we tear the folder down. Reviewed By: ananthsub Differential Revision: D27844504 fbshipit-source-id: 3987f9ec3cd05b2f193e75cd4d85109a46f4ee71
Kai Zhang authored
Summary: Pull Request resolved: https://github.com/facebookresearch/d2go/pull/51 As titled. Group the D2Go runner methods together. Reviewed By: zhanghang1989, wat3rBro Differential Revision: D27777726 fbshipit-source-id: f300bce444a401b61ff2adfb45b0c640b1f14855
Kai Zhang authored
Summary: Pull Request resolved: https://github.com/facebookresearch/d2go/pull/50 Found a few custom runners that override this method of the default runner. Reviewed By: zhanghang1989, wat3rBro Differential Revision: D27777505 fbshipit-source-id: 0463cc36bad4af4cbfbe09ab46962cfc1dafbd5d
- 20 Apr, 2021 2 commits
Yanghan Wang authored
Summary: The model to export is either an `nn.Module` or a dict of `nn.Module`s. Reviewed By: zhanghang1989 Differential Revision: D27835097 fbshipit-source-id: 869446b36d3e8cc30d6d947f1fc8970cc9ba6c12
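A minimal sketch of that contract, with a hypothetical helper that normalizes both accepted forms into a dict:

```python
from typing import Dict, Union

import torch.nn as nn


def normalize_export_models(
    model: Union[nn.Module, Dict[str, nn.Module]]
) -> Dict[str, nn.Module]:
    # Accept a single module or a dict of named modules; always return a dict.
    if isinstance(model, nn.Module):
        return {"model": model}
    assert isinstance(model, dict) and all(
        isinstance(m, nn.Module) for m in model.values()
    )
    return model
```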
Kai Zhang authored
Summary: Pull Request resolved: https://github.com/facebookresearch/d2go/pull/49 Reviewed By: wat3rBro Differential Revision: D27875007 fbshipit-source-id: 2f61a4a3de29f3583a54adc914ee5a7eb605a823
- 19 Apr, 2021 2 commits
Yue (R) Zhao authored
Summary: Add an API to log the model graph in TensorBoard. Reviewed By: wat3rBro Differential Revision: D27855774 fbshipit-source-id: 415c469c5de0c56fc828d1b95f4be697e0acac84
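What "logging the graph" boils down to, sketched with the plain TensorBoard writer; the d2go API added here is internal, so this only shows the underlying call:

```python
import torch
from torch.utils.tensorboard import SummaryWriter

writer = SummaryWriter(log_dir="tb_logs")  # illustrative path
model = torch.nn.Sequential(torch.nn.Linear(8, 4), torch.nn.ReLU())
# add_graph traces the model with the given example input and logs the graph.
writer.add_graph(model, torch.rand(1, 8))
writer.close()
```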
Peizhao Zhang authored
Summary:
* Added a registry to register functions that could be used to register hooks for training.
* TRAINER_HOOKS_REGISTRY: list of functions to add hooks for the trainer; all functions in the registry will be called to add hooks.
  * `func(hooks: List[HookBase]) -> None`

Reviewed By: zhanghang1989 Differential Revision: D27560806 fbshipit-source-id: fcfa02623bfd08508b6083db2d318d08f7e3c0b8
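A minimal sketch of this pattern, assuming detectron2's `Registry` and `HookBase`; the example hook is illustrative, and iteration over the registry assumes fvcore's `Registry.__iter__`:

```python
from typing import List

from detectron2.engine.hooks import HookBase
from detectron2.utils.registry import Registry

TRAINER_HOOKS_REGISTRY = Registry("TRAINER_HOOKS")


class PrintIterHook(HookBase):  # illustrative hook
    def after_step(self):
        print("iter", self.trainer.iter)


@TRAINER_HOOKS_REGISTRY.register()
def add_logging_hooks(hooks: List[HookBase]) -> None:
    # Registered functions mutate the hook list in place and return None.
    hooks.append(PrintIterHook())


def build_trainer_hooks() -> List[HookBase]:
    hooks: List[HookBase] = []
    # All functions in the registry are called to add hooks
    # (fvcore's Registry iterates over (name, obj) pairs).
    for _name, func in TRAINER_HOOKS_REGISTRY:
        func(hooks)
    return hooks
```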
- 17 Apr, 2021 2 commits
Kai Zhang authored
Summary: Delegate FX quantization callback's customization to model. Reviewed By: wat3rBro Differential Revision: D27669212 fbshipit-source-id: 2715546cf03134896da6f95ecddaf8503ff95d0b
Kai Zhang authored
Summary:
Sanity-test the E2E QAT workflow on the Lightning Trainer, as per title.
- Add `post_training_opts`. This is required to use `all_steps_qat.json` with Lightning. We don't actually support `post_training_opts` in this diff; we leave that as part of T83437359.
- Update the .yaml to specify the quantizable modules.
- Update `lightning_train_net.py` to use the QuantizationAwareTraining callback.

Reviewed By: kandluis Differential Revision: D26304879 fbshipit-source-id: 948bef4817d385d8a0969e4990d7f17ecd6994b7
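A sketch of wiring a QAT callback into a Lightning Trainer, using PyTorch Lightning's built-in `QuantizationAwareTraining` callback as a stand-in for the d2go one; the arguments shown are illustrative:

```python
import pytorch_lightning as pl
from pytorch_lightning.callbacks import QuantizationAwareTraining

trainer = pl.Trainer(
    max_steps=1000,
    # qconfig selects the quantization backend config (e.g. for mobile).
    callbacks=[QuantizationAwareTraining(qconfig="qnnpack")],
)
# trainer.fit(lightning_module, datamodule=datamodule)
```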
- 15 Apr, 2021 3 commits
Yanghan Wang authored
Reviewed By: zhanghang1989 Differential Revision: D27710199 fbshipit-source-id: 178a28972dcc06350e99263f4b38f284cf10c890
Yanghan Wang authored
Reviewed By: zhanghang1989 Differential Revision: D27783989 fbshipit-source-id: f05c11e396a2f62366721b365929b29f05d5bc02
Alexander Pivovarov authored
Summary: Fix typos in exporter. Pull Request resolved: https://github.com/facebookresearch/d2go/pull/45 Reviewed By: wat3rBro Differential Revision: D27779963 Pulled By: zhanghang1989 fbshipit-source-id: bcf7922afe6d4cccc074615069538eb5a6098b98
- 14 Apr, 2021 2 commits
Ananth Subramaniam authored
Synchronize PyTorchLightning/pytorch-lightning (revision 0b843848@master) to github/third-party/PyTorchLightning/pytorch-lightning

Summary:
### New commit log messages

## [UnReleased] - 2021-MM-DD

### Added

- Added more explicit exception message when trying to execute `trainer.test()` or `trainer.validate()` with `fast_dev_run=True` ([#6667](https://github.com/PyTorchLightning/pytorch-lightning/pull/6667))
- Added `LightningCLI` class to provide simple reproducibility with minimum boilerplate training cli. ([#4492](https://github.com/PyTorchLightning/pytorch-lightning/pull/4492))
- Trigger warning when non-metric logged value with multi processes hasn't been reduced ([#6417](https://github.com/PyTorchLightning/pytorch-lightning/pull/6417))
- Added `gradient_clip_algorithm` argument to Trainer for gradient clipping by value ([#6123](https://github.com/PyTorchLightning/pytorch-lightning/pull/6123))
- Added a way to print to terminal without breaking up the progress bar ([#5470](https://github.com/PyTorchLightning/pytorch-lightning/pull/5470))
- Added support to checkpoint after training steps in `ModelCheckpoint` callback ([#6146](https://github.com/PyTorchLightning/pytorch-lightning/pull/6146))
- Added `checkpoint` parameter to callback's `on_save_checkpoint` hook ([#6072](https://github.com/PyTorchLightning/pytorch-lightning/pull/6072))
- Added `RunningStage.SANITY_CHECKING` ([#4945](https://github.com/PyTorchLightning/pytorch-lightning/pull/4945))
- Added `TrainerState.{FITTING,VALIDATING,TESTING,PREDICTING,TUNING}` ([#4945](https://github.com/PyTorchLightning/pytorch-lightning/pull/4945))
- Added `Trainer.validate()` method to perform one evaluation epoch over the validation set ([#4948](https://github.com/PyTorchLightning/pytorch-lightning/pull/4948))
- Added `LightningEnvironment` for Lightning-specific DDP ([#5915](https://github.com/PyTorchLightning/pytorch-lightning/pull/5915))
- Added `teardown()` hook to LightningDataModule ([#4673](https://github.com/PyTorchLightning/pytorch-lightning/pull/4673))
- Added `auto_insert_metric_name` parameter to `ModelCheckpoint` ([#6277](https://github.com/PyTorchLightning/pytorch-lightning/pull/6277))
- Added arg to `self.log` that enables users to give custom names when dealing with multiple dataloaders ([#6274](https://github.com/PyTorchLightning/pytorch-lightning/pull/6274))
- Added `teardown` method to `BaseProfiler` to enable subclasses defining post-profiling steps outside of `__del__` ([#6370](https://github.com/PyTorchLightning/pytorch-lightning/pull/6370))
- Added `setup` method to `BaseProfiler` to enable subclasses defining pre-profiling steps for every process ([#6633](https://github.com/PyTorchLightning/pytorch-lightning/pull/6633))
- Added no return warning to predict ([#6139](https://github.com/PyTorchLightning/pytorch-lightning/pull/6139))
- Added `Trainer.predict` config validation ([#6543](https://github.com/PyTorchLightning/pytorch-lightning/pull/6543))
- Added `AbstractProfiler` interface ([#6621](https://github.com/PyTorchLightning/pytorch-lightning/pull/6621))
- Added support for including module names for forward in the autograd trace of `PyTorchProfiler` ([#6349](https://github.com/PyTorchLightning/pytorch-lightning/pull/6349))
- Added support for the PyTorch 1.8.1 autograd profiler ([#6618](https://github.com/PyTorchLightning/pytorch-lightning/pull/6618))
- Added `outputs` parameter to callback's `on_validation_epoch_end` & `on_test_epoch_end` hooks ([#6120](https://github.com/PyTorchLightning/pytorch-lightning/pull/6120))
- Added `configure_sharded_model` hook ([#6679](https://github.com/PyTorchLightning/pytorch-lightning/pull/6679))
- Added support for `precision=64`, enabling training with double precision ([#6595](https://github.com/PyTorchLightning/pytorch-lightning/pull/6595))
- Added support for DDP communication hooks ([#6736](https://github.com/PyTorchLightning/pytorch-lightning/issues/6736))
- Added `artifact_location` argument to `MLFlowLogger` which will be passed to the `MlflowClient.create_experiment` call ([#6677](https://github.com/PyTorchLightning/pytorch-lightning/pull/6677))
- Added `model` parameter to precision plugins' `clip_gradients` signature ([#6764](https://github.com/PyTorchLightning/pytorch-lightning/pull/6764))

### Changed

- Renamed `pytorch_lightning.callbacks.swa` to `pytorch_lightning.callbacks.stochastic_weight_avg` ([#6259](https://github.com/PyTorchLightning/pytorch-lightning/pull/6259))
- Refactor `RunningStage` and `TrainerState` usage ([#4945](https://github.com/PyTorchLightning/pytorch-lightning/pull/4945))
- Changed `trainer.evaluating` to return `True` if validating or testing ([#4945](https://github.com/PyTorchLightning/pytorch-lightning/pull/4945))
- Changed `setup()` and `teardown()` stage argument to take any of `{fit,validate,test,predict}` ([#6386](https://github.com/PyTorchLightning/pytorch-lightning/pull/6386))
- Changed profilers to save separate report files per state and rank ([#6621](https://github.com/PyTorchLightning/pytorch-lightning/pull/6621))
- Changed `PyTorchProfiler` to use `torch.autograd.profiler.record_function` to record functions ([#6349](https://github.com/PyTorchLightning/pytorch-lightning/pull/6349))

### Deprecated

- `period` has been deprecated in favor of `every_n_val_epochs` in the `ModelCheckpoint` callback ([#6146](https://github.com/PyTorchLightning/pytorch-lightning/pull/6146))
- Deprecated `trainer.running_sanity_check` in favor of `trainer.sanity_checking` ([#4945](https://github.com/PyTorchLightning/pytorch-lightning/pull/4945))
- Deprecated `Profiler(output_filename)` in favor of `dirpath` and `filename` ([#6621](https://github.com/PyTorchLightning/pytorch-lightning/pull/6621))
- Deprecated `PytorchProfiler(profiled_functions)` in favor of `record_functions` ([#6349](https://github.com/PyTorchLightning/pytorch-lightning/pull/6349))
- Deprecated metrics in favor of `torchmetrics` ([#6505](https://github.com/PyTorchLightning/pytorch-lightning/pull/6505), [#6530](https://github.com/PyTorchLightning/pytorch-lightning/pull/6530), [#6540](https://github.com/PyTorchLightning/pytorch-lightning/pull/6540), [#6547](https://github.com/PyTorchLightning/pytorch-lightning/pull/6547), [#6515](https://github.com/PyTorchLightning/pytorch-lightning/pull/6515), [#6572](https://github.com/PyTorchLightning/pytorch-lightning/pull/6572), [#6573](https://github.com/PyTorchLightning/pytorch-lightning/pull/6573), [#6584](https://github.com/PyTorchLightning/pytorch-lightning/pull/6584), [#6636](https://github.com/PyTorchLightning/pytorch-lightning/pull/6636), [#6637](https://github.com/PyTorchLightning/pytorch-lightning/pull/6637), [#6649](https://github.com/PyTorchLightning/pytorch-lightning/pull/6649), [#6659](https://github.com/PyTorchLightning/pytorch-lightning/pull/6659))

### Removed

- Removed support for passing a bool value to `profiler` argument of Trainer ([#6164](https://github.com/PyTorchLightning/pytorch-lightning/pull/6164))
- Removed no return warning from val/test step ([#6139](https://github.com/PyTorchLightning/pytorch-lightning/pull/6139))
- Removed passing a `ModelCheckpoint` instance to `Trainer(checkpoint_callback)` ([#6166](https://github.com/PyTorchLightning/pytorch-lightning/pull/6166))
- Removed deprecated Trainer argument `enable_pl_optimizer` and `automatic_optimization` ([#6163](https://github.com/PyTorchLightning/pytorch-lightning/pull/6163))
- Removed deprecated metrics ([#6161](https://github.com/PyTorchLightning/pytorch-lightning/pull/6161))
  * from `pytorch_lightning.metrics.functional.classification` removed `to_onehot`, `to_categorical`, `get_num_classes`, `roc`, `multiclass_roc`, `average_precision`, `precision_recall_curve`, `multiclass_precision_recall_curve`
  * from `pytorch_lightning.metrics.functional.reduction` removed `reduce`, `class_reduce`
- Removed deprecated `ModelCheckpoint` arguments `prefix`, `mode="auto"` ([#6162](https://github.com/PyTorchLightning/pytorch-lightning/pull/6162))
- Removed `mode='auto'` from `EarlyStopping` ([#6167](https://github.com/PyTorchLightning/pytorch-lightning/pull/6167))
- Removed legacy references for magic keys in the `Result` object ([#6016](https://github.com/PyTorchLightning/pytorch-lightning/pull/6016))
- Removed deprecated `LightningModule` `hparams` setter ([#6207](https://github.com/PyTorchLightning/pytorch-lightning/pull/6207))
- Removed legacy code to log or include metrics in the progress bar by returning them in a dict with the `"log"/"progress_bar"` magic keys. Use `self.log` instead ([#6734](https://github.com/PyTorchLightning/pytorch-lightning/pull/6734))
- Removed `optimizer_idx` argument from `training_step` in manual optimization ([#6093](https://github.com/PyTorchLightning/pytorch-lightning/pull/6093))

### Fixed

- Set better defaults for `rank_zero_only.rank` when training is launched with SLURM and torchelastic ([#6802](https://github.com/PyTorchLightning/pytorch-lightning/pull/6802/))
- Made the `Plugin.reduce` method more consistent across all Plugins to reflect a mean-reduction by default ([#6011](https://github.com/PyTorchLightning/pytorch-lightning/pull/6011))
- Move lightning module to correct device type when using LightningDistributedWrapper ([#6070](https://github.com/PyTorchLightning/pytorch-lightning/pull/6070))
- Do not print top-k verbose log with `ModelCheckpoint(monitor=None)` ([#6109](https://github.com/PyTorchLightning/pytorch-lightning/pull/6109))
- Fixed csv extension check ([#6436](https://github.com/PyTorchLightning/pytorch-lightning/pull/6436))
- Fixed `ModelCheckpoint(monitor=None, save_last=True)` not saving checkpoints ([#6136](https://github.com/PyTorchLightning/pytorch-lightning/pull/6136))
- Fixed `ModelCheckpoint(save_top_k=0, save_last=True)` not saving the `last` checkpoint ([#6136](https://github.com/PyTorchLightning/pytorch-lightning/pull/6136))
- Fixed `.teardown(stage='fit')` getting called during `trainer.test` ([#6386](https://github.com/PyTorchLightning/pytorch-lightning/pull/6386))
- Fixed `.on_fit_{start,end}()` getting called during `trainer.test` ([#6386](https://github.com/PyTorchLightning/pytorch-lightning/pull/6386))
- Fixed LightningModule `all_gather` on cpu tensors ([#6416](https://github.com/PyTorchLightning/pytorch-lightning/pull/6416))
- Fixed torch distributed not available in setup hook for DDP ([#6506](https://github.com/PyTorchLightning/pytorch-lightning/pull/6506))
- Fixed `EarlyStopping` logic when `min_epochs` or `min_steps` requirement is not met ([#6705](https://github.com/PyTorchLightning/pytorch-lightning/pull/6705))

## [1.2.7] - 2021-04-06

### Fixed

- Fixed resolve a bug with omegaconf and xm.save ([#6741](https://github.com/PyTorchLightning/pytorch-lightning/pull/6741))
- Fixed an issue with IterableDataset when __len__ is not defined ([#6828](https://github.com/PyTorchLightning/pytorch-lightning/pull/6828))
- Sanitize None params during pruning ([#6836](https://github.com/PyTorchLightning/pytorch-lightning/pull/6836))
- Enforce an epoch scheduler interval when using SWA ([#6588](https://github.com/PyTorchLightning/pytorch-lightning/pull/6588))
- Fixed TPU Colab hang issue, post training ([#6816](https://github.com/PyTorchLightning/pytorch-lightning/pull/6816))
- Fixed a bug where `TensorBoardLogger` would give a warning and not log correctly to a symbolic link `save_dir` ([#6730](https://github.com/PyTorchLightning/pytorch-lightning/pull/6730))

## [1.2.6] - 2021-03-30

### Changed

- Changed the behavior of `on_epoch_start` to run at the beginning of validation & test epoch ([#6498](https://github.com/PyTorchLightning/pytorch-lightning/pull/6498))

### Removed

- Removed legacy code to include `step` dictionary returns in `callback_metrics`. Use `self.log_dict` instead. ([#6682](https://github.com/PyTorchLightning/pytorch-lightning/pull/6682))

### Fixed

- Fixed `DummyLogger.log_hyperparams` raising a `TypeError` when running with `fast_dev_run=True` ([#6398](https://github.com/PyTorchLightning/pytorch-lightning/pull/6398))
- Fixed error on TPUs when there was no `ModelCheckpoint` ([#6654](https://github.com/PyTorchLightning/pytorch-lightning/pull/6654))
- Fixed `trainer.test` freeze on TPUs ([#6654](https://github.com/PyTorchLightning/pytorch-lightning/pull/6654))
- Fixed a bug where gradients were disabled after calling `Trainer.predict` ([#6657](https://github.com/PyTorchLightning/pytorch-lightning/pull/6657))
- Fixed bug where no TPUs were detected in a TPU pod env ([#6719](https://github.com/PyTorchLightning/pytorch-lightning/pull/6719))

## [1.2.5] - 2021-03-23

### Changed

- Update Gradient Clipping for the TPU Accelerator ([#6576](https://github.com/PyTorchLightning/pytorch-lightning/pull/6576))
- Refactored setup for typing friendly ([#6590](https://github.com/PyTorchLightning/pytorch-lightning/pull/6590))

### Fixed

- Fixed a bug where `all_gather` would not work correctly with `tpu_cores=8` ([#6587](https://github.com/PyTorchLightning/pytorch-lightning/pull/6587))
- Fixed comparing required versions ([#6434](https://github.com/PyTorchLightning/pytorch-lightning/pull/6434))
- Fixed duplicate logs appearing in console when using the python logging module ([#6275](https://github.com/PyTorchLightning/pytorch-lightning/pull/6275))
- Added Autocast in validation, test and predict modes for Native AMP ([#6565](https://github.com/PyTorchLightning/pytorch-lightning/pull/6565))

Reviewed By: shuyingsunshine21 Differential Revision: D27528929 fbshipit-source-id: 311c88f71461c2c79bbf185e28d7a6d683ccc26f
Yanghan Wang authored
Summary:
Pull Request resolved: https://github.com/facebookresearch/d2go/pull/43
- move some of `test_meta_arch_rcnn.py` to OSS
- `create_fake_detection_data_loader` doesn't give the correct resolution; fix it
- set pooler resolution for a faster test

Reviewed By: zhanghang1989 Differential Revision: D27726476 fbshipit-source-id: 063c0a01e95df10f91b58b0692da0546e21c115c
- 13 Apr, 2021 2 commits
Yanghan Wang authored
Summary:
- Store expiration in the metadata when loading data.
- Use a before_train hook to rebuild the data loader when the expiration condition is met.

Reviewed By: zisting Differential Revision: D27683164 fbshipit-source-id: e3e3c6c15eee7c02c7a1bfed5f4d4d0e67d61a4f
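A sketch of the rebuild-on-expiration idea as a detectron2-style hook; `is_expired` and `build_loader` are hypothetical stand-ins, and the hook touches the trainer's internal `_data_loader_iter` attribute:

```python
from detectron2.engine.hooks import HookBase


class RebuildDataLoaderHook(HookBase):
    def __init__(self, is_expired, build_loader):
        self._is_expired = is_expired      # callable: () -> bool
        self._build_loader = build_loader  # callable: () -> data loader

    def before_train(self):
        # Rebuild the data loader when the expiration condition is met.
        if self._is_expired():
            self.trainer._data_loader_iter = iter(self._build_loader())
```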
Sam Tsai authored
Summary:
1. Add changes to support variation of datasets.
2. Fix the runner to support torchscript export.

Reviewed By: wat3rBro Differential Revision: D26871461 fbshipit-source-id: ec46f7e0d2c14c9b802aec22d78b2a089e962a2f
- 09 Apr, 2021 2 commits
Ananth Subramaniam authored
Summary: `checkpoint_callback` now only accepts boolean values: https://github.com/PyTorchLightning/pytorch-lightning/blob/19e67d18c472c3a03dec4dd9bfcef031e9ca8719/pytorch_lightning/trainer/connectors/callback_connector.py#L65-L73 Reviewed By: shuyingsunshine21 Differential Revision: D27682178 fbshipit-source-id: 9e863aad7a23a76dee8ae5df9f5a78e7a94bfe8a
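The implied migration, sketched: pass `ModelCheckpoint` instances via `callbacks` and use `checkpoint_callback` only as a boolean:

```python
import pytorch_lightning as pl
from pytorch_lightning.callbacks import ModelCheckpoint

checkpoint = ModelCheckpoint(dirpath="checkpoints", save_last=True)

# Before (now rejected): pl.Trainer(checkpoint_callback=checkpoint)
trainer = pl.Trainer(checkpoint_callback=True, callbacks=[checkpoint])
```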
Ananth Subramaniam authored
Summary:
Before: this test assumed only 2 checkpoints were stored: `last.ckpt` and `FINAL_MODEL_CKPT`.
Now: this test asserts that at least these 2 checkpoints are stored. If the config specifies, for instance, `save_top_k=-1`, we'd save more checkpoints, causing the old test to fail. Since this test only loads the last and the final outputs, I'm changing the behavior to assert that these checkpoints must be saved, ignoring other checkpoint files that could be generated.

Reviewed By: kazhang Differential Revision: D27671284 fbshipit-source-id: 0419fb46856d048e7b6eba3ff1dc65b7280a9a90
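The relaxed assertion, sketched; the checkpoint file names are illustrative (the actual value of `FINAL_MODEL_CKPT` is internal):

```python
import os


def assert_required_checkpoints(output_dir, required=("last.ckpt", "model_final.ckpt")):
    # Require the checkpoints we load; ignore extras (e.g. from save_top_k=-1).
    found = set(os.listdir(output_dir))
    missing = set(required) - found
    assert not missing, f"missing checkpoints: {missing}"
```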
- 08 Apr, 2021 1 commit
Yanghan Wang authored
Summary: fbgs for `register_uri_image_loader`, `UniversalResourceLoader`, and `_IMAGE_LOADER_REGISTRY` returns no results other than this file. Reviewed By: newstzpz Differential Revision: D27639902 fbshipit-source-id: 52e3bb77dbb547334938b8537d6e1c173405d12d
- 06 Apr, 2021 2 commits
Hang Zhang authored
Summary: We need to load the [config file](https://github.com/facebookresearch/d2go/blob/master/tests/misc/test_configs.py#L35) during the unittest. `get_resource_path` needs to use `d2go.tests` https://fburl.com/whn41ma0. The test fails at https://github.com/facebookresearch/d2go/runs/2272258836. Pull Request resolved: https://github.com/facebookresearch/d2go/pull/35 Reviewed By: wat3rBro Differential Revision: D27578008 Pulled By: zhanghang1989 fbshipit-source-id: 5fa24c7f74bc3e59ffee98d57f02a2a558c2a4b0
Hang Zhang authored
Summary: TorchVision recently upgraded to version 0.10.0, which breaks the version check in detr. Reviewed By: wat3rBro Differential Revision: D27575085 fbshipit-source-id: 75f459fe7a711161e908609fcf2f2d28a01a6c74
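The failure mode and a robust alternative, sketched (the detr check being patched is upstream code; this only illustrates the class of bug):

```python
import torchvision
from packaging import version

# Fragile: "0.10.0"[:3] == "0.1", and float("0.1") < 0.7, so torchvision
# 0.10.0 is misread as older than 0.7.
fragile_ok = float(torchvision.__version__[:3]) >= 0.7

# Robust: compare parsed versions, not string prefixes.
robust_ok = version.parse(torchvision.__version__) >= version.parse("0.7")
```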
- 05 Apr, 2021 2 commits
Owen Wang authored
Summary: The prediction count evaluator needs to gather its state before computing metrics; otherwise, when parallelized across N GPUs, we only get metrics computed from 1/N of the dataset, increasing the variance of our eval signal. Reviewed By: wat3rBro Differential Revision: D27416864 fbshipit-source-id: b2c5334cd5a38bebcd06c6ace1627a6b71645fdd
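A sketch of the fix pattern: gather per-rank state with detectron2's `comm` utilities before computing metrics; the counter field and metric name are illustrative:

```python
import itertools

from detectron2.utils import comm


class PredictionCountEvaluator:
    def __init__(self):
        self.counts = []  # per-image prediction counts on this rank

    def process(self, inputs, outputs):
        self.counts.extend(len(o["instances"]) for o in outputs)

    def evaluate(self):
        # Gather state from all ranks; only the main process computes metrics.
        all_counts = comm.gather(self.counts, dst=0)
        if not comm.is_main_process():
            return {}
        merged = list(itertools.chain.from_iterable(all_counts))
        return {"avg_predictions_per_image": sum(merged) / max(len(merged), 1)}
```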
Sam Tsai authored
Summary: Add typing to transform. Reviewed By: wat3rBro Differential Revision: D27145140 fbshipit-source-id: 8556427b421bf91a05692a590db175c68c4d6890
- 03 Apr, 2021 2 commits
Peizhao Zhang authored
Summary: Make data and evaluation visualization optional (they may now return None). Reviewed By: zhanghang1989, wat3rBro Differential Revision: D27316632 fbshipit-source-id: 2a85db4815cbf3407a20a74c125dcd52d75167fa
Peizhao Zhang authored
Summary: Format changes. * [Option] + [Shift] + [F] Reviewed By: mattcyu1, zhanghang1989, wat3rBro Differential Revision: D27316555 fbshipit-source-id: 0fc3396eb34d964478cb3551dc73b47412089ccb
- 02 Apr, 2021 1 commit
Yanghan Wang authored
Summary:
#Facebook: `build_d2go_train_loader` will replace `runner.build_detection_train_loader`. Currently we call `build_d2go_train_loader` from `runner.build_detection_train_loader` since some runners have their own implementation; we will solve those cases and then remove the `runner.build_detection_train_loader` API. Currently `build_d2go_train_loader` uses `_MAPPED_TRAIN_LOADER_BUILDER_REGISTRY` to support different versions between OSS and FB. Not sure if this is a good pattern or not; please comment on the diff if you have a better idea.

Reviewed By: zhanghang1989 Differential Revision: D27505681 fbshipit-source-id: b5caf7280a88c2ebccb498097c0b7af51c966fc6
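A minimal sketch of the registry-based dispatch described here, assuming detectron2's `Registry`; the builder name and the way a variant is selected are illustrative:

```python
from detectron2.utils.registry import Registry

MAPPED_TRAIN_LOADER_BUILDER_REGISTRY = Registry("MAPPED_TRAIN_LOADER_BUILDER")


@MAPPED_TRAIN_LOADER_BUILDER_REGISTRY.register()
def default_train_loader(cfg, mapper):
    from detectron2.data import build_detection_train_loader

    return build_detection_train_loader(cfg, mapper=mapper)


def build_d2go_train_loader(cfg, mapper=None, variant="default_train_loader"):
    # OSS and internal builds register different builders; dispatch by name.
    return MAPPED_TRAIN_LOADER_BUILDER_REGISTRY.get(variant)(cfg, mapper)
```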
- 31 Mar, 2021 3 commits
Kai Zhang authored
Reviewed By: newstzpz Differential Revision: D27255960 fbshipit-source-id: 1699ff23d2bc610dffc0215a90a7c1c17e3783c3
Sam Tsai authored
Summary: Fix a unit test that was not listed due to a rebase error. Reviewed By: newstzpz, wat3rBro Differential Revision: D27456322 fbshipit-source-id: 519c5c086adfb19104ed99234f4f476eb34a79bc
Tao Xu authored
Summary: Train a pix2pix model on the paired dataset. During inference, it can transfer a source image to the target image. Reviewed By: newstzpz Differential Revision: D27371290 fbshipit-source-id: 3141bc6d9e4fe0013f6ea3de3cf998163d286168