Commit a95c7983 authored by Ananth Subramaniam's avatar Ananth Subramaniam Committed by Facebook GitHub Bot

Synchronize PyTorchLightning/pytorch-lightning (revision 7fe8d184@master) to github/third-party/PyTorchLightning/pytorch-lightning

Summary:
### New commit log messages
  7fe8d184 Do not `shuffle` in `LightningDataModule.from_datasets` for `IterableDataset` (#7053)
  bab72255 [fix] Add barriers before and after setup hook is run (#7202)
  f920ba29 [bugfix] Metric not logged properly in manual optimization (#7228)
  e147127c [feat] Add better support for predict + ddp 2/3 (#7215)
  ca6c87ff Add back `clip_gradients(model)` (#7231)
  3b36d81c Fixed `num_sanity_val_steps` affecting reproducibility of training data shuffling (#7014)
  5cf9afa1 Add fairscale install msg for Sharded Plugins (#7213)
  52a5cee0 Set smarter default for DDP sharded for performance optimization (#6937)
  dd5ec75e Deprecate save_function from model checkpoint callback (#7201)
  ac7d6a35 Fix `NeptuneLogger.log_text(step=None)` (#7194)
  6be0a859 Update teardown for TPU acc (#7211)
  bc3f08b0 [fix] Add barrier to accelerator's teardown (#6814)
  68eac4d9 Enforce Lightning module as source of truth for automatic optimization (#7130)
  44d775fc Update Error message for ProfileConnector (#7204)
  31fcd7d0 Deprecate write_predictions on the LightningModule (#7066)
  591b9cee make bug_report_model minimal (#7191)
  b3fe8366 Move metrics_to_scalars to a dedicated utilities file (#7180)
  f58865aa Properly set `LightningModule.device` after model replacement (#7188)
  8439aead Update FairScale on CI (#7017)
  92af3632 Fix `lr_finder` suggesting too high learning rates (#7076)
  d534e53e add missing predict docs (#7150)

Reviewed By: kazhang

Differential Revision: D28032962

fbshipit-source-id: 18cd01e8ecc13fe25f0890ac0f4b20c3c3e1fed3
parent c04ef895
@@ -484,6 +484,7 @@ class TestPostTrainingQuantization(unittest.TestCase):
             callbacks=[static_quantization],
             max_epochs=num_epochs,
             logger=False,
+            num_sanity_val_steps=0,
         )
         trainer.fit(model)
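The `num_sanity_val_steps=0` addition in the diff relates to commit 3b36d81c above: sanity validation batches that run before training can consume global RNG state and thereby change how the training data is shuffled, breaking reproducibility across configurations. A minimal sketch of that effect, using Python's `random` module as a stand-in (the helper name `shuffled_indices` is hypothetical, not a Lightning API):

```python
import random

def shuffled_indices(n, extra_draws_before=0):
    """Return a shuffled index order from a fixed seed.

    extra_draws_before stands in for sanity-check batches that
    consume RNG state before the training shuffle happens.
    """
    rng = random.Random(42)           # fixed seed, like seed_everything(42)
    for _ in range(extra_draws_before):
        rng.random()                  # RNG consumed before training starts
    idx = list(range(n))
    rng.shuffle(idx)                  # stand-in for DataLoader(shuffle=True)
    return idx

# Same seed, but the extra pre-training draws change the shuffle order,
# so the "same" run sees training data in a different order.
baseline = shuffled_indices(10)
with_sanity = shuffled_indices(10, extra_draws_before=2)
print(baseline != with_sanity)
```

Pinning `num_sanity_val_steps=0` in the test sidesteps the question entirely by running no sanity batches before `fit`.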