Commit 670b4c4a authored by Luis Perez, committed by Facebook GitHub Bot

Synchronize PyTorchLightning/pytorch-lightning (revision f7459f53@master) to github/third-party/PyTorchLightning/pytorch-lightning

Summary:
## OSS
Note: these issues are being addressed in OSS here: https://github.com/PyTorchLightning/pytorch-lightning/pull/7994/files#

## Manual
- `speed_monitor.py` - `Result.unpack_batch_size` has been removed upstream; switched over to the new implementation.
- `fully_sharded.py` - The plugins were refactored upstream, so the corresponding function was updated to keep the reduced memory usage.
- `hive_writing_classy.py`, `hive_writing_faim.py`, `hive_writing_xrayvideo.py` - Same change as `speed_monitor.py`.
- [Temporary] Uncommented the misconfiguration exception. See https://github.com/PyTorchLightning/pytorch-lightning/pull/7882#pullrequestreview-683282719.
- Updated `TestModel` to detach its outputs appropriately.
- Manually `detach` metrics stored in the ResultStore (see the sketch following this list).
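
A minimal sketch of the detach step referenced above, assuming the stored metrics form an arbitrarily nested dict/list of tensors; `apply_to_collection` is PyTorch Lightning's collection-mapping utility, while the `detach_stored_metrics` helper and the `metrics` variable are hypothetical names, not part of this commit:

```python
import torch
from pytorch_lightning.utilities.apply_func import apply_to_collection


def detach_stored_metrics(stored_metrics):
    """Detach every tensor in a (possibly nested) metrics collection.

    Tensors kept around for logging must not retain their autograd graphs,
    otherwise memory grows with every step.
    """
    return apply_to_collection(stored_metrics, torch.Tensor, torch.Tensor.detach)


# Hypothetical usage: detach before stashing results for later aggregation.
metrics = {"loss": torch.tensor(1.0, requires_grad=True), "acc": torch.tensor(0.9)}
metrics = detach_stored_metrics(metrics)
assert all(not t.requires_grad for t in metrics.values())
```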

## Automatic
### New commit log messages
  f7459f53 DeepSpeed Infinity Update (#7234)
  03e7bdf8 Improve `LightningModule` hook tests (#7944)
  3a0ed02b Properly handle parent modules w/ parameters in `BaseFinetuning` callback (#7931)
  ce93d8bc Handle errors due to uninitailized parameters (#7642)
  cca0e753 remove parsing comments (#7958)
  898fb56b added on_test_start() documentation (#7962)
  22d82661 Seed all workers when using DDP (#7942)
  436fc53c Improve `LightningDataModule` hook test and fix `dataloader_idx` argument (#7941)
  6b7b4047 deprecate hpc_load() and integrate it with restore() (#7955)
  20a5e09e fix myst-parser warning blocking docs ci (#7967)
  f15ea601 update chlog + legacy chpt (#7954)
  59d0c656 Add dataclass support to `apply_to_collection` (#7935)
  cdd01f32 LightningCLI support for argument links applied on instantiation (#7895)
  6856cced Remove rank_zero_only on DataModule prepare_data (#7945)
  96433d03 IPU Integration 5/5 (#7867)
  42c7f272 refactor checkpoint loading for training type plugins (#7928)
  ac4eb0a0 `is_overridden` improvements (#7918)
  9e932f4d Delete `on_after_backward` unused argument (#7925)
  8b738693 Deprecate the default `EarlyStopping` callback monitor value (#7907)
  c1eac483 split `restore_training_state` into logical parts [2 / 2] (#7900)
  d209b689 split `restore_training_state` into logical parts [1 / 2] (#7901)
  111287b4 add pre-commit hooks (#7906)
  839019a3 Remove legacy teardown check in train loop (#7917)
  b45a89a2 Clean-up after logger connector redesign 2/2 (#7631)
  07b69231 Remove fn check for ipu output (#7915)
  580a3b5e Remove dead code (#7910)
  df812398 Clean-up after logger connector redesign 1/2 (#7909)
  ec4f8856 Enable logger connector re-design (#7891)
  15be9865 add logger to __all__ (#6854)
  6fee9262 Deprecate `LightningDataModule` lifecycle properties (#7657)
  764d2c77 refactor CheckpointConnector.restore_weights  (#7862)
  7f4ef6d1 Fix logs overwriting issue for remote fs (#7889)
  c310ce66 Logger connector re-design `_Metadata.reduce_fx` fixes. (#7890)
  b214442e New logger connector code (#7882)

Reviewed By: yifuwang

Differential Revision: D29105294

fbshipit-source-id: 990b2a4a7333908d676de193f5ec930cb50b8a19
parent 14b25e8d
```diff
@@ -45,17 +45,17 @@ class TestModule(LightningModule):
     def training_step(self, batch, batch_idx):
         output = self.forward(batch)
         loss = self.loss(batch, output)
-        return {"output": output, "loss": loss, "checkpoint_on": loss}
+        return {"output": output.detach(), "loss": loss, "checkpoint_on": loss.detach()}

     def validation_step(self, batch, batch_idx):
         output = self.forward(batch)
         loss = self.loss(batch, output)
-        return {"output": output, "loss": loss, "checkpoint_on": loss}
+        return {"output": output.detach(), "loss": loss, "checkpoint_on": loss.detach()}

     def test_step(self, batch, batch_idx):
         output = self.forward(batch)
         loss = self.loss(batch, output)
-        return {"output": output, "loss": loss}
+        return {"output": output.detach(), "loss": loss}

     def training_epoch_end(self, outputs) -> None:
         avg_loss = torch.stack([x["loss"] for x in outputs]).mean()
```
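
For context on why the diff above detaches the stored values (a standalone illustration, not part of the commit): a tensor that still has a `grad_fn` keeps its whole autograd graph alive, so stashing un-detached step outputs grows memory over the epoch; `loss` itself is returned attached because the trainer still needs it for backpropagation.

```python
import torch

# Standalone illustration (not from the commit): a stored attached tensor keeps
# its autograd graph alive; .detach() returns a view with no graph attached.
w = torch.randn(3, requires_grad=True)
loss = (w * 2).sum()

kept_attached = loss            # grad_fn retained -> graph stays in memory
kept_detached = loss.detach()   # no grad_fn -> graph can be freed after backward()

assert kept_attached.grad_fn is not None
assert kept_detached.grad_fn is None
```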