Commit 1894f8a3 authored by Kai Zhang's avatar Kai Zhang Committed by Facebook GitHub Bot

Fix quantization test failure

Summary:
# Context
In the post-training quantization callback, we make a deepcopy of the Lightning module before validation starts and prepare the copy with the FX quantization API. The callback keeps the prepared model inside itself.

# The problem
The second time we run the validation epoch, we try to make a copy of the Lightning module, which has a reference to the trainer, which has a reference to the quantization callback, which holds the prepared model, which is not deep-copiable.
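The failure mode can be reproduced with a minimal sketch (all class names below are hypothetical stand-ins, not the real Lightning or D2Go classes): `deepcopy` walks the whole reference chain from the module, so a single non-copyable object anywhere in that chain breaks the copy.

```python
from copy import deepcopy

class PreparedModel:
    """Stand-in for the FX-prepared model, which cannot be deep-copied."""
    def __deepcopy__(self, memo):
        raise RuntimeError("prepared model is not deepcopiable")

class Callback:
    """Stand-in for the quantization callback holding the prepared model."""
    def __init__(self):
        self.prepared_model = PreparedModel()

class Trainer:
    """Stand-in for the trainer, which holds its callbacks."""
    def __init__(self):
        self.callbacks = [Callback()]

class Module:
    """Stand-in for the Lightning module, which references the trainer."""
    def __init__(self, trainer):
        self.trainer = trainer

module = Module(Trainer())
try:
    # deepcopy recurses: module -> trainer -> callback -> prepared model
    deepcopy(module)
except RuntimeError as e:
    print(e)  # the copy fails on the prepared model
```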

# Mitigation
Delete the trainer reference before making the deepcopy.
We already do this in stl/callbacks/quantization, but the change was not ported into D2Go (https://github.com/facebookresearch/d2go/commit/4169abc18ec539a24081b179fcbbc5a5754d102b).
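End to end, the detach-copy-restore pattern looks like the sketch below (again with hypothetical stand-in classes; `safe_deepcopy` is an illustrative name, not the real helper). The `finally` block guarantees the original module gets its trainer back even if the copy raises.

```python
from copy import deepcopy

class NonCopyable:
    """Stand-in for the FX-prepared model that breaks deepcopy."""
    def __deepcopy__(self, memo):
        raise RuntimeError("not deepcopiable")

class FakeTrainer:
    def __init__(self):
        self.prepared_model = NonCopyable()

class FakeModule:
    def __init__(self, trainer):
        self.trainer = trainer
        self.weights = [1.0, 2.0]

def safe_deepcopy(module):
    # Detach the trainer so deepcopy never reaches the prepared model,
    # then restore the reference even if the copy fails.
    trainer = module.trainer
    try:
        module.trainer = None
        return deepcopy(module)
    finally:
        module.trainer = trainer

module = FakeModule(FakeTrainer())
copy_ = safe_deepcopy(module)      # succeeds: trainer was detached
assert module.trainer is not None  # original reference restored
assert copy_.trainer is None       # the copy carries no trainer
```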

Reviewed By: zhanghang1989

Differential Revision: D29409085

fbshipit-source-id: 24550124181673b2e567b2a04563bcdfb440e145
parent 4169abc1
@@ -54,18 +54,14 @@ def rhasattr(obj: Any, attr: str, *args) -> bool:
 def _deepcopy(pl_module: LightningModule) -> LightningModule:
-    """Copy a LightningModule. Some properties need to be ignored. """
-    # Remove _result before call to deepcopy since it store non-leaf Tensors.
-    # If not removed, you'll see this error on deepcopy() attempts: P150283141.
-    if hasattr(pl_module, "_results"):
-        result = pl_module._results
-        delattr(pl_module, "_results")
-        copy = deepcopy(pl_module)
-        # Set back.
-        pl_module._results = result
-    else:
-        copy = deepcopy(pl_module)
+    """Copy a LightningModule. Some properties need to be ignored."""
+    # Remove trainer reference.
+    trainer = pl_module.trainer
+    try:
+        pl_module.trainer = None
+        copy = deepcopy(pl_module)
+    finally:
+        pl_module.trainer = trainer
     return copy