Commit 490789a7 authored by Yanghan Wang, committed by Facebook GitHub Bot

apply fuse_utils.swap_modules for FX PTQ

Summary:
Pull Request resolved: https://github.com/facebookresearch/d2go/pull/333

Follow D36916149.

Reviewed By: jerryzh168

Differential Revision: D37830568

fbshipit-source-id: dbeb204ccf96dd2e90a6509f24a2864503083f60
parent 6fc8f066
@@ -306,6 +306,13 @@ def convert_to_fake_quant_model(cfg, model, is_qat, example_input=None):
         torch.ao.quantization.prepare(model, inplace=True)
     else:
+        # FX graph mode requires the model to be symbolically traceable, swap common
+        # modules like SyncBN to FX-friendly version.
+        if not is_qat:
+            # NOTE: we only do this for PTQ, because we want to keep using unmodified
+            # model during QAT.
+            model = fuse_utils.swap_modules(model)
         if hasattr(model, "custom_prepare_fx"):
             model = model.custom_prepare_fx(cfg, is_qat, example_input)
         # TODO: remove this branch after completely separating the eager and FX APIs
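For context, the kind of swap `fuse_utils.swap_modules` performs here can be illustrated with a minimal sketch. The helper below is a hypothetical stand-in (not the actual d2go implementation): it recursively replaces `nn.SyncBatchNorm`, which is awkward for FX symbolic tracing, with an equivalent `nn.BatchNorm2d`, carrying over the learned parameters and running statistics so PTQ calibration sees the same numerics.

```python
import torch.nn as nn


def swap_syncbn_to_bn(module: nn.Module) -> nn.Module:
    """Hypothetical sketch: recursively replace SyncBatchNorm with an
    equivalent BatchNorm2d so the model becomes FX-traceable."""
    for name, child in module.named_children():
        if isinstance(child, nn.SyncBatchNorm):
            bn = nn.BatchNorm2d(
                child.num_features,
                eps=child.eps,
                momentum=child.momentum,
                affine=child.affine,
                track_running_stats=child.track_running_stats,
            )
            # Copy learned affine parameters and running statistics.
            if child.affine:
                bn.weight = child.weight
                bn.bias = child.bias
            if child.track_running_stats:
                bn.running_mean = child.running_mean
                bn.running_var = child.running_var
                bn.num_batches_tracked = child.num_batches_tracked
            setattr(module, name, bn)
        else:
            swap_syncbn_to_bn(child)
    return module
```

Swapping only for PTQ (the `if not is_qat:` guard in the diff) keeps the original modules, including SyncBN's cross-GPU statistics, in place during QAT training runs.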