Commit adf223bd authored by Tao Xu, committed by Facebook GitHub Bot

Workaround: observer shapes seem to be important for the EMA QAT model to have correct min_val/max_val

Summary:
Before this fix, the EMA GAN model would have inf min_val/max_val when converting the QAT model to an int8 TorchScript model (as shown in f290518237).

https://pxl.cl/1MNx0
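For context: EMA-style observers initialize min_val/max_val to +/-inf and only record finite values once they have seen data, so converting before any observed forward pass leaves them at inf. A minimal standalone sketch of that behavior (plain PyTorch, not the D2Go code; assumes a recent PyTorch where observer buffers start at inf):

```python
import torch
from torch.quantization import MovingAverageMinMaxObserver

obs = MovingAverageMinMaxObserver()
print(obs.min_val, obs.max_val)   # inf, -inf: no data observed yet

obs(torch.randn(4, 8))            # observers record stats in forward()
print(obs.min_val, obs.max_val)   # finite after a single pass
print(obs.calculate_qparams())    # scale/zero_point now computable
```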

Reviewed By: yc-fb

Differential Revision: D23387923

fbshipit-source-id: 5c2119e2c5170e30c6059e7374c22e367fcd2b26
parent fd79c680
@@ -255,6 +255,14 @@ class Detectron2GoRunner(BaseRunner):
         if cfg.QUANTIZATION.QAT.ENABLED:
             # Disable fake_quant and observer so that the model will be trained normally
             # before QAT being turned on (controlled by QUANTIZATION.QAT.START_ITER).
-            model = setup_qat_model(
-                cfg, model, enable_fake_quant=eval_only, enable_observer=False
-            )
+            if hasattr(model, "get_rand_input"):
+                model = setup_qat_model(
+                    cfg, model, enable_fake_quant=eval_only, enable_observer=True
+                )
+                imsize = cfg.INPUT.MAX_SIZE_TRAIN
+                rand_input = model.get_rand_input(imsize)
+                model(rand_input, {})
+            else:
+                model = setup_qat_model(
+                    cfg, model, enable_fake_quant=eval_only, enable_observer=False
+                )
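For reference, the same warm-up pattern in a self-contained form using stock torch.quantization APIs. The toy model and wiring here are illustrative assumptions; setup_qat_model in the diff wraps similar steps inside D2Go:

```python
import torch
import torch.nn as nn
from torch import quantization as tq

# Toy QAT setup: prepare the model, then keep fake-quant disabled (train
# with normal numerics) while observers stay enabled to collect ranges.
model = nn.Sequential(nn.Conv2d(3, 8, 3), nn.ReLU()).train()
model.qconfig = tq.get_default_qat_qconfig("fbgemm")
tq.prepare_qat(model, inplace=True)

model.apply(tq.disable_fake_quant)   # training numerics unchanged for now
model.apply(tq.enable_observer)      # but observers still record statistics

# Warm-up forward pass: observers now hold finite min_val/max_val,
# mirroring the model(rand_input, {}) call added in the diff above.
model(torch.randn(1, 3, 32, 32))
```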