Commit f42221da authored by Peizhao Zhang, committed by Facebook GitHub Bot

Added model fusing for fp32 models for export.

Summary:
Added model fusing for fp32 models for export.
* We should fuse the fp32 model as well.

Reviewed By: wat3rBro

Differential Revision: D26785487

fbshipit-source-id: 6c14f746fd9eeb307b8ae465edbd4ef1335c9dd1
parent 00e8a4f0
@@ -110,6 +110,11 @@ def convert_and_export_predictor(
         pytorch_model = torch.quantization.quantize_fx.convert_fx(pytorch_model)
         logger.info("Quantized Model:\n{}".format(pytorch_model))
+    else:
+        pytorch_model = fuse_utils.fuse_model(pytorch_model)
+        logger.info("Fused Model:\n{}".format(pytorch_model))
+        if fuse_utils.count_bn_exist(pytorch_model) > 0:
+            logger.warning("BN existed in pytorch model after fusing.")
     return export_predictor(cfg, pytorch_model, predictor_type, output_dir, data_loader)
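For context, the new else branch does for fp32 export roughly what the sketch below shows: fold BatchNorm statistics into the preceding conv weights before export, then check that no BN modules remain. This is a minimal illustrative sketch, not the d2go implementation; TinyBlock and the count_bn helper are assumptions standing in for fuse_utils.fuse_model and fuse_utils.count_bn_exist, whose internals may differ.

# Minimal sketch of fp32 Conv+BN(+ReLU) fusing before export.
# TinyBlock and count_bn are illustrative assumptions, not d2go code.
import torch
import torch.nn as nn


class TinyBlock(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv = nn.Conv2d(3, 8, 3, padding=1)
        self.bn = nn.BatchNorm2d(8)
        self.relu = nn.ReLU()

    def forward(self, x):
        return self.relu(self.bn(self.conv(x)))


def count_bn(model: nn.Module) -> int:
    # Stand-in for fuse_utils.count_bn_exist: count BatchNorm modules
    # that survived fusing.
    return sum(
        isinstance(m, (nn.BatchNorm1d, nn.BatchNorm2d, nn.BatchNorm3d))
        for m in model.modules()
    )


model = TinyBlock().eval()  # fusing requires eval mode
fused = torch.quantization.fuse_modules(model, [["conv", "bn", "relu"]])

# BN statistics are folded into the conv weights, so outputs stay
# numerically the same while the BN module becomes an Identity.
x = torch.randn(1, 3, 16, 16)
assert torch.allclose(model(x), fused(x), atol=1e-5)
assert count_bn(fused) == 0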