support force exporting gpu model for rcnn meta_arch
Summary: Pull Request resolved: https://github.com/facebookresearch/d2go/pull/191

When exporting a model to torchscript with `MODEL.DEVICE = "cpu"`, the mean/std tensors are baked in as constants instead of model parameters. After casting the torchscript module to CUDA, the mean/std therefore remain on CPU, which breaks inference on GPU.

The fix is to export the model with `MODEL.DEVICE = "cuda"`. However, D2Go (https://github.com/facebookresearch/d2go/commit/87374efb134e539090e0b5c476809dc35bf6aedb) internally uses "cpu" during export by default (via CLI: https://fburl.com/code/4mpk153i, via workflow: https://fburl.com/code/zcj5ud4u). For the CLI the user can manually set `--device`, but for the workflow that is hard to do, and a single `--device` option also cannot support mixed models. This diff therefore adds special handling in RCNN's `default_prepare_for_export` to bypass the `--device` option.

Reviewed By: zhanghang1989

Differential Revision: D35097613

fbshipit-source-id: df9f44f49af1f0fd4baf3d7ccae6c31e341f3ef6
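The device-mismatch issue above can be illustrated with a minimal sketch (a toy module, not the actual RCNN code): tensors registered as buffers follow `module.to(...)`, while plain tensor attributes behave like baked-in constants and stay where they were created. A dtype cast is used here as a stand-in for `.to("cuda")` so the sketch runs without a GPU; the names `Normalizer`, `pixel_mean`, and `pixel_std` are illustrative assumptions.

```python
import torch
from torch import nn


class Normalizer(nn.Module):
    def __init__(self):
        super().__init__()
        # Registered buffer: tracked by the module, follows .to(device/dtype).
        self.register_buffer("pixel_mean", torch.tensor([103.5, 116.3, 123.7]))
        # Plain tensor attribute: acts like a constant, does NOT follow .to().
        self.pixel_std = torch.tensor([57.4, 57.1, 58.4])

    def forward(self, x):
        return (x - self.pixel_mean) / self.pixel_std


m = Normalizer()
m.to(torch.float64)  # stand-in for m.to("cuda")
print(m.pixel_mean.dtype)  # torch.float64 -- the buffer was converted
print(m.pixel_std.dtype)   # torch.float32 -- the plain attribute was not
```

This is why exporting with `MODEL.DEVICE = "cuda"` (so the constants are created on the right device in the first place) avoids the mismatch, whereas casting a CPU-exported torchscript module afterwards does not.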