- 06 Jul, 2022 1 commit
-
-
Mircea Cimpoi authored
Summary: Pull Request resolved: https://github.com/facebookresearch/d2go/pull/324 Fixes a bug introduced in D37600026 (https://github.com/facebookresearch/d2go/commit/1a18ba3420e3c823accf731a11c0a91dc3babd85) -- the imports were not fixed after moving the modeling hook to registry/builtin.py Differential Revision: D37646330 fbshipit-source-id: cb763d65e7bbfd07eea6eff61727a42a6fcfbc88
-
- 02 Jul, 2022 1 commit
-
-
Jerry Zhang authored
Summary: Pull Request resolved: https://github.com/facebookresearch/d2go/pull/321 Following up on the BC-breaking change from FX graph mode quantization (https://github.com/pytorch/pytorch/pull/76496), which added example_inputs to prepare_fx and prepare_qat_fx, this diff fixes the related mobile-vision callsites and exposes an extra example_inputs argument in some APIs. Reviewed By: wat3rBro Differential Revision: D37163018 fbshipit-source-id: 9f0bb56659345d174a39b6d3bb4408caa553b88d
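A minimal sketch of the updated callsite pattern, assuming the PyTorch FX quantization APIs as of this change (the toy model and input shapes are illustrative, not the actual d2go callsite):

```python
import torch
from torch.ao.quantization import get_default_qconfig
from torch.ao.quantization.quantize_fx import prepare_fx

# toy float model standing in for the real one
model = torch.nn.Sequential(torch.nn.Conv2d(3, 8, 3), torch.nn.ReLU()).eval()
qconfig_dict = {"": get_default_qconfig("fbgemm")}

# prepare_fx now requires example inputs (positional args only, no kwargs)
example_inputs = (torch.randn(1, 3, 224, 224),)
prepared = prepare_fx(model, qconfig_dict, example_inputs=example_inputs)
```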
-
- 29 Jun, 2022 1 commit
-
-
Yanghan Wang authored
Summary: Pull Request resolved: https://github.com/facebookresearch/d2go/pull/318 Reviewed By: mcimpoi Differential Revision: D37501246 fbshipit-source-id: 6dbe5dcbaf7454f451d4a3bb3fa2d856cc87d5cc
-
- 24 Jun, 2022 1 commit
-
-
Yanghan Wang authored
Summary: Pull Request resolved: https://github.com/facebookresearch/d2go/pull/312 As discussed, we decided not to use a runner instance outside of `main`; previous diffs already solved the prerequisites, so this diff mainly does the renaming. - Use the runner name (str) in the fblearner ML pipeline. - Use the runner name (str) in the FBL operator, MAST, and the binary operator. - Use the runner class as the interface of `main`; it can be either the name of the class (str) or the actual class. The main usage should be `str`, so that the importing of the class happens inside `main`. But it's also a common use case to import the runner class and call `main` for things like ad-hoc scripts or tests; supporting the actual class makes it easier to modify code for those cases (e.g. some local test classes don't have a name, so using the runner name isn't feasible). Reviewed By: newstzpz Differential Revision: D37060338 fbshipit-source-id: 879852d41902b87d6db6cb9d7b3e8dc55dc4b976
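An illustrative sketch of the str-or-class interface described above (the helper name `resolve_runner` is an assumption, not the actual d2go code):

```python
import importlib

def resolve_runner(runner_class):
    """Accept either a dotted runner name (preferred) or an actual class."""
    if isinstance(runner_class, str):
        # the import happens here, inside `main`, not at the callsite
        module_name, _, class_name = runner_class.rpartition(".")
        runner_class = getattr(importlib.import_module(module_name), class_name)
    return runner_class()

# main usage: pass the name so importing is deferred into `main`
runner = resolve_runner("d2go.runner.GeneralizedRCNNRunner")

# ad-hoc scripts/tests may instead pass the class itself,
# e.g. a locally defined test class that has no importable name
```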
-
- 21 Jun, 2022 1 commit
-
-
Andrew Or authored
Summary: Pull Request resolved: https://github.com/facebookresearch/d2go/pull/308 D37088095 made BC-breaking changes in PyTorch, but missed fixing a few callsites. This is the second diff to fix the affected tests. Reviewed By: jerryzh168, wat3rBro Differential Revision: D37282853 fbshipit-source-id: 945c3628f53cefd2fbc2526630d1567a2b1cee25
-
- 20 Jun, 2022 1 commit
-
-
Yanghan Wang authored
Summary: Pull Request resolved: https://github.com/facebookresearch/d2go/pull/305 One benefit of having separate registries for D2's and D2Go's meta-archs is that there's no need to patch the original D2 meta-arch, because we can just register a new meta-arch in D2Go directly. This diff removes `patch_d2_meta_arch` and makes things simpler. Reviewed By: mcimpoi Differential Revision: D37246483 fbshipit-source-id: c8b7adef1fa7a5ff2f89c376c7e3b39bec8f19ee
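The registration pattern this enables, sketched against detectron2's registry (the D2Go-side registry module isn't named in the summary, so detectron2's `META_ARCH_REGISTRY` stands in for it here):

```python
import torch.nn as nn
from detectron2.modeling import META_ARCH_REGISTRY

@META_ARCH_REGISTRY.register()
class ToyMetaArch(nn.Module):
    """Selected via cfg.MODEL.META_ARCHITECTURE = "ToyMetaArch"."""

    def __init__(self, cfg):
        super().__init__()
        self.conv = nn.Conv2d(3, 8, kernel_size=3)

    def forward(self, batched_inputs):
        # real meta-archs consume detectron2-style batched inputs;
        # this toy one just runs a conv
        return self.conv(batched_inputs)
```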
-
- 17 Jun, 2022 1 commit
-
-
Yanghan Wang authored
Summary: Pull Request resolved: https://github.com/facebookresearch/d2go/pull/298 Reviewed By: tglik, newstzpz Differential Revision: D37152248 fbshipit-source-id: 58a6899c5f6465f36961a2ebf60a64f20509cec2
-
- 16 Jun, 2022 1 commit
-
-
Matthew Yu authored
Summary: Pull Request resolved: https://github.com/facebookresearch/d2go/pull/299 This implements the first iteration of generalized distillation in D2Go. The functionality is separated into the following:

=> Adding distillation functionality without the user changing their meta architecture: class DistillationModelingHook
* This is an implementation detail that we hide from the user.
* We use a dynamic mixin to add functionality to the user's model. This way the original (student) model retains all attributes, but the mixin class overrides the forward (and provides more functionality, like teacher updates). A sketch of this mixin idea follows below.
* The teacher build currently only supports loading a torchscript model; PyTorch compatibility comes in later diffs.

=> Implementing distillation methods: class DistillationAlgorithm
* The user can use a default algorithm (e.g., LabelDistillation) or create their own. This is where we specify the overridden forward func of the model and any other distillation requirements (updating the weights of the teacher model).
* The basic LabelDistillation allows a user to use a teacher model during training to relabel the ground truth.

=> User customization: class DistillationHelper
* This is what we expect the user to customize. As an example, the user probably needs to write their own pseudo_labeler to take batched_inputs and relabel them with the teacher.

Both DistillationHelper and DistillationAlgorithm use a registry, so users can add their customizations in their own code and enable them by specifying them in the config. Reviewed By: newstzpz Differential Revision: D36708227 fbshipit-source-id: bc427c5d42d0c7ff4d839bf10782efac24dea107
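A minimal sketch of the dynamic-mixin idea, under the assumption that the hook swaps the student's class at runtime (names like `apply_distillation_hook` and `_pseudo_labeler` are illustrative, not the actual d2go implementation):

```python
import torch
import torch.nn as nn

class DistillationMixin:
    def forward(self, batched_inputs):
        if self.training:
            # relabel the ground truth with the (frozen) teacher's predictions
            with torch.no_grad():
                batched_inputs = self._pseudo_labeler(batched_inputs)
        return super().forward(batched_inputs)

def apply_distillation_hook(model: nn.Module, pseudo_labeler) -> nn.Module:
    model._pseudo_labeler = pseudo_labeler
    # dynamic mixin: prepend the mixin to the instance's class hierarchy so the
    # original (student) model keeps all its attributes but gets a new forward
    model.__class__ = type(
        "Distillation" + type(model).__name__,
        (DistillationMixin, type(model)),
        {},
    )
    return model
```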
-
- 10 Jun, 2022 1 commit
-
-
John Reese authored
Summary: pyfmt now specifies a target Python version of 3.8 when formatting with black. With this new config, black adds trailing commas to all multiline function calls. This applies the new formatting as part of rolling out the linttool-integration for pyfmt. paintitblack Reviewed By: zertosh, lisroach Differential Revision: D37084377 fbshipit-source-id: 781a1b883a381a172e54d6e447137657977876b4
-
- 09 Jun, 2022 1 commit
-
-
Yanghan Wang authored
Summary: Pull Request resolved: https://github.com/facebookresearch/d2go/pull/274 X-link: https://github.com/facebookresearch/mobile-vision/pull/76 TLDR: this diff consolidates the `distributed_helper` of `mobile_cv`; together with `mobile_cv`'s `comm` module, it should be the go-to library for dealing with DDP. D2Go's `distributed` is now built on top of `mobile_cv`'s `distributed_helper`. Reviewed By: newstzpz Differential Revision: D36787336 fbshipit-source-id: 640c9dcff5eec534e7894c75cfdf0a12d21c297e
-
- 05 Jun, 2022 2 commits
-
-
Peizhao Zhang authored
Summary: Pull Request resolved: https://github.com/facebookresearch/d2go/pull/277 Support cfg_diff for mismatched keys. Reviewed By: wat3rBro Differential Revision: D36737254 fbshipit-source-id: a1c189c92a24f3c109d9a427f135e53876b91624
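A hedged sketch of what diffing with mismatched keys could look like (the actual `cfg_diff` added here may differ; this version treats configs as plain nested dicts):

```python
def cfg_diff(old, new, prefix=""):
    """Recursively diff two nested dicts, tolerating keys on only one side."""
    diffs = {}
    for key in sorted(set(old) | set(new)):
        full_key = prefix + str(key)
        if key not in old:
            diffs[full_key] = ("<missing>", new[key])
        elif key not in new:
            diffs[full_key] = (old[key], "<missing>")
        elif isinstance(old[key], dict) and isinstance(new[key], dict):
            diffs.update(cfg_diff(old[key], new[key], prefix=full_key + "."))
        elif old[key] != new[key]:
            diffs[full_key] = (old[key], new[key])
    return diffs

# mismatched keys are reported instead of raising:
print(cfg_diff({"A": 1, "B": {"C": 2}}, {"B": {"C": 3, "D": 4}}))
```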
-
Peizhao Zhang authored
Summary: Pull Request resolved: https://github.com/facebookresearch/d2go/pull/259 Added randaug and trivialaug to d2go. * Only apply to images. Reviewed By: wat3rBro Differential Revision: D36605753 fbshipit-source-id: f72ea6f3de65ab115e4c6612343e93a092cbd7ea
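For reference, torchvision ships both policies (RandAugment since roughly 0.11, TrivialAugmentWide slightly later); a hedged sketch of image-only usage, since boxes and masks are not transformed alongside the image:

```python
import torch
import torchvision.transforms as T

rand_aug = T.RandAugment(num_ops=2, magnitude=9)
trivial_aug = T.TrivialAugmentWide()

# uint8 CHW tensor image; only the image is augmented, annotations untouched
image = torch.randint(0, 256, (3, 224, 224), dtype=torch.uint8)
augmented_a = rand_aug(image)
augmented_b = trivial_aug(image)
```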
-
- 02 Jun, 2022 2 commits
-
-
Miquel Jubert Hermoso authored
Summary: Pull Request resolved: https://github.com/facebookresearch/d2go/pull/238 *This diff is part of a stack whose goal is "buckifying" D2Go core and enabling autodeps and other tooling. The last diff in the stack introduces the TARGETS. The earlier diffs in the stack resolve circular dependencies and other issues which prevent the buckification from occurring.* Following the comments in an abandoned diff, split the export code into two files, which will have their corresponding dependencies: exporter and api. api.py contains the components which have few dependencies, so it can be imported basically anywhere without circular dependencies. exporter.py contains the utilities which are used for export operations, for example in the exporter binary. Reviewed By: mcimpoi Differential Revision: D36166603 fbshipit-source-id: 25ded0b3925464c05be4048472a4c2ddcdb17ecf
-
Yanghan Wang authored
Summary: Pull Request resolved: https://github.com/facebookresearch/d2go/pull/267 https://www.internalfb.com/intern/test/281475036363610?ref_report_id=0 is flaky; the flakiness is caused by running multiple tests at the same time, and cleanup is not handled very well in that case. Reviewed By: tglik Differential Revision: D36787035 fbshipit-source-id: 6a478318fe011af936dd10fa564519c8c0615ed3
-
- 28 May, 2022 1 commit
-
-
Yanghan Wang authored
Summary: Pull Request resolved: https://github.com/facebookresearch/d2go/pull/264 Reviewed By: tglik Differential Revision: D36427439 fbshipit-source-id: 3502d61ccb3c3f67d7ccfc1f55777c74f6c3b970
-
- 27 May, 2022 1 commit
-
-
Yanghan Wang authored
Summary: Pull Request resolved: https://github.com/facebookresearch/d2go/pull/263 Re-use the code in D36651241 Reviewed By: tglik Differential Revision: D36682041 fbshipit-source-id: 1bf30afbbb69d10fc11f9161e29319f180c65e2f
-
- 25 May, 2022 1 commit
-
-
Yanghan Wang authored
Summary: Pull Request resolved: https://github.com/facebookresearch/d2go/pull/261 X-link: https://github.com/facebookresearch/mobile-vision/pull/71 `is_oss` and `fb_overwritable` are also needed in `mobile_cv`, so move them there from d2go. Reviewed By: zhanghang1989 Differential Revision: D36655821 fbshipit-source-id: 421c4d22d4c4620678908fe13d6e47ab39604ae7
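A hedged sketch of what these two helpers provide (the real implementations now live in mobile_cv; the internal package name below is hypothetical):

```python
import importlib
import importlib.util

def is_oss() -> bool:
    # treat the build as OSS when the internal-only package is absent;
    # "fb_internal" is a hypothetical stand-in for the real module name
    return importlib.util.find_spec("fb_internal") is None

def fb_overwritable(fn):
    """Let an internal (fb) build substitute its own implementation of fn."""
    if is_oss():
        return fn
    internal = importlib.import_module("fb_internal.overrides")  # hypothetical
    return getattr(internal, fn.__name__, fn)

@fb_overwritable
def get_launch_environment() -> str:
    return "local"  # OSS default; an internal build may overwrite this
```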
-
- 21 May, 2022 1 commit
-
-
Jerry Zhang authored
Summary: X-link: https://github.com/pytorch/pytorch/pull/77608 X-link: https://github.com/pytorch/fx2trt/pull/76 Pull Request resolved: https://github.com/facebookresearch/d2go/pull/249 X-link: https://github.com/fairinternal/ClassyVision/pull/104 X-link: https://github.com/pytorch/benchmark/pull/916 X-link: https://github.com/facebookresearch/ClassyVision/pull/791 X-link: https://github.com/facebookresearch/mobile-vision/pull/68

FX Graph Mode Quantization needs to know whether an fx node is a floating point Tensor before it can decide whether to insert an observer/fake_quantize module, since we only insert observer/fake_quantize modules for floating point Tensors. Currently we have some hacks to support this by defining rules like NON_OBSERVABLE_ARG_DICT (https://github.com/pytorch/pytorch/blob/master/torch/ao/quantization/fx/utils.py#L496), but this approach is fragile and we do not plan to maintain it long term in the pytorch code base. As we discussed in the design review, we need to ask users to provide sample args and sample keyword args so that we can infer the type in a more robust way. This PR starts by changing the prepare_fx and prepare_qat_fx APIs to require the user to provide example arguments through example_inputs. Note this API doesn't support kwargs; kwargs could make https://github.com/pytorch/pytorch/pull/76496#discussion_r861230047 (comment) simpler, but they will be rare, and even then we can still work around them with positional arguments. Also, torch.jit.trace (https://pytorch.org/docs/stable/generated/torch.jit.trace.html) and ShapeProp (https://github.com/pytorch/pytorch/blob/master/torch/fx/passes/shape_prop.py#L140) take only positional args, so we'll just use a single example_inputs argument for now. If needed, we can extend the API with an optional example_kwargs, e.g. for cases where forward takes a lot of arguments and it makes more sense to pass them by keyword.

BC-breaking Note:

Before:
```python
m = resnet18(...)
m = prepare_fx(m, qconfig_dict)
# or
m = prepare_qat_fx(m, qconfig_dict)
```

After:
```python
m = resnet18(...)
m = prepare_fx(m, qconfig_dict, example_inputs=(torch.randn(1, 3, 224, 224),))
# or
m = prepare_qat_fx(m, qconfig_dict, example_inputs=(torch.randn(1, 3, 224, 224),))
```

Reviewed By: vkuzo, andrewor14 Differential Revision: D35984526 fbshipit-source-id: 706c8df71722c9aa5082a6491734f0144f0dd670
-
- 20 May, 2022 1 commit
-
-
Miquel Jubert Hermoso authored
Summary: Pull Request resolved: https://github.com/facebookresearch/d2go/pull/245 At the moment D2Go's runner still uses the OSS pattern 1 (see wiki), where the files get remapped. This does not work with D2Go and makes it necessary to use some renaming tricks. This diff refactors the runner setup to reduce the number of classes and rely on fb_overwrite to add the correct fields to the config. Reviewed By: wat3rBro Differential Revision: D36316955 fbshipit-source-id: 4aaaece121b8df802f9395648c97a647fa7db857
-
- 17 May, 2022 3 commits
-
-
Peizhao Zhang authored
Summary: Pull Request resolved: https://github.com/facebookresearch/d2go/pull/208 Support unapplying modeling hooks. Reviewed By: tglik Differential Revision: D35540649 fbshipit-source-id: 60cc5e214282e30b39fc98ba4d58dad2fc6ea086
-
Peizhao Zhang authored
Summary: Pull Request resolved: https://github.com/facebookresearch/d2go/pull/207 Apply modeling hook when building the model from cfg. Differential Revision: D35535571 fbshipit-source-id: e80dd3912911e49c6ed60477f3ba52f74a220dec
-
Peizhao Zhang authored
Summary: Pull Request resolved: https://github.com/facebookresearch/d2go/pull/205 Added basic modeling hook. Reviewed By: tglik Differential Revision: D35535213 fbshipit-source-id: 662b08a905dd45f09737ca9c2d275b0324bcc134
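The three commits above form one stack; a hedged sketch of the hook interface they describe (class and method names are assumptions based on the summaries, not the actual d2go code):

```python
import torch.nn as nn

class ModelingHook:
    """Transforms a model when it is built from cfg; can be undone."""

    def __init__(self, cfg):
        self.cfg = cfg

    def apply(self, model: nn.Module) -> nn.Module:
        """Return the (possibly wrapped) model; called during build_model."""
        return model

    def unapply(self, model: nn.Module) -> nn.Module:
        """Invert apply(), recovering the original model."""
        return model

def apply_modeling_hooks(model: nn.Module, hooks) -> nn.Module:
    for hook in hooks:
        model = hook.apply(model)
    return model

def unapply_modeling_hooks(model: nn.Module, hooks) -> nn.Module:
    for hook in reversed(hooks):  # undo in reverse order of application
        model = hook.unapply(model)
    return model
```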
-
- 15 May, 2022 1 commit
-
-
John Reese authored
Summary: Applies new import merging and sorting from µsort v1.0. When merging imports, µsort will make a best-effort to move associated comments to match merged elements, but there are known limitations due to the dynamic nature of Python and developer tooling. These changes should not produce any dangerous runtime changes, but may require touch-ups to satisfy linters and other tooling. Note that µsort uses case-insensitive, lexicographical sorting, which results in a different ordering compared to isort. This provides a more consistent sorting order, matching the case-insensitive order used when sorting import statements by module name, and ensures that "frog", "FROG", and "Frog" always sort next to each other. For details on µsort's sorting and merging semantics, see the user guide: https://usort.readthedocs.io/en/stable/guide.html#sorting Reviewed By: lisroach Differential Revision: D36402205 fbshipit-source-id: a4efc688d02da80c6e96685aa8eb00411615a366
-
- 14 May, 2022 1 commit
-
-
Yanghan Wang authored
Summary: Pull Request resolved: https://github.com/facebookresearch/d2go/pull/242 Reviewed By: newstzpz Differential Revision: D36297282 fbshipit-source-id: 8efb19b3186f6978283f4e17e0628b55c2ec816e
-
- 12 May, 2022 1 commit
-
-
John Reese authored
Summary: Applies the black-fbsource codemod with the new build of pyfmt. paintitblack Reviewed By: lisroach Differential Revision: D36324783 fbshipit-source-id: 280c09e88257e5e569ab729691165d8dedd767bc
-
- 29 Apr, 2022 1 commit
-
-
Yanghan Wang authored
Summary: Pull Request resolved: https://github.com/facebookresearch/d2go/pull/228 This diff solves https://github.com/facebookresearch/d2go/issues/226 Reviewed By: tglik Differential Revision: D36026321 fbshipit-source-id: 216b0bf7bc48c45deb093c238d70de2b40bc37a3
-
- 26 Apr, 2022 2 commits
-
-
Yanghan Wang authored
Summary: Pull Request resolved: https://github.com/facebookresearch/d2go/pull/221 Reviewed By: tglik Differential Revision: D35855051 fbshipit-source-id: f742dfbc91bb7a20f632a508743fa93e3a7e9aa9
-
Jonathan Zeltser authored
Summary: Pull Request resolved: https://github.com/facebookresearch/d2go/pull/202 This diff prints the diff between the default config and the full config at the start of the run. Reviewed By: wat3rBro Differential Revision: D35346096 fbshipit-source-id: 1ce9b58a8d613d1dd572358ce1e51462c90cb337
-
- 25 Apr, 2022 1 commit
-
-
Yanghan Wang authored
Summary: Pull Request resolved: https://github.com/facebookresearch/d2go/pull/220 Differential Revision: D35853885 fbshipit-source-id: 08d5188d8cd7310f306d07e22cee96bc4d7e06c8
-
- 19 Apr, 2022 2 commits
-
-
Yanghan Wang authored
Summary: Pull Request resolved: https://github.com/facebookresearch/d2go/pull/210 Reviewed By: kimishpatel Differential Revision: D35631192 fbshipit-source-id: a713d86734c6937c16c7ced705171db9ea2f0894
-
Lisa Roach authored
Summary: Pull Request resolved: https://github.com/facebookresearch/d2go/pull/212 Applies new import merging and sorting from µsort v1.0. When merging imports, µsort will make a best-effort to move associated comments to match merged elements, but there are known limitations due to the dynamic nature of Python and developer tooling. These changes should not produce any dangerous runtime changes, but may require touch-ups to satisfy linters and other tooling. Note that µsort uses case-insensitive, lexicographical sorting, which results in a different ordering compared to isort. This provides a more consistent sorting order, matching the case-insensitive order used when sorting import statements by module name, and ensures that "frog", "FROG", and "Frog" always sort next to each other. For details on µsort's sorting and merging semantics, see the user guide: https://usort.readthedocs.io/en/stable/guide.html#sorting Reviewed By: jreese, wat3rBro Differential Revision: D35559673 fbshipit-source-id: feeae2465ac2b62c44a0e92dc566e9a386567c9d
-
- 15 Apr, 2022 1 commit
-
-
Yanghan Wang authored
Summary: X-link: https://github.com/facebookresearch/detectron2/pull/4132 X-link: https://github.com/fairinternal/detectron2/pull/568 Pull Request resolved: https://github.com/facebookresearch/d2go/pull/203 For full discussion: https://fb.workplace.com/groups/1405155842844877/posts/5744470455580039

Tracing `.to(device)` causes problems when moving the traced torchscript to another device (e.g. from cpu to gpu, or even from `cuda:0` to `cuda:1`). The reason is that `device` is not a `torch.Tensor`, so the tracer just hardcodes the value during tracing. The solution is scripting the casting operation. Here's a code snippet illustrating this:

```
# define MyModel similar to GeneralizedRCNN, which casts the input to the model's device
class MyModel(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv1 = nn.Conv2d(3, 20, 5)
        self.conv2 = nn.Conv2d(20, 20, 5)

    def forward(self, x):
        # cast the input to the same device as this model, this makes it possible to
        # take a cpu tensor as input when the model is on GPU.
        x = x.to(self.conv1.weight.device)
        x = F.relu(self.conv1(x))
        return F.relu(self.conv2(x))

# export the model by tracing
model = MyModel()
x = torch.zeros([1, 3, 32, 32])
ts = torch.jit.trace(model, x)
print(ts.graph)
# =====================================================
graph(%self.1 : __torch__.MyModel,
      %x : Float(1, 3, 32, 32, strides=[3072, 1024, 32, 1], requires_grad=0, device=cpu)):
  %conv2 : __torch__.torch.nn.modules.conv.___torch_mangle_0.Conv2d = prim::GetAttr[name="conv2"](%self.1)
  %conv1 : __torch__.torch.nn.modules.conv.Conv2d = prim::GetAttr[name="conv1"](%self.1)
  %14 : int = prim::Constant[value=6]() # <ipython-input-2-5abde0efc36f>:11:0
  %15 : int = prim::Constant[value=0]() # <ipython-input-2-5abde0efc36f>:11:0
  %16 : Device = prim::Constant[value="cpu"]() # <ipython-input-2-5abde0efc36f>:11:0
  %17 : NoneType = prim::Constant()
  %18 : bool = prim::Constant[value=0]() # <ipython-input-2-5abde0efc36f>:11:0
  %19 : bool = prim::Constant[value=0]() # <ipython-input-2-5abde0efc36f>:11:0
  %20 : NoneType = prim::Constant()
  %input.1 : Float(1, 3, 32, 32, strides=[3072, 1024, 32, 1], requires_grad=0, device=cpu) = aten::to(%x, %14, %15, %16, %17, %18, %19, %20) # <ipython-input-2-5abde0efc36f>:11:0
  %72 : Tensor = prim::CallMethod[name="forward"](%conv1, %input.1)
  %input.5 : Float(1, 20, 28, 28, strides=[15680, 784, 28, 1], requires_grad=1, device=cpu) = aten::relu(%72) # /mnt/xarfuse/uid-20293/a90d1698-seed-nspid4026533681_cgpid21128615-ns-4026533618/torch/nn/functional.py:1406:0
  %73 : Tensor = prim::CallMethod[name="forward"](%conv2, %input.5)
  %61 : Float(1, 20, 24, 24, strides=[11520, 576, 24, 1], requires_grad=1, device=cpu) = aten::relu(%73) # /mnt/xarfuse/uid-20293/a90d1698-seed-nspid4026533681_cgpid21128615-ns-4026533618/torch/nn/functional.py:1406:0
  return (%61)
# =====================================================

# PyTorch cuda works
model = copy.deepcopy(model)
model.to("cuda")
y = model(x)

# torchscript cpu works
y = ts(x)

# torchscript cuda doesn't work
ts = ts.to("cuda")
y = ts(x)
# =====================================================
RuntimeError: Input type (torch.FloatTensor) and weight type (torch.cuda.FloatTensor) should be the same or input should be a MKLDNN tensor and weight is a dense tensor
---------------------------------------------------------------------------
RuntimeError                              Traceback (most recent call last)
<ipython-input-4-2aece3ad6c9a> in <module>
      7 # torchscript cuda doesn't work
      8 ts = ts.to("cuda")
----> 9 y = ts(x)

/mnt/xarfuse/uid-20293/a90d1698-seed-nspid4026533681_cgpid21128615-ns-4026533618/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs)
   1108         if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks
   1109                 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1110             return forward_call(*input, **kwargs)
   1111         # Do not call functions when jit is used
   1112         full_backward_hooks, non_full_backward_hooks = [], []
RuntimeError: The following operation failed in the TorchScript interpreter.
# =====================================================

# One solution is scripting the casting instead of tracing it, the following code
# demonstrates how to do it. We need to use mixed scripting/tracing.
@torch.jit.script_if_tracing
def cast_device_like(src: torch.Tensor, dst: torch.Tensor) -> torch.Tensor:
    return src.to(dst.device)

class MyModel2(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv1 = nn.Conv2d(3, 20, 5)
        self.conv2 = nn.Conv2d(20, 20, 5)

    def forward(self, x):
        # cast the input to the same device as this model, this makes it possible to
        # take a cpu tensor as input when the model is on GPU.
        x = cast_device_like(x, self.conv1.weight)
        x = F.relu(self.conv1(x))
        return F.relu(self.conv2(x))

# export the model by tracing
model = MyModel2()
x = torch.zeros([1, 3, 32, 32])
ts = torch.jit.trace(model, x)
print(ts.graph)
# =====================================================
graph(%self.1 : __torch__.MyModel2,
      %x : Float(1, 3, 32, 32, strides=[3072, 1024, 32, 1], requires_grad=0, device=cpu)):
  %conv2 : __torch__.torch.nn.modules.conv.___torch_mangle_5.Conv2d = prim::GetAttr[name="conv2"](%self.1)
  %conv1 : __torch__.torch.nn.modules.conv.___torch_mangle_4.Conv2d = prim::GetAttr[name="conv1"](%self.1)
  %conv1.1 : __torch__.torch.nn.modules.conv.___torch_mangle_4.Conv2d = prim::GetAttr[name="conv1"](%self.1)
  %weight.5 : Tensor = prim::GetAttr[name="weight"](%conv1.1)
  %14 : Function = prim::Constant[name="cast_device_like"]()
  %input.1 : Tensor = prim::CallFunction(%14, %x, %weight.5)
  %68 : Tensor = prim::CallMethod[name="forward"](%conv1, %input.1)
  %input.5 : Float(1, 20, 28, 28, strides=[15680, 784, 28, 1], requires_grad=1, device=cpu) = aten::relu(%68) # /mnt/xarfuse/uid-20293/a90d1698-seed-nspid4026533681_cgpid21128615-ns-4026533618/torch/nn/functional.py:1406:0
  %69 : Tensor = prim::CallMethod[name="forward"](%conv2, %input.5)
  %55 : Float(1, 20, 24, 24, strides=[11520, 576, 24, 1], requires_grad=1, device=cpu) = aten::relu(%69) # /mnt/xarfuse/uid-20293/a90d1698-seed-nspid4026533681_cgpid21128615-ns-4026533618/torch/nn/functional.py:1406:0
  return (%55)
# =====================================================

# PyTorch cuda works
model = copy.deepcopy(model)
model.to("cuda")
y = model(x)

# torchscript cpu works
y = ts(x)

# Note that now torchscript cuda works
ts = ts.to("cuda")
y = ts(x)
print(y.device)
# =====================================================
cuda:0
# =====================================================
```

For D2, this diff creates a `move_tensor_device_same_as_another(A, B)` function to replace `A.to(B.device)`; it updates `rcnn.py` and all its utils. For D2Go, since the exported model becomes device-agnostic, we can remove the "_gpu" from the predictor-type.

Update (April 11): Added a test to cover tracing on one device and moving the traced model to another device for inference. When a GPU is available, it traces on `cuda:0` and runs inference on `cpu` and `cuda:0` (and `cuda:N-1` if available).

Summary of the device-related patterns:
- The usage of `.to(dtype=another_dtype)` won't affect the device.
- Explicit device casting like `.to(device)` can generally be replaced by `move_device_like`.
- For creating variables directly on a device (e.g. `torch.zeros`, `torch.arange`), we can replace them with a ScriptModule to avoid first creating on CPU and then moving to the new device.
- Creating things on the tracing device and then moving them to a new device is dangerous, because the tracing device (e.g. `cuda:0`) might not be available (e.g. when running on a CPU-only machine).
- It's hard to write `image_list.py` in this pattern because the size behaves differently during tracing (int vs. scalar tensor); in this diff, it is still created on CPU first and then moved to the target device.

Reviewed By: tglik Differential Revision: D35367772 fbshipit-source-id: 02d07e3d96da85f4cfbeb996e3c14c2a6f619beb
-
- 07 Apr, 2022 1 commit
-
-
Owen Wang authored
Summary: Allow a string name for the export type to indicate which mobile optimization backend the user wants to trigger. Reviewed By: wat3rBro Differential Revision: D35375928 fbshipit-source-id: dc3f91564681344e1d43862423ab5dc63b6644d3
-
- 05 Apr, 2022 2 commits
-
-
Yanghan Wang authored
Summary: Pull Request resolved: https://github.com/facebookresearch/d2go/pull/200 Currently when exporting the RCNN model, we call it with `self.model.inference(inputs, do_postprocess=False)[0]`, so the output of the exported model is not post-processed, e.g. the mask is in the squared shape. This diff adds the option to include the postprocess in the exported model. Worth noting: since the input is a single tensor, the post-process doesn't resize the output to the original resolution, and we can't apply the post-process twice to further resize it in the Predictor's PostProcessFunc, so an assertion is added to raise an error in this case. But this is fine for most production use cases where the input is not resized. Set `RCNN_EXPORT.INCLUDE_POSTPROCESS` to `True` to enable this. Reviewed By: tglik Differential Revision: D34904058 fbshipit-source-id: 65f120eadc9747e9918d26ce0bd7dd265931cfb5
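A usage sketch, assuming the new key is registered in the runner's default config (the runner choice below is illustrative):

```python
from d2go.runner import GeneralizedRCNNRunner

runner = GeneralizedRCNNRunner()
cfg = runner.get_default_cfg()
# include post-processing in the exported model
cfg.RCNN_EXPORT.INCLUDE_POSTPROCESS = True
```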
-
Yanghan Wang authored
Summary: Pull Request resolved: https://github.com/facebookresearch/d2go/pull/199 - `create_fake_detection_data_loader` currently doesn't take `cfg` as input; sometimes we need to test augmentations that need a more complicated cfg. - The name is a bit bad; rename it to `create_detection_data_loader_on_toy_dataset`. - width/height were previously the resized size; we want to change them to the size of the data source (image files) and use `cfg` to control the resized size. Update V3: In V2 there were some test failures. The reason is that, before this diff and the dataset name change, V2 was building the data loader (via the GeneralizedRCNN runner) using the actual test config instead of the default config. V3 uses the test's runner instead of the default runner for consistency. This reveals some real bugs that we didn't test before. Reviewed By: omkar-fb Differential Revision: D35238890 fbshipit-source-id: 28a6037374e74f452f91b494bd455b38d3a48433
-
- 24 Mar, 2022 2 commits
-
-
Yanghan Wang authored
Summary: Pull Request resolved: https://github.com/facebookresearch/d2go/pull/191 When exporting the model to torchscript (using `MODEL.DEVICE = "cpu"`), mean/std are constants instead of model parameters. Therefore, after casting the torchscript to CUDA, the mean/std remain on cpu. This causes problems when running inference on GPU. The fix is exporting the model with `MODEL.DEVICE = "cuda"`. However, D2Go internally uses "cpu" during export (via cli: https://fburl.com/code/4mpk153i, via workflow: https://fburl.com/code/zcj5ud4u) by default. For the CLI, the user can manually set `--device`, but for the workflow it's hard to do so. Furthermore, it's hard to support mixed models using a single `--device` option. So this diff adds special handling in the RCNN's `default_prepare_for_export` to bypass the `--device` option. Reviewed By: zhanghang1989 Differential Revision: D35097613 fbshipit-source-id: df9f44f49af1f0fd4baf3d7ccae6c31e341f3ef6
-
Yanghan Wang authored
Summary: Pull Request resolved: https://github.com/facebookresearch/d2go/pull/192 Nowadays lightning initializes the process group when using the ddp strategy; since `TestLightningTrainNet` does a training run with the ddp strategy (https://fburl.com/code/a9yp0kzy), the process group ends up initialized after running the test. However, other tests also set up ddp and thus expect a non-initialized process group. This is not a problem on sandcastle since the tests run separately, but in the OSS env the tests run together, so the error happens (e.g. https://github.com/facebookresearch/d2go/runs/5668912203?check_suite_focus=true). This diff adds a cleanup step in `TestLightningTrainNet`. Reviewed By: tglik Differential Revision: D35099944 fbshipit-source-id: f5b42b2a87d4efd9aa0ed97e6bd2140d80ab9522
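A minimal sketch of such a cleanup step, assuming unittest and the standard torch.distributed API (the exact d2go test code may differ):

```python
import unittest
import torch.distributed as dist

class TestLightningTrainNet(unittest.TestCase):
    def tearDown(self):
        # lightning's ddp strategy can leave the default process group
        # initialized; destroy it so later tests see a clean state
        if dist.is_available() and dist.is_initialized():
            dist.destroy_process_group()
```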
-
- 16 Mar, 2022 2 commits
-
-
Yanghan Wang authored
Summary: D33301363 changed the signature of `update_cfg` from `update_cfg(cfg, *updaters)` to `update_cfg(cfg, updaters, new_allowed)`, while the call sites were not updated. E.g. in https://www.internalfb.com/code/fbsource/[9e071979a62ba7fd3d7a71dee1f0809815cbaa43]/fbcode/fblearner/flow/projects/mobile_vision/detectron2go/core/workflow.py?lines=221-225, the `merge_from_list_updater(e2e_train.overwrite_opts),` is then not used. For the fix: - Since there are a lot of call sites for `update_cfg`, it's better to keep the original signature. - ~~~The `new_allowed` can actually be passed to each individual updater instead of `update_cfg`, which also gives finer control.~~~ - Override `merge_from_list` to make it respect `new_allowed`. - Preserve `new_allowed` for all nodes (not only the root) in the FLOW Future calls. Reviewed By: zhanghang1989 Differential Revision: D34840001 fbshipit-source-id: 14aff6bec75a8b53d4109e6cd73d2494f68863b4
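A hedged sketch of the `merge_from_list` override, assuming a yacs-style CfgNode (yacs exposes `is_new_allowed()` and the `_decode_cfg_value` classmethod); the real d2go/mobile_cv code may differ:

```python
from yacs.config import CfgNode as _CfgNode

class CfgNode(_CfgNode):
    def merge_from_list(self, cfg_list):
        # when new_allowed is set, pre-register unknown keys so the
        # upstream merge no longer rejects them
        if self.is_new_allowed():
            for full_key, value in zip(cfg_list[0::2], cfg_list[1::2]):
                node = self
                keys = full_key.split(".")
                for k in keys[:-1]:
                    node = node.setdefault(k, CfgNode(new_allowed=True))
                node.setdefault(keys[-1], self._decode_cfg_value(value))
        super().merge_from_list(cfg_list)
```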
-
Ananth Subramaniam authored
Reviewed By: kazhang Differential Revision: D34669519 fbshipit-source-id: 8cfee968104c823a55960f2730d8e888ac1f298e
-
- 08 Mar, 2022 1 commit
-
-
Yanghan Wang authored
Summary: Pull Request resolved: https://github.com/facebookresearch/d2go/pull/187 Reviewed By: ananthsub, zhanghang1989 Differential Revision: D34650467 fbshipit-source-id: b9518e5dd673b709320b87e57a43d053eca3aabe
-