1. 21 Mar, 2023 1 commit
  2. 05 Jan, 2023 1 commit
    • use torch.testing.assert_close in test_modeling_distillation · c088c257
      Yanghan Wang authored
      Summary:
      Pull Request resolved: https://github.com/facebookresearch/d2go/pull/455
      
      The test can be flaky due to numerical mismatch if using `self.assertEqual`, e.g. https://www.internalfb.com/intern/testinfra/diagnostics/1688850007977704.562950031998292.1672749571/
      
      ```
      Traceback (most recent call last):
        File "/data/sandcastle/boxes/eden-trunk-hg-fbcode-fbsource/buck-out/v2/gen/fbcode/104a4d5c3a690252/mobile-vision/d2go/tests/__modeling_test_modeling_distillation__/modeling_test_modeling_distillation#link-tree/d2go/tests/modeling/test_modeling_distillation.py", line 674, in test_da_train
          self.assertEqual(
      AssertionError: {'rea[14 chars]2894], grad_fn=<MulBackward0>), 'synthetic': t[85 chars]d0>)} != {'rea[14 chars]2894]), 'synthetic': tensor([1.4532]), 'add': [13 chars]64])}
      - {'add': tensor([18.0064], grad_fn=<MulBackward0>),
      -  'real': tensor([0.2894], grad_fn=<MulBackward0>),
      -  'synthetic': tensor([1.4532], grad_fn=<MulBackward0>)}
      + {'add': tensor([18.0064]),
      +  'real': tensor([0.2894]),
      +  'synthetic': tensor([1.4532])}
      ```
      
      Change to use `torch.testing.assert_close` for tensor comparisons instead.
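      
      For reference, a minimal sketch of the replacement comparison (values taken from the failure above; the point is that `assert_close` compares values within tolerances, works on dicts of tensors, and ignores `grad_fn`):
      ```
      import torch

      expected = {"real": torch.tensor([0.2894]), "synthetic": torch.tensor([1.4532])}
      actual = {k: v + 1e-9 for k, v in expected.items()}  # simulated numerical drift
      torch.testing.assert_close(actual, expected)  # passes; assertEqual would have required exact equality
      ```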
      
      Reviewed By: YanjunChen329
      
      Differential Revision: D42352509
      
      fbshipit-source-id: 8a647685d1347a9bd493f2faed7e066eb9159e14
  3. 09 Dec, 2022 1 commit
  4. 08 Dec, 2022 1 commit
  5. 30 Nov, 2022 5 commits
    • support caching tuples · dece58ba
      Matthew Yu authored
      Summary:
      Pull Request resolved: https://github.com/facebookresearch/d2go/pull/432
      
      We support caching of tuples since they behave similarly to lists.
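      
      A hedged sketch of treating tuples like lists when cloning outputs for the cache (the helper name is illustrative, not the d2go API):
      ```
      import torch

      def clone_for_cache(output):
          # Clone tensors; recurse into lists and tuples the same way.
          if isinstance(output, torch.Tensor):
              return output.clone()
          if isinstance(output, (list, tuple)):
              cloned = [clone_for_cache(x) for x in output]
              return tuple(cloned) if isinstance(output, tuple) else cloned
          return output
      ```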
      
      Reviewed By: XiaoliangDai
      
      Differential Revision: D41483876
      
      fbshipit-source-id: 9d741074f8e2335ddd737ae3f1bdb288910f5564
    • algorithm · 150db2d1
      Matthew Yu authored
      Summary:
      Pull Request resolved: https://github.com/facebookresearch/d2go/pull/431
      
      Add a generic domain adaptation algorithm. This algorithm:
      * gets domain0 data out of the dataloader
      * runs domain0 data into the model and saves target layer output
      * gets domain1 data out of the dataloader
      * runs domain1 data into the model and saves target layer output
      * runs domain adaptation loss on domain0, domain1 outputs
      * combines losses using model training iteration
      
      This diff adds `get_preprocess_domain0_input` and `get_preprocess_domain1_input` to the distillation helper. These are functions the user can use to convert the dataloader output into something the model can consume (e.g., pull the domain0 or domain1 key out of a dataloader that returns a dict).
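      
      A rough sketch of that loop (argument names and the model's return convention are assumptions here; the real algorithm records target-layer outputs via dynamically mixed-in cached layers):
      ```
      def domain_adaptation_step(model, batched_inputs, helper, da_loss, combine_losses):
          d0 = helper.get_preprocess_domain0_input()(batched_inputs)
          losses, out0 = model(d0)   # out0: saved target-layer output for domain0
          d1 = helper.get_preprocess_domain1_input()(batched_inputs)
          _, out1 = model(d1)        # out1: saved target-layer output for domain1
          losses["da_loss"] = da_loss(out0, out1)
          return combine_losses(losses)
      ```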
      
      Differential Revision: D40970724
      
      fbshipit-source-id: fff050fbe864654fa6cb0df927f6843855ec1c14
    • support registering layer losses to model · c4860c5b
      Matthew Yu authored
      Summary:
      Pull Request resolved: https://github.com/facebookresearch/d2go/pull/430
      
      We add losses in distillation by instantiating them in the distillation algorithm's init and then running them during the forward pass.
      
      However this has some issues:
      * the losses are not registered as modules in the model since we organize them as a list of `LayerLossMetadata` => this means that things like AMP do not behave as expected
      * the losses are not on the same device as the rest of the model since they are potentially created after the model is moved to a new device
      
      This diff solves both issues with a helper function that registers the losses on the model and moves them to the model's device. `register_layer_losses_and_to_device` takes a `List[LayerLossMetadata]`, moves the losses to the same device as the model, and then registers them on the model.
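      
      A hedged sketch of what such a helper can look like (the exact signature and field names in d2go may differ):
      ```
      import torch.nn as nn

      def register_layer_losses_and_to_device(model: nn.Module, layer_losses):
          device = next(model.parameters()).device
          for ll in layer_losses:
              ll.loss.to(device)                            # keep the loss on the model's device
              model.add_module(f"{ll.name}_loss", ll.loss)  # register so AMP/DDP/.to() see it
          return layer_losses
      ```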
      
      Differential Revision: D41296932
      
      fbshipit-source-id: ae7ae0847bce1b5cc481d838b9cae69cea424f25
    • support ignoring teacher · 909de50d
      Matthew Yu authored
      Summary:
      Pull Request resolved: https://github.com/facebookresearch/d2go/pull/429
      
      Add a teacher type called `no_teacher` which the user can specify when the teacher should be ignored (e.g., domain adaptation). Building the teacher then just returns a no-op (`nn.Identity`).
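      
      A sketch of the idea (the config key used to request this is an assumption, not necessarily the d2go key):
      ```
      import torch.nn as nn

      def build_teacher(cfg):
          if cfg.DISTILLATION.TEACHER.TYPE == "no_teacher":
              return nn.Identity()  # no-op teacher, e.g., for domain adaptation
          ...  # other teacher types are built as before
      ```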
      
      Differential Revision: D40971788
      
      fbshipit-source-id: fc49ac44224c92806a7be253eefb8454305814eb
    • set cache in recorded layers · 30ac5858
      Matthew Yu authored
      Summary:
      Pull Request resolved: https://github.com/facebookresearch/d2go/pull/433
      
      Distillation uses a module called `CachedLayer` to record the outputs of a layer to an internal dict. This dict is typically initialized by the object itself and any value is overwritten every time the model runs.
      
      However, sometimes we need the layer output from more than one run (e.g., domain adaptation => we run the model on real, then synthetic data and need both outputs).
      
      This diff adds a helper to externally set the cache dict of a model. In other words, we can run `set_cache_dict` on some model to change the dict used by all `CachedLayer`s in the model. This allows us to run the model and record some outputs, then change the cache dict and rerun the model to save different outputs.
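      
      Illustrative usage only (assuming a hypothetical `model` whose target layers are already wrapped with `CachedLayer`, and hypothetical `real_inputs`/`synthetic_inputs`):
      ```
      cache_real, cache_synthetic = {}, {}

      set_cache_dict(model, cache_real)
      model(real_inputs)           # CachedLayer instances write outputs into cache_real

      set_cache_dict(model, cache_synthetic)
      model(synthetic_inputs)      # the same layers now write into cache_synthetic
      ```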
      
      Differential Revision: D40970577
      
      fbshipit-source-id: 49cb851af49ae193d0c8ac9218e02fdaf4e6587b
  6. 22 Nov, 2022 1 commit
    • add default layer losses and loss combiner · 419974bb
      Matthew Yu authored
      Summary:
      Pull Request resolved: https://github.com/facebookresearch/d2go/pull/421
      
      Add some reasonable defaults when running knowledge distillation
      * get_default_kd_image_classification_layer_losses => returns cross entropy loss on the output of the student classification layer and the teacher output (this is what the imagenet distillation uses)
      * DefaultLossCombiner => simple function to multiply the losses by some weights
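      
      A rough sketch of the combiner's behavior (a generic stand-in, not necessarily the exact DefaultLossCombiner implementation):
      ```
      class WeightedLossCombiner:
          def __init__(self, weights):
              self.weights = weights  # e.g., {"classification_loss": 1.0, "kd_loss": 0.5}

          def __call__(self, loss_dict):
              return {k: v * self.weights.get(k, 1.0) for k, v in loss_dict.items()}
      ```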
      
      Unsure if these should go in `distillation.py` or a separate place (e.g., defaults or classification)
      
      Reviewed By: chihyaoma
      
      Differential Revision: D40330718
      
      fbshipit-source-id: 5887566d88e3a96d01aca133c51041126b2692cc
  7. 19 Nov, 2022 1 commit
    • kd algorithm · 9ec4f2bf
      Matthew Yu authored
      Summary:
      Pull Request resolved: https://github.com/facebookresearch/d2go/pull/420
      
      Adds knowledge distillation as a generic algorithm that can be used by various projects.
      
      If eval, the algorithm just returns the result of the student model.
      
      If training, the algorithm feeds the input into both the student and teacher models. The user provides a list of `LayerLossMetadata` specifying the layers and the losses run on those layers. The algorithm uses dynamic mixin to record the outputs of the relevant layers and computes the losses after both models have run.
      
      We provide student and teacher preprocessing as a placeholder until we support a more generic dataloader that can provide different inputs to the student and teacher (e.g., as of now, if you want to provide the teacher with a larger input, the dataloader should return the large input and the student preprocessing can downsample it).
      
      We add the following functions as part of the user customizable distillation helper:
      * get_teacher => return a teacher that can be used directly by the KD algorithm
      * get_layer_losses => return a list of `LayerLossMetadata` that provides the layers and losses
      * get_preprocess_student_input => manipulate the output of the dataloader before passing to the student
      * get_preprocess_teacher_input => manipulate the output of the dataloader before passing to the teacher
      * get_combine_losses => since we may want to weight the student and distillation losses, return a function that can manipulate the loss_dict
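      
      A very rough sketch of this training-time flow (attribute names and the return conventions are assumptions, not the exact d2go implementation; `LayerLossMetadata` fields follow the next diff's description):
      ```
      import torch

      def kd_training_step(student, teacher, batched_inputs, helper, layer_losses, combine_losses):
          s_in = helper.get_preprocess_student_input()(batched_inputs)
          t_in = helper.get_preprocess_teacher_input()(batched_inputs)
          loss_dict, s_cache = student(s_in)   # student losses + recorded layer outputs
          with torch.no_grad():
              _, t_cache = teacher(t_in)       # recorded teacher layer outputs
          for ll in layer_losses:
              loss_dict[ll.name] = ll.loss(s_cache[ll.layer0], t_cache[ll.layer1])
          return combine_losses(loss_dict)
      ```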
      
      Reviewed By: chihyaoma
      
      Differential Revision: D40326412
      
      fbshipit-source-id: 2fb0e818a7d5b120d62fb7aba314ff96cc7e10c5
  8. 17 Nov, 2022 2 commits
    • add class to keep track of loss metadata and function to compute losses · 0316fed4
      Matthew Yu authored
      Summary:
      Pull Request resolved: https://github.com/facebookresearch/d2go/pull/419
      
      This diff adds a metadata class `LayerLossMetadata` to help keep track of the losses we want to compute over layers. The class contains the type of loss, loss name, and layer names.
      
      This diff adds a helper function to iterate over a list of `LayerLossMetadata` and return a dict containing the results.
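      
      A hedged sketch of the metadata class and the helper (field and function names follow the summary but may not match d2go exactly):
      ```
      from dataclasses import dataclass

      import torch.nn as nn

      @dataclass
      class LayerLossMetadata:
          loss: nn.Module   # loss module to run
          name: str         # key used in the returned loss dict
          layer0: str       # e.g., student layer name
          layer1: str       # e.g., teacher layer name

      def compute_layer_losses(layer_losses, cache0, cache1):
          return {ll.name: ll.loss(cache0[ll.layer0], cache1[ll.layer1]) for ll in layer_losses}
      ```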
      
      Reviewed By: chihyaoma
      
      Differential Revision: D40286564
      
      fbshipit-source-id: b269dc63cc90a437ca279379d759c3106016327c
    • add a helper to record layers in a model · 53c4c2c1
      Matthew Yu authored
      Summary:
      Pull Request resolved: https://github.com/facebookresearch/d2go/pull/418
      
      This diff adds a function that can be used to add `CachedLayer`s to a model. The function iterates over named modules and dynamically mixes `CachedLayer` into the target modules.
      
      This diff adds a function to remove the cached layers.
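      
      A hedged sketch of the add/remove pair; note that d2go does this via dynamic mixin of `CachedLayer`, whereas the stand-in below uses forward hooks to show the same record/unrecord idea:
      ```
      import torch.nn as nn

      def record_layers(model: nn.Module, layer_names, cache: dict):
          handles = []
          for name, module in model.named_modules():
              if name in layer_names:
                  handles.append(module.register_forward_hook(
                      lambda mod, inp, out, name=name: cache.__setitem__(name, out)
                  ))
          return handles

      def unrecord_layers(handles):
          for handle in handles:
              handle.remove()
      ```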
      
      Reviewed By: Minione
      
      Differential Revision: D40285806
      
      fbshipit-source-id: 3137d19927d8fb9ec924a77c9085aea29fe94d5e
  9. 16 Nov, 2022 2 commits
    • support a layer that saves outputs · 120b463c
      Matthew Yu authored
      Summary:
      Pull Request resolved: https://github.com/facebookresearch/d2go/pull/417
      
      This diff adds a layer `CachedLayer` which is meant to be used with dynamic mixin. This layer runs the original module and clones the output into a dictionary provided by the user.
      
      The main use case is in distillation, where we dynamically mix these layers into the layers on which the user wants to compute various losses.
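      
      A minimal sketch of such a layer (shown as a wrapper for clarity; the real `CachedLayer` is applied via dynamic mixin rather than wrapping):
      ```
      import torch
      import torch.nn as nn

      class CachedLayerSketch(nn.Module):
          def __init__(self, module: nn.Module, key: str, cache: dict):
              super().__init__()
              self.module, self.key, self.cache = module, key, cache

          def forward(self, *args, **kwargs):
              out = self.module(*args, **kwargs)
              # clone tensor outputs into the user-provided dict
              self.cache[self.key] = out.clone() if isinstance(out, torch.Tensor) else out
              return out
      ```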
      
      See subsequent diffs to get integration with distillation.
      
      Reviewed By: Minione
      
      Differential Revision: D40285573
      
      fbshipit-source-id: 2058deff8b96f63aebd1e9b9933a5352b5197111
    • update teacher to support models where device is a property · 0f27e90f
      Matthew Yu authored
      Summary:
      Pull Request resolved: https://github.com/facebookresearch/d2go/pull/416
      
      Distillation assumes the teacher model has an attribute `device`. Sometimes this attribute is actually a property (e.g., GeneralizedRCNN), but there is no guarantee that it exists. We add a helper function to move the model to the device and add this attribute if needed.
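      
      A hedged sketch of such a helper (the name and signature are assumptions):
      ```
      import torch
      import torch.nn as nn

      def move_model_to_device(model: nn.Module, device: torch.device) -> nn.Module:
          model = model.to(device)
          if not hasattr(model, "device"):  # e.g., GeneralizedRCNN exposes `device` as a property
              model.device = device
          return model
      ```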
      
      Reviewed By: chihyaoma
      
      Differential Revision: D40283954
      
      fbshipit-source-id: 42921653eac8a79499e22edac29aa6aeac016e8a
  10. 15 Nov, 2022 1 commit
  11. 03 Nov, 2022 1 commit
  12. 28 Oct, 2022 1 commit
  13. 06 Oct, 2022 1 commit
  14. 28 Sep, 2022 1 commit
    • support pytorch checkpoint as teacher model using config · dc176d58
      Matthew Yu authored
      Summary:
      Pull Request resolved: https://github.com/facebookresearch/d2go/pull/371
      
      In a previous iteration of this diff, we were specifying the teacher model in the same config as the student model, something like:
      ```
      # config.py
      MODEL:
        FBNET_V2:
        ...
      DISTILLATION:
        TEACHER:
          MODEL:
            FBNET_V2:
            ...
            WEIGHTS: /path/to/teacher/weights
      ...
      ```
      
      This leads to some oddities in the code, e.g., we have to keep a default config that adds all the required keys for the distillation teacher model.
      
      In this diff, we just let the user supply a teacher config (and optionally runner_name and overwrite opts) and use the supplied runner to build the model:
      ```
      # new_config.py
      MODEL:
        FBNET_V2:
      ...
      DISTILLATION:
        TEACHER:
          CONFIG_FNAME: /path/to/teacher/config
          RUNNER_NAME:
      ...
      ```
      
      This should make it very easy to specify the teacher as the user could potentially just reuse the trained_config generated in d2go.
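      
      A hedged sketch of building the teacher from its own config (the `load_config`/`create_runner` names here are illustrative, not necessarily the exact d2go API):
      ```
      def build_teacher(cfg):
          teacher_cfg = load_config(cfg.DISTILLATION.TEACHER.CONFIG_FNAME)
          runner = create_runner(cfg.DISTILLATION.TEACHER.RUNNER_NAME)
          return runner.build_model(teacher_cfg, eval_only=True)
      ```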
      
      Reviewed By: newstzpz
      
      Differential Revision: D37640041
      
      fbshipit-source-id: 088a636c96f98279c9a04e32d1674f703451aec3
  15. 12 Aug, 2022 1 commit
  16. 27 Jul, 2022 1 commit
  17. 06 Jul, 2022 1 commit
  18. 29 Jun, 2022 1 commit
  19. 24 Jun, 2022 1 commit
    • use runner class instead of instance outside of main · 8051775c
      Yanghan Wang authored
      Summary:
      Pull Request resolved: https://github.com/facebookresearch/d2go/pull/312
      
      As discussed, we decided not to use the runner instance outside of `main`. Previous diffs already solved the prerequisites; this diff mainly does the renaming.
      - Use runner name (str) in the fblearner, ML pipeline.
      - Use runner name (str) in FBL operator, MAST and binary operator.
      - Use runner class as the interface of `main`; it can be either the name of the class (str) or the actual class. The main usage should be a `str`, so that importing the class happens inside `main`. But it's also a common use case to import the runner class and call `main` for things like ad-hoc scripts or tests; supporting the actual class makes it easier to modify code for those cases (e.g. some local test class doesn't have a name, so it's not feasible to use the runner name).
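      
      A small sketch of accepting either form (illustrative only):
      ```
      from pydoc import locate

      def resolve_runner_class(runner_class):
          # `runner_class` may be the dotted name of the class (str) or the class itself;
          # the import by name happens here, inside main.
          if isinstance(runner_class, str):
              runner_class = locate(runner_class)
          return runner_class
      ```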
      
      Reviewed By: newstzpz
      
      Differential Revision: D37060338
      
      fbshipit-source-id: 879852d41902b87d6db6cb9d7b3e8dc55dc4b976
  20. 20 Jun, 2022 1 commit
  21. 17 Jun, 2022 1 commit
  22. 16 Jun, 2022 1 commit
    • add modeling hook algo and helper · f3fc01aa
      Matthew Yu authored
      Summary:
      Pull Request resolved: https://github.com/facebookresearch/d2go/pull/299
      
      This implements the first iteration of generalized distillation in D2Go. The functionality is separated into the following:
      
      => Adding distillation functionality without user changing their meta architecture:
      
      class DistillationModelingHook
      * This is an implementation detail that we hide from the user.
      * We use dynamic mixin to add additional functionality to the user's model. In this way, the original (student) model retains all its attributes but the mixin class overrides the forward (and provides more functionality like teacher updates).
      * Teacher building currently only supports loading a torchscript model; pytorch compatibility comes in later diffs
      
      => Implementing distillation methods
      
      class DistillationAlgorithm
      * The user can use a default algorithm (e.g., LabelDistillation) or create their own. This is where we specify the overridden forward func of the model and any other distillation requirements (updating the weights of the teacher model).
      * The basic LabelDistillation allows a user to use a teacher model during training to relabel the gt
      
      => User customization
      
      class DistillationHelper
      * This is what we expect the user to customize. As an example, the user probably needs to write their own pseudo_labeler that takes batched_inputs and relabels them with the teacher
      
      Both DistillationHelper and DistillationAlgorithm use a registry so that users can add their customizations in their own code and enable them by specifying them in the config
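      
      A generic sketch of that registry pattern (a stand-in, not the actual d2go registry objects or base classes):
      ```
      DISTILLATION_HELPER_REGISTRY = {}

      def register_helper(cls):
          DISTILLATION_HELPER_REGISTRY[cls.__name__] = cls
          return cls

      @register_helper
      class MyDistillationHelper:
          def get_pseudo_labeler(self):
              # return a callable that relabels batched_inputs using the teacher
              return lambda teacher, batched_inputs: teacher(batched_inputs)

      helper_cls = DISTILLATION_HELPER_REGISTRY["MyDistillationHelper"]  # looked up by name from the config
      ```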
      
      Reviewed By: newstzpz
      
      Differential Revision: D36708227
      
      fbshipit-source-id: bc427c5d42d0c7ff4d839bf10782efac24dea107
  23. 02 Jun, 2022 1 commit
    • Separate into API and Exporter · 24da990f
      Miquel Jubert Hermoso authored
      Summary:
      Pull Request resolved: https://github.com/facebookresearch/d2go/pull/238
      
      *This diff is part of a stack whose goal is "buckifying" the D2Go core and enabling autodeps and other tooling. The last diff in the stack introduces the TARGETS. The diffs earlier in the stack resolve circular dependencies and other issues which prevent the buckification from occurring.*
      
      Following the comments in an abandoned diff, split the export code into two files, which will have their corresponding dependencies: exporter and api. api.py contains the components which have few dependencies, so it can be imported basically anywhere without circular dependencies.
      
      exporter.py contains the utilities used for export operations, for example in the exporter binary.
      
      Reviewed By: mcimpoi
      
      Differential Revision: D36166603
      
      fbshipit-source-id: 25ded0b3925464c05be4048472a4c2ddcdb17ecf
  24. 25 May, 2022 1 commit
  25. 17 May, 2022 3 commits
  26. 15 May, 2022 1 commit
    • apply import merging for fbcode (7 of 11) · b3a9204c
      John Reese authored
      Summary:
      Applies new import merging and sorting from µsort v1.0.
      
      When merging imports, µsort will make a best-effort to move associated
      comments to match merged elements, but there are known limitations due to
      the dynamic nature of Python and developer tooling. These changes should
      not produce any dangerous runtime changes, but may require touch-ups to
      satisfy linters and other tooling.
      
      Note that µsort uses case-insensitive, lexicographical sorting, which
      results in a different ordering compared to isort. This provides a more
      consistent sorting order, matching the case-insensitive order used when
      sorting import statements by module name, and ensures that "frog", "FROG",
      and "Frog" always sort next to each other.
      
      For details on µsort's sorting and merging semantics, see the user guide:
      https://usort.readthedocs.io/en/stable/guide.html#sorting
      
      Reviewed By: lisroach
      
      Differential Revision: D36402205
      
      fbshipit-source-id: a4efc688d02da80c6e96685aa8eb00411615a366
  27. 19 Apr, 2022 1 commit
  28. 15 Apr, 2022 1 commit
    • enable moving traced model between devices · 2235f180
      Yanghan Wang authored
      Summary:
      X-link: https://github.com/facebookresearch/detectron2/pull/4132
      
      X-link: https://github.com/fairinternal/detectron2/pull/568
      
      Pull Request resolved: https://github.com/facebookresearch/d2go/pull/203
      
      For full discussion: https://fb.workplace.com/groups/1405155842844877/posts/5744470455580039
      
      Tracing the `.to(device)` call causes problems when moving the traced torchscript to another device (e.g. from cpu to gpu, or even from `cuda:0` to `cuda:1`). The reason is that `device` is not a `torch.Tensor`, so the tracer just hardcodes the value during tracing. The solution is to script the casting operation.
      
      Here's the code snippet illustrating this:
      ```
      # define the MyModel similar to GeneralizedRCNN, which casts the input to the model's device
      class MyModel(nn.Module):
          def __init__(self):
              super().__init__()
      
              self.conv1 = nn.Conv2d(3, 20, 5)
              self.conv2 = nn.Conv2d(20, 20, 5)
      
          def forward(self, x):
              # cast the input to the same device as this model, this makes it possible to
              # take a cpu tensor as input when the model is on GPU.
              x = x.to(self.conv1.weight.device)
      
              x = F.relu(self.conv1(x))
              return F.relu(self.conv2(x))
      
      # export the model by tracing
      model = MyModel()
      x = torch.zeros([1, 3, 32, 32])
      ts = torch.jit.trace(model, x)
      print(ts.graph)
      
      # =====================================================
      graph(%self.1 : __torch__.MyModel,
            %x : Float(1, 3, 32, 32, strides=[3072, 1024, 32, 1], requires_grad=0, device=cpu)):
        %conv2 : __torch__.torch.nn.modules.conv.___torch_mangle_0.Conv2d = prim::GetAttr[name="conv2"](%self.1)
        %conv1 : __torch__.torch.nn.modules.conv.Conv2d = prim::GetAttr[name="conv1"](%self.1)
        %14 : int = prim::Constant[value=6]() # <ipython-input-2-5abde0efc36f>:11:0
        %15 : int = prim::Constant[value=0]() # <ipython-input-2-5abde0efc36f>:11:0
        %16 : Device = prim::Constant[value="cpu"]() # <ipython-input-2-5abde0efc36f>:11:0
        %17 : NoneType = prim::Constant()
        %18 : bool = prim::Constant[value=0]() # <ipython-input-2-5abde0efc36f>:11:0
        %19 : bool = prim::Constant[value=0]() # <ipython-input-2-5abde0efc36f>:11:0
        %20 : NoneType = prim::Constant()
        %input.1 : Float(1, 3, 32, 32, strides=[3072, 1024, 32, 1], requires_grad=0, device=cpu) = aten::to(%x, %14, %15, %16, %17, %18, %19, %20) # <ipython-input-2-5abde0efc36f>:11:0
        %72 : Tensor = prim::CallMethod[name="forward"](%conv1, %input.1)
        %input.5 : Float(1, 20, 28, 28, strides=[15680, 784, 28, 1], requires_grad=1, device=cpu) = aten::relu(%72) # /mnt/xarfuse/uid-20293/a90d1698-seed-nspid4026533681_cgpid21128615-ns-4026533618/torch/nn/functional.py:1406:0
        %73 : Tensor = prim::CallMethod[name="forward"](%conv2, %input.5)
        %61 : Float(1, 20, 24, 24, strides=[11520, 576, 24, 1], requires_grad=1, device=cpu) = aten::relu(%73) # /mnt/xarfuse/uid-20293/a90d1698-seed-nspid4026533681_cgpid21128615-ns-4026533618/torch/nn/functional.py:1406:0
        return (%61)
      # =====================================================
      
      # PyTorch cuda works
      model = copy.deepcopy(model)
      model.to("cuda")
      y = model(x)
      # torchscript cpu works
      y = ts(x)
      # torchscript cuda doesn't work
      ts = ts.to("cuda")
      y = ts(x)
      
      # =====================================================
      RuntimeError: Input type (torch.FloatTensor) and weight type (torch.cuda.FloatTensor) should be the same or input should be a MKLDNN tensor and weight is a dense tensor
      ---------------------------------------------------------------------------
      RuntimeError                              Traceback (most recent call last)
      <ipython-input-4-2aece3ad6c9a> in <module>
            7 # torchscript cuda doesn't work
            8 ts = ts.to("cuda")
      ----> 9 y = ts(x)
      /mnt/xarfuse/uid-20293/a90d1698-seed-nspid4026533681_cgpid21128615-ns-4026533618/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs)
         1108         if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks
         1109                 or _global_forward_hooks or _global_forward_pre_hooks):
      -> 1110             return forward_call(*input, **kwargs)
         1111         # Do not call functions when jit is used
         1112         full_backward_hooks, non_full_backward_hooks = [], []
      RuntimeError: The following operation failed in the TorchScript interpreter.
      # =====================================================
      
      # One solution is scripting the casting instead of tracing it; the following code demonstrates how to do it. We need to use mixed scripting/tracing.
      @torch.jit.script_if_tracing
      def cast_device_like(src: torch.Tensor, dst: torch.Tensor) -> torch.Tensor:
          return src.to(dst.device)
      
      class MyModel2(nn.Module):
          def __init__(self):
              super().__init__()
      
              self.conv1 = nn.Conv2d(3, 20, 5)
              self.conv2 = nn.Conv2d(20, 20, 5)
      
          def forward(self, x):
              # cast the input to the same device as this model, this makes it possible to
              # take a cpu tensor as input when the model is on GPU.
              x = cast_device_like(x, self.conv1.weight)
      
              x = F.relu(self.conv1(x))
              return F.relu(self.conv2(x))
      
      # export the model by tracing
      model = MyModel2()
      x = torch.zeros([1, 3, 32, 32])
      ts = torch.jit.trace(model, x)
      print(ts.graph)
      
      # =====================================================
      graph(%self.1 : __torch__.MyModel2,
            %x : Float(1, 3, 32, 32, strides=[3072, 1024, 32, 1], requires_grad=0, device=cpu)):
        %conv2 : __torch__.torch.nn.modules.conv.___torch_mangle_5.Conv2d = prim::GetAttr[name="conv2"](%self.1)
        %conv1 : __torch__.torch.nn.modules.conv.___torch_mangle_4.Conv2d = prim::GetAttr[name="conv1"](%self.1)
        %conv1.1 : __torch__.torch.nn.modules.conv.___torch_mangle_4.Conv2d = prim::GetAttr[name="conv1"](%self.1)
        %weight.5 : Tensor = prim::GetAttr[name="weight"](%conv1.1)
        %14 : Function = prim::Constant[name="cast_device_like"]()
        %input.1 : Tensor = prim::CallFunction(%14, %x, %weight.5)
        %68 : Tensor = prim::CallMethod[name="forward"](%conv1, %input.1)
        %input.5 : Float(1, 20, 28, 28, strides=[15680, 784, 28, 1], requires_grad=1, device=cpu) = aten::relu(%68) # /mnt/xarfuse/uid-20293/a90d1698-seed-nspid4026533681_cgpid21128615-ns-4026533618/torch/nn/functional.py:1406:0
        %69 : Tensor = prim::CallMethod[name="forward"](%conv2, %input.5)
        %55 : Float(1, 20, 24, 24, strides=[11520, 576, 24, 1], requires_grad=1, device=cpu) = aten::relu(%69) # /mnt/xarfuse/uid-20293/a90d1698-seed-nspid4026533681_cgpid21128615-ns-4026533618/torch/nn/functional.py:1406:0
        return (%55)
      # =====================================================
      
      # PyTorch cuda works
      model = copy.deepcopy(model)
      model.to("cuda")
      y = model(x)
      # torchscript cpu works
      y = ts(x)
      # Note that now torchscript cuda works
      ts = ts.to("cuda")
      y = ts(x)
      print(y.device)
      
      # =====================================================
      cuda:0
      # =====================================================
      ```
      
      For D2, this diff creates a `move_tensor_device_same_as_another(A, B)` function to replace `A.to(B.device)`. This diff updates `rcnn.py` and all its utils.
      
      For D2Go, since the exported model becomes device-agnostic, we can remove the "_gpu" from the predictor type.
      
      Update (April 11):
      Add a test to cover tracing on one device and moving the traced model to another device for inference. When a GPU is available, it'll trace on `cuda:0` and run inference on `cpu`, `cuda:0` (and `cuda:N-1` if available).
      
      Summary of the device-related patterns:
      - The usage of `.to(dtype=another_dtype)` won't affect the device.
      - Explicit device casting like `.to(device)` can generally be replaced by `move_device_like`.
      - For creating variables directly on a device (e.g. `torch.zeros`, `torch.arange`), we can replace them with a ScriptModule to avoid first creating on CPU and then moving to the new device.
          - Creating things on the tracing device and then moving to the new device is dangerous, because the tracing device (e.g. `cuda:0`) might not be available (e.g. when running on a CPU-only machine).
          - It's hard to write `image_list.py` in this pattern because the size behaves differently during tracing (int vs. scalar tensor); in this diff we still create on CPU first and then move to the target device.
      
      Reviewed By: tglik
      
      Differential Revision: D35367772
      
      fbshipit-source-id: 02d07e3d96da85f4cfbeb996e3c14c2a6f619beb
  29. 05 Apr, 2022 2 commits
    • support do_postprocess when tracing rcnn model in D2 style · 647a3fdf
      Yanghan Wang authored
      Summary:
      Pull Request resolved: https://github.com/facebookresearch/d2go/pull/200
      
      Currently when exporting the RCNN model, we call it with `self.model.inference(inputs, do_postprocess=False)[0]`, so the output of the exported model is not post-processed, e.g. the mask is in the squared shape. This diff adds the option to include the postprocess in the exported model.
      
      Worth noting that since the input is a single tensor, the post-process doesn't resize the output to the original resolution, and we can't apply the post-process twice to further resize it in the Predictor's PostProcessFunc, so an assertion is added to raise an error in this case. But this is fine for most production use cases where the input is not resized.
      
      Set `RCNN_EXPORT.INCLUDE_POSTPROCESS` to `True` to enable this.
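      
      For example (shown as a sketch of setting the config node in Python):
      ```
      cfg.RCNN_EXPORT.INCLUDE_POSTPROCESS = True  # include the D2-style postprocess in the exported model
      ```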
      
      Reviewed By: tglik
      
      Differential Revision: D34904058
      
      fbshipit-source-id: 65f120eadc9747e9918d26ce0bd7dd265931cfb5
    • refactor create_fake_detection_data_loader · 312c6b62
      Yanghan Wang authored
      Summary:
      Pull Request resolved: https://github.com/facebookresearch/d2go/pull/199
      
      - `create_fake_detection_data_loader` currently doesn't take `cfg` as input; sometimes we need to test augmentations that need a more complicated cfg.
      - The name is a bit bad; rename it to `create_detection_data_loader_on_toy_dataset`.
      - width/height were previously the resized size; we want to change them to the size of the data source (image files) and use `cfg` to control the resized size.
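      
      An illustrative call after the rename (the exact signature is an assumption based on the summary; `cfg` controls the resized size while width/height describe the toy image files):
      ```
      data_loader = create_detection_data_loader_on_toy_dataset(cfg, width=80, height=60, is_train=False)
      batch = next(iter(data_loader))
      ```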
      
      Update V3:
      In V2 there were some test failures; the reason is that V2 builds the data loader (via the GeneralizedRCNN runner) using the actual test config instead of the default config used before this diff + dataset name change. In V3 we use the test's runner instead of the default runner for consistency. This reveals some real bugs that we didn't test before.
      
      Reviewed By: omkar-fb
      
      Differential Revision: D35238890
      
      fbshipit-source-id: 28a6037374e74f452f91b494bd455b38d3a48433
  30. 24 Mar, 2022 1 commit
  31. 12 Jan, 2022 1 commit