1. 26 Apr, 2022 1 commit
  2. 25 Apr, 2022 2 commits
  3. 22 Apr, 2022 1 commit
  4. 21 Apr, 2022 2 commits
• use existing qconfig to create learnable qconfig · 9584b934
  Yanghan Wang authored
      Summary:
      Pull Request resolved: https://github.com/facebookresearch/d2go/pull/215
      
      Follow up the comment in D35631192 (https://github.com/facebookresearch/d2go/commit/3204f147d67fb2ce7ac2600c46708195c15bfa3a).
      
The current `get_learnable_qat_qconfig` implementation mimics the default get-qconfig functions, as commented ("follow `default_per_channel_weight_fake_quant`", etc.). Instead of creating a custom qconfig from scratch, this diff changes it to convert an existing qconfig to a learnable one, so that the process is transparent to orthogonal changes to the qconfig (e.g. a symmetric qscheme or a new backend).
      
The following shows the difference between the learnable and non-learnable `QConfig` for `qnnpack` and `fbgemm`; the actual difference is just adding `use_grad_scaling=True` and changing the FakeQuant type from `FusedMovingAvgObsFakeQuantize` to `_LearnableFakeQuantize`. (It may be clearer to copy the text into an editor and compare side-by-side.)
```
      qat_utils.get_learnable_qat_qconfig("qnnpack")
      QConfig(
      	activation=functools.partial(
      		<class 'torch.ao.quantization._learnable_fake_quantize._LearnableFakeQuantize'>,
      		observer=<class 'torch.ao.quantization.observer.MovingAverageMinMaxObserver'>,
      		quant_min=0,
      		quant_max=255,
      		use_grad_scaling=True,
      		reduce_range=False
      	){},
      	weight=functools.partial(
      		<class 'torch.ao.quantization._learnable_fake_quantize._LearnableFakeQuantize'>,
      		observer=<class 'torch.ao.quantization.observer.MovingAverageMinMaxObserver'>,
      		quant_min=-128,
      		quant_max=127,
      		dtype=torch.qint8,
      		use_grad_scaling=True,
      		qscheme=torch.per_tensor_symmetric,
      		reduce_range=False
      	){}
      )
      
      torch.ao.quantization.get_default_qat_qconfig("qnnpack")
      QConfig(
      	activation=functools.partial(
      		<class 'torch.ao.quantization.fake_quantize.FusedMovingAvgObsFakeQuantize'>,
      		observer=<class 'torch.ao.quantization.observer.MovingAverageMinMaxObserver'>,
      		quant_min=0,
      		quant_max=255,
      
      		reduce_range=False
      	){},
      	weight=functools.partial(
      		<class 'torch.ao.quantization.fake_quantize.FusedMovingAvgObsFakeQuantize'>,
      		observer=<class 'torch.ao.quantization.observer.MovingAverageMinMaxObserver'>,
      		quant_min=-128,
      		quant_max=127,
      		dtype=torch.qint8,
      
      		qscheme=torch.per_tensor_symmetric,
      
      	){}
      )
      
      qat_utils.get_learnable_qat_qconfig("fbgemm")
      QConfig(
      	activation=functools.partial(
      		<class 'torch.ao.quantization._learnable_fake_quantize._LearnableFakeQuantize'>,
      		observer=<class 'torch.ao.quantization.observer.MovingAverageMinMaxObserver'>,
      		quant_min=0,
      		quant_max=255,
      		use_grad_scaling=True,
      		reduce_range=True
      	){},
      	weight=functools.partial(
      		<class 'torch.ao.quantization._learnable_fake_quantize._LearnableFakeQuantize'>,
      		observer=<class 'torch.ao.quantization.observer.MovingAveragePerChannelMinMaxObserver'>,
      		quant_min=-128,
      		quant_max=127,
      		dtype=torch.qint8,
      		use_grad_scaling=True,
      		qscheme=torch.per_channel_symmetric,
      		reduce_range=False,
      		ch_axis=0
      	){}
      )
      
      torch.ao.quantization.get_default_qat_qconfig("fbgemm")
      QConfig(
      	activation=functools.partial(
      		<class 'torch.ao.quantization.fake_quantize.FusedMovingAvgObsFakeQuantize'>,
      		observer=<class 'torch.ao.quantization.observer.MovingAverageMinMaxObserver'>,
      		quant_min=0,
      		quant_max=255,
      
      		reduce_range=True
      	){},
      	weight=functools.partial(
      		<class 'torch.ao.quantization.fake_quantize.FusedMovingAvgObsFakeQuantize'>,
      		observer=<class 'torch.ao.quantization.observer.MovingAveragePerChannelMinMaxObserver'>,
      		quant_min=-128,
      		quant_max=127,
      		dtype=torch.qint8,
      
      		qscheme=torch.per_channel_symmetric
      
      	){}
      )
      ```
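
As a rough sketch of the approach (not the actual d2go implementation; it leans on the private `_LearnableFakeQuantize` class and the `.p` partial that `with_args` wraps, so treat it as illustration only):
```
from torch.ao.quantization import QConfig, get_default_qat_qconfig
from torch.ao.quantization._learnable_fake_quantize import _LearnableFakeQuantize

def to_learnable(qconfig: QConfig) -> QConfig:
    def convert(fake_quant_ctr):
        # Reuse the existing kwargs (observer, quant_min/max, dtype, qscheme,
        # reduce_range, ...) and only swap the FakeQuantize class while adding
        # use_grad_scaling=True, matching the diff shown above.
        kwargs = dict(fake_quant_ctr.p.keywords)
        kwargs["use_grad_scaling"] = True
        return _LearnableFakeQuantize.with_args(**kwargs)

    return QConfig(activation=convert(qconfig.activation),
                   weight=convert(qconfig.weight))

learnable_qconfig = to_learnable(get_default_qat_qconfig("qnnpack"))
```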
      
      Reviewed By: kimishpatel
      
      Differential Revision: D35772970
      
      fbshipit-source-id: 0be8057e4f7ce3b315bd66d77aa88b733b676223
• Fix Metal optimized models' augment with bundled inputs · c055a84f
  Owen Wang authored
      Summary:
      Pull Request resolved: https://github.com/facebookresearch/d2go/pull/216
      
The sanity check after augmenting with bundled inputs fails unless the tensor is moved to the correct backend.

Fix a warning where "-metal" or "-vulkan" is not correctly removed from the string.

Temporary fix: remove the call to augment with bundled inputs, because the Metal backend for iOS GPU is not available on the devserver. The true fix to unblock bundled inputs will be to add an input-preformatting op into Metal models that converts the input to a Metal tensor (a no-op if it is already a Metal tensor). This is outside the scope of this diff.
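
For reference, a hedged sketch of the bundled-inputs flow involved (using the public `torch.utils.bundled_inputs` API; the actual d2go call site may differ):
```
import torch
from torch.utils.bundled_inputs import augment_model_with_bundled_inputs

def add_and_check_bundled_inputs(scripted_model, example: torch.Tensor):
    augment_model_with_bundled_inputs(scripted_model, inputs=[(example,)])
    # The sanity check replays the bundled inputs through the model; for a
    # Metal-optimized model this fails unless the input is first converted
    # to a Metal tensor.
    for args in scripted_model.get_all_bundled_inputs():
        scripted_model(*args)
```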
      
      Reviewed By: ymao1993
      
      Differential Revision: D35574266
      
      fbshipit-source-id: 9f7b5c72dff2e3cf0eddf871379b079a1ec658ff
  5. 19 Apr, 2022 2 commits
• consolidate the creation of qconfig · 3204f147
  Yanghan Wang authored
      Summary: Pull Request resolved: https://github.com/facebookresearch/d2go/pull/210
      
      Reviewed By: kimishpatel
      
      Differential Revision: D35631192
      
      fbshipit-source-id: a713d86734c6937c16c7ced705171db9ea2f0894
• apply import merging for fbcode/mobile-vision/d2go (3 of 4) · ae2f2f64
  Lisa Roach authored
      Summary:
      Pull Request resolved: https://github.com/facebookresearch/d2go/pull/212
      
      Applies new import merging and sorting from µsort v1.0.
      
      When merging imports, µsort will make a best-effort to move associated
      comments to match merged elements, but there are known limitations due to
the dynamic nature of Python and developer tooling. These changes should
      not produce any dangerous runtime changes, but may require touch-ups to
      satisfy linters and other tooling.
      
      Note that µsort uses case-insensitive, lexicographical sorting, which
      results in a different ordering compared to isort. This provides a more
      consistent sorting order, matching the case-insensitive order used when
      sorting import statements by module name, and ensures that "frog", "FROG",
      and "Frog" always sort next to each other.
      
      For details on µsort's sorting and merging semantics, see the user guide:
      https://usort.readthedocs.io/en/stable/guide.html#sorting
      
      Reviewed By: jreese, wat3rBro
      
      Differential Revision: D35559673
      
      fbshipit-source-id: feeae2465ac2b62c44a0e92dc566e9a386567c9d
  6. 15 Apr, 2022 2 commits
• Align_corners cannot be set when mode is nearest · d4c58688
  Zecheng He authored
      Summary:
      Pull Request resolved: https://github.com/facebookresearch/d2go/pull/211
      
`align_corners` cannot be set when the mode is nearest. Change the default to None.
      
      Reviewed By: wat3rBro
      
      Differential Revision: D35681284
      
      fbshipit-source-id: 23c57112e588c0b4ac5facfd61a7af0aa8a07ef0
• enable moving traced model between devices · 2235f180
  Yanghan Wang authored
      Summary:
      X-link: https://github.com/facebookresearch/detectron2/pull/4132
      
      X-link: https://github.com/fairinternal/detectron2/pull/568
      
      Pull Request resolved: https://github.com/facebookresearch/d2go/pull/203
      
      For full discussion: https://fb.workplace.com/groups/1405155842844877/posts/5744470455580039
      
Tracing the `.to(device)` call causes problems when moving the traced torchscript to another device (e.g. from CPU to GPU, or even from `cuda:0` to `cuda:1`). The reason is that `device` is not a `torch.Tensor`, so the tracer just hardcodes its value during tracing. The solution is to script the casting operation.
      
      Here's the code snippet illustrating this:
      ```
      # define the MyModel similar to GeneralizedRCNN, which casts the input to the model's device
      class MyModel(nn.Module):
          def __init__(self):
              super().__init__()
      
              self.conv1 = nn.Conv2d(3, 20, 5)
              self.conv2 = nn.Conv2d(20, 20, 5)
      
          def forward(self, x):
              # cast the input to the same device as this model, this makes it possible to
              # take a cpu tensor as input when the model is on GPU.
              x = x.to(self.conv1.weight.device)
      
              x = F.relu(self.conv1(x))
              return F.relu(self.conv2(x))
      
      # export the model by tracing
      model = MyModel()
      x = torch.zeros([1, 3, 32, 32])
      ts = torch.jit.trace(model, x)
      print(ts.graph)
      
      # =====================================================
      graph(%self.1 : __torch__.MyModel,
            %x : Float(1, 3, 32, 32, strides=[3072, 1024, 32, 1], requires_grad=0, device=cpu)):
        %conv2 : __torch__.torch.nn.modules.conv.___torch_mangle_0.Conv2d = prim::GetAttr[name="conv2"](%self.1)
        %conv1 : __torch__.torch.nn.modules.conv.Conv2d = prim::GetAttr[name="conv1"](%self.1)
        %14 : int = prim::Constant[value=6]() # <ipython-input-2-5abde0efc36f>:11:0
        %15 : int = prim::Constant[value=0]() # <ipython-input-2-5abde0efc36f>:11:0
        %16 : Device = prim::Constant[value="cpu"]() # <ipython-input-2-5abde0efc36f>:11:0
        %17 : NoneType = prim::Constant()
        %18 : bool = prim::Constant[value=0]() # <ipython-input-2-5abde0efc36f>:11:0
        %19 : bool = prim::Constant[value=0]() # <ipython-input-2-5abde0efc36f>:11:0
        %20 : NoneType = prim::Constant()
        %input.1 : Float(1, 3, 32, 32, strides=[3072, 1024, 32, 1], requires_grad=0, device=cpu) = aten::to(%x, %14, %15, %16, %17, %18, %19, %20) # <ipython-input-2-5abde0efc36f>:11:0
        %72 : Tensor = prim::CallMethod[name="forward"](%conv1, %input.1)
        %input.5 : Float(1, 20, 28, 28, strides=[15680, 784, 28, 1], requires_grad=1, device=cpu) = aten::relu(%72) # /mnt/xarfuse/uid-20293/a90d1698-seed-nspid4026533681_cgpid21128615-ns-4026533618/torch/nn/functional.py:1406:0
        %73 : Tensor = prim::CallMethod[name="forward"](%conv2, %input.5)
        %61 : Float(1, 20, 24, 24, strides=[11520, 576, 24, 1], requires_grad=1, device=cpu) = aten::relu(%73) # /mnt/xarfuse/uid-20293/a90d1698-seed-nspid4026533681_cgpid21128615-ns-4026533618/torch/nn/functional.py:1406:0
        return (%61)
      # =====================================================
      
      # PyTorch cuda works
      model = copy.deepcopy(model)
      model.to("cuda")
      y = model(x)
      # torchscript cpu works
      y = ts(x)
      # torchscript cuda doesn't work
      ts = ts.to("cuda")
      y = ts(x)
      
      # =====================================================
      RuntimeError: Input type (torch.FloatTensor) and weight type (torch.cuda.FloatTensor) should be the same or input should be a MKLDNN tensor and weight is a dense tensor
      ---------------------------------------------------------------------------
      RuntimeError                              Traceback (most recent call last)
      <ipython-input-4-2aece3ad6c9a> in <module>
            7 # torchscript cuda doesn't work
            8 ts = ts.to("cuda")
      ----> 9 y = ts(x)
      /mnt/xarfuse/uid-20293/a90d1698-seed-nspid4026533681_cgpid21128615-ns-4026533618/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs)
         1108         if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks
         1109                 or _global_forward_hooks or _global_forward_pre_hooks):
      -> 1110             return forward_call(*input, **kwargs)
         1111         # Do not call functions when jit is used
         1112         full_backward_hooks, non_full_backward_hooks = [], []
      RuntimeError: The following operation failed in the TorchScript interpreter.
      # =====================================================
      
# One solution is scripting the casting instead of tracing it; the following
# code demonstrates how, using mixed scripting/tracing.
@torch.jit.script_if_tracing
      def cast_device_like(src: torch.Tensor, dst: torch.Tensor) -> torch.Tensor:
          return src.to(dst.device)
      
      class MyModel2(nn.Module):
          def __init__(self):
              super().__init__()
      
              self.conv1 = nn.Conv2d(3, 20, 5)
              self.conv2 = nn.Conv2d(20, 20, 5)
      
          def forward(self, x):
              # cast the input to the same device as this model, this makes it possible to
              # take a cpu tensor as input when the model is on GPU.
              x = cast_device_like(x, self.conv1.weight)
      
              x = F.relu(self.conv1(x))
              return F.relu(self.conv2(x))
      
      # export the model by tracing
      model = MyModel2()
      x = torch.zeros([1, 3, 32, 32])
      ts = torch.jit.trace(model, x)
      print(ts.graph)
      
      # =====================================================
      graph(%self.1 : __torch__.MyModel2,
            %x : Float(1, 3, 32, 32, strides=[3072, 1024, 32, 1], requires_grad=0, device=cpu)):
        %conv2 : __torch__.torch.nn.modules.conv.___torch_mangle_5.Conv2d = prim::GetAttr[name="conv2"](%self.1)
        %conv1 : __torch__.torch.nn.modules.conv.___torch_mangle_4.Conv2d = prim::GetAttr[name="conv1"](%self.1)
        %conv1.1 : __torch__.torch.nn.modules.conv.___torch_mangle_4.Conv2d = prim::GetAttr[name="conv1"](%self.1)
        %weight.5 : Tensor = prim::GetAttr[name="weight"](%conv1.1)
        %14 : Function = prim::Constant[name="cast_device_like"]()
        %input.1 : Tensor = prim::CallFunction(%14, %x, %weight.5)
        %68 : Tensor = prim::CallMethod[name="forward"](%conv1, %input.1)
        %input.5 : Float(1, 20, 28, 28, strides=[15680, 784, 28, 1], requires_grad=1, device=cpu) = aten::relu(%68) # /mnt/xarfuse/uid-20293/a90d1698-seed-nspid4026533681_cgpid21128615-ns-4026533618/torch/nn/functional.py:1406:0
        %69 : Tensor = prim::CallMethod[name="forward"](%conv2, %input.5)
        %55 : Float(1, 20, 24, 24, strides=[11520, 576, 24, 1], requires_grad=1, device=cpu) = aten::relu(%69) # /mnt/xarfuse/uid-20293/a90d1698-seed-nspid4026533681_cgpid21128615-ns-4026533618/torch/nn/functional.py:1406:0
        return (%55)
      # =====================================================
      
      # PyTorch cuda works
      model = copy.deepcopy(model)
      model.to("cuda")
      y = model(x)
      # torchscript cpu works
      y = ts(x)
      # Note that now torchscript cuda works
      ts = ts.to("cuda")
      y = ts(x)
      print(y.device)
      
      # =====================================================
      cuda:0
      # =====================================================
      ```
      
For D2 (https://github.com/facebookresearch/d2go/commit/87374efb134e539090e0b5c476809dc35bf6aedb), this diff creates a `move_tensor_device_same_as_another(A, B)` function to replace `A.to(B.device)`, and updates `rcnn.py` and all of its utils.
      
For D2Go (https://github.com/facebookresearch/d2go/commit/87374efb134e539090e0b5c476809dc35bf6aedb), since the exported model will become device-agnostic, we can remove the "_gpu" from the predictor-type.
      
      Update (April 11):
      Add test to cover tracing on one device and move traced model to another device for inference. When GPU is available, it'll trace on `cuda:0` and run inference on `cpu`, `cuda:0` (and `cuda:N-1` if available).
      
Summary of the device-related patterns (a sketch follows this list):
- The usage of `.to(dtype=another_dtype)` won't affect the device.
- Explicit device casting like `.to(device)` can generally be replaced by `move_device_like`.
- For creating variables directly on a device (e.g. `torch.zeros`, `torch.arange`), we can replace them with a ScriptModule to avoid first creating on CPU and then moving to the new device.
    - Creating things on the tracing device and then moving to the new device is dangerous, because the tracing device (e.g. `cuda:0`) might not be available (e.g. when running on a CPU-only machine).
    - It's hard to write `image_list.py` in this pattern because the size behaves differently during tracing (int vs. scalar tensor); in this diff we still create on CPU first and then move to the target device.
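
A hedged sketch of the patterns above (the helper mirrors `cast_device_like` from the snippet; the real `move_device_like` in this diff may differ in detail):
```
import torch

@torch.jit.script_if_tracing
def move_device_like(src: torch.Tensor, dst: torch.Tensor) -> torch.Tensor:
    # Scripted rather than traced, so the device is resolved at run time
    # instead of being baked into the graph as a constant.
    return src.to(dst.device)

def arange_on_device_of(n: int, like: torch.Tensor) -> torch.Tensor:
    # The image_list.py-style fallback: create on CPU first, then move to
    # the target device through the scripted helper.
    return move_device_like(torch.arange(n), like)
```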
      
      Reviewed By: tglik
      
      Differential Revision: D35367772
      
      fbshipit-source-id: 02d07e3d96da85f4cfbeb996e3c14c2a6f619beb
  7. 12 Apr, 2022 1 commit
  8. 07 Apr, 2022 1 commit
• add metal GPU to d2go export · 6b4dbb31
  Owen Wang authored
Summary: Allow a string name for the export type to indicate which mobile-opt backend the user wants to trigger.
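
A hedged sketch of the dispatch (`optimize_for_mobile` and its `backend` argument are public PyTorch API; the surrounding d2go wiring is assumed):
```
from torch.utils.mobile_optimizer import optimize_for_mobile

def optimize_for(scripted_model, backend: str = "cpu"):
    # optimize_for_mobile accepts backend="CPU", "Vulkan", or "Metal".
    assert backend.lower() in ("cpu", "vulkan", "metal"), backend
    return optimize_for_mobile(scripted_model, backend=backend)
```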
      
      Reviewed By: wat3rBro
      
      Differential Revision: D35375928
      
      fbshipit-source-id: dc3f91564681344e1d43862423ab5dc63b6644d3
  9. 05 Apr, 2022 2 commits
• support do_postprocess when tracing rcnn model in D2 style · 647a3fdf
  Yanghan Wang authored
      Summary:
      Pull Request resolved: https://github.com/facebookresearch/d2go/pull/200
      
Currently when exporting the RCNN model, we call it with `self.model.inference(inputs, do_postprocess=False)[0]`, so the output of the exported model is not post-processed, e.g. the mask is in square shape. This diff adds the option to include the post-processing in the exported model.

Worth noting: since the input is a single tensor, the post-process doesn't resize the output to the original resolution, and we can't apply the post-process twice to further resize it in the Predictor's PostProcessFunc; an assertion is added to raise an error in this case. But this is fine for most production use cases where the input is not resized.
      
      Set `RCNN_EXPORT.INCLUDE_POSTPROCESS` to `True` to enable this.
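
For example (a hedged sketch; the runner import path is assumed):
```
from d2go.runner import GeneralizedRCNNRunner

cfg = GeneralizedRCNNRunner().get_default_cfg()
cfg.RCNN_EXPORT.INCLUDE_POSTPROCESS = True  # include post-process in the exported model
```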
      
      Reviewed By: tglik
      
      Differential Revision: D34904058
      
      fbshipit-source-id: 65f120eadc9747e9918d26ce0bd7dd265931cfb5
• refactor create_fake_detection_data_loader · 312c6b62
  Yanghan Wang authored
      Summary:
      Pull Request resolved: https://github.com/facebookresearch/d2go/pull/199
      
- `create_fake_detection_data_loader` currently doesn't take `cfg` as input; sometimes we need to test augmentations that require a more complicated cfg.
- The name is a bit misleading; rename it to `create_detection_data_loader_on_toy_dataset`.
- width/height were previously the resized size; we change them to the size of the data source (image files) and use `cfg` to control the resized size.
      
      Update V3:
In V2 there were some test failures; the reason is that V2 builds the data loader (via the GeneralizedRCNN runner) using the actual test config, instead of the default config used before this diff plus the dataset name change. In V3 we use the test's runner instead of the default runner for consistency. This reveals some real bugs that we didn't test before.
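
A hedged usage sketch (the module paths and exact signature are assumptions):
```
from d2go.runner import GeneralizedRCNNRunner  # assumed import path
from d2go.utils.testing.data_loader_helper import (  # hypothetical module path
    create_detection_data_loader_on_toy_dataset,
)

runner = GeneralizedRCNNRunner()
cfg = runner.get_default_cfg()
cfg.INPUT.MIN_SIZE_TEST = 160  # the resized size is now controlled by cfg
# width/height now describe the toy data source (the generated image files)
data_loader = create_detection_data_loader_on_toy_dataset(
    cfg, width=80, height=60, is_train=False
)
```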
      
      Reviewed By: omkar-fb
      
      Differential Revision: D35238890
      
      fbshipit-source-id: 28a6037374e74f452f91b494bd455b38d3a48433
  10. 31 Mar, 2022 1 commit
  11. 30 Mar, 2022 1 commit
  12. 28 Mar, 2022 1 commit
  13. 25 Mar, 2022 1 commit
  14. 24 Mar, 2022 4 commits
  15. 22 Mar, 2022 1 commit
• add .npy file handling in evaluator and visualizer · a0ee06f3
  Owen Wang authored
Summary: Detectron2[Go]'s Visualizer and sem_seg_evaluation are now updated with customization entrypoints for how to read semantic-seg masks. By default, PIL and PNG images are expected. However, some projects' datasets use .npy files, and this customization allows providing an alternate Visualizer and evaluation function for reading them.
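
A hedged sketch of the kind of reader such an entrypoint could plug in (the function is illustrative, not the actual d2go API):
```
import numpy as np
from PIL import Image

def load_sem_seg_mask(file_name: str) -> np.ndarray:
    # Default behavior reads PNG ground truth via PIL; some datasets store
    # masks as .npy arrays instead.
    if file_name.endswith(".npy"):
        return np.load(file_name)
    return np.asarray(Image.open(file_name))
```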
      
      Reviewed By: newstzpz
      
      Differential Revision: D33434948
      
      fbshipit-source-id: 42af16d6708ffc5b2c03ec8507757313e23c8204
  16. 21 Mar, 2022 1 commit
• rm TARGET in gitignore · 2fe42c47
  Hang Zhang authored
      Summary: rm TARGET in gitignore
      
      Reviewed By: newstzpz
      
      Differential Revision: D35014854
      
      fbshipit-source-id: 4a28f797bd5277eb58df6921f3ae9b7debb65f71
  17. 18 Mar, 2022 1 commit
• d2go/semantic_seg Pre- and PostprocessFunc · 731d98f9
  Owen Wang authored
Summary: Add documentation on the pre- and post-processing functions for segmentation.
      
      Reviewed By: XiaoliangDai
      
      Differential Revision: D34882165
      
      fbshipit-source-id: 375c62d0ad632a40b6557065b3362e333df8c55f
  18. 17 Mar, 2022 1 commit
• misc updates · 4f651f97
  Yanghan Wang authored
      Summary:
- remove the `None` support for `merge_from_list`
- fix logging when initializing diskcache
- don't inherit `_FakeListObj` from `list`, so looping over it raises an error (a sketch follows below)
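
A hedged sketch of the `_FakeListObj` change (illustrative; the actual d2go class differs in detail):
```
class _FakeListObj:
    """Placeholder for a list whose values are intentionally not loaded.

    Not inheriting from `list` means iterating or indexing raises an error
    instead of silently yielding nothing.
    """

    def __init__(self, name: str):
        self.name = name

    def __getitem__(self, index):
        raise KeyError(f"{self.name} is a fake list; its values can't be accessed.")
```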
      
      Reviewed By: sstsai-adl
      
      Differential Revision: D34952714
      
      fbshipit-source-id: f636e408b79ed77904f257f189fcef216cb2efbc
  19. 16 Mar, 2022 4 commits
• Cleanup QAT api · 5074692f
  Tsahi Glik authored
      Summary:
      Pull Request resolved: https://github.com/facebookresearch/d2go/pull/190
      
Currently there is some fragmentation in export over how the convert logic is applied in the various modes: `prepare_for_quant_convert` is only called in non-eager modes, and the logic in eager mode is not customizable.
This diff unifies the `prepare_for_quant_convert` code path for all quantization modes.
Also in this diff we rename `_non_qat_to_qat_state_dict_map`, which is used in the QAT checkpointer, to the public var `non_qat_to_qat_state_dict_map`, and allow models to populate it with custom mappings. This is useful in cases where the param mapping between the non-QAT model and the QAT model cannot be inferred definitively (see note in https://fburl.com/code/9rx172ht) and has some ambiguity that can only be resolved by the model logic.
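
A hedged sketch of the unified path (names follow the summary; the real logic covers more modes):
```
import torch

def convert_after_training(model, cfg):
    # Unified across quantization modes: if the model customizes conversion,
    # call prepare_for_quant_convert in eager mode too; otherwise fall back
    # to the default eager-mode convert.
    if hasattr(model, "prepare_for_quant_convert"):
        return model.prepare_for_quant_convert(cfg)
    return torch.ao.quantization.convert(model.eval(), inplace=False)
```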
      
      Reviewed By: wat3rBro
      
      Differential Revision: D34741217
      
      fbshipit-source-id: 38edfec64200ec986ffe4f3d47f527cb6a3fb5e9
• fix the overwrite_opts and handle the new_allowed properly · 2b618211
  Yanghan Wang authored
      Summary:
D33301363 changed the signature of `update_cfg` from `update_cfg(cfg, *updaters)` to `update_cfg(cfg, updaters, new_allowed)`, while the call sites were not updated. E.g. in https://www.internalfb.com/code/fbsource/[9e071979a62ba7fd3d7a71dee1f0809815cbaa43]/fbcode/fblearner/flow/projects/mobile_vision/detectron2go/core/workflow.py?lines=221-225, the `merge_from_list_updater(e2e_train.overwrite_opts),` is then not used.
      
      For the fix:
- Since there are a lot of call sites for `update_cfg`, it's better to keep the original signature.
- ~~The `new_allowed` could actually be passed to each individual updater instead of `update_cfg`, which would also give finer control.~~
- Override `merge_from_list` to make it respect `new_allowed` (a sketch follows this list).
- Preserve `new_allowed` for all nodes (not only the root) in the FLOW Future calls.
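
A hedged sketch of the `merge_from_list` override (built on yacs; not the actual d2go code):
```
from yacs.config import CfgNode as _CfgNode

class CfgNode(_CfgNode):
    def merge_from_list(self, cfg_list):
        # When new_allowed is set, pre-insert unknown keys so the strict
        # base-class merge no longer rejects them.
        assert len(cfg_list) % 2 == 0, cfg_list
        if self.is_new_allowed():
            for full_key, value in zip(cfg_list[0::2], cfg_list[1::2]):
                node = self
                *parents, leaf = full_key.split(".")
                for part in parents:
                    if part not in node:
                        node[part] = _CfgNode(new_allowed=True)
                    node = node[part]
                if leaf not in node:
                    node[leaf] = self._decode_cfg_value(value)
        return super().merge_from_list(cfg_list)
```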
      
      Reviewed By: zhanghang1989
      
      Differential Revision: D34840001
      
      fbshipit-source-id: 14aff6bec75a8b53d4109e6cd73d2494f68863b4
• Implement the user calibration model option 3 under D2GO · b30cf9d2
  Chengjiang Long authored
      Summary:
      Dataloader:
      Rewrote the data loader via build_stream_dataset_reader with the DATASET_DEFINITION of "peopleai_face_eng_inference_results".
      
      User Calibration Model (initial version):
      
```
import torch.nn as nn

# With input of shape (N, 72, 1), Flatten leaves 128 features for the Linear.
model = nn.Sequential(
    nn.Conv1d(72, 128, 1),
    nn.BatchNorm1d(128),
    nn.ReLU(),
    nn.Flatten(),
    nn.Linear(128, 72),
)
```
      
      Differential Revision: D34202009
      
      fbshipit-source-id: 55a2c579e463ed19eac38b5dd12e11c09cbccc11
• Trainer(checkpoint_callback) -> Trainer(enable_checkpointing) · f781223c
  Ananth Subramaniam authored
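
A hedged example of the PyTorch Lightning argument rename this diff tracks (`checkpoint_callback` was deprecated in favor of `enable_checkpointing` in Lightning 1.5):
```
import pytorch_lightning as pl

# before: trainer = pl.Trainer(checkpoint_callback=True)
trainer = pl.Trainer(enable_checkpointing=True)
```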
      Reviewed By: kazhang
      
      Differential Revision: D34669519
      
      fbshipit-source-id: 8cfee968104c823a55960f2730d8e888ac1f298e
  20. 10 Mar, 2022 1 commit
  21. 09 Mar, 2022 1 commit
  22. 08 Mar, 2022 2 commits
  23. 07 Mar, 2022 1 commit
  24. 05 Mar, 2022 2 commits
  25. 04 Mar, 2022 3 commits