1. 25 May, 2022 1 commit
  2. 23 May, 2022 2 commits
• Change reference from modeling to directly qconfig · 5f71fde2
      Miquel Jubert Hermoso authored
      Summary:
      Pull Request resolved: https://github.com/facebookresearch/d2go/pull/239
      
*This diff is part of a stack which has the goal of "buckifying" D2Go core (https://github.com/facebookresearch/d2go/commit/87374efb134e539090e0b5c476809dc35bf6aedb) and enabling autodeps and other tooling. The last diff in the stack introduces the TARGETS. The diffs earlier in the stack are resolving circular dependencies and other issues which prevent the buckification from occurring.*
      
Break the cyclic dependency by referring directly to the source file, instead of to a different file that imports it.
      
      Reviewed By: tglik
      
      Differential Revision: D36166602
      
      fbshipit-source-id: 7deafc02a52ab978a21593184d1b3d3810dc9346
• Reformat d2go_dataset_mapper to break circular dependency · ca094a0a
      Miquel Jubert Hermoso authored
      Summary:
      Pull Request resolved: https://github.com/facebookresearch/d2go/pull/225
      
*This diff is part of a stack which has the goal of "buckifying" D2Go core (https://github.com/facebookresearch/d2go/commit/87374efb134e539090e0b5c476809dc35bf6aedb) and enabling autodeps and other tooling. The last diff in the stack introduces the TARGETS. The diffs earlier in the stack are resolving circular dependencies and other issues which prevent the buckification from occurring.*
      
The overriding pattern applied in d2go_dataset_mapper, with d2go_dataset_mapper_impl and d2go_dataset_mapper_impl_fb, makes it possible for internal users to get the _fb behaviour and external users the regular one, with the same import. But this makes it necessary to put both files in the same dependency.
      
This causes a circular dependency. In general, a reasonable assumption is that fb-only dependencies can import oss dependencies, but not vice versa. In the current setup, grouping both _impl files in one buck target makes that target contain both fb and oss code, and depend on both. This causes circular buck dependency issues.
      
      To fix this, while keeping the transparent import behavior, the following changes are done:
      
1. The implementation file is moved to a directory under .fb. It will have its own target.
2. The non-fb version is renamed to DualInputDatasetMapper, as per Yanghan's suggestion. This simplifies the change, since the fb version's behaviour does not appear to be used at the moment.
3. d2go_dataset_mapper is moved to contain the implementation itself.
      
      Reviewed By: tglik
      
      Differential Revision: D35930993
      
      fbshipit-source-id: ac57337d221df24f53e360d5dcb38ffa4164fef5
  3. 21 May, 2022 1 commit
  4. 20 May, 2022 3 commits
  5. 19 May, 2022 5 commits
  6. 18 May, 2022 1 commit
  7. 17 May, 2022 5 commits
  8. 16 May, 2022 2 commits
  9. 15 May, 2022 1 commit
• apply import merging for fbcode (7 of 11) · b3a9204c
      John Reese authored
      Summary:
      Applies new import merging and sorting from µsort v1.0.
      
      When merging imports, µsort will make a best-effort to move associated
      comments to match merged elements, but there are known limitations due to
the dynamic nature of Python and developer tooling. These changes should
      not produce any dangerous runtime changes, but may require touch-ups to
      satisfy linters and other tooling.
      
      Note that µsort uses case-insensitive, lexicographical sorting, which
      results in a different ordering compared to isort. This provides a more
      consistent sorting order, matching the case-insensitive order used when
      sorting import statements by module name, and ensures that "frog", "FROG",
      and "Frog" always sort next to each other.
      
      For details on µsort's sorting and merging semantics, see the user guide:
      https://usort.readthedocs.io/en/stable/guide.html#sorting
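The sorting behaviour described above can be illustrated in plain Python (this is not µsort itself, just the same case-insensitive lexicographical ordering, e.g. via a case-folded sort key):

```python
# Plain ASCII sorting puts all uppercase names before lowercase ones;
# a case-folded key keeps "frog", "FROG", and "Frog" adjacent.
names = ["FROG", "apple", "frog", "Banana", "Frog", "zebra"]

plain = sorted(names)                           # ASCII order: uppercase first
case_insensitive = sorted(names, key=str.casefold)

print(plain)             # ['Banana', 'FROG', 'Frog', 'apple', 'frog', 'zebra']
print(case_insensitive)  # ['apple', 'Banana', 'FROG', 'frog', 'Frog', 'zebra']
```

Note that names with equal case-folded keys keep their original relative order, since Python's sort is stable.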
      
      Reviewed By: lisroach
      
      Differential Revision: D36402205
      
      fbshipit-source-id: a4efc688d02da80c6e96685aa8eb00411615a366
  10. 14 May, 2022 2 commits
  11. 12 May, 2022 1 commit
• formatting changes from black 22.3.0 · e1623106
      John Reese authored
      Summary:
      Applies the black-fbsource codemod with the new build of pyfmt.
      
      paintitblack
      
      Reviewed By: lisroach
      
      Differential Revision: D36324783
      
      fbshipit-source-id: 280c09e88257e5e569ab729691165d8dedd767bc
  12. 11 May, 2022 1 commit
  13. 10 May, 2022 1 commit
• Fix a bug in export api that prevents setting specific kwargs for different backends · 70f236a6
      Tong Xiao authored
      Summary:
      Pull Request resolved: https://github.com/facebookresearch/d2go/pull/236
      
      When exporting the model to different backend engines, users may set the `model_export_kwargs` for different backends.
      
The torchscript backend needs a placeholder `**export_kwargs` to absorb the kwargs meant for other backends.
      
Frankly speaking, this mechanism of passing the same set of kwargs to every backend is confusing. It would be better to refactor it into a factory pattern with isolated kwargs.
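The shape of the bug can be sketched as follows (function and parameter names are illustrative, not the actual d2go export API): one shared set of `model_export_kwargs` is forwarded to whichever backend runs, so each backend's export function must accept a `**kwargs` placeholder to swallow options belonging to other backends.

```python
def export_torchscript(model, out_path, mobile_optimization=None, **export_kwargs):
    """Hypothetical torchscript export entry point.

    Without the **export_kwargs placeholder, forwarding a kwarg meant
    for another backend (e.g. a vulkan-only option) would raise
    TypeError: unexpected keyword argument.
    """
    return ("torchscript", mobile_optimization)


# One shared kwargs dict mixing options for different backends:
shared_kwargs = {"mobile_optimization": True, "vulkan_texture_format": "rgba"}
result = export_torchscript("model", "/tmp/m.pt", **shared_kwargs)
print(result)  # ('torchscript', True) -- the vulkan-only kwarg is silently absorbed
```

The factory-pattern refactor suggested above would instead route each backend only its own kwargs, making the silent absorption unnecessary.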
      
      Reviewed By: HarounH, wat3rBro
      
      Differential Revision: D36140771
      
      fbshipit-source-id: f327559c1d063c9ce914a9afe2c1acf77c2aa287
  14. 29 Apr, 2022 2 commits
  15. 26 Apr, 2022 4 commits
  16. 25 Apr, 2022 2 commits
  17. 22 Apr, 2022 1 commit
  18. 21 Apr, 2022 2 commits
• use existing qconfig to create learnable qconfig · 9584b934
      Yanghan Wang authored
      Summary:
      Pull Request resolved: https://github.com/facebookresearch/d2go/pull/215
      
      Follow up the comment in D35631192 (https://github.com/facebookresearch/d2go/commit/3204f147d67fb2ce7ac2600c46708195c15bfa3a).
      
The current `get_learnable_qat_qconfig` implementation mimics the default get-qconfig functions, as commented ("follow `default_per_channel_weight_fake_quant`", etc.). Instead of creating a custom qconfig from scratch, this diff changes it to convert an existing qconfig to a learnable one, so that the process is transparent to orthogonal changes to the qconfig (e.g. a symmetric qscheme or a new backend).
      
The following shows the difference between the learnable and non-learnable `QConfig` for `qnnpack` and `fbgemm`; the actual difference is just adding `use_grad_scaling=True` and changing the FakeQuant type from `FusedMovingAvgObsFakeQuantize` to `_LearnableFakeQuantize`. (It may be easier to compare side by side after copying into a text editor.)
```
      qat_utils.get_learnable_qat_qconfig("qnnpack")
      QConfig(
      	activation=functools.partial(
      		<class 'torch.ao.quantization._learnable_fake_quantize._LearnableFakeQuantize'>,
      		observer=<class 'torch.ao.quantization.observer.MovingAverageMinMaxObserver'>,
      		quant_min=0,
      		quant_max=255,
      		use_grad_scaling=True,
      		reduce_range=False
      	){},
      	weight=functools.partial(
      		<class 'torch.ao.quantization._learnable_fake_quantize._LearnableFakeQuantize'>,
      		observer=<class 'torch.ao.quantization.observer.MovingAverageMinMaxObserver'>,
      		quant_min=-128,
      		quant_max=127,
      		dtype=torch.qint8,
      		use_grad_scaling=True,
      		qscheme=torch.per_tensor_symmetric,
      		reduce_range=False
      	){}
      )
      
      torch.ao.quantization.get_default_qat_qconfig("qnnpack")
      QConfig(
      	activation=functools.partial(
      		<class 'torch.ao.quantization.fake_quantize.FusedMovingAvgObsFakeQuantize'>,
      		observer=<class 'torch.ao.quantization.observer.MovingAverageMinMaxObserver'>,
      		quant_min=0,
      		quant_max=255,
      
      		reduce_range=False
      	){},
      	weight=functools.partial(
      		<class 'torch.ao.quantization.fake_quantize.FusedMovingAvgObsFakeQuantize'>,
      		observer=<class 'torch.ao.quantization.observer.MovingAverageMinMaxObserver'>,
      		quant_min=-128,
      		quant_max=127,
      		dtype=torch.qint8,
      
      		qscheme=torch.per_tensor_symmetric,
      
      	){}
      )
      
      qat_utils.get_learnable_qat_qconfig("fbgemm")
      QConfig(
      	activation=functools.partial(
      		<class 'torch.ao.quantization._learnable_fake_quantize._LearnableFakeQuantize'>,
      		observer=<class 'torch.ao.quantization.observer.MovingAverageMinMaxObserver'>,
      		quant_min=0,
      		quant_max=255,
      		use_grad_scaling=True,
      		reduce_range=True
      	){},
      	weight=functools.partial(
      		<class 'torch.ao.quantization._learnable_fake_quantize._LearnableFakeQuantize'>,
      		observer=<class 'torch.ao.quantization.observer.MovingAveragePerChannelMinMaxObserver'>,
      		quant_min=-128,
      		quant_max=127,
      		dtype=torch.qint8,
      		use_grad_scaling=True,
      		qscheme=torch.per_channel_symmetric,
      		reduce_range=False,
      		ch_axis=0
      	){}
      )
      
      torch.ao.quantization.get_default_qat_qconfig("fbgemm")
      QConfig(
      	activation=functools.partial(
      		<class 'torch.ao.quantization.fake_quantize.FusedMovingAvgObsFakeQuantize'>,
      		observer=<class 'torch.ao.quantization.observer.MovingAverageMinMaxObserver'>,
      		quant_min=0,
      		quant_max=255,
      
      		reduce_range=True
      	){},
      	weight=functools.partial(
      		<class 'torch.ao.quantization.fake_quantize.FusedMovingAvgObsFakeQuantize'>,
      		observer=<class 'torch.ao.quantization.observer.MovingAveragePerChannelMinMaxObserver'>,
      		quant_min=-128,
      		quant_max=127,
      		dtype=torch.qint8,
      
      		qscheme=torch.per_channel_symmetric
      
      	){}
      )
      ```
      
      Reviewed By: kimishpatel
      
      Differential Revision: D35772970
      
      fbshipit-source-id: 0be8057e4f7ce3b315bd66d77aa88b733b676223
• Fix Metal optimized models' augment with bundled inputs · c055a84f
      Owen Wang authored
      Summary:
      Pull Request resolved: https://github.com/facebookresearch/d2go/pull/216
      
The sanity check after augmenting with bundled inputs fails unless the tensor is moved to the correct backend.
      
      Fix warning where "-metal" or "-vulkan" is not correctly removed from the string.
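The suffix-handling bug is a common pattern (the sketch below uses hypothetical names, not the diff's actual code): naive `str.replace` removes the substring anywhere in the model name, whereas a backend suffix should only be stripped from the end.

```python
def strip_backend_suffix(name: str) -> str:
    """Strip a trailing backend suffix such as "-metal" or "-vulkan", if present."""
    for suffix in ("-metal", "-vulkan"):
        if name.endswith(suffix):
            return name[: -len(suffix)]
    return name


print(strip_backend_suffix("detector-metal"))   # detector
print(strip_backend_suffix("metal-detector"))   # metal-detector (untouched)
```

(On Python 3.9+, `str.removesuffix` does the same thing in one call.)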
      
Temporary fix: remove the call to augment with bundled inputs, because the Metal backend for iOS GPU is not available on devservers. The true fix to unblock bundled inputs will be to add an input-preformatting op into Metal models to convert inputs to Metal tensors (a no-op if the input is already a Metal tensor). This is outside the scope of this diff.
      
      Reviewed By: ymao1993
      
      Differential Revision: D35574266
      
      fbshipit-source-id: 9f7b5c72dff2e3cf0eddf871379b079a1ec658ff
  19. 19 Apr, 2022 2 commits
• consolidate the creation of qconfig · 3204f147
      Yanghan Wang authored
      Summary: Pull Request resolved: https://github.com/facebookresearch/d2go/pull/210
      
      Reviewed By: kimishpatel
      
      Differential Revision: D35631192
      
      fbshipit-source-id: a713d86734c6937c16c7ced705171db9ea2f0894
• apply import merging for fbcode/mobile-vision/d2go (3 of 4) · ae2f2f64
      Lisa Roach authored
      Summary:
      Pull Request resolved: https://github.com/facebookresearch/d2go/pull/212
      
      Applies new import merging and sorting from µsort v1.0.
      
      When merging imports, µsort will make a best-effort to move associated
      comments to match merged elements, but there are known limitations due to
the dynamic nature of Python and developer tooling. These changes should
      not produce any dangerous runtime changes, but may require touch-ups to
      satisfy linters and other tooling.
      
      Note that µsort uses case-insensitive, lexicographical sorting, which
      results in a different ordering compared to isort. This provides a more
      consistent sorting order, matching the case-insensitive order used when
      sorting import statements by module name, and ensures that "frog", "FROG",
      and "Frog" always sort next to each other.
      
      For details on µsort's sorting and merging semantics, see the user guide:
      https://usort.readthedocs.io/en/stable/guide.html#sorting
      
      Reviewed By: jreese, wat3rBro
      
      Differential Revision: D35559673
      
      fbshipit-source-id: feeae2465ac2b62c44a0e92dc566e9a386567c9d
  20. 15 Apr, 2022 1 commit