"vscode:/vscode.git/clone" did not exist on "8e64140e35c15a626d199a0dfdd9cc7f956ab6cc"
  1. 08 Nov, 2022 1 commit
    • Enable log probs input for rnnt loss (#2798) · ca478823
      Caroline Chen authored
      Summary:
      Add a `fused_log_softmax` argument (default `True`, which matches the current behavior) to the rnnt loss.
      
      If it is set to `False`, call `log_softmax` on the logits before passing them to the rnnt loss function.
      
      The following should produce the same output:
      ```
      rnnt_loss(logits, targets, logit_lengths, target_lengths, fused_log_softmax=True)
      ```
      
      ```
      log_probs = torch.nn.functional.log_softmax(logits, dim=-1)
      rnnt_loss(log_probs, targets, logit_lengths, target_lengths, fused_log_softmax=False)
      ```
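      
      For instance, a minimal end-to-end check of the equivalence could look like the sketch below (the tensor shapes and dtypes are illustrative assumptions based on the documented `rnnt_loss` API, not part of this PR):
      
      ```
      import torch
      import torchaudio.functional as F
      
      batch, time, target_len, num_classes = 2, 10, 5, 20
      # logits: (batch, max seq length, max target length + 1, num classes)
      logits = torch.randn(batch, time, target_len + 1, num_classes)
      targets = torch.randint(0, num_classes - 1, (batch, target_len), dtype=torch.int32)
      logit_lengths = torch.full((batch,), time, dtype=torch.int32)
      target_lengths = torch.full((batch,), target_len, dtype=torch.int32)
      
      fused = F.rnnt_loss(logits, targets, logit_lengths, target_lengths, fused_log_softmax=True)
      log_probs = torch.nn.functional.log_softmax(logits, dim=-1)
      unfused = F.rnnt_loss(log_probs, targets, logit_lengths, target_lengths, fused_log_softmax=False)
      assert torch.allclose(fused, unfused)
      ```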
      
      Testing: unit tests, plus verifying that the conformer rnnt recipe produces the same results.
      
      Pull Request resolved: https://github.com/pytorch/audio/pull/2798
      
      Reviewed By: xiaohui-zhang
      
      Differential Revision: D41083523
      
      Pulled By: carolineechen
      
      fbshipit-source-id: e15442ceed1f461bbf06b724aa0561ff8827ad61
  2. 03 Aug, 2022 1 commit
    • An implementation of the ITU-R BS.1770-4 loudness recommendation (#2472) · 946b180a
      bshall authored
      Summary:
      I took a stab at implementing the ITU-R BS.1770-4 loudness recommendation (closes https://github.com/pytorch/audio/issues/1205). To give some more details (a usage sketch follows the list):
      - I've implemented K-weighting following csteinmetz1 instead of BrechtDeMan since it fits well with torchaudio's already implemented filters (`treble_biquad` and `highpass_biquad`).
      - I've added four audio files to test compliance with the recommendation. These are linked in [this pdf](https://www.itu.int/dms_pub/itu-r/opb/rep/R-REP-BS.2217-2-2016-PDF-E.pdf). There are many more test files there but I didn't want to bog down the assets directory with too many files. Let me know if I should add or remove anything.
      - I've kept many of the constants internal to the function (e.g. the block duration, overlap, and the absolute threshold gamma). I'm not sure whether these should be exposed in the signature.
      - I've implemented support for up to 5 channels (following both csteinmetz1 and BrechtDeMan). The recommendation includes weights for up to 24 channels. Is there any convention for how many channels to support?
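      
      As a rough illustration, a minimal usage sketch, assuming the function is exposed as `torchaudio.functional.loudness(waveform, sample_rate)` and with `"speech.wav"` standing in for any local audio file:
      
      ```
      import torchaudio
      import torchaudio.functional as F
      
      waveform, sample_rate = torchaudio.load("speech.wav")  # placeholder file
      loudness_lkfs = F.loudness(waveform, sample_rate)       # integrated loudness in LKFS
      print(f"Integrated loudness: {loudness_lkfs.item():.2f} LKFS")
      ```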
      
      I hope this is helpful! Looking forward to hearing from you.
      
      Pull Request resolved: https://github.com/pytorch/audio/pull/2472
      
      Reviewed By: hwangjeff
      
      Differential Revision: D38389155
      
      Pulled By: carolineechen
      
      fbshipit-source-id: fcc86d864c04ab2bedaa9acd941ebc4478ca6904
  3. 28 Jul, 2022 1 commit
  4. 23 Jun, 2022 1 commit
  5. 03 Jun, 2022 1 commit
  6. 01 Jun, 2022 1 commit
  7. 23 May, 2022 1 commit
    • Add assertion checks to multi-channel functions (#2401) · 38e530d7
      Zhaoheng Ni authored
      Summary:
      - The multi-channel functions only support complex-valued tensors for the spectrogram and PSD matrices.
      - The mask can be real-valued or complex-valued, hence there is no explicit assertion for the mask.
      - The shapes of the input Tensors need to be verified before the computation. For example, the shape of the PSD matrix must be `(..., freq, channel, channel)`, the shape of the mask must be `(..., freq, time)`, etc. (see the shape sketch after this list).
      - The autograd unittest of `apply_beamforming` had wrong dimensions for `beamform_weights`, which was detected by the assertion check. Fix it in this PR.
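      
      A minimal sketch of the shapes these assertions enforce, assuming the `psd`, `mvdr_weights_souden`, and `apply_beamforming` functions in `torchaudio.functional` (variable names and sizes are illustrative):
      
      ```
      import torch
      import torchaudio.functional as F
      
      channel, freq, time = 4, 257, 100
      specgram = torch.randn(channel, freq, time, dtype=torch.cdouble)  # complex spectrogram
      mask = torch.rand(freq, time)                                     # real-valued mask is allowed
      
      psd_speech = F.psd(specgram, mask)      # (..., freq, channel, channel), complex
      psd_noise = F.psd(specgram, 1 - mask)   # (..., freq, channel, channel), complex
      weights = F.mvdr_weights_souden(psd_speech, psd_noise, reference_channel=0)  # (..., freq, channel)
      enhanced = F.apply_beamforming(weights, specgram)                            # (..., freq, time)
      ```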
      
      Pull Request resolved: https://github.com/pytorch/audio/pull/2401
      
      Reviewed By: carolineechen
      
      Differential Revision: D36597689
      
      Pulled By: nateanl
      
      fbshipit-source-id: 6ad1adebe3726851cc1d865650bdf177a98985f6
  8. 15 May, 2022 1 commit
    • [codemod][usort] apply import merging for fbcode (8 of 11) · d62875cc
      John Reese authored
      Summary:
      Applies new import merging and sorting from µsort v1.0.
      
      When merging imports, µsort will make a best-effort to move associated
      comments to match merged elements, but there are known limitations due to
      the dynamic nature of Python and developer tooling. These changes should
      not produce any dangerous runtime changes, but may require touch-ups to
      satisfy linters and other tooling.
      
      Note that µsort uses case-insensitive, lexicographical sorting, which
      results in a different ordering compared to isort. This provides a more
      consistent sorting order, matching the case-insensitive order used when
      sorting import statements by module name, and ensures that "frog", "FROG",
      and "Frog" always sort next to each other.
      
      For details on µsort's sorting and merging semantics, see the user guide:
      https://usort.readthedocs.io/en/stable/guide.html#sorting
      
      Reviewed By: lisroach
      
      Differential Revision: D36402214
      
      fbshipit-source-id: b641bfa9d46242188524d4ae2c44998922a62b4c
  9. 10 May, 2022 1 commit
  10. 08 Apr, 2022 1 commit
    • Add devices/properties badges (#2321) · 72ae755a
      moto authored
      Summary:
      Add badges of supported properties and devices to functionals and transforms.
      
      This commit adds `.. devices::` and `.. properties::` directives to sphinx.
      
      APIs with these directives get badges (based on shields.io) that link to the
      page describing these features.
      
      This is a continuation of https://github.com/pytorch/audio/issues/2316.
      Dtypes are excluded for a later improvement, and badges are added to most of the functionals and transforms.
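      
      For illustration, such directives might appear in a docstring roughly like this (the directive arguments shown here are an assumption, not taken from this commit):
      
      ```
      def gain(waveform, gain_db: float = 1.0):
          r"""Apply amplification or attenuation to the whole waveform.
      
          .. devices:: CPU CUDA
      
          .. properties:: Autograd TorchScript
          """
      ```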
      
      Pull Request resolved: https://github.com/pytorch/audio/pull/2321
      
      Reviewed By: hwangjeff
      
      Differential Revision: D35489063
      
      Pulled By: mthrok
      
      fbshipit-source-id: f68a70ebb22df29d5e9bd171273bd19007a81762
  11. 26 Feb, 2022 1 commit
    • Add apply_beamforming to torchaudio.functional (#2232) · 9c56ffb4
      Zhaoheng Ni authored
      Summary:
      This PR adds the ``apply_beamforming`` method to ``torchaudio.functional``.
      The method applies the beamforming weights to the multi-channel noisy spectrum to obtain the single-channel enhanced spectrum.
      The input arguments are the complex-valued beamforming weight Tensor and the multi-channel noisy spectrum.
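      
      A minimal call sketch (shapes are illustrative; leading batch dimensions are optional):
      
      ```
      import torch
      import torchaudio.functional as F
      
      channel, freq, time = 4, 257, 100
      beamform_weights = torch.randn(freq, channel, dtype=torch.cfloat)   # (..., freq, channel)
      specgram = torch.randn(channel, freq, time, dtype=torch.cfloat)     # (..., channel, freq, time)
      
      enhanced = F.apply_beamforming(beamform_weights, specgram)          # (..., freq, time)
      # Conceptually a per-frequency weighted sum over channels:
      #   Y(f, t) = sum_c conj(w(f, c)) * X(c, f, t)
      reference = torch.einsum("...fc,...cft->...ft", beamform_weights.conj(), specgram)
      ```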
      
      Pull Request resolved: https://github.com/pytorch/audio/pull/2232
      
      Reviewed By: mthrok
      
      Differential Revision: D34474561
      
      Pulled By: nateanl
      
      fbshipit-source-id: 2910251a8f111e65375dfb50495b6a415113f06d
  12. 25 Feb, 2022 5 commits
  13. 17 Feb, 2022 1 commit
    • Refactor batch consistency test in functional (#2245) · 9cf59e75
      Zhaoheng Ni authored
      Summary:
      In the batch_consistency tests, the `assert_batch_consistency` method only accepts a single Tensor, which does not work for some functions. For example, `lfilter` and `filtfilt` require three Tensors as arguments, hence they cannot use `assert_batch_consistency` in the tests.
      This PR refactors the helper to accept a tuple of Tensors that have a `batch` dimension. The other arguments, such as `int` or `str`, are given as `*args` after the tuple.
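      
      A rough sketch of what such a refactored helper could look like (hypothetical code, not the actual test helper):
      
      ```
      import torch
      
      def assert_batch_consistency(func, inputs, *args):
          """Hypothetical helper: `inputs` is a tuple of Tensors whose first dim is `batch`."""
          batch_size = inputs[0].size(0)
          batched_result = func(*inputs, *args)
          for i in range(batch_size):
              item_result = func(*(x[i] for x in inputs), *args)
              torch.testing.assert_close(batched_result[i], item_result)
      ```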
      
      Pull Request resolved: https://github.com/pytorch/audio/pull/2245
      
      Reviewed By: mthrok
      
      Differential Revision: D34273035
      
      Pulled By: nateanl
      
      fbshipit-source-id: 0096b4f062fb4e983818e5374bed6efc7b15b056
  14. 16 Feb, 2022 2 commits
    • Refactor torchscript consistency test in functional (#2246) · 87d79889
      Zhaoheng Ni authored
      Summary:
      In the torchscript_consistency tests, the `func` in each test method accepts only one `tensor` argument; the other arguments of the `F.xyz` function under test have to be defined inside `func`. If `F.xyz` takes no `Tensor` argument at all, the tests use a `dummy` tensor that is never used. In this PR, we refactor ``_assert_consistency`` and ``_assert_consistency_complex`` to accept a tuple of inputs instead of just one `tensor`.
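      
      A rough sketch of the refactored pattern (hypothetical code, not the actual test helper):
      
      ```
      import torch
      
      def _assert_consistency(func, inputs):
          """Hypothetical helper: compare eager and TorchScript outputs for a tuple of inputs."""
          ts_func = torch.jit.script(func)
          expected = func(*inputs)
          actual = ts_func(*inputs)
          torch.testing.assert_close(actual, expected)
      ```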
      
      Pull Request resolved: https://github.com/pytorch/audio/pull/2246
      
      Reviewed By: carolineechen
      
      Differential Revision: D34273057
      
      Pulled By: nateanl
      
      fbshipit-source-id: a3900edb3b2c58638e513e1490279d771ebc3d0b
    • Add complex dtype support in functional autograd test (#2244) · eeba91dc
      Zhaoheng Ni authored
      Summary:
      In the autograd tests, to guarantee precision, Tensors are converted to `torch.float64` if they are real. However, complex dtypes are not handled. This PR adds `self.complex_dtype` support for the inputs.
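      
      A minimal sketch of the dtype handling this implies (hypothetical helper, not the actual test code):
      
      ```
      import torch
      
      def _promote_for_autograd(tensor: torch.Tensor) -> torch.Tensor:
          """Hypothetical helper: promote to double precision, keeping complex inputs complex."""
          dtype = torch.cdouble if tensor.is_complex() else torch.double
          return tensor.to(dtype).requires_grad_(True)
      ```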
      
      Pull Request resolved: https://github.com/pytorch/audio/pull/2244
      
      Reviewed By: mthrok
      
      Differential Revision: D34272998
      
      Pulled By: nateanl
      
      fbshipit-source-id: e8698a74d7b8d99ee0fcb5f5cb5f2ffc8c80b9b5
  15. 09 Feb, 2022 1 commit
    • Fix librosa calls (#2208) · e5d567c9
      hwangjeff authored
      Summary:
      Yesterday's release of librosa 0.9.0 made arguments keyword-only and changed the default padding from "reflect" to "zero" for some functions. This PR adjusts the call sites in our tutorials and tests accordingly.
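      
      For example, arguments now have to be passed by keyword (illustrative snippet; parameter values are placeholders):
      
      ```
      import numpy as np
      import librosa
      
      y = np.random.randn(22050).astype(np.float32)
      # librosa >= 0.9.0: pass arguments by keyword; default padding for some
      # functions also changed from "reflect" to zero padding.
      mel = librosa.feature.melspectrogram(y=y, sr=22050, n_fft=1024, hop_length=256)
      ```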
      
      Pull Request resolved: https://github.com/pytorch/audio/pull/2208
      
      Reviewed By: mthrok
      
      Differential Revision: D34099793
      
      Pulled By: hwangjeff
      
      fbshipit-source-id: 4e2642cdda8aae6d0a928befaf1bbb3873d229bc
  16. 29 Dec, 2021 1 commit
  17. 23 Dec, 2021 1 commit
  18. 04 Nov, 2021 1 commit
  19. 03 Nov, 2021 2 commits
  20. 28 Oct, 2021 1 commit
  21. 13 Oct, 2021 1 commit
  22. 02 Sep, 2021 1 commit
  23. 27 Aug, 2021 1 commit
  24. 20 Aug, 2021 1 commit
  25. 19 Aug, 2021 1 commit
  26. 11 Aug, 2021 1 commit
  27. 10 Aug, 2021 1 commit
  28. 02 Aug, 2021 1 commit
  29. 29 Jul, 2021 1 commit
  30. 21 Jul, 2021 1 commit
  31. 16 Jul, 2021 1 commit
  32. 25 Jun, 2021 1 commit
  33. 04 Jun, 2021 2 commits