"vscode:/vscode.git/clone" did not exist on "538a809f162aa7c2c0bf6e205463c718cd8be216"
  1. 25 Feb, 2022 4 commits
  2. 17 Feb, 2022 1 commit
    • Zhaoheng Ni's avatar
      Refactor batch consistency test in functional (#2245) · 9cf59e75
      Zhaoheng Ni authored
      Summary:
      In batch_consistency tests, the `assert_batch_consistency` method accepts only a single Tensor, which is not applicable to some methods. For example, `lfilter` and `filtfilt` require three Tensors as arguments, hence they cannot use `assert_batch_consistency` in the tests.
      This PR refactors the test to accept a tuple of Tensors that have a batch dimension. The other arguments, such as `int` or `str` values, are given as `*args` after the tuple.
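      The refactored helper might look roughly like this (an illustrative sketch in plain Python, with nested lists standing in for batched Tensors; `assert_batch_consistency` and `scale` here are simplified stand-ins, not the actual torchaudio test code):

```python
# Hypothetical sketch of the refactored helper: instead of a single
# input, it takes a tuple of "batched" inputs plus extra non-batched
# *args, runs the function item-by-item and on the whole batch, and
# compares the two results.
def assert_batch_consistency(func, batched_inputs, *args):
    batch_size = len(batched_inputs[0])
    # Itemwise: apply func to the i-th slice of every batched input.
    itemwise = [
        func(*(inp[i] for inp in batched_inputs), *args)
        for i in range(batch_size)
    ]
    # Batched: apply func to the full batch at once.
    batched = func(*batched_inputs, *args)
    assert list(batched) == itemwise, "batched and itemwise results differ"


# Toy "filter" that works on a single item (a list of samples) or a
# batch (a list of items), scaling each sample by `gain`.
def scale(x, gain):
    if x and isinstance(x[0], list):  # batch of items
        return [scale(item, gain) for item in x]
    return [s * gain for s in x]


assert_batch_consistency(scale, ([[1, 2], [3, 4]],), 2)
```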
      
      Pull Request resolved: https://github.com/pytorch/audio/pull/2245
      
      Reviewed By: mthrok
      
      Differential Revision: D34273035
      
      Pulled By: nateanl
      
      fbshipit-source-id: 0096b4f062fb4e983818e5374bed6efc7b15b056
      9cf59e75
  3. 16 Feb, 2022 2 commits
    • Zhaoheng Ni's avatar
      Refactor torchscript consistency test in functional (#2246) · 87d79889
      Zhaoheng Ni authored
      Summary:
      In torchscript_consistency tests, the `func` in each test method accepts only one `tensor` as the argument; the other arguments of the `F.xyz` method have to be defined inside `func`. If `F.xyz` has no `Tensor` argument, the tests use a `dummy` tensor that is not used anywhere. In this PR, we refactor `_assert_consistency` and `_assert_consistency_complex` to accept a tuple of inputs instead of just one `tensor`.
      
      Pull Request resolved: https://github.com/pytorch/audio/pull/2246
      
      Reviewed By: carolineechen
      
      Differential Revision: D34273057
      
      Pulled By: nateanl
      
      fbshipit-source-id: a3900edb3b2c58638e513e1490279d771ebc3d0b
      87d79889
    • Zhaoheng Ni's avatar
      Add complex dtype support in functional autograd test (#2244) · eeba91dc
      Zhaoheng Ni authored
      Summary:
      In autograd tests, to guarantee precision, the dtypes of Tensors are converted to `torch.float64` if they are real. However, complex dtypes are not considered. This PR adds `self.complex_dtype` support to the inputs.
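      The promotion rule can be sketched like this (assuming PyTorch is available; `promote_for_autograd` is a hypothetical name, not the actual test code):

```python
import torch

# Illustrative sketch (not the actual test code): promote a tensor to
# double precision for gradient checks, picking the complex variant
# when the input is complex instead of unconditionally using float64.
def promote_for_autograd(tensor: torch.Tensor) -> torch.Tensor:
    dtype = torch.complex128 if tensor.is_complex() else torch.float64
    return tensor.to(dtype)

real = promote_for_autograd(torch.ones(3, dtype=torch.float32))
cplx = promote_for_autograd(torch.ones(3, dtype=torch.complex64))
```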
      
      Pull Request resolved: https://github.com/pytorch/audio/pull/2244
      
      Reviewed By: mthrok
      
      Differential Revision: D34272998
      
      Pulled By: nateanl
      
      fbshipit-source-id: e8698a74d7b8d99ee0fcb5f5cb5f2ffc8c80b9b5
      eeba91dc
  4. 09 Feb, 2022 1 commit
    • hwangjeff's avatar
      Fix librosa calls (#2208) · e5d567c9
      hwangjeff authored
      Summary:
      Yesterday's release of librosa 0.9.0 made arguments keyword-only and changed the default padding from "reflect" to "zero" for some functions. This PR adjusts the call sites in our tutorials and tests accordingly.
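      The nature of the breakage can be sketched with a hypothetical function (illustrative only, not the real librosa signature):

```python
# Illustrative sketch of the breaking change (stft_like is made up, not
# the real librosa API): everything after `*` must be passed by
# keyword, and the padding default changed.
def stft_like(y, *, n_fft=2048, pad_mode="constant"):  # was pad_mode="reflect"
    return {"n_fft": n_fft, "pad_mode": pad_mode}

# Old positional call style now raises TypeError:
try:
    stft_like([0.0], 1024)
except TypeError:
    pass

# Call sites must switch to keywords, pinning the old padding if needed:
out = stft_like([0.0], n_fft=1024, pad_mode="reflect")
```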
      
      Pull Request resolved: https://github.com/pytorch/audio/pull/2208
      
      Reviewed By: mthrok
      
      Differential Revision: D34099793
      
      Pulled By: hwangjeff
      
      fbshipit-source-id: 4e2642cdda8aae6d0a928befaf1bbb3873d229bc
      e5d567c9
  5. 29 Dec, 2021 1 commit
  6. 23 Dec, 2021 1 commit
  7. 04 Nov, 2021 1 commit
  8. 03 Nov, 2021 2 commits
  9. 28 Oct, 2021 1 commit
  10. 13 Oct, 2021 1 commit
  11. 02 Sep, 2021 1 commit
  12. 27 Aug, 2021 1 commit
  13. 20 Aug, 2021 1 commit
  14. 19 Aug, 2021 1 commit
  15. 11 Aug, 2021 1 commit
  16. 10 Aug, 2021 1 commit
  17. 02 Aug, 2021 1 commit
  18. 29 Jul, 2021 1 commit
  19. 21 Jul, 2021 1 commit
  20. 16 Jul, 2021 1 commit
  21. 25 Jun, 2021 1 commit
  22. 04 Jun, 2021 2 commits
  23. 01 Jun, 2021 1 commit
  24. 22 May, 2021 1 commit
    • parmeet's avatar
      fbsync (#1524) · ae9560da
      parmeet authored
      * Remove `class FunctionalComplex` header accidentally re-introduced in #1490 
      ae9560da
  25. 20 May, 2021 1 commit
  26. 11 May, 2021 1 commit
  27. 06 May, 2021 2 commits
  28. 03 May, 2021 1 commit
  29. 26 Apr, 2021 1 commit
  30. 19 Apr, 2021 1 commit
  31. 15 Apr, 2021 1 commit
  32. 14 Apr, 2021 1 commit
  33. 13 Apr, 2021 1 commit
    • Jcaw's avatar
      Remove VAD from batch consistency tests (#1451) · 749c0e39
      Jcaw authored
      The VAD function trims the input tensor to the first instance of voice
      activity on any channel or item. Trimming batches this way may be
      undesirable, as the item with the earliest activity will dominate. Either
      way, the batch behaviour does not match the itemwise behaviour.
      
      The VAD batch consistency tests currently pass by coincidence, but
      they specify incorrect behaviour. This commit removes them.
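      The mismatch can be shown with a toy sketch (hypothetical helpers on plain lists, not torchaudio's VAD): trimming the whole batch to the earliest activity across items gives a different result than trimming each item to its own first activity.

```python
# Toy illustration (not torchaudio's VAD): "activity" is any sample
# above the threshold, and trimming drops everything before it.
def first_active(item, threshold=0):
    return next(i for i, s in enumerate(item) if abs(s) > threshold)

def trim_item(item):
    return item[first_active(item):]

def trim_batch(batch):
    # The earliest activity on *any* item dominates the whole batch.
    start = min(first_active(item) for item in batch)
    return [item[start:] for item in batch]

batch = [[0, 0, 5, 6], [0, 3, 4, 1]]
itemwise = [trim_item(item) for item in batch]  # [[5, 6], [3, 4, 1]]
batched = trim_batch(batch)                     # [[0, 5, 6], [3, 4, 1]]
assert batched != itemwise  # batch behaviour differs from itemwise
```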
      749c0e39