1. 15 Oct, 2021 1 commit
    • Move wav2vec2 pretrained models to pipelines module (#1876) · fad855cd
      moto authored
      - Move the wav2vec2 pre-trained weights to the `torchaudio.pipelines` namespace to align with #1872.
      - Split `Wav2Vec2PretrainedModelBundle` into `Wav2Vec2Bundle` (for pre-trained models) and `Wav2Vec2ASRBundle` (for models fine-tuned for ASR); a usage sketch follows this list.
      - Update the base URL.
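      A minimal usage sketch of the new namespace; the specific bundle name `WAV2VEC2_ASR_BASE_960H` and the input file are illustrative assumptions, not spelled out in this commit:

      ```python
      import torch
      import torchaudio

      # A fine-tuned ASR bundle under the new `torchaudio.pipelines` namespace.
      bundle = torchaudio.pipelines.WAV2VEC2_ASR_BASE_960H

      model = bundle.get_model()    # Wav2Vec2Model with the fine-tuned weights loaded
      labels = bundle.get_labels()  # class labels that accompany the weights

      waveform, sample_rate = torchaudio.load("speech.wav")  # hypothetical input file
      if sample_rate != bundle.sample_rate:
          waveform = torchaudio.functional.resample(waveform, sample_rate, bundle.sample_rate)

      with torch.inference_mode():
          emissions, _ = model(waveform)  # frame-wise label logits
      ```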
  2. 08 Oct, 2021 2 commits
  3. 07 Oct, 2021 3 commits
    • Merge factory functions of pre-training model and fine-tuned model (#1830) · 274ada80
      moto authored
      This commit merges the wav2vec2/HuBERT factory functions for pre-training and fine-tuning. In #1829, we added parameters that customize aspects of the models which are not part of the architecture; `aux_num_out` falls into this category, so separate functions are no longer necessary. This concludes the wav2vec2/HuBERT API update for release 0.10.
      
      The summary of BC-breaking changes to the wav2vec2 APIs between 0.9 and 0.10 (once this commit is incorporated; a sketch follows this list)
      1. `Wav2Vec2Model.extract_features`
      In 0.9, it returned the output of the `FeatureExtractor` module. In 0.10, it returns the list of outputs from the intermediate layers of the `TransformerEncoder` block.
      2. `wav2vec2_base(num_out: int)` -> `wav2vec2_base(<dropout_params:float>, aux_num_out: Optional[int]=None)`
          - `num_out` was renamed to `aux_num_out` and made optional. If it is omitted, the resulting model does not have the linear layer used for fine-tuning.
          - Added dropout parameters.
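      A sketch of the merged function, assuming the 0.10-style signature summarized above (the parameter defaults are assumptions):

      ```python
      import torch
      from torchaudio.models import wav2vec2_base

      # One factory function now covers both cases.
      pretrain_model = wav2vec2_base()           # no linear layer for fine-tuning
      asr_model = wav2vec2_base(aux_num_out=32)  # adds the linear read-out layer

      waveforms = torch.randn(1, 16000)  # dummy batch: one 1-second waveform

      # 0.10 behavior: extract_features returns the outputs of the intermediate
      # transformer layers (plus valid lengths), not the FeatureExtractor output.
      features, lengths = pretrain_model.extract_features(waveforms)
      print(len(features))  # one Tensor per transformer layer
      ```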
    • 60aeb78a
    • Make the core wav2vec2 factory function public (#1829) · 31a69c36
      moto authored
      This commit makes the following changes (a sketch follows this list)
      1. Make the factory function with full customizability public,
          i.e. `_get_model(...) -> wav2vec2_model(...)`.
      2. Change the other architecture-specific factory functions so that they accept parameters that are not related to the model architecture (such as dropout),
          i.e. `wav2vec2_base() -> wav2vec2_base(encoder_projection_dropout, encoder_attention_dropout, encoder_ff_interm_dropout, ...)`
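      A sketch of the now-public function; the parameter list follows the eventual 0.10 signature of `wav2vec2_model`, and the values shown (the "base" configuration) are assumptions for illustration:

      ```python
      from torchaudio.models import wav2vec2_model

      # Fully customizable construction, previously internal as `_get_model`.
      model = wav2vec2_model(
          extractor_mode="group_norm",
          extractor_conv_layer_config=None,  # None selects the default conv stack
          extractor_conv_bias=False,
          encoder_embed_dim=768,
          encoder_projection_dropout=0.1,
          encoder_pos_conv_kernel=128,
          encoder_pos_conv_groups=16,
          encoder_num_layers=12,
          encoder_num_heads=12,
          encoder_attention_dropout=0.1,
          encoder_ff_interm_features=3072,
          encoder_ff_interm_dropout=0.1,
          encoder_dropout=0.1,
          encoder_layer_norm_first=False,
          encoder_layer_drop=0.1,
          aux_num_out=None,  # no fine-tuning head
      )
      ```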
      
      ### Why?
      
      While adding pre-trained weight support, I realized that separating the API for model construction from the API for pre-trained weights leads to a simple code organization, thanks to the clean separation of concerns. As mentioned in #1821, in this framework,
        1. the model implementation is responsible for the computation logic,
        2. the factory functions are responsible for customizability and model construction,
        3. and the pre-trained weight API is responsible for constructing a model and loading pre-trained weights along with complementary information (such as pre-processing and class labels).
      
      (note: for simple models, combining 1 and 2 is also okay.)
      
      This means that the factory functions have to support all the customizability required by the pre-trained weight API. The current implementation reaches into an internal function, as in `from .model import Wav2Vec2Model, _get_model`, which is a bit strange.
      
      This PR rectifies that by making the underlying factory function public.
      This also clarifies the purpose of keeping the other factory functions as public API: they are just syntactic sugar for constructing an untrained model with a specific architecture. So this commit also adds the supplemental parameters to them.
  4. 06 Oct, 2021 4 commits
  5. 05 Oct, 2021 2 commits
  6. 29 Sep, 2021 1 commit
    • Rename factory functions `wav2vec2_asr_ARCH` to `wav2vec2_ft_ARCH` (#1804) · 5c01c25f
      moto authored
      * Rename factory functions `wav2vec2_asr_ARCH` to `wav2vec2_ft_ARCH`
      
      In #1783, we split the wav2vec2 factory functions into ones for pre-training models
      and ones for fine-tuning models (pre-training model + extra Linear module).
      
      I picked the naming scheme `wav2vec2_asr_ARCH` for the factory functions of fine-tuning models,
      but it did not feel right, because the architecture code is more generic.
      Even though the resulting model architecture was used for ASR fine-tuning in the paper,
      it does not have to be ASR.
      This became more evident as we added pre-trained parameter support, such as #1799.
      For the weight files, what matters is which task and which dataset they were trained on;
      for the factory functions, the ASR task is not relevant.
      
      Therefore, this PR renames the functions, replacing `_asr_` with `_ft_` (fine-tuning).
      
      Note: since the new functions have not been released yet, this PR itself is not BC-breaking.
  7. 28 Sep, 2021 1 commit
    • Add HuBERT model architectures (#1769) · a7854f33
      moto authored
      This commit adds the following HuBERT model architectures
      
       - `base` (pre-training)
       - `large` (pre-training / fine-tuning)
       - `xlarge` (pre-training / fine-tuning)
      
      Since the internal components are the same as `Wav2Vec2Model`, it reuses the existing modules.
      With these models, it is possible to
      - import a pre-trained model published by `fairseq` and TorchScript it, and
      - fine-tune the existing model for a downstream task (sketched below).
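      A sketch using the merged 0.10-style signatures (the exact signatures at the time of this commit are an assumption):

      ```python
      import torch
      from torchaudio.models import hubert_base

      # HuBERT "base" pre-training architecture; reuses the Wav2Vec2Model modules.
      model = hubert_base()

      # The model is TorchScript-able, e.g. for inference outside Python.
      scripted = torch.jit.script(model)

      waveforms = torch.randn(1, 16000)  # dummy input
      features, _ = scripted(waveforms)

      # Importing a checkpoint published by fairseq (requires fairseq installed):
      # from torchaudio.models.wav2vec2.utils import import_fairseq_model
      # imported = import_fairseq_model(original_fairseq_model)
      ```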
  8. 24 Sep, 2021 1 commit
    • [BC-Breaking] Split pretraining and finetuning factory functions (#1783) · b2e9f1e4
      moto authored
      * [BC-Breaking] Split pretraining and finetuning factory functions
      
      Previously, the wav2vec2 factory functions only generated the fine-tuning
      architecture used in the wav2vec2 paper for the ASR task, that is, the
      pre-training architecture plus a Linear module; they did not provide a
      straightforward way to generate architectures for pre-training.
      
      The goal of the original implementation was to allow inference of
      wav2vec2 in non-Python environments via TorchScript. Now we would like to
      expand it to pre-training/fine-tuning and to the HuBERT model as well.
      
      Therefore, we need factory functions for both pre-training and
      fine-tuning. This commit introduces new factory functions and separates
      the functions for pre-training and fine-tuning.
      
      1. New functions for ASR fine-tuning
      
      We introduce `wav2vec2_asr_XXX` functions, which generate the architecture
      used for the fine-tuning task in the wav2vec2 paper. *1
      
      2. Re-purpose the old functions
      
      The existing functions, `wav2vec2_XXX`, now generate the architecture with
      the pre-training modules only (no Linear module). A sketch of the split follows the note below.
      
      Note
      *1 This architecture is just one way to define an architecture for fine-tuning;
      it is not a universal definition. The new `wav2vec2_asr_XXX` functions are
      designed to provide these specific fine-tuning configurations, and they are not
      meant to support generic architectures for downstream tasks.
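      A sketch of the resulting split; the exact signatures are assumptions (the `_asr_` names were later renamed in #1804 and merged away in #1830):

      ```python
      from torchaudio.models import wav2vec2_base, wav2vec2_asr_base

      # Old name, new purpose: the pre-training architecture only (no Linear module).
      pretrain_model = wav2vec2_base()

      # New function: the pre-training architecture + a Linear module, matching the
      # configuration the wav2vec2 paper used for ASR fine-tuning.
      asr_model = wav2vec2_asr_base(num_out=32)  # `num_out` value is illustrative
      ```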
  9. 20 Sep, 2021 1 commit
  10. 17 Sep, 2021 1 commit
  11. 01 Sep, 2021 1 commit
  12. 23 Aug, 2021 1 commit
  13. 20 Aug, 2021 2 commits
  14. 19 Aug, 2021 1 commit
  15. 18 Aug, 2021 1 commit
  16. 14 Aug, 2021 1 commit
  17. 12 Aug, 2021 1 commit
  18. 02 Aug, 2021 2 commits
  19. 31 Jul, 2021 1 commit
  20. 29 Jul, 2021 1 commit
  21. 20 Jul, 2021 2 commits
  22. 16 Jul, 2021 1 commit
  23. 04 Jun, 2021 2 commits
  24. 03 Jun, 2021 1 commit
    • Update docs (#1550) · 0166a851
      moto authored
      * Use `bibtex` for paper citations.
        * add `override.css` to fix back references.
        * wav2vec2
        * wav2letter
        * convtasnet
        * deepspeech
        * rnnt-loss
        * griffinlim
      * Fix broken references in `filtering`.
      * Fix note in soundfile backends.
      * Tweak wav2vec2 example.
      * Remove unused `pytorch_theme.css`
  25. 02 Jun, 2021 1 commit
  26. 01 Jun, 2021 1 commit
  27. 27 May, 2021 2 commits
  28. 11 May, 2021 1 commit