- 12 Dec, 2022 1 commit
flyingdown authored
- 19 Oct, 2021 2 commits
Caroline Chen authored
moto authored
- 18 Oct, 2021 4 commits
moto authored
If `torch.hub` is never used, the cache directory `~/.cache/torch/hub/checkpoints/` does not exist, and the attempt to download the DeepPhonemizer checkpoint there would fail. This commit fixes it by calling `os.makedirs(directory, exist_ok=True)` before downloading.
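As a rough illustration of the fix, here is a minimal sketch, not the exact torchaudio code; the helper name is made up, while `torch.hub.get_dir` and `torch.hub.download_url_to_file` are real utilities:

```python
import os
import torch

def fetch_checkpoint(url: str) -> str:
    # torch.hub's cache directory is created lazily, so it may not exist yet.
    directory = os.path.join(torch.hub.get_dir(), "checkpoints")
    # The fix described above: create the directory (and parents) if missing.
    os.makedirs(directory, exist_ok=True)
    path = os.path.join(directory, os.path.basename(url))
    if not os.path.exists(path):
        torch.hub.download_url_to_file(url, path)
    return path
```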
moto authored
moto authored
1. Override the return type so that Sphinx shows the exported symbols (output model types and input `torch.nn.Module`).
2. Tweak docs for Tacotron2TTSBundle interfaces.
3. Fix for HUBERT_ASR_XLARGE.
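For context, a hedged sketch of the Tacotron2TTSBundle interfaces these docs describe; the specific bundle constant and exact return shapes are assumptions based on the 0.10-era `torchaudio.pipelines` API:

```python
import torch
import torchaudio

# Assumed bundle constant; other Tacotron2 bundles expose the same interface.
bundle = torchaudio.pipelines.TACOTRON2_GRIFFINLIM_PHONE_LJSPEECH

processor = bundle.get_text_processor()  # text -> (tokens, lengths); may download the DeepPhonemizer checkpoint
tacotron2 = bundle.get_tacotron2()       # tokens -> mel spectrogram (a torch.nn.Module)
vocoder = bundle.get_vocoder()           # mel spectrogram -> waveform (a torch.nn.Module)

with torch.no_grad():
    tokens, lengths = processor("Hello world")
    spec, spec_lengths, _ = tacotron2.infer(tokens, lengths)
    waveform, wave_lengths = vocoder(spec, spec_lengths)
```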
Caroline Chen authored
- 16 Oct, 2021 4 commits
moto authored
moto authored
Caroline Chen authored
moto authored
- 15 Oct, 2021 9 commits
moto authored
moto authored
Future work items:
- length computation of GriffinLim
- better way to make InverseMelScale work in inference_mode
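For reference, a minimal sketch of the GriffinLim round trip that the first item concerns; parameter values are illustrative, and the reading that "length computation" means reporting the exact output waveform length is my interpretation, not stated in the commit:

```python
import torch
import torchaudio

spectrogram = torchaudio.transforms.Spectrogram(n_fft=1024, power=2.0)
griffin_lim = torchaudio.transforms.GriffinLim(n_fft=1024, power=2.0)

waveform = torch.randn(1, 16000)      # dummy 1-second signal at 16 kHz
spec = spectrogram(waveform)          # (channel, freq, time)
reconstructed = griffin_lim(spec)     # (channel, time); exact length depends on n_fft/hop_length
```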
moto authored
moto authored
moto authored
moto authored
- Move wav2vec2 pretrained weights to the `torchaudio.pipelines` namespace to align with #1872.
- Split `Wav2Vec2PretrainedModelBundle` into `Wav2Vec2Bundle` (for pre-trained models) and `Wav2Vec2ASRBundle` (for models fine-tuned for ASR).
- Update the base URL.
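A hedged sketch of how the split looks from the user side; the specific bundle constants are assumptions based on names shipped in `torchaudio.pipelines`, and only the two bundle classes are taken from the note above:

```python
import torchaudio

# Assumed bundle constants for illustration.
pretrain_bundle = torchaudio.pipelines.WAV2VEC2_BASE       # a Wav2Vec2Bundle: encoder only
asr_bundle = torchaudio.pipelines.WAV2VEC2_ASR_BASE_960H   # a Wav2Vec2ASRBundle: encoder + ASR head

encoder = pretrain_bundle.get_model()  # no linear layer for fine-tuning; emits features
asr_model = asr_bundle.get_model()     # includes the linear layer that emits label logits
labels = asr_bundle.get_labels()       # label set used for CTC decoding (ASR bundles only)
```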
moto authored
moto authored
moto authored
- 13 Oct, 2021 3 commits
nateanl authored
Caroline Chen authored
nateanl authored
- 12 Oct, 2021 2 commits
Caroline Chen authored
nateanl authored
- 11 Oct, 2021 5 commits
- 09 Oct, 2021 1 commit
moto authored
- 08 Oct, 2021 9 commits
moto authored
moto authored
hwangjeff authored
hwangjeff authored
moto authored
moto authored
moto authored
moto authored
moto authored
This commit merges the wav2vec2/HuBERT factory functions for pre-training and fine-tuning. In #1829, we added parameters that customize the models without being part of the architecture; `aux_num_out` falls into this category, so separate functions are no longer necessary. This concludes the wav2vec2/HuBERT API update for release 0.10.
Summary of the BC-breaking changes to the wav2vec2 APIs between 0.9 and 0.10 (once this commit is incorporated):
1. `Wav2Vec2Model.extract_features`
   In 0.9 it returned the output of the `FeatureExtractor` module. In 0.10 it returns the list of outputs from the intermediate layers of the `TransformerEncoder` block.
2. `wav2vec2_base(num_out: int)` -> `wav2vec2_base(<dropout_params: float>, aux_num_out: Optional[int] = None)`
   - `num_out` was renamed to `aux_num_out` and made optional. If it is omitted, the resulting model does not have the linear layer used for fine-tuning.
   - Added dropout parameters.
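A hedged usage sketch of the merged factory; the dropout keyword names are intentionally left out to match the elision above, and only `aux_num_out` and the `extract_features` behavior are taken from the summary:

```python
import torch
from torchaudio.models import wav2vec2_base

# Pre-training-style model: without aux_num_out there is no fine-tuning linear layer.
encoder_only = wav2vec2_base()

# Fine-tuning-style model: aux_num_out adds a linear layer with 32 output labels (e.g. characters for CTC).
asr_style = wav2vec2_base(aux_num_out=32)

waveform = torch.randn(1, 16000)  # dummy batch: one 1-second signal at 16 kHz

# 0.10 behavior: extract_features returns the intermediate TransformerEncoder
# layer outputs (a list of tensors), not the FeatureExtractor output.
features, lengths = encoder_only.extract_features(waveform)
print(len(features), features[0].shape)
```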