@@ -21,11 +21,13 @@ A pre-trained model and associated pipelines are expressed as an instance of ``B
Under the hood, the implementations of ``Bundle`` use components from other ``torchaudio`` modules, such as :mod:`torchaudio.models` and :mod:`torchaudio.transforms`, or even third party libraries like `SentencePiece <https://github.com/google/sentencepiece>`__ and `DeepPhonemizer <https://github.com/as-ideas/DeepPhonemizer>`__. But this implementation detail is abstracted away from library users.
.. _RNNT:
RNN-T Streaming/Non-Streaming ASR
---------------------------------
Interface
~~~~~~~~~
``RNNTBundle`` defines ASR pipelines and consists of three steps: feature extraction, inference, and de-tokenization.
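A minimal sketch of these three steps, assuming the pre-trained ``EMFORMER_RNNT_BASE_LIBRISPEECH`` bundle and a hypothetical 16 kHz mono recording ``speech.wav`` that already matches the bundle's sample rate:

.. code-block:: python

   import torch
   import torchaudio

   bundle = torchaudio.pipelines.EMFORMER_RNNT_BASE_LIBRISPEECH

   # Placeholder file name; the waveform is assumed to match bundle.sample_rate.
   waveform, sample_rate = torchaudio.load("speech.wav")

   feature_extractor = bundle.get_feature_extractor()
   decoder = bundle.get_decoder()
   token_processor = bundle.get_token_processor()

   with torch.inference_mode():
       features, length = feature_extractor(waveform.squeeze())  # 1. feature extraction
       hypotheses = decoder(features, length, 10)                # 2. inference (beam search, beam width 10)
   transcript = token_processor(hypotheses[0][0])                # 3. de-tokenization of the best hypothesis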
...
...
@@ -45,7 +47,7 @@ Interface
.. minigallery:: torchaudio.pipelines.RNNTBundle
Pretrained Models
~~~~~~~~~~~~~~~~~
.. autosummary::
:toctree: generated
...
...
@@ -55,11 +57,11 @@ Pretrained Models
EMFORMER_RNNT_BASE_LIBRISPEECH
wav2vec 2.0 / HuBERT / WavLM - SSL
----------------------------------
Interface
~~~~~~~~~
``Wav2Vec2Bundle`` instantiates models that generate acoustic features that can be used for downstream inference and fine-tuning.
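A minimal sketch of extracting acoustic features with one of the SSL bundles, assuming ``WAV2VEC2_BASE`` and a hypothetical input file ``speech.wav``:

.. code-block:: python

   import torch
   import torchaudio

   bundle = torchaudio.pipelines.WAV2VEC2_BASE  # any bundle listed below works the same way
   model = bundle.get_model()

   # Placeholder file name; resample to the rate the bundle expects.
   waveform, sample_rate = torchaudio.load("speech.wav")
   waveform = torchaudio.functional.resample(waveform, sample_rate, bundle.sample_rate)

   with torch.inference_mode():
       # Returns one feature tensor per transformer layer, usable for downstream tasks.
       features, _ = model.extract_features(waveform)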
...
...
@@ -73,7 +75,7 @@ Interface
Wav2Vec2Bundle
Pretrained Models
~~~~~~~~~~~~~~~~~
.. autosummary::
:toctree: generated
...
...
@@ -84,15 +86,21 @@ Pretrained Models
WAV2VEC2_LARGE
WAV2VEC2_LARGE_LV60K
WAV2VEC2_XLSR53
WAV2VEC2_XLSR_300M
WAV2VEC2_XLSR_1B
WAV2VEC2_XLSR_2B
HUBERT_BASE
HUBERT_LARGE
HUBERT_XLARGE
WAVLM_BASE
WAVLM_BASE_PLUS
WAVLM_LARGE
wav2vec 2.0 / HuBERT - Fine-tuned ASR
-------------------------------------
Interface
~~~~~~~~~
``Wav2Vec2ASRBundle`` instantiates models that generate probability distributions over pre-defined labels, which can be used for ASR.
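A minimal sketch of running ASR with one of these bundles, assuming ``WAV2VEC2_ASR_BASE_960H``, a hypothetical 16 kHz input ``speech.wav``, and a naive greedy CTC decoder used purely for illustration:

.. code-block:: python

   import torch
   import torchaudio

   bundle = torchaudio.pipelines.WAV2VEC2_ASR_BASE_960H
   model = bundle.get_model()
   labels = bundle.get_labels()  # includes "-" (blank) and "|" (word boundary)

   # Placeholder file name; the waveform is assumed to match bundle.sample_rate.
   waveform, _ = torchaudio.load("speech.wav")

   with torch.inference_mode():
       emission, _ = model(waveform)  # frame-wise emission (logits) over the labels

   # Greedy (best-path) decoding: pick the most likely label per frame,
   # collapse repeats, drop blanks, and map word boundaries to spaces.
   indices = torch.unique_consecutive(emission[0].argmax(dim=-1)).tolist()
   transcript = "".join(labels[i] for i in indices if labels[i] != "-").replace("|", " ")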
:py:class:`SquimObjectiveBundle` defines a speech quality and intelligibility measurement (SQUIM) pipeline that can predict **objective** metric scores given the input waveform.
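A minimal sketch of predicting the objective metrics, assuming a hypothetical 16 kHz input ``degraded_speech.wav`` that matches the bundle's sample rate:

.. code-block:: python

   import torch
   import torchaudio

   bundle = torchaudio.pipelines.SQUIM_OBJECTIVE
   model = bundle.get_model()

   # Placeholder file name; the waveform must match bundle.sample_rate.
   waveform, _ = torchaudio.load("degraded_speech.wav")

   with torch.inference_mode():
       stoi, pesq, si_sdr = model(waveform)  # objective scores, one tensor each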
.. autosummary::
:toctree: generated
:nosignatures:
:template: autosummary/bundle_class.rst
SquimObjectiveBundle
Pretrained Models
~~~~~~~~~~~~~~~~~
.. autosummary::
:toctree: generated
:nosignatures:
:template: autosummary/bundle_data.rst
SQUIM_OBJECTIVE
Squim Subjective
----------------
Interface
~~~~~~~~~
:py:class:`SquimSubjectiveBundle` defines a speech quality and intelligibility measurement (SQUIM) pipeline that can predict **subjective** metric scores given the input waveform.
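A minimal sketch of predicting a MOS score, assuming hypothetical 16 kHz inputs ``degraded_speech.wav`` (the waveform to score) and ``clean_reference.wav`` (a clean, non-matching reference):

.. code-block:: python

   import torch
   import torchaudio

   bundle = torchaudio.pipelines.SQUIM_SUBJECTIVE
   model = bundle.get_model()

   # Placeholder file names; both waveforms must match bundle.sample_rate.
   waveform, _ = torchaudio.load("degraded_speech.wav")
   reference, _ = torchaudio.load("clean_reference.wav")

   with torch.inference_mode():
       mos = model(waveform, reference)  # predicted subjective (MOS) score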
author={Albert Zeyer and Ralf Schlüter and Hermann Ney},
year={2021},
eprint={2105.14849},
archivePrefix={arXiv},
primaryClass={cs.LG}
}
@article{wavernn,
author={Nal Kalchbrenner and
Erich Elsen and
...
...
@@ -439,3 +447,154 @@ abstract = {End-to-end spoken language translation (SLT) has recently gained pop
journal={arXiv preprint arXiv:1805.10190},
year={2018}
}
@inproceedings{9746490,
author={Srivastava, Sangeeta and Wang, Yun and Tjandra, Andros and Kumar, Anurag and Liu, Chunxi and Singh, Kritika and Saraf, Yatharth},
booktitle={ICASSP 2022 - 2022 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
title={Conformer-Based Self-Supervised Learning For Non-Speech Audio Tasks},
year={2022},
pages={8862-8866},
doi={10.1109/ICASSP43922.2022.9746490}
}
@article{chen2022wavlm,
title={Wavlm: Large-scale self-supervised pre-training for full stack speech processing},
author={Chen, Sanyuan and Wang, Chengyi and Chen, Zhengyang and Wu, Yu and Liu, Shujie and Chen, Zhuo and Li, Jinyu and Kanda, Naoyuki and Yoshioka, Takuya and Xiao, Xiong and others},
journal={IEEE Journal of Selected Topics in Signal Processing},
volume={16},
number={6},
pages={1505--1518},
year={2022},
publisher={IEEE}
}
@inproceedings{GigaSpeech2021,
title={GigaSpeech: An Evolving, Multi-domain ASR Corpus with 10,000 Hours of Transcribed Audio},
booktitle={Proc. Interspeech 2021},
year=2021,
author={Guoguo Chen and Shuzhou Chai and Guanbo Wang and Jiayu Du and Wei-Qiang Zhang and Chao Weng and Dan Su and Daniel Povey and Jan Trmal and Junbo Zhang and Mingjie Jin and Sanjeev Khudanpur and Shinji Watanabe and Shuaijiang Zhao and Wei Zou and Xiangang Li and Xuchen Yao and Yongqing Wang and Yujun Wang and Zhao You and Zhiyong Yan}
}
@inproceedings{NEURIPS2020_c5d73680,
author={Kong, Jungil and Kim, Jaehyeon and Bae, Jaekyoung},
booktitle={Advances in Neural Information Processing Systems},
editor={H. Larochelle and M. Ranzato and R. Hadsell and M.F. Balcan and H. Lin},
pages={17022--17033},
publisher={Curran Associates, Inc.},
title={HiFi-GAN: Generative Adversarial Networks for Efficient and High Fidelity Speech Synthesis},
volume={33},
year={2020}
}
@inproceedings{ko2015audio,
author={Tom Ko and Vijayaditya Peddinti and Daniel Povey and Sanjeev Khudanpur},
title={{Audio augmentation for speech recognition}},
year=2015,
booktitle={Proc. Interspeech 2015},
pages={3586--3589},
doi={10.21437/Interspeech.2015-711}
}
@misc{musan2015,
author={David Snyder and Guoguo Chen and Daniel Povey},
title={{MUSAN}: {A} {M}usic, {S}peech, and {N}oise {C}orpus},
year={2015},
eprint={1510.08484},
note={arXiv:1510.08484v1}
}
@article{babu2021xls,
title={XLS-R: Self-supervised cross-lingual speech representation learning at scale},
author={Babu, Arun and Wang, Changhan and Tjandra, Andros and Lakhotia, Kushal and Xu, Qiantong and Goyal, Naman and Singh, Kritika and von Platen, Patrick and Saraf, Yatharth and Pino, Juan and others},
journal={arXiv preprint arXiv:2111.09296},
year={2021}
}
@inproceedings{valk2021voxlingua107,
title={VoxLingua107: a dataset for spoken language recognition},
author={Valk, J{\"o}rgen and Alum{\"a}e, Tanel},
booktitle={2021 IEEE Spoken Language Technology Workshop (SLT)},
pages={652--658},
year={2021},
organization={IEEE}
}
@inproceedings{scheibler2018pyroomacoustics,
title={Pyroomacoustics: A python package for audio room simulation and array processing algorithms},
author={Scheibler, Robin and Bezzam, Eric and Dokmani{\'c}, Ivan},
booktitle={2018 IEEE international conference on acoustics, speech and signal processing (ICASSP)},
pages={351--355},
year={2018},
organization={IEEE}
}
@article{allen1979image,
title={Image method for efficiently simulating small-room acoustics},
author={Allen, Jont B and Berkley, David A},
journal={The Journal of the Acoustical Society of America},
volume={65},
number={4},
pages={943--950},
year={1979},
publisher={Acoustical Society of America}
}
@misc{wiki:Absorption_(acoustics),
author="{Wikipedia contributors}",
title="Absorption (acoustics) --- {W}ikipedia{,} The Free Encyclopedia",
title={The interspeech 2020 deep noise suppression challenge: Datasets, subjective testing framework, and challenge results},
author={Reddy, Chandan KA and Gopal, Vishak and Cutler, Ross and Beyrami, Ebrahim and Cheng, Roger and Dubey, Harishchandra and Matusevych, Sergiy and Aichner, Robert and Aazami, Ashkan and Braun, Sebastian and others},
journal={arXiv preprint arXiv:2005.13981},
year={2020}
}
@article{manocha2022speech,
title={Speech quality assessment through MOS using non-matching references},
author={Manocha, Pranay and Kumar, Anurag},
journal={arXiv preprint arXiv:2206.12285},
year={2022}
}
@article{cooper2021voices,
title={How do voices from past speech synthesis challenges compare today?},
author={Cooper, Erica and Yamagishi, Junichi},
journal={arXiv preprint arXiv:2105.02373},
year={2021}
}
@article{mysore2014can,
title={Can we automatically transform speech recorded on common consumer devices in real-world environments into professional production quality speech?—a dataset, insights, and challenges},
author={Mysore, Gautham J},
journal={IEEE Signal Processing Letters},
volume={22},
number={8},
pages={1006--1010},
year={2014},
publisher={IEEE}
}
@article{kumar2023torchaudio,
title={TorchAudio-Squim: Reference-less Speech Quality and Intelligibility measures in TorchAudio},
author={Kumar, Anurag and Tan, Ke and Ni, Zhaoheng and Manocha, Pranay and Zhang, Xiaohui and Henderson, Ethan and Xu, Buye},
journal={arXiv preprint arXiv:2304.01448},
year={2023}
}
@incollection{45611,
title={CNN Architectures for Large-Scale Audio Classification},
author={Shawn Hershey and Sourish Chaudhuri and Daniel P. W. Ellis and Jort F. Gemmeke and Aren Jansen and Channing Moore and Manoj Plakal and Devin Platt and Rif A. Saurous and Bryan Seybold and Malcolm Slaney and Ron Weiss and Kevin Wilson},
year={2017},
URL={https://arxiv.org/abs/1609.09430},
booktitle={International Conference on Acoustics, Speech and Signal Processing (ICASSP)}
}
@misc{pratap2023scaling,
title={Scaling Speech Technology to 1,000+ Languages},
author={Vineel Pratap and Andros Tjandra and Bowen Shi and Paden Tomasello and Arun Babu and Sayani Kundu and Ali Elkahky and Zhaoheng Ni and Apoorv Vyas and Maryam Fazel-Zarandi and Alexei Baevski and Yossi Adi and Xiaohui Zhang and Wei-Ning Hsu and Alexis Conneau and Michael Auli},
year={2023},
eprint={2305.13516},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
@article{dowson1982frechet,
title={The Fr{\'e}chet distance between multivariate normal distributions},
author={Dowson, DC and Landau, BV},
journal={Journal of Multivariate Analysis},
year={1982}
}
I/O
---
The ``torchaudio`` top-level module provides the following functions that make
it easy to handle audio data.

Please refer to :ref:`backend` for details, and the :doc:`Audio I/O tutorial <../tutorials/audio_io_tutorial>` for usage examples.
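A minimal sketch of the three functions, using a placeholder file name:

.. code-block:: python

   import torchaudio

   metadata = torchaudio.info("speech.wav")               # sample rate, number of channels/frames, etc.
   waveform, sample_rate = torchaudio.load("speech.wav")  # decode into a torch.Tensor
   torchaudio.save("speech_copy.wav", waveform, sample_rate)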
.. autosummary::
:toctree: generated
:nosignatures:
:template: autosummary/io.rst
info
load
save
.. _backend:
Backend and Dispatcher
----------------------
Decoding and encoding media is a highly elaborate process. Therefore, TorchAudio
relies on third party libraries to perform these operations. These third party
libraries are called ``backend``, and currently TorchAudio integrates the
@@ -15,7 +15,7 @@ This directory contains sample implementations of training and evaluation pipeli
### Pipeline Demo
[`pipeline_demo.py`](./pipeline_demo.py) demonstrates how to use the `EMFORMER_RNNT_BASE_LIBRISPEECH` or `EMFORMER_RNNT_BASE_TEDLIUM3` bundle, each wrapping a pre-trained Emformer RNN-T produced by the corresponding recipe below, to perform streaming and full-context ASR on several audio samples.
## Model Types
...
...
@@ -67,6 +67,8 @@ The table below contains WER results for dev and test subsets of TED-LIUM releas
| dev | 0.108 |
| test | 0.098 |
[`tedlium3/eval_pipeline.py`](./tedlium3/eval_pipeline.py) evaluates the pre-trained `EMFORMER_RNNT_BASE_TEDLIUM3` bundle on the dev and test sets of TED-LIUM release 3. Running the script should produce WER results that are identical to those in the above table.
### MuST-C release v2.0
The MuST-C model is configured with a vocabulary size of 500. Consequently, the MuST-C model's last linear layer in the joiner has an output dimension of 501 (500 + 1 to account for the blank symbol). In contrast to the transcripts in the datasets for the above two models, MuST-C's transcripts are cased and punctuated; we preserve the casing and punctuation when training the SentencePiece model.
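As a rough illustration (the input path and model prefix below are placeholders, not the exact recipe), a cased, punctuation-preserving SentencePiece model with a 500-token vocabulary can be trained as follows; the joiner's final linear layer then has `500 + 1 = 501` outputs to accommodate the blank symbol:

```python
import sentencepiece as spm

# Illustrative only: input path and model prefix are placeholders.
spm.SentencePieceTrainer.train(
    input="mustc_train_transcripts.txt",  # cased, punctuated transcripts
    model_prefix="spm_mustc_500",
    model_type="unigram",
    vocab_size=500,
)

# The RNN-T joiner's last linear layer outputs vocab_size + 1 = 501 logits,
# where the extra class is the blank symbol.
```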
This directory contains sample implementations of training and evaluation pipelines for a Conformer RNN-T ASR model.
## Setup
### Install PyTorch and TorchAudio nightly or from source
Because Conformer RNN-T is currently a prototype feature, you will need to either use the TorchAudio nightly build or build TorchAudio from source. Note also that GPU support is required for training.
To install the nightly, follow the directions at <https://pytorch.org/>.
To build TorchAudio from source, refer to the [contributing guidelines](https://github.com/pytorch/audio/blob/main/CONTRIBUTING.md).
[`train.py`](./train.py) trains a Conformer RNN-T model (30.2M parameters, 121MB) on LibriSpeech using PyTorch Lightning. Note that the script expects users to have the following:
- Access to GPU nodes for training.
- Full LibriSpeech dataset.
- SentencePiece model to be used to encode targets; the model can be generated using [`train_spm.py`](./train_spm.py).
- File (--global_stats_path) that contains training set feature statistics; this file can be generated using [`global_stats.py`](../emformer_rnnt/global_stats.py).