A pre-trained model and associated pipelines are expressed as an instance of ``Bundle``.
Under the hood, the implementations of ``Bundle`` use components from other ``torchaudio`` modules, such as :mod:`torchaudio.models` and :mod:`torchaudio.transforms`, or even third party libraries like `SentencePiece <https://github.com/google/sentencepiece>`__ and `DeepPhonemizer <https://github.com/as-ideas/DeepPhonemizer>`__. But these implementation details are abstracted away from library users.
.. _RNNT:
RNN-T Streaming/Non-Streaming ASR
---------------------------------
Interface
~~~~~~~~~
``RNNTBundle`` defines an ASR pipeline consisting of three steps: feature extraction, inference, and de-tokenization.
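The three stages can be sketched end-to-end with toy stand-ins (pure Python, for illustration only; none of the function names or logic below are the torchaudio API):

```python
# Toy sketch of the three RNNT pipeline stages: feature extraction,
# inference, and de-tokenization. Every component here is an illustrative
# stand-in, not the torchaudio implementation.

def extract_features(waveform):
    # Stand-in for the feature extractor: frame the signal and compute
    # a trivial per-frame "feature" (the mean of each frame).
    frame_size = 4
    frames = [waveform[i:i + frame_size] for i in range(0, len(waveform), frame_size)]
    return [sum(f) / len(f) for f in frames]

def infer(features):
    # Stand-in for the decoder: map each frame feature to a token id.
    return [0 if f < 0.5 else 1 for f in features]

def detokenize(token_ids, vocab=("▁hello", "▁world")):
    # Stand-in for the token processor: join SentencePiece-style pieces,
    # turning the "▁" word-boundary marker back into a space.
    return "".join(vocab[t] for t in token_ids).replace("▁", " ").strip()

waveform = [0.1, 0.2, 0.1, 0.0, 0.9, 0.8, 0.7, 0.9]
transcript = detokenize(infer(extract_features(waveform)))
print(transcript)  # -> hello world
```

The real bundle exposes one component per stage, so an application drives the same three-step flow with the objects the bundle returns.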
...
...
.. minigallery:: torchaudio.pipelines.RNNTBundle
Pretrained Models
~~~~~~~~~~~~~~~~~
.. autosummary::
:toctree: generated
...
...
EMFORMER_RNNT_BASE_LIBRISPEECH
wav2vec 2.0 / HuBERT / WavLM - SSL
----------------------------------
Interface
~~~~~~~~~
``Wav2Vec2Bundle`` instantiates models that generate acoustic features, which can be used for downstream inference and fine-tuning.
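For downstream classification, the frame-wise acoustic features are often pooled over time into one clip-level vector before a lightweight head. A toy pure-Python sketch of that pooling step (illustrative only; the real features are `torch.Tensor` objects of shape `(batch, frames, feature_dim)`):

```python
# Mean-pool frame-wise acoustic features (frames x dim) into a single
# clip-level vector, as a downstream classifier might consume them.
# Pure-Python stand-in for illustration, not the torchaudio API.

def mean_pool(frame_features):
    """Average a list of per-frame feature vectors over time."""
    num_frames = len(frame_features)
    dim = len(frame_features[0])
    return [sum(frame[d] for frame in frame_features) / num_frames
            for d in range(dim)]

# Two frames of 3-dimensional features for one clip.
features = [[1.0, 2.0, 3.0],
            [3.0, 4.0, 5.0]]
clip_vector = mean_pool(features)
print(clip_vector)  # -> [2.0, 3.0, 4.0]
```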
...
...
Wav2Vec2Bundle
Pretrained Models
~~~~~~~~~~~~~~~~~
.. autosummary::
:toctree: generated
...
...
WAV2VEC2_LARGE
WAV2VEC2_LARGE_LV60K
WAV2VEC2_XLSR53
WAV2VEC2_XLSR_300M
WAV2VEC2_XLSR_1B
WAV2VEC2_XLSR_2B
HUBERT_BASE
HUBERT_LARGE
HUBERT_XLARGE
WAVLM_BASE
WAVLM_BASE_PLUS
WAVLM_LARGE
wav2vec 2.0 / HuBERT - Fine-tuned ASR
-------------------------------------
Interface
~~~~~~~~~
``Wav2Vec2ASRBundle`` instantiates models that generate a probability distribution over pre-defined labels, which can be used for ASR.
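Turning the per-frame distribution into a transcript is typically done with CTC-style greedy decoding: take the argmax label at each frame, merge consecutive repeats, and drop blanks. A minimal pure-Python sketch (toy stand-in, not the torchaudio decoder):

```python
# Greedy CTC decoding over per-frame label distributions: argmax each
# frame, collapse consecutive repeats, then remove the blank symbol.
# Illustrative stand-in, not the torchaudio decoder.

BLANK = "-"

def greedy_ctc_decode(frame_probs, labels):
    best = [labels[max(range(len(p)), key=p.__getitem__)] for p in frame_probs]
    collapsed = [c for i, c in enumerate(best) if i == 0 or c != best[i - 1]]
    return "".join(c for c in collapsed if c != BLANK)

labels = [BLANK, "a", "b"]
frame_probs = [
    [0.1, 0.8, 0.1],  # -> "a"
    [0.1, 0.7, 0.2],  # -> "a" (repeat, merged)
    [0.8, 0.1, 0.1],  # -> blank
    [0.1, 0.1, 0.8],  # -> "b"
]
print(greedy_ctc_decode(frame_probs, labels))  # -> ab
```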
:py:class:`SquimObjectiveBundle` defines a speech quality and intelligibility measurement (SQUIM) pipeline that can predict **objective** metric scores given the input waveform.
.. autosummary::
:toctree: generated
:nosignatures:
:template: autosummary/bundle_class.rst
SquimObjectiveBundle
Pretrained Models
~~~~~~~~~~~~~~~~~
.. autosummary::
:toctree: generated
:nosignatures:
:template: autosummary/bundle_data.rst
SQUIM_OBJECTIVE
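SI-SDR is one of the objective metrics this pipeline estimates (alongside STOI and PESQ). For intuition, here is what the metric itself computes when a clean reference *is* available; the pipeline's point is to predict it without one. A minimal pure-Python sketch:

```python
import math

# Scale-Invariant Signal-to-Distortion Ratio (SI-SDR): project the
# estimate onto the reference to get the "target" component, treat the
# remainder as noise, and report the energy ratio in dB.

def si_sdr(estimate, reference):
    dot = sum(e * r for e, r in zip(estimate, reference))
    ref_energy = sum(r * r for r in reference)
    target = [dot / ref_energy * r for r in reference]
    noise = [e - t for e, t in zip(estimate, target)]
    t_energy = sum(t * t for t in target)
    n_energy = sum(n * n for n in noise)
    return 10 * math.log10(t_energy / n_energy)

ref = [1.0, 2.0, 3.0, 4.0]
est = [1.1, 1.9, 3.2, 3.8]  # mildly corrupted copy of the reference
print(round(si_sdr(est, ref), 1))  # -> 24.8
```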
Squim Subjective
----------------
Interface
~~~~~~~~~
:py:class:`SquimSubjectiveBundle` defines a speech quality and intelligibility measurement (SQUIM) pipeline that can predict **subjective** metric scores given the input waveform.
author={Albert Zeyer and Ralf Schlüter and Hermann Ney},
year={2021},
eprint={2105.14849},
archivePrefix={arXiv},
primaryClass={cs.LG}
}
@article{wavernn,
author={Nal Kalchbrenner and
Erich Elsen and
...
...
year={2017}
}
@misc{conneau2020unsupervised,
title={Unsupervised Cross-lingual Representation Learning for Speech Recognition},
author={Alexis Conneau and Alexei Baevski and Ronan Collobert and Abdelrahman Mohamed and Michael Auli},
year={2020},
eprint={2006.13979},
...
...
year={2014}
}
@misc{ardila2020common,
title={Common Voice: A Massively-Multilingual Speech Corpus},
author={Rosana Ardila and Megan Branson and Kelly Davis and Michael Henretty and Michael Kohler and Josh Meyer and Reuben Morais and Lindsay Saunders and Francis M. Tyers and Gregor Weber},
year={2020},
eprint={1912.06670},
...
...
}
@INPROCEEDINGS{librilight,
author={J. {Kahn} and M. {Rivière} and W. {Zheng} and E. {Kharitonov} and Q. {Xu} and P. E. {Mazaré} and J. {Karadayi} and V. {Liptchinsky} and R. {Collobert} and C. {Fuegen} and T. {Likhomanenko} and G. {Synnaeve} and A. {Joulin} and A. {Mohamed} and E. {Dupoux}},
booktitle={ICASSP 2020 - 2020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
title={Libri-Light: A Benchmark for ASR with Limited or No Supervision},
author={Panayotov, Vassil and Chen, Guoguo and Povey, Daniel and Khudanpur, Sanjeev},
booktitle={2015 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
title={Librispeech: An ASR corpus based on public domain audio books},
year={2015},
volume={},
number={},
...
...
year={2019},
}
@misc{baevski2020wav2vec,
title={wav2vec 2.0: A Framework for Self-Supervised Learning of Speech Representations},
author={Alexei Baevski and Henry Zhou and Abdelrahman Mohamed and Michael Auli},
year={2020},
eprint={2006.11477},
...
...
primaryClass={cs.CL}
}
@misc{hsu2021hubert,
title={HuBERT: Self-Supervised Speech Representation Learning by Masked Prediction of Hidden Units},
author={Wei-Ning Hsu and Benjamin Bolte and Yao-Hung Hubert Tsai and Kushal Lakhotia and Ruslan Salakhutdinov and Abdelrahman Mohamed},
year={2021},
eprint={2106.07447},
...
...
primaryClass={cs.CL}
}
@misc{hannun2014deep,
title={Deep Speech: Scaling up end-to-end speech recognition},
author={Awni Hannun and Carl Case and Jared Casper and Bryan Catanzaro and Greg Diamos and Erich Elsen and Ryan Prenger and Sanjeev Satheesh and Shubho Sengupta and Adam Coates and Andrew Y. Ng},
year={2014},
eprint={1412.5567},
...
...
primaryClass={cs.CL}
}
@misc{graves2012sequence,
title={Sequence Transduction with Recurrent Neural Networks},
author={Alex Graves},
year={2012},
eprint={1211.3711},
...
...
primaryClass={cs.NE}
}
@misc{collobert2016wav2letter,
title={Wav2Letter: an End-to-End ConvNet-based Speech Recognition System},
author={Ronan Collobert and Christian Puhrsch and Gabriel Synnaeve},
year={2016},
eprint={1609.03193},
...
...
primaryClass={cs.LG}
}
@misc{kalchbrenner2018efficient,
title={Efficient Neural Audio Synthesis},
author={Nal Kalchbrenner and Erich Elsen and Karen Simonyan and Seb Noury and Norman Casagrande and Edward Lockhart and Florian Stimberg and Aaron van den Oord and Sander Dieleman and Koray Kavukcuoglu},
year={2018},
eprint={1802.08435},
...
...
}
@INPROCEEDINGS{6701851,
author={Perraudin, Nathanaël and Balazs, Peter and Søndergaard, Peter L.},
booktitle={2013 IEEE Workshop on Applications of Signal Processing to Audio and Acoustics},
title={A fast Griffin-Lim algorithm},
year={2013},
volume={},
number={},
...
...
doi={10.1109/WASPAA.2013.6701851}}
@INPROCEEDINGS{1172092,
author={Griffin, D. and Jae Lim},
booktitle={ICASSP '83. IEEE International Conference on Acoustics, Speech, and Signal Processing},
title={Signal estimation from modified short-time Fourier transform},
year={1983},
volume={8},
number={},
...
...
doi={10.1109/ICASSP.1983.1172092}}
@INPROCEEDINGS{6854049,
author={Ghahremani, Pegah and BabaAli, Bagher and Povey, Daniel and Riedhammer, Korbinian and Trmal, Jan and Khudanpur, Sanjeev},
booktitle={2014 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
title={A pitch extraction algorithm tuned for automatic speech recognition},
year={2014},
volume={},
number={},
...
...
organization={IEEE}
}
@inproceedings{shi2021emformer,
title={Emformer: Efficient Memory Transformer Based Acoustic Model for Low Latency Streaming Speech Recognition},
author={Shi, Yangyang and Wang, Yongqiang and Wu, Chunyang and Yeh, Ching-Feng and Chan, Julian and Zhang, Frank and Le, Duc and Seltzer, Mike},
booktitle={ICASSP 2021 - 2021 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
pages={6783-6787},
year={2021}
}
@inproceedings{9747706,
author={Shi, Yangyang and Wu, Chunyang and Wang, Dilin and Xiao, Alex and Mahadeokar, Jay and Zhang, Xiaohui and Liu, Chunxi and Li, Ke and Shangguan, Yuan and Nagaraja, Varun and Kalinli, Ozlem and Seltzer, Mike},
booktitle={ICASSP 2022 - 2022 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
title={Streaming Transformer Transducer based Speech Recognition Using Non-Causal Convolution},
year={2022},
volume={},
number={},
...
...
journal={arXiv preprint arXiv:1805.10190},
year={2018}
}
@INPROCEEDINGS{9746490,
author={Srivastava, Sangeeta and Wang, Yun and Tjandra, Andros and Kumar, Anurag and Liu, Chunxi and Singh, Kritika and Saraf, Yatharth},
booktitle={ICASSP 2022 - 2022 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
title={Conformer-Based Self-Supervised Learning For Non-Speech Audio Tasks},
year={2022},
volume={},
number={},
pages={8862-8866},
doi={10.1109/ICASSP43922.2022.9746490}}
@article{chen2022wavlm,
title={{WavLM}: Large-scale self-supervised pre-training for full stack speech processing},
author={Chen, Sanyuan and Wang, Chengyi and Chen, Zhengyang and Wu, Yu and Liu, Shujie and Chen, Zhuo and Li, Jinyu and Kanda, Naoyuki and Yoshioka, Takuya and Xiao, Xiong and others},
journal={IEEE Journal of Selected Topics in Signal Processing},
volume={16},
number={6},
pages={1505--1518},
year={2022},
publisher={IEEE}
}
@inproceedings{GigaSpeech2021,
title={GigaSpeech: An Evolving, Multi-domain ASR Corpus with 10,000 Hours of Transcribed Audio},
booktitle={Proc. Interspeech 2021},
year=2021,
author={Guoguo Chen and Shuzhou Chai and Guanbo Wang and Jiayu Du and Wei-Qiang Zhang and Chao Weng and Dan Su and Daniel Povey and Jan Trmal and Junbo Zhang and Mingjie Jin and Sanjeev Khudanpur and Shinji Watanabe and Shuaijiang Zhao and Wei Zou and Xiangang Li and Xuchen Yao and Yongqing Wang and Yujun Wang and Zhao You and Zhiyong Yan}
}
@inproceedings{NEURIPS2020_c5d73680,
author={Kong, Jungil and Kim, Jaehyeon and Bae, Jaekyoung},
booktitle={Advances in Neural Information Processing Systems},
editor={H. Larochelle and M. Ranzato and R. Hadsell and M.F. Balcan and H. Lin},
pages={17022--17033},
publisher={Curran Associates, Inc.},
title={HiFi-GAN: Generative Adversarial Networks for Efficient and High Fidelity Speech Synthesis},
author={Tom Ko and Vijayaditya Peddinti and Daniel Povey and Sanjeev Khudanpur},
title={{Audio augmentation for speech recognition}},
year=2015,
booktitle={Proc. Interspeech 2015},
pages={3586--3589},
doi={10.21437/Interspeech.2015-711}
}
@misc{musan2015,
author={David Snyder and Guoguo Chen and Daniel Povey},
title={{MUSAN}: {A} {M}usic, {S}peech, and {N}oise {C}orpus},
year={2015},
eprint={1510.08484},
note={arXiv:1510.08484v1}
}
@article{babu2021xls,
title={XLS-R: Self-supervised cross-lingual speech representation learning at scale},
author={Babu, Arun and Wang, Changhan and Tjandra, Andros and Lakhotia, Kushal and Xu, Qiantong and Goyal, Naman and Singh, Kritika and von Platen, Patrick and Saraf, Yatharth and Pino, Juan and others},
journal={arXiv preprint arXiv:2111.09296},
year={2021}
}
@inproceedings{valk2021voxlingua107,
title={VoxLingua107: a dataset for spoken language recognition},
author={Valk, J{\"o}rgen and Alum{\"a}e, Tanel},
booktitle={2021 IEEE Spoken Language Technology Workshop (SLT)},
pages={652--658},
year={2021},
organization={IEEE}
}
@inproceedings{scheibler2018pyroomacoustics,
title={Pyroomacoustics: A python package for audio room simulation and array processing algorithms},
author={Scheibler, Robin and Bezzam, Eric and Dokmani{\'c}, Ivan},
booktitle={2018 IEEE international conference on acoustics, speech and signal processing (ICASSP)},
pages={351--355},
year={2018},
organization={IEEE}
}
@article{allen1979image,
title={Image method for efficiently simulating small-room acoustics},
author={Allen, Jont B and Berkley, David A},
journal={The Journal of the Acoustical Society of America},
volume={65},
number={4},
pages={943--950},
year={1979},
publisher={Acoustical Society of America}
}
@misc{wiki:Absorption_(acoustics),
author="{Wikipedia contributors}",
title="Absorption (acoustics) --- {W}ikipedia{,} The Free Encyclopedia",
title={The interspeech 2020 deep noise suppression challenge: Datasets, subjective testing framework, and challenge results},
author={Reddy, Chandan KA and Gopal, Vishak and Cutler, Ross and Beyrami, Ebrahim and Cheng, Roger and Dubey, Harishchandra and Matusevych, Sergiy and Aichner, Robert and Aazami, Ashkan and Braun, Sebastian and others},
journal={arXiv preprint arXiv:2005.13981},
year={2020}
}
@article{manocha2022speech,
title={Speech quality assessment through MOS using non-matching references},
author={Manocha, Pranay and Kumar, Anurag},
journal={arXiv preprint arXiv:2206.12285},
year={2022}
}
@article{cooper2021voices,
title={How do voices from past speech synthesis challenges compare today?},
author={Cooper, Erica and Yamagishi, Junichi},
journal={arXiv preprint arXiv:2105.02373},
year={2021}
}
@article{mysore2014can,
title={Can we automatically transform speech recorded on common consumer devices in real-world environments into professional production quality speech?—a dataset, insights, and challenges},
author={Mysore, Gautham J},
journal={IEEE Signal Processing Letters},
volume={22},
number={8},
pages={1006--1010},
year={2014},
publisher={IEEE}
}
@article{kumar2023torchaudio,
title={TorchAudio-Squim: Reference-less Speech Quality and Intelligibility measures in TorchAudio},
author={Kumar, Anurag and Tan, Ke and Ni, Zhaoheng and Manocha, Pranay and Zhang, Xiaohui and Henderson, Ethan and Xu, Buye},
journal={arXiv preprint arXiv:2304.01448},
year={2023}
}
@incollection{45611,
title={CNN Architectures for Large-Scale Audio Classification},
author={Shawn Hershey and Sourish Chaudhuri and Daniel P. W. Ellis and Jort F. Gemmeke and Aren Jansen and Channing Moore and Manoj Plakal and Devin Platt and Rif A. Saurous and Bryan Seybold and Malcolm Slaney and Ron Weiss and Kevin Wilson},
year={2017},
URL={https://arxiv.org/abs/1609.09430},
booktitle={International Conference on Acoustics, Speech and Signal Processing (ICASSP)}
}
@misc{pratap2023scaling,
title={Scaling Speech Technology to 1,000+ Languages},
author={Vineel Pratap and Andros Tjandra and Bowen Shi and Paden Tomasello and Arun Babu and Sayani Kundu and Ali Elkahky and Zhaoheng Ni and Apoorv Vyas and Maryam Fazel-Zarandi and Alexei Baevski and Yossi Adi and Xiaohui Zhang and Wei-Ning Hsu and Alexis Conneau and Michael Auli},
year={2023},
eprint={2305.13516},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
@article{dowson1982frechet,
title={The Fr{\'e}chet distance between multivariate normal distributions},
I/O
---

The ``torchaudio`` top-level module provides the following functions that make
it easy to handle audio data.

.. autosummary::
   :toctree: generated
   :nosignatures:
   :template: autosummary/io.rst

   info
   load
   save

.. _backend:

Backend and Dispatcher
----------------------

Decoding and encoding media is a highly elaborate process. Therefore, TorchAudio
relies on third party libraries to perform these operations. These third party
libraries are called ``backend``, and currently TorchAudio integrates the
### Pipeline Demo
[`pipeline_demo.py`](./pipeline_demo.py) demonstrates how to use the `EMFORMER_RNNT_BASE_LIBRISPEECH`
or `EMFORMER_RNNT_BASE_TEDLIUM3` bundle that wraps a pre-trained Emformer RNN-T produced by the corresponding recipe below to perform streaming and full-context ASR on several audio samples.
## Model Types
...
...
| dev | 0.108 |
| test | 0.098 |
[`tedlium3/eval_pipeline.py`](./tedlium3/eval_pipeline.py) evaluates the pre-trained `EMFORMER_RNNT_BASE_TEDLIUM3` bundle on the dev and test sets of TED-LIUM release 3. Running the script should produce WER results that are identical to those in the above table.
### MuST-C release v2.0
The MuST-C model is configured with a vocabulary size of 500. Consequently, the last linear layer in the MuST-C model's joiner has an output dimension of 501 (500 + 1 to account for the blank symbol). In contrast to the transcripts of the two datasets above, MuST-C's transcripts are cased and punctuated; we preserve the casing and punctuation when training the SentencePiece model.
This directory contains sample implementations of training and evaluation pipelines for a Conformer RNN-T ASR model.
## Setup
### Install PyTorch and TorchAudio nightly or from source
Because Conformer RNN-T is currently a prototype feature, you will need to either use the TorchAudio nightly build or build TorchAudio from source. Note also that GPU support is required for training.
To install the nightly, follow the directions at <https://pytorch.org/>.
To build TorchAudio from source, refer to the [contributing guidelines](https://github.com/pytorch/audio/blob/main/CONTRIBUTING.md).
[`train.py`](./train.py) trains a Conformer RNN-T model (30.2M parameters, 121MB) on LibriSpeech using PyTorch Lightning. Note that the script expects users to have the following:
- Access to GPU nodes for training.
- Full LibriSpeech dataset.
- SentencePiece model to be used to encode targets; the model can be generated using [`train_spm.py`](./train_spm.py).
- File (--global_stats_path) that contains training set feature statistics; this file can be generated using [`global_stats.py`](../emformer_rnnt/global_stats.py).
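For intuition, the statistics such a file provides — per-dimension feature mean and inverse standard deviation, used to normalize features during training and inference — can be sketched in a few lines. This is a toy pure-Python version of the idea behind `global_stats.py`, not the actual script, and the dictionary key names are assumptions:

```python
import math

# Compute per-dimension mean and inverse standard deviation over a set
# of feature vectors, the quantities a global-stats file stores for
# feature normalization. Toy stand-in; key names are illustrative.

def global_stats(feature_rows):
    n = len(feature_rows)
    dim = len(feature_rows[0])
    mean = [sum(row[d] for row in feature_rows) / n for d in range(dim)]
    var = [sum((row[d] - mean[d]) ** 2 for row in feature_rows) / n
           for d in range(dim)]
    invstddev = [1.0 / math.sqrt(v) for v in var]
    return {"mean": mean, "invstddev": invstddev}

rows = [[1.0, 10.0], [3.0, 30.0]]
stats = global_stats(rows)
print(stats["mean"])  # -> [2.0, 20.0]
```

Normalizing a feature vector is then `(x - mean) * invstddev`, element-wise.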