Unverified Commit 4a320f6c authored by Patrick von Platen, committed by GitHub

[ASR] Add official ASR CTC example to `examples/pytorch/speech-recognition` (#13620)



* up

* rename

* add asr example

* add auto feature extractor

* some more fixes

* correct layerdrop

* correct for multi-gpu dist

* clean up

* refactor

* refactor

* more fixes

* more fixes

* clean-up

* finish

* up

* Apply suggestions from code review

* fix isort

* update

* up

* add note

* apply surajs suggestions

* Apply suggestions from code review
Co-authored-by: Suraj Patil <surajp815@gmail.com>

* isort

* small change

* Apply suggestions from code review
Co-authored-by: Anton Lozhkov <aglozhkov@gmail.com>

* Apply suggestions from code review
Co-authored-by: Anton Lozhkov <aglozhkov@gmail.com>

* add hubert

* Update examples/pytorch/speech-recognition/run_speech_recognition_ctc.py
Co-authored-by: Suraj Patil <surajp815@gmail.com>
Co-authored-by: Anton Lozhkov <aglozhkov@gmail.com>
parent 41c186d2
<!---
Copyright 2021 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
-->
# Automatic Speech Recognition examples
## Connectionist Temporal Classification without Language Model (CTC w/o LM)
The script [`run_speech_recognition_ctc.py`](https://github.com/huggingface/transformers/blob/master/examples/pytorch/speech-recognition/run_speech_recognition_ctc.py) can be used to fine-tune any pretrained [Connectionist Temporal Classification Model](https://huggingface.co/transformers/master/model_doc/auto.html?highlight=automodelforctc#automodelforctc) for automatic speech
recognition on one of the [official speech recognition datasets](https://huggingface.co/datasets?task_ids=task_ids:automatic-speech-recognition) or a custom dataset.
Speech recognition models that have been pretrained in an unsupervised fashion on audio data alone, *e.g.* [Wav2Vec2](https://huggingface.co/transformers/master/model_doc/wav2vec2.html), [HuBERT](https://huggingface.co/transformers/master/model_doc/hubert.html), [XLSR-Wav2Vec2](https://huggingface.co/transformers/master/model_doc/xlsr_wav2vec2.html), have been shown to require only
very little annotated data to yield good performance on automatic speech recognition datasets.
In the script `run_speech_recognition_ctc.py`, we first create a vocabulary from all unique characters of both the training data and evaluation data. Then, we preprocess the speech recognition dataset, which includes correct resampling, normalization and padding. Finally, the pretrained speech recognition model is fine-tuned on the annotated speech recognition datasets using CTC loss.
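To make the vocabulary-creation step concrete, here is a minimal sketch of the idea (a simplification, not the script's exact code; `texts` is a hypothetical stand-in for all train and eval transcriptions):
```python
# Minimal sketch: build a character-level vocabulary for CTC.
# `texts` is a hypothetical stand-in for all train + eval transcriptions.
texts = ["merhaba dünya", "hello world"]

# Map every unique character to an integer id.
vocab = {char: idx for idx, char in enumerate(sorted(set("".join(texts))))}

# CTC additionally needs unknown and padding tokens
# (the padding token doubles as the CTC blank).
vocab["[UNK]"] = len(vocab)
vocab["[PAD]"] = len(vocab)
```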
---
**NOTE**
It is currently not recommended to make use of `--preprocessing_num_workers`.
If, however, you wish to use multi-processing for data preprocessing by setting `--preprocessing_num_workers` > 1,
please make sure to set the environment variable `OMP_NUM_THREADS` to `1` as follows:
```bash
OMP_NUM_THREADS=1 python run_speech_recognition_ctc.py ...
```
If the environment variable is not set, the training script might hang; see https://github.com/pytorch/audio/issues/1021#issuecomment-726915239 for details.
---
### Single-GPU
The following command shows how to fine-tune [XLSR-Wav2Vec2](https://huggingface.co/transformers/master/model_doc/xlsr_wav2vec2.html) on [Common Voice](https://huggingface.co/datasets/common_voice) using a single GPU in half-precision.
```bash
python run_speech_recognition_ctc.py \
--dataset_name="common_voice" \
--model_name_or_path="facebook/wav2vec2-large-xlsr-53" \
--dataset_config_name="tr" \
--output_dir="./wav2vec2-common_voice-tr-demo" \
--overwrite_output_dir \
--num_train_epochs="15" \
--per_device_train_batch_size="16" \
--gradient_accumulation_steps="2" \
--learning_rate="3e-4" \
--warmup_steps="500" \
--evaluation_strategy="steps" \
--audio_column_name="path" \
--text_column_name="sentence" \
--save_steps="400" \
--eval_steps="100" \
--layerdrop="0.0" \
--save_total_limit="3" \
--freeze_feature_extractor \
--gradient_checkpointing \
--chars_to_ignore , ? . ! - \; \: \" “ % ‘ ” � \
--fp16 \
--group_by_length \
--push_to_hub \
--do_train --do_eval
```
On a single V100 GPU, this script should run in *ca.* 1 hour 20 minutes and yield a CTC loss of **0.39** and word error rate
of **0.35**.
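The reported word error rate is the standard WER metric; assuming the `wer` metric from the `datasets` library (which this example depends on, and which additionally requires the `jiwer` package), such a score can be computed on decoded predictions roughly as follows:
```python
from datasets import load_metric

# Sketch: compute word error rate with the `wer` metric from `datasets`.
# The prediction/reference pair below is purely illustrative.
wer_metric = load_metric("wer")
predictions = ["merhaba dünya nasılsın"]
references = ["merhaba dünya nasilsin"]
print(wer_metric.compute(predictions=predictions, references=references))
```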
### Multi-GPU
The following command shows how to fine-tune [XLSR-Wav2Vec2](https://huggingface.co/transformers/master/model_doc/xlsr_wav2vec2.html) on [Common Voice](https://huggingface.co/datasets/common_voice) using 8 GPUs in half-precision.
```bash
python -m torch.distributed.launch \
--nproc_per_node 8 run_speech_recognition_ctc.py \
--dataset_name="common_voice" \
--model_name_or_path="facebook/wav2vec2-large-xlsr-53" \
--dataset_config_name="tr" \
--output_dir="./wav2vec2-common_voice-tr-demo-dist" \
--preprocessing_num_workers="16" \
--overwrite_output_dir \
--num_train_epochs="15" \
--per_device_train_batch_size="4" \
--learning_rate="3e-4" \
--warmup_steps="500" \
--evaluation_strategy="steps" \
--audio_column_name="path" \
--text_column_name="sentence" \
--save_steps="400" \
--eval_steps="100" \
--logging_steps="1" \
--layerdrop="0.0" \
--save_total_limit="3" \
--freeze_feature_extractor \
--gradient_checkpointing \
--chars_to_ignore , ? . ! - \; \: \" “ % ‘ ” � \
--fp16 \
--group_by_length \
--push_to_hub \
--do_train --do_eval
```
On 8 V100 GPUs, this script should run in *ca.* 18 minutes and yield a CTC loss of **0.39** and word error rate
of **0.36**.
### Examples
In the following, a couple of demonstration fine-tuning runs are listed.
It has been verified that the script works for the following datasets:
- [Common Voice](https://huggingface.co/datasets/common_voice)
- [Librispeech](https://huggingface.co/datasets/librispeech_asr)
| Dataset | Dataset Config | Pretrained Model | Word error rate on eval | GPU setup | Training time | Fine-tuned Model & Logs |
|-------|------------------------------|-------------|---------------|---------------|----------------------|-------------|
| [Librispeech](https://huggingface.co/datasets/librispeech_asr)| `"clean"` - `"train.100"` | [facebook/wav2vec2-large-lv60](https://huggingface.co/facebook/wav2vec2-large-lv60) | 0.042 | 8 GPU V100 | 1h30min | [here](https://huggingface.co/patrickvonplaten/wav2vec2-librispeech-clean-100h-demo-dist) |
| [Librispeech](https://huggingface.co/datasets/librispeech_asr)| `"clean"` - `"train.100"` | [facebook/hubert-large-ll60k](https://huggingface.co/facebook/hubert-large-ll60k) | 0.088 | 8 GPU V100 | 1h30min | [here](https://huggingface.co/patrickvonplaten/hubert-librispeech-clean-100h-demo-dist) |
| [Common Voice](https://huggingface.co/datasets/common_voice)| `"tr"` | [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) | 0.36 | 8 GPU V100 | 18min | [here](https://huggingface.co/patrickvonplaten/wav2vec2-common_voice-tr-demo-dist) |
| [Common Voice](https://huggingface.co/datasets/common_voice)| `"tr"` | [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) | 0.35 | 1 GPU V100 | 1h20min | [here](https://huggingface.co/patrickvonplaten/wav2vec2-common_voice-tr-demo) |
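As a quick sanity check, any of the fine-tuned checkpoints in the table can be tried out for inference; a minimal sketch, assuming a local 16kHz mono recording (`audio.wav` is a placeholder name):
```python
import torch
import torchaudio
from transformers import AutoModelForCTC, Wav2Vec2Processor

# Load one of the fine-tuned checkpoints from the table above.
model_id = "patrickvonplaten/wav2vec2-common_voice-tr-demo"
model = AutoModelForCTC.from_pretrained(model_id)
processor = Wav2Vec2Processor.from_pretrained(model_id)

# "audio.wav" is a placeholder; the model expects 16kHz mono input.
speech, sampling_rate = torchaudio.load("audio.wav")
inputs = processor(speech.squeeze().numpy(), sampling_rate=16_000, return_tensors="pt")

# Greedy CTC decoding: argmax over the vocabulary at each frame.
with torch.no_grad():
    logits = model(inputs.input_values).logits
predicted_ids = torch.argmax(logits, dim=-1)
print(processor.batch_decode(predicted_ids))
```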
`examples/pytorch/speech-recognition/requirements.txt`:
```
datasets >= 1.12.0
torch >= 1.5
torchaudio
```
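Assuming a checkout of the repository, these dependencies can be installed with:
```bash
pip install -r examples/pytorch/speech-recognition/requirements.txt
```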
```diff
@@ -39,6 +39,7 @@ SRC_DIRS = [
         "summarization",
         "translation",
         "image-classification",
+        "speech-recognition",
     ]
 ]
 sys.path.extend(SRC_DIRS)
@@ -52,6 +53,7 @@ if SRC_DIRS is not None:
     import run_mlm
     import run_ner
     import run_qa as run_squad
+    import run_speech_recognition_ctc
     import run_summarization
     import run_swag
     import run_translation
@@ -374,3 +376,37 @@ class ExamplesTests(TestCasePlus):
         run_image_classification.main()
         result = get_results(tmp_dir)
         self.assertGreaterEqual(result["eval_accuracy"], 0.8)
+
+    def test_run_speech_recognition_ctc(self):
+        stream_handler = logging.StreamHandler(sys.stdout)
+        logger.addHandler(stream_handler)
+
+        tmp_dir = self.get_auto_remove_tmp_dir()
+        testargs = f"""
+            run_speech_recognition_ctc.py
+            --output_dir {tmp_dir}
+            --model_name_or_path hf-internal-testing/tiny-random-wav2vec2
+            --dataset_name patrickvonplaten/librispeech_asr_dummy
+            --dataset_config_name clean
+            --train_split_name validation
+            --eval_split_name validation
+            --audio_column_name file
+            --do_train
+            --do_eval
+            --learning_rate 1e-4
+            --per_device_train_batch_size 2
+            --per_device_eval_batch_size 1
+            --remove_unused_columns False
+            --overwrite_output_dir True
+            --preprocessing_num_workers 16
+            --max_steps 10
+            --seed 42
+        """.split()
+
+        if is_cuda_and_apex_available():
+            testargs.append("--fp16")
+
+        with patch.object(sys, "argv", testargs):
+            run_speech_recognition_ctc.main()
+            result = get_results(tmp_dir)
+            self.assertLess(result["eval_loss"], result["train_loss"])
```
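To run just this test locally, an invocation along the following lines should work (the test-file path is an assumption and may differ depending on the repository layout):
```bash
python -m pytest -sv -k test_run_speech_recognition_ctc examples/pytorch/test_examples.py
```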
```diff
@@ -54,10 +54,12 @@ class HubertConfig(PretrainedConfig):
         hidden_act (:obj:`str` or :obj:`function`, `optional`, defaults to :obj:`"gelu"`):
             The non-linear activation function (function or string) in the encoder and pooler. If string,
             :obj:`"gelu"`, :obj:`"relu"`, :obj:`"selu"` and :obj:`"gelu_new"` are supported.
-        hidden_dropout_prob (:obj:`float`, `optional`, defaults to 0.1):
-            The dropout probabilitiy for all fully connected layers in the embeddings, encoder, and pooler.
-        attention_probs_dropout_prob (:obj:`float`, `optional`, defaults to 0.1):
+        hidden_dropout (:obj:`float`, `optional`, defaults to 0.1):
+            The dropout probability for all fully connected layers in the embeddings, encoder, and pooler.
+        attention_dropout (:obj:`float`, `optional`, defaults to 0.1):
             The dropout ratio for the attention probabilities.
+        final_dropout (:obj:`float`, `optional`, defaults to 0.1):
+            The dropout probability for the final projection layer of :class:`HubertForCTC`.
         initializer_range (:obj:`float`, `optional`, defaults to 0.02):
             The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
         layer_norm_eps (:obj:`float`, `optional`, defaults to 1e-12):
@@ -66,8 +68,8 @@ class HubertConfig(PretrainedConfig):
             The norm to be applied to 1D convolutional layers in feature extractor. One of :obj:`"group"` for group
             normalization of only the first 1D convolutional layer or :obj:`"layer"` for layer normalization of all 1D
             convolutional layers.
-        feat_extract_dropout (:obj:`float`, `optional`, defaults to 0.0):
-            The dropout probabilitiy for all 1D convolutional layers in feature extractor.
+        feat_proj_dropout (:obj:`float`, `optional`, defaults to 0.0):
+            The dropout probability for the output of the feature extractor.
         feat_extract_activation (:obj:`str, `optional`, defaults to :obj:`"gelu"`):
             The non-linear activation function (function or string) in the 1D convolutional layers of the feature
             extractor. If string, :obj:`"gelu"`, :obj:`"relu"`, :obj:`"selu"` and :obj:`"gelu_new"` are supported.
@@ -147,7 +149,7 @@ class HubertConfig(PretrainedConfig):
         hidden_dropout=0.1,
         activation_dropout=0.1,
         attention_dropout=0.1,
-        feat_proj_dropout=0.1,
+        feat_proj_dropout=0.0,
         final_dropout=0.1,
         layerdrop=0.1,
         initializer_range=0.02,
```
```diff
@@ -54,10 +54,12 @@ class Wav2Vec2Config(PretrainedConfig):
         hidden_act (:obj:`str` or :obj:`function`, `optional`, defaults to :obj:`"gelu"`):
             The non-linear activation function (function or string) in the encoder and pooler. If string,
             :obj:`"gelu"`, :obj:`"relu"`, :obj:`"selu"` and :obj:`"gelu_new"` are supported.
-        hidden_dropout_prob (:obj:`float`, `optional`, defaults to 0.1):
-            The dropout probabilitiy for all fully connected layers in the embeddings, encoder, and pooler.
-        attention_probs_dropout_prob (:obj:`float`, `optional`, defaults to 0.1):
+        hidden_dropout (:obj:`float`, `optional`, defaults to 0.1):
+            The dropout probability for all fully connected layers in the embeddings, encoder, and pooler.
+        attention_dropout (:obj:`float`, `optional`, defaults to 0.1):
             The dropout ratio for the attention probabilities.
+        final_dropout (:obj:`float`, `optional`, defaults to 0.1):
+            The dropout probability for the final projection layer of :class:`Wav2Vec2ForCTC`.
         initializer_range (:obj:`float`, `optional`, defaults to 0.02):
             The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
         layer_norm_eps (:obj:`float`, `optional`, defaults to 1e-12):
@@ -66,8 +68,8 @@ class Wav2Vec2Config(PretrainedConfig):
             The norm to be applied to 1D convolutional layers in feature extractor. One of :obj:`"group"` for group
             normalization of only the first 1D convolutional layer or :obj:`"layer"` for layer normalization of all 1D
             convolutional layers.
-        feat_extract_dropout (:obj:`float`, `optional`, defaults to 0.0):
-            The dropout probabilitiy for all 1D convolutional layers in feature extractor.
+        feat_proj_dropout (:obj:`float`, `optional`, defaults to 0.0):
+            The dropout probability for the output of the feature extractor.
         feat_extract_activation (:obj:`str, `optional`, defaults to :obj:`"gelu"`):
             The non-linear activation function (function or string) in the 1D convolutional layers of the feature
             extractor. If string, :obj:`"gelu"`, :obj:`"relu"`, :obj:`"selu"` and :obj:`"gelu_new"` are supported.
@@ -165,7 +167,7 @@ class Wav2Vec2Config(PretrainedConfig):
         hidden_dropout=0.1,
         activation_dropout=0.1,
         attention_dropout=0.1,
-        feat_proj_dropout=0.1,
+        feat_proj_dropout=0.0,
         feat_quantizer_dropout=0.0,
         final_dropout=0.1,
         layerdrop=0.1,
```
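For reference, the renamed and newly documented options can be overridden directly at load time; a minimal sketch (the override values are illustrative, mirroring the README commands above):
```python
from transformers import Wav2Vec2ForCTC

# Extra keyword arguments to `from_pretrained` are forwarded to the config,
# so the renamed/added dropout options can be set when loading a checkpoint.
model = Wav2Vec2ForCTC.from_pretrained(
    "facebook/wav2vec2-large-xlsr-53",
    feat_proj_dropout=0.0,  # new default after this commit
    final_dropout=0.1,      # dropout before the final CTC projection layer
    layerdrop=0.0,          # matches the fine-tuning commands in the README
)
```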