<!--Copyright 2021 The HuggingFace Team. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.

⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.

-->

# XLSR-Wav2Vec2

## Overview

The XLSR-Wav2Vec2 model was proposed in [Unsupervised Cross-Lingual Representation Learning For Speech Recognition](https://arxiv.org/abs/2006.13979) by Alexis Conneau, Alexei Baevski, Ronan Collobert, Abdelrahman Mohamed, Michael Auli.

The abstract from the paper is the following:

*This paper presents XLSR which learns cross-lingual speech representations by pretraining a single model from the raw
waveform of speech in multiple languages. We build on wav2vec 2.0 which is trained by solving a contrastive task over
masked latent speech representations and jointly learns a quantization of the latents shared across languages. The
resulting model is fine-tuned on labeled data and experiments show that cross-lingual pretraining significantly
outperforms monolingual pretraining. On the CommonVoice benchmark, XLSR shows a relative phoneme error rate reduction
of 72% compared to the best known results. On BABEL, our approach improves word error rate by 16% relative compared to
a comparable system. Our approach enables a single multilingual speech recognition model which is competitive to strong
individual models. Analysis shows that the latent discrete speech representations are shared across languages with
increased sharing for related languages. We hope to catalyze research in low-resource speech understanding by releasing
XLSR-53, a large model pretrained in 53 languages.*

The original code can be found [here](https://github.com/pytorch/fairseq/tree/master/fairseq/models/wav2vec).

Note: Meta (FAIR) released [Wav2Vec2-BERT 2.0](https://huggingface.co/docs/transformers/en/model_doc/wav2vec2-bert), a newer version of this model family pretrained on 4.5M hours of audio. We especially recommend using it for fine-tuning tasks, e.g. as per [this guide](https://huggingface.co/blog/fine-tune-w2v2-bert).

## Usage tips

- XLSR-Wav2Vec2 is a speech model that accepts a float array corresponding to the raw waveform of the speech signal.
- The XLSR-Wav2Vec2 model was trained using connectionist temporal classification (CTC), so the model output has to be
  decoded using [`Wav2Vec2CTCTokenizer`] (see the sketch below).
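
Below is a minimal sketch of that workflow, assuming a CTC fine-tuned XLSR-Wav2Vec2 checkpoint (the German model name used here is only an example; any CTC fine-tuned checkpoint works the same way) and a dummy one-second waveform standing in for real audio:

```python
import numpy as np
import torch
from transformers import Wav2Vec2Processor, Wav2Vec2ForCTC

# Example checkpoint (assumption): swap in any XLSR-Wav2Vec2 model
# fine-tuned with a CTC head.
checkpoint = "facebook/wav2vec2-large-xlsr-53-german"
processor = Wav2Vec2Processor.from_pretrained(checkpoint)
model = Wav2Vec2ForCTC.from_pretrained(checkpoint)

# The model consumes the raw waveform as a 1-D float array sampled at
# 16 kHz; one second of random noise stands in for real speech here.
speech = np.random.randn(16_000).astype(np.float32)

inputs = processor(speech, sampling_rate=16_000, return_tensors="pt")
with torch.no_grad():
    logits = model(inputs.input_values).logits

# Greedy CTC decoding: take the argmax over the vocabulary at each frame,
# then let the tokenizer collapse repeated tokens and strip CTC blanks.
predicted_ids = torch.argmax(logits, dim=-1)
transcription = processor.batch_decode(predicted_ids)[0]
print(transcription)
```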

<Tip>

XLSR-Wav2Vec2's architecture is based on the Wav2Vec2 model, so one can refer to [Wav2Vec2's documentation page](wav2vec2).

</Tip>