Unverified Commit 4a602492 authored by Vaibhav Srivastav, committed by GitHub

doc: add info about wav2vec2 bert in older wav2vec2 models. (#31120)



* doc: add info about wav2vec2 bert in older wav2vec2 models.

* apply suggestions from review.

* forward contrib credits from review

---------
Co-authored-by: Sanchit Gandhi <sanchit-gandhi@users.noreply.github.com>
parent c39aaea9
@@ -27,6 +27,8 @@ The Wav2Vec2-Conformer weights were released by the Meta AI team within the [Fai
 This model was contributed by [patrickvonplaten](https://huggingface.co/patrickvonplaten).
 The original code can be found [here](https://github.com/pytorch/fairseq/tree/main/examples/wav2vec).
 
+Note: Meta (FAIR) released a new version of [Wav2Vec2-BERT 2.0](https://huggingface.co/docs/transformers/en/model_doc/wav2vec2-bert) - it's pretrained on 4.5M hours of audio. We especially recommend using it for fine-tuning tasks, e.g. as per [this guide](https://huggingface.co/blog/fine-tune-w2v2-bert).
+
 ## Usage tips
 
 - Wav2Vec2-Conformer follows the same architecture as Wav2Vec2, but replaces the *Attention*-block with a *Conformer*-block
...
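Since the usage tip in this hunk says the Conformer variant is architecturally a drop-in replacement for Wav2Vec2, here is a minimal inference sketch (my addition, not part of the diff; the checkpoint name is a public fine-tuned ASR model and the silent waveform is an illustrative placeholder):

```python
# Hedged sketch: Wav2Vec2-Conformer exposes the same interface as Wav2Vec2.
import numpy as np
import torch
from transformers import AutoProcessor, Wav2Vec2ConformerForCTC

ckpt = "facebook/wav2vec2-conformer-rope-large-960h-ft"
processor = AutoProcessor.from_pretrained(ckpt)
model = Wav2Vec2ConformerForCTC.from_pretrained(ckpt)

speech = np.zeros(16000, dtype=np.float32)  # placeholder: 1 s of silence at 16 kHz
inputs = processor(speech, sampling_rate=16000, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(processor.batch_decode(torch.argmax(logits, dim=-1)))
```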
@@ -33,6 +33,8 @@ recognition with limited amounts of labeled data.*
 
 This model was contributed by [patrickvonplaten](https://huggingface.co/patrickvonplaten).
 
+Note: Meta (FAIR) released a new version of [Wav2Vec2-BERT 2.0](https://huggingface.co/docs/transformers/en/model_doc/wav2vec2-bert) - it's pretrained on 4.5M hours of audio. We especially recommend using it for fine-tuning tasks, e.g. as per [this guide](https://huggingface.co/blog/fine-tune-w2v2-bert).
+
 ## Usage tips
 
 - Wav2Vec2 is a speech model that accepts a float array corresponding to the raw waveform of the speech signal.
...
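The "float array corresponding to the raw waveform" contract from the usage tip above can be illustrated with a short sketch (my addition, not part of the diff; the checkpoint and dummy waveform are assumptions):

```python
# Hedged sketch of the usage tip above: the processor takes the raw
# waveform as a float array and the model returns CTC logits.
import numpy as np
import torch
from transformers import Wav2Vec2Processor, Wav2Vec2ForCTC

processor = Wav2Vec2Processor.from_pretrained("facebook/wav2vec2-base-960h")
model = Wav2Vec2ForCTC.from_pretrained("facebook/wav2vec2-base-960h")

waveform = np.zeros(16000, dtype=np.float32)  # placeholder raw waveform at 16 kHz
inputs = processor(waveform, sampling_rate=16000, return_tensors="pt")
with torch.no_grad():
    logits = model(inputs.input_values).logits
print(processor.batch_decode(torch.argmax(logits, dim=-1)))
```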
@@ -36,6 +36,8 @@ XLSR-53, a large model pretrained in 53 languages.*
 
 The original code can be found [here](https://github.com/pytorch/fairseq/tree/master/fairseq/models/wav2vec).
 
+Note: Meta (FAIR) released a new version of [Wav2Vec2-BERT 2.0](https://huggingface.co/docs/transformers/en/model_doc/wav2vec2-bert) - it's pretrained on 4.5M hours of audio. We especially recommend using it for fine-tuning tasks, e.g. as per [this guide](https://huggingface.co/blog/fine-tune-w2v2-bert).
+
 ## Usage tips
 
 - XLSR-Wav2Vec2 is a speech model that accepts a float array corresponding to the raw waveform of the speech signal.
...
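The note added across all three files recommends Wav2Vec2-BERT 2.0 for fine-tuning; a minimal sketch of what starting from that checkpoint looks like (my addition, not part of the diff; it assumes the public facebook/w2v-bert-2.0 checkpoint, and vocab_size=32 is an illustrative stand-in for len(tokenizer) as in the linked guide):

```python
# Hedged sketch: starting a CTC fine-tune from the pretrained
# Wav2Vec2-BERT 2.0 checkpoint recommended in the note above.
# vocab_size=32 is an illustrative assumption (normally len(tokenizer));
# the CTC head is freshly initialized and must be trained.
import numpy as np
import torch
from transformers import AutoFeatureExtractor, Wav2Vec2BertForCTC

feature_extractor = AutoFeatureExtractor.from_pretrained("facebook/w2v-bert-2.0")
model = Wav2Vec2BertForCTC.from_pretrained("facebook/w2v-bert-2.0", vocab_size=32)

audio = np.zeros(16000, dtype=np.float32)  # placeholder: 1 s of 16 kHz audio
inputs = feature_extractor(audio, sampling_rate=16000, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits  # shape: (batch, frames, vocab_size)
```

Unlike the plain Wav2Vec2 models above, this checkpoint's feature extractor computes mel filter-bank features rather than consuming the raw waveform directly, which is why the sketch goes through AutoFeatureExtractor.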