"...git@developer.sourcefind.cn:chenpangpang/transformers.git" did not exist on "abb1fa3f374811ea09d0bc3440d820c50735008d"
Unverified Commit e830495c authored by Thien Tran, committed by GitHub

Fix data2vec-audio note about attention mask (#27116)

parent 16043211
@@ -786,12 +786,11 @@ DATA2VEC_AUDIO_INPUTS_DOCSTRING = r"""
<Tip warning={true}>
- `attention_mask` should only be passed if the corresponding processor has `config.return_attention_mask ==
- True`. For all models whose processor has `config.return_attention_mask == False`, such as
- [data2vec-audio-base](https://huggingface.co/facebook/data2vec-audio-base-960h), `attention_mask` should
- **not** be passed to avoid degraded performance when doing batched inference. For such models
- `input_values` should simply be padded with 0 and passed without `attention_mask`. Be aware that these
- models also yield slightly different results depending on whether `input_values` is padded or not.
+ `attention_mask` should be passed if the corresponding processor has `config.return_attention_mask ==
+ True`, which is the case for all pre-trained Data2Vec Audio models. Be aware that even with
+ `attention_mask`, zero-padded inputs will have slightly different outputs compared to non-padded inputs
+ because there is more than one convolutional layer in the positional encodings. For a more detailed
+ explanation, see [here](https://github.com/huggingface/transformers/issues/25621#issuecomment-1713759349).
</Tip>
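For context, here is a minimal sketch (not part of the commit) of the batched-inference pattern the updated note describes, assuming the `facebook/data2vec-audio-base-960h` checkpoint, whose processor returns an `attention_mask`; the dummy waveforms and the final comparison are illustrative only:

```python
# Sketch of batched inference per the updated note: the processors of the
# pre-trained Data2Vec Audio checkpoints return an attention_mask, and it
# should be forwarded to the model.
import torch
from transformers import AutoProcessor, Data2VecAudioModel

checkpoint = "facebook/data2vec-audio-base-960h"  # example checkpoint
processor = AutoProcessor.from_pretrained(checkpoint)
model = Data2VecAudioModel.from_pretrained(checkpoint)

# Two dummy waveforms of different lengths; padding makes them batchable.
waveforms = [torch.randn(16000).numpy(), torch.randn(24000).numpy()]
inputs = processor(waveforms, sampling_rate=16000, padding=True, return_tensors="pt")

with torch.no_grad():
    # attention_mask marks which samples are real audio and which are zero-padding.
    padded_out = model(inputs.input_values, attention_mask=inputs.attention_mask)

# The caveat from the note: even with attention_mask, the padded batch does not
# exactly reproduce the unpadded result for the same audio.
single = processor(waveforms[0], sampling_rate=16000, return_tensors="pt")
with torch.no_grad():
    unpadded = model(single.input_values).last_hidden_state

n = unpadded.shape[1]
max_diff = (padded_out.last_hidden_state[0, :n] - unpadded[0]).abs().max().item()
print(f"max abs difference: {max_diff:.2e}")  # small but non-zero
```

The non-zero difference printed at the end is the behavior the note warns about: the convolutional positional encoding also convolves over the zero-padded region, so padded and unpadded runs of the same audio do not match exactly.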