chenpangpang / transformers · Commits · f4786d7f

Unverified commit f4786d7f, authored Jan 18, 2023 by Jordi Mas, committed by GitHub on Jan 18, 2023.

Fix typos in documentation (#21160)

* Fix typos in documentation
* Small fix
* Fix formatting
parent defdcd28

Showing 2 changed files with 3 additions and 3 deletions (+3 −3)
src/transformers/models/speech_to_text/feature_extraction_speech_to_text.py  +1 −1
src/transformers/models/whisper/feature_extraction_whisper.py  +2 −2
src/transformers/models/speech_to_text/feature_extraction_speech_to_text.py

@@ -169,7 +169,7 @@ class Speech2TextFeatureExtractor(SequenceFeatureExtractor):
 <Tip>
-For Speech2TextTransoformer models, `attention_mask` should alwys be passed for batched inference, to
+For Speech2TextTransformer models, `attention_mask` should always be passed for batched inference, to
 avoid subtle bugs.
 </Tip>
src/transformers/models/whisper/feature_extraction_whisper.py

@@ -249,8 +249,8 @@ class WhisperFeatureExtractor(SequenceFeatureExtractor):
 <Tip>
-For Whisper Transoformer models, `attention_mask` should alwys be passed for batched inference, to avoid
-subtle bugs.
+For Whisper models, `attention_mask` should always be passed for batched inference, to avoid
+subtle bugs.
 </Tip>
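The docstring tip corrected in this commit advises always passing `attention_mask` for batched inference. A minimal, library-free sketch of why that matters: batching variable-length audio requires padding, and without a mask any statistic computed over the batch treats padding frames as real signal. The helper names below (`pad_batch`, `masked_mean`) are hypothetical illustrations, not the transformers API.

```python
def pad_batch(sequences, pad_value=0.0):
    """Pad variable-length sequences to the max length; return (padded, attention_mask)."""
    max_len = max(len(s) for s in sequences)
    padded, mask = [], []
    for s in sequences:
        extra = max_len - len(s)
        padded.append(list(s) + [pad_value] * extra)   # real frames, then padding
        mask.append([1] * len(s) + [0] * extra)        # 1 = real frame, 0 = padding
    return padded, mask

def masked_mean(values, mask):
    """Mean over real (unmasked) frames only."""
    return sum(v for v, m in zip(values, mask) if m) / sum(mask)

features, attention_mask = pad_batch([[1.0, 2.0, 3.0], [4.0, 5.0]])
naive = sum(features[1]) / len(features[1])            # 3.0 — padding counted as audio
masked = masked_mean(features[1], attention_mask[1])   # 4.5 — padding excluded
```

The gap between `naive` and `masked` is exactly the kind of silent, subtle bug the Tip warns about: results degrade quietly only for the shorter items in a batch.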