chenpangpang / transformers · Commits · d55fcbcc
"git@developer.sourcefind.cn:chenpangpang/transformers.git" did not exist on "295466aae615a92d3a953fc5b6f6fb370b032b4c"
Unverified commit d55fcbcc
Authored Apr 06, 2022 by Jun, committed by GitHub on Apr 06, 2022
Parent: b18dfd95

fix default num_attention_heads in segformer doc (#16612)
Showing 1 changed file with 1 addition and 1 deletion:

src/transformers/models/segformer/configuration_segformer.py  (+1 −1)
src/transformers/models/segformer/configuration_segformer.py (view file @ d55fcbcc)

@@ -54,7 +54,7 @@ class SegformerConfig(PretrainedConfig):
             Patch size before each encoder block.
         strides (`List[int]`, *optional*, defaults to [4, 2, 2, 2]):
             Stride before each encoder block.
-        num_attention_heads (`List[int]`, *optional*, defaults to [1, 2, 4, 8]):
+        num_attention_heads (`List[int]`, *optional*, defaults to [1, 2, 5, 8]):
             Number of attention heads for each attention layer in each block of the Transformer encoder.
         mlp_ratios (`List[int]`, *optional*, defaults to [4, 4, 4, 4]):
             Ratio of the size of the hidden layer compared to the size of the input layer of the Mix FFNs in the
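
For context, the docstring change only brings the documentation in line with the constructor's existing default. A minimal sketch of how to check this, assuming the `transformers` package is installed and exposes `SegformerConfig` as usual:

# Minimal check (assumes `pip install transformers`): the constructor default
# for num_attention_heads is [1, 2, 5, 8], which the corrected docstring now states.
from transformers import SegformerConfig

config = SegformerConfig()
print(config.num_attention_heads)  # expected: [1, 2, 5, 8]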