"git@developer.sourcefind.cn:chenpangpang/transformers.git" did not exist on "295466aae615a92d3a953fc5b6f6fb370b032b4c"
Unverified commit d55fcbcc, authored by Jun, committed by GitHub

fix default num_attention_heads in segformer doc (#16612)

parent b18dfd95
@@ -54,7 +54,7 @@ class SegformerConfig(PretrainedConfig):
             Patch size before each encoder block.
         strides (`List[int]`, *optional*, defaults to [4, 2, 2, 2]):
             Stride before each encoder block.
-        num_attention_heads (`List[int]`, *optional*, defaults to [1, 2, 4, 8]):
+        num_attention_heads (`List[int]`, *optional*, defaults to [1, 2, 5, 8]):
             Number of attention heads for each attention layer in each block of the Transformer encoder.
         mlp_ratios (`List[int]`, *optional*, defaults to [4, 4, 4, 4]):
             Ratio of the size of the hidden layer compared to the size of the input layer of the Mix FFNs in the
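A quick way to confirm that the corrected docstring matches the implementation is to instantiate the config with no arguments and inspect the attribute. This is a minimal sketch, assuming a transformers release that includes SegFormer support:

    # Minimal check of the documented default (sketch; assumes a recent
    # `transformers` release with SegFormer support is installed).
    from transformers import SegformerConfig

    config = SegformerConfig()
    # The implementation default is [1, 2, 5, 8] (the MiT-b0 variant),
    # which is what the docstring should report after this fix.
    print(config.num_attention_heads)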