Commit 50573c64 (unverified)
Authored by Stas Bekman on Aug 28, 2023; committed by GitHub on Aug 28, 2023
[idefics] fix vision's `hidden_act` (#25787)
Parent: 886b6be0
Showing 1 changed file with 2 additions and 2 deletions
src/transformers/models/idefics/configuration_idefics.py  (+2, -2)
@@ -57,7 +57,7 @@ class IdeficsVisionConfig(PretrainedConfig):
             Number of attention heads for each attention layer in the Transformer encoder.
         image_num_channels (`int`, *optional*, defaults to `3`):
             Number of image channels.
-        hidden_act (`str` or `function`, *optional*, defaults to `"quick_gelu"`):
+        hidden_act (`str` or `function`, *optional*, defaults to `"gelu"`):
             The non-linear activation function (function or string) in the encoder and pooler. If string, `"gelu"`,
             `"relu"`, `"selu"` and `"gelu_new"` ``"quick_gelu"` are supported.
         layer_norm_eps (`float`, *optional*, defaults to 1e-5):
@@ -86,7 +86,7 @@ class IdeficsVisionConfig(PretrainedConfig):
         num_hidden_layers=32,
         num_attention_heads=16,
         num_channels=3,
-        hidden_act="quick_gelu",
+        hidden_act="gelu",
         layer_norm_eps=1e-5,
         attention_dropout=0.0,
         initializer_range=0.02,
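For context, the commit only changes the default value of `hidden_act` in the vision config. Below is a minimal sketch of its effect, assuming a transformers install that includes this commit; the `IdeficsVisionConfig` import path comes from the file shown above, and `ACT2FN` is the library's standard string-to-activation mapping, not part of this diff.

# Minimal sketch: confirm the new default and how the string is resolved.
# Assumes a transformers version that includes this commit.
from transformers.models.idefics.configuration_idefics import IdeficsVisionConfig
from transformers.activations import ACT2FN

config = IdeficsVisionConfig()
print(config.hidden_act)   # "gelu" after this commit (previously "quick_gelu")

# Model code looks the string up in ACT2FN to get the activation module.
# "gelu" and "quick_gelu" (x * sigmoid(1.702 * x)) are numerically different
# functions, so the default matters when loading pretrained checkpoints.
act_fn = ACT2FN[config.hidden_act]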