gaoqiong / flash-attention

Unverified commit 8c424156, authored Apr 13, 2023 by Zhiyuan Chen; committed by GitHub, Apr 13, 2023

make mlp hidden_features defaults to 4*in_features
Parent: 853ff729

Showing 1 changed file with 1 addition and 1 deletion.

flash_attn/modules/mlp.py (+1, -1)
@@ -17,7 +17,7 @@ class Mlp(nn.Module):
         factory_kwargs = {'device': device, 'dtype': dtype}
         super().__init__()
         out_features = out_features or in_features
-        hidden_features = hidden_features or in_features
+        hidden_features = hidden_features or in_features * 4
         self.return_residual = return_residual
         self.fc1 = nn.Linear(in_features, hidden_features, **factory_kwargs)
         self.activation = activation
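The effect of the one-line change above can be sketched in isolation. Before this commit, an unset hidden_features silently collapsed to in_features; afterwards it defaults to 4 * in_features, the conventional Transformer feed-forward expansion ratio. The helper below is a hypothetical illustration of the defaulting logic only, not part of the flash_attn API:

```python
def resolve_mlp_dims(in_features, hidden_features=None, out_features=None):
    """Hypothetical helper mirroring the defaulting in Mlp.__init__
    after commit 8c424156."""
    # out_features still falls back to in_features, as before the commit.
    out_features = out_features or in_features
    # New default: a 4x expansion instead of in_features.
    # Note: `or` also treats an explicit 0 as "unset".
    hidden_features = hidden_features or in_features * 4
    return hidden_features, out_features


# With no hidden_features given, a 256-wide input now yields a 1024-wide
# hidden layer; an explicit value is respected unchanged.
print(resolve_mlp_dims(256))                       # (1024, 256)
print(resolve_mlp_dims(256, hidden_features=512))  # (512, 256)
```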