gaoqiong / flash-attention · Commits · 5cee0714

Commit 5cee0714 (Unverified)
Authored Apr 12, 2023 by Tri Dao; committed by GitHub, Apr 12, 2023
Parents: 853ff729, 8c424156

Merge pull request #164 from ZhiyuanChen/patch-1

make mlp hidden_features defaults to 4*in_features
Showing 1 changed file with 1 addition and 1 deletion:

flash_attn/modules/mlp.py  (+1, -1)
flash_attn/modules/mlp.py  (view file @ 5cee0714)

@@ -17,7 +17,7 @@ class Mlp(nn.Module):
         factory_kwargs = {'device': device, 'dtype': dtype}
         super().__init__()
         out_features = out_features or in_features
-        hidden_features = hidden_features or in_features
+        hidden_features = hidden_features or in_features * 4
         self.return_residual = return_residual
         self.fc1 = nn.Linear(in_features, hidden_features, **factory_kwargs)
         self.activation = activation
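For context, a minimal runnable sketch of how the Mlp module behaves after this change. The lines up to self.activation come from the diff context above; fc2, the forward method, and the usage at the bottom are not shown in the diff and are assumptions following the usual Transformer MLP pattern.

import torch
import torch.nn as nn
import torch.nn.functional as F

class Mlp(nn.Module):
    def __init__(self, in_features, hidden_features=None, out_features=None,
                 activation=F.gelu, return_residual=False, device=None, dtype=None):
        factory_kwargs = {'device': device, 'dtype': dtype}
        super().__init__()
        out_features = out_features or in_features
        # The changed line: the hidden dimension now defaults to 4x the
        # input dimension, the conventional Transformer MLP expansion factor.
        hidden_features = hidden_features or in_features * 4
        self.return_residual = return_residual
        self.fc1 = nn.Linear(in_features, hidden_features, **factory_kwargs)
        self.activation = activation
        # fc2 and forward() below are assumptions, not part of the diff context.
        self.fc2 = nn.Linear(hidden_features, out_features, **factory_kwargs)

    def forward(self, x):
        y = self.fc2(self.activation(self.fc1(x)))
        return (y, x) if self.return_residual else y

mlp = Mlp(in_features=512)   # hidden_features now defaults to 512 * 4 = 2048
print(mlp.fc1)               # Linear(in_features=512, out_features=2048, bias=True)
print(mlp(torch.randn(2, 512)).shape)   # torch.Size([2, 512])

Before this commit, omitting hidden_features silently produced a 1x (non-expanding) MLP; the new default matches the 4*in_features convention named in the commit message.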