ModelZoo / ResNet50_tensorflow / Commits

Commit 2d62cf36
Authored Oct 08, 2020 by Chen Chen
Committed by A. Unique TensorFlower, Oct 08, 2020

internal change

PiperOrigin-RevId: 336043643
Parent: e8e987a6

Showing 1 changed file with 3 additions and 3 deletions:
official/nlp/configs/encoders.py (+3, -3)
official/nlp/configs/encoders.py @ 2d62cf36

@@ -63,7 +63,7 @@ class MobileBertEncoderConfig(hyperparams.Config):
     num_attention_heads: number of attention heads in the transformer block.
     intermediate_size: the size of the "intermediate" (a.k.a., feed forward)
       layer.
-    intermediate_act_fn: the non-linear activation function to apply to the
+    hidden_activation: the non-linear activation function to apply to the
       output of the intermediate/feed-forward layer.
     hidden_dropout_prob: dropout probability for the hidden layers.
     attention_probs_dropout_prob: dropout probability of the attention
@@ -89,7 +89,7 @@ class MobileBertEncoderConfig(hyperparams.Config):
   hidden_size: int = 512
   num_attention_heads: int = 4
   intermediate_size: int = 4096
-  intermediate_act_fn: str = "gelu"
+  hidden_activation: str = "gelu"
   hidden_dropout_prob: float = 0.1
   attention_probs_dropout_prob: float = 0.1
   intra_bottleneck_size: int = 1024
@@ -221,7 +221,7 @@ def build_encoder(
       hidden_size=encoder_cfg.hidden_size,
       num_attention_heads=encoder_cfg.num_attention_heads,
       intermediate_size=encoder_cfg.intermediate_size,
-      intermediate_act_fn=encoder_cfg.intermediate_act_fn,
+      intermediate_act_fn=encoder_cfg.hidden_activation,
       hidden_dropout_prob=encoder_cfg.hidden_dropout_prob,
       attention_probs_dropout_prob=encoder_cfg.attention_probs_dropout_prob,
       intra_bottleneck_size=encoder_cfg.intra_bottleneck_size,
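The effect of this commit can be sketched in isolation: the config field `intermediate_act_fn` is renamed to `hidden_activation`, while the encoder builder's keyword argument `intermediate_act_fn` is unchanged and simply reads the renamed attribute. The sketch below uses a plain dataclass as a stand-in for `hyperparams.Config`, and `build_encoder_kwargs` is a hypothetical helper standing in for the relevant part of `build_encoder`; neither is the actual Model Garden code.

```python
import dataclasses


@dataclasses.dataclass
class MobileBertEncoderConfigSketch:
    """Stand-in for MobileBertEncoderConfig (not the real hyperparams.Config)."""
    hidden_size: int = 512
    num_attention_heads: int = 4
    intermediate_size: int = 4096
    hidden_activation: str = "gelu"  # renamed from: intermediate_act_fn
    hidden_dropout_prob: float = 0.1
    attention_probs_dropout_prob: float = 0.1
    intra_bottleneck_size: int = 1024


def build_encoder_kwargs(cfg: MobileBertEncoderConfigSketch) -> dict:
    """Hypothetical helper mirroring the build_encoder hunk of the diff.

    The keyword passed to the encoder stays `intermediate_act_fn`; only the
    config attribute it is read from changes to `hidden_activation`.
    """
    return {
        "hidden_size": cfg.hidden_size,
        "num_attention_heads": cfg.num_attention_heads,
        "intermediate_size": cfg.intermediate_size,
        "intermediate_act_fn": cfg.hidden_activation,
        "hidden_dropout_prob": cfg.hidden_dropout_prob,
        "attention_probs_dropout_prob": cfg.attention_probs_dropout_prob,
        "intra_bottleneck_size": cfg.intra_bottleneck_size,
    }
```

Callers that set the old field name by keyword (`intermediate_act_fn="relu"`) would break after such a rename, which is why the change touches both the field definition and every read site in the same commit.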