ModelZoo / ResNet50_tensorflow

Commit 4d5e9122
Authored Mar 02, 2021 by A. Unique TensorFlower

Fix minor comment typos in BERT and MobileBERT APIs.

PiperOrigin-RevId: 360529962
Parent: 5a6b0958
Showing 4 changed files, with 4 additions and 4 deletions (+4, -4).
official/nlp/keras_nlp/encoders/bert_encoder.py (+1, -1)
official/nlp/modeling/layers/mobile_bert_layers_test.py (+1, -1)
official/nlp/modeling/networks/bert_encoder.py (+1, -1)
official/nlp/modeling/networks/mobile_bert_encoder_test.py (+1, -1)
official/nlp/keras_nlp/encoders/bert_encoder.py

@@ -60,7 +60,7 @@ class BertEncoder(tf.keras.Model):
     initializer: The initialzer to use for all weights in this encoder.
     output_range: The sequence output range, [0, output_range), by slicing the
       target sequence of the last transformer layer. `None` means the entire
-      target sequence will attend to the source sequence, which yeilds the full
+      target sequence will attend to the source sequence, which yields the full
       output.
     embedding_width: The width of the word embeddings. If the embedding width is
       not equal to hidden size, embedding parameters will be factorized into two
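The `output_range` behavior described in this docstring is easiest to see in code. Below is a minimal usage sketch, not part of this commit: the constructor arguments and the three-input call convention match the Model Garden encoders of this period, but exact signatures vary between versions, so treat it as illustrative.

import tensorflow as tf
from official.nlp.modeling.networks import bert_encoder

# Illustrative sketch: output_range=1 makes the last transformer layer
# compute only positions [0, 1) of the target sequence (e.g. just the
# [CLS] position for classification) instead of the full sequence.
encoder = bert_encoder.BertEncoder(
    vocab_size=100,
    hidden_size=32,
    num_layers=2,
    num_attention_heads=2,
    output_range=1)

word_ids = tf.zeros((2, 8), dtype=tf.int32)   # (batch, seq_len)
mask = tf.ones((2, 8), dtype=tf.int32)
type_ids = tf.zeros((2, 8), dtype=tf.int32)

outputs = encoder([word_ids, mask, type_ids])
# Depending on the Model Garden version, `outputs` is a
# [sequence_output, pooled_output] list or a dict with those keys; either
# way, output_range=1 yields a sequence output of shape (2, 1, 32) instead
# of (2, 8, 32), because only the [0, 1) slice attends to the source.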
official/nlp/modeling/layers/mobile_bert_layers_test.py

@@ -22,7 +22,7 @@ from official.nlp.modeling.networks import mobile_bert_encoder

 def generate_fake_input(batch_size=1, seq_len=5, vocab_size=10000, seed=0):
-  """Generate consisitant fake integer input sequences."""
+  """Generate consistent fake integer input sequences."""
   np.random.seed(seed)
   fake_input = []
   for _ in range(batch_size):
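The diff truncates `generate_fake_input` at the loop header. For readers who want to follow the test, here is a hedged reconstruction of the whole helper; the loop body and return statement are assumptions consistent with the shown signature and docstring, not lines from the commit.

import numpy as np

def generate_fake_input(batch_size=1, seq_len=5, vocab_size=10000, seed=0):
  """Generate consistent fake integer input sequences."""
  np.random.seed(seed)
  fake_input = []
  for _ in range(batch_size):
    # Assumed body: draw seq_len random token ids in [0, vocab_size).
    fake_input.append(np.random.randint(0, vocab_size, size=seq_len))
  return np.asarray(fake_input, dtype=np.int32)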
official/nlp/modeling/networks/bert_encoder.py

@@ -65,7 +65,7 @@ class BertEncoder(keras_nlp.encoders.BertEncoder):
       keyed by `encoder_outputs`.
     output_range: The sequence output range, [0, output_range), by slicing the
       target sequence of the last transformer layer. `None` means the entire
-      target sequence will attend to the source sequence, which yeilds the full
+      target sequence will attend to the source sequence, which yields the full
       output.
     embedding_width: The width of the word embeddings. If the embedding width is
       not equal to hidden size, embedding parameters will be factorized into two
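The `embedding_width` factorization this docstring refers to is ALBERT-style: a vocab_size x embedding_width lookup table followed by an embedding_width x hidden_size projection. A quick back-of-the-envelope count (illustrative numbers, not from the commit) shows why it helps:

# Parameter count for the word-embedding table, unfactorized vs factorized.
vocab_size, hidden_size, embedding_width = 30522, 768, 128

unfactorized = vocab_size * hidden_size          # one big matrix
factorized = (vocab_size * embedding_width       # lookup table
              + embedding_width * hidden_size)   # projection to hidden size
print(unfactorized, factorized)                  # 23440896 vs 4005120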
official/nlp/modeling/networks/mobile_bert_encoder_test.py

@@ -21,7 +21,7 @@ from official.nlp.modeling.networks import mobile_bert_encoder

 def generate_fake_input(batch_size=1, seq_len=5, vocab_size=10000, seed=0):
-  """Generate consisitant fake integer input sequences."""
+  """Generate consistent fake integer input sequences."""
   np.random.seed(seed)
   fake_input = []
   for _ in range(batch_size):
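Both test files carry an identical copy of this helper, and the docstring's word "consistent" is the point: seeding NumPy makes the fake batches reproducible across runs. A small usage sketch, assuming the reconstructed helper shown earlier:

import numpy as np

# Two calls with the same seed must return identical token-id arrays,
# which is what lets these tests assert on deterministic model inputs.
a = generate_fake_input(batch_size=2, seq_len=5, vocab_size=100, seed=0)
b = generate_fake_input(batch_size=2, seq_len=5, vocab_size=100, seed=0)
np.testing.assert_array_equal(a, b)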