chenpangpang / transformers · Commits
Commit e9ef2117 (unverified), authored Jun 22, 2020 by Patrick von Platen; committed by GitHub, Jun 22, 2020.

improve doc (#5185)

Parent: ebc36108
Showing 1 changed file with 2 additions and 0 deletions.

src/transformers/configuration_t5.py (+2, -0):
@@ -41,6 +41,8 @@ class T5Config(PretrainedConfig):
         vocab_size_or_config_json_file: Vocabulary size of `inputs_ids` in `T5Model`.
         d_model: Size of the encoder layers and the pooler layer. `d_model` can also be accessed via the property `hidden_size`.
         num_layers: Number of hidden layers in the Transformer encoder. `num_layers` can also be accessed via the property `num_hidden_layers`.
+        d_kv: Size of the key, query, value projections per attention head. `d_kv` has to be equal to `d_model // num_heads`.
+        d_ff: Size of the intermediate feed forward layer in each `T5Block`.
         num_heads: Number of attention heads for each attention layer in
             the Transformer encoder. `num_heads` can also be accessed via the property `num_attention_heads`.
         intermediate_size: The size of the "intermediate" (i.e., feed-forward)
...
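The two added lines document `d_kv` and `d_ff` alongside the existing size parameters. A minimal sketch of how these fields relate, assuming the `transformers` library around the time of this commit (the values below are illustrative defaults matching t5-small, not part of the diff):

from transformers import T5Config

# Instantiate a T5 configuration and check the constraint stated in the
# new docstring line: d_kv == d_model // num_heads.
config = T5Config(d_model=512, d_kv=64, d_ff=2048, num_layers=6, num_heads=8)
assert config.d_kv == config.d_model // config.num_heads  # 64 == 512 // 8

# The properties mentioned in the docstring alias the T5-specific names.
print(config.hidden_size)          # alias for d_model -> 512
print(config.num_hidden_layers)    # alias for num_layers -> 6
print(config.num_attention_heads)  # alias for num_heads -> 8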