Project: chenpangpang / transformers

Unverified commit 04c446f7, authored Dec 08, 2020 by Sylvain Gugger, committed by GitHub on Dec 08, 2020
Parent: 0d9e6ca9

Make `ModelOutput` pickle-able (#8989)

Showing 1 changed file with 9 additions and 9 deletions (+9, -9) in src/transformers/modeling_outputs.py.
@@ -41,7 +41,7 @@ class BaseModelOutput(ModelOutput):
             heads.
     """
-    last_hidden_state: torch.FloatTensor
+    last_hidden_state: torch.FloatTensor = None
     hidden_states: Optional[Tuple[torch.FloatTensor]] = None
     attentions: Optional[Tuple[torch.FloatTensor]] = None
@@ -71,7 +71,7 @@ class BaseModelOutputWithPooling(ModelOutput):
             heads.
     """
-    last_hidden_state: torch.FloatTensor
+    last_hidden_state: torch.FloatTensor = None
     pooler_output: torch.FloatTensor = None
     hidden_states: Optional[Tuple[torch.FloatTensor]] = None
     attentions: Optional[Tuple[torch.FloatTensor]] = None
@@ -107,7 +107,7 @@ class BaseModelOutputWithPast(ModelOutput):
             heads.
     """
-    last_hidden_state: torch.FloatTensor
+    last_hidden_state: torch.FloatTensor = None
     past_key_values: Optional[List[torch.FloatTensor]] = None
     hidden_states: Optional[Tuple[torch.FloatTensor]] = None
     attentions: Optional[Tuple[torch.FloatTensor]] = None
@@ -140,7 +140,7 @@ class BaseModelOutputWithCrossAttentions(ModelOutput):
             weighted average in the cross-attention heads.
     """
-    last_hidden_state: torch.FloatTensor
+    last_hidden_state: torch.FloatTensor = None
     hidden_states: Optional[Tuple[torch.FloatTensor]] = None
     attentions: Optional[Tuple[torch.FloatTensor]] = None
     cross_attentions: Optional[Tuple[torch.FloatTensor]] = None
@@ -177,7 +177,7 @@ class BaseModelOutputWithPoolingAndCrossAttentions(ModelOutput):
             weighted average in the cross-attention heads.
     """
-    last_hidden_state: torch.FloatTensor
+    last_hidden_state: torch.FloatTensor = None
     pooler_output: torch.FloatTensor = None
     hidden_states: Optional[Tuple[torch.FloatTensor]] = None
     attentions: Optional[Tuple[torch.FloatTensor]] = None
@@ -220,7 +220,7 @@ class BaseModelOutputWithPastAndCrossAttentions(ModelOutput):
             weighted average in the cross-attention heads.
     """
-    last_hidden_state: torch.FloatTensor
+    last_hidden_state: torch.FloatTensor = None
     past_key_values: Optional[List[torch.FloatTensor]] = None
     hidden_states: Optional[Tuple[torch.FloatTensor]] = None
     attentions: Optional[Tuple[torch.FloatTensor]] = None
@@ -277,7 +277,7 @@ class Seq2SeqModelOutput(ModelOutput):
             self-attention heads.
     """
-    last_hidden_state: torch.FloatTensor
+    last_hidden_state: torch.FloatTensor = None
     past_key_values: Optional[List[torch.FloatTensor]] = None
     decoder_hidden_states: Optional[Tuple[torch.FloatTensor]] = None
     decoder_attentions: Optional[Tuple[torch.FloatTensor]] = None
@@ -310,7 +310,7 @@ class CausalLMOutput(ModelOutput):
             heads.
     """
-    loss: Optional[torch.FloatTensor]
+    loss: Optional[torch.FloatTensor] = None
     logits: torch.FloatTensor = None
     hidden_states: Optional[Tuple[torch.FloatTensor]] = None
     attentions: Optional[Tuple[torch.FloatTensor]] = None
@@ -381,7 +381,7 @@ class CausalLMOutputWithCrossAttentions(ModelOutput):
             cross-attention heads.
     """
-    loss: Optional[torch.FloatTensor]
+    loss: Optional[torch.FloatTensor] = None
     logits: torch.FloatTensor = None
     hidden_states: Optional[Tuple[torch.FloatTensor]] = None
     attentions: Optional[Tuple[torch.FloatTensor]] = None
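
Each hunk makes the same change: the one field that previously had no default (`last_hidden_state` or `loss`) now defaults to `None`, so every `ModelOutput` subclass can be constructed with zero arguments. That matters for pickling because `ModelOutput` is an `OrderedDict` subclass in this version of the library, and `OrderedDict.__reduce__` rebuilds an instance by calling the class with no arguments before restoring its contents; a required field makes that reconstruction fail. The following is a minimal sketch of that failure mode and of the fix, under those assumptions, using made-up classes (`BrokenOutput`, `FixedOutput`) and a plain float rather than the library's real `ModelOutput` and tensors:

import pickle
from collections import OrderedDict
from dataclasses import dataclass
from typing import Optional


@dataclass
class BrokenOutput(OrderedDict):
    # No default: the class cannot be called with zero arguments,
    # which is what unpickling an OrderedDict subclass attempts to do.
    value: float


@dataclass
class FixedOutput(OrderedDict):
    # With a default, FixedOutput() is valid, so unpickling can succeed.
    value: Optional[float] = None


good = FixedOutput(value=1.0)
assert pickle.loads(pickle.dumps(good)).value == 1.0

try:
    pickle.loads(pickle.dumps(BrokenOutput(value=1.0)))
except TypeError as exc:
    # e.g. "__init__() missing 1 required positional argument: 'value'"
    print("round-trip failed:", exc)

The sketch only illustrates the mechanism; the diff above is the actual change, and callers that already pass `last_hidden_state` (or `loss`) explicitly are unaffected by the new defaults.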