chenpangpang / transformers / Commits

Unverified commit 123b597f
Authored Jun 02, 2021 by Gunjan Chhablani; committed by GitHub on Jun 02, 2021

Fix examples (#11990)

Parent: 88ca6a23
Showing 1 changed file with 15 additions and 1 deletion.

src/transformers/models/visual_bert/modeling_visual_bert.py  (+15, -1)
...
@@ -728,6 +728,9 @@ class VisualBertModel(VisualBertPreTrainedModel):
         return_dict=None,
     ):
         r"""
+
+        Returns:
+
         Example::

             >>> # Assumption: `get_visual_embeddings(image)` gets the visual embeddings of the image.
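For reference, the example this docstring introduces amounts to the usage below. This is a hedged sketch, not part of the commit: the checkpoint name `uclanlp/visualbert-vqa-coco-pre`, the (1, 36, 2048) feature shape, and the random tensor standing in for `get_visual_embeddings(image)` are assumptions.

# Hedged sketch of VisualBertModel usage. `get_visual_embeddings(image)` is an
# external feature extractor not provided by transformers, so random features
# of an assumed shape stand in for it here.
import torch
from transformers import BertTokenizer, VisualBertModel

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = VisualBertModel.from_pretrained("uclanlp/visualbert-vqa-coco-pre")  # assumed checkpoint

inputs = tokenizer("What is the man eating?", return_tensors="pt")

# Placeholder for `get_visual_embeddings(image)`:
# (batch_size, visual_seq_length, visual_embedding_dim), dim assumed to be 2048.
visual_embeds = torch.randn(1, 36, 2048)
visual_token_type_ids = torch.ones(visual_embeds.shape[:-1], dtype=torch.long)
visual_attention_mask = torch.ones(visual_embeds.shape[:-1], dtype=torch.float)

inputs.update(
    {
        "visual_embeds": visual_embeds,
        "visual_token_type_ids": visual_token_type_ids,
        "visual_attention_mask": visual_attention_mask,
    }
)

outputs = model(**inputs)
last_hidden_state = outputs.last_hidden_state  # (1, text_len + 36, hidden_size)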
...
@@ -907,7 +910,7 @@ class VisualBertForPreTraining(VisualBertPreTrainedModel):
             - 0 indicates sequence B is a matching pair of sequence A for the given image,
             - 1 indicates sequence B is a random sequence w.r.t A for the given image.
 
         Returns:
 
         Example::
...
@@ -1016,6 +1019,7 @@ class VisualBertForMultipleChoice(VisualBertPreTrainedModel):
     @add_start_docstrings_to_model_forward(
         VISUAL_BERT_INPUTS_DOCSTRING.format("batch_size, num_choices, sequence_length")
     )
+    @replace_return_docstrings(output_type=MultipleChoiceModelOutput, config_class=_CONFIG_FOR_DOC)
     def forward(
         self,
         input_ids=None,
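Each of the decorator additions in this commit pairs with one of the `Returns:` insertions: in transformers, `replace_return_docstrings` looks for a `Returns:` line in the forward docstring and substitutes the documentation generated from `output_type` there, so the decorator only has an effect once that placeholder exists. A simplified sketch of the idea (an illustration, not the library's actual implementation):

# Simplified sketch of a "replace return docstrings" decorator: it rewrites the
# wrapped function's __doc__, filling in a generated description of the output
# class at the `Returns:` marker. Not the transformers implementation.
def replace_return_docstrings_sketch(output_type):
    def docstring_decorator(fn):
        generated = (
            "Returns:\n"
            f"    :class:`{output_type.__name__}`: fields generated from the output class."
        )
        if fn.__doc__ is None or "Returns:" not in fn.__doc__:
            raise ValueError(f"{fn.__name__} has no 'Returns:' placeholder to replace.")
        fn.__doc__ = fn.__doc__.replace("Returns:", generated, 1)
        return fn
    return docstring_decorator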
...
@@ -1039,6 +1043,8 @@ class VisualBertForMultipleChoice(VisualBertPreTrainedModel):
             num_choices-1]`` where :obj:`num_choices` is the size of the second dimension of the input tensors.
             (See :obj:`input_ids` above)
 
+        Returns:
+
         Example::
 
             >>> from transformers import BertTokenizer, VisualBertForMultipleChoice
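The multiple-choice example imported here differs from the base usage mainly in shape: every input carries an extra `num_choices` dimension. A hedged sketch under assumed checkpoint and feature sizes:

# Hedged sketch, not part of the commit. Text inputs are encoded once per
# choice so every tensor has shape (batch_size, num_choices, ...). The
# checkpoint name and the random visual features are assumptions.
import torch
from transformers import BertTokenizer, VisualBertForMultipleChoice

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = VisualBertForMultipleChoice.from_pretrained("uclanlp/visualbert-vcr")  # assumed checkpoint

prompt = "What is the man doing?"
choices = ["He is eating.", "He is sleeping."]

encoding = tokenizer([prompt] * len(choices), choices, return_tensors="pt", padding=True)
inputs = {k: v.unsqueeze(0) for k, v in encoding.items()}  # (num_choices, seq) -> (1, num_choices, seq)

# Placeholder visual features, one set per choice:
# (batch_size, num_choices, visual_seq_length, visual_embedding_dim), dim assumed to be 512.
visual_embeds = torch.randn(1, len(choices), 36, 512)
visual_token_type_ids = torch.ones(visual_embeds.shape[:-1], dtype=torch.long)
visual_attention_mask = torch.ones(visual_embeds.shape[:-1], dtype=torch.float)

outputs = model(
    **inputs,
    visual_embeds=visual_embeds,
    visual_token_type_ids=visual_token_type_ids,
    visual_attention_mask=visual_attention_mask,
    labels=torch.tensor([0]),  # index of the correct choice
)
loss, logits = outputs.loss, outputs.logits  # logits: (1, num_choices)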
...
@@ -1160,6 +1166,7 @@ class VisualBertForQuestionAnswering(VisualBertPreTrainedModel):
         self.init_weights()
 
     @add_start_docstrings_to_model_forward(VISUAL_BERT_INPUTS_DOCSTRING.format("batch_size, sequence_length"))
+    @replace_return_docstrings(output_type=SequenceClassifierOutput, config_class=_CONFIG_FOR_DOC)
     def forward(
         self,
         input_ids=None,
...
@@ -1182,6 +1189,7 @@ class VisualBertForQuestionAnswering(VisualBertPreTrainedModel):
             Labels for computing the sequence classification/regression loss. Indices should be in :obj:`[0, ...,
             config.num_labels - 1]`. A KLDivLoss is computed between the labels and the returned logits.
 
+        Returns:
 
         Example::
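Since this head documents a KLDivLoss against the logits, `labels` is a float score vector over the answer vocabulary rather than a single class index. A hedged sketch (the checkpoint name, the feature shape, and the answer index are assumptions):

# Hedged sketch, not part of the commit. Labels are soft VQA targets of shape
# (batch_size, config.num_labels); random features stand in for
# `get_visual_embeddings(image)`.
import torch
from transformers import BertTokenizer, VisualBertForQuestionAnswering

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = VisualBertForQuestionAnswering.from_pretrained("uclanlp/visualbert-vqa")  # assumed checkpoint

inputs = tokenizer("Who is eating the apple?", return_tensors="pt")

visual_embeds = torch.randn(1, 36, 2048)  # placeholder features, dim assumed to be 2048
visual_token_type_ids = torch.ones(visual_embeds.shape[:-1], dtype=torch.long)
visual_attention_mask = torch.ones(visual_embeds.shape[:-1], dtype=torch.float)

labels = torch.zeros(1, model.config.num_labels)
labels[0, 42] = 1.0  # hypothetical index of the annotated answer

outputs = model(
    **inputs,
    visual_embeds=visual_embeds,
    visual_token_type_ids=visual_token_type_ids,
    visual_attention_mask=visual_attention_mask,
    labels=labels,
)
loss, logits = outputs.loss, outputs.logits  # logits: (1, num_labels)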
...
@@ -1280,6 +1288,7 @@ class VisualBertForVisualReasoning(VisualBertPreTrainedModel):
         self.init_weights()
 
     @add_start_docstrings_to_model_forward(VISUAL_BERT_INPUTS_DOCSTRING.format("batch_size, sequence_length"))
+    @replace_return_docstrings(output_type=SequenceClassifierOutput, config_class=_CONFIG_FOR_DOC)
     def forward(
         self,
         input_ids=None,
...
@@ -1302,6 +1311,8 @@ class VisualBertForVisualReasoning(VisualBertPreTrainedModel):
             Labels for computing the sequence classification/regression loss. Indices should be in :obj:`[0, ...,
             config.num_labels - 1]`. A classification loss is computed (Cross-Entropy) against these labels.
 
+        Returns:
+
         Example::
 
             >>> # Assumption: `get_visual_embeddings(image)` gets the visual embeddings of the image in the batch.
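For the visual-reasoning head, labels are plain class indices and a cross-entropy loss is computed against the logits. A hedged sketch in the same style (the checkpoint name, the feature shape, and the label convention are assumptions):

# Hedged sketch, not part of the commit. The NLVR2 checkpoint name, the
# (1, 36, 1024) feature shape, and the meaning of label 1 are assumptions;
# random features stand in for `get_visual_embeddings(image)`.
import torch
from transformers import BertTokenizer, VisualBertForVisualReasoning

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = VisualBertForVisualReasoning.from_pretrained("uclanlp/visualbert-nlvr2")  # assumed checkpoint

inputs = tokenizer("There are two dogs in the image.", return_tensors="pt")

visual_embeds = torch.randn(1, 36, 1024)  # placeholder visual features
visual_token_type_ids = torch.ones(visual_embeds.shape[:-1], dtype=torch.long)
visual_attention_mask = torch.ones(visual_embeds.shape[:-1], dtype=torch.float)

labels = torch.tensor([1])  # hypothetical: statement judged true for the image

outputs = model(
    **inputs,
    visual_embeds=visual_embeds,
    visual_token_type_ids=visual_token_type_ids,
    visual_attention_mask=visual_attention_mask,
    labels=labels,
)
loss, logits = outputs.loss, outputs.logits  # logits: (1, config.num_labels)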
...
@@ -1433,6 +1444,7 @@ class VisualBertForRegionToPhraseAlignment(VisualBertPreTrainedModel):
         self.init_weights()
 
     @add_start_docstrings_to_model_forward(VISUAL_BERT_INPUTS_DOCSTRING.format("batch_size, sequence_length"))
+    @replace_return_docstrings(output_type=SequenceClassifierOutput, config_class=_CONFIG_FOR_DOC)
     def forward(
         self,
         input_ids=None,
...
@@ -1459,6 +1471,8 @@ class VisualBertForRegionToPhraseAlignment(VisualBertPreTrainedModel):
             Labels for computing the masked language modeling loss. KLDivLoss is computed against these labels and
             the outputs from the attention layer.
 
+        Returns:
+
         Example::
 
             >>> # Assumption: `get_visual_embeddings(image)` gets the visual embeddings of the image in the batch.