chenpangpang / transformers · Commits

Commit 183fedfe
Authored Jul 15, 2019 by thomwolf

    fix doc on python2

Parent: 0e9825e2

Showing 1 changed file with 7 additions and 7 deletions:

pytorch_transformers/modeling_bert.py  (+7, -7)
@@ -642,7 +642,7 @@ BERT_INPUTS_DOCSTRING = r"""
 @add_start_docstrings("The bare Bert Model transformer outputing raw hidden-states without any specific head on top.",
                       BERT_START_DOCSTRING, BERT_INPUTS_DOCSTRING)
 class BertModel(BertPreTrainedModel):
-    r"""
+    __doc__ = r"""
     Outputs: `Tuple` comprising various elements depending on the configuration (config) and inputs:
         **last_hidden_state**: ``torch.FloatTensor`` of shape ``(batch_size, sequence_length, hidden_size)``
             Sequence of hidden-states at the last layer of the model.
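For context, each changed class is wrapped by the `add_start_docstrings` decorator, which prepends shared fragments such as `BERT_START_DOCSTRING` and `BERT_INPUTS_DOCSTRING` to the class's `__doc__`. A minimal sketch of that pattern (a hypothetical reimplementation, not the library's exact code; the stub class and the strings are illustrative):

```python
def add_start_docstrings(*docstr):
    """Prepend shared docstring fragments to a function's or class's __doc__."""
    def decorator(obj):
        # On Python 3 this assignment also works on classes; on Python 2,
        # __doc__ of a new-style class is read-only and this line raises
        # AttributeError -- presumably the breakage behind "fix doc on python2".
        obj.__doc__ = "".join(docstr) + (obj.__doc__ or "")
        return obj
    return decorator

# Illustrative stand-ins for the module-level docstring fragments.
BERT_START_DOCSTRING = "Shared intro. "

@add_start_docstrings("The bare model. ", BERT_START_DOCSTRING)
class BertModel(object):
    r"""Outputs: ..."""
```

After decoration, `BertModel.__doc__` is the concatenation of the decorator arguments followed by the class's own docstring.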
@@ -738,7 +738,7 @@ class BertModel(BertPreTrainedModel):
     a `masked language modeling` head and a `next sentence prediction (classification)` head. """,
     BERT_START_DOCSTRING, BERT_INPUTS_DOCSTRING)
 class BertForPreTraining(BertPreTrainedModel):
-    r"""
+    __doc__ = r"""
         **masked_lm_labels**: (`optional`) ``torch.LongTensor`` of shape ``(batch_size, sequence_length)``:
             Labels for computing the masked language modeling loss.
             Indices should be in ``[-1, 0, ..., config.vocab_size]`` (see ``input_ids`` docstring)
@@ -814,7 +814,7 @@ class BertForPreTraining(BertPreTrainedModel):
 @add_start_docstrings("""Bert Model transformer BERT model with a `language modeling` head on top. """,
                       BERT_START_DOCSTRING, BERT_INPUTS_DOCSTRING)
 class BertForMaskedLM(BertPreTrainedModel):
-    r"""
+    __doc__ = r"""
         **masked_lm_labels**: (`optional`) ``torch.LongTensor`` of shape ``(batch_size, sequence_length)``:
             Labels for computing the masked language modeling loss.
             Indices should be in ``[-1, 0, ..., config.vocab_size]`` (see ``input_ids`` docstring)
@@ -879,7 +879,7 @@ class BertForMaskedLM(BertPreTrainedModel):
 @add_start_docstrings("""Bert Model transformer BERT model with a `next sentence prediction (classification)` head on top. """,
                       BERT_START_DOCSTRING, BERT_INPUTS_DOCSTRING)
 class BertForNextSentencePrediction(BertPreTrainedModel):
-    r"""
+    __doc__ = r"""
         **next_sentence_label**: (`optional`) ``torch.LongTensor`` of shape ``(batch_size,)``:
             Labels for computing the next sequence prediction (classification) loss. Input should be a sequence pair (see ``input_ids`` docstring)
             Indices should be in ``[0, 1]``.
@@ -937,7 +937,7 @@ class BertForNextSentencePrediction(BertPreTrainedModel):
     the pooled output) e.g. for GLUE tasks. """,
     BERT_START_DOCSTRING, BERT_INPUTS_DOCSTRING)
 class BertForSequenceClassification(BertPreTrainedModel):
-    r"""
+    __doc__ = r"""
         **labels**: (`optional`) ``torch.LongTensor`` of shape ``(batch_size,)``:
             Labels for computing the sequence classification/regression loss.
             Indices should be in ``[0, ..., config.num_labels]``.
@@ -1005,7 +1005,7 @@ class BertForSequenceClassification(BertPreTrainedModel):
     the pooled output and a softmax) e.g. for RocStories/SWAG tasks. """,
     BERT_START_DOCSTRING)
 class BertForMultipleChoice(BertPreTrainedModel):
-    r"""
+    __doc__ = r"""
     Inputs:
         **input_ids**: ``torch.LongTensor`` of shape ``(batch_size, num_choices, sequence_length)``:
             Indices of input sequence tokens in the vocabulary.
@@ -1110,7 +1110,7 @@ class BertForMultipleChoice(BertPreTrainedModel):
     the hidden-states output) e.g. for Named-Entity-Recognition (NER) tasks. """,
     BERT_START_DOCSTRING, BERT_INPUTS_DOCSTRING)
 class BertForTokenClassification(BertPreTrainedModel):
-    r"""
+    __doc__ = r"""
         **labels**: (`optional`) ``torch.LongTensor`` of shape ``(batch_size, sequence_length)``:
             Labels for computing the token classification loss.
             Indices should be in ``[0, ..., config.num_labels]``.
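Every hunk in this commit makes the same one-line change: the implicit class docstring `r"""` becomes an explicit `__doc__ = r"""` assignment in the class body. Both forms leave the same string in the class namespace, since a docstring is simply the `__doc__` entry of the class dict. The class names below are hypothetical, and the Python 2 angle is a reading of the commit message: in Python 2, `__doc__` of a new-style class is read-only after the class is created (a limitation lifted in Python 3.3), so defining it in the class body is the portable spot to set it.

```python
class WithDocstring(object):
    r"""Model docs."""  # implicit: stored as WithDocstring.__doc__

class WithExplicitDoc(object):
    __doc__ = r"""Model docs."""  # explicit assignment in the class body
```

Both classes end up with the identical `__doc__` string, which is why the commit can swap one form for the other without changing the generated documentation.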