Commit 76779363 (unverified)
Authored Jun 01, 2020 by Sylvain Gugger; committed via GitHub on Jun 01, 2020

Make docstring match args (#4711)

Parent: 6449c494
Changes: 5 changed files with 6 additions and 6 deletions (+6 −6)
src/transformers/modeling_bart.py        +2 −2
src/transformers/modeling_gpt2.py        +1 −1
src/transformers/modeling_openai.py      +1 −1
src/transformers/modeling_transfo_xl.py  +1 −1
src/transformers/modeling_xlm.py         +1 −1
src/transformers/modeling_bart.py
@@ -904,7 +904,7 @@ class BartForConditionalGeneration(PretrainedBartModel):
         **unused
     ):
         r"""
-        masked_lm_labels (:obj:`torch.LongTensor` of shape :obj:`(batch_size, sequence_length)`, `optional`, defaults to :obj:`None`):
+        lm_labels (:obj:`torch.LongTensor` of shape :obj:`(batch_size, sequence_length)`, `optional`, defaults to :obj:`None`):
             Labels for computing the masked language modeling loss.
             Indices should either be in ``[0, ..., config.vocab_size]`` or -100 (see ``input_ids`` docstring).
             Tokens with indices set to ``-100`` are ignored (masked), the loss is only computed for the tokens
@@ -913,7 +913,7 @@ class BartForConditionalGeneration(PretrainedBartModel):
     Returns:
         :obj:`tuple(torch.FloatTensor)` comprising various elements depending on the configuration (:class:`~transformers.RobertaConfig`) and inputs:
-        masked_lm_loss (`optional`, returned when ``masked_lm_labels`` is provided) ``torch.FloatTensor`` of shape ``(1,)``:
+        masked_lm_loss (`optional`, returned when ``lm_labels`` is provided) ``torch.FloatTensor`` of shape ``(1,)``:
             Masked language modeling loss.
         prediction_scores (:obj:`torch.FloatTensor` of shape :obj:`(batch_size, sequence_length, config.vocab_size)`)
             Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax).
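The corrected name matters because ``lm_labels`` is the keyword the forward pass actually accepts at this commit. A minimal usage sketch under that API (the checkpoint name is illustrative; outputs are a plain tuple, with the masked LM loss first when ``lm_labels`` is passed):

import torch
from transformers import BartForConditionalGeneration, BartTokenizer

tokenizer = BartTokenizer.from_pretrained("facebook/bart-large")
model = BartForConditionalGeneration.from_pretrained("facebook/bart-large")

input_ids = tokenizer.encode("My friends are <mask> but they eat too many carbs.", return_tensors="pt")

# Labels share the shape of input_ids; any position set to -100 is
# ignored by the loss, exactly as the docstring above describes.
lm_labels = input_ids.clone()

outputs = model(input_ids=input_ids, lm_labels=lm_labels)
masked_lm_loss, prediction_scores = outputs[0], outputs[1]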
src/transformers/modeling_gpt2.py
@@ -554,7 +554,7 @@ class GPT2LMHeadModel(GPT2PreTrainedModel):
         r"""
         labels (:obj:`torch.LongTensor` of shape :obj:`(batch_size, sequence_length)`, `optional`, defaults to :obj:`None`):
             Labels for language modeling.
-            Note that the labels **are shifted** inside the model, i.e. you can set ``lm_labels = input_ids``
+            Note that the labels **are shifted** inside the model, i.e. you can set ``labels = input_ids``
             Indices are selected in ``[-100, 0, ..., config.vocab_size]``
             All labels set to ``-100`` are ignored (masked), the loss is only
             computed for labels in ``[0, ..., config.vocab_size]``
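Because the shift happens inside the model, the docstring's ``labels = input_ids`` advice can be used verbatim; the same pattern applies to the OpenAI GPT, Transformer-XL, and XLM heads changed below. A minimal sketch against the tuple-returning API of this era (the checkpoint name is illustrative):

import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

input_ids = tokenizer.encode("Hello, my dog is cute", return_tensors="pt")

# The model shifts the labels internally, so no manual offset is needed;
# any label set to -100 would be excluded from the loss.
outputs = model(input_ids, labels=input_ids)
loss, logits = outputs[0], outputs[1]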
src/transformers/modeling_openai.py
@@ -491,7 +491,7 @@ class OpenAIGPTLMHeadModel(OpenAIGPTPreTrainedModel):
         r"""
         labels (:obj:`torch.LongTensor` of shape :obj:`(batch_size, sequence_length)`, `optional`, defaults to :obj:`None`):
             Labels for language modeling.
-            Note that the labels **are shifted** inside the model, i.e. you can set ``lm_labels = input_ids``
+            Note that the labels **are shifted** inside the model, i.e. you can set ``labels = input_ids``
             Indices are selected in ``[-100, 0, ..., config.vocab_size]``
             All labels set to ``-100`` are ignored (masked), the loss is only
             computed for labels in ``[0, ..., config.vocab_size]``
src/transformers/modeling_transfo_xl.py
@@ -852,7 +852,7 @@ class TransfoXLLMHeadModel(TransfoXLPreTrainedModel):
         r"""
         labels (:obj:`torch.LongTensor` of shape :obj:`(batch_size, sequence_length)`, `optional`, defaults to :obj:`None`):
             Labels for language modeling.
-            Note that the labels **are shifted** inside the model, i.e. you can set ``lm_labels = input_ids``
+            Note that the labels **are shifted** inside the model, i.e. you can set ``labels = input_ids``
             Indices are selected in ``[-100, 0, ..., config.vocab_size]``
             All labels set to ``-100`` are ignored (masked), the loss is only
             computed for labels in ``[0, ..., config.vocab_size]``
src/transformers/modeling_xlm.py
@@ -640,7 +640,7 @@ class XLMWithLMHeadModel(XLMPreTrainedModel):
         r"""
         labels (:obj:`torch.LongTensor` of shape :obj:`(batch_size, sequence_length)`, `optional`, defaults to :obj:`None`):
             Labels for language modeling.
-            Note that the labels **are shifted** inside the model, i.e. you can set ``lm_labels = input_ids``
+            Note that the labels **are shifted** inside the model, i.e. you can set ``labels = input_ids``
             Indices are selected in ``[-100, 0, ..., config.vocab_size]``
             All labels set to ``-100`` are ignored (masked), the loss is only
             computed for labels in ``[0, ..., config.vocab_size]``