"doc/git@developer.sourcefind.cn:wqshmzh/ktransformers.git" did not exist on "f03faa53766e56715d17cfa037272fec06aaf5cc"
Unverified Commit 13c18577 authored by Sylvain Gugger, committed by GitHub

Fix typo in all model docs (#7714)

parent 83086858
@@ -539,7 +539,7 @@ ALBERT_INPUTS_DOCSTRING = r"""
             Mask values selected in ``[0, 1]``:
             - 1 for tokens that are **not masked**,
-            - 0 for tokens that are **maked**.
+            - 0 for tokens that are **masked**.
             `What are attention masks? <../glossary.html#attention-mask>`__
         token_type_ids (:obj:`torch.LongTensor` of shape :obj:`({0})`, `optional`):
@@ -113,7 +113,7 @@ BART_INPUTS_DOCSTRING = r"""
             Mask values selected in ``[0, 1]``:
             - 1 for tokens that are **not masked**,
-            - 0 for tokens that are **maked**.
+            - 0 for tokens that are **masked**.
             `What are attention masks? <../glossary.html#attention-mask>`__
         decoder_input_ids (:obj:`torch.LongTensor` of shape :obj:`(batch_size, target_sequence_length)`, `optional`):
@@ -218,7 +218,7 @@ BERT_GENERATION_INPUTS_DOCSTRING = r"""
             Mask values selected in ``[0, 1]``:
             - 1 for tokens that are **not masked**,
-            - 0 for tokens that are **maked**.
+            - 0 for tokens that are **masked**.
             `What are attention masks? <../glossary.html#attention-mask>`__
         position_ids (:obj:`torch.LongTensor` of shape :obj:`({0})`, `optional`):
@@ -450,7 +450,7 @@ class BertGenerationDecoder(BertGenerationPreTrainedModel):
             Mask values selected in ``[0, 1]``:
             - 1 for tokens that are **not masked**,
-            - 0 for tokens that are **maked**.
+            - 0 for tokens that are **masked**.
         labels (:obj:`torch.LongTensor` of shape :obj:`(batch_size, sequence_length)`, `optional`):
             Labels for computing the left-to-right language modeling loss (next word prediction).
             Indices should be in ``[-100, 0, ..., config.vocab_size]`` (see ``input_ids`` docstring)
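The ``labels`` convention in the hunk above is the standard PyTorch one: positions set to -100 are skipped by the loss, which is how padding is excluded from next-word prediction. A minimal sketch of that behavior (not part of this commit; shapes and values are illustrative):

```python
# Positions labeled -100 are ignored by the loss; PyTorch's
# cross_entropy uses ignore_index=-100 by default.
import torch
import torch.nn.functional as F

logits = torch.randn(1, 4, 10)               # (batch, seq_len, vocab_size)
labels = torch.tensor([[3, 7, -100, -100]])  # last two positions are padding
loss = F.cross_entropy(logits.view(-1, 10), labels.view(-1))
```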
@@ -273,7 +273,7 @@ CTRL_INPUTS_DOCSTRING = r"""
             Mask values selected in ``[0, 1]``:
             - 1 for tokens that are **not masked**,
-            - 0 for tokens that are **maked**.
+            - 0 for tokens that are **masked**.
             `What are attention masks? <../glossary.html#attention-mask>`__
         token_type_ids (:obj:`torch.LongTensor` of shape :obj:`(batch_size, sequence_length)`, `optional`):
@@ -401,7 +401,7 @@ DISTILBERT_INPUTS_DOCSTRING = r"""
             Mask values selected in ``[0, 1]``:
             - 1 for tokens that are **not masked**,
-            - 0 for tokens that are **maked**.
+            - 0 for tokens that are **masked**.
             `What are attention masks? <../glossary.html#attention-mask>`__
         head_mask (:obj:`torch.FloatTensor` of shape :obj:`(num_heads,)` or :obj:`(num_layers, num_heads)`, `optional`):
@@ -358,7 +358,7 @@ DPR_ENCODERS_INPUTS_DOCSTRING = r"""
             Mask values selected in ``[0, 1]``:
             - 1 for tokens that are **not masked**,
-            - 0 for tokens that are **maked**.
+            - 0 for tokens that are **masked**.
             `What are attention masks? <../glossary.html#attention-mask>`__
         token_type_ids (:obj:`torch.LongTensor` of shape :obj:`(batch_size, sequence_length)`, `optional`):
@@ -403,7 +403,7 @@ DPR_READER_INPUTS_DOCSTRING = r"""
             Mask values selected in ``[0, 1]``:
             - 1 for tokens that are **not masked**,
-            - 0 for tokens that are **maked**.
+            - 0 for tokens that are **masked**.
             `What are attention masks? <../glossary.html#attention-mask>`__
         inputs_embeds (:obj:`torch.FloatTensor` of shape :obj:`(n_passages, sequence_length, hidden_size)`, `optional`):
@@ -611,7 +611,7 @@ ELECTRA_INPUTS_DOCSTRING = r"""
             Mask values selected in ``[0, 1]``:
             - 1 for tokens that are **not masked**,
-            - 0 for tokens that are **maked**.
+            - 0 for tokens that are **masked**.
             `What are attention masks? <../glossary.html#attention-mask>`__
         token_type_ids (:obj:`torch.LongTensor` of shape :obj:`({0})`, `optional`):
@@ -74,7 +74,7 @@ ENCODER_DECODER_INPUTS_DOCSTRING = r"""
             Mask values selected in ``[0, 1]``:
             - 1 for tokens that are **not masked**,
-            - 0 for tokens that are **maked**.
+            - 0 for tokens that are **masked**.
             `What are attention masks? <../glossary.html#attention-mask>`__
         decoder_input_ids (:obj:`torch.LongTensor` of shape :obj:`(batch_size, target_sequence_length)`, `optional`):
@@ -81,7 +81,7 @@ FLAUBERT_INPUTS_DOCSTRING = r"""
             Mask values selected in ``[0, 1]``:
             - 1 for tokens that are **not masked**,
-            - 0 for tokens that are **maked**.
+            - 0 for tokens that are **masked**.
             `What are attention masks? <../glossary.html#attention-mask>`__
         token_type_ids (:obj:`torch.LongTensor` of shape :obj:`(batch_size, sequence_length)`, `optional`):
@@ -224,7 +224,7 @@ FSMT_INPUTS_DOCSTRING = r"""
             Mask values selected in ``[0, 1]``:
             - 1 for tokens that are **not masked**,
-            - 0 for tokens that are **maked**.
+            - 0 for tokens that are **masked**.
             `What are attention masks? <../glossary.html#attention-mask>`__
         decoder_input_ids (:obj:`torch.LongTensor` of shape :obj:`(batch_size, target_sequence_length)`, `optional`):
@@ -857,7 +857,7 @@ FUNNEL_INPUTS_DOCSTRING = r"""
             Mask values selected in ``[0, 1]``:
             - 1 for tokens that are **not masked**,
-            - 0 for tokens that are **maked**.
+            - 0 for tokens that are **masked**.
             `What are attention masks? <../glossary.html#attention-mask>`__
         token_type_ids (:obj:`torch.LongTensor` of shape :obj:`({0})`, `optional`):
@@ -429,7 +429,7 @@ GPT2_INPUTS_DOCSTRING = r"""
             Mask values selected in ``[0, 1]``:
             - 1 for tokens that are **not masked**,
-            - 0 for tokens that are **maked**.
+            - 0 for tokens that are **masked**.
             `What are attention masks? <../glossary.html#attention-mask>`__
         token_type_ids (:obj:`torch.LongTensor` of shape :obj:`(batch_size, input_ids_length)`, `optional`):
@@ -1018,7 +1018,7 @@ LONGFORMER_INPUTS_DOCSTRING = r"""
             Mask values selected in ``[0, 1]``:
             - 1 for tokens that are **not masked**,
-            - 0 for tokens that are **maked**.
+            - 0 for tokens that are **masked**.
             `What are attention masks? <../glossary.html#attention-mask>`__
         global_attention_mask (:obj:`torch.FloatTensor` of shape :obj:`({0})`, `optional`):
@@ -848,7 +848,7 @@ LXMERT_INPUTS_DOCSTRING = r"""
             Mask values selected in ``[0, 1]``:
             - 1 for tokens that are **not masked**,
-            - 0 for tokens that are **maked**.
+            - 0 for tokens that are **masked**.
             `What are attention masks? <../glossary.html#attention-mask>`__
         visual_attention_mask (:obj:`torch.FloatTensor` of shape :obj:`({0})`, `optional`):
@@ -856,7 +856,7 @@ LXMERT_INPUTS_DOCSTRING = r"""
             Mask values selected in ``[0, 1]``:
             - 1 for tokens that are **not masked**,
-            - 0 for tokens that are **maked**.
+            - 0 for tokens that are **masked**.
             `What are attention masks? <../glossary.html#attention-mask>`__
         token_type_ids (:obj:`torch.LongTensor` of shape :obj:`({0})`, `optional`):
@@ -123,7 +123,7 @@ MMBT_INPUTS_DOCSTRING = r"""
             Mask values selected in ``[0, 1]``:
             - 1 for tokens that are **not masked**,
-            - 0 for tokens that are **maked**.
+            - 0 for tokens that are **masked**.
             `What are attention masks? <../glossary.html#attention-mask>`__
         token_type_ids (`optional`) ``torch.LongTensor`` of shape ``(batch_size, sequence_length)``:
@@ -167,7 +167,7 @@ MMBT_INPUTS_DOCSTRING = r"""
             Mask values selected in ``[0, 1]``:
             - 1 for tokens that are **not masked**,
-            - 0 for tokens that are **maked**.
+            - 0 for tokens that are **masked**.
         output_attentions (:obj:`bool`, `optional`):
             Whether or not to return the attentions tensors of all attention layers. See ``attentions`` under returned
@@ -756,7 +756,7 @@ MOBILEBERT_INPUTS_DOCSTRING = r"""
             Mask values selected in ``[0, 1]``:
             - 1 for tokens that are **not masked**,
-            - 0 for tokens that are **maked**.
+            - 0 for tokens that are **masked**.
             `What are attention masks? <../glossary.html#attention-mask>`__
         token_type_ids (:obj:`torch.LongTensor` of shape :obj:`({0})`, `optional`):
@@ -792,7 +792,7 @@ MOBILEBERT_INPUTS_DOCSTRING = r"""
             Mask values selected in ``[0, 1]``:
             - 1 for tokens that are **not masked**,
-            - 0 for tokens that are **maked**.
+            - 0 for tokens that are **masked**.
         output_attentions (:obj:`bool`, `optional`):
             Whether or not to return the attentions tensors of all attention layers. See ``attentions`` under returned
@@ -360,7 +360,7 @@ OPENAI_GPT_INPUTS_DOCSTRING = r"""
             Mask values selected in ``[0, 1]``:
             - 1 for tokens that are **not masked**,
-            - 0 for tokens that are **maked**.
+            - 0 for tokens that are **masked**.
             `What are attention masks? <../glossary.html#attention-mask>`__
         token_type_ids (:obj:`torch.LongTensor` of shape :obj:`(batch_size, sequence_length)`, `optional`):
@@ -406,7 +406,7 @@ RAG_FORWARD_INPUTS_DOCSTRING = r"""
             Mask values selected in ``[0, 1]``:
             - 1 for tokens that are **not masked**,
-            - 0 for tokens that are **maked**.
+            - 0 for tokens that are **masked**.
             `What are attention masks? <../glossary.html#attention-mask>`__
         encoder_outputs (:obj:`tuple(tuple(torch.FloatTensor)`, `optional`)
@@ -836,7 +836,7 @@ class RagSequenceForGeneration(RagPreTrainedModel):
             Mask values selected in ``[0, 1]``:
             - 1 for tokens that are **not masked**,
-            - 0 for tokens that are **maked**.
+            - 0 for tokens that are **masked**.
             `What are attention masks? <../glossary.html#attention-mask>`__
         context_input_ids (:obj:`torch.LongTensor` of shape :obj:`(batch_size * config.n_docs, config.max_combined_length)`, `optional`, returned when `output_retrieved=True`):
@@ -1221,7 +1221,7 @@ class RagTokenForGeneration(RagPreTrainedModel):
             Mask values selected in ``[0, 1]``:
             - 1 for tokens that are **not masked**,
-            - 0 for tokens that are **maked**.
+            - 0 for tokens that are **masked**.
             `What are attention masks? <../glossary.html#attention-mask>`__
         context_input_ids (:obj:`torch.LongTensor` of shape :obj:`(batch_size * config.n_docs, config.max_combined_length)`, `optional`, returned when `output_retrieved=True`):
@@ -1926,7 +1926,7 @@ REFORMER_INPUTS_DOCSTRING = r"""
             Mask values selected in ``[0, 1]``:
             - 1 for tokens that are **not masked**,
-            - 0 for tokens that are **maked**.
+            - 0 for tokens that are **masked**.
             `What are attention masks? <../glossary.html#attention-mask>`__
         position_ids (:obj:`torch.LongTensor` of shape :obj:`(batch_size, sequence_length)`, `optional`):
@@ -185,7 +185,7 @@ class RetriBertModel(RetriBertPreTrainedModel):
             Mask values selected in ``[0, 1]``:
             - 1 for tokens that are **not masked**,
-            - 0 for tokens that are **maked**.
+            - 0 for tokens that are **masked**.
             `What are attention masks? <../glossary.html#attention-mask>`__
         input_ids_doc (:obj:`torch.LongTensor` of shape :obj:`(batch_size, sequence_length)`):
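Every hunk above fixes the same typo ("maked" → "masked") in the recurring description of ``attention_mask``: 1 marks tokens the model should attend to, 0 marks tokens (typically padding) it should ignore. A minimal sketch of that convention in use (not part of this commit; the checkpoint name is only an example):

```python
# Tokenize a ragged batch; the tokenizer builds the attention mask the
# docstrings describe: 1 = not masked (attend), 0 = masked (padding).
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
batch = tokenizer(
    ["Short sentence.", "A noticeably longer sentence that forces padding."],
    padding=True,
    return_tensors="pt",
)
print(batch["attention_mask"])  # rows of 1s, trailed by 0s for the shorter input
```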