Commit 77f4c46b (unverified)
remove defaults to None if optional (#11703)
Authored May 12, 2021 by Philip May; committed via GitHub on May 12, 2021
Parent: 6797cdc0
Showing 11 changed files with 18 additions and 18 deletions (+18, -18)
examples/research_projects/wav2vec2/run_asr.py (+2, -2)
src/transformers/debug_utils.py (+1, -1)
src/transformers/modeling_tf_utils.py (+1, -1)
src/transformers/modeling_utils.py (+1, -1)
src/transformers/models/albert/tokenization_albert_fast.py (+2, -2)
src/transformers/models/big_bird/tokenization_big_bird_fast.py (+3, -3)
src/transformers/models/ibert/quant_modules.py (+3, -3)
src/transformers/models/mpnet/modeling_mpnet.py (+1, -1)
src/transformers/models/mpnet/tokenization_mpnet.py (+1, -1)
src/transformers/models/xlm_prophetnet/tokenization_xlm_prophetnet.py (+1, -1)
src/transformers/pipelines/text2text_generation.py (+2, -2)
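Every hunk below makes the same documentation-only fix: when an argument is marked `optional` in the library's Sphinx-style docstrings, the redundant ", defaults to :obj:`None`" suffix is dropped, since `optional` already implies that the argument may be omitted and is None by default. A minimal before/after sketch (the function and argument here are hypothetical, for illustration only):

# Before: the None default is spelled out even though `optional` already implies it.
def example_fn(token_ids_1=None):
    """
    Args:
        token_ids_1 (:obj:`List[int]`, `optional`, defaults to :obj:`None`):
            Optional second list of IDs for sequence pairs.
    """

# After: `optional` alone is enough; only non-None defaults are still spelled out.
def example_fn(token_ids_1=None):
    """
    Args:
        token_ids_1 (:obj:`List[int]`, `optional`):
            Optional second list of IDs for sequence pairs.
    """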
examples/research_projects/wav2vec2/run_asr.py
@@ -144,7 +144,7 @@ class Orthography:
Args:
do_lower_case (:obj:`bool`, `optional`, defaults to :obj:`False`):
Whether or not to accept lowercase input and lowercase the output when decoding.
-vocab_file (:obj:`str`, `optional`, defaults to :obj:`None`):
+vocab_file (:obj:`str`, `optional`):
File containing the vocabulary.
word_delimiter_token (:obj:`str`, `optional`, defaults to :obj:`"|"`):
The token used for delimiting words; it needs to be in the vocabulary.
@@ -152,7 +152,7 @@ class Orthography:
Table to use with `str.translate()` when preprocessing text (e.g., "-" -> " ").
words_to_remove (:obj:`Set[str]`, `optional`, defaults to :obj:`set()`):
Words to remove when preprocessing text (e.g., "sil").
-untransliterator (:obj:`Callable[[str], str]`, `optional`, defaults to :obj:`None`):
+untransliterator (:obj:`Callable[[str], str]`, `optional`):
Function that untransliterates text back into native writing system.
"""
src/transformers/debug_utils.py
@@ -118,7 +118,7 @@ class DebugUnderflowOverflow:
How many frames back to record
trace_batch_nums(:obj:`List[int]`, `optional`, defaults to ``[]``):
Which batch numbers to trace (turns detection off)
-abort_after_batch_num (:obj:`int`, `optional`, defaults to :obj:`None`):
+abort_after_batch_num (:obj:`int`, `optional`):
Whether to abort after a certain batch number has finished
"""
src/transformers/modeling_tf_utils.py
@@ -1128,7 +1128,7 @@ class TFPreTrainedModel(tf.keras.Model, TFModelUtilsMixin, TFGenerationMixin, Pu
The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a
git-based system for storing models and other artifacts on huggingface.co, so ``revision`` can be any
identifier allowed by git.
-mirror(:obj:`str`, `optional`, defaults to :obj:`None`):
+mirror(:obj:`str`, `optional`):
Mirror source to accelerate downloads in China. If you are from China and have an accessibility
problem, you can set this option to resolve it. Note that we do not guarantee the timeliness or safety.
Please refer to the mirror site for more information.
src/transformers/modeling_utils.py
@@ -975,7 +975,7 @@ class PreTrainedModel(nn.Module, ModuleUtilsMixin, GenerationMixin, PushToHubMix
The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a
git-based system for storing models and other artifacts on huggingface.co, so ``revision`` can be any
identifier allowed by git.
-mirror(:obj:`str`, `optional`, defaults to :obj:`None`):
+mirror(:obj:`str`, `optional`):
Mirror source to accelerate downloads in China. If you are from China and have an accessibility
problem, you can set this option to resolve it. Note that we do not guarantee the timeliness or safety.
Please refer to the mirror site for more information.
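Both revision and mirror are keyword arguments of from_pretrained in the TF and PyTorch variants documented in the two hunks above. A hedged sketch; the checkpoint name and mirror endpoint are illustrative:

# Hedged sketch: the checkpoint name and mirror endpoint are illustrative only.
from transformers import AutoModel

model = AutoModel.from_pretrained(
    "bert-base-uncased",
    revision="main",                       # branch name, tag name, or commit id
    mirror="https://mirrors.example.org",  # hypothetical mirror to speed up downloads
)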
src/transformers/models/albert/tokenization_albert_fast.py
@@ -172,7 +172,7 @@ class AlbertTokenizerFast(PreTrainedTokenizerFast):
Args:
token_ids_0 (:obj:`List[int]`):
List of IDs to which the special tokens will be added
-token_ids_1 (:obj:`List[int]`, `optional`, defaults to :obj:`None`):
+token_ids_1 (:obj:`List[int]`, `optional`):
Optional second list of IDs for sequence pairs.
Returns:
@@ -201,7 +201,7 @@ class AlbertTokenizerFast(PreTrainedTokenizerFast):
Args:
token_ids_0 (:obj:`List[int]`):
List of ids.
-token_ids_1 (:obj:`List[int]`, `optional`, defaults to :obj:`None`):
+token_ids_1 (:obj:`List[int]`, `optional`):
Optional second list of IDs for sequence pairs.
Returns:
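The token_ids_0/token_ids_1 docstrings in this and the following tokenizer files all describe the same pattern: the second sequence is optional. A hedged sketch using build_inputs_with_special_tokens; the checkpoint is illustrative and the method shown is assumed to be one of those these hunks document:

# Hedged sketch: the checkpoint is illustrative; build_inputs_with_special_tokens is
# assumed to be one of the methods whose docstring is edited here.
from transformers import AlbertTokenizerFast

tokenizer = AlbertTokenizerFast.from_pretrained("albert-base-v2")
ids_a = tokenizer.encode("first sequence", add_special_tokens=False)
ids_b = tokenizer.encode("second sequence", add_special_tokens=False)

single = tokenizer.build_inputs_with_special_tokens(ids_a)        # token_ids_1 omitted
pair = tokenizer.build_inputs_with_special_tokens(ids_a, ids_b)   # sequence pair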
src/transformers/models/big_bird/tokenization_big_bird_fast.py
@@ -152,7 +152,7 @@ class BigBirdTokenizerFast(PreTrainedTokenizerFast):
Args:
token_ids_0 (:obj:`List[int]`):
List of IDs to which the special tokens will be added
-token_ids_1 (:obj:`List[int]`, `optional`, defaults to :obj:`None`):
+token_ids_1 (:obj:`List[int]`, `optional`):
Optional second list of IDs for sequence pairs.
Returns:
@@ -174,7 +174,7 @@ class BigBirdTokenizerFast(PreTrainedTokenizerFast):
Args:
token_ids_0 (:obj:`List[int]`):
List of ids.
-token_ids_1 (:obj:`List[int]`, `optional`, defaults to :obj:`None`):
+token_ids_1 (:obj:`List[int]`, `optional`):
Optional second list of IDs for sequence pairs.
already_has_special_tokens (:obj:`bool`, `optional`, defaults to :obj:`False`):
Set to True if the token list is already formatted with special tokens for the model
@@ -212,7 +212,7 @@ class BigBirdTokenizerFast(PreTrainedTokenizerFast):
Args:
token_ids_0 (:obj:`List[int]`):
List of ids.
-token_ids_1 (:obj:`List[int]`, `optional`, defaults to :obj:`None`):
+token_ids_1 (:obj:`List[int]`, `optional`):
Optional second list of IDs for sequence pairs.
Returns:
src/transformers/models/ibert/quant_modules.py
@@ -124,7 +124,7 @@ class QuantAct(nn.Module):
Momentum for updating the activation quantization range.
per_channel (:obj:`bool`, `optional`, defaults to :obj:`False`):
Whether to or not use channel-wise quantization.
-channel_len (:obj:`int`, `optional`, defaults to :obj:`None`):
+channel_len (:obj:`int`, `optional`):
Specify the channel length when set the `per_channel` True.
quant_mode (:obj:`bool`, `optional`, defaults to :obj:`False`):
Whether or not the layer is quantized.
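A hedged sketch of constructing the QuantAct module documented above; the keyword names are taken from the docstring, but the full constructor signature is an assumption and not verified against the source:

# Hedged sketch: keyword names follow the docstring above; the constructor
# signature as a whole is an assumption.
from transformers.models.ibert.quant_modules import QuantAct

# An 8-bit activation quantizer; channel_len is only needed when per_channel=True,
# which is why it is `optional` with no explicit default in the docstring.
act_quant = QuantAct(activation_bit=8, per_channel=False, quant_mode=True)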
@@ -755,9 +755,9 @@ class FixedPointMul(Function):
Quantization bitwidth.
z_scaling_factor (:obj:`torch.Tensor`):
Scaling factor of the output tensor.
-identity (:obj:`torch.Tensor`, `optional`, defaults to :obj:`None`):
+identity (:obj:`torch.Tensor`, `optional`):
Identity tensor, if exists.
-identity_scaling_factor (:obj:`torch.Tensor`, `optional`, defaults to :obj:`None`):
+identity_scaling_factor (:obj:`torch.Tensor`, `optional`):
Scaling factor of the identity tensor `identity`, if exists.
Returns:
src/transformers/models/mpnet/modeling_mpnet.py
@@ -444,7 +444,7 @@ MPNET_INPUTS_DOCSTRING = r"""
details.
`What are input IDs? <../glossary.html#input-ids>`__
-attention_mask (:obj:`torch.FloatTensor` of shape :obj:`(batch_size, sequence_length)`, `optional`, defaults to :obj:`None`):
+attention_mask (:obj:`torch.FloatTensor` of shape :obj:`(batch_size, sequence_length)`, `optional`):
Mask to avoid performing attention on padding token indices. Mask values selected in ``[0, 1]``:
- 1 for tokens that are **not masked**,
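The attention_mask documented above is normally produced by the tokenizer itself when padding a batch; a hedged sketch with an illustrative checkpoint:

# Hedged sketch: the checkpoint name is illustrative; padding creates the 0 entries
# that the attention mask hides from self-attention.
from transformers import MPNetModel, MPNetTokenizer

tokenizer = MPNetTokenizer.from_pretrained("microsoft/mpnet-base")
model = MPNetModel.from_pretrained("microsoft/mpnet-base")

inputs = tokenizer(
    ["a short input", "a noticeably longer input sequence"],
    padding=True,
    return_tensors="pt",
)
outputs = model(input_ids=inputs["input_ids"], attention_mask=inputs["attention_mask"])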
src/transformers/models/mpnet/tokenization_mpnet.py
@@ -235,7 +235,7 @@ class MPNetTokenizer(PreTrainedTokenizer):
Args:
token_ids_0 (:obj:`List[int]`):
List of IDs to which the special tokens will be added
-token_ids_1 (:obj:`List[int]`, `optional`, defaults to :obj:`None`):
+token_ids_1 (:obj:`List[int]`, `optional`):
Optional second list of IDs for sequence pairs.
Returns:
src/transformers/models/xlm_prophetnet/tokenization_xlm_prophetnet.py
@@ -290,7 +290,7 @@ class XLMProphetNetTokenizer(PreTrainedTokenizer):
Args:
token_ids_0 (:obj:`List[int]`):
List of IDs to which the special tokens will be added
-token_ids_1 (:obj:`List[int]`, `optional`, defaults to :obj:`None`):
+token_ids_1 (:obj:`List[int]`, `optional`):
Optional second list of IDs for sequence pairs.
Returns:
src/transformers/pipelines/text2text_generation.py
@@ -295,10 +295,10 @@ class TranslationPipeline(Text2TextGenerationPipeline):
Whether or not to include the decoded texts in the outputs.
clean_up_tokenization_spaces (:obj:`bool`, `optional`, defaults to :obj:`False`):
Whether or not to clean up the potential extra spaces in the text output.
-src_lang (:obj:`str`, `optional`, defaults to :obj:`None`):
+src_lang (:obj:`str`, `optional`):
The language of the input. Might be required for multilingual models. Will not have any effect for
single pair translation models
-tgt_lang (:obj:`str`, `optional`, defaults to :obj:`None`):
+tgt_lang (:obj:`str`, `optional`):
The language of the desired output. Might be required for multilingual models. Will not have any effect
for single pair translation models
generate_kwargs:
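A hedged sketch of calling the translation pipeline with the src_lang/tgt_lang arguments documented above; the multilingual checkpoint and language codes are illustrative, and whether this exact task/checkpoint combination resolves as written is an assumption:

# Hedged sketch: checkpoint and language codes are illustrative; src_lang/tgt_lang
# only matter for multilingual models, as the docstring above notes.
from transformers import pipeline

translator = pipeline("translation", model="facebook/mbart-large-50-many-to-many-mmt")
result = translator(
    "Hello, how are you?",
    src_lang="en_XX",  # language of the input
    tgt_lang="de_DE",  # language of the desired output
)
print(result[0]["translation_text"])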