chenpangpang / transformers · Commits

Commit 088c1880 (unverified)
Authored Mar 25, 2022 by Sylvain Gugger; committed via GitHub on Mar 25, 2022
Parent: 2b23e080

Big file_utils cleanup (#16396)

* Big file_utils cleanup
* This one still needs to be treated separately
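For downstream code, the visible effect of the cleanup is a rename: helpers formerly referenced through `transformers.file_utils` are now referenced through `transformers.utils`, and the docstring cross-references in the hunks below are updated to match. A minimal sketch, assuming a transformers version that includes this commit:

```python
# The three objects renamed in the docstrings on this page; all are importable
# from transformers.utils after this cleanup. (Illustrative sketch, not part of the diff.)
from transformers.utils import ModelOutput, PaddingStrategy, TensorType

print(ModelOutput.__module__)          # defined under the transformers.utils package
print([t.value for t in TensorType])   # ['pt', 'tf', 'np', 'jax']
```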
Changes: 222 files in the full commit. This page (1 of 12) shows 20 changed files with 33 additions and 33 deletions (+33 −33).
src/transformers/models/deberta/modeling_deberta.py  +1 −1
src/transformers/models/deberta/modeling_tf_deberta.py  +1 −1
src/transformers/models/deberta_v2/modeling_deberta_v2.py  +1 −1
src/transformers/models/deberta_v2/modeling_tf_deberta_v2.py  +1 −1
src/transformers/models/decision_transformer/__init__.py  +1 −1
src/transformers/models/decision_transformer/modeling_decision_transformer.py  +3 −3
src/transformers/models/deit/feature_extraction_deit.py  +1 −1
src/transformers/models/deit/modeling_deit.py  +1 −1
src/transformers/models/detr/feature_extraction_detr.py  +2 −2
src/transformers/models/detr/modeling_detr.py  +3 −3
src/transformers/models/distilbert/modeling_distilbert.py  +1 −1
src/transformers/models/distilbert/modeling_flax_distilbert.py  +1 −1
src/transformers/models/distilbert/modeling_tf_distilbert.py  +2 −2
src/transformers/models/dpr/modeling_dpr.py  +2 −2
src/transformers/models/dpr/modeling_tf_dpr.py  +4 −4
src/transformers/models/dpr/tokenization_dpr.py  +2 −2
src/transformers/models/dpr/tokenization_dpr_fast.py  +2 −2
src/transformers/models/electra/modeling_electra.py  +1 −1
src/transformers/models/electra/modeling_flax_electra.py  +1 −1
src/transformers/models/electra/modeling_tf_electra.py  +2 −2
src/transformers/models/deberta/modeling_deberta.py

```diff
@@ -871,7 +871,7 @@ DEBERTA_INPUTS_DOCSTRING = r"""
             Whether or not to return the hidden states of all layers. See `hidden_states` under returned tensors for
             more detail.
         return_dict (`bool`, *optional*):
-            Whether or not to return a [`~file_utils.ModelOutput`] instead of a plain tuple.
+            Whether or not to return a [`~utils.ModelOutput`] instead of a plain tuple.
 """
```
src/transformers/models/deberta/modeling_tf_deberta.py

```diff
@@ -1063,7 +1063,7 @@ DEBERTA_INPUTS_DOCSTRING = r"""
             Whether or not to return the hidden states of all layers. See `hidden_states` under returned tensors for
             more detail.
         return_dict (`bool`, *optional*):
-            Whether or not to return a [`~transformers.file_utils.ModelOutput``] instead of a plain tuple.
+            Whether or not to return a [`~utils.ModelOutput``] instead of a plain tuple.
 """
```
src/transformers/models/deberta_v2/modeling_deberta_v2.py

```diff
@@ -965,7 +965,7 @@ DEBERTA_INPUTS_DOCSTRING = r"""
             Whether or not to return the hidden states of all layers. See `hidden_states` under returned tensors for
             more detail.
         return_dict (`bool`, *optional*):
-            Whether or not to return a [`~file_utils.ModelOutput`] instead of a plain tuple.
+            Whether or not to return a [`~utils.ModelOutput`] instead of a plain tuple.
 """
```
src/transformers/models/deberta_v2/modeling_tf_deberta_v2.py

```diff
@@ -1164,7 +1164,7 @@ DEBERTA_INPUTS_DOCSTRING = r"""
             Whether or not to return the hidden states of all layers. See `hidden_states` under returned tensors for
             more detail.
         return_dict (`bool`, *optional*):
-            Whether or not to return a [`~transformers.file_utils.ModelOutput``] instead of a plain tuple.
+            Whether or not to return a [`~utils.ModelOutput``] instead of a plain tuple.
 """
```
src/transformers/models/decision_transformer/__init__.py

```diff
@@ -18,7 +18,7 @@
 from typing import TYPE_CHECKING
 
 # rely on isort to merge the imports
-from ...file_utils import _LazyModule, is_torch_available
+from ...utils import _LazyModule, is_torch_available
 
 
 _import_structure = {
```
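The `_LazyModule` re-imported here is what lets each model `__init__.py` defer heavy imports. A hedged sketch of the pattern, with an abbreviated `_import_structure` (the actual file registers more names than shown):

```python
# Hedged sketch of the lazy-import pattern in this __init__.py; the entries in
# _import_structure are abbreviated, not the file's full list.
import sys
from typing import TYPE_CHECKING

from ...utils import _LazyModule, is_torch_available

_import_structure = {
    "configuration_decision_transformer": ["DecisionTransformerConfig"],
}

if is_torch_available():
    _import_structure["modeling_decision_transformer"] = ["DecisionTransformerModel"]

if TYPE_CHECKING:
    from .configuration_decision_transformer import DecisionTransformerConfig
else:
    # Swap this module for a proxy that imports submodules on first attribute
    # access, so `import transformers` stays cheap even with hundreds of models.
    sys.modules[__name__] = _LazyModule(__name__, globals()["__file__"], _import_structure)
```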
src/transformers/models/decision_transformer/modeling_decision_transformer.py

```diff
@@ -25,14 +25,14 @@ from packaging import version
 from torch import nn
 
 from ...activations import ACT2FN
-from ...file_utils import (
+from ...modeling_utils import Conv1D, PreTrainedModel, find_pruneable_heads_and_indices, prune_conv1d_layer
+from ...utils import (
     ModelOutput,
     add_start_docstrings,
     add_start_docstrings_to_model_forward,
+    logging,
     replace_return_docstrings,
 )
-from ...modeling_utils import Conv1D, PreTrainedModel, find_pruneable_heads_and_indices, prune_conv1d_layer
-from ...utils import logging
 
 if version.parse(torch.__version__) >= version.parse("1.6"):
```
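The `version.parse` comparison kept as context by this hunk is the file's torch feature gate. A sketch of what such a gate typically guards in the GPT-2-derived modules this file is copied from; the amp detail is an assumption, only the comparison itself appears in the diff:

```python
# Sketch of the torch-version gate shown in the context lines above. In
# GPT-2-style modules this usually toggles automatic mixed precision (amp);
# that use is an assumption here, not visible in this hunk.
import torch
from packaging import version

if version.parse(torch.__version__) >= version.parse("1.6"):
    is_amp_available = True
    from torch.cuda.amp import autocast
else:
    is_amp_available = False
```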
src/transformers/models/deit/feature_extraction_deit.py

```diff
@@ -107,7 +107,7 @@ class DeiTFeatureExtractor(FeatureExtractionMixin, ImageFeatureExtractionMixin):
             tensor. In case of a NumPy array/PyTorch tensor, each image should be of shape (C, H, W), where C is a
             number of channels, H and W are image height and width.
-        return_tensors (`str` or [`~file_utils.TensorType`], *optional*, defaults to `'np'`):
+        return_tensors (`str` or [`~utils.TensorType`], *optional*, defaults to `'np'`):
             If set, will return tensors of a particular framework. Acceptable values are:
             - `'tf'`: Return TensorFlow `tf.constant` objects.
```
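As a usage note for the `return_tensors` parameter documented in this hunk, a hedged sketch; the checkpoint id and dummy image are illustrative, not part of the diff:

```python
# Hedged usage sketch for return_tensors on the DeiT feature extractor.
import numpy as np
from PIL import Image
from transformers import DeiTFeatureExtractor

feature_extractor = DeiTFeatureExtractor.from_pretrained("facebook/deit-base-distilled-patch16-224")
image = Image.fromarray(np.zeros((224, 224, 3), dtype=np.uint8))  # dummy image

# `return_tensors` accepts a string ('np', 'pt', 'tf', 'jax') or a
# `~utils.TensorType` member; it defaults to 'np' for this extractor.
inputs = feature_extractor(images=image, return_tensors="pt")
print(inputs["pixel_values"].shape)  # torch.Size([1, 3, 224, 224])
```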
src/transformers/models/deit/modeling_deit.py

```diff
@@ -460,7 +460,7 @@ DEIT_INPUTS_DOCSTRING = r"""
             Whether or not to return the hidden states of all layers. See `hidden_states` under returned tensors for
             more detail.
         return_dict (`bool`, *optional*):
-            Whether or not to return a [`~file_utils.ModelOutput`] instead of a plain tuple.
+            Whether or not to return a [`~utils.ModelOutput`] instead of a plain tuple.
 """
```
src/transformers/models/detr/feature_extraction_detr.py

```diff
@@ -455,7 +455,7 @@ class DetrFeatureExtractor(FeatureExtractionMixin, ImageFeatureExtractionMixin):
             - 1 for pixels that are real (i.e. **not masked**),
             - 0 for pixels that are padding (i.e. **masked**).
-        return_tensors (`str` or [`~file_utils.TensorType`], *optional*):
+        return_tensors (`str` or [`~utils.TensorType`], *optional*):
             If set, will return tensors instead of NumPy arrays. If set to `'pt'`, return PyTorch `torch.Tensor`
             objects.
@@ -638,7 +638,7 @@ class DetrFeatureExtractor(FeatureExtractionMixin, ImageFeatureExtractionMixin):
         Args:
             pixel_values_list (`List[torch.Tensor]`):
                 List of images (pixel values) to be padded. Each image should be a tensor of shape (C, H, W).
-            return_tensors (`str` or [`~file_utils.TensorType`], *optional*):
+            return_tensors (`str` or [`~utils.TensorType`], *optional*):
                 If set, will return tensors instead of NumPy arrays. If set to `'pt'`, return PyTorch `torch.Tensor`
                 objects.
```
src/transformers/models/detr/modeling_detr.py

```diff
@@ -868,7 +868,7 @@ DETR_INPUTS_DOCSTRING = r"""
             Whether or not to return the hidden states of all layers. See `hidden_states` under returned tensors for
             more detail.
         return_dict (`bool`, *optional*):
-            Whether or not to return a [`~file_utils.ModelOutput`] instead of a plain tuple.
+            Whether or not to return a [`~utils.ModelOutput`] instead of a plain tuple.
 """
@@ -932,7 +932,7 @@ class DetrEncoder(DetrPreTrainedModel):
                 Whether or not to return the hidden states of all layers. See `hidden_states` under returned tensors
                 for more detail.
             return_dict (`bool`, *optional*):
-                Whether or not to return a [`~file_utils.ModelOutput`] instead of a plain tuple.
+                Whether or not to return a [`~utils.ModelOutput`] instead of a plain tuple.
         """
         output_attentions = output_attentions if output_attentions is not None else self.config.output_attentions
         output_hidden_states = (
@@ -1054,7 +1054,7 @@ class DetrDecoder(DetrPreTrainedModel):
                 Whether or not to return the hidden states of all layers. See `hidden_states` under returned tensors
                 for more detail.
             return_dict (`bool`, *optional*):
-                Whether or not to return a [`~file_utils.ModelOutput`] instead of a plain tuple.
+                Whether or not to return a [`~utils.ModelOutput`] instead of a plain tuple.
         """
         output_attentions = output_attentions if output_attentions is not None else self.config.output_attentions
         output_hidden_states = (
```
src/transformers/models/distilbert/modeling_distilbert.py

```diff
@@ -446,7 +446,7 @@ DISTILBERT_INPUTS_DOCSTRING = r"""
             Whether or not to return the hidden states of all layers. See `hidden_states` under returned tensors for
             more detail.
         return_dict (`bool`, *optional*):
-            Whether or not to return a [`~file_utils.ModelOutput`] instead of a plain tuple.
+            Whether or not to return a [`~utils.ModelOutput`] instead of a plain tuple.
 """
```
src/transformers/models/distilbert/modeling_flax_distilbert.py

```diff
@@ -89,7 +89,7 @@ DISTILBERT_INPUTS_DOCSTRING = r"""
             Whether or not to return the hidden states of all layers. See `hidden_states` under returned tensors for
             more detail.
         return_dict (`bool`, *optional*):
-            Whether or not to return a [`~file_utils.ModelOutput`] instead of a plain tuple.
+            Whether or not to return a [`~utils.ModelOutput`] instead of a plain tuple.
 """
```
src/transformers/models/distilbert/modeling_tf_distilbert.py

```diff
@@ -508,8 +508,8 @@ DISTILBERT_INPUTS_DOCSTRING = r"""
             more detail. This argument can be used only in eager mode, in graph mode the value in the config will be
             used instead.
         return_dict (`bool`, *optional*):
-            Whether or not to return a [`~file_utils.ModelOutput`] instead of a plain tuple. This argument can be used
-            in eager mode, in graph mode the value will always be set to True.
+            Whether or not to return a [`~utils.ModelOutput`] instead of a plain tuple. This argument can be used in
+            eager mode, in graph mode the value will always be set to True.
         training (`bool`, *optional*, defaults to `False`):
             Whether or not to use the model in training mode (some modules like dropout modules have different
             behaviors between training and evaluation).
```
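The TF docstrings in this hunk note that `return_dict` only matters in eager mode. A hedged sketch of that behaviour; the checkpoint id is illustrative:

```python
# Hedged sketch: in eager mode `return_dict=False` yields a plain tuple; per the
# docstring above, inside graph mode the value is always treated as True.
from transformers import DistilBertTokenizerFast, TFDistilBertModel

tokenizer = DistilBertTokenizerFast.from_pretrained("distilbert-base-uncased")
model = TFDistilBertModel.from_pretrained("distilbert-base-uncased")

inputs = tokenizer("hello world", return_tensors="tf")
outputs = model(inputs, return_dict=False)  # eager mode: a plain tuple
print(type(outputs))                        # tuple rather than a ModelOutput
```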
src/transformers/models/dpr/modeling_dpr.py

```diff
@@ -398,7 +398,7 @@ DPR_ENCODERS_INPUTS_DOCSTRING = r"""
             Whether or not to return the hidden states of all layers. See `hidden_states` under returned tensors for
             more detail.
         return_dict (`bool`, *optional*):
-            Whether or not to return a [`~file_utils.ModelOutput`] instead of a plain tuple.
+            Whether or not to return a [`~utils.ModelOutput`] instead of a plain tuple.
 """
 
 DPR_READER_INPUTS_DOCSTRING = r"""
@@ -434,7 +434,7 @@ DPR_READER_INPUTS_DOCSTRING = r"""
             Whether or not to return the hidden states of all layers. See `hidden_states` under returned tensors for
             more detail.
         return_dict (`bool`, *optional*):
-            Whether or not to return a [`~file_utils.ModelOutput`] instead of a plain tuple.
+            Whether or not to return a [`~utils.ModelOutput`] instead of a plain tuple.
 """
```
src/transformers/models/dpr/modeling_tf_dpr.py

```diff
@@ -487,8 +487,8 @@ TF_DPR_ENCODERS_INPUTS_DOCSTRING = r"""
             more detail. This argument can be used only in eager mode, in graph mode the value in the config will be
             used instead.
         return_dict (`bool`, *optional*):
-            Whether or not to return a [`~file_utils.ModelOutput`] instead of a plain tuple. This argument can be used
-            in eager mode, in graph mode the value will always be set to True.
+            Whether or not to return a [`~utils.ModelOutput`] instead of a plain tuple. This argument can be used in
+            eager mode, in graph mode the value will always be set to True.
         training (`bool`, *optional*, defaults to `False`):
             Whether or not to use the model in training mode (some modules like dropout modules have different
             behaviors between training and evaluation).
@@ -523,8 +523,8 @@ TF_DPR_READER_INPUTS_DOCSTRING = r"""
             more detail. This argument can be used only in eager mode, in graph mode the value in the config will be
             used instead.
         return_dict (`bool`, *optional*):
-            Whether or not to return a [`~file_utils.ModelOutput`] instead of a plain tuple. This argument can be used
-            in eager mode, in graph mode the value will always be set to True.
+            Whether or not to return a [`~utils.ModelOutput`] instead of a plain tuple. This argument can be used in
+            eager mode, in graph mode the value will always be set to True.
         training (`bool`, *optional*, defaults to `False`):
             Whether or not to use the model in training mode (some modules like dropout modules have different
             behaviors between training and evaluation).
```
src/transformers/models/dpr/tokenization_dpr.py

```diff
@@ -144,7 +144,7 @@ CUSTOM_DPR_READER_DOCSTRING = r"""
             The passages titles to be encoded. This can be a string or a list of strings if there are several passages.
         texts (`str` or `List[str]`):
             The passages texts to be encoded. This can be a string or a list of strings if there are several passages.
-        padding (`bool`, `str` or [`~file_utils.PaddingStrategy`], *optional*, defaults to `False`):
+        padding (`bool`, `str` or [`~utils.PaddingStrategy`], *optional*, defaults to `False`):
             Activates and controls padding. Accepts the following values:
             - `True` or `'longest'`: Pad to the longest sequence in the batch (or no padding if only a single sequence
@@ -174,7 +174,7 @@ CUSTOM_DPR_READER_DOCSTRING = r"""
             If left unset or set to `None`, this will use the predefined model maximum length if a maximum length
             is required by one of the truncation/padding parameters. If the model has no specific maximum input
             length (like XLNet) truncation/padding to a maximum length will be deactivated.
-        return_tensors (`str` or [`~file_utils.TensorType`], *optional*):
+        return_tensors (`str` or [`~utils.TensorType`], *optional*):
             If set, will return tensors instead of list of python integers. Acceptable values are:
             - `'tf'`: Return TensorFlow `tf.constant` objects.
```
src/transformers/models/dpr/tokenization_dpr_fast.py

```diff
@@ -145,7 +145,7 @@ CUSTOM_DPR_READER_DOCSTRING = r"""
             The passages titles to be encoded. This can be a string or a list of strings if there are several passages.
         texts (`str` or `List[str]`):
             The passages texts to be encoded. This can be a string or a list of strings if there are several passages.
-        padding (`bool`, `str` or [`~file_utils.PaddingStrategy`], *optional*, defaults to `False`):
+        padding (`bool`, `str` or [`~utils.PaddingStrategy`], *optional*, defaults to `False`):
             Activates and controls padding. Accepts the following values:
             - `True` or `'longest'`: Pad to the longest sequence in the batch (or no padding if only a single sequence
@@ -175,7 +175,7 @@ CUSTOM_DPR_READER_DOCSTRING = r"""
             If left unset or set to `None`, this will use the predefined model maximum length if a maximum length
             is required by one of the truncation/padding parameters. If the model has no specific maximum input
             length (like XLNet) truncation/padding to a maximum length will be deactivated.
-        return_tensors (`str` or [`~file_utils.TensorType`], *optional*):
+        return_tensors (`str` or [`~utils.TensorType`], *optional*):
             If set, will return tensors instead of list of python integers. Acceptable values are:
             - `'tf'`: Return TensorFlow `tf.constant` objects.
```
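Both DPR reader tokenizers document the same `padding` and `return_tensors` options touched here. A hedged usage sketch; the question and passage strings are illustrative, and the call signature follows the custom reader docstring above:

```python
# Hedged sketch of the padding / return_tensors options on the DPR reader tokenizer.
from transformers import DPRReaderTokenizerFast

tokenizer = DPRReaderTokenizerFast.from_pretrained("facebook/dpr-reader-single-nq-base")
encoding = tokenizer(
    questions=["What is love?"],
    titles=["Haddaway"],
    texts=["'What Is Love' is a song recorded by the artist Haddaway"],
    padding="longest",     # `bool`, `str`, or a `~utils.PaddingStrategy` member
    return_tensors="pt",   # `str` or a `~utils.TensorType` member
)
print(encoding["input_ids"].shape)
```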
src/transformers/models/electra/modeling_electra.py

```diff
@@ -794,7 +794,7 @@ ELECTRA_INPUTS_DOCSTRING = r"""
             Whether or not to return the hidden states of all layers. See `hidden_states` under returned tensors for
             more detail.
         return_dict (`bool`, *optional*):
-            Whether or not to return a [`~file_utils.ModelOutput`] instead of a plain tuple.
+            Whether or not to return a [`~utils.ModelOutput`] instead of a plain tuple.
 """
```
src/transformers/models/electra/modeling_flax_electra.py

```diff
@@ -134,7 +134,7 @@ ELECTRA_INPUTS_DOCSTRING = r"""
             - 0 indicates the head is **masked**.
         return_dict (`bool`, *optional*):
-            Whether or not to return a [`~file_utils.ModelOutput`] instead of a plain tuple.
+            Whether or not to return a [`~utils.ModelOutput`] instead of a plain tuple.
 """
```
src/transformers/models/electra/modeling_tf_electra.py

```diff
@@ -907,8 +907,8 @@ ELECTRA_INPUTS_DOCSTRING = r"""
             more detail. This argument can be used only in eager mode, in graph mode the value in the config will be
             used instead.
         return_dict (`bool`, *optional*):
-            Whether or not to return a [`~file_utils.ModelOutput`] instead of a plain tuple. This argument can be used
-            in eager mode, in graph mode the value will always be set to True.
+            Whether or not to return a [`~utils.ModelOutput`] instead of a plain tuple. This argument can be used in
+            eager mode, in graph mode the value will always be set to True.
         training (`bool`, *optional*, defaults to `False`):
             Whether or not to use the model in training mode (some modules like dropout modules have different
             behaviors between training and evaluation).
```