transformers · Commit e13f72fb (unverified)

[doc] :obj: hunt (#14954)

* redo sans examples
* style

Authored Dec 27, 2021 by Stas Bekman; committed via GitHub on Dec 27, 2021.
Parent: 133c5e40
Changes: 33. Showing 20 changed files with 43 additions and 43 deletions (+43 -43).
docs/source/testing.mdx (+1 -1)
src/transformers/generation_utils.py (+5 -5)
src/transformers/modeling_utils.py (+4 -4)
src/transformers/models/encoder_decoder/modeling_encoder_decoder.py (+2 -2)
src/transformers/models/encoder_decoder/modeling_flax_encoder_decoder.py (+2 -2)
src/transformers/models/encoder_decoder/modeling_tf_encoder_decoder.py (+2 -2)
src/transformers/models/ibert/quant_modules.py (+1 -1)
src/transformers/models/layoutlm/modeling_layoutlm.py (+1 -1)
src/transformers/models/layoutlm/modeling_tf_layoutlm.py (+1 -1)
src/transformers/models/layoutlmv2/modeling_layoutlmv2.py (+1 -1)
src/transformers/models/lxmert/modeling_tf_lxmert.py (+2 -2)
src/transformers/models/rag/modeling_rag.py (+2 -2)
src/transformers/models/rag/modeling_tf_rag.py (+2 -2)
src/transformers/models/speech_encoder_decoder/modeling_speech_encoder_decoder.py (+2 -2)
src/transformers/models/t5/modeling_t5.py (+1 -1)
src/transformers/models/tapas/modeling_tapas.py (+5 -5)
src/transformers/models/tapas/modeling_tf_tapas.py (+5 -5)
src/transformers/models/unispeech/configuration_unispeech.py (+1 -1)
src/transformers/models/unispeech_sat/configuration_unispeech_sat.py (+1 -1)
src/transformers/models/vision_encoder_decoder/modeling_flax_vision_encoder_decoder.py (+2 -2)
docs/source/testing.mdx
@@ -738,7 +738,7 @@ leave any data in there.
 <Tip>
 In order to run the equivalent of `rm -r` safely, only subdirs of the project repository checkout are allowed if
-an explicit obj:*tmp_dir* is used, so that by mistake no `/tmp` or similar important part of the filesystem will
+an explicit `tmp_dir` is used, so that by mistake no `/tmp` or similar important part of the filesystem will
 get nuked. i.e. please always pass paths that start with `./`.
 </Tip>
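For context, a minimal sketch of the convention this tip describes, assuming the `TestCasePlus` helper from `transformers.testing_utils` that this section of testing.mdx documents:

```python
from transformers.testing_utils import TestCasePlus

class ExampleTest(TestCasePlus):
    def test_something(self):
        # OK: an explicit tmp_dir under the repository checkout, starting with "./";
        # it is emptied before the test and auto-removed afterwards.
        tmp_dir = self.get_auto_remove_tmp_dir("./xxx")
        # Not OK: an absolute path such as "/tmp/xxx" would be refused, so the
        # rm -r equivalent can never touch important parts of the filesystem.
```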
src/transformers/generation_utils.py
@@ -1320,7 +1320,7 @@ class GenerationMixin:
         Return:
             [`~generation_utils.GreedySearchDecoderOnlyOutput`], [`~generation_utils.GreedySearchEncoderDecoderOutput`]
-            or obj:*torch.LongTensor*: A `torch.LongTensor` containing the generated tokens (default behaviour) or a
+            or `torch.LongTensor`: A `torch.LongTensor` containing the generated tokens (default behaviour) or a
             [`~generation_utils.GreedySearchDecoderOnlyOutput`] if `model.config.is_encoder_decoder=False` and
             `return_dict_in_generate=True` or a [`~generation_utils.GreedySearchEncoderDecoderOutput`] if
             `model.config.is_encoder_decoder=True`.
@@ -1547,7 +1547,7 @@ class GenerationMixin:
         Return:
             [`~generation_utils.SampleDecoderOnlyOutput`], [`~generation_utils.SampleEncoderDecoderOutput`] or
-            obj:*torch.LongTensor*: A `torch.LongTensor` containing the generated tokens (default behaviour) or a
+            `torch.LongTensor`: A `torch.LongTensor` containing the generated tokens (default behaviour) or a
             [`~generation_utils.SampleDecoderOnlyOutput`] if `model.config.is_encoder_decoder=False` and
             `return_dict_in_generate=True` or a [`~generation_utils.SampleEncoderDecoderOutput`] if
             `model.config.is_encoder_decoder=True`.
@@ -1785,7 +1785,7 @@ class GenerationMixin:
         Return:
             [`~generation_utils.BeamSearchDecoderOnlyOutput`], [`~generation_utils.BeamSearchEncoderDecoderOutput`] or
-            obj:*torch.LongTensor*: A `torch.LongTensor` containing the generated tokens (default behaviour) or a
+            `torch.LongTensor`: A `torch.LongTensor` containing the generated tokens (default behaviour) or a
             [`~generation_utils.BeamSearchDecoderOnlyOutput`] if `model.config.is_encoder_decoder=False` and
             `return_dict_in_generate=True` or a [`~generation_utils.BeamSearchEncoderDecoderOutput`] if
             `model.config.is_encoder_decoder=True`.
@@ -2079,7 +2079,7 @@ class GenerationMixin:
         Return:
             [`~generation_utils.BeamSampleDecoderOnlyOutput`], [`~generation_utils.BeamSampleEncoderDecoderOutput`] or
-            obj:*torch.LongTensor*: A `torch.LongTensor` containing the generated tokens (default behaviour) or a
+            `torch.LongTensor`: A `torch.LongTensor` containing the generated tokens (default behaviour) or a
             [`~generation_utils.BeamSampleDecoderOnlyOutput`] if `model.config.is_encoder_decoder=False` and
             `return_dict_in_generate=True` or a [`~generation_utils.BeamSampleEncoderDecoderOutput`] if
             `model.config.is_encoder_decoder=True`.
@@ -2375,7 +2375,7 @@ class GenerationMixin:
         Return:
             [`~generation_utils.BeamSearchDecoderOnlyOutput`], [`~generation_utils.BeamSearchEncoderDecoderOutput`] or
-            obj:*torch.LongTensor*: A `torch.LongTensor` containing the generated tokens (default behaviour) or a
+            `torch.LongTensor`: A `torch.LongTensor` containing the generated tokens (default behaviour) or a
             [`~generation_utils.BeamSearchDecoderOnlyOutput`] if `model.config.is_encoder_decoder=False` and
             `return_dict_in_generate=True` or a [`~generation_utils.BeamSearchEncoderDecoderOutput`] if
             `model.config.is_encoder_decoder=True`.
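These Return sections all describe the same contract; a short illustrative snippet (model name and prompt are examples, not part of this commit):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
inputs = tokenizer("Hello", return_tensors="pt")

# Default behaviour: a plain torch.LongTensor of generated token ids.
tokens = model.generate(**inputs, max_new_tokens=5)

# With return_dict_in_generate=True: a ModelOutput subclass such as
# GreedySearchDecoderOnlyOutput, carrying .sequences (and .scores if requested).
out = model.generate(**inputs, max_new_tokens=5, return_dict_in_generate=True, output_scores=True)
print(type(out).__name__, out.sequences.shape)
```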
src/transformers/modeling_utils.py
@@ -1840,8 +1840,8 @@ class PoolerEndLogits(nn.Module):
         <Tip>
-        One of `start_states` or `start_positions` should be not obj:`None`. If both are set, `start_positions`
-        overrides `start_states`.
+        One of `start_states` or `start_positions` should be not `None`. If both are set, `start_positions`
+        overrides `start_states`.
         </Tip>
@@ -1906,8 +1906,8 @@ class PoolerAnswerClass(nn.Module):
         <Tip>
-        One of `start_states` or `start_positions` should be not obj:`None`. If both are set, `start_positions`
-        overrides `start_states`.
+        One of `start_states` or `start_positions` should be not `None`. If both are set, `start_positions`
+        overrides `start_states`.
         </Tip>
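A hedged sketch of the precedence rule the two tips state, mirroring the gather logic these heads use (names and shapes illustrative, not from the commit):

```python
import torch

def resolve_start_states(hidden_states, start_states=None, start_positions=None):
    # One of the two must be provided; start_positions wins when both are set.
    assert start_states is not None or start_positions is not None
    if start_positions is not None:
        slen, hsz = hidden_states.shape[-2:]
        idx = start_positions[:, None, None].expand(-1, -1, hsz)  # (bsz, 1, hsz)
        start_states = hidden_states.gather(-2, idx)              # (bsz, 1, hsz)
        start_states = start_states.expand(-1, slen, -1)          # (bsz, slen, hsz)
    return start_states
```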
src/transformers/models/encoder_decoder/modeling_encoder_decoder.py
@@ -293,7 +293,7 @@ class EncoderDecoderModel(PreTrainedModel):
         the model, you need to first set it back in training mode with `model.train()`.
         Params:
-            encoder_pretrained_model_name_or_path (:obj: *str*, *optional*):
+            encoder_pretrained_model_name_or_path (`str`, *optional*):
                 Information necessary to initiate the encoder. Can be either:
                     - A string, the *model id* of a pretrained model hosted inside a model repo on huggingface.co.
@@ -306,7 +306,7 @@ class EncoderDecoderModel(PreTrainedModel):
                 `config` argument. This loading path is slower than converting the TensorFlow checkpoint in a
                 PyTorch model using the provided conversion scripts and loading the PyTorch model afterwards.
-            decoder_pretrained_model_name_or_path (:obj: *str*, *optional*, defaults to `None`):
+            decoder_pretrained_model_name_or_path (`str`, *optional*, defaults to `None`):
                 Information necessary to initiate the decoder. Can be either:
                     - A string, the *model id* of a pretrained model hosted inside a model repo on huggingface.co.
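The parameters being re-documented here belong to `from_encoder_decoder_pretrained`; an illustrative call (checkpoint names are examples, and the Flax/TF/speech/vision variants below follow the same pattern):

```python
from transformers import EncoderDecoderModel

model = EncoderDecoderModel.from_encoder_decoder_pretrained(
    "bert-base-uncased",  # encoder_pretrained_model_name_or_path
    "bert-base-uncased",  # decoder_pretrained_model_name_or_path
)
```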
src/transformers/models/encoder_decoder/modeling_flax_encoder_decoder.py
@@ -746,7 +746,7 @@ class FlaxEncoderDecoderModel(FlaxPreTrainedModel):
         checkpoints.
         Params:
-            encoder_pretrained_model_name_or_path (:obj: *Union[str, os.PathLike]*, *optional*):
+            encoder_pretrained_model_name_or_path (`Union[str, os.PathLike]`, *optional*):
                 Information necessary to initiate the encoder. Can be either:
                     - A string, the *model id* of a pretrained model hosted inside a model repo on huggingface.co.
@@ -755,7 +755,7 @@ class FlaxEncoderDecoderModel(FlaxPreTrainedModel):
                 - A path to a *directory* containing model weights saved using
                   [`~FlaxPreTrainedModel.save_pretrained`], e.g., `./my_model_directory/`.
-            decoder_pretrained_model_name_or_path (:obj: *Union[str, os.PathLike]*, *optional*, defaults to `None`):
+            decoder_pretrained_model_name_or_path (`Union[str, os.PathLike]`, *optional*, defaults to `None`):
                 Information necessary to initiate the decoder. Can be either:
                     - A string, the *model id* of a pretrained model hosted inside a model repo on huggingface.co.
src/transformers/models/encoder_decoder/modeling_tf_encoder_decoder.py
@@ -308,7 +308,7 @@ class TFEncoderDecoderModel(TFPreTrainedModel):
         Params:
-            encoder_pretrained_model_name_or_path (:obj: *str*, *optional*):
+            encoder_pretrained_model_name_or_path (`str`, *optional*):
                 Information necessary to initiate the encoder. Can be either:
                     - A string, the *model id* of a pretrained model hosted inside a model repo on huggingface.co.
@@ -319,7 +319,7 @@ class TFEncoderDecoderModel(TFPreTrainedModel):
                 - A path or url to a *pytorch index checkpoint file* (e.g, `./pt_model/`). In this case,
                   `encoder_from_pt` should be set to `True`.
-            decoder_pretrained_model_name_or_path (:obj: *str*, *optional*, defaults to `None`):
+            decoder_pretrained_model_name_or_path (`str`, *optional*, defaults to `None`):
                 Information necessary to initiate the decoder. Can be either:
                     - A string, the *model id* of a pretrained model hosted inside a model repo on huggingface.co.
src/transformers/models/ibert/quant_modules.py
@@ -713,7 +713,7 @@ def batch_frexp(inputs, max_bit=31):
             Target scaling factor to decompose.
     Returns:
-        :obj:`Tuple(torch.Tensor, torch.Tensor)`: mantisa and exponent
+        `Tuple(torch.Tensor, torch.Tensor)`: mantisa and exponent
     """
     shape_of_input = inputs.size()
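A hedged sketch of what `batch_frexp` computes, per the Returns line above: decompose a scale into an integer mantissa and a power-of-two exponent so that `scale ≈ mantissa / 2**exponent` can be applied with integer-only arithmetic (this mirrors the function's intent, not its exact implementation):

```python
import numpy as np

def frexp_sketch(scale, max_bit=31):
    m, e = np.frexp(scale)                   # scale = m * 2**e, with 0.5 <= |m| < 1
    mantissa = np.round(m * (1 << max_bit))  # fixed-point mantissa of max_bit bits
    exponent = max_bit - e                   # right-shift that undoes the scaling
    return mantissa, exponent

mant, exp = frexp_sketch(np.array([0.003]))
print(mant / 2.0 ** exp)                     # ≈ 0.003
```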
src/transformers/models/layoutlm/modeling_layoutlm.py
@@ -108,7 +108,7 @@ class LayoutLMEmbeddings(nn.Module):
             right_position_embeddings = self.x_position_embeddings(bbox[:, :, 2])
             lower_position_embeddings = self.y_position_embeddings(bbox[:, :, 3])
         except IndexError as e:
-            raise IndexError("The :obj:`bbox`coordinate values should be within 0-1000 range.") from e
+            raise IndexError("The `bbox`coordinate values should be within 0-1000 range.") from e
         h_position_embeddings = self.h_position_embeddings(bbox[:, :, 3] - bbox[:, :, 1])
         w_position_embeddings = self.w_position_embeddings(bbox[:, :, 2] - bbox[:, :, 0])
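The IndexError fires when `bbox` coordinates fall outside the 0-1000 range of the position-embedding tables. A common remedy (illustrative, not part of this commit, and shared by all three LayoutLM variants shown here) is to normalize pixel coordinates against the page size first:

```python
def normalize_bbox(bbox, width, height):
    # Map pixel-space (x0, y0, x1, y1) into the 0-1000 grid LayoutLM expects.
    x0, y0, x1, y1 = bbox
    return [
        int(1000 * x0 / width),
        int(1000 * y0 / height),
        int(1000 * x1 / width),
        int(1000 * y1 / height),
    ]

print(normalize_bbox([15, 20, 120, 40], width=600, height=800))  # [25, 25, 200, 50]
```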
src/transformers/models/layoutlm/modeling_tf_layoutlm.py
@@ -162,7 +162,7 @@ class TFLayoutLMEmbeddings(tf.keras.layers.Layer):
             right_position_embeddings = tf.gather(self.x_position_embeddings, bbox[:, :, 2])
             lower_position_embeddings = tf.gather(self.y_position_embeddings, bbox[:, :, 3])
         except IndexError as e:
-            raise IndexError("The :obj:`bbox`coordinate values should be within 0-1000 range.") from e
+            raise IndexError("The `bbox`coordinate values should be within 0-1000 range.") from e
         h_position_embeddings = tf.gather(self.h_position_embeddings, bbox[:, :, 3] - bbox[:, :, 1])
         w_position_embeddings = tf.gather(self.w_position_embeddings, bbox[:, :, 2] - bbox[:, :, 0])
src/transformers/models/layoutlmv2/modeling_layoutlmv2.py
@@ -86,7 +86,7 @@ class LayoutLMv2Embeddings(nn.Module):
             right_position_embeddings = self.x_position_embeddings(bbox[:, :, 2])
             lower_position_embeddings = self.y_position_embeddings(bbox[:, :, 3])
         except IndexError as e:
-            raise IndexError("The :obj:`bbox` coordinate values should be within 0-1000 range.") from e
+            raise IndexError("The `bbox` coordinate values should be within 0-1000 range.") from e
         h_position_embeddings = self.h_position_embeddings(bbox[:, :, 3] - bbox[:, :, 1])
         w_position_embeddings = self.w_position_embeddings(bbox[:, :, 2] - bbox[:, :, 0])
src/transformers/models/lxmert/modeling_tf_lxmert.py
@@ -1324,7 +1324,7 @@ class TFLxmertForPreTraining(TFLxmertPreTrainedModel):
             Labels for computing the masked language modeling loss. Indices should be in `[-100, 0, ...,
             config.vocab_size]` (see `input_ids` docstring) Tokens with indices set to `-100` are ignored (masked), the
             loss is only computed for the tokens with labels in `[0, ..., config.vocab_size]`
-        obj_labels: (`Dict[Str: Tuple[tf.Tensor, tf.Tensor]]`, *optional*, defaults to :obj:`None`):
+        obj_labels: (`Dict[Str: Tuple[tf.Tensor, tf.Tensor]]`, *optional*, defaults to `None`):
             each key is named after each one of the visual losses and each element of the tuple is of the shape
             `(batch_size, num_features)` and `(batch_size, num_features, visual_feature_dim)` for each the label id and
             the label score respectively
@@ -1334,7 +1334,7 @@ class TFLxmertForPreTraining(TFLxmertPreTrainedModel):
             - 0 indicates that the sentence does not match the image,
             - 1 indicates that the sentence does match the image.
-        ans (`Torch.Tensor` of shape `(batch_size)`, *optional*, defaults to :obj:`None`):
+        ans (`Torch.Tensor` of shape `(batch_size)`, *optional*, defaults to `None`):
             a one hot representation hof the correct answer *optional*
         Returns:
src/transformers/models/rag/modeling_rag.py
@@ -258,7 +258,7 @@ class RagPreTrainedModel(PreTrainedModel):
         the model, you need to first set it back in training mode with `model.train()`.
         Params:
-            question_encoder_pretrained_model_name_or_path (:obj: *str*, *optional*, defaults to `None`):
+            question_encoder_pretrained_model_name_or_path (`str`, *optional*, defaults to `None`):
                 Information necessary to initiate the question encoder. Can be either:
                     - A string, the *model id* of a pretrained model hosted inside a model repo on huggingface.co.
@@ -271,7 +271,7 @@ class RagPreTrainedModel(PreTrainedModel):
                 `config` argument. This loading path is slower than converting the TensorFlow checkpoint in a
                 PyTorch model using the provided conversion scripts and loading the PyTorch model afterwards.
-            generator_pretrained_model_name_or_path (:obj: *str*, *optional*, defaults to `None`):
+            generator_pretrained_model_name_or_path (`str`, *optional*, defaults to `None`):
                 Information necessary to initiate the generator. Can be either:
                     - A string, the *model id* of a pretrained model hosted inside a model repo on huggingface.co.
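These parameters belong to `from_pretrained_question_encoder_generator`; an illustrative call (checkpoint names are the ones commonly used in the RAG docs, not part of this commit; the TF variant below mirrors it):

```python
from transformers import RagModel

model = RagModel.from_pretrained_question_encoder_generator(
    "facebook/dpr-question_encoder-single-nq-base",  # question_encoder_pretrained_model_name_or_path
    "facebook/bart-large",                           # generator_pretrained_model_name_or_path
)
```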
src/transformers/models/rag/modeling_tf_rag.py
@@ -233,7 +233,7 @@ class TFRagPreTrainedModel(TFPreTrainedModel):
         model checkpoints.
         Params:
-            question_encoder_pretrained_model_name_or_path (:obj: *str*, *optional*):
+            question_encoder_pretrained_model_name_or_path (`str`, *optional*):
                 Information necessary to initiate the question encoder. Can be either:
                     - A string with the *shortcut name* of a pretrained model to load from cache or download, e.g.,
@@ -245,7 +245,7 @@ class TFRagPreTrainedModel(TFPreTrainedModel):
                 - A path or url to a *pytorch index checkpoint file* (e.g, `./pt_model/`). In this case,
                   `question_encoder_from_pt` should be set to `True`.
-            generator_pretrained_model_name_or_path (:obj: *str*, *optional*, defaults to `None`):
+            generator_pretrained_model_name_or_path (`str`, *optional*, defaults to `None`):
                 Information necessary to initiate the generator. Can be either:
                     - A string with the *shortcut name* of a pretrained model to load from cache or download, e.g.,
src/transformers/models/speech_encoder_decoder/modeling_speech_encoder_decoder.py
@@ -287,7 +287,7 @@ class SpeechEncoderDecoderModel(PreTrainedModel):
         the model, you need to first set it back in training mode with `model.train()`.
         Params:
-            encoder_pretrained_model_name_or_path (:obj: *str*, *optional*):
+            encoder_pretrained_model_name_or_path (`str`, *optional*):
                 Information necessary to initiate the encoder. Can be either:
                     - A string, the *model id* of a pretrained model hosted inside a model repo on huggingface.co.
@@ -300,7 +300,7 @@ class SpeechEncoderDecoderModel(PreTrainedModel):
                 `config` argument. This loading path is slower than converting the TensorFlow checkpoint in a
                 PyTorch model using the provided conversion scripts and loading the PyTorch model afterwards.
-            decoder_pretrained_model_name_or_path (:obj: *str*, *optional*, defaults to `None`):
+            decoder_pretrained_model_name_or_path (`str`, *optional*, defaults to `None`):
                 Information necessary to initiate the decoder. Can be either:
                     - A string, the *model id* of a pretrained model hosted inside a model repo on huggingface.co.
src/transformers/models/t5/modeling_t5.py
@@ -915,7 +915,7 @@ class T5Stack(T5PreTrainedModel):
         mask_seq_length = past_key_values[0][0].shape[2] + seq_length if past_key_values is not None else seq_length
         if use_cache is True:
-            assert self.is_decoder, f":obj:`use_cache` can only be set to `True` if {self} is used as a decoder"
+            assert self.is_decoder, f"`use_cache` can only be set to `True` if {self} is used as a decoder"
         if attention_mask is None:
             attention_mask = torch.ones(batch_size, mask_seq_length).to(inputs_embeds.device)
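For reference, the constraint this assertion enforces: past key/value caching only makes sense for the decoder stack, so `use_cache=True` is rejected on the encoder. A hedged usage sketch (checkpoint name and prompt are illustrative):

```python
from transformers import T5ForConditionalGeneration, T5Tokenizer

tokenizer = T5Tokenizer.from_pretrained("t5-small")
model = T5ForConditionalGeneration.from_pretrained("t5-small")
ids = tokenizer("translate English to German: Hello.", return_tensors="pt").input_ids

# generate() routes use_cache to the decoder stack, where the assertion passes.
out = model.generate(ids, use_cache=True)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```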
src/transformers/models/tapas/modeling_tapas.py
@@ -2277,7 +2277,7 @@ def _calculate_expected_result(
             Numeric values of every token. Nan for tokens which are not numeric values.
         numeric_values_scale (`torch.FloatTensor` of shape `(batch_size, seq_length)`):
             Scale of the numeric values of every token.
-        input_mask_float (:obj: *torch.FloatTensor* of shape `(batch_size, seq_length)`):
+        input_mask_float (`torch.FloatTensor` of shape `(batch_size, seq_length)`):
             Mask for the table, without question tokens and table headers.
         logits_aggregation (`torch.FloatTensor` of shape `(batch_size, num_aggregation_labels)`):
             Logits per aggregation operation.
@@ -2371,9 +2371,9 @@ def _calculate_regression_loss(
     Calculates the regression loss per example.
     Args:
-        answer (:obj: *torch.FloatTensor* of shape `(batch_size,)`):
+        answer (`torch.FloatTensor` of shape `(batch_size,)`):
             Answer for every example in the batch. Nan if there is no scalar answer.
-        aggregate_mask (:obj: *torch.FloatTensor* of shape `(batch_size,)`):
+        aggregate_mask (`torch.FloatTensor` of shape `(batch_size,)`):
             A mask set to 1 for examples that should use aggregation functions.
         dist_per_cell (`torch.distributions.Bernoulli`):
             Cell selection distribution for each cell.
@@ -2381,9 +2381,9 @@ def _calculate_regression_loss(
             Numeric values of every token. Nan for tokens which are not numeric values.
         numeric_values_scale (`torch.FloatTensor` of shape `(batch_size, seq_length)`):
             Scale of the numeric values of every token.
-        input_mask_float (:obj: *torch.FloatTensor* of shape `(batch_size, seq_length)`):
+        input_mask_float (`torch.FloatTensor` of shape `(batch_size, seq_length)`):
             Mask for the table, without question tokens and table headers.
-        logits_aggregation (:obj: *torch.FloatTensor* of shape `(batch_size, num_aggregation_labels)`):
+        logits_aggregation (`torch.FloatTensor` of shape `(batch_size, num_aggregation_labels)`):
             Logits per aggregation operation.
     config ([`TapasConfig`]):
         Model configuration class with all the parameters of the model
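A hedged sketch relating the quantities these docstrings describe: a scalar `answer` per example, an `aggregate_mask` selecting the examples that use aggregation, and a Huber-style regression loss averaged over only those examples (illustrative of the shapes, not TAPAS's exact computation):

```python
import torch
import torch.nn.functional as F

def regression_loss_sketch(expected, answer, aggregate_mask, delta=1.0):
    # expected, answer, aggregate_mask: (batch_size,)
    per_example = F.huber_loss(expected, answer, reduction="none", delta=delta)
    return (per_example * aggregate_mask).sum() / aggregate_mask.sum().clamp(min=1.0)
```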
src/transformers/models/tapas/modeling_tf_tapas.py
@@ -2241,7 +2241,7 @@ def _calculate_expected_result(
             Numeric values of every token. Nan for tokens which are not numeric values.
         numeric_values_scale (`tf.Tensor` of shape `(batch_size, seq_length)`):
             Scale of the numeric values of every token.
-        input_mask_float (:obj: *tf.Tensor* of shape `(batch_size, seq_length)`):
+        input_mask_float (`tf.Tensor` of shape `(batch_size, seq_length)`):
             Mask for the table, without question tokens and table headers.
         logits_aggregation (`tf.Tensor` of shape `(batch_size, num_aggregation_labels)`):
             Logits per aggregation operation.
@@ -2321,9 +2321,9 @@ def _calculate_regression_loss(
     Calculates the regression loss per example.
     Args:
-        answer (:obj: *tf.Tensor* of shape `(batch_size,)`):
+        answer (`tf.Tensor` of shape `(batch_size,)`):
             Answer for every example in the batch. Nan if there is no scalar answer.
-        aggregate_mask (:obj: *tf.Tensor* of shape `(batch_size,)`):
+        aggregate_mask (`tf.Tensor` of shape `(batch_size,)`):
             A mask set to 1 for examples that should use aggregation functions.
         dist_per_cell (`torch.distributions.Bernoulli`):
             Cell selection distribution for each cell.
@@ -2331,9 +2331,9 @@ def _calculate_regression_loss(
             Numeric values of every token. Nan for tokens which are not numeric values.
         numeric_values_scale (`tf.Tensor` of shape `(batch_size, seq_length)`):
             Scale of the numeric values of every token.
-        input_mask_float (:obj: *tf.Tensor* of shape `(batch_size, seq_length)`):
+        input_mask_float (`tf.Tensor` of shape `(batch_size, seq_length)`):
             Mask for the table, without question tokens and table headers.
-        logits_aggregation (:obj: *tf.Tensor* of shape `(batch_size, num_aggregation_labels)`):
+        logits_aggregation (`tf.Tensor` of shape `(batch_size, num_aggregation_labels)`):
             Logits per aggregation operation.
     config ([`TapasConfig`]):
         Model configuration class with all the parameters of the model
src/transformers/models/unispeech/configuration_unispeech.py
@@ -73,7 +73,7 @@ class UniSpeechConfig(PretrainedConfig):
         feat_extract_activation (`str, `optional`, defaults to `"gelu"`):
             The non-linear activation function (function or string) in the 1D convolutional layers of the feature
             extractor. If string, `"gelu"`, `"relu"`, `"selu"` and `"gelu_new"` are supported.
-        feat_quantizer_dropout (obj:*float*, *optional*, defaults to 0.0):
+        feat_quantizer_dropout (`float`, *optional*, defaults to 0.0):
            The dropout probabilitiy for quantized feature extractor states.
         conv_dim (`Tuple[int]`, *optional*, defaults to `(512, 512, 512, 512, 512, 512, 512)`):
             A tuple of integers defining the number of input and output channels of each 1D convolutional layer in the
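An illustrative instantiation of the parameter documented above (the value shown is the documented default; the UniSpeechSat config below exposes the same field):

```python
from transformers import UniSpeechConfig

config = UniSpeechConfig(feat_quantizer_dropout=0.0)
print(config.feat_quantizer_dropout)  # 0.0
```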
src/transformers/models/unispeech_sat/configuration_unispeech_sat.py
@@ -73,7 +73,7 @@ class UniSpeechSatConfig(PretrainedConfig):
         feat_extract_activation (`str, `optional`, defaults to `"gelu"`):
             The non-linear activation function (function or string) in the 1D convolutional layers of the feature
             extractor. If string, `"gelu"`, `"relu"`, `"selu"` and `"gelu_new"` are supported.
-        feat_quantizer_dropout (obj:*float*, *optional*, defaults to 0.0):
+        feat_quantizer_dropout (`float`, *optional*, defaults to 0.0):
            The dropout probabilitiy for quantized feature extractor states.
         conv_dim (`Tuple[int]`, *optional*, defaults to `(512, 512, 512, 512, 512, 512, 512)`):
             A tuple of integers defining the number of input and output channels of each 1D convolutional layer in the
src/transformers/models/vision_encoder_decoder/modeling_flax_vision_encoder_decoder.py
@@ -712,7 +712,7 @@ class FlaxVisionEncoderDecoderModel(FlaxPreTrainedModel):
         checkpoints.
         Params:
-            encoder_pretrained_model_name_or_path (:obj: *Union[str, os.PathLike]*, *optional*):
+            encoder_pretrained_model_name_or_path (`Union[str, os.PathLike]`, *optional*):
                 Information necessary to initiate the encoder. Can be either:
                     - A string, the *model id* of a pretrained model hosted inside a model repo on huggingface.co. An
@@ -720,7 +720,7 @@ class FlaxVisionEncoderDecoderModel(FlaxPreTrainedModel):
                 - A path to a *directory* containing model weights saved using
                   [`~FlaxPreTrainedModel.save_pretrained`], e.g., `./my_model_directory/`.
-            decoder_pretrained_model_name_or_path (:obj: *Union[str, os.PathLike]*, *optional*, defaults to `None`):
+            decoder_pretrained_model_name_or_path (`Union[str, os.PathLike]`, *optional*, defaults to `None`):
                 Information necessary to initiate the decoder. Can be either:
                     - A string, the *model id* of a pretrained model hosted inside a model repo on huggingface.co.