Unverified Commit 4ece3b94 authored by Matthijs Hollemans, committed by GitHub

add VITS model (#24085)



* add VITS model

* let's vits

* finish TextEncoder (mostly)

* rename VITS to Vits

* add StochasticDurationPredictor

* add flow model

* add generator

* correctly set vocab size

* add tokenizer

* remove processor & feature extractor

* add PosteriorEncoder

* add missing weights to SDP

* also convert LJSpeech and VCTK checkpoints

* add training stuff in forward

* add placeholder tests for tokenizer

* add placeholder tests for model

* starting cleanup

* let the great renaming begin!

* use config

* global_conditioning

* more cleaning

* renaming variables

* more renaming

* more renaming

* it never ends

* reticulating the splines

* more renaming

* HiFi-GAN

* doc strings for main model

* fixup

* fix-copies

* don't make it a PreTrainedModel

* fixup

* rename config options

* remove training logic from forward pass

* simplify relative position

* use actual checkpoint

* style

* PR review fixes

* more review changes

* fixup

* more unit tests

* fixup

* fix doc test

* add integration test

* improve tokenizer tests

* add tokenizer integration test

* fix tests on GPU (gave OOM)

* conversion script can handle repos from hub

* add conversion script for all MMS-TTS checkpoints

* automatically create a README for the converted checkpoint

* small changes to config

* push README to hub

* only show uroman note for checkpoints that need it

* remove conversion script because code formatting breaks the readme

* make WaveNet layers configurable

* rename variables

* simplifying the math

* output attentions and hidden states

* remove VitsFlip in flow model

* also got rid of the other flip

* fix tests

* rename more variables

* rename tokenizer, add phonemization

* raise error when phonemizer missing

* re-order config docstrings to match method

* change config naming

* remove redundant str -> list

* fix copyright: vits authors -> kakao enterprise

* (mean, log_variances) -> (prior_mean, prior_log_variances)

* if return dict -> if not return dict

* speed -> speaking rate

* Apply suggestions from code review
Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>

* update fused tanh sigmoid

* reduce dims in tester

* audio -> output_values

* audio -> output_values in tuple out

* fix return type

* fix return type

* make _unconstrained_rational_quadratic_spline a function

* all nn's to accept a config

* add spectro to output

* move {speaking rate, noise scale, noise scale duration} to config

* path -> attn_path

* idxs -> valid idxs -> padded idxs

* output values -> waveform

* use config for attention

* make generation work

* harden integration test

* add spectrogram to dict output

* tokenizer refactor

* make style

* remove 'fake' padding token

* harden tokenizer tests

* ron norm test

* fprop / save tests deterministic

* move uroman to tokenizer as much as possible

* better logger message

* fix vivit imports

* add uroman integration test

* make style

* up

* matthijs -> sanchit-gandhi

* fix tokenizer test

* make fix-copies

* fix dict comprehension

* fix config tests

* fix model tests

* make outputs consistent with reverse/not reverse

* fix key concat

* more model details

* add author

* return dict

* speaker error

* labels error

* Apply suggestions from code review
Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>

* Update src/transformers/models/vits/convert_original_checkpoint.py
Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>

* remove uromanize

* add docstrings

* add docstrings for tokenizer

* upper-case skip messages

* fix return dict

* style

* finish tests

* update checkpoints

* make style

* remove doctest file

* revert

* fix docstring

* fix tokenizer

* remove uroman integration test

* add sampling rate

* fix docs / docstrings

* style

* add sr to model output

* fix outputs

* style / copies

* fix docstring

* fix copies

* remove sr from model outputs

* Update utils/documentation_tests.txt
Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>

* add sr as allowed attr

---------
Co-authored-by: sanchit-gandhi <sanchit@huggingface.co>
Co-authored-by: Sanchit Gandhi <93869735+sanchit-gandhi@users.noreply.github.com>
Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
parent ef10dbce
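
For reference, here is a minimal usage sketch of the model this PR adds, distilled from the integration test further down; the checkpoint name, seeding, and output attribute come from that test, and the rest is illustrative rather than canonical:

import torch

from transformers import VitsModel, VitsTokenizer
from transformers.trainer_utils import set_seed

model = VitsModel.from_pretrained("facebook/mms-tts-eng")
tokenizer = VitsTokenizer.from_pretrained("facebook/mms-tts-eng")

set_seed(555)  # VITS sampling is stochastic, so seed for reproducible audio
inputs = tokenizer("Hello, it is nice to meet you!", return_tensors="pt")
with torch.no_grad():
    outputs = model(inputs.input_ids)
waveform = outputs.waveform[0]  # 1-D tensor of audio samples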
...@@ -7929,6 +7929,23 @@ class VitDetPreTrainedModel(metaclass=DummyObject):
        requires_backends(self, ["torch"])
VITS_PRETRAINED_MODEL_ARCHIVE_LIST = None


class VitsModel(metaclass=DummyObject):
    _backends = ["torch"]

    def __init__(self, *args, **kwargs):
        requires_backends(self, ["torch"])


class VitsPreTrainedModel(metaclass=DummyObject):
    _backends = ["torch"]

    def __init__(self, *args, **kwargs):
        requires_backends(self, ["torch"])
VIVIT_PRETRAINED_MODEL_ARCHIVE_LIST = None
...
# coding=utf-8
# Copyright 2023 The HuggingFace Inc. team. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
""" Testing suite for the PyTorch VITS model. """
import copy
import os
import tempfile
import unittest
from typing import Dict, List, Tuple
import numpy as np
from transformers import PretrainedConfig, VitsConfig
from transformers.testing_utils import (
is_torch_available,
require_torch,
slow,
torch_device,
)
from transformers.trainer_utils import set_seed
from ...test_configuration_common import ConfigTester
from ...test_modeling_common import (
ModelTesterMixin,
global_rng,
ids_tensor,
random_attention_mask,
)
if is_torch_available():
import torch
from transformers import VitsModel, VitsTokenizer
CONFIG_NAME = "config.json"
GENERATION_CONFIG_NAME = "generation_config.json"
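# Local copy of the zero-init helper from test_modeling_common: shrink every
# initializer range/std to ~0 so that test_initialization can distinguish
# uniformly initialized parameters (mean in [-1, 1]) from constant-initialized
# ones (mean 0 or 1).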
def _config_zero_init(config):
configs_no_init = copy.deepcopy(config)
for key in configs_no_init.__dict__.keys():
if "_range" in key or "_std" in key or "initializer_factor" in key or "layer_scale" in key:
setattr(configs_no_init, key, 1e-10)
if isinstance(getattr(configs_no_init, key, None), PretrainedConfig):
no_init_subconfig = _config_zero_init(getattr(configs_no_init, key))
setattr(configs_no_init, key, no_init_subconfig)
return configs_no_init
@require_torch
class VitsModelTester:
def __init__(
self,
parent,
batch_size=2,
seq_length=7,
is_training=False,
hidden_size=16,
num_hidden_layers=2,
num_attention_heads=2,
intermediate_size=64,
flow_size=16,
vocab_size=38,
spectrogram_bins=8,
duration_predictor_num_flows=2,
duration_predictor_filter_channels=16,
prior_encoder_num_flows=2,
upsample_initial_channel=16,
):
self.parent = parent
self.batch_size = batch_size
self.seq_length = seq_length
self.is_training = is_training
self.hidden_size = hidden_size
self.num_hidden_layers = num_hidden_layers
self.num_attention_heads = num_attention_heads
self.intermediate_size = intermediate_size
self.flow_size = flow_size
self.vocab_size = vocab_size
self.spectrogram_bins = spectrogram_bins
self.duration_predictor_num_flows = duration_predictor_num_flows
self.duration_predictor_filter_channels = duration_predictor_filter_channels
self.prior_encoder_num_flows = prior_encoder_num_flows
self.upsample_initial_channel = upsample_initial_channel
def prepare_config_and_inputs(self):
input_ids = ids_tensor([self.batch_size, self.seq_length], self.vocab_size).clamp(2)
attention_mask = random_attention_mask([self.batch_size, self.seq_length])
config = self.get_config()
inputs_dict = {
"input_ids": input_ids,
"attention_mask": attention_mask,
}
return config, inputs_dict
def prepare_config_and_inputs_for_common(self):
config, inputs_dict = self.prepare_config_and_inputs()
return config, inputs_dict
def get_config(self):
return VitsConfig(
hidden_size=self.hidden_size,
num_hidden_layers=self.num_hidden_layers,
num_attention_heads=self.num_attention_heads,
ffn_dim=self.intermediate_size,
flow_size=self.flow_size,
vocab_size=self.vocab_size,
spectrogram_bins=self.spectrogram_bins,
duration_predictor_num_flows=self.duration_predictor_num_flows,
prior_encoder_num_flows=self.prior_encoder_num_flows,
duration_predictor_filter_channels=self.duration_predictor_filter_channels,
posterior_encoder_num_wavenet_layers=self.num_hidden_layers,
upsample_initial_channel=self.upsample_initial_channel,
)
def create_and_check_model_forward(self, config, inputs_dict):
model = VitsModel(config=config).to(torch_device).eval()
input_ids = inputs_dict["input_ids"]
attention_mask = inputs_dict["attention_mask"]
result = model(input_ids, attention_mask=attention_mask)
self.parent.assertEqual(result.waveform.shape, (self.batch_size, 11008))
@require_torch
class VitsModelTest(ModelTesterMixin, unittest.TestCase):
all_model_classes = (VitsModel,) if is_torch_available() else ()
is_encoder_decoder = False
test_pruning = False
test_headmasking = False
test_resize_embeddings = False
test_head_masking = False
test_torchscript = False
has_attentions = False
input_name = "input_ids"
def setUp(self):
self.model_tester = VitsModelTester(self)
self.config_tester = ConfigTester(self, config_class=VitsConfig, hidden_size=37)
def test_config(self):
self.config_tester.run_common_tests()
def test_model_forward(self):
set_seed(12345)
global_rng.seed(12345)
config_and_inputs = self.model_tester.prepare_config_and_inputs()
self.model_tester.create_and_check_model_forward(*config_and_inputs)
@unittest.skip("VITS is not deterministic")
def test_determinism(self):
pass
def test_initialization(self):
config, inputs_dict = self.model_tester.prepare_config_and_inputs_for_common()
configs_no_init = _config_zero_init(config)
for model_class in self.all_model_classes:
model = model_class(config=configs_no_init)
for name, param in model.named_parameters():
uniform_init_parms = [
"emb_rel_k",
"emb_rel_v",
"conv_1",
"conv_2",
"conv_pre",
"conv_post",
"conv_proj",
"conv_dds",
"project",
"wavenet.in_layers",
"wavenet.res_skip_layers",
"upsampler",
"resblocks",
]
if param.requires_grad:
if any(x in name for x in uniform_init_parms):
self.assertTrue(
-1.0 <= ((param.data.mean() * 1e9).round() / 1e9).item() <= 1.0,
msg=f"Parameter {name} of model {model_class} seems not properly initialized",
)
else:
self.assertIn(
((param.data.mean() * 1e9).round() / 1e9).item(),
[0.0, 1.0],
msg=f"Parameter {name} of model {model_class} seems not properly initialized",
)
@unittest.skip("VITS has no inputs_embeds")
def test_inputs_embeds(self):
pass
@unittest.skip("VITS has no input embeddings")
def test_model_common_attributes(self):
pass
# override since the model is not deterministic, so we need to set the seed for each forward pass
def test_model_outputs_equivalence(self):
config, inputs_dict = self.model_tester.prepare_config_and_inputs_for_common()
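        # NaN compares unequal to itself, so zero out NaNs before the allclose checks below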
def set_nan_tensor_to_zero(t):
t[t != t] = 0
return t
def check_equivalence(model, tuple_inputs, dict_inputs, additional_kwargs={}):
with torch.no_grad():
set_seed(0)
tuple_output = model(**tuple_inputs, return_dict=False, **additional_kwargs)
set_seed(0)
dict_output = model(**dict_inputs, return_dict=True, **additional_kwargs).to_tuple()
def recursive_check(tuple_object, dict_object):
if isinstance(tuple_object, (List, Tuple)):
for tuple_iterable_value, dict_iterable_value in zip(tuple_object, dict_object):
recursive_check(tuple_iterable_value, dict_iterable_value)
elif isinstance(tuple_object, Dict):
for tuple_iterable_value, dict_iterable_value in zip(
tuple_object.values(), dict_object.values()
):
recursive_check(tuple_iterable_value, dict_iterable_value)
elif tuple_object is None:
return
else:
self.assertTrue(
torch.allclose(
set_nan_tensor_to_zero(tuple_object), set_nan_tensor_to_zero(dict_object), atol=1e-5
),
msg=(
"Tuple and dict output are not equal. Difference:"
f" {torch.max(torch.abs(tuple_object - dict_object))}. Tuple has `nan`:"
f" {torch.isnan(tuple_object).any()} and `inf`: {torch.isinf(tuple_object)}. Dict has"
f" `nan`: {torch.isnan(dict_object).any()} and `inf`: {torch.isinf(dict_object)}."
),
)
recursive_check(tuple_output, dict_output)
for model_class in self.all_model_classes:
model = model_class(config)
model.to(torch_device)
model.eval()
tuple_inputs = self._prepare_for_class(inputs_dict, model_class)
dict_inputs = self._prepare_for_class(inputs_dict, model_class)
check_equivalence(model, tuple_inputs, dict_inputs)
tuple_inputs = self._prepare_for_class(inputs_dict, model_class, return_labels=True)
dict_inputs = self._prepare_for_class(inputs_dict, model_class, return_labels=True)
check_equivalence(model, tuple_inputs, dict_inputs)
tuple_inputs = self._prepare_for_class(inputs_dict, model_class)
dict_inputs = self._prepare_for_class(inputs_dict, model_class)
check_equivalence(model, tuple_inputs, dict_inputs, {"output_hidden_states": True})
tuple_inputs = self._prepare_for_class(inputs_dict, model_class, return_labels=True)
dict_inputs = self._prepare_for_class(inputs_dict, model_class, return_labels=True)
check_equivalence(model, tuple_inputs, dict_inputs, {"output_hidden_states": True})
if self.has_attentions:
tuple_inputs = self._prepare_for_class(inputs_dict, model_class)
dict_inputs = self._prepare_for_class(inputs_dict, model_class)
check_equivalence(model, tuple_inputs, dict_inputs, {"output_attentions": True})
tuple_inputs = self._prepare_for_class(inputs_dict, model_class, return_labels=True)
dict_inputs = self._prepare_for_class(inputs_dict, model_class, return_labels=True)
check_equivalence(model, tuple_inputs, dict_inputs, {"output_attentions": True})
tuple_inputs = self._prepare_for_class(inputs_dict, model_class, return_labels=True)
dict_inputs = self._prepare_for_class(inputs_dict, model_class, return_labels=True)
check_equivalence(
model, tuple_inputs, dict_inputs, {"output_hidden_states": True, "output_attentions": True}
)
# override since the model is not deterministic, so we need to set the seed for each forward pass
def test_save_load(self):
config, inputs_dict = self.model_tester.prepare_config_and_inputs_for_common()
def check_save_load(out1, out2):
# make sure we don't have nans
out_2 = out2.cpu().numpy()
out_2[np.isnan(out_2)] = 0
out_1 = out1.cpu().numpy()
out_1[np.isnan(out_1)] = 0
max_diff = np.amax(np.abs(out_1 - out_2))
self.assertLessEqual(max_diff, 1e-5)
for model_class in self.all_model_classes:
model = model_class(config)
model.to(torch_device)
model.eval()
with torch.no_grad():
set_seed(0)
first = model(**self._prepare_for_class(inputs_dict, model_class))[0]
with tempfile.TemporaryDirectory() as tmpdirname:
model.save_pretrained(tmpdirname)
# the config file (and the generation config file, if it can generate) should be saved
self.assertTrue(os.path.exists(os.path.join(tmpdirname, CONFIG_NAME)))
self.assertEqual(
model.can_generate(), os.path.exists(os.path.join(tmpdirname, GENERATION_CONFIG_NAME))
)
model = model_class.from_pretrained(tmpdirname)
model.to(torch_device)
with torch.no_grad():
set_seed(0)
second = model(**self._prepare_for_class(inputs_dict, model_class))[0]
if isinstance(first, tuple) and isinstance(second, tuple):
for tensor1, tensor2 in zip(first, second):
check_save_load(tensor1, tensor2)
else:
check_save_load(first, second)
# overwrite from test_modeling_common
def _mock_init_weights(self, module):
if hasattr(module, "weight") and module.weight is not None:
module.weight.data.fill_(3)
if hasattr(module, "weight_g") and module.weight_g is not None:
module.weight_g.data.fill_(3)
if hasattr(module, "weight_v") and module.weight_v is not None:
module.weight_v.data.fill_(3)
if hasattr(module, "bias") and module.bias is not None:
module.bias.data.fill_(3)
@require_torch
@slow
class VitsModelIntegrationTests(unittest.TestCase):
def test_forward(self):
# GPU gives different results than CPU
torch_device = "cpu"
model = VitsModel.from_pretrained("facebook/mms-tts-eng")
model.to(torch_device)
tokenizer = VitsTokenizer.from_pretrained("facebook/mms-tts-eng")
set_seed(555) # make deterministic
input_text = "Mister quilter is the apostle of the middle classes and we are glad to welcome his gospel!"
input_ids = tokenizer(input_text, return_tensors="pt").input_ids.to(torch_device)
with torch.no_grad():
outputs = model(input_ids)
self.assertEqual(outputs.waveform.shape, (1, 87040))
# fmt: off
        EXPECTED_WAVEFORM = torch.tensor(
[
-0.0042, 0.0176, 0.0354, 0.0504, 0.0621, 0.0777, 0.0980, 0.1224,
0.1475, 0.1679, 0.1817, 0.1832, 0.1713, 0.1542, 0.1384, 0.1256,
0.1147, 0.1066, 0.1026, 0.0958, 0.0823, 0.0610, 0.0340, 0.0022,
-0.0337, -0.0677, -0.0969, -0.1178, -0.1311, -0.1363
]
)
# fmt: on
        self.assertTrue(torch.allclose(outputs.waveform[0, 10000:10030].cpu(), EXPECTED_WAVEFORM, atol=1e-4))
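
As a hedged follow-up to the integration test above: a sketch of writing the generated waveform to disk. It assumes scipy is installed and that the checkpoint's config exposes the sampling_rate attribute this PR whitelists (the MMS-TTS checkpoints use 16 kHz audio):

import scipy.io.wavfile

# outputs.waveform has shape (batch, samples); scipy expects a 1-D array
audio = outputs.waveform[0].cpu().numpy()
scipy.io.wavfile.write("vits_output.wav", rate=model.config.sampling_rate, data=audio)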
# coding=utf-8
# Copyright 2023 The HuggingFace Team. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Tests for the VITS tokenizer."""
import json
import os
import shutil
import tempfile
import unittest
from transformers import VitsTokenizer
from transformers.models.vits.tokenization_vits import VOCAB_FILES_NAMES
from transformers.testing_utils import slow
from ...test_tokenization_common import TokenizerTesterMixin
class VitsTokenizerTest(TokenizerTesterMixin, unittest.TestCase):
tokenizer_class = VitsTokenizer
test_rust_tokenizer = False
def setUp(self):
super().setUp()
vocab = (
"k ' z y u d h e s w – 3 c p - 1 j m i X f l o 0 b r a 4 2 n _ x v t q 5 6 g ț ţ < > | <pad> <unk>".split(
" "
)
)
vocab_tokens = dict(zip(vocab, range(len(vocab))))
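        # "X" stands in for the space character in the vocab string above; remap
        # it so " " gets X's id, then drop the placeholder entry.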
vocab_tokens[" "] = vocab_tokens["X"]
del vocab_tokens["X"]
self.special_tokens_map = {"pad_token": "<pad>", "unk_token": "<unk>"}
self.tmpdirname = tempfile.mkdtemp()
self.vocab_file = os.path.join(self.tmpdirname, VOCAB_FILES_NAMES["vocab_file"])
with open(self.vocab_file, "w", encoding="utf-8") as fp:
fp.write(json.dumps(vocab_tokens) + "\n")
def get_tokenizer(self, **kwargs):
kwargs.update(self.special_tokens_map)
kwargs["phonemize"] = False
kwargs["normalize"] = False
return VitsTokenizer.from_pretrained(self.tmpdirname, **kwargs)
def get_clean_sequence(self, tokenizer, with_prefix_space=False, max_length=20, min_length=5):
txt = "beyonce lives in los angeles"
ids = tokenizer.encode(txt, add_special_tokens=False)
return txt, ids
@unittest.skip("Adding multicharacter tokens does not work with the VITS tokenizer")
def test_add_tokens_tokenizer(self):
pass
@unittest.skip("Adding multicharacter tokens does not work with the VITS tokenizer")
def test_encode_decode_with_spaces(self):
pass
@unittest.skip("The VITS tokenizer does not support `is_split_into_words`")
def test_pretokenized_inputs(self):
pass
def test_save_and_load_tokenizer(self):
# safety check on max_len default value so we are sure the test works
tokenizers = self.get_tokenizers()
for tokenizer in tokenizers:
with self.subTest(f"{tokenizer.__class__.__name__}"):
self.assertNotEqual(tokenizer.model_max_length, 42)
# Now let's start the test
tokenizers = self.get_tokenizers()
for tokenizer in tokenizers:
with self.subTest(f"{tokenizer.__class__.__name__}"):
# Isolate this from the other tests because we save additional tokens/etc
tmpdirname = tempfile.mkdtemp()
sample_text = " He is very happy, UNwant\u00E9d,running"
before_tokens = tokenizer.encode(sample_text, add_special_tokens=False)
before_vocab = tokenizer.get_vocab()
tokenizer.save_pretrained(tmpdirname)
after_tokenizer = tokenizer.__class__.from_pretrained(tmpdirname)
after_tokens = after_tokenizer.encode(sample_text, add_special_tokens=False)
after_vocab = after_tokenizer.get_vocab()
self.assertListEqual(before_tokens, after_tokens)
self.assertDictEqual(before_vocab, after_vocab)
shutil.rmtree(tmpdirname)
@unittest.skip("Adding multicharacter tokens does not work the VITS tokenizer")
def test_special_tokens_initialization_with_non_empty_additional_special_tokens(self):
pass
def test_ron_normalization(self):
tokenizer = self.get_tokenizer()
tokenizer.language = "ron"
sequences = ["vițs"]
normalized_sequences = ["viţs"]
encoded_ids = tokenizer(sequences, normalize=True)["input_ids"]
decoded_sequences = tokenizer.batch_decode(encoded_ids)
self.assertEqual(normalized_sequences, decoded_sequences)
def test_normalization(self):
tokenizer = self.get_tokenizer()
sequences = ["VITS; is a model for t-t-s!"]
normalized_sequences = ["vits is a model for t-t-s"]
unnormalized_sequences = [
"<unk><unk><unk><unk><unk> is a model for t-t-s<unk>"
        ]  # can't handle upper-case or certain punctuation
encoded_normalized_ids = tokenizer(sequences, normalize=True)
encoded_unnormalized_ids = tokenizer(sequences, normalize=False)
decoded_normalized_sequences = [
tokenizer.decode(seq, skip_special_tokens=False) for seq in encoded_normalized_ids["input_ids"]
]
decoded_unnormalized_sequences = [
tokenizer.decode(seq, skip_special_tokens=False) for seq in encoded_unnormalized_ids["input_ids"]
]
self.assertEqual(decoded_normalized_sequences, normalized_sequences)
self.assertEqual(decoded_unnormalized_sequences, unnormalized_sequences)
@slow
def test_tokenizer_integration(self):
sequences = [
"BERT is designed to pre-train deep bidirectional representations from unlabeled text by jointly "
"conditioning on both left and right context in all layers.",
"The quick brown fox! Jumps over the lazy dog...",
"We use k as our padding token",
]
normalized_sequences = [
"bert is designed to pre-train deep bidirectional representations from unlabeled text by jointly "
"conditioning on both left and right context in all layers",
"the quick brown fox jumps over the lazy dog",
"we use k as our padding token",
]
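        # Every other id below is 0: the tokenizer intersperses a blank token
        # (id 0) between characters, mirroring the add_blank behaviour of VITS.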
# fmt: off
expected_encoding = {
'input_ids': [
[0, 24, 0, 7, 0, 25, 0, 33, 0, 19, 0, 18, 0, 8, 0, 19, 0, 5, 0, 7, 0, 8, 0, 18, 0, 37, 0, 29, 0, 7, 0, 5, 0, 19, 0, 33, 0, 22, 0, 19, 0, 13, 0, 25, 0, 7, 0, 14, 0, 33, 0, 25, 0, 26, 0, 18, 0, 29, 0, 19, 0, 5, 0, 7, 0, 7, 0, 13, 0, 19, 0, 24, 0, 18, 0, 5, 0, 18, 0, 25, 0, 7, 0, 12, 0, 33, 0, 18, 0, 22, 0, 29, 0, 26, 0, 21, 0, 19, 0, 25, 0, 7, 0, 13, 0, 25, 0, 7, 0, 8, 0, 7, 0, 29, 0, 33, 0, 26, 0, 33, 0, 18, 0, 22, 0, 29, 0, 8, 0, 19, 0, 20, 0, 25, 0, 22, 0, 17, 0, 19, 0, 4, 0, 29, 0, 21, 0, 26, 0, 24, 0, 7, 0, 21, 0, 7, 0, 5, 0, 19, 0, 33, 0, 7, 0, 31, 0, 33, 0, 19, 0, 24, 0, 3, 0, 19, 0, 16, 0, 22, 0, 18, 0, 29, 0, 33, 0, 21, 0, 3, 0, 19, 0, 12, 0, 22, 0, 29, 0, 5, 0, 18, 0, 33, 0, 18, 0, 22, 0, 29, 0, 18, 0, 29, 0, 37, 0, 19, 0, 22, 0, 29, 0, 19, 0, 24, 0, 22, 0, 33, 0, 6, 0, 19, 0, 21, 0, 7, 0, 20, 0, 33, 0, 19, 0, 26, 0, 29, 0, 5, 0, 19, 0, 25, 0, 18, 0, 37, 0, 6, 0, 33, 0, 19, 0, 12, 0, 22, 0, 29, 0, 33, 0, 7, 0, 31, 0, 33, 0, 19, 0, 18, 0, 29, 0, 19, 0, 26, 0, 21, 0, 21, 0, 19, 0, 21, 0, 26, 0, 3, 0, 7, 0, 25, 0, 8, 0],
[0, 33, 0, 6, 0, 7, 0, 19, 0, 34, 0, 4, 0, 18, 0, 12, 0, 0, 0, 19, 0, 24, 0, 25, 0, 22, 0, 9, 0, 29, 0, 19, 0, 20, 0, 22, 0, 31, 0, 19, 0, 16, 0, 4, 0, 17, 0, 13, 0, 8, 0, 19, 0, 22, 0, 32, 0, 7, 0, 25, 0, 19, 0, 33, 0, 6, 0, 7, 0, 19, 0, 21, 0, 26, 0, 2, 0, 3, 0, 19, 0, 5, 0, 22, 0, 37, 0, 38, 38, 38, 38, 38, 38, 38, 38, 38, 38, 38, 38, 38, 38, 38, 38, 38, 38, 38, 38, 38, 38, 38, 38, 38, 38, 38, 38, 38, 38, 38, 38, 38, 38, 38, 38, 38, 38, 38, 38, 38, 38, 38, 38, 38, 38, 38, 38, 38, 38, 38, 38, 38, 38, 38, 38, 38, 38, 38, 38, 38, 38, 38, 38, 38, 38, 38, 38, 38, 38, 38, 38, 38, 38, 38, 38, 38, 38, 38, 38, 38, 38, 38, 38, 38, 38, 38, 38, 38, 38, 38, 38, 38, 38, 38, 38, 38, 38, 38, 38, 38, 38, 38, 38, 38, 38, 38, 38, 38, 38, 38, 38, 38, 38, 38, 38, 38, 38, 38, 38, 38, 38, 38, 38, 38, 38, 38, 38, 38, 38, 38, 38, 38, 38, 38, 38, 38, 38, 38, 38, 38, 38, 38, 38, 38, 38, 38, 38, 38, 38, 38, 38, 38, 38, 38, 38, 38, 38, 38, 38, 38, 38, 38, 38, 38, 38, 38, 38, 38, 38, 38, 38, 38, 38, 38, 38, 38, 38, 38, 38, 38, 38, 38, 38, 38, 38, 38, 38, 38, 38, 38, 38, 38, 38, 38, 38, 38, 38, 38, 38, 38, 38, 38, 38, 38, 38, 38, 38, 38, 38, 38, 38, 38, 38, 38, 38, 38, 38, 38, 38],
[0, 9, 0, 7, 0, 19, 0, 4, 0, 8, 0, 7, 0, 19, 0, 0, 0, 19, 0, 26, 0, 8, 0, 19, 0, 22, 0, 4, 0, 25, 0, 19, 0, 13, 0, 26, 0, 5, 0, 5, 0, 18, 0, 29, 0, 37, 0, 19, 0, 33, 0, 22, 0, 0, 0, 7, 0, 29, 0, 38, 38, 38, 38, 38, 38, 38, 38, 38, 38, 38, 38, 38, 38, 38, 38, 38, 38, 38, 38, 38, 38, 38, 38, 38, 38, 38, 38, 38, 38, 38, 38, 38, 38, 38, 38, 38, 38, 38, 38, 38, 38, 38, 38, 38, 38, 38, 38, 38, 38, 38, 38, 38, 38, 38, 38, 38, 38, 38, 38, 38, 38, 38, 38, 38, 38, 38, 38, 38, 38, 38, 38, 38, 38, 38, 38, 38, 38, 38, 38, 38, 38, 38, 38, 38, 38, 38, 38, 38, 38, 38, 38, 38, 38, 38, 38, 38, 38, 38, 38, 38, 38, 38, 38, 38, 38, 38, 38, 38, 38, 38, 38, 38, 38, 38, 38, 38, 38, 38, 38, 38, 38, 38, 38, 38, 38, 38, 38, 38, 38, 38, 38, 38, 38, 38, 38, 38, 38, 38, 38, 38, 38, 38, 38, 38, 38, 38, 38, 38, 38, 38, 38, 38, 38, 38, 38, 38, 38, 38, 38, 38, 38, 38, 38, 38, 38, 38, 38, 38, 38, 38, 38, 38, 38, 38, 38, 38, 38, 38, 38, 38, 38, 38, 38, 38, 38, 38, 38, 38, 38, 38, 38, 38, 38, 38, 38, 38, 38, 38, 38, 38, 38, 38, 38, 38, 38, 38, 38, 38, 38, 38, 38, 38, 38, 38, 38, 38, 38, 38, 38, 38, 38, 38, 38, 38, 38, 38, 38, 38, 38, 38, 38, 38, 38, 38, 38, 38, 38, 38, 38, 38, 38, 38, 38, 38, 38, 38, 38],
],
'attention_mask': [
[1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1],
[1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
[1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
]
}
# fmt: on
tokenizer_classes = [self.tokenizer_class]
if self.test_rust_tokenizer:
tokenizer_classes.append(self.rust_tokenizer_class)
for tokenizer_class in tokenizer_classes:
tokenizer = tokenizer_class.from_pretrained(
"facebook/mms-tts-eng",
revision="d188a254c84ae6cfd24deb7a8f5c0c1d349d7d9f", # to pin the tokenizer version
)
encoding = tokenizer(sequences, padding=True, normalize=True)
decoded_sequences = [tokenizer.decode(seq, skip_special_tokens=True) for seq in encoding["input_ids"]]
encoding_data = encoding.data
self.assertDictEqual(encoding_data, expected_encoding)
for expected, decoded in zip(normalized_sequences, decoded_sequences):
self.assertEqual(expected, decoded)
...@@ -190,6 +190,7 @@ def check_attribute_being_used(config_class, attributes, default_value, source_s
        "use_cache",
        "out_features",
        "out_indices",
        "sampling_rate",
    ]
    attributes_used_in_generation = ["encoder_no_repeat_ngram_size"]
...
docs/source/en/autoclass_tutorial.md
docs/source/en/model_doc/byt5.md
docs/source/en/model_doc/donut.md
docs/source/en/model_doc/encoder-decoder.md
docs/source/en/model_doc/markuplm.md
docs/source/en/model_doc/speech_to_text.md
docs/source/en/model_doc/switch_transformers.md
docs/source/en/model_doc/t5.md
docs/source/en/model_doc/t5v1.1.md
docs/source/en/model_doc/tapex.md
docs/source/en/pipeline_tutorial.md
docs/source/en/quicktour.md
docs/source/en/task_summary.md
docs/source/es/quicktour.md
src/transformers/generation/configuration_utils.py
src/transformers/generation/tf_utils.py
src/transformers/generation/utils.py
src/transformers/models/albert/configuration_albert.py
src/transformers/models/albert/modeling_albert.py
src/transformers/models/albert/modeling_tf_albert.py
src/transformers/models/albert/tokenization_albert.py
src/transformers/models/albert/tokenization_albert_fast.py
src/transformers/models/align/processing_align.py
src/transformers/models/altclip/processing_altclip.py
src/transformers/models/audio_spectrogram_transformer/feature_extraction_audio_spectrogram_transformer.py
src/transformers/models/audio_spectrogram_transformer/modeling_audio_spectrogram_transformer.py
src/transformers/models/auto/feature_extraction_auto.py
src/transformers/models/auto/image_processing_auto.py
src/transformers/models/auto/processing_auto.py
src/transformers/models/auto/tokenization_auto.py
src/transformers/models/bark/configuration_bark.py
src/transformers/models/bark/modeling_bark.py
src/transformers/models/bark/processing_bark.py
src/transformers/models/bart/configuration_bart.py
src/transformers/models/bart/modeling_bart.py
src/transformers/models/bart/tokenization_bart.py
src/transformers/models/bart/tokenization_bart_fast.py
src/transformers/models/barthez/tokenization_barthez.py
src/transformers/models/barthez/tokenization_barthez_fast.py
src/transformers/models/bartpho/tokenization_bartpho.py
src/transformers/models/beit/configuration_beit.py
src/transformers/models/beit/feature_extraction_beit.py
src/transformers/models/beit/image_processing_beit.py
src/transformers/models/beit/modeling_beit.py
src/transformers/models/bert/configuration_bert.py
src/transformers/models/bert/modeling_bert.py
src/transformers/models/bert/modeling_tf_bert.py
src/transformers/models/bert/tokenization_bert.py
src/transformers/models/bert/tokenization_bert_fast.py
src/transformers/models/bert/tokenization_bert_tf.py
src/transformers/models/bert_generation/configuration_bert_generation.py
src/transformers/models/bert_generation/tokenization_bert_generation.py
src/transformers/models/bert_japanese/tokenization_bert_japanese.py
src/transformers/models/bertweet/tokenization_bertweet.py
src/transformers/models/big_bird/configuration_big_bird.py
src/transformers/models/big_bird/modeling_big_bird.py
src/transformers/models/big_bird/tokenization_big_bird.py
src/transformers/models/big_bird/tokenization_big_bird_fast.py
src/transformers/models/bigbird_pegasus/configuration_bigbird_pegasus.py
src/transformers/models/bigbird_pegasus/modeling_bigbird_pegasus.py
src/transformers/models/biogpt/tokenization_biogpt.py
src/transformers/models/bit/image_processing_bit.py
src/transformers/models/blenderbot/configuration_blenderbot.py
src/transformers/models/blenderbot/modeling_blenderbot.py
src/transformers/models/blenderbot/tokenization_blenderbot.py
src/transformers/models/blenderbot/tokenization_blenderbot_fast.py
src/transformers/models/blenderbot_small/configuration_blenderbot_small.py
src/transformers/models/blenderbot_small/modeling_blenderbot_small.py
src/transformers/models/blenderbot_small/tokenization_blenderbot_small.py
src/transformers/models/blenderbot_small/tokenization_blenderbot_small_fast.py
src/transformers/models/blip/image_processing_blip.py
src/transformers/models/blip/modeling_blip.py
src/transformers/models/blip/modeling_tf_blip.py
src/transformers/models/blip/processing_blip.py
src/transformers/models/blip_2/processing_blip_2.py
src/transformers/models/bloom/configuration_bloom.py
src/transformers/models/bloom/tokenization_bloom_fast.py
src/transformers/models/bridgetower/image_processing_bridgetower.py
src/transformers/models/bridgetower/processing_bridgetower.py
src/transformers/models/byt5/tokenization_byt5.py
src/transformers/models/camembert/configuration_camembert.py
src/transformers/models/camembert/tokenization_camembert.py
src/transformers/models/camembert/tokenization_camembert_fast.py
src/transformers/models/canine/configuration_canine.py
src/transformers/models/canine/modeling_canine.py
src/transformers/models/canine/tokenization_canine.py
src/transformers/models/chinese_clip/feature_extraction_chinese_clip.py
src/transformers/models/chinese_clip/image_processing_chinese_clip.py
src/transformers/models/chinese_clip/processing_chinese_clip.py
src/transformers/models/clap/configuration_clap.py
src/transformers/models/clap/feature_extraction_clap.py
src/transformers/models/clap/modeling_clap.py
src/transformers/models/clap/processing_clap.py
src/transformers/models/clip/configuration_clip.py
src/transformers/models/clip/feature_extraction_clip.py
src/transformers/models/clip/image_processing_clip.py
src/transformers/models/clip/processing_clip.py
src/transformers/models/clip/tokenization_clip.py
src/transformers/models/clip/tokenization_clip_fast.py
src/transformers/models/clipseg/modeling_clipseg.py
src/transformers/models/clipseg/processing_clipseg.py
src/transformers/models/codegen/configuration_codegen.py
src/transformers/models/codegen/tokenization_codegen.py
src/transformers/models/codegen/tokenization_codegen_fast.py
src/transformers/models/conditional_detr/configuration_conditional_detr.py
src/transformers/models/conditional_detr/feature_extraction_conditional_detr.py
src/transformers/models/conditional_detr/image_processing_conditional_detr.py
src/transformers/models/conditional_detr/modeling_conditional_detr.py
src/transformers/models/convbert/configuration_convbert.py
src/transformers/models/convbert/tokenization_convbert.py
src/transformers/models/convbert/tokenization_convbert_fast.py
src/transformers/models/convnext/configuration_convnext.py
src/transformers/models/convnext/feature_extraction_convnext.py
src/transformers/models/convnext/image_processing_convnext.py
src/transformers/models/convnext/modeling_convnext.py
src/transformers/models/cpm/tokenization_cpm.py
src/transformers/models/cpm/tokenization_cpm_fast.py
src/transformers/models/ctrl/configuration_ctrl.py
src/transformers/models/ctrl/modeling_ctrl.py
src/transformers/models/ctrl/tokenization_ctrl.py
src/transformers/models/cvt/configuration_cvt.py
src/transformers/models/cvt/modeling_cvt.py
src/transformers/models/data2vec/configuration_data2vec_audio.py
src/transformers/models/data2vec/configuration_data2vec_text.py
src/transformers/models/data2vec/configuration_data2vec_vision.py
src/transformers/models/data2vec/modeling_data2vec_audio.py
src/transformers/models/data2vec/modeling_data2vec_vision.py
src/transformers/models/deberta/configuration_deberta.py
src/transformers/models/deberta/modeling_deberta.py
src/transformers/models/deberta/tokenization_deberta.py
src/transformers/models/deberta/tokenization_deberta_fast.py
src/transformers/models/deberta_v2/configuration_deberta_v2.py
src/transformers/models/deberta_v2/modeling_deberta_v2.py
src/transformers/models/deberta_v2/tokenization_deberta_v2.py
src/transformers/models/deberta_v2/tokenization_deberta_v2_fast.py
src/transformers/models/decision_transformer/configuration_decision_transformer.py
src/transformers/models/deformable_detr/configuration_deformable_detr.py
src/transformers/models/deformable_detr/feature_extraction_deformable_detr.py
src/transformers/models/deformable_detr/image_processing_deformable_detr.py
src/transformers/models/deformable_detr/modeling_deformable_detr.py
src/transformers/models/deit/configuration_deit.py
src/transformers/models/deit/feature_extraction_deit.py
src/transformers/models/deit/image_processing_deit.py
src/transformers/models/deit/modeling_deit.py
src/transformers/models/deit/modeling_tf_deit.py
src/transformers/models/deta/configuration_deta.py
src/transformers/models/deta/image_processing_deta.py
src/transformers/models/deta/modeling_deta.py
src/transformers/models/detr/configuration_detr.py
src/transformers/models/detr/feature_extraction_detr.py
src/transformers/models/detr/image_processing_detr.py
src/transformers/models/detr/modeling_detr.py
src/transformers/models/dinat/configuration_dinat.py
src/transformers/models/dinat/modeling_dinat.py
src/transformers/models/distilbert/configuration_distilbert.py
src/transformers/models/distilbert/tokenization_distilbert.py
src/transformers/models/distilbert/tokenization_distilbert_fast.py
src/transformers/models/donut/feature_extraction_donut.py
src/transformers/models/donut/image_processing_donut.py
src/transformers/models/donut/processing_donut.py
src/transformers/models/dpr/configuration_dpr.py
src/transformers/models/dpr/tokenization_dpr.py
src/transformers/models/dpr/tokenization_dpr_fast.py
src/transformers/models/dpt/feature_extraction_dpt.py
src/transformers/models/dpt/image_processing_dpt.py
src/transformers/models/dpt/modeling_dpt.py
src/transformers/models/efficientformer/image_processing_efficientformer.py
src/transformers/models/efficientformer/modeling_tf_efficientformer.py
src/transformers/models/efficientnet/image_processing_efficientnet.py
src/transformers/models/electra/configuration_electra.py
src/transformers/models/electra/modeling_electra.py
src/transformers/models/electra/modeling_tf_electra.py
src/transformers/models/electra/tokenization_electra.py
src/transformers/models/electra/tokenization_electra_fast.py
src/transformers/models/encodec/feature_extraction_encodec.py
src/transformers/models/encodec/modeling_encodec.py
src/transformers/models/ernie/configuration_ernie.py
src/transformers/models/ernie_m/configuration_ernie_m.py
src/transformers/models/ernie_m/modeling_ernie_m.py
src/transformers/models/ernie_m/tokenization_ernie_m.py
src/transformers/models/esm/tokenization_esm.py
src/transformers/models/flaubert/tokenization_flaubert.py
src/transformers/models/flava/configuration_flava.py
src/transformers/models/flava/feature_extraction_flava.py
src/transformers/models/flava/image_processing_flava.py
src/transformers/models/flava/processing_flava.py
src/transformers/models/fnet/configuration_fnet.py
src/transformers/models/fnet/tokenization_fnet.py
src/transformers/models/fnet/tokenization_fnet_fast.py
src/transformers/models/fsmt/configuration_fsmt.py
src/transformers/models/fsmt/tokenization_fsmt.py
src/transformers/models/funnel/tokenization_funnel.py
src/transformers/models/funnel/tokenization_funnel_fast.py
src/transformers/models/git/modeling_git.py
src/transformers/models/git/processing_git.py
src/transformers/models/glpn/feature_extraction_glpn.py
src/transformers/models/glpn/image_processing_glpn.py
src/transformers/models/glpn/modeling_glpn.py
src/transformers/models/gpt2/configuration_gpt2.py
src/transformers/models/gpt2/modeling_gpt2.py
src/transformers/models/gpt2/tokenization_gpt2.py
src/transformers/models/gpt2/tokenization_gpt2_fast.py
src/transformers/models/gpt2/tokenization_gpt2_tf.py
src/transformers/models/gpt_neo/configuration_gpt_neo.py
src/transformers/models/gpt_neox/configuration_gpt_neox.py
src/transformers/models/gpt_neox/tokenization_gpt_neox_fast.py
src/transformers/models/gpt_neox_japanese/configuration_gpt_neox_japanese.py
src/transformers/models/gpt_neox_japanese/tokenization_gpt_neox_japanese.py
src/transformers/models/gpt_sw3/tokenization_gpt_sw3.py
src/transformers/models/gptj/modeling_gptj.py
src/transformers/models/gptsan_japanese/tokenization_gptsan_japanese.py
src/transformers/models/groupvit/modeling_groupvit.py
src/transformers/models/groupvit/modeling_tf_groupvit.py
src/transformers/models/herbert/tokenization_herbert.py
src/transformers/models/herbert/tokenization_herbert_fast.py
src/transformers/models/hubert/modeling_hubert.py
src/transformers/models/imagegpt/configuration_imagegpt.py
src/transformers/models/imagegpt/feature_extraction_imagegpt.py
src/transformers/models/imagegpt/image_processing_imagegpt.py
src/transformers/models/imagegpt/modeling_imagegpt.py
src/transformers/models/jukebox/tokenization_jukebox.py
src/transformers/models/layoutlm/configuration_layoutlm.py
src/transformers/models/layoutlm/modeling_layoutlm.py
src/transformers/models/layoutlm/modeling_tf_layoutlm.py
src/transformers/models/layoutlm/tokenization_layoutlm.py
src/transformers/models/layoutlm/tokenization_layoutlm_fast.py
src/transformers/models/layoutlmv2/configuration_layoutlmv2.py
src/transformers/models/layoutlmv2/feature_extraction_layoutlmv2.py
src/transformers/models/layoutlmv2/image_processing_layoutlmv2.py
src/transformers/models/layoutlmv2/modeling_layoutlmv2.py
src/transformers/models/layoutlmv2/processing_layoutlmv2.py
src/transformers/models/layoutlmv2/tokenization_layoutlmv2.py
src/transformers/models/layoutlmv2/tokenization_layoutlmv2_fast.py
src/transformers/models/layoutlmv3/configuration_layoutlmv3.py
src/transformers/models/layoutlmv3/feature_extraction_layoutlmv3.py
src/transformers/models/layoutlmv3/image_processing_layoutlmv3.py
src/transformers/models/layoutlmv3/modeling_layoutlmv3.py
src/transformers/models/layoutlmv3/modeling_tf_layoutlmv3.py
src/transformers/models/layoutlmv3/processing_layoutlmv3.py
src/transformers/models/layoutlmv3/tokenization_layoutlmv3.py
src/transformers/models/layoutlmv3/tokenization_layoutlmv3_fast.py
src/transformers/models/layoutxlm/processing_layoutxlm.py
src/transformers/models/layoutxlm/tokenization_layoutxlm.py
src/transformers/models/layoutxlm/tokenization_layoutxlm_fast.py
src/transformers/models/led/tokenization_led.py
src/transformers/models/led/tokenization_led_fast.py
src/transformers/models/levit/configuration_levit.py
src/transformers/models/levit/feature_extraction_levit.py
src/transformers/models/levit/image_processing_levit.py
src/transformers/models/lilt/modeling_lilt.py
src/transformers/models/llama/tokenization_llama.py
src/transformers/models/longformer/modeling_longformer.py
src/transformers/models/longformer/modeling_tf_longformer.py
src/transformers/models/longformer/tokenization_longformer.py
src/transformers/models/longformer/tokenization_longformer_fast.py
src/transformers/models/longt5/modeling_longt5.py
src/transformers/models/luke/tokenization_luke.py
src/transformers/models/lxmert/tokenization_lxmert.py
src/transformers/models/lxmert/tokenization_lxmert_fast.py
src/transformers/models/m2m_100/configuration_m2m_100.py
src/transformers/models/m2m_100/tokenization_m2m_100.py
src/transformers/models/marian/modeling_marian.py
src/transformers/models/marian/tokenization_marian.py
src/transformers/models/markuplm/modeling_markuplm.py
src/transformers/models/markuplm/processing_markuplm.py
src/transformers/models/markuplm/tokenization_markuplm.py
src/transformers/models/markuplm/tokenization_markuplm_fast.py
src/transformers/models/mask2former/configuration_mask2former.py
src/transformers/models/mask2former/image_processing_mask2former.py
src/transformers/models/mask2former/modeling_mask2former.py
src/transformers/models/maskformer/configuration_maskformer.py
src/transformers/models/maskformer/feature_extraction_maskformer.py
src/transformers/models/maskformer/image_processing_maskformer.py
src/transformers/models/maskformer/modeling_maskformer.py
src/transformers/models/mbart/configuration_mbart.py
src/transformers/models/mbart/modeling_mbart.py
src/transformers/models/mbart/modeling_tf_mbart.py
src/transformers/models/mbart/tokenization_mbart.py
src/transformers/models/mbart/tokenization_mbart_fast.py
src/transformers/models/mbart50/tokenization_mbart50.py
src/transformers/models/mbart50/tokenization_mbart50_fast.py
src/transformers/models/megatron_bert/configuration_megatron_bert.py
src/transformers/models/mgp_str/processing_mgp_str.py
src/transformers/models/mgp_str/tokenization_mgp_str.py
src/transformers/models/mluke/tokenization_mluke.py
src/transformers/models/mobilebert/configuration_mobilebert.py
src/transformers/models/mobilebert/modeling_mobilebert.py
src/transformers/models/mobilebert/modeling_tf_mobilebert.py
src/transformers/models/mobilebert/tokenization_mobilebert.py
src/transformers/models/mobilebert/tokenization_mobilebert_fast.py
src/transformers/models/mobilenet_v1/feature_extraction_mobilenet_v1.py
src/transformers/models/mobilenet_v1/image_processing_mobilenet_v1.py
src/transformers/models/mobilenet_v1/modeling_mobilenet_v1.py
src/transformers/models/mobilenet_v2/feature_extraction_mobilenet_v2.py
src/transformers/models/mobilenet_v2/image_processing_mobilenet_v2.py
src/transformers/models/mobilenet_v2/modeling_mobilenet_v2.py
src/transformers/models/mobilevit/feature_extraction_mobilevit.py
src/transformers/models/mobilevit/image_processing_mobilevit.py
src/transformers/models/mobilevit/modeling_mobilevit.py
src/transformers/models/mobilevit/modeling_tf_mobilevit.py
src/transformers/models/mobilevitv2/configuration_mobilevitv2.py
src/transformers/models/mobilevitv2/modeling_mobilevitv2.py
src/transformers/models/mpnet/tokenization_mpnet.py
src/transformers/models/mpnet/tokenization_mpnet_fast.py
src/transformers/models/musicgen/configuration_musicgen.py
src/transformers/models/musicgen/modeling_musicgen.py
src/transformers/models/musicgen/processing_musicgen.py
src/transformers/models/mvp/configuration_mvp.py
src/transformers/models/mvp/tokenization_mvp.py
src/transformers/models/mvp/tokenization_mvp_fast.py
src/transformers/models/nat/configuration_nat.py
src/transformers/models/nat/modeling_nat.py
src/transformers/models/nezha/configuration_nezha.py
src/transformers/models/nllb/tokenization_nllb.py
src/transformers/models/nllb/tokenization_nllb_fast.py
src/transformers/models/oneformer/configuration_oneformer.py
src/transformers/models/oneformer/image_processing_oneformer.py
src/transformers/models/oneformer/modeling_oneformer.py
src/transformers/models/oneformer/processing_oneformer.py
src/transformers/models/openai/configuration_openai.py
src/transformers/models/openai/tokenization_openai.py
src/transformers/models/openai/tokenization_openai_fast.py
src/transformers/models/opt/configuration_opt.py
src/transformers/models/opt/modeling_opt.py
src/transformers/models/opt/modeling_tf_opt.py
src/transformers/models/owlvit/feature_extraction_owlvit.py
src/transformers/models/owlvit/image_processing_owlvit.py
src/transformers/models/owlvit/modeling_owlvit.py
src/transformers/models/owlvit/processing_owlvit.py
src/transformers/models/pegasus/configuration_pegasus.py
src/transformers/models/pegasus/modeling_pegasus.py
src/transformers/models/pegasus/tokenization_pegasus.py
src/transformers/models/pegasus/tokenization_pegasus_fast.py
src/transformers/models/pegasus_x/configuration_pegasus_x.py
src/transformers/models/perceiver/feature_extraction_perceiver.py
src/transformers/models/perceiver/image_processing_perceiver.py
src/transformers/models/perceiver/modeling_perceiver.py
src/transformers/models/perceiver/tokenization_perceiver.py
src/transformers/models/phobert/tokenization_phobert.py
src/transformers/models/pix2struct/modeling_pix2struct.py
src/transformers/models/plbart/configuration_plbart.py
src/transformers/models/plbart/modeling_plbart.py
src/transformers/models/plbart/tokenization_plbart.py
src/transformers/models/poolformer/configuration_poolformer.py
src/transformers/models/poolformer/feature_extraction_poolformer.py
src/transformers/models/poolformer/image_processing_poolformer.py
src/transformers/models/poolformer/modeling_poolformer.py
src/transformers/models/prophetnet/tokenization_prophetnet.py
src/transformers/models/rag/tokenization_rag.py
src/transformers/models/realm/configuration_realm.py
src/transformers/models/realm/tokenization_realm.py
src/transformers/models/realm/tokenization_realm_fast.py
src/transformers/models/reformer/configuration_reformer.py
src/transformers/models/reformer/modeling_reformer.py
src/transformers/models/reformer/tokenization_reformer.py
src/transformers/models/reformer/tokenization_reformer_fast.py
src/transformers/models/regnet/modeling_regnet.py
src/transformers/models/regnet/modeling_tf_regnet.py
src/transformers/models/rembert/tokenization_rembert.py
src/transformers/models/rembert/tokenization_rembert_fast.py
src/transformers/models/resnet/configuration_resnet.py
src/transformers/models/resnet/modeling_resnet.py
src/transformers/models/resnet/modeling_tf_resnet.py
src/transformers/models/roberta/configuration_roberta.py
src/transformers/models/roberta/modeling_roberta.py
src/transformers/models/roberta/modeling_tf_roberta.py
src/transformers/models/roberta/tokenization_roberta.py
src/transformers/models/roberta/tokenization_roberta_fast.py
src/transformers/models/roberta_prelayernorm/configuration_roberta_prelayernorm.py
src/transformers/models/roberta_prelayernorm/modeling_roberta_prelayernorm.py
src/transformers/models/roberta_prelayernorm/modeling_tf_roberta_prelayernorm.py
src/transformers/models/roc_bert/modeling_roc_bert.py
src/transformers/models/roc_bert/tokenization_roc_bert.py
src/transformers/models/roformer/tokenization_roformer.py
src/transformers/models/roformer/tokenization_roformer_fast.py
src/transformers/models/roformer/tokenization_utils.py
src/transformers/models/segformer/feature_extraction_segformer.py
src/transformers/models/segformer/image_processing_segformer.py
src/transformers/models/segformer/modeling_segformer.py
src/transformers/models/segformer/modeling_tf_segformer.py
src/transformers/models/sew/configuration_sew.py
src/transformers/models/sew/modeling_sew.py
src/transformers/models/sew_d/configuration_sew_d.py
src/transformers/models/sew_d/modeling_sew_d.py
src/transformers/models/speech_encoder_decoder/modeling_speech_encoder_decoder.py
src/transformers/models/speech_to_text/configuration_speech_to_text.py
src/transformers/models/speech_to_text/feature_extraction_speech_to_text.py
src/transformers/models/speech_to_text/modeling_speech_to_text.py
src/transformers/models/speech_to_text/processing_speech_to_text.py
src/transformers/models/speech_to_text/tokenization_speech_to_text.py
src/transformers/models/speech_to_text_2/configuration_speech_to_text_2.py
src/transformers/models/speech_to_text_2/modeling_speech_to_text_2.py
src/transformers/models/speech_to_text_2/processing_speech_to_text_2.py
src/transformers/models/speech_to_text_2/tokenization_speech_to_text_2.py
src/transformers/models/speecht5/feature_extraction_speecht5.py
src/transformers/models/speecht5/modeling_speecht5.py
src/transformers/models/speecht5/processing_speecht5.py
src/transformers/models/speecht5/tokenization_speecht5.py
src/transformers/models/splinter/tokenization_splinter.py
src/transformers/models/splinter/tokenization_splinter_fast.py
src/transformers/models/squeezebert/configuration_squeezebert.py
src/transformers/models/squeezebert/tokenization_squeezebert.py
src/transformers/models/squeezebert/tokenization_squeezebert_fast.py
src/transformers/models/swin/configuration_swin.py
src/transformers/models/swin/modeling_swin.py
src/transformers/models/swin2sr/image_processing_swin2sr.py
src/transformers/models/swin2sr/modeling_swin2sr.py
src/transformers/models/swinv2/configuration_swinv2.py
src/transformers/models/t5/tokenization_t5.py
src/transformers/models/t5/tokenization_t5_fast.py
src/transformers/models/table_transformer/modeling_table_transformer.py
src/transformers/models/tapas/tokenization_tapas.py
src/transformers/models/time_series_transformer/configuration_time_series_transformer.py
src/transformers/models/time_series_transformer/modeling_time_series_transformer.py
src/transformers/models/timesformer/configuration_timesformer.py
src/transformers/models/timesformer/modeling_timesformer.py
src/transformers/models/transfo_xl/configuration_transfo_xl.py
src/transformers/models/transfo_xl/tokenization_transfo_xl.py
src/transformers/models/trocr/configuration_trocr.py
src/transformers/models/trocr/modeling_trocr.py
src/transformers/models/trocr/processing_trocr.py
src/transformers/models/tvlt/feature_extraction_tvlt.py
src/transformers/models/tvlt/image_processing_tvlt.py
src/transformers/models/tvlt/processing_tvlt.py
src/transformers/models/unispeech/configuration_unispeech.py
src/transformers/models/unispeech/modeling_unispeech.py
src/transformers/models/unispeech_sat/modeling_unispeech_sat.py
src/transformers/models/upernet/modeling_upernet.py
src/transformers/models/videomae/feature_extraction_videomae.py
src/transformers/models/videomae/image_processing_videomae.py
src/transformers/models/videomae/modeling_videomae.py
src/transformers/models/vilt/feature_extraction_vilt.py
src/transformers/models/vilt/image_processing_vilt.py
src/transformers/models/vilt/modeling_vilt.py
src/transformers/models/vilt/processing_vilt.py
src/transformers/models/vision_encoder_decoder/configuration_vision_encoder_decoder.py
src/transformers/models/vision_encoder_decoder/modeling_vision_encoder_decoder.py
src/transformers/models/vision_text_dual_encoder/configuration_vision_text_dual_encoder.py
src/transformers/models/vision_text_dual_encoder/modeling_tf_vision_text_dual_encoder.py
src/transformers/models/vision_text_dual_encoder/processing_vision_text_dual_encoder.py
src/transformers/models/visual_bert/configuration_visual_bert.py
src/transformers/models/vit/configuration_vit.py
src/transformers/models/vit/feature_extraction_vit.py
src/transformers/models/vit/image_processing_vit.py
src/transformers/models/vit/modeling_tf_vit.py
src/transformers/models/vit/modeling_vit.py
src/transformers/models/vit_hybrid/image_processing_vit_hybrid.py
src/transformers/models/vit_mae/configuration_vit_mae.py
src/transformers/models/vit_mae/modeling_vit_mae.py
src/transformers/models/vit_msn/modeling_vit_msn.py
src/transformers/models/vits/modeling_vits.py
src/transformers/models/vits/tokenization_vits.py
src/transformers/models/wav2vec2/configuration_wav2vec2.py
src/transformers/models/wav2vec2/feature_extraction_wav2vec2.py
src/transformers/models/wav2vec2/modeling_wav2vec2.py
src/transformers/models/wav2vec2/processing_wav2vec2.py
src/transformers/models/wav2vec2/tokenization_wav2vec2.py
src/transformers/models/wav2vec2_conformer/configuration_wav2vec2_conformer.py
src/transformers/models/wav2vec2_conformer/modeling_wav2vec2_conformer.py
src/transformers/models/wav2vec2_phoneme/tokenization_wav2vec2_phoneme.py
src/transformers/models/wav2vec2_with_lm/processing_wav2vec2_with_lm.py
src/transformers/models/wavlm/configuration_wavlm.py
src/transformers/models/wavlm/modeling_wavlm.py
src/transformers/models/whisper/configuration_whisper.py
src/transformers/models/whisper/feature_extraction_whisper.py
src/transformers/models/whisper/modeling_tf_whisper.py
src/transformers/models/whisper/modeling_whisper.py
src/transformers/models/whisper/processing_whisper.py
src/transformers/models/whisper/tokenization_whisper.py
src/transformers/models/whisper/tokenization_whisper_fast.py
src/transformers/models/x_clip/modeling_x_clip.py
src/transformers/models/x_clip/processing_x_clip.py
src/transformers/models/xglm/tokenization_xglm.py
src/transformers/models/xglm/tokenization_xglm_fast.py
src/transformers/models/xlm/configuration_xlm.py
src/transformers/models/xlm/tokenization_xlm.py
src/transformers/models/xlm_prophetnet/tokenization_xlm_prophetnet.py
src/transformers/models/xlm_roberta/configuration_xlm_roberta.py
src/transformers/models/xlm_roberta/tokenization_xlm_roberta.py
src/transformers/models/xlm_roberta/tokenization_xlm_roberta_fast.py
src/transformers/models/xlm_roberta_xl/configuration_xlm_roberta_xl.py
src/transformers/models/xlnet/configuration_xlnet.py
src/transformers/models/xlnet/tokenization_xlnet.py
src/transformers/models/xlnet/tokenization_xlnet_fast.py
src/transformers/models/xmod/configuration_xmod.py
src/transformers/models/xmod/modeling_xmod.py
src/transformers/models/yolos/configuration_yolos.py
src/transformers/models/yolos/feature_extraction_yolos.py
src/transformers/models/yolos/image_processing_yolos.py
src/transformers/models/yolos/modeling_yolos.py
src/transformers/models/yoso/configuration_yoso.py
src/transformers/pipelines/