Unverified Commit 9342c8fb authored by Sylvain Gugger, committed by GitHub

Deprecate models (#24787)



* Deprecate some models

* Fix imports

* Fix inits too

* Remove tests

* Add deprecated banner to documentation

* Remove from init

* Fix auto classes

* Style

* Remove upgrade strategy 1

* Remove site package cache

* Revert this part

* Fix typo...

* Update utils

* Update docs/source/en/model_doc/bort.md
Co-authored-by: Lysandre Debut <lysandre.debut@reseau.eseo.fr>

* Address review comments

* With all files saved

---------
Co-authored-by: Lysandre Debut <lysandre.debut@reseau.eseo.fr>
parent 717dadc6
@@ -16,6 +16,15 @@ rendered properly in your Markdown viewer.

# BORT

+<Tip warning={true}>
+
+This model is in maintenance mode only, so we won't accept any new PRs changing its code.
+
+If you run into any issues running this model, please reinstall the last version that supported this model: v4.30.0.
+You can do so by running the following command: `pip install -U transformers==4.30.0`.
+
+</Tip>
+
## Overview

The BORT model was proposed in [Optimal Subarchitecture Extraction for BERT](https://arxiv.org/abs/2010.10499) by
...
@@ -16,6 +16,15 @@ rendered properly in your Markdown viewer.

# M-CTC-T

+<Tip warning={true}>
+
+This model is in maintenance mode only, so we won't accept any new PRs changing its code.
+
+If you run into any issues running this model, please reinstall the last version that supported this model: v4.30.0.
+You can do so by running the following command: `pip install -U transformers==4.30.0`.
+
+</Tip>
+
## Overview

The M-CTC-T model was proposed in [Pseudo-Labeling For Massively Multilingual Speech Recognition](https://arxiv.org/abs/2111.00161) by Loren Lugosch, Tatiana Likhomanenko, Gabriel Synnaeve, and Ronan Collobert. The model is a 1B-param transformer encoder, with a CTC head over 8065 character labels and a language identification head over 60 language ID labels. It is trained on Common Voice (version 6.1, December 2020 release) and VoxPopuli. After training on Common Voice and VoxPopuli, the model is trained on Common Voice only. The labels are unnormalized character-level transcripts (punctuation and capitalization are not removed). The model takes as input Mel filterbank features from a 16Khz audio signal.
...
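The same maintenance banner lands on all six model pages touched by this commit. Nothing is removed yet, so the classes stay importable; the banner only tells users where to go if that changes. A minimal sketch of the fallback it describes (the `try/except` wrapper is illustrative, not part of this diff):

```python
# Hedged sketch: M-CTC-T stays importable in maintenance mode, so only
# fall back to the pinned release when the import genuinely breaks.
try:
    from transformers import MCTCTForCTC, MCTCTProcessor
except ImportError as err:
    raise ImportError(
        "M-CTC-T is in maintenance mode and may be removed in a future "
        "release; `pip install -U transformers==4.30.0` restores the last "
        "version that fully supported it."
    ) from err
```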
@@ -16,6 +16,15 @@ rendered properly in your Markdown viewer.

# RetriBERT

+<Tip warning={true}>
+
+This model is in maintenance mode only, so we won't accept any new PRs changing its code.
+
+If you run into any issues running this model, please reinstall the last version that supported this model: v4.30.0.
+You can do so by running the following command: `pip install -U transformers==4.30.0`.
+
+</Tip>
+
## Overview

The RetriBERT model was proposed in the blog post [Explain Anything Like I'm Five: A Model for Open Domain Long Form
...
@@ -16,6 +16,15 @@ rendered properly in your Markdown viewer.

# TAPEX

+<Tip warning={true}>
+
+This model is in maintenance mode only, so we won't accept any new PRs changing its code.
+
+If you run into any issues running this model, please reinstall the last version that supported this model: v4.30.0.
+You can do so by running the following command: `pip install -U transformers==4.30.0`.
+
+</Tip>
+
## Overview

The TAPEX model was proposed in [TAPEX: Table Pre-training via Learning a Neural SQL Executor](https://arxiv.org/abs/2107.07653) by Qian Liu,
...
@@ -16,6 +16,15 @@ rendered properly in your Markdown viewer.

# Trajectory Transformer

+<Tip warning={true}>
+
+This model is in maintenance mode only, so we won't accept any new PRs changing its code.
+
+If you run into any issues running this model, please reinstall the last version that supported this model: v4.30.0.
+You can do so by running the following command: `pip install -U transformers==4.30.0`.
+
+</Tip>
+
## Overview

The Trajectory Transformer model was proposed in [Offline Reinforcement Learning as One Big Sequence Modeling Problem](https://arxiv.org/abs/2106.02039) by Michael Janner, Qiyang Li, Sergey Levine.
...
@@ -16,6 +16,15 @@ rendered properly in your Markdown viewer.

# VAN

+<Tip warning={true}>
+
+This model is in maintenance mode only, so we won't accept any new PRs changing its code.
+
+If you run into any issues running this model, please reinstall the last version that supported this model: v4.30.0.
+You can do so by running the following command: `pip install -U transformers==4.30.0`.
+
+</Tip>
+
## Overview

The VAN model was proposed in [Visual Attention Network](https://arxiv.org/abs/2202.09741) by Meng-Hao Guo, Cheng-Ze Lu, Zheng-Ning Liu, Ming-Ming Cheng, Shi-Min Hu.
...
@@ -202,7 +202,6 @@ _import_structure = {
        "Blip2VisionConfig",
    ],
    "models.bloom": ["BLOOM_PRETRAINED_CONFIG_ARCHIVE_MAP", "BloomConfig"],
-    "models.bort": [],
    "models.bridgetower": [
        "BRIDGETOWER_PRETRAINED_CONFIG_ARCHIVE_MAP",
        "BridgeTowerConfig",
@@ -263,6 +262,26 @@ _import_structure = {
    "models.decision_transformer": ["DECISION_TRANSFORMER_PRETRAINED_CONFIG_ARCHIVE_MAP", "DecisionTransformerConfig"],
    "models.deformable_detr": ["DEFORMABLE_DETR_PRETRAINED_CONFIG_ARCHIVE_MAP", "DeformableDetrConfig"],
    "models.deit": ["DEIT_PRETRAINED_CONFIG_ARCHIVE_MAP", "DeiTConfig"],
+    "models.deprecated": [],
+    "models.deprecated.bort": [],
+    "models.deprecated.mctct": [
+        "MCTCT_PRETRAINED_CONFIG_ARCHIVE_MAP",
+        "MCTCTConfig",
+        "MCTCTFeatureExtractor",
+        "MCTCTProcessor",
+    ],
+    "models.deprecated.mmbt": ["MMBTConfig"],
+    "models.deprecated.retribert": [
+        "RETRIBERT_PRETRAINED_CONFIG_ARCHIVE_MAP",
+        "RetriBertConfig",
+        "RetriBertTokenizer",
+    ],
+    "models.deprecated.tapex": ["TapexTokenizer"],
+    "models.deprecated.trajectory_transformer": [
+        "TRAJECTORY_TRANSFORMER_PRETRAINED_CONFIG_ARCHIVE_MAP",
+        "TrajectoryTransformerConfig",
+    ],
+    "models.deprecated.van": ["VAN_PRETRAINED_CONFIG_ARCHIVE_MAP", "VanConfig"],
    "models.deta": ["DETA_PRETRAINED_CONFIG_ARCHIVE_MAP", "DetaConfig"],
    "models.detr": ["DETR_PRETRAINED_CONFIG_ARCHIVE_MAP", "DetrConfig"],
    "models.dialogpt": [],
@@ -390,13 +409,11 @@ _import_structure = {
    "models.maskformer": ["MASKFORMER_PRETRAINED_CONFIG_ARCHIVE_MAP", "MaskFormerConfig", "MaskFormerSwinConfig"],
    "models.mbart": ["MBartConfig"],
    "models.mbart50": [],
-    "models.mctct": ["MCTCT_PRETRAINED_CONFIG_ARCHIVE_MAP", "MCTCTConfig", "MCTCTFeatureExtractor", "MCTCTProcessor"],
    "models.mega": ["MEGA_PRETRAINED_CONFIG_ARCHIVE_MAP", "MegaConfig"],
    "models.megatron_bert": ["MEGATRON_BERT_PRETRAINED_CONFIG_ARCHIVE_MAP", "MegatronBertConfig"],
    "models.megatron_gpt2": [],
    "models.mgp_str": ["MGP_STR_PRETRAINED_CONFIG_ARCHIVE_MAP", "MgpstrConfig", "MgpstrProcessor", "MgpstrTokenizer"],
    "models.mluke": [],
-    "models.mmbt": ["MMBTConfig"],
    "models.mobilebert": ["MOBILEBERT_PRETRAINED_CONFIG_ARCHIVE_MAP", "MobileBertConfig", "MobileBertTokenizer"],
    "models.mobilenet_v1": ["MOBILENET_V1_PRETRAINED_CONFIG_ARCHIVE_MAP", "MobileNetV1Config"],
    "models.mobilenet_v2": ["MOBILENET_V2_PRETRAINED_CONFIG_ARCHIVE_MAP", "MobileNetV2Config"],
@@ -451,7 +468,6 @@ _import_structure = {
    "models.regnet": ["REGNET_PRETRAINED_CONFIG_ARCHIVE_MAP", "RegNetConfig"],
    "models.rembert": ["REMBERT_PRETRAINED_CONFIG_ARCHIVE_MAP", "RemBertConfig"],
    "models.resnet": ["RESNET_PRETRAINED_CONFIG_ARCHIVE_MAP", "ResNetConfig"],
-    "models.retribert": ["RETRIBERT_PRETRAINED_CONFIG_ARCHIVE_MAP", "RetriBertConfig", "RetriBertTokenizer"],
    "models.roberta": ["ROBERTA_PRETRAINED_CONFIG_ARCHIVE_MAP", "RobertaConfig", "RobertaTokenizer"],
    "models.roberta_prelayernorm": ["ROBERTA_PRELAYERNORM_PRETRAINED_CONFIG_ARCHIVE_MAP", "RobertaPreLayerNormConfig"],
    "models.roc_bert": ["ROC_BERT_PRETRAINED_CONFIG_ARCHIVE_MAP", "RoCBertConfig", "RoCBertTokenizer"],
@@ -498,17 +514,12 @@ _import_structure = {
    "models.t5": ["T5_PRETRAINED_CONFIG_ARCHIVE_MAP", "T5Config"],
    "models.table_transformer": ["TABLE_TRANSFORMER_PRETRAINED_CONFIG_ARCHIVE_MAP", "TableTransformerConfig"],
    "models.tapas": ["TAPAS_PRETRAINED_CONFIG_ARCHIVE_MAP", "TapasConfig", "TapasTokenizer"],
-    "models.tapex": ["TapexTokenizer"],
    "models.time_series_transformer": [
        "TIME_SERIES_TRANSFORMER_PRETRAINED_CONFIG_ARCHIVE_MAP",
        "TimeSeriesTransformerConfig",
    ],
    "models.timesformer": ["TIMESFORMER_PRETRAINED_CONFIG_ARCHIVE_MAP", "TimesformerConfig"],
    "models.timm_backbone": ["TimmBackboneConfig"],
-    "models.trajectory_transformer": [
-        "TRAJECTORY_TRANSFORMER_PRETRAINED_CONFIG_ARCHIVE_MAP",
-        "TrajectoryTransformerConfig",
-    ],
    "models.transfo_xl": [
        "TRANSFO_XL_PRETRAINED_CONFIG_ARCHIVE_MAP",
        "TransfoXLConfig",
@@ -536,7 +547,6 @@ _import_structure = {
        "UniSpeechSatConfig",
    ],
    "models.upernet": ["UperNetConfig"],
-    "models.van": ["VAN_PRETRAINED_CONFIG_ARCHIVE_MAP", "VanConfig"],
    "models.videomae": ["VIDEOMAE_PRETRAINED_CONFIG_ARCHIVE_MAP", "VideoMAEConfig"],
    "models.vilt": [
        "VILT_PRETRAINED_CONFIG_ARCHIVE_MAP",
@@ -783,6 +793,7 @@ else:
    _import_structure["models.cpm"].append("CpmTokenizerFast")
    _import_structure["models.deberta"].append("DebertaTokenizerFast")
    _import_structure["models.deberta_v2"].append("DebertaV2TokenizerFast")
+    _import_structure["models.deprecated.retribert"].append("RetriBertTokenizerFast")
    _import_structure["models.distilbert"].append("DistilBertTokenizerFast")
    _import_structure["models.dpr"].extend(
        ["DPRContextEncoderTokenizerFast", "DPRQuestionEncoderTokenizerFast", "DPRReaderTokenizerFast"]
@@ -815,7 +826,6 @@ else:
    _import_structure["models.realm"].append("RealmTokenizerFast")
    _import_structure["models.reformer"].append("ReformerTokenizerFast")
    _import_structure["models.rembert"].append("RemBertTokenizerFast")
-    _import_structure["models.retribert"].append("RetriBertTokenizerFast")
    _import_structure["models.roberta"].append("RobertaTokenizerFast")
    _import_structure["models.roformer"].append("RoFormerTokenizerFast")
    _import_structure["models.splinter"].append("SplinterTokenizerFast")
@@ -1497,6 +1507,33 @@ else:
            "DeiTPreTrainedModel",
        ]
    )
+    _import_structure["models.deprecated.mctct"].extend(
+        [
+            "MCTCT_PRETRAINED_MODEL_ARCHIVE_LIST",
+            "MCTCTForCTC",
+            "MCTCTModel",
+            "MCTCTPreTrainedModel",
+        ]
+    )
+    _import_structure["models.deprecated.mmbt"].extend(["MMBTForClassification", "MMBTModel", "ModalEmbeddings"])
+    _import_structure["models.deprecated.retribert"].extend(
+        ["RETRIBERT_PRETRAINED_MODEL_ARCHIVE_LIST", "RetriBertModel", "RetriBertPreTrainedModel"]
+    )
+    _import_structure["models.deprecated.trajectory_transformer"].extend(
+        [
+            "TRAJECTORY_TRANSFORMER_PRETRAINED_MODEL_ARCHIVE_LIST",
+            "TrajectoryTransformerModel",
+            "TrajectoryTransformerPreTrainedModel",
+        ]
+    )
+    _import_structure["models.deprecated.van"].extend(
+        [
+            "VAN_PRETRAINED_MODEL_ARCHIVE_LIST",
+            "VanForImageClassification",
+            "VanModel",
+            "VanPreTrainedModel",
+        ]
+    )
    _import_structure["models.deta"].extend(
        [
            "DETA_PRETRAINED_MODEL_ARCHIVE_LIST",
@@ -2043,14 +2080,6 @@ else:
            "MBartPreTrainedModel",
        ]
    )
-    _import_structure["models.mctct"].extend(
-        [
-            "MCTCT_PRETRAINED_MODEL_ARCHIVE_LIST",
-            "MCTCTForCTC",
-            "MCTCTModel",
-            "MCTCTPreTrainedModel",
-        ]
-    )
    _import_structure["models.mega"].extend(
        [
            "MEGA_PRETRAINED_MODEL_ARCHIVE_LIST",
@@ -2087,7 +2116,6 @@ else:
            "MgpstrPreTrainedModel",
        ]
    )
-    _import_structure["models.mmbt"].extend(["MMBTForClassification", "MMBTModel", "ModalEmbeddings"])
    _import_structure["models.mobilebert"].extend(
        [
            "MOBILEBERT_PRETRAINED_MODEL_ARCHIVE_LIST",
@@ -2419,9 +2447,6 @@ else:
            "ResNetPreTrainedModel",
        ]
    )
-    _import_structure["models.retribert"].extend(
-        ["RETRIBERT_PRETRAINED_MODEL_ARCHIVE_LIST", "RetriBertModel", "RetriBertPreTrainedModel"]
-    )
    _import_structure["models.roberta"].extend(
        [
            "ROBERTA_PRETRAINED_MODEL_ARCHIVE_LIST",
@@ -2660,13 +2685,6 @@ else:
        ]
    )
    _import_structure["models.timm_backbone"].extend(["TimmBackbone"])
-    _import_structure["models.trajectory_transformer"].extend(
-        [
-            "TRAJECTORY_TRANSFORMER_PRETRAINED_MODEL_ARCHIVE_LIST",
-            "TrajectoryTransformerModel",
-            "TrajectoryTransformerPreTrainedModel",
-        ]
-    )
    _import_structure["models.transfo_xl"].extend(
        [
            "TRANSFO_XL_PRETRAINED_MODEL_ARCHIVE_LIST",
@@ -2727,14 +2745,6 @@ else:
            "UperNetPreTrainedModel",
        ]
    )
-    _import_structure["models.van"].extend(
-        [
-            "VAN_PRETRAINED_MODEL_ARCHIVE_LIST",
-            "VanForImageClassification",
-            "VanModel",
-            "VanPreTrainedModel",
-        ]
-    )
    _import_structure["models.videomae"].extend(
        [
            "VIDEOMAE_PRETRAINED_MODEL_ARCHIVE_LIST",
@@ -4187,6 +4197,24 @@ if TYPE_CHECKING:
    )
    from .models.deformable_detr import DEFORMABLE_DETR_PRETRAINED_CONFIG_ARCHIVE_MAP, DeformableDetrConfig
    from .models.deit import DEIT_PRETRAINED_CONFIG_ARCHIVE_MAP, DeiTConfig
+    from .models.deprecated.mctct import (
+        MCTCT_PRETRAINED_CONFIG_ARCHIVE_MAP,
+        MCTCTConfig,
+        MCTCTFeatureExtractor,
+        MCTCTProcessor,
+    )
+    from .models.deprecated.mmbt import MMBTConfig
+    from .models.deprecated.retribert import (
+        RETRIBERT_PRETRAINED_CONFIG_ARCHIVE_MAP,
+        RetriBertConfig,
+        RetriBertTokenizer,
+    )
+    from .models.deprecated.tapex import TapexTokenizer
+    from .models.deprecated.trajectory_transformer import (
+        TRAJECTORY_TRANSFORMER_PRETRAINED_CONFIG_ARCHIVE_MAP,
+        TrajectoryTransformerConfig,
+    )
+    from .models.deprecated.van import VAN_PRETRAINED_CONFIG_ARCHIVE_MAP, VanConfig
    from .models.deta import DETA_PRETRAINED_CONFIG_ARCHIVE_MAP, DetaConfig
    from .models.detr import DETR_PRETRAINED_CONFIG_ARCHIVE_MAP, DetrConfig
    from .models.dinat import DINAT_PRETRAINED_CONFIG_ARCHIVE_MAP, DinatConfig
@@ -4304,11 +4332,9 @@ if TYPE_CHECKING:
    from .models.mask2former import MASK2FORMER_PRETRAINED_CONFIG_ARCHIVE_MAP, Mask2FormerConfig
    from .models.maskformer import MASKFORMER_PRETRAINED_CONFIG_ARCHIVE_MAP, MaskFormerConfig, MaskFormerSwinConfig
    from .models.mbart import MBartConfig
-    from .models.mctct import MCTCT_PRETRAINED_CONFIG_ARCHIVE_MAP, MCTCTConfig, MCTCTFeatureExtractor, MCTCTProcessor
    from .models.mega import MEGA_PRETRAINED_CONFIG_ARCHIVE_MAP, MegaConfig
    from .models.megatron_bert import MEGATRON_BERT_PRETRAINED_CONFIG_ARCHIVE_MAP, MegatronBertConfig
    from .models.mgp_str import MGP_STR_PRETRAINED_CONFIG_ARCHIVE_MAP, MgpstrConfig, MgpstrProcessor, MgpstrTokenizer
-    from .models.mmbt import MMBTConfig
    from .models.mobilebert import MOBILEBERT_PRETRAINED_CONFIG_ARCHIVE_MAP, MobileBertConfig, MobileBertTokenizer
    from .models.mobilenet_v1 import MOBILENET_V1_PRETRAINED_CONFIG_ARCHIVE_MAP, MobileNetV1Config
    from .models.mobilenet_v2 import MOBILENET_V2_PRETRAINED_CONFIG_ARCHIVE_MAP, MobileNetV2Config
@@ -4359,7 +4385,6 @@ if TYPE_CHECKING:
    from .models.regnet import REGNET_PRETRAINED_CONFIG_ARCHIVE_MAP, RegNetConfig
    from .models.rembert import REMBERT_PRETRAINED_CONFIG_ARCHIVE_MAP, RemBertConfig
    from .models.resnet import RESNET_PRETRAINED_CONFIG_ARCHIVE_MAP, ResNetConfig
-    from .models.retribert import RETRIBERT_PRETRAINED_CONFIG_ARCHIVE_MAP, RetriBertConfig, RetriBertTokenizer
    from .models.roberta import ROBERTA_PRETRAINED_CONFIG_ARCHIVE_MAP, RobertaConfig, RobertaTokenizer
    from .models.roberta_prelayernorm import (
        ROBERTA_PRELAYERNORM_PRETRAINED_CONFIG_ARCHIVE_MAP,
@@ -4409,17 +4434,12 @@ if TYPE_CHECKING:
    from .models.t5 import T5_PRETRAINED_CONFIG_ARCHIVE_MAP, T5Config
    from .models.table_transformer import TABLE_TRANSFORMER_PRETRAINED_CONFIG_ARCHIVE_MAP, TableTransformerConfig
    from .models.tapas import TAPAS_PRETRAINED_CONFIG_ARCHIVE_MAP, TapasConfig, TapasTokenizer
-    from .models.tapex import TapexTokenizer
    from .models.time_series_transformer import (
        TIME_SERIES_TRANSFORMER_PRETRAINED_CONFIG_ARCHIVE_MAP,
        TimeSeriesTransformerConfig,
    )
    from .models.timesformer import TIMESFORMER_PRETRAINED_CONFIG_ARCHIVE_MAP, TimesformerConfig
    from .models.timm_backbone import TimmBackboneConfig
-    from .models.trajectory_transformer import (
-        TRAJECTORY_TRANSFORMER_PRETRAINED_CONFIG_ARCHIVE_MAP,
-        TrajectoryTransformerConfig,
-    )
    from .models.transfo_xl import (
        TRANSFO_XL_PRETRAINED_CONFIG_ARCHIVE_MAP,
        TransfoXLConfig,
@@ -4432,7 +4452,6 @@ if TYPE_CHECKING:
    from .models.unispeech import UNISPEECH_PRETRAINED_CONFIG_ARCHIVE_MAP, UniSpeechConfig
    from .models.unispeech_sat import UNISPEECH_SAT_PRETRAINED_CONFIG_ARCHIVE_MAP, UniSpeechSatConfig
    from .models.upernet import UperNetConfig
-    from .models.van import VAN_PRETRAINED_CONFIG_ARCHIVE_MAP, VanConfig
    from .models.videomae import VIDEOMAE_PRETRAINED_CONFIG_ARCHIVE_MAP, VideoMAEConfig
    from .models.vilt import (
        VILT_PRETRAINED_CONFIG_ARCHIVE_MAP,
@@ -4667,6 +4686,7 @@ if TYPE_CHECKING:
    from .models.cpm import CpmTokenizerFast
    from .models.deberta import DebertaTokenizerFast
    from .models.deberta_v2 import DebertaV2TokenizerFast
+    from .models.deprecated.retribert import RetriBertTokenizerFast
    from .models.distilbert import DistilBertTokenizerFast
    from .models.dpr import DPRContextEncoderTokenizerFast, DPRQuestionEncoderTokenizerFast, DPRReaderTokenizerFast
    from .models.electra import ElectraTokenizerFast
@@ -4697,7 +4717,6 @@ if TYPE_CHECKING:
    from .models.realm import RealmTokenizerFast
    from .models.reformer import ReformerTokenizerFast
    from .models.rembert import RemBertTokenizerFast
-    from .models.retribert import RetriBertTokenizerFast
    from .models.roberta import RobertaTokenizerFast
    from .models.roformer import RoFormerTokenizerFast
    from .models.splinter import SplinterTokenizerFast
@@ -5262,6 +5281,29 @@ if TYPE_CHECKING:
        DeiTModel,
        DeiTPreTrainedModel,
    )
+    from .models.deprecated.mctct import (
+        MCTCT_PRETRAINED_MODEL_ARCHIVE_LIST,
+        MCTCTForCTC,
+        MCTCTModel,
+        MCTCTPreTrainedModel,
+    )
+    from .models.deprecated.mmbt import MMBTForClassification, MMBTModel, ModalEmbeddings
+    from .models.deprecated.retribert import (
+        RETRIBERT_PRETRAINED_MODEL_ARCHIVE_LIST,
+        RetriBertModel,
+        RetriBertPreTrainedModel,
+    )
+    from .models.deprecated.trajectory_transformer import (
+        TRAJECTORY_TRANSFORMER_PRETRAINED_MODEL_ARCHIVE_LIST,
+        TrajectoryTransformerModel,
+        TrajectoryTransformerPreTrainedModel,
+    )
+    from .models.deprecated.van import (
+        VAN_PRETRAINED_MODEL_ARCHIVE_LIST,
+        VanForImageClassification,
+        VanModel,
+        VanPreTrainedModel,
+    )
    from .models.deta import (
        DETA_PRETRAINED_MODEL_ARCHIVE_LIST,
        DetaForObjectDetection,
@@ -5698,7 +5740,6 @@ if TYPE_CHECKING:
        MBartModel,
        MBartPreTrainedModel,
    )
-    from .models.mctct import MCTCT_PRETRAINED_MODEL_ARCHIVE_LIST, MCTCTForCTC, MCTCTModel, MCTCTPreTrainedModel
    from .models.mega import (
        MEGA_PRETRAINED_MODEL_ARCHIVE_LIST,
        MegaForCausalLM,
@@ -5729,7 +5770,6 @@ if TYPE_CHECKING:
        MgpstrModel,
        MgpstrPreTrainedModel,
    )
-    from .models.mmbt import MMBTForClassification, MMBTModel, ModalEmbeddings
    from .models.mobilebert import (
        MOBILEBERT_PRETRAINED_MODEL_ARCHIVE_LIST,
        MobileBertForMaskedLM,
@@ -6011,7 +6051,6 @@ if TYPE_CHECKING:
        ResNetModel,
        ResNetPreTrainedModel,
    )
-    from .models.retribert import RETRIBERT_PRETRAINED_MODEL_ARCHIVE_LIST, RetriBertModel, RetriBertPreTrainedModel
    from .models.roberta import (
        ROBERTA_PRETRAINED_MODEL_ARCHIVE_LIST,
        RobertaForCausalLM,
@@ -6204,11 +6243,6 @@ if TYPE_CHECKING:
        TimesformerPreTrainedModel,
    )
    from .models.timm_backbone import TimmBackbone
-    from .models.trajectory_transformer import (
-        TRAJECTORY_TRANSFORMER_PRETRAINED_MODEL_ARCHIVE_LIST,
-        TrajectoryTransformerModel,
-        TrajectoryTransformerPreTrainedModel,
-    )
    from .models.transfo_xl import (
        TRANSFO_XL_PRETRAINED_MODEL_ARCHIVE_LIST,
        AdaptiveEmbedding,
@@ -6252,12 +6286,6 @@ if TYPE_CHECKING:
        UniSpeechSatPreTrainedModel,
    )
    from .models.upernet import UperNetForSemanticSegmentation, UperNetPreTrainedModel
-    from .models.van import (
-        VAN_PRETRAINED_MODEL_ARCHIVE_LIST,
-        VanForImageClassification,
-        VanModel,
-        VanPreTrainedModel,
-    )
    from .models.videomae import (
        VIDEOMAE_PRETRAINED_MODEL_ARCHIVE_LIST,
        VideoMAEForPreTraining,
...
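Taken together, the `__init__.py` changes above move internals without touching the public surface: every deprecated class is still registered in `_import_structure` and re-imported under `TYPE_CHECKING`, just from `models.deprecated.*` now. A sketch of what stays stable and what moves, as implied by this diff:

```python
# Public imports are unchanged by this commit:
from transformers import MCTCTModel, RetriBertTokenizer, TapexTokenizer, VanConfig

# The internal home moved one package level down:
from transformers.models.deprecated.mctct import MCTCTModel  # new location
# from transformers.models.mctct import MCTCTModel           # old location, removed here
```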
@@ -36,7 +36,6 @@ from . import (
    blip,
    blip_2,
    bloom,
-    bort,
    bridgetower,
    byt5,
    camembert,
@@ -60,6 +59,7 @@ from . import (
    decision_transformer,
    deformable_detr,
    deit,
+    deprecated,
    deta,
    detr,
    dialogpt,
@@ -122,13 +122,11 @@ from . import (
    maskformer,
    mbart,
    mbart50,
-    mctct,
    mega,
    megatron_bert,
    megatron_gpt2,
    mgp_str,
    mluke,
-    mmbt,
    mobilebert,
    mobilenet_v1,
    mobilenet_v2,
@@ -164,7 +162,6 @@ from . import (
    regnet,
    rembert,
    resnet,
-    retribert,
    roberta,
    roberta_prelayernorm,
    roc_bert,
@@ -188,11 +185,9 @@ from . import (
    t5,
    table_transformer,
    tapas,
-    tapex,
    time_series_transformer,
    timesformer,
    timm_backbone,
-    trajectory_transformer,
    transfo_xl,
    trocr,
    tvlt,
@@ -200,7 +195,6 @@ from . import (
    unispeech,
    unispeech_sat,
    upernet,
-    van,
    videomae,
    vilt,
    vision_encoder_decoder,
...
@@ -640,6 +640,15 @@ MODEL_NAMES_MAPPING = OrderedDict(
    ]
)

+DEPRECATED_MODELS = [
+    "bort",
+    "mctct",
+    "mmbt",
+    "retribert",
+    "trajectory_transformer",
+    "van",
+]
+
SPECIAL_MODEL_TYPE_TO_MODULE_NAME = OrderedDict(
    [
        ("openai-gpt", "openai"),
@@ -659,7 +668,11 @@ def model_type_to_module_name(key):
    if key in SPECIAL_MODEL_TYPE_TO_MODULE_NAME:
        return SPECIAL_MODEL_TYPE_TO_MODULE_NAME[key]

-    return key.replace("-", "_")
+    key = key.replace("-", "_")
+    if key in DEPRECATED_MODELS:
+        key = f"deprecated.{key}"
+
+    return key


def config_class_to_model_type(config):
...
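The auto classes resolve a model type to its module through `model_type_to_module_name`, so the reroute above is what keeps `AutoConfig` and friends working after the move. A quick illustration of the new routing (the assertions are a sketch, not tests from this PR):

```python
from transformers.models.auto.configuration_auto import model_type_to_module_name

assert model_type_to_module_name("mctct") == "deprecated.mctct"  # rerouted by DEPRECATED_MODELS
assert model_type_to_module_name("bert") == "bert"               # unaffected
assert model_type_to_module_name("openai-gpt") == "openai"       # the special map wins first
```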
@@ -13,7 +13,7 @@
# limitations under the License.

from typing import TYPE_CHECKING

-from ...utils import OptionalDependencyNotAvailable, _LazyModule, is_torch_available
+from ....utils import OptionalDependencyNotAvailable, _LazyModule, is_torch_available

_import_structure = {
...
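Every remaining hunk in this commit is the same mechanical fix: each package now sits one level deeper, so every relative import gains one dot. A sketch of why, using the paths implied by this diff:

```python
# transformers/models/mctct/modeling_mctct.py (old layout):
#     from ...utils import logging    # climbs mctct -> models -> transformers
#
# transformers/models/deprecated/mctct/modeling_mctct.py (new layout):
#     from ....utils import logging   # one extra hop: mctct -> deprecated -> models -> transformers
```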
@@ -14,8 +14,8 @@
# limitations under the License.
"""M-CTC-T model configuration"""

-from ...configuration_utils import PretrainedConfig
-from ...utils import logging
+from ....configuration_utils import PretrainedConfig
+from ....utils import logging


logger = logging.get_logger(__name__)
...
@@ -20,11 +20,11 @@ from typing import List, Optional, Union

import numpy as np

-from ...audio_utils import mel_filter_bank, optimal_fft_length, spectrogram, window_function
-from ...feature_extraction_sequence_utils import SequenceFeatureExtractor
-from ...feature_extraction_utils import BatchFeature
-from ...file_utils import PaddingStrategy, TensorType
-from ...utils import logging
+from ....audio_utils import mel_filter_bank, optimal_fft_length, spectrogram, window_function
+from ....feature_extraction_sequence_utils import SequenceFeatureExtractor
+from ....feature_extraction_utils import BatchFeature
+from ....file_utils import PaddingStrategy, TensorType
+from ....utils import logging


logger = logging.get_logger(__name__)
...
@@ -22,17 +22,17 @@ import torch
import torch.utils.checkpoint
from torch import nn

-from ...activations import ACT2FN
-from ...deepspeed import is_deepspeed_zero3_enabled
-from ...file_utils import add_code_sample_docstrings, add_start_docstrings, add_start_docstrings_to_model_forward
-from ...modeling_outputs import BaseModelOutput, CausalLMOutput
-from ...modeling_utils import (
+from ....activations import ACT2FN
+from ....deepspeed import is_deepspeed_zero3_enabled
+from ....file_utils import add_code_sample_docstrings, add_start_docstrings, add_start_docstrings_to_model_forward
+from ....modeling_outputs import BaseModelOutput, CausalLMOutput
+from ....modeling_utils import (
    PreTrainedModel,
    apply_chunking_to_forward,
    find_pruneable_heads_and_indices,
    prune_linear_layer,
)
-from ...utils import logging
+from ....utils import logging
from .configuration_mctct import MCTCTConfig
...
@@ -18,7 +18,7 @@ Speech processor class for M-CTC-T
import warnings
from contextlib import contextmanager

-from ...processing_utils import ProcessorMixin
+from ....processing_utils import ProcessorMixin


class MCTCTProcessor(ProcessorMixin):
...
@@ -14,7 +14,7 @@
from typing import TYPE_CHECKING

-from ...utils import OptionalDependencyNotAvailable, _LazyModule, is_torch_available
+from ....utils import OptionalDependencyNotAvailable, _LazyModule, is_torch_available


_import_structure = {"configuration_mmbt": ["MMBTConfig"]}
...
@@ -15,7 +15,7 @@
# limitations under the License.
""" MMBT configuration"""

-from ...utils import logging
+from ....utils import logging


logger = logging.get_logger(__name__)
...
@@ -20,9 +20,9 @@ import torch
from torch import nn
from torch.nn import CrossEntropyLoss, MSELoss

-from ...modeling_outputs import BaseModelOutputWithPooling, SequenceClassifierOutput
-from ...modeling_utils import ModuleUtilsMixin
-from ...utils import add_start_docstrings, add_start_docstrings_to_model_forward, logging, replace_return_docstrings
+from ....modeling_outputs import BaseModelOutputWithPooling, SequenceClassifierOutput
+from ....modeling_utils import ModuleUtilsMixin
+from ....utils import add_start_docstrings, add_start_docstrings_to_model_forward, logging, replace_return_docstrings


logger = logging.get_logger(__name__)
...