Unverified commit b324557a, authored by amyeroberts and committed by GitHub

Removal of deprecated vision methods and specify deprecation versions (#24570)

* Removal of deprecated methods and specify versions

* Fix tests
parent 77db28dc
@@ -43,7 +43,6 @@ This model was contributed by [DepuMeng](https://huggingface.co/DepuMeng). The o
 [[autodoc]] ConditionalDetrImageProcessor
     - preprocess
-    - pad_and_create_pixel_mask
     - post_process_object_detection
     - post_process_instance_segmentation
     - post_process_semantic_segmentation
@@ -53,7 +52,6 @@ This model was contributed by [DepuMeng](https://huggingface.co/DepuMeng). The o
 [[autodoc]] ConditionalDetrFeatureExtractor
     - __call__
-    - pad_and_create_pixel_mask
     - post_process_object_detection
     - post_process_instance_segmentation
     - post_process_semantic_segmentation
...
@@ -52,14 +52,12 @@ If you're interested in submitting a resource to be included here, please feel f
 [[autodoc]] DeformableDetrImageProcessor
     - preprocess
-    - pad_and_create_pixel_mask
     - post_process_object_detection

 ## DeformableDetrFeatureExtractor

 [[autodoc]] DeformableDetrFeatureExtractor
     - __call__
-    - pad_and_create_pixel_mask
     - post_process_object_detection

 ## DeformableDetrConfig
...
@@ -190,7 +190,6 @@ If you're interested in submitting a resource to be included here, please feel f
 [[autodoc]] DetrFeatureExtractor
     - __call__
-    - pad_and_create_pixel_mask
     - post_process_object_detection
     - post_process_semantic_segmentation
     - post_process_instance_segmentation
...
@@ -449,13 +449,13 @@ or segmentation maps.
 ### Pad

 In some cases, for instance, when fine-tuning [DETR](./model_doc/detr), the model applies scale augmentation at training
-time. This may cause images to be different sizes in a batch. You can use [`DetrImageProcessor.pad_and_create_pixel_mask`]
+time. This may cause images to be different sizes in a batch. You can use [`DetrImageProcessor.pad`]
 from [`DetrImageProcessor`] and define a custom `collate_fn` to batch images together.

 ```py
 >>> def collate_fn(batch):
 ...     pixel_values = [item["pixel_values"] for item in batch]
-...     encoding = image_processor.pad_and_create_pixel_mask(pixel_values, return_tensors="pt")
+...     encoding = image_processor.pad(pixel_values, return_tensors="pt")
 ...     labels = [item["labels"] for item in batch]
 ...     batch = {}
 ...     batch["pixel_values"] = encoding["pixel_values"]
...
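As a usage sketch that is not part of this commit: the `collate_fn` above is normally handed to a PyTorch `DataLoader`. The `train_dataset` variable and the batch size below are placeholders, not something this diff defines.

```py
>>> from torch.utils.data import DataLoader

>>> # `train_dataset` is assumed to yield dicts with "pixel_values" and "labels" entries,
>>> # matching what `collate_fn` expects; the batch size is arbitrary.
>>> dataloader = DataLoader(train_dataset, batch_size=4, shuffle=True, collate_fn=collate_fn)
>>> batch = next(iter(dataloader))  # pixel_values are now padded to the largest image in the batch
```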
@@ -305,7 +305,7 @@ to indicate which pixels are real (1) and which are padding (0).
 ```py
 >>> def collate_fn(batch):
 ...     pixel_values = [item["pixel_values"] for item in batch]
-...     encoding = image_processor.pad_and_create_pixel_mask(pixel_values, return_tensors="pt")
+...     encoding = image_processor.pad(pixel_values, return_tensors="pt")
 ...     labels = [item["labels"] for item in batch]
 ...     batch = {}
 ...     batch["pixel_values"] = encoding["pixel_values"]
...
@@ -456,12 +456,12 @@ array([[1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0],
 For example, in cases such as [DETR](./model_doc/detr), the model applies scale augmentation at training time.
 This may cause the images in a batch to have different sizes.
-You can use [`DetrImageProcessor.pad_and_create_pixel_mask`] from [`DetrImageProcessor`] and define a custom `collate_fn` to batch the images together.
+You can use [`DetrImageProcessor.pad`] from [`DetrImageProcessor`] and define a custom `collate_fn` to batch the images together.

 ```py
 >>> def collate_fn(batch):
 ...     pixel_values = [item["pixel_values"] for item in batch]
-...     encoding = image_processor.pad_and_create_pixel_mask(pixel_values, return_tensors="pt")
+...     encoding = image_processor.pad(pixel_values, return_tensors="pt")
 ...     labels = [item["labels"] for item in batch]
 ...     batch = {}
 ...     batch["pixel_values"] = encoding["pixel_values"]
...
@@ -297,7 +297,7 @@ DatasetDict({
 ```py
 >>> def collate_fn(batch):
 ...     pixel_values = [item["pixel_values"] for item in batch]
-...     encoding = image_processor.pad_and_create_pixel_mask(pixel_values, return_tensors="pt")
+...     encoding = image_processor.pad(pixel_values, return_tensors="pt")
 ...     labels = [item["labels"] for item in batch]
 ...     batch = {}
 ...     batch["pixel_values"] = encoding["pixel_values"]
...
@@ -24,7 +24,6 @@ from .image_utils import (
     get_channel_dimension_axis,
     get_image_size,
     infer_channel_dimension_format,
-    to_numpy_array,
 )
 from .utils import ExplicitEnum, TensorType, is_jax_tensor, is_tf_tensor, is_torch_tensor
 from .utils.import_utils import (
@@ -345,18 +344,6 @@ def normalize(
         data_format (`ChannelDimension`, *optional*):
             The channel dimension format of the output image. If unset, will use the inferred format from the input.
     """
-    requires_backends(normalize, ["vision"])
-
-    if isinstance(image, PIL.Image.Image):
-        warnings.warn(
-            "PIL.Image.Image inputs are deprecated and will be removed in v4.26.0. Please use numpy arrays instead.",
-            FutureWarning,
-        )
-        # Convert PIL image to numpy array with the same logic as in the previous feature extractor normalize -
-        # casting to numpy array and dividing by 255.
-        image = to_numpy_array(image)
-        image = rescale(image, scale=1 / 255)
-
     if not isinstance(image, np.ndarray):
         raise ValueError("image must be a numpy array")
@@ -418,14 +405,9 @@ def center_crop(
     """
     requires_backends(center_crop, ["vision"])

-    if isinstance(image, PIL.Image.Image):
-        warnings.warn(
-            "PIL.Image.Image inputs are deprecated and will be removed in v4.26.0. Please use numpy arrays instead.",
-            FutureWarning,
-        )
-        image = to_numpy_array(image)
-        return_numpy = False if return_numpy is None else return_numpy
-    else:
-        return_numpy = True if return_numpy is None else return_numpy
+    if return_numpy is not None:
+        warnings.warn("return_numpy is deprecated and will be removed in v4.33", FutureWarning)
+
+    return_numpy = True if return_numpy is None else return_numpy

     if not isinstance(image, np.ndarray):
...
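Since `normalize` and `center_crop` no longer accept `PIL.Image.Image` inputs, callers must convert to numpy arrays themselves. A minimal migration sketch, assuming an RGB image on disk (the file path and normalization statistics are placeholders); the explicit `rescale` call reproduces the division by 255 that the removed PIL branch used to apply:

```py
import numpy as np
from PIL import Image

from transformers.image_transforms import center_crop, normalize, rescale

pil_image = Image.open("example.jpg").convert("RGB")  # placeholder path

# The automatic PIL -> numpy conversion (and its implicit 1/255 rescale) is gone,
# so perform both steps explicitly before normalizing or cropping.
image = np.array(pil_image)
image = rescale(image, scale=1 / 255)
image = normalize(image, mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225])
image = center_crop(image, size=(224, 224))
```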
@@ -128,15 +128,6 @@ class BeitImageProcessor(BaseImageProcessor):
         self.image_std = image_std if image_std is not None else IMAGENET_STANDARD_STD
         self.do_reduce_labels = do_reduce_labels

-    @property
-    def reduce_labels(self) -> bool:
-        warnings.warn(
-            "The `reduce_labels` property is deprecated and will be removed in v4.27. Please use"
-            " `do_reduce_labels` instead.",
-            FutureWarning,
-        )
-        return self.do_reduce_labels
-
     @classmethod
     def from_dict(cls, image_processor_dict: Dict[str, Any], **kwargs):
         """
...
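For downstream code that still reads the removed property, the fix is a one-line rename from `reduce_labels` to `do_reduce_labels`; a hedged sketch (the checkpoint name is only an example, not implied by this commit):

```py
from transformers import BeitImageProcessor

image_processor = BeitImageProcessor.from_pretrained("microsoft/beit-base-patch16-224-pt22k-ft22k")

# before: image_processor.reduce_labels   (deprecated property, now removed)
# after:
print(image_processor.do_reduce_labels)
```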
@@ -14,7 +14,6 @@
 # limitations under the License.
 """Image processor class for BridgeTower."""

-import warnings
 from typing import Any, Dict, Iterable, List, Optional, Tuple, Union

 import numpy as np
@@ -352,42 +351,6 @@ class BridgeTowerImageProcessor(BaseImageProcessor):
         return BatchFeature(data=data, tensor_type=return_tensors)

-    # Copied from transformers.models.vilt.image_processing_vilt.ViltImageProcessor.pad_and_create_pixel_mask
-    def pad_and_create_pixel_mask(
-        self,
-        pixel_values_list: List[ImageInput],
-        return_tensors: Optional[Union[str, TensorType]] = None,
-        data_format: Optional[ChannelDimension] = None,
-    ) -> BatchFeature:
-        """
-        Pads a batch of images with zeros to the size of largest height and width in the batch and returns their
-        corresponding pixel mask.
-
-        Args:
-            images (`List[np.ndarray]`):
-                Batch of images to pad.
-            return_tensors (`str` or `TensorType`, *optional*):
-                The type of tensors to return. Can be one of:
-                    - Unset: Return a list of `np.ndarray`.
-                    - `TensorType.TENSORFLOW` or `'tf'`: Return a batch of type `tf.Tensor`.
-                    - `TensorType.PYTORCH` or `'pt'`: Return a batch of type `torch.Tensor`.
-                    - `TensorType.NUMPY` or `'np'`: Return a batch of type `np.ndarray`.
-                    - `TensorType.JAX` or `'jax'`: Return a batch of type `jax.numpy.ndarray`.
-            data_format (`str` or `ChannelDimension`, *optional*):
-                The channel dimension format of the image. If not provided, it will be the same as the input image.
-        """
-        warnings.warn(
-            "This method is deprecated and will be removed in v4.26.0. Please use pad instead.", FutureWarning
-        )
-        # pad expects a list of np.ndarray, but the previous feature extractors expected torch tensors
-        images = [to_numpy_array(image) for image in pixel_values_list]
-        return self.pad(
-            images=images,
-            return_pixel_mask=True,
-            return_tensors=return_tensors,
-            data_format=data_format,
-        )
-
     def preprocess(
         self,
         images: ImageInput,
...
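The deleted helper only converted its inputs to numpy arrays and forwarded to `pad` with `return_pixel_mask=True`, so the replacement call is mechanical. A migration sketch mirroring what the removed code did internally (`pixel_values_list` and `image_processor` stand in for the caller's own objects):

```py
from transformers.image_utils import to_numpy_array

# before (removed):
# encoding = image_processor.pad_and_create_pixel_mask(pixel_values_list, return_tensors="pt")

# after: convert to numpy arrays as the old helper did, then call `pad` directly,
# which returns both `pixel_values` and the corresponding `pixel_mask`.
images = [to_numpy_array(image) for image in pixel_values_list]
encoding = image_processor.pad(images, return_pixel_mask=True, return_tensors="pt")
```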
@@ -820,15 +820,6 @@ class ConditionalDetrImageProcessor(BaseImageProcessor):
         self.image_std = image_std if image_std is not None else IMAGENET_DEFAULT_STD
         self.do_pad = do_pad

-    @property
-    # Copied from transformers.models.detr.image_processing_detr.DetrImageProcessor.max_size
-    def max_size(self):
-        logger.warning(
-            "The `max_size` parameter is deprecated and will be removed in v4.27. "
-            "Please specify in `size['longest_edge'] instead`.",
-        )
-        return self.size["longest_edge"]
-
     @classmethod
     # Copied from transformers.models.detr.image_processing_detr.DetrImageProcessor.from_dict with Detr->ConditionalDetr
     def from_dict(cls, image_processor_dict: Dict[str, Any], **kwargs):
@@ -873,7 +864,7 @@ class ConditionalDetrImageProcessor(BaseImageProcessor):
     # Copied from transformers.models.detr.image_processing_detr.DetrImageProcessor.prepare
     def prepare(self, image, target, return_segmentation_masks=False, masks_path=None):
         logger.warning_once(
-            "The `prepare` method is deprecated and will be removed in a future version. "
+            "The `prepare` method is deprecated and will be removed in v4.33. "
             "Please use `prepare_annotation` instead. Note: the `prepare_annotation` method "
             "does not return the image anymore.",
         )
@@ -882,23 +873,17 @@ class ConditionalDetrImageProcessor(BaseImageProcessor):
     # Copied from transformers.models.detr.image_processing_detr.DetrImageProcessor.convert_coco_poly_to_mask
     def convert_coco_poly_to_mask(self, *args, **kwargs):
-        logger.warning_once(
-            "The `convert_coco_poly_to_mask` method is deprecated and will be removed in a future version. "
-        )
+        logger.warning_once("The `convert_coco_poly_to_mask` method is deprecated and will be removed in v4.33.")
         return convert_coco_poly_to_mask(*args, **kwargs)

     # Copied from transformers.models.detr.image_processing_detr.DetrImageProcessor.prepare_coco_detection with DETR->ConditionalDetr
     def prepare_coco_detection(self, *args, **kwargs):
-        logger.warning_once(
-            "The `prepare_coco_detection` method is deprecated and will be removed in a future version. "
-        )
+        logger.warning_once("The `prepare_coco_detection` method is deprecated and will be removed in v4.33.")
         return prepare_coco_detection_annotation(*args, **kwargs)

     # Copied from transformers.models.detr.image_processing_detr.DetrImageProcessor.prepare_coco_panoptic
     def prepare_coco_panoptic(self, *args, **kwargs):
-        logger.warning_once(
-            "The `prepare_coco_panoptic` method is deprecated and will be removed in a future version. "
-        )
+        logger.warning_once("The `prepare_coco_panoptic` method is deprecated and will be removed in v4.33.")
         return prepare_coco_panoptic_annotation(*args, **kwargs)

     # Copied from transformers.models.detr.image_processing_detr.DetrImageProcessor.resize
@@ -979,40 +964,6 @@ class ConditionalDetrImageProcessor(BaseImageProcessor):
         """
         return normalize_annotation(annotation, image_size=image_size)

-    # Copied from transformers.models.detr.image_processing_detr.DetrImageProcessor.pad_and_create_pixel_mask
-    def pad_and_create_pixel_mask(
-        self,
-        pixel_values_list: List[ImageInput],
-        return_tensors: Optional[Union[str, TensorType]] = None,
-        data_format: Optional[ChannelDimension] = None,
-    ) -> BatchFeature:
-        """
-        Pads a batch of images with zeros to the size of largest height and width in the batch and returns their
-        corresponding pixel mask.
-
-        Args:
-            images (`List[np.ndarray]`):
-                Batch of images to pad.
-            return_tensors (`str` or `TensorType`, *optional*):
-                The type of tensors to return. Can be one of:
-                    - Unset: Return a list of `np.ndarray`.
-                    - `TensorType.TENSORFLOW` or `'tf'`: Return a batch of type `tf.Tensor`.
-                    - `TensorType.PYTORCH` or `'pt'`: Return a batch of type `torch.Tensor`.
-                    - `TensorType.NUMPY` or `'np'`: Return a batch of type `np.ndarray`.
-                    - `TensorType.JAX` or `'jax'`: Return a batch of type `jax.numpy.ndarray`.
-            data_format (`str` or `ChannelDimension`, *optional*):
-                The channel dimension format of the image. If not provided, it will be the same as the input image.
-        """
-        logger.warning_once("This method is deprecated and will be removed in v4.27.0. Please use pad instead.")
-        # pad expects a list of np.ndarray, but the previous feature extractors expected torch tensors
-        images = [to_numpy_array(image) for image in pixel_values_list]
-        return self.pad(
-            images=images,
-            return_pixel_mask=True,
-            return_tensors=return_tensors,
-            data_format=data_format,
-        )
-
     # Copied from transformers.models.detr.image_processing_detr.DetrImageProcessor._pad_image
     def _pad_image(
         self,
...
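With the `max_size` property gone, the longest-edge setting is read straight from the `size` dict. A small sketch; the checkpoint name is only an example:

```py
from transformers import ConditionalDetrImageProcessor

image_processor = ConditionalDetrImageProcessor.from_pretrained("microsoft/conditional-detr-resnet-50")

# before: image_processor.max_size   (deprecated property, now removed)
# after:
longest_edge = image_processor.size["longest_edge"]
```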
@@ -818,15 +818,6 @@ class DeformableDetrImageProcessor(BaseImageProcessor):
         self.image_std = image_std if image_std is not None else IMAGENET_DEFAULT_STD
         self.do_pad = do_pad

-    @property
-    # Copied from transformers.models.detr.image_processing_detr.DetrImageProcessor.max_size
-    def max_size(self):
-        logger.warning(
-            "The `max_size` parameter is deprecated and will be removed in v4.27. "
-            "Please specify in `size['longest_edge'] instead`.",
-        )
-        return self.size["longest_edge"]
-
     @classmethod
     # Copied from transformers.models.detr.image_processing_detr.DetrImageProcessor.from_dict with Detr->DeformableDetr
     def from_dict(cls, image_processor_dict: Dict[str, Any], **kwargs):
@@ -871,7 +862,7 @@ class DeformableDetrImageProcessor(BaseImageProcessor):
     # Copied from transformers.models.detr.image_processing_detr.DetrImageProcessor.prepare
     def prepare(self, image, target, return_segmentation_masks=None, masks_path=None):
         logger.warning_once(
-            "The `prepare` method is deprecated and will be removed in a future version. "
+            "The `prepare` method is deprecated and will be removed in v4.33. "
             "Please use `prepare_annotation` instead. Note: the `prepare_annotation` method "
             "does not return the image anymore.",
         )
@@ -880,23 +871,17 @@ class DeformableDetrImageProcessor(BaseImageProcessor):
     # Copied from transformers.models.detr.image_processing_detr.DetrImageProcessor.convert_coco_poly_to_mask
     def convert_coco_poly_to_mask(self, *args, **kwargs):
-        logger.warning_once(
-            "The `convert_coco_poly_to_mask` method is deprecated and will be removed in a future version. "
-        )
+        logger.warning_once("The `convert_coco_poly_to_mask` method is deprecated and will be removed in v4.33.")
         return convert_coco_poly_to_mask(*args, **kwargs)

     # Copied from transformers.models.detr.image_processing_detr.DetrImageProcessor.prepare_coco_detection
     def prepare_coco_detection(self, *args, **kwargs):
-        logger.warning_once(
-            "The `prepare_coco_detection` method is deprecated and will be removed in a future version. "
-        )
+        logger.warning_once("The `prepare_coco_detection` method is deprecated and will be removed in v4.33.")
         return prepare_coco_detection_annotation(*args, **kwargs)

     # Copied from transformers.models.detr.image_processing_detr.DetrImageProcessor.prepare_coco_panoptic
     def prepare_coco_panoptic(self, *args, **kwargs):
-        logger.warning_once(
-            "The `prepare_coco_panoptic` method is deprecated and will be removed in a future version. "
-        )
+        logger.warning_once("The `prepare_coco_panoptic` method is deprecated and will be removed in v4.33.")
         return prepare_coco_panoptic_annotation(*args, **kwargs)

     # Copied from transformers.models.detr.image_processing_detr.DetrImageProcessor.resize
@@ -977,40 +962,6 @@ class DeformableDetrImageProcessor(BaseImageProcessor):
         """
         return normalize_annotation(annotation, image_size=image_size)

-    # Copied from transformers.models.detr.image_processing_detr.DetrImageProcessor.pad_and_create_pixel_mask
-    def pad_and_create_pixel_mask(
-        self,
-        pixel_values_list: List[ImageInput],
-        return_tensors: Optional[Union[str, TensorType]] = None,
-        data_format: Optional[ChannelDimension] = None,
-    ) -> BatchFeature:
-        """
-        Pads a batch of images with zeros to the size of largest height and width in the batch and returns their
-        corresponding pixel mask.
-
-        Args:
-            images (`List[np.ndarray]`):
-                Batch of images to pad.
-            return_tensors (`str` or `TensorType`, *optional*):
-                The type of tensors to return. Can be one of:
-                    - Unset: Return a list of `np.ndarray`.
-                    - `TensorType.TENSORFLOW` or `'tf'`: Return a batch of type `tf.Tensor`.
-                    - `TensorType.PYTORCH` or `'pt'`: Return a batch of type `torch.Tensor`.
-                    - `TensorType.NUMPY` or `'np'`: Return a batch of type `np.ndarray`.
-                    - `TensorType.JAX` or `'jax'`: Return a batch of type `jax.numpy.ndarray`.
-            data_format (`str` or `ChannelDimension`, *optional*):
-                The channel dimension format of the image. If not provided, it will be the same as the input image.
-        """
-        logger.warning_once("This method is deprecated and will be removed in v4.27.0. Please use pad instead.")
-        # pad expects a list of np.ndarray, but the previous feature extractors expected torch tensors
-        images = [to_numpy_array(image) for image in pixel_values_list]
-        return self.pad(
-            images=images,
-            return_pixel_mask=True,
-            return_tensors=return_tensors,
-            data_format=data_format,
-        )
-
     # Copied from transformers.models.detr.image_processing_detr.DetrImageProcessor._pad_image
     def _pad_image(
         self,
...
@@ -544,7 +544,7 @@ class DetaImageProcessor(BaseImageProcessor):
     # Copied from transformers.models.detr.image_processing_detr.DetrImageProcessor.prepare
     def prepare(self, image, target, return_segmentation_masks=None, masks_path=None):
         logger.warning_once(
-            "The `prepare` method is deprecated and will be removed in a future version. "
+            "The `prepare` method is deprecated and will be removed in v4.33. "
             "Please use `prepare_annotation` instead. Note: the `prepare_annotation` method "
             "does not return the image anymore.",
         )
@@ -553,23 +553,17 @@ class DetaImageProcessor(BaseImageProcessor):
     # Copied from transformers.models.detr.image_processing_detr.DetrImageProcessor.convert_coco_poly_to_mask
     def convert_coco_poly_to_mask(self, *args, **kwargs):
-        logger.warning_once(
-            "The `convert_coco_poly_to_mask` method is deprecated and will be removed in a future version. "
-        )
+        logger.warning_once("The `convert_coco_poly_to_mask` method is deprecated and will be removed in v4.33.")
         return convert_coco_poly_to_mask(*args, **kwargs)

     # Copied from transformers.models.detr.image_processing_detr.DetrImageProcessor.prepare_coco_detection
     def prepare_coco_detection(self, *args, **kwargs):
-        logger.warning_once(
-            "The `prepare_coco_detection` method is deprecated and will be removed in a future version. "
-        )
+        logger.warning_once("The `prepare_coco_detection` method is deprecated and will be removed in v4.33.")
         return prepare_coco_detection_annotation(*args, **kwargs)

     # Copied from transformers.models.detr.image_processing_detr.DetrImageProcessor.prepare_coco_panoptic
     def prepare_coco_panoptic(self, *args, **kwargs):
-        logger.warning_once(
-            "The `prepare_coco_panoptic` method is deprecated and will be removed in a future version. "
-        )
+        logger.warning_once("The `prepare_coco_panoptic` method is deprecated and will be removed in v4.33.")
         return prepare_coco_panoptic_annotation(*args, **kwargs)

     def resize(
@@ -641,40 +635,6 @@ class DetaImageProcessor(BaseImageProcessor):
         """
         return normalize_annotation(annotation, image_size=image_size)

-    # Copied from transformers.models.detr.image_processing_detr.DetrImageProcessor.pad_and_create_pixel_mask
-    def pad_and_create_pixel_mask(
-        self,
-        pixel_values_list: List[ImageInput],
-        return_tensors: Optional[Union[str, TensorType]] = None,
-        data_format: Optional[ChannelDimension] = None,
-    ) -> BatchFeature:
-        """
-        Pads a batch of images with zeros to the size of largest height and width in the batch and returns their
-        corresponding pixel mask.
-
-        Args:
-            images (`List[np.ndarray]`):
-                Batch of images to pad.
-            return_tensors (`str` or `TensorType`, *optional*):
-                The type of tensors to return. Can be one of:
-                    - Unset: Return a list of `np.ndarray`.
-                    - `TensorType.TENSORFLOW` or `'tf'`: Return a batch of type `tf.Tensor`.
-                    - `TensorType.PYTORCH` or `'pt'`: Return a batch of type `torch.Tensor`.
-                    - `TensorType.NUMPY` or `'np'`: Return a batch of type `np.ndarray`.
-                    - `TensorType.JAX` or `'jax'`: Return a batch of type `jax.numpy.ndarray`.
-            data_format (`str` or `ChannelDimension`, *optional*):
-                The channel dimension format of the image. If not provided, it will be the same as the input image.
-        """
-        logger.warning_once("This method is deprecated and will be removed in v4.27.0. Please use pad instead.")
-        # pad expects a list of np.ndarray, but the previous feature extractors expected torch tensors
-        images = [to_numpy_array(image) for image in pixel_values_list]
-        return self.pad(
-            images=images,
-            return_pixel_mask=True,
-            return_tensors=return_tensors,
-            data_format=data_format,
-        )
-
     # Copied from transformers.models.detr.image_processing_detr.DetrImageProcessor._pad_image
     def _pad_image(
         self,
...
@@ -802,14 +802,6 @@ class DetrImageProcessor(BaseImageProcessor):
         self.image_std = image_std if image_std is not None else IMAGENET_DEFAULT_STD
         self.do_pad = do_pad

-    @property
-    def max_size(self):
-        logger.warning(
-            "The `max_size` parameter is deprecated and will be removed in v4.27. "
-            "Please specify in `size['longest_edge'] instead`.",
-        )
-        return self.size["longest_edge"]
-
     @classmethod
     def from_dict(cls, image_processor_dict: Dict[str, Any], **kwargs):
         """
@@ -851,7 +843,7 @@ class DetrImageProcessor(BaseImageProcessor):
     def prepare(self, image, target, return_segmentation_masks=None, masks_path=None):
         logger.warning_once(
-            "The `prepare` method is deprecated and will be removed in a future version. "
+            "The `prepare` method is deprecated and will be removed in v4.33. "
             "Please use `prepare_annotation` instead. Note: the `prepare_annotation` method "
             "does not return the image anymore.",
         )
@@ -859,21 +851,15 @@ class DetrImageProcessor(BaseImageProcessor):
         return image, target

     def convert_coco_poly_to_mask(self, *args, **kwargs):
-        logger.warning_once(
-            "The `convert_coco_poly_to_mask` method is deprecated and will be removed in a future version. "
-        )
+        logger.warning_once("The `convert_coco_poly_to_mask` method is deprecated and will be removed in v4.33.")
         return convert_coco_poly_to_mask(*args, **kwargs)

     def prepare_coco_detection(self, *args, **kwargs):
-        logger.warning_once(
-            "The `prepare_coco_detection` method is deprecated and will be removed in a future version. "
-        )
+        logger.warning_once("The `prepare_coco_detection` method is deprecated and will be removed in v4.33.")
         return prepare_coco_detection_annotation(*args, **kwargs)

     def prepare_coco_panoptic(self, *args, **kwargs):
-        logger.warning_once(
-            "The `prepare_coco_panoptic` method is deprecated and will be removed in a future version. "
-        )
+        logger.warning_once("The `prepare_coco_panoptic` method is deprecated and will be removed in v4.33.")
         return prepare_coco_panoptic_annotation(*args, **kwargs)

     def resize(
@@ -949,39 +935,6 @@ class DetrImageProcessor(BaseImageProcessor):
         """
         return normalize_annotation(annotation, image_size=image_size)

-    def pad_and_create_pixel_mask(
-        self,
-        pixel_values_list: List[ImageInput],
-        return_tensors: Optional[Union[str, TensorType]] = None,
-        data_format: Optional[ChannelDimension] = None,
-    ) -> BatchFeature:
-        """
-        Pads a batch of images with zeros to the size of largest height and width in the batch and returns their
-        corresponding pixel mask.
-
-        Args:
-            images (`List[np.ndarray]`):
-                Batch of images to pad.
-            return_tensors (`str` or `TensorType`, *optional*):
-                The type of tensors to return. Can be one of:
-                    - Unset: Return a list of `np.ndarray`.
-                    - `TensorType.TENSORFLOW` or `'tf'`: Return a batch of type `tf.Tensor`.
-                    - `TensorType.PYTORCH` or `'pt'`: Return a batch of type `torch.Tensor`.
-                    - `TensorType.NUMPY` or `'np'`: Return a batch of type `np.ndarray`.
-                    - `TensorType.JAX` or `'jax'`: Return a batch of type `jax.numpy.ndarray`.
-            data_format (`str` or `ChannelDimension`, *optional*):
-                The channel dimension format of the image. If not provided, it will be the same as the input image.
-        """
-        logger.warning_once("This method is deprecated and will be removed in v4.27.0. Please use pad instead.")
-        # pad expects a list of np.ndarray, but the previous feature extractors expected torch tensors
-        images = [to_numpy_array(image) for image in pixel_values_list]
-        return self.pad(
-            images=images,
-            return_pixel_mask=True,
-            return_tensors=return_tensors,
-            data_format=data_format,
-        )
-
     def _pad_image(
         self,
         image: np.ndarray,
...
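The deprecation message now points to `prepare_annotation` and pins v4.33 as the removal version; note the changed return contract. A hedged sketch of the migration, with `image`, `target`, and `image_processor` as illustrative names:

```py
# before (deprecated): `prepare` returned both the image and the processed target
# image, target = image_processor.prepare(image, target)

# after: `prepare_annotation` returns only the prepared annotation;
# the image itself is no longer returned, so keep your own reference to it.
target = image_processor.prepare_annotation(image, target)
```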
@@ -151,12 +151,6 @@ class DonutImageProcessor(BaseImageProcessor):
         return image

-    def rotate_image(self, *args, **kwargs):
-        logger.info(
-            "rotate_image is deprecated and will be removed in version 4.27. Please use align_long_axis instead."
-        )
-        return self.align_long_axis(*args, **kwargs)
-
     def pad_image(
         self,
         image: np.ndarray,
...
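`rotate_image` simply forwarded all of its arguments to `align_long_axis`, so existing calls can keep their arguments unchanged; a minimal sketch:

```py
# before (removed): image = image_processor.rotate_image(image, size)
# after: same arguments, new name
image = image_processor.align_long_axis(image, size)
```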
@@ -29,7 +29,6 @@ from ...image_transforms import (
     rescale,
     resize,
     to_channel_dimension_format,
-    to_numpy_array,
 )
 from ...image_utils import (
     ChannelDimension,
@@ -38,6 +37,7 @@ from ...image_utils import (
     get_image_size,
     infer_channel_dimension_format,
     is_batched,
+    to_numpy_array,
     valid_images,
 )
 from ...utils import (
@@ -441,24 +441,6 @@ class Mask2FormerImageProcessor(BaseImageProcessor):
             image_processor_dict["size_divisibility"] = kwargs.pop("size_divisibility")
         return super().from_dict(image_processor_dict, **kwargs)

-    @property
-    def size_divisibility(self):
-        warnings.warn(
-            "The `size_divisibility` property is deprecated and will be removed in v4.27. Please use "
-            "`size_divisor` instead.",
-            FutureWarning,
-        )
-        return self.size_divisor
-
-    @property
-    def max_size(self):
-        warnings.warn(
-            "The `max_size` property is deprecated and will be removed in v4.27. Please use size['longest_edge']"
-            " instead.",
-            FutureWarning,
-        )
-        return self.size["longest_edge"]
-
     def resize(
         self,
         image: np.ndarray,
@@ -789,7 +771,6 @@ class Mask2FormerImageProcessor(BaseImageProcessor):
         ignore_index: Optional[int] = None,
         reduce_labels: bool = False,
         return_tensors: Optional[Union[str, TensorType]] = None,
-        **kwargs,
     ):
         """
         Pad images up to the largest image in a batch and create a corresponding `pixel_mask`.
@@ -840,12 +821,6 @@ class Mask2FormerImageProcessor(BaseImageProcessor):
         """
         ignore_index = self.ignore_index if ignore_index is None else ignore_index
         reduce_labels = self.reduce_labels if reduce_labels is None else reduce_labels
-        if "pad_and_return_pixel_mask" in kwargs:
-            warnings.warn(
-                "The `pad_and_return_pixel_mask` argument has no effect and will be removed in v4.27", FutureWarning
-            )
-
         pixel_values_list = [to_numpy_array(pixel_values) for pixel_values in pixel_values_list]
         encoded_inputs = self.pad(pixel_values_list, return_tensors=return_tensors)
...
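With `**kwargs` removed from `encode_inputs`, the old `pad_and_return_pixel_mask` flag (already a no-op) is no longer silently swallowed; passing it would now raise a `TypeError`. A hedged before/after sketch using only the parameters visible in this diff:

```py
# before (the flag had no effect and only triggered a FutureWarning):
# inputs = image_processor.encode_inputs(pixel_values_list, pad_and_return_pixel_mask=True, return_tensors="pt")

# after: drop the flag; padding and the pixel mask are produced unconditionally
inputs = image_processor.encode_inputs(pixel_values_list, return_tensors="pt")
```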
@@ -29,7 +29,6 @@ from ...image_transforms import (
     rescale,
     resize,
     to_channel_dimension_format,
-    to_numpy_array,
 )
 from ...image_utils import (
     ChannelDimension,
@@ -38,6 +37,7 @@ from ...image_utils import (
     get_image_size,
     infer_channel_dimension_format,
     make_list_of_images,
+    to_numpy_array,
     valid_images,
 )
 from ...utils import (
@@ -452,33 +452,6 @@ class MaskFormerImageProcessor(BaseImageProcessor):
             image_processor_dict["size_divisibility"] = kwargs.pop("size_divisibility")
         return super().from_dict(image_processor_dict, **kwargs)

-    @property
-    def size_divisibility(self):
-        warnings.warn(
-            "The `size_divisibility` property is deprecated and will be removed in v4.27. Please use "
-            "`size_divisor` instead.",
-            FutureWarning,
-        )
-        return self.size_divisor
-
-    @property
-    def max_size(self):
-        warnings.warn(
-            "The `max_size` property is deprecated and will be removed in v4.27. Please use size['longest_edge']"
-            " instead.",
-            FutureWarning,
-        )
-        return self.size["longest_edge"]
-
-    @property
-    def reduce_labels(self):
-        warnings.warn(
-            "The `reduce_labels` property is deprecated and will be removed in v4.27. Please use "
-            "`do_reduce_labels` instead.",
-            FutureWarning,
-        )
-        return self.do_reduce_labels
-
     def resize(
         self,
         image: np.ndarray,
@@ -820,7 +793,6 @@ class MaskFormerImageProcessor(BaseImageProcessor):
         ignore_index: Optional[int] = None,
         reduce_labels: bool = False,
         return_tensors: Optional[Union[str, TensorType]] = None,
-        **kwargs,
     ):
         """
         Pad images up to the largest image in a batch and create a corresponding `pixel_mask`.
@@ -869,10 +841,6 @@ class MaskFormerImageProcessor(BaseImageProcessor):
             `annotations` are provided). They identify the labels of `mask_labels`, e.g. the label of
             `mask_labels[i][j]` if `class_labels[i][j]`.
         """
-        if "pad_and_return_pixel_mask" in kwargs:
-            warnings.warn(
-                "The `pad_and_return_pixel_mask` argument has no effect and will be removed in v4.27", FutureWarning
-            )
-
         ignore_index = self.ignore_index if ignore_index is None else ignore_index
         reduce_labels = self.do_reduce_labels if reduce_labels is None else reduce_labels
...
@@ -30,7 +30,6 @@ from ...image_transforms import (
     rescale,
     resize,
     to_channel_dimension_format,
-    to_numpy_array,
 )
 from ...image_utils import (
     ChannelDimension,
@@ -39,6 +38,7 @@ from ...image_utils import (
     get_image_size,
     infer_channel_dimension_format,
     make_list_of_images,
+    to_numpy_array,
     valid_images,
 )
 from ...utils import (
@@ -881,7 +881,6 @@ class OneFormerImageProcessor(BaseImageProcessor):
         ignore_index: Optional[int] = None,
         reduce_labels: bool = False,
         return_tensors: Optional[Union[str, TensorType]] = None,
-        **kwargs,
     ):
         """
         Pad images up to the largest image in a batch and create a corresponding `pixel_mask`.
@@ -935,11 +934,6 @@ class OneFormerImageProcessor(BaseImageProcessor):
             - **text_inputs** -- Optional list of text string entries to be fed to a model (when `annotations` are
               provided). They identify the binary masks present in the image.
         """
-        if "pad_and_return_pixel_mask" in kwargs:
-            warnings.warn(
-                "The `pad_and_return_pixel_mask` argument has no effect and will be removed in v4.27", FutureWarning
-            )
-
         ignore_index = self.ignore_index if ignore_index is None else ignore_index
         reduce_labels = self.do_reduce_labels if reduce_labels is None else reduce_labels
         pixel_values_list = [to_numpy_array(pixel_values) for pixel_values in pixel_values_list]
...
@@ -27,7 +27,6 @@ from ...image_transforms import (
     rescale,
     resize,
     to_channel_dimension_format,
-    to_numpy_array,
 )
 from ...image_utils import (
     OPENAI_CLIP_MEAN,
@@ -36,6 +35,7 @@ from ...image_utils import (
     ImageInput,
     PILImageResampling,
     make_list_of_images,
+    to_numpy_array,
     valid_images,
 )
 from ...utils import TensorType, is_torch_available, logging
...
@@ -116,15 +116,6 @@ class SegformerImageProcessor(BaseImageProcessor):
         self.image_std = image_std if image_std is not None else IMAGENET_DEFAULT_STD
         self.do_reduce_labels = do_reduce_labels

-    @property
-    def reduce_labels(self):
-        warnings.warn(
-            "The `reduce_labels` property is deprecated and will be removed in a v4.27. Please use "
-            "`do_reduce_labels` instead.",
-            FutureWarning,
-        )
-        return self.do_reduce_labels
-
     @classmethod
     def from_dict(cls, image_processor_dict: Dict[str, Any], **kwargs):
         """
...