Commit 386ef34e (unverified) · chenpangpang/transformers
Authored Apr 11, 2024 by NielsRogge; committed by GitHub on Apr 11, 2024
Parent commit: e516d1b1

[Processor classes] Update docs (#29698)

Update docs
Showing 13 changed files with 13 additions and 26 deletions.
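
All 13 files receive the same one-line docstring fix: the claim that NumPy/PyTorch images must be channels-first (C, H, W) is replaced by a note that both channels-first and channels-last layouts are accepted. As a minimal sketch (not part of the commit; checkpoint and random data chosen purely for illustration), CLIPProcessor handles either layout, since the image processor infers the channel dimension:

import numpy as np
from transformers import CLIPProcessor

processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

# Random stand-in image; a real photo would normally be used here.
rng = np.random.default_rng(0)
image_hwc = rng.integers(0, 256, size=(224, 224, 3), dtype=np.uint8)  # channels-last (H, W, C)
image_chw = image_hwc.transpose(2, 0, 1)                              # channels-first (C, H, W)

out_hwc = processor(text=["a photo of a cat"], images=image_hwc, return_tensors="pt")
out_chw = processor(text=["a photo of a cat"], images=image_chw, return_tensors="pt")

# Both layouts are normalized to the same pixel_values shape: (1, 3, 224, 224).
print(out_hwc["pixel_values"].shape, out_chw["pixel_values"].shape)
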
src/transformers/models/align/processing_align.py (+1 -2)
src/transformers/models/altclip/processing_altclip.py (+1 -2)
src/transformers/models/chinese_clip/processing_chinese_clip.py (+1 -2)
src/transformers/models/clip/processing_clip.py (+1 -2)
src/transformers/models/clipseg/processing_clipseg.py (+1 -2)
src/transformers/models/fuyu/processing_fuyu.py (+1 -2)
src/transformers/models/git/processing_git.py (+1 -2)
src/transformers/models/llava/processing_llava.py (+1 -2)
src/transformers/models/oneformer/processing_oneformer.py (+1 -2)
src/transformers/models/owlv2/processing_owlv2.py (+1 -2)
src/transformers/models/owlvit/processing_owlvit.py (+1 -2)
src/transformers/models/siglip/processing_siglip.py (+1 -2)
src/transformers/models/vision_text_dual_encoder/processing_vision_text_dual_encoder.py (+1 -2)
src/transformers/models/align/processing_align.py

@@ -57,8 +57,7 @@ class AlignProcessor(ProcessorMixin):
     `is_split_into_words=True` (to lift the ambiguity with a batch of sequences).
 images (`PIL.Image.Image`, `np.ndarray`, `torch.Tensor`, `List[PIL.Image.Image]`, `List[np.ndarray]`, `List[torch.Tensor]`):
     The image or batch of images to be prepared. Each image can be a PIL image, NumPy array or PyTorch
-    tensor. In case of a NumPy array/PyTorch tensor, each image should be of shape (C, H, W), where C is a
-    number of channels, H and W are image height and width.
+    tensor. Both channels-first and channels-last formats are supported.
 padding (`bool`, `str` or [`~utils.PaddingStrategy`], *optional*, defaults to `max_length`):
     Activates and controls padding for tokenization of input text. Choose between [`True` or `'longest'`,
     `'max_length'`, `False` or `'do_not_pad'`]

src/transformers/models/altclip/processing_altclip.py

@@ -73,8 +73,7 @@ class AltCLIPProcessor(ProcessorMixin):
     `is_split_into_words=True` (to lift the ambiguity with a batch of sequences).
 images (`PIL.Image.Image`, `np.ndarray`, `torch.Tensor`, `List[PIL.Image.Image]`, `List[np.ndarray]`, `List[torch.Tensor]`):
     The image or batch of images to be prepared. Each image can be a PIL image, NumPy array or PyTorch
-    tensor. In case of a NumPy array/PyTorch tensor, each image should be of shape (C, H, W), where C is a
-    number of channels, H and W are image height and width.
+    tensor. Both channels-first and channels-last formats are supported.
 return_tensors (`str` or [`~utils.TensorType`], *optional*):
     If set, will return tensors of a particular framework. Acceptable values are:

src/transformers/models/chinese_clip/processing_chinese_clip.py

@@ -75,8 +75,7 @@ class ChineseCLIPProcessor(ProcessorMixin):
     `is_split_into_words=True` (to lift the ambiguity with a batch of sequences).
 images (`PIL.Image.Image`, `np.ndarray`, `torch.Tensor`, `List[PIL.Image.Image]`, `List[np.ndarray]`, `List[torch.Tensor]`):
     The image or batch of images to be prepared. Each image can be a PIL image, NumPy array or PyTorch
-    tensor. In case of a NumPy array/PyTorch tensor, each image should be of shape (C, H, W), where C is a
-    number of channels, H and W are image height and width.
+    tensor. Both channels-first and channels-last formats are supported.
 return_tensors (`str` or [`~utils.TensorType`], *optional*):
     If set, will return tensors of a particular framework. Acceptable values are:

src/transformers/models/clip/processing_clip.py

@@ -73,8 +73,7 @@ class CLIPProcessor(ProcessorMixin):
     `is_split_into_words=True` (to lift the ambiguity with a batch of sequences).
 images (`PIL.Image.Image`, `np.ndarray`, `torch.Tensor`, `List[PIL.Image.Image]`, `List[np.ndarray]`, `List[torch.Tensor]`):
     The image or batch of images to be prepared. Each image can be a PIL image, NumPy array or PyTorch
-    tensor. In case of a NumPy array/PyTorch tensor, each image should be of shape (C, H, W), where C is a
-    number of channels, H and W are image height and width.
+    tensor. Both channels-first and channels-last formats are supported.
 return_tensors (`str` or [`~utils.TensorType`], *optional*):
     If set, will return tensors of a particular framework. Acceptable values are:

src/transformers/models/clipseg/processing_clipseg.py

@@ -73,8 +73,7 @@ class CLIPSegProcessor(ProcessorMixin):
     `is_split_into_words=True` (to lift the ambiguity with a batch of sequences).
 images (`PIL.Image.Image`, `np.ndarray`, `torch.Tensor`, `List[PIL.Image.Image]`, `List[np.ndarray]`, `List[torch.Tensor]`):
     The image or batch of images to be prepared. Each image can be a PIL image, NumPy array or PyTorch
-    tensor. In case of a NumPy array/PyTorch tensor, each image should be of shape (C, H, W), where C is a
-    number of channels, H and W are image height and width.
+    tensor. Both channels-first and channels-last formats are supported.
 visual_prompt (`PIL.Image.Image`, `np.ndarray`, `torch.Tensor`, `List[PIL.Image.Image]`, `List[np.ndarray]`, `List[torch.Tensor]`):
     The visual prompt image or batch of images to be prepared. Each visual prompt image can be a PIL image,
     NumPy array or PyTorch tensor. In case of a NumPy array/PyTorch tensor, each image should be of shape

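The surrounding context here also documents CLIPSeg's `visual_prompt` argument, which prepares a conditioning image for image-guided segmentation. An illustrative sketch (checkpoint and random data are assumptions, not from the commit), showing that the prompt and input image can use different channel layouts:

import numpy as np
from transformers import CLIPSegProcessor

processor = CLIPSegProcessor.from_pretrained("CIDAS/clipseg-rd64-refined")

rng = np.random.default_rng(0)
image = rng.integers(0, 256, size=(352, 352, 3), dtype=np.uint8)   # channels-last
prompt = rng.integers(0, 256, size=(3, 352, 352), dtype=np.uint8)  # channels-first

# Condition on a visual prompt instead of a text prompt.
inputs = processor(images=image, visual_prompt=prompt, return_tensors="pt")
print(inputs["pixel_values"].shape, inputs["conditional_pixel_values"].shape)
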
src/transformers/models/fuyu/processing_fuyu.py

@@ -482,8 +482,7 @@ class FuyuProcessor(ProcessorMixin):
     `is_split_into_words=True` (to lift the ambiguity with a batch of sequences).
 images (`PIL.Image.Image`, `List[PIL.Image.Image]`):
     The image or batch of images to be prepared. Each image can be a PIL image, NumPy array or PyTorch
-    tensor. In case of a NumPy array/PyTorch tensor, each image should be of shape (C, H, W), where C is a
-    number of channels, H and W are image height and width.
+    tensor. Both channels-first and channels-last formats are supported.
 Returns:
     [`FuyuBatchEncoding`]: A [`FuyuBatchEncoding`] with the following fields:

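Fuyu's docstring is the only one in this set that advertises PIL images only, and the call returns a [`FuyuBatchEncoding`]. A hedged sketch of preparing inputs (checkpoint chosen for illustration; only the processor is downloaded, not the model weights):

import numpy as np
from PIL import Image
from transformers import FuyuProcessor

processor = FuyuProcessor.from_pretrained("adept/fuyu-8b")

# Random stand-in for a real image; Fuyu expects PIL input per the docstring.
image = Image.fromarray(
    np.random.default_rng(0).integers(0, 256, size=(224, 224, 3), dtype=np.uint8)
)
inputs = processor(text="Generate a coco-style caption.\n", images=image, return_tensors="pt")
print(list(inputs.keys()))  # inspect the returned FuyuBatchEncoding fields
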
src/transformers/models/git/processing_git.py

@@ -57,8 +57,7 @@ class GitProcessor(ProcessorMixin):
     `is_split_into_words=True` (to lift the ambiguity with a batch of sequences).
 images (`PIL.Image.Image`, `np.ndarray`, `torch.Tensor`, `List[PIL.Image.Image]`, `List[np.ndarray]`, `List[torch.Tensor]`):
     The image or batch of images to be prepared. Each image can be a PIL image, NumPy array or PyTorch
-    tensor. In case of a NumPy array/PyTorch tensor, each image should be of shape (C, H, W), where C is a
-    number of channels, H and W are image height and width.
+    tensor. Both channels-first and channels-last formats are supported.
 return_tensors (`str` or [`~utils.TensorType`], *optional*):
     If set, will return tensors of a particular framework. Acceptable values are:

src/transformers/models/llava/processing_llava.py

@@ -70,8 +70,7 @@ class LlavaProcessor(ProcessorMixin):
     `is_split_into_words=True` (to lift the ambiguity with a batch of sequences).
 images (`PIL.Image.Image`, `np.ndarray`, `torch.Tensor`, `List[PIL.Image.Image]`, `List[np.ndarray]`, `List[torch.Tensor]`):
     The image or batch of images to be prepared. Each image can be a PIL image, NumPy array or PyTorch
-    tensor. In case of a NumPy array/PyTorch tensor, each image should be of shape (C, H, W), where C is a
-    number of channels, H and W are image height and width.
+    tensor. Both channels-first and channels-last formats are supported.
 padding (`bool`, `str` or [`~utils.PaddingStrategy`], *optional*, defaults to `False`):
     Select a strategy to pad the returned sequences (according to the model's padding side and padding
     index) among:

src/transformers/models/oneformer/processing_oneformer.py

@@ -91,8 +91,7 @@ class OneFormerProcessor(ProcessorMixin):
 images (`PIL.Image.Image`, `np.ndarray`, `torch.Tensor`, `List[PIL.Image.Image]`, `List[np.ndarray]`,
 `List[torch.Tensor]`):
     The image or batch of images to be prepared. Each image can be a PIL image, NumPy array or PyTorch
-    tensor. In case of a NumPy array/PyTorch tensor, each image should be of shape (C, H, W), where C is a
-    number of channels, H and W are image height and width.
+    tensor. Both channels-first and channels-last formats are supported.
 segmentation_maps (`ImageInput`, *optional*):
     The corresponding semantic segmentation maps with the pixel-wise annotations.

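OneFormer's processor pairs images with task strings (and, for training, the `segmentation_maps` shown in the context above). A small sketch (checkpoint and random data chosen for illustration):

import numpy as np
from transformers import OneFormerProcessor

processor = OneFormerProcessor.from_pretrained("shi-labs/oneformer_ade20k_swin_tiny")

rng = np.random.default_rng(0)
image = rng.integers(0, 256, size=(512, 512, 3), dtype=np.uint8)  # either channel layout works

# One task string per image; the processor tokenizes it into `task_inputs`.
inputs = processor(images=image, task_inputs=["semantic"], return_tensors="pt")
print(inputs["pixel_values"].shape, inputs["task_inputs"].shape)
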
src/transformers/models/owlv2/processing_owlv2.py

@@ -62,8 +62,7 @@ class Owlv2Processor(ProcessorMixin):
 images (`PIL.Image.Image`, `np.ndarray`, `torch.Tensor`, `List[PIL.Image.Image]`, `List[np.ndarray]`,
 `List[torch.Tensor]`):
     The image or batch of images to be prepared. Each image can be a PIL image, NumPy array or PyTorch
-    tensor. In case of a NumPy array/PyTorch tensor, each image should be of shape (C, H, W), where C is a
-    number of channels, H and W are image height and width.
+    tensor. Both channels-first and channels-last formats are supported.
 query_images (`PIL.Image.Image`, `np.ndarray`, `torch.Tensor`, `List[PIL.Image.Image]`, `List[np.ndarray]`, `List[torch.Tensor]`):
     The query image to be prepared, one query image is expected per target image to be queried. Each image
     can be a PIL image, NumPy array or PyTorch tensor. In case of a NumPy array/PyTorch tensor, each image

src/transformers/models/owlvit/processing_owlvit.py

@@ -77,8 +77,7 @@ class OwlViTProcessor(ProcessorMixin):
 images (`PIL.Image.Image`, `np.ndarray`, `torch.Tensor`, `List[PIL.Image.Image]`, `List[np.ndarray]`,
 `List[torch.Tensor]`):
     The image or batch of images to be prepared. Each image can be a PIL image, NumPy array or PyTorch
-    tensor. In case of a NumPy array/PyTorch tensor, each image should be of shape (C, H, W), where C is a
-    number of channels, H and W are image height and width.
+    tensor. Both channels-first and channels-last formats are supported.
 query_images (`PIL.Image.Image`, `np.ndarray`, `torch.Tensor`, `List[PIL.Image.Image]`, `List[np.ndarray]`, `List[torch.Tensor]`):
     The query image to be prepared, one query image is expected per target image to be queried. Each image
     can be a PIL image, NumPy array or PyTorch tensor. In case of a NumPy array/PyTorch tensor, each image

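For both OWLv2 and OWL-ViT, the context documents `query_images`, used for image-guided (one-shot) detection: one query image per target image, and per this commit either channel layout works for array/tensor inputs. A hedged sketch (checkpoint and random data chosen for illustration):

import numpy as np
from transformers import Owlv2Processor

processor = Owlv2Processor.from_pretrained("google/owlv2-base-patch16-ensemble")

rng = np.random.default_rng(0)
image = rng.integers(0, 256, size=(768, 768, 3), dtype=np.uint8)  # target image, channels-last
query = rng.integers(0, 256, size=(3, 768, 768), dtype=np.uint8)  # query image, channels-first

inputs = processor(images=image, query_images=query, return_tensors="pt")
print(inputs["pixel_values"].shape, inputs["query_pixel_values"].shape)
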
src/transformers/models/siglip/processing_siglip.py

@@ -69,8 +69,7 @@ class SiglipProcessor(ProcessorMixin):
     `is_split_into_words=True` (to lift the ambiguity with a batch of sequences).
 images (`PIL.Image.Image`, `np.ndarray`, `torch.Tensor`, `List[PIL.Image.Image]`, `List[np.ndarray]`, `List[torch.Tensor]`):
     The image or batch of images to be prepared. Each image can be a PIL image, NumPy array or PyTorch
-    tensor. In case of a NumPy array/PyTorch tensor, each image should be of shape (C, H, W), where C is a
-    number of channels, H and W are image height and width.
+    tensor. Both channels-first and channels-last formats are supported.
 padding (`bool`, `str` or [`~utils.PaddingStrategy`], *optional*, defaults to `False`):
     Select a strategy to pad the returned sequences (according to the model's padding side and padding
     index) among:

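SigLIP's hunk shows the `padding` argument defaulting to `False`; in practice SigLIP checkpoints are usually called with `padding="max_length"`, matching how the text tower was trained. A hedged sketch (checkpoint and random data chosen for illustration):

import numpy as np
from transformers import SiglipProcessor

processor = SiglipProcessor.from_pretrained("google/siglip-base-patch16-224")

image = np.random.default_rng(0).integers(0, 256, size=(224, 224, 3), dtype=np.uint8)
texts = ["a photo of a cat", "a photo of a dog"]

inputs = processor(text=texts, images=image, padding="max_length", return_tensors="pt")
print(inputs["input_ids"].shape)  # padded to the tokenizer's model_max_length
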
src/transformers/models/vision_text_dual_encoder/processing_vision_text_dual_encoder.py

@@ -76,8 +76,7 @@ class VisionTextDualEncoderProcessor(ProcessorMixin):
     `is_split_into_words=True` (to lift the ambiguity with a batch of sequences).
 images (`PIL.Image.Image`, `np.ndarray`, `torch.Tensor`, `List[PIL.Image.Image]`, `List[np.ndarray]`, `List[torch.Tensor]`):
     The image or batch of images to be prepared. Each image can be a PIL image, NumPy array or PyTorch
-    tensor. In case of a NumPy array/PyTorch tensor, each image should be of shape (C, H, W), where C is a
-    number of channels, H and W are image height and width.
+    tensor. Both channels-first and channels-last formats are supported.
 return_tensors (`str` or [`~utils.TensorType`], *optional*):
     If set, will return tensors of a particular framework. Acceptable values are: