<!--Copyright 2021 The HuggingFace Team. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.

⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.

-->

# ViLT

## Overview

The ViLT model was proposed in [ViLT: Vision-and-Language Transformer Without Convolution or Region Supervision](https://arxiv.org/abs/2102.03334)
by Wonjae Kim, Bokyung Son, and Ildoo Kim. ViLT incorporates text embeddings into a Vision Transformer (ViT), giving it a minimal design
for Vision-and-Language Pre-training (VLP).

The abstract from the paper is the following:

*Vision-and-Language Pre-training (VLP) has improved performance on various joint vision-and-language downstream tasks.
Current approaches to VLP heavily rely on image feature extraction processes, most of which involve region supervision
(e.g., object detection) and the convolutional architecture (e.g., ResNet). Although disregarded in the literature, we
find it problematic in terms of both (1) efficiency/speed, that simply extracting input features requires much more
computation than the multimodal interaction steps; and (2) expressive power, as it is upper bounded to the expressive
power of the visual embedder and its predefined visual vocabulary. In this paper, we present a minimal VLP model,
Vision-and-Language Transformer (ViLT), monolithic in the sense that the processing of visual inputs is drastically
simplified to just the same convolution-free manner that we process textual inputs. We show that ViLT is up to tens of
times faster than previous VLP models, yet with competitive or better downstream task performance.*

Tips:

- The quickest way to get started with ViLT is by checking the [example notebooks](https://github.com/NielsRogge/Transformers-Tutorials/tree/master/ViLT)
  (which showcase both inference and fine-tuning on custom data).
- ViLT is a model that takes both `pixel_values` and `input_ids` as input. One can use [`ViltProcessor`] to prepare data for the model.
  This processor wraps an image processor (for the image modality) and a tokenizer (for the language modality) into one. A usage example is shown further below.
- ViLT is trained with images of various sizes: the authors resize the shorter edge of input images to 384 and limit the longer edge to
  under 640 while preserving the aspect ratio. To make batching of images possible, the authors use a `pixel_mask` that indicates
  which pixel values are real and which are padding. [`ViltProcessor`] automatically creates this for you.
- The design of ViLT is very similar to that of a standard Vision Transformer (ViT). The only difference is that the model includes
  additional embedding layers for the language modality.

<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/vilt_architecture.jpg"
alt="drawing" width="600"/>

<small> ViLT architecture. Taken from the <a href="https://arxiv.org/abs/2102.03334">original paper</a>. </small>

This model was contributed by [nielsr](https://huggingface.co/nielsr). The original code can be found [here](https://github.com/dandelin/ViLT).


Note that the PyTorch version of this model requires torch 1.10 or higher.
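
The example below is a minimal sketch of visual question answering with ViLT, assuming the publicly available `dandelin/vilt-b32-finetuned-vqa` checkpoint and a COCO image fetched over HTTP; substitute your own image and question as needed.

```python
import requests
import torch
from PIL import Image
from transformers import ViltProcessor, ViltForQuestionAnswering

# Load an example image and define a question about it
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
text = "How many cats are there?"

processor = ViltProcessor.from_pretrained("dandelin/vilt-b32-finetuned-vqa")
model = ViltForQuestionAnswering.from_pretrained("dandelin/vilt-b32-finetuned-vqa")

# The processor prepares both modalities: pixel_values (plus a pixel_mask when padding
# is applied) for the image, and input_ids/attention_mask for the question
inputs = processor(image, text, return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs)

# The VQA head is a classifier over the VQAv2 answer vocabulary
predicted_idx = outputs.logits.argmax(-1).item()
print("Predicted answer:", model.config.id2label[predicted_idx])
```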

## ViltConfig

[[autodoc]] ViltConfig

## ViltFeatureExtractor

[[autodoc]] ViltFeatureExtractor
    - __call__

## ViltImageProcessor

[[autodoc]] ViltImageProcessor
    - preprocess

## ViltProcessor

[[autodoc]] ViltProcessor
    - __call__

## ViltModel

[[autodoc]] ViltModel
    - forward

## ViltForMaskedLM

[[autodoc]] ViltForMaskedLM
    - forward

## ViltForQuestionAnswering

[[autodoc]] ViltForQuestionAnswering
    - forward

## ViltForImagesAndTextClassification

[[autodoc]] ViltForImagesAndTextClassification
    - forward

## ViltForImageAndTextRetrieval

[[autodoc]] ViltForImageAndTextRetrieval
    - forward
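
As a sketch of how the retrieval head can be used, the example below scores a single image against a set of candidate captions, assuming the `dandelin/vilt-b32-finetuned-coco` checkpoint; the logit returned for each image-text pair is an unnormalized matching score.

```python
import requests
import torch
from PIL import Image
from transformers import ViltProcessor, ViltForImageAndTextRetrieval

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
texts = ["Two cats sleeping on a couch", "A football player scoring a goal"]

processor = ViltProcessor.from_pretrained("dandelin/vilt-b32-finetuned-coco")
model = ViltForImageAndTextRetrieval.from_pretrained("dandelin/vilt-b32-finetuned-coco")

# Score each candidate caption against the image; higher logits mean a better match
scores = {}
for text in texts:
    inputs = processor(image, text, return_tensors="pt")
    with torch.no_grad():
        outputs = model(**inputs)
    scores[text] = outputs.logits[0, :].item()

print(scores)
```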

## ViltForTokenClassification

[[autodoc]] ViltForTokenClassification
    - forward