<!--Copyright 2021 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.

⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.

-->
# DeiT
## Overview
The DeiT model was proposed in [Training data-efficient image transformers & distillation through attention](https://arxiv.org/abs/2012.12877) by Hugo Touvron, Matthieu Cord, Matthijs Douze, Francisco Massa, Alexandre
Sablayrolles, Hervé Jégou. The [Vision Transformer (ViT)](vit) introduced in [Dosovitskiy et al., 2020](https://arxiv.org/abs/2010.11929) has shown that one can match or even outperform existing convolutional neural
networks using a Transformer encoder (BERT-like). However, the ViT models introduced in that paper required training on
expensive infrastructure for multiple weeks, using external data. DeiT (data-efficient image transformers) are more
efficiently trained transformers for image classification, requiring far less data and far less computing resources
compared to the original ViT models.

The abstract from the paper is the following:

*Recently, neural networks purely based on attention were shown to address image understanding tasks such as image
classification. However, these visual transformers are pre-trained with hundreds of millions of images using an
expensive infrastructure, thereby limiting their adoption. In this work, we produce a competitive convolution-free
transformer by training on Imagenet only. We train them on a single computer in less than 3 days. Our reference vision
transformer (86M parameters) achieves top-1 accuracy of 83.1% (single-crop evaluation) on ImageNet with no external
data. More importantly, we introduce a teacher-student strategy specific to transformers. It relies on a distillation
token ensuring that the student learns from the teacher through attention. We show the interest of this token-based
distillation, especially when using a convnet as a teacher. This leads us to report results competitive with convnets
for both Imagenet (where we obtain up to 85.2% accuracy) and when transferring to other tasks. We share our code and
models.*

This model was contributed by [nielsr](https://huggingface.co/nielsr). The TensorFlow version of this model was added by [amyeroberts](https://huggingface.co/amyeroberts).

## Usage tips

- Compared to ViT, DeiT models use a so-called distillation token to effectively learn from a teacher (which, in the
  DeiT paper, is a ResNet-like model). The distillation token is learned through backpropagation, by interacting with
  the class ([CLS]) and patch tokens through the self-attention layers.
- There are 2 ways to fine-tune distilled models, either (1) in a classic way, by only placing a prediction head on top
  of the final hidden state of the class token and not using the distillation signal, or (2) by placing a prediction
  head on top of both the class token and the distillation token. In that case, the [CLS] prediction
  head is trained using regular cross-entropy between the prediction of the head and the ground-truth label, while the
  distillation prediction head is trained using hard distillation (cross-entropy between the prediction of the
  distillation head and the label predicted by the teacher). At inference time, one takes the average prediction
  between both heads as the final prediction. (2) is also called "fine-tuning with distillation", because one relies on a
  teacher that has already been fine-tuned on the downstream dataset. In terms of models, (1) corresponds to
  [`DeiTForImageClassification`] and (2) corresponds to
  [`DeiTForImageClassificationWithTeacher`].
- Note that the authors also did try soft distillation for (2) (in which case the distillation prediction head is
  trained using KL divergence to match the softmax output of the teacher), but hard distillation gave the best results.
- All released checkpoints were pre-trained and fine-tuned on ImageNet-1k only. No external data was used. This is in
  contrast with the original ViT model, which used external data like the JFT-300M dataset/Imagenet-21k for
  pre-training.
- The authors of DeiT also released more efficiently trained ViT models, which you can directly plug into
  [`ViTModel`] or [`ViTForImageClassification`]. Techniques like data
  augmentation, optimization, and regularization were used in order to simulate training on a much larger dataset
  (while only using ImageNet-1k for pre-training). There are 4 variants available (in 3 different sizes):
  *facebook/deit-tiny-patch16-224*, *facebook/deit-small-patch16-224*, *facebook/deit-base-patch16-224* and
  *facebook/deit-base-patch16-384*. Note that one should use [`DeiTImageProcessor`] in order to
  prepare images for the model (see the inference sketch below this list).

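Below is a minimal inference sketch for the tips above. It assumes the `facebook/deit-base-distilled-patch16-224` checkpoint and a publicly hosted example image; any RGB image or DeiT checkpoint can be substituted.

```py
import torch
import requests
from PIL import Image
from transformers import DeiTImageProcessor, DeiTForImageClassificationWithTeacher

# any RGB image works; this COCO image URL is only used for illustration
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

# DeiTImageProcessor resizes and normalizes the image the way the model expects
processor = DeiTImageProcessor.from_pretrained("facebook/deit-base-distilled-patch16-224")
# option (2) above: the logits are the average of the [CLS] head and distillation head predictions
model = DeiTForImageClassificationWithTeacher.from_pretrained("facebook/deit-base-distilled-patch16-224")

inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

predicted_class_idx = logits.argmax(-1).item()
print("Predicted class:", model.config.id2label[predicted_class_idx])
```

The non-distilled variants listed above can be loaded the same way through [`ViTForImageClassification`] (for example, `ViTForImageClassification.from_pretrained("facebook/deit-base-patch16-224")`), while still preparing the images with [`DeiTImageProcessor`].
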
### Using Scaled Dot Product Attention (SDPA)

PyTorch includes a native scaled dot-product attention (SDPA) operator as part of `torch.nn.functional`. This function 
encompasses several implementations that can be applied depending on the inputs and the hardware in use. See the 
[official documentation](https://pytorch.org/docs/stable/generated/torch.nn.functional.scaled_dot_product_attention.html) 
or the [GPU Inference](https://huggingface.co/docs/transformers/main/en/perf_infer_gpu_one#pytorch-scaled-dot-product-attention)
page for more information.

SDPA is used by default for `torch>=2.1.1` when an implementation is available, but you may also set 
`attn_implementation="sdpa"` in `from_pretrained()` to explicitly request SDPA to be used.

```py
import torch
from transformers import DeiTForImageClassification

model = DeiTForImageClassification.from_pretrained("facebook/deit-base-distilled-patch16-224", attn_implementation="sdpa", torch_dtype=torch.float16)
...
```

For the best speedups, we recommend loading the model in half-precision (e.g. `torch.float16` or `torch.bfloat16`).

On a local benchmark (A100-40GB, PyTorch 2.3.0, OS Ubuntu 22.04) with `float32` and the `facebook/deit-base-distilled-patch16-224` model, we saw the following speedups during inference.

|   Batch size |   Average inference time (ms), eager mode |   Average inference time (ms), SDPA |   Speedup, SDPA / eager (x) |
|--------------|-------------------------------------------|-------------------------------------------|------------------------------|
|            1 |                                         8 |                                         6 |                      1.33 |
|            2 |                                         9 |                                         6 |                      1.5  |
|            4 |                                         9 |                                         6 |                      1.5  |
|            8 |                                         8 |                                         6 |                      1.33 |

## Resources

A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with DeiT.

<PipelineTag pipeline="image-classification"/>

- [`DeiTForImageClassification`] is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/pytorch/image-classification) and [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/image_classification.ipynb).
- See also: [Image classification task guide](../tasks/image_classification)

Besides that:

- [`DeiTForMaskedImageModeling`] is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/pytorch/image-pretraining).

If you're interested in submitting a resource to be included here, please feel free to open a Pull Request and we'll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource.
## DeiTConfig
[[autodoc]] DeiTConfig
## DeiTFeatureExtractor
[[autodoc]] DeiTFeatureExtractor
    - __call__
## DeiTImageProcessor

[[autodoc]] DeiTImageProcessor
    - preprocess

<frameworkcontent>
<pt>

## DeiTModel
[[autodoc]] DeiTModel
    - forward
## DeiTForMaskedImageModeling

[[autodoc]] DeiTForMaskedImageModeling
    - forward

## DeiTForImageClassification
[[autodoc]] DeiTForImageClassification
    - forward
## DeiTForImageClassificationWithTeacher
[[autodoc]] DeiTForImageClassificationWithTeacher
    - forward
</pt>
<tf>

## TFDeiTModel

[[autodoc]] TFDeiTModel
    - call

## TFDeiTForMaskedImageModeling

[[autodoc]] TFDeiTForMaskedImageModeling
    - call

## TFDeiTForImageClassification

[[autodoc]] TFDeiTForImageClassification
    - call

## TFDeiTForImageClassificationWithTeacher

[[autodoc]] TFDeiTForImageClassificationWithTeacher
    - call

</tf>
</frameworkcontent>