"docs/source/es/training.mdx" did not exist on "77321481247787c97568c3b9f64b19e22351bab8"
deit.mdx 6.11 KB
Newer Older
1
<!--Copyright 2021 The HuggingFace Team. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
-->

# DeiT

<Tip>

This is a recently introduced model so the API hasn't been tested extensively. There may be some bugs or slight
breaking changes to fix in the future. If you see something strange, file a [Github Issue](https://github.com/huggingface/transformers/issues/new?assignees=&labels=&template=bug-report.md&title).

</Tip>

## Overview

The DeiT model was proposed in [Training data-efficient image transformers & distillation through attention](https://arxiv.org/abs/2012.12877) by Hugo Touvron, Matthieu Cord, Matthijs Douze, Francisco Massa, Alexandre
Sablayrolles, Hervé Jégou. The [Vision Transformer (ViT)](vit) introduced in [Dosovitskiy et al., 2020](https://arxiv.org/abs/2010.11929) has shown that one can match or even outperform existing convolutional neural
networks using a Transformer encoder (BERT-like). However, the ViT models introduced in that paper required training on
expensive infrastructure for multiple weeks, using external data. DeiT (data-efficient image transformers) are more
efficiently trained transformers for image classification, requiring far less data and far less computing resources
compared to the original ViT models.

The abstract from the paper is the following:

*Recently, neural networks purely based on attention were shown to address image understanding tasks such as image
classification. However, these visual transformers are pre-trained with hundreds of millions of images using an
expensive infrastructure, thereby limiting their adoption. In this work, we produce a competitive convolution-free
transformer by training on Imagenet only. We train them on a single computer in less than 3 days. Our reference vision
transformer (86M parameters) achieves top-1 accuracy of 83.1% (single-crop evaluation) on ImageNet with no external
data. More importantly, we introduce a teacher-student strategy specific to transformers. It relies on a distillation
token ensuring that the student learns from the teacher through attention. We show the interest of this token-based
distillation, especially when using a convnet as a teacher. This leads us to report results competitive with convnets
for both Imagenet (where we obtain up to 85.2% accuracy) and when transferring to other tasks. We share our code and
models.*

Tips:

- Compared to ViT, DeiT models use a so-called distillation token to effectively learn from a teacher (which, in the
  DeiT paper, is a ResNet-like model). The distillation token is learned through backpropagation, by interacting with
  the class ([CLS]) and patch tokens through the self-attention layers.
- There are 2 ways to fine-tune distilled models, either (1) in a classic way, by only placing a prediction head on top
  of the final hidden state of the class token and not using the distillation signal, or (2) by placing both a
  prediction head on top of the class token and on top of the distillation token. In that case, the [CLS] prediction
  head is trained using regular cross-entropy between the prediction of the head and the ground-truth label, while the
  distillation prediction head is trained using hard distillation (cross-entropy between the prediction of the
  distillation head and the label predicted by the teacher). At inference time, one takes the average prediction
  between both heads as final prediction. (2) is also called "fine-tuning with distillation", because one relies on a
  teacher that has already been fine-tuned on the downstream dataset. In terms of models, (1) corresponds to
  [`DeiTForImageClassification`] and (2) corresponds to
  [`DeiTForImageClassificationWithTeacher`] (see the hard-distillation sketch after this list).
- Note that the authors also did try soft distillation for (2) (in which case the distillation prediction head is
  trained using KL divergence to match the softmax output of the teacher), but hard distillation gave the best results.
- All released checkpoints were pre-trained and fine-tuned on ImageNet-1k only. No external data was used. This is in
  contrast with the original ViT model, which used external data like the JFT-300M dataset/Imagenet-21k for
  pre-training.
- The authors of DeiT also released more efficiently trained ViT models, which you can directly plug into
  [`ViTModel`] or [`ViTForImageClassification`]. Techniques like data
  augmentation, optimization, and regularization were used in order to simulate training on a much larger dataset
  (while only using ImageNet-1k for pre-training). There are 4 variants available (in 3 different sizes):
  *facebook/deit-tiny-patch16-224*, *facebook/deit-small-patch16-224*, *facebook/deit-base-patch16-224* and
  *facebook/deit-base-patch16-384*. Note that one should use [`DeiTFeatureExtractor`] in order to
  prepare images for the model.
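
Below is a minimal inference sketch covering the last two tips: it prepares an image with [`DeiTFeatureExtractor`] and classifies it with [`DeiTForImageClassificationWithTeacher`], whose `logits` are already the average of the class-token head and the distillation head. The checkpoint name `facebook/deit-base-distilled-patch16-224` and the example image URL are illustrative choices, not prescribed by this page.

```python
from PIL import Image
import requests
import torch
from transformers import DeiTFeatureExtractor, DeiTForImageClassificationWithTeacher

# illustrative example image from the COCO validation set
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

# assumed distilled checkpoint; any other DeiT checkpoint can be swapped in
feature_extractor = DeiTFeatureExtractor.from_pretrained("facebook/deit-base-distilled-patch16-224")
model = DeiTForImageClassificationWithTeacher.from_pretrained("facebook/deit-base-distilled-patch16-224")

# resize, center-crop and normalize the image, then run a forward pass
inputs = feature_extractor(images=image, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# `logits` is the average of the [CLS] head and the distillation head predictions
predicted_class_idx = outputs.logits.argmax(-1).item()
print("Predicted class:", model.config.id2label[predicted_class_idx])
```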
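
The tip on fine-tuning with distillation can likewise be sketched in code. This is purely illustrative: [`DeiTForImageClassificationWithTeacher`] is geared toward inference and does not compute a distillation loss itself, so the objective is written out by hand, assuming `outputs` exposes the separate `cls_logits` and `distillation_logits`, `teacher_logits` come from a convnet teacher already fine-tuned on the same label set, and the equal weighting of the two terms is an assumption rather than something specified on this page.

```python
import torch.nn.functional as F


def hard_distillation_loss(outputs, teacher_logits, labels):
    # [CLS] head: regular cross-entropy against the ground-truth labels
    cls_loss = F.cross_entropy(outputs.cls_logits, labels)
    # distillation head: cross-entropy against the teacher's hard (argmax) predictions
    teacher_labels = teacher_logits.argmax(dim=-1)
    distillation_loss = F.cross_entropy(outputs.distillation_logits, teacher_labels)
    # equal weighting of the two terms is an illustrative choice
    return (cls_loss + distillation_loss) / 2
```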

This model was contributed by [nielsr](https://huggingface.co/nielsr). The TensorFlow version of this model was added by [amyeroberts](https://huggingface.co/amyeroberts).

## DeiTConfig

[[autodoc]] DeiTConfig

## DeiTFeatureExtractor

[[autodoc]] DeiTFeatureExtractor
    - __call__

## DeiTModel

[[autodoc]] DeiTModel
    - forward

## DeiTForMaskedImageModeling

[[autodoc]] DeiTForMaskedImageModeling
    - forward

## DeiTForImageClassification

[[autodoc]] DeiTForImageClassification
    - forward

## DeiTForImageClassificationWithTeacher

[[autodoc]] DeiTForImageClassificationWithTeacher
    - forward

## TFDeiTModel

[[autodoc]] TFDeiTModel
    - call

## TFDeiTForMaskedImageModeling

[[autodoc]] TFDeiTForMaskedImageModeling
    - call

## TFDeiTForImageClassification

[[autodoc]] TFDeiTForImageClassification
    - call

## TFDeiTForImageClassificationWithTeacher

[[autodoc]] TFDeiTForImageClassificationWithTeacher
    - call