<!--Copyright 2021 The HuggingFace Team. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.

⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.

-->

# VisionTextDualEncoder

## Overview

The [`VisionTextDualEncoderModel`] can be used to initialize a vision-text dual encoder model with
any pretrained vision autoencoding model as the vision encoder (*e.g.* [ViT](vit), [BEiT](beit), [DeiT](deit)) and any pretrained text autoencoding model as the text encoder (*e.g.* [RoBERTa](roberta), [BERT](bert)). A projection layer is added on top of each encoder to project the output embeddings
to a shared latent space. Because the projection layers are randomly initialized, the model should be fine-tuned on a
downstream task. This model can be used to align the vision-text embeddings with CLIP-like contrastive image-text
training and can then be used for zero-shot vision tasks such as image classification or retrieval.
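
For instance, a dual encoder can be assembled from two existing checkpoints via [`VisionTextDualEncoderModel.from_vision_text_pretrained`]. A minimal sketch, where the `google/vit-base-patch16-224` and `bert-base-uncased` checkpoints are illustrative choices:

```python
from transformers import (
    AutoImageProcessor,
    AutoTokenizer,
    VisionTextDualEncoderModel,
    VisionTextDualEncoderProcessor,
)

# Load a pretrained vision encoder and a pretrained text encoder; the
# projection layers on top of them are randomly initialized.
model = VisionTextDualEncoderModel.from_vision_text_pretrained(
    "google/vit-base-patch16-224", "bert-base-uncased"
)

# Bundle the matching image processor and tokenizer into a single processor.
image_processor = AutoImageProcessor.from_pretrained("google/vit-base-patch16-224")
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
processor = VisionTextDualEncoderProcessor(image_processor, tokenizer)
```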

In [LiT: Zero-Shot Transfer with Locked-image Text Tuning](https://arxiv.org/abs/2111.07991) it is shown how
leveraging pre-trained (locked/frozen) image and text models for contrastive learning yields significant improvement on
new zero-shot vision tasks such as image classification or retrieval.
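
After contrastive fine-tuning, the model scores image-text pairs much like CLIP. A minimal sketch of zero-shot scoring, reusing the `model` and `processor` from the sketch above; the COCO image URL is only a convenient test sample, and the scores are only meaningful once the projection layers have been trained:

```python
import requests
import torch
from PIL import Image

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

inputs = processor(
    text=["a photo of a cat", "a photo of a dog"],
    images=image,
    return_tensors="pt",
    padding=True,
)
with torch.no_grad():
    outputs = model(**inputs)

# logits_per_image holds the image-text similarity scores; a softmax over the
# text candidates turns them into zero-shot label probabilities.
probs = outputs.logits_per_image.softmax(dim=1)
```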

## VisionTextDualEncoderConfig

[[autodoc]] VisionTextDualEncoderConfig
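
A minimal sketch of composing a dual-encoder config from two sub-configs with [`VisionTextDualEncoderConfig.from_vision_text_configs`]; the fresh `ViTConfig` and `BertConfig` instances and the `projection_dim` value are illustrative choices:

```python
from transformers import (
    BertConfig,
    ViTConfig,
    VisionTextDualEncoderConfig,
    VisionTextDualEncoderModel,
)

# Combine a vision config and a text config; projection_dim sets the size of
# the shared latent space the two encoders are projected into.
config = VisionTextDualEncoderConfig.from_vision_text_configs(
    ViTConfig(), BertConfig(), projection_dim=512
)
model = VisionTextDualEncoderModel(config=config)  # randomly initialized
```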

## VisionTextDualEncoderProcessor

[[autodoc]] VisionTextDualEncoderProcessor

<frameworkcontent>
<pt>

## VisionTextDualEncoderModel

[[autodoc]] VisionTextDualEncoderModel
    - forward

</pt>
<tf>

## TFVisionTextDualEncoderModel

[[autodoc]] TFVisionTextDualEncoderModel
    - call

</tf>
<jax>

## FlaxVisionTextDualEncoderModel

[[autodoc]] FlaxVisionTextDualEncoderModel
    - __call__

</jax>
</frameworkcontent>