.. 
    Copyright 2020 The HuggingFace Team. All rights reserved.

    Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
    the License. You may obtain a copy of the License at

        http://www.apache.org/licenses/LICENSE-2.0

    Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
    an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
    specific language governing permissions and limitations under the License.

T5
-----------------------------------------------------------------------------------------------------------------------

**DISCLAIMER:** This model is still a work in progress; if you see something strange, file a `GitHub Issue
<https://github.com/huggingface/transformers/issues/new?assignees=&labels=&template=bug-report.md&title>`__.

Overview
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

The T5 model was presented in `Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer
<https://arxiv.org/pdf/1910.10683.pdf>`_ by Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang,
Michael Matena, Yanqi Zhou, Wei Li, Peter J. Liu.

The abstract from the paper is the following:

*Transfer learning, where a model is first pre-trained on a data-rich task before being fine-tuned on a downstream
task, has emerged as a powerful technique in natural language processing (NLP). The effectiveness of transfer learning
has given rise to a diversity of approaches, methodology, and practice. In this paper, we explore the landscape of
transfer learning techniques for NLP by introducing a unified framework that converts every language problem into a
text-to-text format. Our systematic study compares pretraining objectives, architectures, unlabeled datasets, transfer
approaches, and other factors on dozens of language understanding tasks. By combining the insights from our exploration
with scale and our new "Colossal Clean Crawled Corpus", we achieve state-of-the-art results on many benchmarks covering
summarization, question answering, text classification, and more. To facilitate future work on transfer learning for
NLP, we release our dataset, pre-trained models, and code.*

Tips:

- T5 is an encoder-decoder model pre-trained on a multi-task mixture of unsupervised and supervised tasks and for which
  each task is converted into a text-to-text format. T5 works well on a variety of tasks out-of-the-box by prepending a
  different prefix to the input corresponding to each task, e.g., for translation: *translate English to German: ...*,
  for summarization: *summarize: ...*.

  For more information about which prefix to use, it is easiest to look into Appendix D of the `paper
  <https://arxiv.org/pdf/1910.10683.pdf>`__.
- For sequence-to-sequence generation, it is recommended to use :obj:`T5ForConditionalGeneration.generate()`, as in the
  sketch after this list. This method takes care of feeding the encoded input via cross-attention layers to the decoder
  and auto-regressively generates the decoder output.
- T5 uses relative scalar embeddings. Encoder input padding can be done on the left and on the right.
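
For example, here is a minimal sketch of prefix-based generation, assuming the public :obj:`t5-small` checkpoint (any
T5 checkpoint works the same way):

.. code-block::

    from transformers import T5Tokenizer, T5ForConditionalGeneration

    tokenizer = T5Tokenizer.from_pretrained('t5-small')
    model = T5ForConditionalGeneration.from_pretrained('t5-small')

    # the task prefix tells the model which of its pre-training tasks to perform
    input_ids = tokenizer('translate English to German: The house is wonderful.', return_tensors='pt').input_ids
    outputs = model.generate(input_ids)
    print(tokenizer.decode(outputs[0], skip_special_tokens=True))  # e.g. "Das Haus ist wunderbar."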

This model was contributed by `thomwolf <https://huggingface.co/thomwolf>`__. The original code can be found `here
<https://github.com/google-research/text-to-text-transfer-transformer>`__.

Training
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

T5 is an encoder-decoder model that converts all NLP problems into a text-to-text format. It is trained using teacher
forcing, which means that for training we always need an input sequence and a corresponding target sequence. The input
sequence is fed to the model using :obj:`input_ids`. The target sequence is shifted to the right, i.e., prepended by a
start-sequence token, and fed to the decoder using the :obj:`decoder_input_ids`. In teacher-forcing style, the target
sequence is then followed by the EOS token and corresponds to the :obj:`labels`. The PAD token is hereby used as the
start-sequence token. T5 can be trained / fine-tuned both in a supervised and an unsupervised fashion.
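
The shift to the right is handled internally when only :obj:`labels` are passed, so the sketch below is purely
illustrative of that relationship (the token ids are hypothetical; 0 and 1 are T5's default PAD and EOS ids):

.. code-block::

    import torch

    labels = torch.tensor([[42, 17, 8, 1]])  # target token ids, ending in EOS (id 1)
    # decoder inputs are the labels shifted right, with PAD (id 0) as the start-sequence token
    decoder_start = torch.zeros((labels.shape[0], 1), dtype=torch.long)
    decoder_input_ids = torch.cat([decoder_start, labels[:, :-1]], dim=-1)  # [[0, 42, 17, 8]]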

- Unsupervised denoising training

  In this setup, spans of the input sequence are masked by so-called sentinel tokens (*a.k.a* unique mask tokens) and
  the output sequence is formed as a concatenation of the same sentinel tokens and the *real* masked tokens. Each
  sentinel token represents a unique mask token for this sentence, numbered from :obj:`<extra_id_0>` and
  :obj:`<extra_id_1>` up to :obj:`<extra_id_99>`. By default, 100 sentinel tokens are available in
  :class:`~transformers.T5Tokenizer`.

  For instance, the sentence "The cute dog walks in the park" with the masks put on "cute dog" and "the" should be
  processed as follows:

.. code-block::

    from transformers import T5Tokenizer, T5ForConditionalGeneration

    tokenizer = T5Tokenizer.from_pretrained('t5-small')
    model = T5ForConditionalGeneration.from_pretrained('t5-small')

    input_ids = tokenizer('The <extra_id_0> walks in <extra_id_1> park', return_tensors='pt').input_ids
    labels = tokenizer('<extra_id_0> cute dog <extra_id_1> the <extra_id_2>', return_tensors='pt').input_ids
    # the forward function automatically creates the correct decoder_input_ids
    loss = model(input_ids=input_ids, labels=labels).loss
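
If needed, the sentinel tokens can be inspected directly. A small sketch, assuming the :obj:`t5-small` checkpoint
(where the sentinels sit at the top of the vocabulary):

.. code-block::

    from transformers import T5Tokenizer

    tokenizer = T5Tokenizer.from_pretrained('t5-small')
    print(len(tokenizer.additional_special_tokens))         # 100 sentinel tokens by default
    print(tokenizer.convert_tokens_to_ids('<extra_id_0>'))  # 32099 for t5-small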

- Supervised training

  In this setup, the input and output sequences form a standard sequence-to-sequence mapping. In translation, for
  instance, with the input sequence "The house is wonderful." and the output sequence "Das Haus ist wunderbar.", the
  sentences should be processed as follows:

.. code-block::

    from transformers import T5Tokenizer, T5ForConditionalGeneration

    tokenizer = T5Tokenizer.from_pretrained('t5-small')
    model = T5ForConditionalGeneration.from_pretrained('t5-small')

    input_ids = tokenizer('translate English to German: The house is wonderful.', return_tensors='pt').input_ids
    labels = tokenizer('Das Haus ist wunderbar.', return_tensors='pt').input_ids
    # the forward function automatically creates the correct decoder_input_ids
    loss = model(input_ids=input_ids, labels=labels).loss


T5Config
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

.. autoclass:: transformers.T5Config
    :members:


T5Tokenizer
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

.. autoclass:: transformers.T5Tokenizer
    :members: build_inputs_with_special_tokens, get_special_tokens_mask,
        create_token_type_ids_from_sequences, save_vocabulary


T5TokenizerFast
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

.. autoclass:: transformers.T5TokenizerFast
    :members:


T5Model
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

.. autoclass:: transformers.T5Model
    :members: forward, parallelize, deparallelize


T5ForConditionalGeneration
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

.. autoclass:: transformers.T5ForConditionalGeneration
    :members: forward, parallelize, deparallelize

T5EncoderModel
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

.. autoclass:: transformers.T5EncoderModel
    :members: forward, parallelize, deparallelize

TFT5Model
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

.. autoclass:: transformers.TFT5Model
    :members: call


TFT5ForConditionalGeneration
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

.. autoclass:: transformers.TFT5ForConditionalGeneration
    :members: call

TFT5EncoderModel
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

.. autoclass:: transformers.TFT5EncoderModel
    :members: call