Transformer XL
-----------------------------------------------------------------------------------------------------------------------

Overview
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

The Transformer-XL model was proposed in `Transformer-XL: Attentive Language Models Beyond a Fixed-Length Context
<https://arxiv.org/abs/1901.02860>`__ by Zihang Dai, Zhilin Yang, Yiming Yang, Jaime Carbonell, Quoc V. Le, and Ruslan
Salakhutdinov. It is a causal (uni-directional) transformer with relative (sinusoidal) positional embeddings that can
reuse previously computed hidden-states to attend to a longer context (memory). The model also uses adaptive input
embeddings and an adaptive softmax, with tied input and output representations.
The abstract from the paper is the following:

*Transformers have a potential of learning longer-term dependency, but are limited by a fixed-length context in the
setting of language modeling. We propose a novel neural architecture Transformer-XL that enables learning dependency
beyond a fixed length without disrupting temporal coherence. It consists of a segment-level recurrence mechanism and
a novel positional encoding scheme. Our method not only enables capturing longer-term dependency, but also resolves
the context fragmentation problem. As a result, Transformer-XL learns dependency that is 80% longer than RNNs and
450% longer than vanilla Transformers, achieves better performance on both short and long sequences, and is up
to 1,800+ times faster than vanilla Transformers during evaluation. Notably, we improve the state-of-the-art results
of bpc/perplexity to 0.99 on enwiki8, 1.08 on text8, 18.3 on WikiText-103, 21.8 on One Billion Word, and 54.5 on
Penn Treebank (without finetuning). When trained only on WikiText-103, Transformer-XL manages to generate reasonably
coherent, novel text articles with thousands of tokens.*
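
The relative sinusoidal positional encoding described above can be sketched in a few lines. This is a minimal
illustration in plain PyTorch, not the library's internal code; the function name and the sizes are chosen for the
example:

.. code-block:: python

    import torch

    def relative_positional_embedding(klen, d_model):
        # Frequencies 1 / 10000^(2i / d_model), the classic sinusoidal scheme.
        inv_freq = 1.0 / (10000 ** (torch.arange(0.0, d_model, 2.0) / d_model))
        # Relative distances run from klen - 1 (oldest position in memory) down to 0.
        pos_seq = torch.arange(klen - 1, -1, -1.0)
        sinusoid = pos_seq[:, None] * inv_freq[None, :]
        # Concatenate sines and cosines to get one d_model-sized vector per distance.
        return torch.cat([sinusoid.sin(), sinusoid.cos()], dim=-1)

    emb = relative_positional_embedding(10, 16)  # one 16-dim embedding per distance

Because these embeddings depend only on the distance between positions, hidden-states cached from a previous segment
can be attended to without re-encoding absolute positions.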

Tips:

- Transformer-XL uses relative sinusoidal positional embeddings. Padding can be done on the left or on the right.
  The original implementation trains on SQuAD with padding on the left, therefore the padding defaults are set to left.
- Transformer-XL is one of the few models that has no sequence length limit.
The original code can be found `here <https://github.com/kimiyoung/transformer-xl>`__.
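
As a sketch of how the cached memories are carried between segments, here is a hedged example: the configuration
values below are made-up tiny sizes so the snippet runs without downloading weights (in practice you would load a
pretrained checkpoint such as ``transfo-xl-wt103`` with ``from_pretrained``):

.. code-block:: python

    import torch
    from transformers import TransfoXLConfig, TransfoXLModel

    # Made-up tiny configuration, for illustration only.
    config = TransfoXLConfig(
        vocab_size=100, cutoffs=[50], d_model=32, d_embed=32,
        n_head=2, d_head=16, d_inner=64, n_layer=2, div_val=1, mem_len=8,
    )
    model = TransfoXLModel(config)
    model.eval()

    # First segment: the model returns hidden-states plus its memory (mems).
    first_segment = torch.randint(0, 100, (1, 4))
    hidden, mems = model(first_segment)[:2]

    # Second segment: feeding mems back lets attention reach into the previous segment.
    hidden, mems = model(torch.randint(0, 100, (1, 4)), mems=mems)[:2]

With ``mem_len=8``, each entry of ``mems`` keeps the last 8 time steps of that layer's hidden-states, which is what
allows the effective context to exceed the segment length.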


TransfoXLConfig
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

.. autoclass:: transformers.TransfoXLConfig
    :members:


TransfoXLTokenizer
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

.. autoclass:: transformers.TransfoXLTokenizer
    :members: save_vocabulary


TransfoXL specific outputs
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

.. autoclass:: transformers.modeling_transfo_xl.TransfoXLModelOutput
    :members:

.. autoclass:: transformers.modeling_transfo_xl.TransfoXLLMHeadModelOutput
    :members:

.. autoclass:: transformers.modeling_tf_transfo_xl.TFTransfoXLModelOutput
    :members:

.. autoclass:: transformers.modeling_tf_transfo_xl.TFTransfoXLLMHeadModelOutput
    :members:


TransfoXLModel
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

.. autoclass:: transformers.TransfoXLModel
    :members: forward


TransfoXLLMHeadModel
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

.. autoclass:: transformers.TransfoXLLMHeadModel
    :members: forward


TFTransfoXLModel
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

.. autoclass:: transformers.TFTransfoXLModel
    :members: call


TFTransfoXLLMHeadModel
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

.. autoclass:: transformers.TFTransfoXLLMHeadModel
    :members: call