Transformer XL
----------------------------------------------------

Overview
~~~~~~~~~~~~~~~~~~~~~

The Transformer-XL model was proposed in
`Transformer-XL: Attentive Language Models Beyond a Fixed-Length Context <https://arxiv.org/abs/1901.02860>`__
by Zihang Dai*, Zhilin Yang*, Yiming Yang, Jaime Carbonell, Quoc V. Le, Ruslan Salakhutdinov.
It's a causal (uni-directional) transformer with relative (sinusoidal) positional embeddings which can reuse
previously computed hidden states to attend to a longer context (memory).
The model also uses adaptive softmax inputs and outputs with tied weights.
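
A minimal sketch of this memory mechanism, assuming the ``transfo-xl-wt103``
checkpoint and a recent version of ``transformers`` (older releases return
plain tuples instead of output objects):

.. code-block:: python

    import torch
    from transformers import TransfoXLTokenizer, TransfoXLLMHeadModel

    tokenizer = TransfoXLTokenizer.from_pretrained("transfo-xl-wt103")
    model = TransfoXLLMHeadModel.from_pretrained("transfo-xl-wt103")

    input_ids = tokenizer.encode(
        "The memory lets each segment attend to the previous ones .",
        return_tensors="pt",
    )

    # Feed the text one short segment at a time, carrying the cached
    # hidden states (mems) forward so later segments can attend to them.
    mems = None
    for segment in torch.split(input_ids, 4, dim=1):
        outputs = model(segment, mems=mems)
        mems = outputs.mems  # extended context for the next segment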

The abstract from the paper is the following:

*Transformers have a potential of learning longer-term dependency, but are limited by a fixed-length context in the
setting of language modeling. We propose a novel neural architecture Transformer-XL that enables learning dependency
beyond a fixed length without disrupting temporal coherence. It consists of a segment-level recurrence mechanism and
a novel positional encoding scheme. Our method not only enables capturing longer-term dependency, but also resolves
the context fragmentation problem. As a result, Transformer-XL learns dependency that is 80% longer than RNNs and
450% longer than vanilla Transformers, achieves better performance on both short and long sequences, and is up
to 1,800+ times faster than vanilla Transformers during evaluation. Notably, we improve the state-of-the-art results
of bpc/perplexity to 0.99 on enwiki8, 1.08 on text8, 18.3 on WikiText-103, 21.8 on One Billion Word, and 54.5 on
Penn Treebank (without finetuning). When trained only on WikiText-103, Transformer-XL manages to generate reasonably
coherent, novel text articles with thousands of tokens.*

Tips:

- Transformer-XL uses relative sinusoidal positional embeddings. Padding can be done on the left or on the right.
  The original implementation trains on SQuAD with padding on the left, therefore the padding defaults are set to
  left (see the sketch after this list).
- Transformer-XL is one of the few models that has no sequence length limit.
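
A minimal sketch of inspecting and overriding the padding side mentioned above,
assuming the ``transfo-xl-wt103`` checkpoint (``padding_side`` is a standard
tokenizer attribute):

.. code-block:: python

    from transformers import TransfoXLTokenizer

    tokenizer = TransfoXLTokenizer.from_pretrained("transfo-xl-wt103")
    print(tokenizer.padding_side)  # "left" by default for this model

    # Right padding can be requested explicitly if preferred.
    tokenizer.padding_side = "right"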


TransfoXLConfig
~~~~~~~~~~~~~~~~~~~~~

.. autoclass:: transformers.TransfoXLConfig
    :members:
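
As an illustration, a model can be instantiated from a customized configuration;
``mem_len`` (the number of hidden states kept in memory between segments) is one
of the documented options, and the value below is arbitrary:

.. code-block:: python

    from transformers import TransfoXLConfig, TransfoXLModel

    # Defaults follow the transfo-xl-wt103 architecture; only the length of
    # the retained memory is overridden here.
    config = TransfoXLConfig(mem_len=800)
    model = TransfoXLModel(config)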


TransfoXLTokenizer
~~~~~~~~~~~~~~~~~~~~~~~~~~

.. autoclass:: transformers.TransfoXLTokenizer
    :members: save_vocabulary
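
A brief usage sketch, assuming the ``transfo-xl-wt103`` checkpoint; this is a
word-level tokenizer, and ``save_vocabulary`` writes its vocabulary file to a
directory:

.. code-block:: python

    from transformers import TransfoXLTokenizer

    tokenizer = TransfoXLTokenizer.from_pretrained("transfo-xl-wt103")

    tokens = tokenizer.tokenize("Hello , this is a test .")  # word-level tokens
    ids = tokenizer.convert_tokens_to_ids(tokens)

    tokenizer.save_vocabulary(".")  # persists the vocabulary locally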


TransfoXLModel
~~~~~~~~~~~~~~~~~~~~~~~~~~

.. autoclass:: transformers.TransfoXLModel
    :members:
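
A minimal forward-pass sketch (same checkpoint assumption as above); indexed
access into the outputs keeps the snippet compatible with both tuple-returning
and output-object-returning versions of the library:

.. code-block:: python

    from transformers import TransfoXLTokenizer, TransfoXLModel

    tokenizer = TransfoXLTokenizer.from_pretrained("transfo-xl-wt103")
    model = TransfoXLModel.from_pretrained("transfo-xl-wt103")

    input_ids = tokenizer.encode("Hello , my dog is cute .", return_tensors="pt")
    outputs = model(input_ids)

    last_hidden_state = outputs[0]  # (batch_size, sequence_length, hidden_size)
    mems = outputs[1]               # cached hidden states for the next segment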


TransfoXLLMHeadModel
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

.. autoclass:: transformers.TransfoXLLMHeadModel
    :members:
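
A hedged sketch of computing language-modeling losses with this head; passing
``labels`` makes the adaptive softmax return per-token losses (the exact shape
and reduction have varied across library versions):

.. code-block:: python

    from transformers import TransfoXLTokenizer, TransfoXLLMHeadModel

    tokenizer = TransfoXLTokenizer.from_pretrained("transfo-xl-wt103")
    model = TransfoXLLMHeadModel.from_pretrained("transfo-xl-wt103")

    input_ids = tokenizer.encode("Hello , my dog is cute .", return_tensors="pt")

    # With labels, the first output holds the language-modeling losses.
    outputs = model(input_ids, labels=input_ids)
    losses = outputs[0]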


TFTransfoXLModel
~~~~~~~~~~~~~~~~~~~~~~~~~~

.. autoclass:: transformers.TFTransfoXLModel
    :members:
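
The TensorFlow class mirrors the PyTorch one; a minimal sketch under the same
checkpoint assumption, using ``return_tensors="tf"``:

.. code-block:: python

    from transformers import TransfoXLTokenizer, TFTransfoXLModel

    tokenizer = TransfoXLTokenizer.from_pretrained("transfo-xl-wt103")
    model = TFTransfoXLModel.from_pretrained("transfo-xl-wt103")

    input_ids = tokenizer.encode("Hello , my dog is cute .", return_tensors="tf")
    outputs = model(input_ids)

    last_hidden_state = outputs[0]  # (batch_size, sequence_length, hidden_size)
    mems = outputs[1]               # memory to pass back in via the mems argument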


TFTransfoXLLMHeadModel
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

.. autoclass:: transformers.TFTransfoXLLMHeadModel
    :members:
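
And a matching sketch for the TensorFlow head model; without labels, the first
output holds the per-token prediction scores computed through the adaptive
softmax:

.. code-block:: python

    from transformers import TransfoXLTokenizer, TFTransfoXLLMHeadModel

    tokenizer = TransfoXLTokenizer.from_pretrained("transfo-xl-wt103")
    model = TFTransfoXLLMHeadModel.from_pretrained("transfo-xl-wt103")

    input_ids = tokenizer.encode("Hello , my dog is cute .", return_tensors="tf")
    outputs = model(input_ids)
    prediction_scores = outputs[0]  # per-token vocabulary scores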