<!--Copyright 2020 The HuggingFace Team. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.

⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.

-->

# Transformer XL

<Tip warning={true}>

This model is in maintenance mode only, so we won't accept any new PRs changing its code. This model was deprecated due to security issues linked to `pickle.load`.

We recommend switching to more recent models for improved security.

If you would still like to use `TransfoXL` in your experiments, we recommend using the [Hub checkpoint](https://huggingface.co/transfo-xl-wt103) with a specific revision to ensure you are downloading safe files from the Hub.

You will need to set the environment variable `TRUST_REMOTE_CODE` to `True` to allow the use of `pickle.load()`:

```python
import os
from transformers import TransfoXLTokenizer, TransfoXLLMHeadModel

os.environ["TRUST_REMOTE_CODE"] = "True"

checkpoint = 'transfo-xl-wt103'
revision = '40a186da79458c9f9de846edfaea79c412137f97'

tokenizer = TransfoXLTokenizer.from_pretrained(checkpoint, revision=revision)
model = TransfoXLLMHeadModel.from_pretrained(checkpoint, revision=revision)
```
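
With the tokenizer and model loaded, a minimal forward pass looks like the sketch below. The prompt is arbitrary, and
the output attributes used (`prediction_scores` and `mems`) are those of the `TransfoXLLMHeadModelOutput` documented
further down this page.

```python
import torch

# Encode a short prompt with the word-level tokenizer.
input_ids = tokenizer("Hello, my dog is cute", return_tensors="pt").input_ids

with torch.no_grad():
    outputs = model(input_ids)

# Next-token scores for every position, plus the memory (`mems`) that a
# follow-up segment can reuse (see the usage tips below).
print(outputs.prediction_scores.shape)
print(len(outputs.mems))
```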

If you run into any issues running this model, please reinstall the last version that supported it, v4.35.0.
You can do so by running the following command: `pip install -U transformers==4.35.0`.

</Tip>

<div class="flex flex-wrap space-x-1">
<a href="https://huggingface.co/models?filter=transfo-xl">
<img alt="Models" src="https://img.shields.io/badge/All_model_pages-transfo--xl-blueviolet">
</a>
<a href="https://huggingface.co/spaces/docs-demos/transfo-xl-wt103">
<img alt="Spaces" src="https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-Spaces-blue">
</a>
</div>

## Overview

The Transformer-XL model was proposed in [Transformer-XL: Attentive Language Models Beyond a Fixed-Length Context](https://arxiv.org/abs/1901.02860) by Zihang Dai, Zhilin Yang, Yiming Yang, Jaime Carbonell, Quoc V. Le, and Ruslan
Salakhutdinov. It is a causal (unidirectional) transformer with relative (sinusoidal) positional embeddings that can
reuse previously computed hidden states to attend to a longer context (memory). The model also uses adaptive embedding
inputs and an adaptive softmax output with tied weights.
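
To give a concrete sense of these choices, the snippet below builds a small, randomly initialized model from a
configuration. The parameters shown (`mem_len` for the recurrence memory, `adaptive`, `cutoffs` and `div_val` for the
adaptive embedding/softmax) belong to `TransfoXLConfig`, but the values here are illustrative and do not match the
released `transfo-xl-wt103` checkpoint.

```python
from transformers import TransfoXLConfig, TransfoXLModel

# Illustrative configuration highlighting the two distinctive pieces of
# Transformer-XL: the segment-level memory and the adaptive embedding/softmax.
config = TransfoXLConfig(
    d_model=512,
    d_embed=512,
    n_head=8,
    n_layer=6,
    mem_len=256,             # how many past hidden states are kept as memory
    adaptive=True,           # use adaptive embeddings and adaptive softmax
    cutoffs=[20000, 40000],  # vocabulary buckets for the adaptive softmax
    div_val=4,               # embedding size is divided by this factor per bucket
)
model = TransfoXLModel(config)
```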

The abstract from the paper is the following:

*Transformers have a potential of learning longer-term dependency, but are limited by a fixed-length context in the
setting of language modeling. We propose a novel neural architecture Transformer-XL that enables learning dependency
beyond a fixed length without disrupting temporal coherence. It consists of a segment-level recurrence mechanism and a
novel positional encoding scheme. Our method not only enables capturing longer-term dependency, but also resolves the
context fragmentation problem. As a result, Transformer-XL learns dependency that is 80% longer than RNNs and 450%
longer than vanilla Transformers, achieves better performance on both short and long sequences, and is up to 1,800+
times faster than vanilla Transformers during evaluation. Notably, we improve the state-of-the-art results of
bpc/perplexity to 0.99 on enwiki8, 1.08 on text8, 18.3 on WikiText-103, 21.8 on One Billion Word, and 54.5 on Penn
Treebank (without finetuning). When trained only on WikiText-103, Transformer-XL manages to generate reasonably
coherent, novel text articles with thousands of tokens.*

This model was contributed by [thomwolf](https://huggingface.co/thomwolf). The original code can be found [here](https://github.com/kimiyoung/transformer-xl).

## Usage tips

- Transformer-XL uses relative sinusoidal positional embeddings. Padding can be done on the left or on the right. The
  original implementation trains on SQuAD with padding on the left, therefore the padding defaults are set to left.
- Transformer-XL is one of the few models that has no sequence length limit.
- Transformer-XL is the same as a regular GPT model, but it introduces a recurrence mechanism for two consecutive segments (similar to a regular RNN with two consecutive inputs). In this context, a segment is a number of consecutive tokens (for instance 512) that may span across multiple documents, and segments are fed in order to the model.
- Concretely, the hidden states of the previous segment are concatenated to the current input to compute the attention scores. This allows the model to pay attention to information that was in the previous segment as well as the current one. By stacking multiple attention layers, the receptive field can be increased to multiple previous segments (see the sketch after this list).
- This changes the positional embeddings to relative positional embeddings (since regular positional embeddings would give the same results for the current input and the current hidden state at a given position) and requires some adjustments in the way attention scores are computed.
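
A minimal sketch of this recurrence with the PyTorch classes is shown below: the `mems` returned for one segment are
fed back in with the next segment, so attention in the second pass can reach tokens of the first. It reuses the pinned
revision from the Tip above; the two example sentences are arbitrary.

```python
import os

os.environ["TRUST_REMOTE_CODE"] = "True"  # needed by this deprecated, pickle-based checkpoint

from transformers import TransfoXLTokenizer, TransfoXLLMHeadModel

checkpoint = "transfo-xl-wt103"
revision = "40a186da79458c9f9de846edfaea79c412137f97"
tokenizer = TransfoXLTokenizer.from_pretrained(checkpoint, revision=revision)
model = TransfoXLLMHeadModel.from_pretrained(checkpoint, revision=revision)

# Two consecutive segments of the same document.
segment_1 = tokenizer("The cat sat on the mat", return_tensors="pt").input_ids
segment_2 = tokenizer("and watched the birds outside", return_tensors="pt").input_ids

# First pass: no memory yet; the returned `mems` cache this segment's hidden states.
outputs_1 = model(segment_1)

# Second pass: passing `mems` lets attention reach back into segment 1.
outputs_2 = model(segment_2, mems=outputs_1.mems)
```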

<Tip warning={true}>

TransformerXL does **not** work with *torch.nn.DataParallel* due to a bug in PyTorch; see [issue #36035](https://github.com/pytorch/pytorch/issues/36035).

</Tip>

## Resources

- [Text classification task guide](../tasks/sequence_classification)
- [Causal language modeling task guide](../tasks/language_modeling)

## TransfoXLConfig

[[autodoc]] TransfoXLConfig

## TransfoXLTokenizer

[[autodoc]] TransfoXLTokenizer
    - save_vocabulary

## TransfoXL specific outputs

[[autodoc]] models.deprecated.transfo_xl.modeling_transfo_xl.TransfoXLModelOutput

[[autodoc]] models.deprecated.transfo_xl.modeling_transfo_xl.TransfoXLLMHeadModelOutput

[[autodoc]] models.deprecated.transfo_xl.modeling_tf_transfo_xl.TFTransfoXLModelOutput

[[autodoc]] models.deprecated.transfo_xl.modeling_tf_transfo_xl.TFTransfoXLLMHeadModelOutput

<frameworkcontent>
<pt>

## TransfoXLModel

[[autodoc]] TransfoXLModel
    - forward

## TransfoXLLMHeadModel

[[autodoc]] TransfoXLLMHeadModel
    - forward

## TransfoXLForSequenceClassification

[[autodoc]] TransfoXLForSequenceClassification
    - forward

</pt>
<tf>

## TFTransfoXLModel

[[autodoc]] TFTransfoXLModel
    - call

## TFTransfoXLLMHeadModel

[[autodoc]] TFTransfoXLLMHeadModel
    - call

## TFTransfoXLForSequenceClassification

[[autodoc]] TFTransfoXLForSequenceClassification
    - call

</tf>
</frameworkcontent>

## Internal Layers

[[autodoc]] AdaptiveEmbedding

[[autodoc]] TFAdaptiveEmbedding