# Transformer-XL: Attentive Language Models Beyond a Fixed-Length Context

This repository contains the code in both **PyTorch** and **TensorFlow** for our paper
>[Transformer-XL: Attentive Language Models Beyond a Fixed-Length Context](http://arxiv.org/abs/1901.02860)

>Zihang Dai\*, Zhilin Yang\*, Yiming Yang, Jaime Carbonell, Quoc V. Le, Ruslan Salakhutdinov (*: equal contribution)

>Preprint 2018

## TensorFlow

- The source code is in the `tf/` folder, supporting (1) single-node multi-GPU training and (2) multi-host TPU training (a generic multi-GPU sketch follows this list).
- Besides the source code, we also provide pretrained TensorFlow models with the state-of-the-art (SoTA) performance reported in the paper.
- Please refer to `tf/README.md` for details.
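
For readers who want a picture of what single-node multi-GPU training looks like in graph-mode TensorFlow, here is a hedged, self-contained sketch of the common "tower" pattern (one replica per GPU with shared variables, gradients averaged on the host), written against TF 1.x. It uses a toy regression model and a hypothetical `NUM_GPUS` constant; it is not the actual Transformer-XL training code in `tf/`.

```python
import tensorflow as tf  # written against TF 1.x graph mode

NUM_GPUS = 2  # hypothetical; set to the number of visible GPUs

def tower_loss(x, y, reuse):
    # Toy model standing in for the real network; variables are shared across towers.
    with tf.variable_scope('model', reuse=reuse):
        pred = tf.layers.dense(x, 1)
    return tf.reduce_mean(tf.square(pred - y))

def average_gradients(tower_grads):
    # tower_grads: one list of (grad, var) pairs per GPU; average the grads element-wise.
    averaged = []
    for grads_and_vars in zip(*tower_grads):
        grad = tf.reduce_mean(tf.stack([g for g, _ in grads_and_vars]), axis=0)
        averaged.append((grad, grads_and_vars[0][1]))
    return averaged

opt = tf.train.AdamOptimizer(1e-3)
x = tf.random_normal([NUM_GPUS * 16, 8])  # dummy batch, split across GPUs
y = tf.random_normal([NUM_GPUS * 16, 1])
tower_grads = []
for i, (xs, ys) in enumerate(zip(tf.split(x, NUM_GPUS), tf.split(y, NUM_GPUS))):
    with tf.device('/gpu:%d' % i):
        loss = tower_loss(xs, ys, reuse=(i > 0))
        tower_grads.append(opt.compute_gradients(loss))
train_op = opt.apply_gradients(average_gradients(tower_grads))

with tf.Session(config=tf.ConfigProto(allow_soft_placement=True)) as sess:
    sess.run(tf.global_variables_initializer())
    sess.run(train_op)  # one synchronous multi-GPU update
```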

## PyTorch

- The source code is in the `pytorch/` folder, supporting single-node multi-GPU training via `nn.DataParallel` (see the sketch after this list).
- Please refer to `pytorch/README.md` for details.
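
As a quick illustration of what `nn.DataParallel` does, below is a minimal, hedged sketch with a toy model and random data, not the actual Transformer-XL training script: the wrapped module is replicated on every visible GPU and each batch is split along its first dimension.

```python
import torch
import torch.nn as nn

# Minimal sketch: wrap any nn.Module in nn.DataParallel for single-node multi-GPU training.
model = nn.Linear(128, 10)
if torch.cuda.device_count() > 1:
    model = nn.DataParallel(model)  # replicates the module, splits each batch across GPUs
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
model = model.to(device)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

x = torch.randn(32, 128, device=device)          # dummy input batch
y = torch.randint(0, 10, (32,), device=device)   # dummy targets
loss = criterion(model(x), y)
optimizer.zero_grad()
loss.backward()
optimizer.step()
print('one training step done, loss = %.4f' % loss.item())
```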

## Results

Transformer-XL achieves new state-of-the-art results on multiple language modeling benchmarks. Transformer-XL is also the first to break through the 1.0 barrier (bits per character) on character-level language modeling. Below is a summary; a note on the units follows the table.

Method | enwik8 (bpc) | text8 (bpc) | One Billion Word (ppl) | WT-103 (ppl) | PTB w/o finetuning (ppl)
-- | -- | -- | -- | -- | --
Previous Best | 1.06 | 1.13 | 23.7 | 20.5 | 55.5
Transformer-XL | **0.99** | **1.08** | **21.8** | **18.3** | **54.5**
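
A note on the units: the enwik8 and text8 numbers are bits per character (bpc), while the word-level benchmarks are perplexity (ppl); both are simple transforms of the model's average cross-entropy loss in nats. The loss values in the sketch below are chosen only to reproduce two table entries.

```python
import math

def bits_per_char(nat_loss):
    # bpc = cross-entropy (nats per character) / ln 2
    return nat_loss / math.log(2)

def perplexity(nat_loss):
    # ppl = exp(cross-entropy in nats per word)
    return math.exp(nat_loss)

print(round(bits_per_char(0.6864), 2))  # -> 0.99 (enwik8 row)
print(round(perplexity(2.9069), 1))     # -> 18.3 (WT-103 row)
```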



## Acknowledgement

A large portion of the `getdata.sh` script comes from the [awd-lstm](https://github.com/salesforce/awd-lstm-lm/) repo. Happy Language Modeling :)