<font size=4><b>Language Model on One Billion Word Benchmark</b></font>

<b>Authors:</b>

Oriol Vinyals (vinyals@google.com, github: OriolVinyals),
Xin Pan (xpan@google.com, github: panyx0718)

<b>Paper Authors:</b>

Rafal Jozefowicz, Oriol Vinyals, Mike Schuster, Noam Shazeer, Yonghui Wu

<b>TL;DR</b>

This is a pretrained model on the One Billion Word Benchmark.
If you use this model in your publication, please cite the original paper:

```
@article{jozefowicz2016exploring,
  title={Exploring the Limits of Language Modeling},
  author={Jozefowicz, Rafal and Vinyals, Oriol and Schuster, Mike
          and Shazeer, Noam and Wu, Yonghui},
  journal={arXiv preprint arXiv:1602.02410},
  year={2016}
}
```

<b>Introduction</b>

In this release, we open source a model trained on the One Billion Word
Benchmark (http://arxiv.org/abs/1312.3005), a large English-language corpus
released in 2013. The dataset contains about one billion words and has a
vocabulary of about 800K words; it consists mostly of news data. Since
sentences in the training set are shuffled, models can ignore the context and
focus on sentence-level language modeling.

In the original release and subsequent work, the same held-out test set has
been used to evaluate models trained on this dataset, making it a standard
benchmark for language modeling. Recently, we wrote an article
(http://arxiv.org/abs/1602.02410) describing a model that combines a
character-level CNN, a large and deep LSTM, and a specific Softmax
architecture, which allowed us to train the best model on this dataset thus
far, almost halving the best perplexity previously obtained by others.

<b>Code Release</b>

The open-sourced components include:

* TensorFlow GraphDef protocol buffer text file.
* TensorFlow pre-trained checkpoint shards.
* Code used to evaluate the pre-trained model.
* Vocabulary file.
* Test set from LM-1B evaluation.

The code supports 4 evaluation modes:

* Given a provided dataset, calculate the model's perplexity.
* Given a prefix sentence, predict the next words.
* Dump the softmax embedding and character-level CNN word embeddings.
* Given a sentence, dump the embedding from the LSTM state.
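
All four modes start by loading the same released artifacts. The sketch below
is only a rough illustration of that step (it is not the released
`lm_1b/lm_1b_eval.py`); it assumes TensorFlow 1.x, and the op/tensor names
`save/restore_all` and `save/Const:0` are assumptions about the exported graph
that should be verified against `graph-2016-09-10.pbtxt`.

```python
# Minimal loading sketch (assumption: TensorFlow 1.x; the Saver op names
# 'save/restore_all' and 'save/Const:0' are assumed, not documented here).
import tensorflow as tf
from google.protobuf import text_format


def load_model(graph_def_path, ckpt_pattern):
  """Imports the GraphDef text proto and restores the sharded checkpoint."""
  with tf.Graph().as_default():
    with open(graph_def_path) as f:
      graph_def = tf.GraphDef()
      text_format.Merge(f.read(), graph_def)
    # Import with name='' so ops keep their original names.
    tf.import_graph_def(graph_def, name='')
    sess = tf.Session(config=tf.ConfigProto(allow_soft_placement=True))
    # Feed the checkpoint pattern (e.g. 'data/ckpt-*') to the exported Saver.
    sess.run('save/restore_all', {'save/Const:0': ckpt_pattern})
    return sess


sess = load_model('data/graph-2016-09-10.pbtxt', 'data/ckpt-*')
```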

<b>Results</b>

Model | Test Perplexity | Number of Params [billions]
------|-----------------|----------------------------
Sigmoid-RNN-2048 [Blackout] | 68.3 | 4.1
Interpolated KN 5-gram, 1.1B n-grams [chelba2013one] | 67.6 | 1.76
Sparse Non-Negative Matrix LM [shazeer2015sparse] | 52.9 | 33
RNN-1024 + MaxEnt 9-gram features [chelba2013one] | 51.3 | 20
LSTM-512-512 | 54.1 | 0.82
LSTM-1024-512 | 48.2 | 0.82
LSTM-2048-512 | 43.7 | 0.83
LSTM-8192-2048 (No Dropout) | 37.9 | 3.3
LSTM-8192-2048 (50% Dropout) | 32.2 | 3.3
2-Layer LSTM-8192-1024 (BIG LSTM) | 30.6 | 1.8
(THIS RELEASE) BIG LSTM+CNN Inputs | <b>30.0</b> | <b>1.04</b>
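
Test perplexity here is the standard per-word metric, i.e.
exp(-(1/N) * sum_i log p(w_i | w_1, ..., w_{i-1})) computed over the N words
of the held-out set, so lower is better.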

<b>How To Run</b>

Prerequisites:

* Install TensorFlow.
* Install Bazel.
* Download the data files:
  * Model GraphDef file:
  [link](http://download.tensorflow.org/models/LM_LSTM_CNN/graph-2016-09-10.pbtxt)
  * Sharded model checkpoint files:
  [1](http://download.tensorflow.org/models/LM_LSTM_CNN/all_shards-2016-09-10/ckpt-base)
  [2](http://download.tensorflow.org/models/LM_LSTM_CNN/all_shards-2016-09-10/ckpt-char-embedding)
  [3](http://download.tensorflow.org/models/LM_LSTM_CNN/all_shards-2016-09-10/ckpt-lstm)
  [4](http://download.tensorflow.org/models/LM_LSTM_CNN/all_shards-2016-09-10/ckpt-softmax0)
  [5](http://download.tensorflow.org/models/LM_LSTM_CNN/all_shards-2016-09-10/ckpt-softmax1)
  [6](http://download.tensorflow.org/models/LM_LSTM_CNN/all_shards-2016-09-10/ckpt-softmax2)
  [7](http://download.tensorflow.org/models/LM_LSTM_CNN/all_shards-2016-09-10/ckpt-softmax3)
  [8](http://download.tensorflow.org/models/LM_LSTM_CNN/all_shards-2016-09-10/ckpt-softmax4)
  [9](http://download.tensorflow.org/models/LM_LSTM_CNN/all_shards-2016-09-10/ckpt-softmax5)
  [10](http://download.tensorflow.org/models/LM_LSTM_CNN/all_shards-2016-09-10/ckpt-softmax6)
  [11](http://download.tensorflow.org/models/LM_LSTM_CNN/all_shards-2016-09-10/ckpt-softmax7)
  [12](http://download.tensorflow.org/models/LM_LSTM_CNN/all_shards-2016-09-10/ckpt-softmax8)
  * Vocabulary file:
  [link](http://download.tensorflow.org/models/LM_LSTM_CNN/vocab-2016-09-10.txt)
  * Test dataset:
  [link](http://download.tensorflow.org/models/LM_LSTM_CNN/test/news.en.heldout-00000-of-00050)
* It is recommended to run on a modern desktop instead of a laptop.

```shell
# 1. Clone the code to your workspace.
# 2. Download the data to your workspace.
# 3. Create an empty WORKSPACE file in your workspace.
# 4. Create an empty output directory in your workspace.
# Example directory structure below:
$ ls -R
.:
data  lm_1b  output  WORKSPACE

./data:
ckpt-base            ckpt-lstm      ckpt-softmax1  ckpt-softmax3  ckpt-softmax5
ckpt-softmax7  graph-2016-09-10.pbtxt          vocab-2016-09-10.txt
ckpt-char-embedding  ckpt-softmax0  ckpt-softmax2  ckpt-softmax4  ckpt-softmax6
ckpt-softmax8  news.en.heldout-00000-of-00050

./lm_1b:
BUILD  data_utils.py  lm_1b_eval.py  README.md

./output:

# Build the code.
$ bazel build -c opt lm_1b/...
# Run sample mode:
$ bazel-bin/lm_1b/lm_1b_eval --mode sample \
                             --prefix "I love that I" \
                             --pbtxt data/graph-2016-09-10.pbtxt \
                             --vocab_file data/vocab-2016-09-10.txt  \
                             --ckpt 'data/ckpt-*'
...(omitted some TensorFlow output)
I love
I love that
I love that I
I love that I find
I love that I find that
I love that I find that amazing
...(omitted)

# Run eval mode:
$ bazel-bin/lm_1b/lm_1b_eval --mode eval \
                             --pbtxt data/graph-2016-09-10.pbtxt \
                             --vocab_file data/vocab-2016-09-10.txt  \
                             --input_data data/news.en.heldout-00000-of-00050 \
                             --ckpt 'data/ckpt-*'
...(omitted some TensorFlow output)
Loaded step 14108582.
# perplexity is high initially because words without context are harder to
# predict.
Eval Step: 0, Average Perplexity: 2045.512297.
Eval Step: 1, Average Perplexity: 229.478699.
Eval Step: 2, Average Perplexity: 208.116787.
Eval Step: 3, Average Perplexity: 338.870601.
Eval Step: 4, Average Perplexity: 228.950107.
Eval Step: 5, Average Perplexity: 197.685857.
Eval Step: 6, Average Perplexity: 156.287063.
Eval Step: 7, Average Perplexity: 124.866189.
Eval Step: 8, Average Perplexity: 147.204975.
Eval Step: 9, Average Perplexity: 90.124864.
Eval Step: 10, Average Perplexity: 59.897914.
Eval Step: 11, Average Perplexity: 42.591137.
...(omitted)
Eval Step: 4529, Average Perplexity: 29.243668.
Eval Step: 4530, Average Perplexity: 29.302362.
Eval Step: 4531, Average Perplexity: 29.285674.
...(omitted. At convergence, it should be around 30.)

# Run dump_emb mode:
$ bazel-bin/lm_1b/lm_1b_eval --mode dump_emb \
                             --pbtxt data/graph-2016-09-10.pbtxt \
                             --vocab_file data/vocab-2016-09-10.txt  \
                             --ckpt 'data/ckpt-*' \
                             --save_dir output
...(omitted some TensorFlow output)
Finished softmax weights
Finished word embedding 0/793471
Finished word embedding 1/793471
Finished word embedding 2/793471
...(omitted)
$ ls output/
embeddings_softmax.npy ...

# Run dump_lstm_emb mode:
$ bazel-bin/lm_1b/lm_1b_eval --mode dump_lstm_emb \
                             --pbtxt data/graph-2016-09-10.pbtxt \
                             --vocab_file data/vocab-2016-09-10.txt \
                             --ckpt 'data/ckpt-*' \
                             --sentence "I love who I am ." \
                             --save_dir output
$ ls output/
lstm_emb_step_0.npy  lstm_emb_step_2.npy  lstm_emb_step_4.npy
lstm_emb_step_6.npy  lstm_emb_step_1.npy  lstm_emb_step_3.npy
lstm_emb_step_5.npy
```
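
The `.npy` files written by the `dump_emb` and `dump_lstm_emb` modes above are
ordinary NumPy arrays and can be inspected directly. This is only a small
usage sketch; the array shapes depend on the model, so print them rather than
assuming them.

```python
# Small sketch: inspect the arrays written by dump_emb / dump_lstm_emb above.
# Shapes depend on the model (e.g. vocabulary size x embedding dimension).
import glob
import numpy as np

softmax_emb = np.load('output/embeddings_softmax.npy')
print('softmax embeddings:', softmax_emb.shape)

# dump_lstm_emb writes one LSTM-state embedding per token of the sentence.
for path in sorted(glob.glob('output/lstm_emb_step_*.npy')):
    print(path, np.load(path).shape)
```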