# Neural Language Modeling

## Pre-trained models

Model | Description | Dataset | Download
---|---|---|---
`transformer_lm.gbw.adaptive_huge` | Adaptive Inputs <br> ([Baevski and Auli, 2018](https://arxiv.org/abs/1809.10853)) <br> 1026M params | [Google Billion Words](https://github.com/ciprian-chelba/1-billion-word-language-modeling-benchmark) | [download (.tar.bz2)](https://dl.fbaipublicfiles.com/fairseq/models/lm/adaptive_lm_gbw_huge.tar.bz2)
`transformer_lm.wiki103.adaptive` | Adaptive Inputs <br> ([Baevski and Auli, 2018](https://arxiv.org/abs/1809.10853)) <br> 247M params | [WikiText-103](https://einstein.ai/research/the-wikitext-long-term-dependency-language-modeling-dataset) | [download (.tar.bz2)](https://dl.fbaipublicfiles.com/fairseq/models/lm/adaptive_lm_wiki103.tar.bz2)
`transformer_lm.wmt19.en` | English LM <br> ([Ng et al., 2019](https://arxiv.org/abs/1907.06616)) | [WMT News Crawl](http://data.statmt.org/news-crawl/) | [download (.tar.gz)](https://dl.fbaipublicfiles.com/fairseq/models/lm/wmt19.en.tar.gz)
`transformer_lm.wmt19.de` | German LM <br> ([Ng et al., 2019](https://arxiv.org/abs/1907.06616)) | [WMT News Crawl](http://data.statmt.org/news-crawl/) | [download (.tar.gz)](https://dl.fbaipublicfiles.com/fairseq/models/lm/wmt19.de.tar.gz)
`transformer_lm.wmt19.ru` | Russian LM <br> ([Ng et al., 2019](https://arxiv.org/abs/1907.06616)) | [WMT News Crawl](http://data.statmt.org/news-crawl/) | [download (.tar.gz)](https://dl.fbaipublicfiles.com/fairseq/models/lm/wmt19.ru.tar.gz)

## Example usage

Sampling from a language model using PyTorch Hub:
```python
import torch

# List available models
torch.hub.list('pytorch/fairseq')  # [..., 'transformer_lm.wmt19.en', ...]

# Load an English LM trained on WMT'19 News Crawl data
en_lm = torch.hub.load('pytorch/fairseq', 'transformer_lm.wmt19.en', tokenizer='moses', bpe='fastbpe')

# Sample from the language model
en_lm.sample('Barack Obama', beam=1, sampling=True, sampling_topk=10, temperature=0.8)
# "Barack Obama is coming to Sydney and New Zealand (...)"
```
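
The loaded hub interface can also be used to score text rather than sample from it. The snippet below is a minimal sketch for computing per-token perplexity, assuming the interface exposes a `score()` method that returns per-token log-probabilities under the `positional_scores` key (as in recent fairseq releases):

```python
# Score a sentence under the loaded LM (assumes `en_lm` from the example above).
# `score()` is assumed to return a dict with per-token log-probabilities
# under 'positional_scores'; averaging and exponentiating gives perplexity.
scores = en_lm.score('Barack Obama is coming to Sydney and New Zealand')
perplexity = scores['positional_scores'].mean().neg().exp()
print(perplexity)  # lower is better
```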

## Training a new model with the CLI tools

These scripts provide an example of pre-processing data for the language modeling task.

### prepare-wikitext-103.sh

Provides an example of pre-processing for the [WikiText-103 language modeling task](https://www.salesforce.com/products/einstein/ai-research/the-wikitext-dependency-language-modeling-dataset/):

Example usage:

Prepare and binarize the data:
```bash
cd examples/language_model/
bash prepare-wikitext-103.sh
cd ../..

# Binarize the dataset:
TEXT=examples/language_model/wikitext-103

fairseq-preprocess --only-source \
    --trainpref $TEXT/wiki.train.tokens --validpref $TEXT/wiki.valid.tokens --testpref $TEXT/wiki.test.tokens \
    --destdir data-bin/wikitext-103
```

Train a transformer language model with adaptive inputs ([Baevski and Auli (2018): Adaptive Input Representations for Neural Language Modeling](transformer_lm/README.md)):
```bash
# If it runs out of memory, try to reduce max-tokens and tokens-per-sample
mkdir -p checkpoints/transformer_wikitext-103
fairseq-train --task language_modeling data-bin/wikitext-103 \
    --save-dir checkpoints/transformer_wikitext-103 --arch transformer_lm_wiki103 \
    --max-update 286000 --max-lr 1.0 --t-mult 2 --lr-period-updates 270000 --lr-scheduler cosine --lr-shrink 0.75 \
    --warmup-updates 16000 --warmup-init-lr 1e-07 --min-lr 1e-09 --optimizer nag --lr 0.0001 --clip-norm 0.1 \
    --criterion adaptive_loss --max-tokens 3072 --update-freq 3 --tokens-per-sample 3072 --seed 1 \
    --sample-break-mode none --skip-invalid-size-inputs-valid-test --ddp-backend=no_c10d

# Evaluate:
fairseq-eval-lm data-bin/wikitext-103 --path 'checkpoints/transformer_wikitext-103/checkpoint_best.pt' \
    --sample-break-mode complete --max-tokens 3072 --context-window 2560 --softmax-batch 1024
```
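
Once training has finished, the checkpoint can also be loaded from Python for interactive sampling. The following is a minimal sketch, assuming the checkpoint and binarized data paths from the commands above; `from_pretrained` is fairseq's generic model-loading helper:

```python
from fairseq.models.transformer_lm import TransformerLanguageModel

# Load the checkpoint trained above (paths assume the commands in this README).
custom_lm = TransformerLanguageModel.from_pretrained(
    'checkpoints/transformer_wikitext-103',       # directory containing the checkpoint
    checkpoint_file='checkpoint_best.pt',
    data_name_or_path='data-bin/wikitext-103',    # binarized data (provides the dictionary)
)

# Returns a string continuation of the prompt.
custom_lm.sample('Barack Obama', beam=5)
```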

Train a convolutional language model ([Dauphin et al. (2017): Language Modeling with Gated Convolutional Networks](conv_lm/README.md)):
```bash
# If it runs out of memory, try to reduce max-tokens and tokens-per-sample
mkdir -p checkpoints/fconv_wikitext-103
fairseq-train --task language_modeling data-bin/wikitext-103 \
    --save-dir checkpoints/fconv_wikitext-103 \
    --max-epoch 35 --arch fconv_lm_dauphin_wikitext103 --optimizer nag \
    --lr 1.0 --lr-scheduler reduce_lr_on_plateau --lr-shrink 0.5 \
    --clip-norm 0.1 --dropout 0.2 --weight-decay 5e-06 --criterion adaptive_loss \
    --adaptive-softmax-cutoff 10000,20000,200000 --max-tokens 1024 --tokens-per-sample 1024 \
    --ddp-backend=no_c10d

# Evaluate:
fairseq-eval-lm data-bin/wikitext-103 --path 'checkpoints/fconv_wikitext-103/checkpoint_best.pt'
```