# <img src="fairseq_logo.png" width="30"> Introduction

Fairseq(-py) is a sequence modeling toolkit that allows researchers and
developers to train custom models for translation, summarization, language
modeling, and other text generation tasks.

### What's New:

- August 2019: [WMT'19 models released](examples/wmt19/README.md)
- July 2019: fairseq relicensed under MIT license
- July 2019: [RoBERTa models and code released](examples/roberta/README.md)
- June 2019: [wav2vec models and code released](examples/wav2vec/README.md)

### Features:

Fairseq provides reference implementations of various sequence-to-sequence models, including:
- **Convolutional Neural Networks (CNN)**
  - [Language Modeling with Gated Convolutional Networks (Dauphin et al., 2017)](examples/language_model/conv_lm/README.md)
  - [Convolutional Sequence to Sequence Learning (Gehring et al., 2017)](examples/conv_seq2seq/README.md)
  - [Classical Structured Prediction Losses for Sequence to Sequence Learning (Edunov et al., 2018)](https://github.com/pytorch/fairseq/tree/classic_seqlevel)
  - [Hierarchical Neural Story Generation (Fan et al., 2018)](examples/stories/README.md)
  - [wav2vec: Unsupervised Pre-training for Speech Recognition (Schneider et al., 2019)](examples/wav2vec/README.md)
- **LightConv and DynamicConv models**
  - [Pay Less Attention with Lightweight and Dynamic Convolutions (Wu et al., 2019)](examples/pay_less_attention_paper/README.md)
- **Long Short-Term Memory (LSTM) networks**
  - Effective Approaches to Attention-based Neural Machine Translation (Luong et al., 2015)
- **Transformer (self-attention) networks**
  - Attention Is All You Need (Vaswani et al., 2017)
  - [Scaling Neural Machine Translation (Ott et al., 2018)](examples/scaling_nmt/README.md)
  - [Understanding Back-Translation at Scale (Edunov et al., 2018)](examples/backtranslation/README.md)
  - [Adaptive Input Representations for Neural Language Modeling (Baevski and Auli, 2018)](examples/language_model/transformer_lm/README.md)
  - [Mixture Models for Diverse Machine Translation: Tricks of the Trade (Shen et al., 2019)](examples/translation_moe/README.md)
  - [RoBERTa: A Robustly Optimized BERT Pretraining Approach (Liu et al., 2019)](examples/roberta/README.md)
  - [Facebook FAIR's WMT19 News Translation Task Submission (Ng et al., 2019)](examples/wmt19/README.md)

**Additionally:**
- multi-GPU (distributed) training on one machine or across multiple machines
- fast generation on both CPU and GPU with multiple search algorithms implemented (see the sketch after this list):
  - beam search
  - Diverse Beam Search ([Vijayakumar et al., 2016](https://arxiv.org/abs/1610.02424))
  - sampling (unconstrained, top-k and top-p/nucleus)
- large mini-batch training even on a single GPU via delayed updates
- mixed precision training (trains faster with less GPU memory on [NVIDIA tensor cores](https://developer.nvidia.com/tensor-cores))
- extensible: easily register new models, criterions, tasks, optimizers and learning rate schedulers
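
The sketch below illustrates how the generation and training options above are
exposed as command-line flags. It is illustrative only: the data directory,
checkpoint path and hyperparameters are placeholders, and the full set of
options is documented in the docs and `--help` output.

```bash
# Generation: the search algorithm is selected with flags.
fairseq-generate data-bin/my_dataset --path checkpoints/model.pt \
    --beam 5                                 # beam search
fairseq-generate data-bin/my_dataset --path checkpoints/model.pt \
    --beam 5 --diverse-beam-groups 5         # Diverse Beam Search
fairseq-generate data-bin/my_dataset --path checkpoints/model.pt \
    --beam 1 --sampling --sampling-topk 10   # top-k sampling

# Training: --update-freq accumulates gradients over N mini-batches before
# each update (delayed updates, i.e. a larger effective batch size), and
# --fp16 enables mixed precision training.
fairseq-train data-bin/my_dataset --arch transformer_wmt_en_de \
    --optimizer adam --lr 5e-4 --max-tokens 4096 \
    --update-freq 16 --fp16
```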

We also provide [pre-trained models](#pre-trained-models-and-examples) for several benchmark
translation and language modeling datasets.

![Model](fairseq.gif)

# Requirements and Installation

* [PyTorch](http://pytorch.org/) version >= 1.1.0
* Python version >= 3.5
* For training new models, you'll also need an NVIDIA GPU and [NCCL](https://github.com/NVIDIA/nccl)
* **For faster training** install NVIDIA's [apex](https://github.com/NVIDIA/apex) library with the `--cuda_ext` option
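
At the time of writing, apex's own README builds those extensions roughly as
follows (a sketch; check the [apex](https://github.com/NVIDIA/apex) repository
for the current instructions):

```bash
# Build apex from source with its C++ and CUDA extensions enabled.
git clone https://github.com/NVIDIA/apex
cd apex
pip install -v --no-cache-dir \
    --global-option="--cpp_ext" --global-option="--cuda_ext" ./
```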

To install fairseq:
```bash
pip install fairseq
```

On macOS:
```bash
CFLAGS="-stdlib=libc++" pip install fairseq
```

If you use Docker, make sure to increase the shared memory size either with
`--ipc=host` or `--shm-size` as command-line options to `nvidia-docker run`.
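
For example (the image name below is a placeholder for your own image):

```bash
# Share the host's IPC namespace so PyTorch data loaders get enough shared memory.
nvidia-docker run --ipc=host -it --rm my-fairseq-image bash
```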

**Installing from source**

To install fairseq from source and develop locally:
```bash
git clone https://github.com/pytorch/fairseq
cd fairseq
pip install --editable .
```

# Getting Started

The [full documentation](https://fairseq.readthedocs.io/) contains instructions
for getting started, training new models, and extending fairseq with new model
types and tasks.
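
As a quick taste, the typical workflow is: binarize data with
`fairseq-preprocess`, train with `fairseq-train`, then decode with
`fairseq-generate`. The sketch below is illustrative only; paths and
hyperparameters are placeholders, and the documentation and the
[translation example](examples/translation/README.md) give complete, tested
commands.

```bash
# Binarize a tokenized parallel corpus (file prefixes are placeholders).
fairseq-preprocess --source-lang de --target-lang en \
    --trainpref data/train --validpref data/valid --testpref data/test \
    --destdir data-bin/iwslt14.de-en

# Train a Transformer model on the binarized data.
fairseq-train data-bin/iwslt14.de-en \
    --arch transformer_iwslt_de_en --optimizer adam --lr 5e-4 \
    --criterion label_smoothed_cross_entropy --label-smoothing 0.1 \
    --lr-scheduler inverse_sqrt --warmup-updates 4000 --max-tokens 4096

# Translate the test set with beam search.
fairseq-generate data-bin/iwslt14.de-en \
    --path checkpoints/checkpoint_best.pt --beam 5 --remove-bpe
```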

# Pre-trained models and examples

We provide pre-trained models and pre-processed, binarized test sets for several tasks listed below,
as well as example training and evaluation commands.

- [Translation](examples/translation/README.md): convolutional and transformer models are available
- [Language Modeling](examples/language_model/README.md): convolutional and transformer models are available
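
For instance, once a pre-trained translation model and its binarized test set
have been downloaded from the translation README, evaluation looks roughly
like this (both paths below are illustrative):

```bash
# Score a downloaded model on a downloaded, binarized test set
# (the translation README lists the actual download URLs and paths).
fairseq-generate data-bin/wmt14.en-fr.newstest2014 \
    --path wmt14.en-fr.fconv-py/model.pt \
    --beam 5 --batch-size 128 --remove-bpe
```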

We also have more detailed READMEs to reproduce results from specific papers:
- [Facebook FAIR's WMT19 News Translation Task Submission (Ng et al., 2019)](examples/wmt19/README.md)
- [RoBERTa: A Robustly Optimized BERT Pretraining Approach (Liu et al., 2019)](examples/roberta/README.md)
- [wav2vec: Unsupervised Pre-training for Speech Recognition (Schneider et al., 2019)](examples/wav2vec/README.md)
- [Mixture Models for Diverse Machine Translation: Tricks of the Trade (Shen et al., 2019)](examples/translation_moe/README.md)
- [Pay Less Attention with Lightweight and Dynamic Convolutions (Wu et al., 2019)](examples/pay_less_attention_paper/README.md)
- [Understanding Back-Translation at Scale (Edunov et al., 2018)](examples/backtranslation/README.md)
- [Classical Structured Prediction Losses for Sequence to Sequence Learning (Edunov et al., 2018)](https://github.com/pytorch/fairseq/tree/classic_seqlevel)
- [Hierarchical Neural Story Generation (Fan et al., 2018)](examples/stories/README.md)
- [Scaling Neural Machine Translation (Ott et al., 2018)](examples/scaling_nmt/README.md)
- [Convolutional Sequence to Sequence Learning (Gehring et al., 2017)](examples/conv_seq2seq/README.md)
- [Language Modeling with Gated Convolutional Networks (Dauphin et al., 2017)](examples/language_model/conv_lm/README.md)

# Join the fairseq community

* Facebook page: https://www.facebook.com/groups/fairseq.users
* Google group: https://groups.google.com/forum/#!forum/fairseq-users

# License
fairseq(-py) is MIT-licensed.
The license applies to the pre-trained models as well.

# Citation

Please cite as:

```bibtex
@inproceedings{ott2019fairseq,
  title = {fairseq: A Fast, Extensible Toolkit for Sequence Modeling},
  author = {Myle Ott and Sergey Edunov and Alexei Baevski and Angela Fan and Sam Gross and Nathan Ng and David Grangier and Michael Auli},
  booktitle = {Proceedings of NAACL-HLT 2019: Demonstrations},
  year = {2019},
}
```