This is a PyTorch version of [fairseq](https://github.com/facebookresearch/fairseq), a sequence-to-sequence learning toolkit from Facebook AI Research. The original authors of this reimplementation are (in no particular order) Sergey Edunov, Myle Ott, and Sam Gross. The toolkit implements the fully convolutional model described in [Convolutional Sequence to Sequence Learning](https://arxiv.org/abs/1705.03122) and features multi-GPU training on a single machine as well as fast beam search generation on both CPU and GPU. We provide pre-trained models for English to French and English to German translation.


...

If you use the code in your paper, then please cite it as:

...
Currently fairseq-py requires PyTorch from the GitHub repository. There are multiple ways of installing it.
We suggest using [Miniconda3](https://conda.io/miniconda.html) and the following instructions.
* Install Miniconda3 from https://conda.io/miniconda.html; create and activate a Python 3 environment.
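A minimal sketch of these steps is shown below. The environment name, Python version, and checkout location are illustrative assumptions, not part of the official instructions; building PyTorch from source may require additional dependencies on your system.

```bash
# Create and activate a Python 3 environment (name and version are illustrative).
conda create --name fairseq-py python=3.6 numpy
source activate fairseq-py

# One way to get PyTorch from the GitHub repository: build it from source.
# (Additional build dependencies such as cmake may be needed.)
git clone --recursive https://github.com/pytorch/pytorch
cd pytorch
python setup.py install
cd ..

# Finally, install fairseq-py itself from your checkout of this repository
# (see the repository's own build instructions for the exact commands).
```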
By default, `python train.py` will use all available GPUs on your machine.
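For example, the standard `CUDA_VISIBLE_DEVICES` environment variable can be used to control which GPUs training sees. The dataset path, architecture, and output directory below are illustrative placeholders; adjust them to your own setup.

```bash
# Default: train.py spreads training across every visible GPU.
python train.py data-bin/my-dataset --arch fconv --save-dir checkpoints/fconv

# Restrict training to GPUs 0 and 1 only.
CUDA_VISIBLE_DEVICES=0,1 python train.py data-bin/my-dataset --arch fconv --save-dir checkpoints/fconv
```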
...

You may need to use a smaller value depending on the available GPU memory on your machine.
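The flag in question is described in the training instructions elided above; assuming it is the token-based batch size limit (`--max-tokens` is an assumption here, and the number is illustrative), lowering it would look like this:

```bash
# Reduce the per-batch token budget if training runs out of GPU memory.
python train.py data-bin/my-dataset --arch fconv --max-tokens 2000 --save-dir checkpoints/fconv
```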
Once your model is trained, you can generate translations using `python generate.py` **(for binarized data)** or `python generate.py -i` **(for raw text)**:
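A sketch of both modes, assuming a trained checkpoint and a binarized test set; the dataset directory and checkpoint path are illustrative, and the flag names follow fairseq-py's generation options.

```bash
# Batch generation from binarized data.
python generate.py data-bin/my-dataset --path checkpoints/fconv/checkpoint_best.pt --beam 5 --batch-size 128

# Interactive generation: source sentences are read as raw text from stdin.
python generate.py -i data-bin/my-dataset --path checkpoints/fconv/checkpoint_best.pt --beam 5
```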