Commit 19a5852f authored by Zhilin Yang, committed by GitHub

Update README.md

parent 6c747fe1
@@ -7,18 +7,18 @@ This repository contains the code in both **PyTorch** and **TensorFlow** for our
 >Preprint 2018
-#### TensorFlow
+## TensorFlow
 - The source code is in the `tf/` folder, supporting (1) single-node multi-GPU training, and (2) multi-host TPU training.
 - Besides the source code, we also provide pretrained TensorFlow models with the state-of-the-art (SoTA) performance reported in the paper.
 - Please refer to `tf/README.md` for details.
-#### PyTorch
+## PyTorch
 - The source code is in the `pytorch/` folder, supporting single-node multi-GPU training via the module `nn.DataParallel`.
 - Please refer to `pytorch/README.md` for details.
-#### Results
+## Results
 Transformer-XL achieves new state-of-the-art results on multiple language modeling benchmarks. Transformer-XL is also the first to break through the 1.0 barrier on char-level language modeling. Below is a summary.
...
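For readers unfamiliar with the module named in the PyTorch section above, here is a minimal, illustrative sketch of how `nn.DataParallel` enables single-node multi-GPU training. It is not code from this repository; the placeholder model merely stands in for the actual Transformer-XL implementation in `pytorch/`.

```python
import torch
import torch.nn as nn

# Placeholder model; the repository's actual model is defined in pytorch/.
model = nn.Sequential(nn.Linear(512, 512), nn.ReLU(), nn.Linear(512, 512))

device = 'cuda' if torch.cuda.is_available() else 'cpu'
if torch.cuda.device_count() > 1:
    # nn.DataParallel replicates the module on each visible GPU, splits every
    # input batch along dim 0 across the replicas, and gathers the outputs
    # back on the default device.
    model = nn.DataParallel(model)
model = model.to(device)

batch = torch.randn(8, 512, device=device)
output = model(batch)  # shape: (8, 512)
```

Note that `nn.DataParallel` runs in a single process, which is why this code path is limited to a single node, matching the scope stated above.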