"tests/test_modeling_xlm.py" did not exist on "47f0e3cfb7df192ab80215cea9096791fce08694"
README.md 14.7 KB
Newer Older
thomwolf's avatar
thomwolf committed
1
# 👾 PyTorch-Transformers
[![CircleCI](https://circleci.com/gh/huggingface/pytorch-pretrained-BERT.svg?style=svg)](https://circleci.com/gh/huggingface/pytorch-pretrained-BERT)

PyTorch-Transformers is a library of state-of-the-art pre-trained models for Natural Language Processing (NLP). The library currently contains PyTorch implementations, pre-trained model weights, usage scripts and conversion utilities for the following models:
- **[Google's BERT model](https://github.com/google-research/bert)** released with the paper [BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding](https://arxiv.org/abs/1810.04805) by Jacob Devlin, Ming-Wei Chang, Kenton Lee and Kristina Toutanova.
- **[OpenAI's GPT model](https://github.com/openai/finetune-transformer-lm)** released with the paper [Improving Language Understanding by Generative Pre-Training](https://blog.openai.com/language-unsupervised/) by Alec Radford, Karthik Narasimhan, Tim Salimans and Ilya Sutskever.
- **[OpenAI's GPT-2 model](https://blog.openai.com/better-language-models/)** released with the paper [Language Models are Unsupervised Multitask Learners](https://blog.openai.com/better-language-models/) by Alec Radford*, Jeffrey Wu*, Rewon Child, David Luan, Dario Amodei** and Ilya Sutskever**.
- **[Google/CMU's Transformer-XL model](https://github.com/kimiyoung/transformer-xl)** released with the paper [Transformer-XL: Attentive Language Models Beyond a Fixed-Length Context](https://arxiv.org/abs/1901.02860) by Zihang Dai*, Zhilin Yang*, Yiming Yang, Jaime Carbonell, Quoc V. Le, Ruslan Salakhutdinov.
- **[Google/CMU's XLNet model](https://github.com/zihangdai/xlnet/)** released with the paper [XLNet: Generalized Autoregressive Pretraining for Language Understanding](https://arxiv.org/abs/1906.08237) by Zhilin Yang*, Zihang Dai*, Yiming Yang, Jaime Carbonell, Ruslan Salakhutdinov, Quoc V. Le.
- **[Facebook's XLM model](https://github.com/facebookresearch/XLM/)** released together with the paper [Cross-lingual Language Model Pretraining](https://arxiv.org/abs/1901.07291) by Guillaume Lample and Alexis Conneau.

These implementations have been tested on several datasets (see the example scripts) and should match the performances of the original implementations (e.g. ~93 F1 on SQuAD for BERT Whole-Word-Masking, ~88 F1 on RocStories for OpenAI GPT, ~18.3 perplexity on WikiText 103 for Transformer-XL, ~0.916 Pearson R coefficient on STS-B for XLNet). You can find more details on the performances in the Examples section of the [documentation](#documentation).

| Section | Description |
|-|-|
| [Installation](#installation) | How to install the package |
| [Quick tour: Usage](#quick-tour-usage) | Tokenizers & models usage: Bert and GPT-2 |
| [Quick tour: Fine-tuning/usage scripts](#quick-tour-fine-tuningusage-scripts) | Using provided scripts: GLUE, SQuAD and Text generation |
| [Documentation](#documentation) | Full API documentation and more |
## Installation
This repo is tested on Python 2.7 and 3.5+ (examples are tested only on Python 3.5+) and PyTorch 0.4.1 to 1.1.0.
### With pip
PyTorch-Transformers can be installed by pip as follows:
```bash
pip install pytorch-transformers
```
### From source

Clone the repository and run:
```bash
pip install [--editable] .
```
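
For example, a from-source install might look like the following (the repository URL is the one used for the links above; `--editable` is optional and installs the package in development mode):

```bash
git clone https://github.com/huggingface/pytorch-transformers.git
cd pytorch-transformers
pip install --editable .
```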
### Tests
A series of tests is included for the library and the example scripts. Library tests can be found in the [tests folder](https://github.com/huggingface/pytorch-transformers/tree/master/pytorch_transformers/tests) and examples tests in the [examples folder](https://github.com/huggingface/pytorch-transformers/tree/master/examples).

These tests can be run using `pytest` (install pytest if needed with `pip install pytest`).

You can run the tests from the root of the cloned repository with the commands:
```bash
python -m pytest -sv ./pytorch_transformers/tests/
python -m pytest -sv ./examples/
```
## Quick tour: Usage
Here are two quick-start examples using `Bert` and `GPT2` with pre-trained models.

See the [documentation](#documentation) for the details of all the models and classes.
### BERT example
First let's prepare a tokenized input from a text string using `BertTokenizer`

```python
import torch
from pytorch_transformers import BertTokenizer, BertModel, BertForMaskedLM
# OPTIONAL: if you want to have more information on what's happening under the hood, activate the logger as follows
import logging
logging.basicConfig(level=logging.INFO)

# Load pre-trained model tokenizer (vocabulary)
tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')

# Tokenize input
text = "[CLS] Who was Jim Henson ? [SEP] Jim Henson was a puppeteer [SEP]"
tokenized_text = tokenizer.tokenize(text)

# Mask a token that we will try to predict back with `BertForMaskedLM`
masked_index = 8
tokenized_text[masked_index] = '[MASK]'
assert tokenized_text == ['[CLS]', 'who', 'was', 'jim', 'henson', '?', '[SEP]', 'jim', '[MASK]', 'was', 'a', 'puppet', '##eer', '[SEP]']

# Convert tokens to vocabulary indices
indexed_tokens = tokenizer.convert_tokens_to_ids(tokenized_text)
# Define sentence A and B indices associated with 1st and 2nd sentences (see paper)
segments_ids = [0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1]
# Convert inputs to PyTorch tensors
tokens_tensor = torch.tensor([indexed_tokens])
segments_tensors = torch.tensor([segments_ids])
```

Let's see how we can use `BertModel` to encode our inputs into hidden states:

```python
# Load pre-trained model (weights)
model = BertModel.from_pretrained('bert-base-uncased')

# Set the model in evaluation mode to deactivate the DropOut modules
# This is IMPORTANT to have reproducible results during evaluation!
model.eval()
# If you have a GPU, put everything on cuda
tokens_tensor = tokens_tensor.to('cuda')
segments_tensors = segments_tensors.to('cuda')
model.to('cuda')

# Predict hidden states features for each layer
with torch.no_grad():
    # See the models docstrings for the detail of the inputs
    outputs = model(tokens_tensor, token_type_ids=segments_tensors)
    # PyTorch-Transformers models always output tuples.
    # See the models docstrings for the detail of all the outputs
    # In our case, the first element is the hidden state of the last layer of the Bert model
    encoded_layers = outputs[0]
# We have encoded our input sequence in a FloatTensor of shape (batch size, sequence length, model hidden dimension)
assert tuple(encoded_layers.shape) == (1, len(indexed_tokens), model.config.hidden_size)
```

And how to use `BertForMaskedLM` to predict a masked token:

```python
# Load pre-trained model (weights)
model = BertForMaskedLM.from_pretrained('bert-base-uncased')
model.eval()

# If you have a GPU, put everything on cuda
tokens_tensor = tokens_tensor.to('cuda')
segments_tensors = segments_tensors.to('cuda')
model.to('cuda')

# Predict all tokens
with torch.no_grad():
    outputs = model(tokens_tensor, token_type_ids=segments_tensors)
    predictions = outputs[0]
# confirm we were able to predict 'henson'
predicted_index = torch.argmax(predictions[0, masked_index]).item()
predicted_token = tokenizer.convert_ids_to_tokens([predicted_index])[0]
assert predicted_token == 'henson'
```
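
If you want to look at more than the single best prediction, a short optional follow-up (building on the variables defined above, not part of the original example) is:

```python
# Inspect the top-5 candidate tokens for the masked position instead of only the argmax
top_k_values, top_k_indices = torch.topk(predictions[0, masked_index], k=5)
top_k_tokens = tokenizer.convert_ids_to_tokens(top_k_indices.tolist())
print(top_k_tokens)  # 'henson' should be among the top candidates
```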

### OpenAI GPT-2
Here is a quick-start example using the `GPT2Tokenizer` and `GPT2LMHeadModel` classes with OpenAI's pre-trained model to predict the next token from a text prompt.

First let's prepare a tokenized input from our text string using `GPT2Tokenizer`

```python
import torch
from pytorch_transformers import GPT2Tokenizer, GPT2LMHeadModel
# OPTIONAL: if you want to have more information on what's happening, activate the logger as follows
import logging
logging.basicConfig(level=logging.INFO)

# Load pre-trained model tokenizer (vocabulary)
tokenizer = GPT2Tokenizer.from_pretrained('gpt2')
# Encode a text input
text = "Who was Jim Henson ? Jim Henson was a"
indexed_tokens = tokenizer.encode(text)
# Convert indexed tokens to a PyTorch tensor
tokens_tensor = torch.tensor([indexed_tokens])
```

Let's see how to use `GPT2LMHeadModel` to generate the next token following our text:

```python
# Load pre-trained model (weights)
model = GPT2LMHeadModel.from_pretrained('gpt2')
# Set the model in evaluation mode to deactivate the DropOut modules
# This is IMPORTANT to have reproducible results during evaluation!
model.eval()

# If you have a GPU, put everything on cuda
tokens_tensor = tokens_tensor.to('cuda')
model.to('cuda')

# Predict all tokens
with torch.no_grad():
    outputs = model(tokens_tensor)
    predictions = outputs[0]
# get the predicted next sub-word (in our case, the word 'man')
predicted_index = torch.argmax(predictions[0, -1, :]).item()
predicted_text = tokenizer.decode(indexed_tokens + [predicted_index])
assert predicted_text == 'Who was Jim Henson? Jim Henson was a man'
```
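
Building on the snippet above, here is a minimal greedy-decoding sketch that generates a few more tokens by repeatedly feeding the growing sequence back into the model (a simple illustration only, not the library's generation script):

```python
# Greedily append 10 more tokens to the encoded prompt
device = next(model.parameters()).device  # reuse whatever device the model is on
generated = list(indexed_tokens)
with torch.no_grad():
    for _ in range(10):
        input_ids = torch.tensor([generated]).to(device)
        outputs = model(input_ids)
        next_token = torch.argmax(outputs[0][0, -1, :]).item()
        generated.append(next_token)

print(tokenizer.decode(generated))
```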

Examples for each model class of each model architecture (Bert, GPT, GPT-2, Transformer-XL, XLNet and XLM) can be found in the [documentation](#documentation).
## Quick tour: Fine-tuning/usage scripts
The library comprises several example scripts with SOTA performance for NLU and NLG tasks:
- fine-tuning Bert/XLNet/XLM with a *sequence-level classifier* on nine different GLUE tasks,
- fine-tuning Bert/XLNet/XLM with a *token-level classifier* on the question answering dataset SQuAD 2.0, and
- using GPT/GPT-2/Transformer-XL and XLNet for conditional language generation.

Here are three quick usage examples for these scripts:
### Fine-tuning for sequence classification: GLUE tasks examples
The [General Language Understanding Evaluation (GLUE) benchmark](https://gluebenchmark.com/) is a collection of nine sentence- or sentence-pair language understanding tasks for evaluating and analyzing natural language understanding systems.

Before running any of these GLUE tasks you should download the
[GLUE data](https://gluebenchmark.com/tasks) by running
[this script](https://gist.github.com/W4ngatang/60c2bdb54d156a41194446737ce03e2e)
and unpack it to some directory `$GLUE_DIR`.
```shell
export GLUE_DIR=/path/to/glue
export TASK_NAME=MRPC

python run_bert_classifier.py \
  --task_name $TASK_NAME \
  --do_train \
  --do_eval \
  --do_lower_case \
  --data_dir $GLUE_DIR/$TASK_NAME \
  --bert_model bert-base-uncased \
  --max_seq_length 128 \
  --train_batch_size 32 \
  --learning_rate 2e-5 \
  --num_train_epochs 3.0 \
  --output_dir /tmp/$TASK_NAME/
```

where task name can be one of CoLA, SST-2, MRPC, STS-B, QQP, MNLI, QNLI, RTE, WNLI.

The dev set results will be written to a text file `eval_results.txt` in the specified `output_dir`. In the case of MNLI, since there are two separate dev sets (matched and mismatched), there will be a separate output folder called `/tmp/MNLI-MM/` in addition to `/tmp/MNLI/`.
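
For instance, assuming the file keeps the usual `metric = value` layout shown in the sample results further below, a small (hypothetical) helper to read the metrics back into Python could look like:

```python
# Hypothetical helper: parse the "key = value" lines of eval_results.txt into a dict
def read_eval_results(path):
    results = {}
    with open(path) as f:
        for line in f:
            if "=" in line:
                key, value = line.split("=", 1)
                results[key.strip()] = float(value.strip())
    return results

metrics = read_eval_results("/tmp/MRPC/eval_results.txt")  # matches the output_dir used above
print(metrics)
```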

#### Fine-tuning XLNet model on the STS-B regression task
This example code fine-tunes XLNet on the STS-B corpus using parallel training on a server with 4 V100 GPUs.
Parallel training is a simple way to use several GPUs (but it is slower and less flexible than distributed training, see below).
```shell
export GLUE_DIR=/path/to/glue

python ./examples/run_glue.py \
    --model_type xlnet \
    --model_name_or_path xlnet-large-cased \
    --do_train  \
    --task_name=sts-b     \
    --data_dir=${GLUE_DIR}/STS-B  \
    --output_dir=./proc_data/sts-b-110   \
    --max_seq_length=128   \
    --per_gpu_eval_batch_size=8   \
    --per_gpu_train_batch_size=8   \
    --gradient_accumulation_steps=1 \
    --max_steps=1200  \
    --overwrite_output_dir   \
    --overwrite_cache \
    --warmup_steps=120
```

On this machine we thus have a batch size of 32 (8 per GPU × 4 GPUs); if you have a smaller machine, please increase `gradient_accumulation_steps` to reach the same effective batch size.
These hyper-parameters give an evaluation Pearson correlation coefficient of `0.918`.
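
In other words, the effective batch size is the product of the per-GPU batch size, the number of GPUs and the gradient accumulation steps; a quick sanity check for the command above:

```python
# Effective batch size for the XLNet STS-B command above
per_gpu_train_batch_size = 8
n_gpu = 4
gradient_accumulation_steps = 1
effective_batch_size = per_gpu_train_batch_size * n_gpu * gradient_accumulation_steps
assert effective_batch_size == 32  # on fewer GPUs, raise gradient_accumulation_steps to keep this at 32
```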
#### Fine-tuning Bert model on the MRPC classification task
This example code fine-tunes the Bert Whole Word Masking model on the Microsoft Research Paraphrase Corpus (MRPC) using distributed training on 8 V100 GPUs to reach an F1 > 92.
```bash
python -m torch.distributed.launch --nproc_per_node 8 ./examples/run_glue.py \
    --model_type bert \
    --model_name_or_path bert-large-uncased-whole-word-masking \
    --task_name MRPC \
    --do_train   \
    --do_eval   \
    --do_lower_case   \
    --data_dir $GLUE_DIR/MRPC/   \
    --max_seq_length 128   \
    --per_gpu_eval_batch_size=8   \
    --per_gpu_train_batch_size=8   \
    --learning_rate 2e-5   \
    --num_train_epochs 3.0  \
    --output_dir /tmp/mrpc_output/ \
    --overwrite_output_dir   \
    --overwrite_cache
```

Training with these hyper-parameters gave us the following results:
```bash
  acc = 0.8823529411764706
  acc_and_f1 = 0.901702786377709
  eval_loss = 0.3418912578906332
  f1 = 0.9210526315789473
  global_step = 174
  loss = 0.07231863956341798
```

### Fine-tuning for question-answering: SQuAD example
This example code fine-tunes the Bert Whole Word Masking uncased model on the SQuAD dataset using distributed training on 8 V100 GPUs to reach an F1 > 93:
```bash
python -m torch.distributed.launch --nproc_per_node=8 run_squad.py \
    --model_type bert \
    --model_name_or_path bert-large-uncased-whole-word-masking \
    --do_train \
    --do_predict \
    --do_lower_case \
    --train_file $SQUAD_DIR/train-v1.1.json \
    --predict_file $SQUAD_DIR/dev-v1.1.json \
    --learning_rate 3e-5 \
    --num_train_epochs 2 \
    --max_seq_length 384 \
    --doc_stride 128 \
    --output_dir ../models/wwm_uncased_finetuned_squad/ \
    --per_gpu_eval_batch_size=3   \
    --per_gpu_train_batch_size=3
```

Training with these hyper-parameters gave us the following results:
```bash
python $SQUAD_DIR/evaluate-v1.1.py $SQUAD_DIR/dev-v1.1.json ../models/wwm_uncased_finetuned_squad/predictions.json
{"exact_match": 86.91579943235573, "f1": 93.1532499015869}
```

This is the model provided as `bert-large-uncased-whole-word-masking-finetuned-squad`.
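
As a quick sanity check that the fine-tuned weights load (a minimal sketch, assuming this shortcut name is available in your installed version of the library):

```python
from pytorch_transformers import BertTokenizer, BertForQuestionAnswering

tokenizer = BertTokenizer.from_pretrained('bert-large-uncased-whole-word-masking-finetuned-squad')
model = BertForQuestionAnswering.from_pretrained('bert-large-uncased-whole-word-masking-finetuned-squad')
model.eval()  # the model outputs start and end scores for answer spans
```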

### Conditional generation: Text generation with GPT, GPT-2, Transformer-XL and XLNet

A conditional generation script is also included to generate text from a prompt.
The generation script includes the [tricks](https://github.com/rusiaaman/XLNet-gen#methodology) proposed by Aman Rusia to get high-quality generation with memory models like Transformer-XL and XLNet (a predefined text is prepended to make short inputs longer).

Here is how to run the script with the small version of the OpenAI GPT-2 model:

```shell
python ./examples/run_generation.py \
    --model_type=gpt2 \
    --length=20 \
    --model_name_or_path=gpt2
```

## Documentation
The full documentation is available at https://huggingface.co/pytorch-transformers/.
## Citation
At the moment, there is no paper to cite for PyTorch-Transformers, but we are working on preparing one.
In the meantime, please include a mention of the library and a link to the present repository if you use this work in a published or open-source project.