<!---
Copyright 2020 The HuggingFace Team. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
-->

## Language model training

Fine-tuning (or training from scratch) the library models for language modeling on a text dataset for GPT, GPT-2,
ALBERT, BERT, DistilBERT, RoBERTa, XLNet... GPT and GPT-2 are trained or fine-tuned using a causal language modeling
(CLM) loss while ALBERT, BERT, DistilBERT and RoBERTa are trained or fine-tuned using a masked language modeling (MLM)
loss. XLNet uses permutation language modeling (PLM); you can find more information about the differences between those
objectives in our [model summary](https://huggingface.co/transformers/model_summary.html).

These scripts leverage the 🤗 Datasets library and the Trainer API. You can easily customize them if you need extra
processing on your datasets.

**Note:** The old script `run_language_modeling.py` is still available
[here](https://github.com/huggingface/transformers/blob/master/examples/contrib/legacy/run_language_modeling.py).

The following examples will run on a dataset hosted on our [hub](https://huggingface.co/datasets) or with your own
text files for training and validation. We give examples of both below.

### GPT-2/GPT and causal language modeling

The following example fine-tunes GPT-2 on WikiText-2. We're using the raw WikiText-2 (no tokens were replaced before
the tokenization). The loss here is that of causal language modeling.

```bash
python run_clm.py \
    --model_name_or_path gpt2 \
    --dataset_name wikitext \
    --dataset_config_name wikitext-2-raw-v1 \
    --do_train \
    --do_eval \
    --output_dir /tmp/test-clm
```

This takes about half an hour to train on a single K80 GPU and about one minute for the evaluation to run. It reaches
a score of ~20 perplexity once fine-tuned on the dataset.
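
If you only want to re-run the evaluation on an already fine-tuned checkpoint (for example, to check the perplexity), a
minimal sketch is to drop `--do_train` and point `--model_name_or_path` at the output directory saved by the command
above (the paths below are only illustrative):

```bash
python run_clm.py \
    --model_name_or_path /tmp/test-clm \
    --dataset_name wikitext \
    --dataset_config_name wikitext-2-raw-v1 \
    --do_eval \
    --output_dir /tmp/test-clm-eval
```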

To run on your own training and validation files, use the following command:

```bash
python run_clm.py \
    --model_name_or_path gpt2 \
    --train_file path_to_train_file \
    --validation_file path_to_validation_file \
    --do_train \
    --do_eval \
    --output_dir /tmp/test-clm
```


### RoBERTa/BERT/DistilBERT and masked language modeling

The following example fine-tunes RoBERTa on WikiText-2. Here too, we're using the raw WikiText-2. The loss is different
as BERT/RoBERTa have a bidirectional mechanism; we're therefore using the same loss that was used during their
pre-training: masked language modeling.

In accordance with the RoBERTa paper, we use dynamic masking rather than static masking. The model may, therefore,
converge slightly more slowly (overfitting takes more epochs).

```bash
python run_mlm.py \
    --model_name_or_path roberta-base \
    --dataset_name wikitext \
    --dataset_config_name wikitext-2-raw-v1 \
    --do_train \
    --do_eval \
    --output_dir /tmp/test-mlm
```

To run on your own training and validation files, use the following command:

```bash
python run_mlm.py \
    --model_name_or_path roberta-base \
    --train_file path_to_train_file \
    --validation_file path_to_validation_file \
    --do_train \
    --do_eval \
    --output_dir /tmp/test-mlm
```

If your dataset is organized with one sample per line, you can use the `--line_by_line` flag (otherwise the script
concatenates all texts and then splits them in blocks of the same length).

**Note:** On TPU, you should use the flag `--pad_to_max_length` in conjunction with the `--line_by_line` flag to make
sure all your batches have the same length.
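
For example, a line-by-line run on TPU might add both flags to the command above (a sketch, assuming your text files
already contain one sample per line):

```bash
python run_mlm.py \
    --model_name_or_path roberta-base \
    --train_file path_to_train_file \
    --validation_file path_to_validation_file \
    --line_by_line \
    --pad_to_max_length \
    --do_train \
    --do_eval \
    --output_dir /tmp/test-mlm
```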

### Whole word masking

The BERT authors released a new version of BERT using Whole Word Masking in May 2019. Instead of masking randomly
selected tokens (which may be part of words), they mask randomly selected words (masking all the tokens corresponding
to that word). This technique has been refined for Chinese in [this paper](https://arxiv.org/abs/1906.08101).

To fine-tune a model using whole word masking, use the following script:
```bash
python run_mlm_wwm.py \
    --model_name_or_path roberta-base \
    --dataset_name wikitext \
    --dataset_config_name wikitext-2-raw-v1 \
    --do_train \
    --do_eval \
    --output_dir /tmp/test-mlm-wwm
```

For Chinese models, we need to generate reference files (which requires the `ltp` library), because the original text
is tokenized at the character level.

**Q:** Why a reference file?

**A:** Suppose we have a Chinese sentence like `我喜欢你`. The original Chinese BERT will tokenize it as
`['我','喜','欢','你']` (character level). But `喜欢` is a whole word. As a proxy for whole word masking, we need a result
like `['我','喜','##欢','你']`, so we need a reference file to tell the script which of the original BERT tokens should be
prefixed with `##`.

**Q:** Why LTP?

**A:** Because the best-known Chinese whole word masking BERT is [Chinese-BERT-wwm](https://github.com/ymcui/Chinese-BERT-wwm)
by HIT, which works well on many Chinese tasks such as CLUE (the Chinese GLUE). Its authors use LTP for word
segmentation, so if we want to fine-tune their model, we need LTP too.

For now, LTP only works well with `transformers==3.2.0`, so we don't add it to `requirements.txt`.
You need to create a separate environment with this version of Transformers to run the `run_chinese_ref.py` script that
will create the reference files. The script is in `examples/contrib`.
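
A minimal sketch of setting up such an environment could look like this (the environment name and the `venv` workflow
are only illustrative; use whichever environment manager you prefer):

```bash
# Isolated environment for the reference-file generation step
python -m venv ltp-ref-env
source ltp-ref-env/bin/activate

# Pin Transformers to the version LTP is known to work with
pip install transformers==3.2.0 ltp
```

Once in the proper environment, run the following: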

```bash
export TRAIN_FILE=/path/to/dataset/wiki.train.raw
export LTP_RESOURCE=/path/to/ltp/tokenizer
export BERT_RESOURCE=/path/to/bert/tokenizer
export SAVE_PATH=/path/to/data/ref.txt

python examples/contrib/run_chinese_ref.py \
    --file_name=$TRAIN_FILE \
    --ltp=$LTP_RESOURCE \
    --bert=$BERT_RESOURCE \
    --save_path=$SAVE_PATH
```

Then you can run the script like this: 

```bash
python run_mlm_wwm.py \
    --model_name_or_path roberta-base \
    --train_file path_to_train_file \
    --validation_file path_to_validation_file \
    --train_ref_file path_to_train_chinese_ref_file \
    --validation_ref_file path_to_validation_chinese_ref_file \
    --do_train \
    --do_eval \
    --output_dir /tmp/test-mlm-wwm
```

**Note:** On TPU, you should use the flag `--pad_to_max_length` to make sure all your batches have the same length.

### XLNet and permutation language modeling

XLNet uses a different training objective, which is permutation language modeling. It is an autoregressive method 
to learn bidirectional contexts by maximizing the expected likelihood over all permutations of the input 
sequence factorization order.

We use the `--plm_probability` flag to define the ratio of length of a span of masked tokens to surrounding 
context length for permutation language modeling.

The `--max_span_length` flag may also be used to limit the length of a span of masked tokens used 
for permutation language modeling.

Here is how to fine-tune XLNet on wikitext-2:

```bash
python run_plm.py \
    --model_name_or_path=xlnet-base-cased \
    --dataset_name wikitext \
    --dataset_config_name wikitext-2-raw-v1 \
    --do_train \
    --do_eval \
    --output_dir /tmp/test-plm
```
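
For example, to experiment with the span-masking settings described above, you can set both flags explicitly (the
values below are only illustrative, not tuned recommendations):

```bash
python run_plm.py \
    --model_name_or_path=xlnet-base-cased \
    --dataset_name wikitext \
    --dataset_config_name wikitext-2-raw-v1 \
    --plm_probability 0.16 \
    --max_span_length 5 \
    --do_train \
    --do_eval \
    --output_dir /tmp/test-plm
```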

To fine-tune it on your own training and validation files, run:

```bash
python run_plm.py \
    --model_name_or_path=xlnet-base-cased \
    --train_file path_to_train_file \
    --validation_file path_to_validation_file \
    --do_train \
    --do_eval \
    --output_dir /tmp/test-plm
```

If your dataset is organized with one sample per line, you can use the `--line_by_line` flag (otherwise the script
concatenates all texts and then splits them in blocks of the same length).

**Note:** On TPU, you should use the flag `--pad_to_max_length` in conjunction with the `--line_by_line` flag to make
sure all your batches have the same length.