<!--Copyright 2023 The HuggingFace Team. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
-->

# Causal language modeling

[[open-in-colab]]

There are two types of language modeling: causal and masked. This guide illustrates causal language modeling.
Causal language models are frequently used for text generation. You can use these models for creative applications like
choose-your-own text adventures or an intelligent coding assistant like Copilot or CodeParrot.

<Youtube id="Vpjb1lu0MDk"/>

Causal language modeling predicts the next token in a sequence of tokens, and the model can only attend to tokens on
the left. This means the model cannot see future tokens. GPT-2 is an example of a causal language model.

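To build intuition for this left-to-right pattern, here is a minimal sketch (not part of the original guide) of the lower-triangular causal mask such models apply during attention, using `torch`:

```py
>>> import torch

>>> # Position i may only attend to positions j <= i: itself and the tokens before it
>>> seq_len = 4
>>> torch.tril(torch.ones(seq_len, seq_len))
tensor([[1., 0., 0., 0.],
        [1., 1., 0., 0.],
        [1., 1., 1., 0.],
        [1., 1., 1., 1.]])
```
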
This guide will show you how to:

1. Finetune [DistilGPT2](https://huggingface.co/distilgpt2) on the [r/askscience](https://www.reddit.com/r/askscience/) subset of the [ELI5](https://huggingface.co/datasets/eli5) dataset.
2. Use your finetuned model for inference.

<Tip>
You can finetune other architectures for causal language modeling following the same steps in this guide.
Choose one of the following architectures:

<!--This tip is automatically generated by `make fix-copies`, do not fill manually!-->

[BART](../model_doc/bart), [BERT](../model_doc/bert), [Bert Generation](../model_doc/bert-generation), [BigBird](../model_doc/big_bird), [BigBird-Pegasus](../model_doc/bigbird_pegasus), [BioGpt](../model_doc/biogpt), [Blenderbot](../model_doc/blenderbot), [BlenderbotSmall](../model_doc/blenderbot-small), [BLOOM](../model_doc/bloom), [CamemBERT](../model_doc/camembert), [CodeGen](../model_doc/codegen), [CTRL](../model_doc/ctrl), [Data2VecText](../model_doc/data2vec-text), [ELECTRA](../model_doc/electra), [ERNIE](../model_doc/ernie), [GIT](../model_doc/git), [GPT-Sw3](../model_doc/gpt-sw3), [OpenAI GPT-2](../model_doc/gpt2), [GPTBigCode](../model_doc/gpt_bigcode), [GPT Neo](../model_doc/gpt_neo), [GPT NeoX](../model_doc/gpt_neox), [GPT NeoX Japanese](../model_doc/gpt_neox_japanese), [GPT-J](../model_doc/gptj), [LLaMA](../model_doc/llama), [Marian](../model_doc/marian), [mBART](../model_doc/mbart), [MEGA](../model_doc/mega), [Megatron-BERT](../model_doc/megatron-bert), [MVP](../model_doc/mvp), [OpenAI GPT](../model_doc/openai-gpt), [OPT](../model_doc/opt), [Pegasus](../model_doc/pegasus), [PLBart](../model_doc/plbart), [ProphetNet](../model_doc/prophetnet), [QDQBert](../model_doc/qdqbert), [Reformer](../model_doc/reformer), [RemBERT](../model_doc/rembert), [RoBERTa](../model_doc/roberta), [RoBERTa-PreLayerNorm](../model_doc/roberta-prelayernorm), [RoCBert](../model_doc/roc_bert), [RoFormer](../model_doc/roformer), [Speech2Text2](../model_doc/speech_to_text_2), [Transformer-XL](../model_doc/transfo-xl), [TrOCR](../model_doc/trocr), [XGLM](../model_doc/xglm), [XLM](../model_doc/xlm), [XLM-ProphetNet](../model_doc/xlm-prophetnet), [XLM-RoBERTa](../model_doc/xlm-roberta), [XLM-RoBERTa-XL](../model_doc/xlm-roberta-xl), [XLNet](../model_doc/xlnet), [X-MOD](../model_doc/xmod)

<!--End of the generated tip-->

</Tip>

Before you begin, make sure you have all the necessary libraries installed:

```bash
pip install transformers datasets evaluate
```

We encourage you to log in to your Hugging Face account so you can upload and share your model with the community. When prompted, enter your token to log in:

```py
>>> from huggingface_hub import notebook_login

>>> notebook_login()
```

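If you're working in a terminal rather than a notebook, you can log in with the Hugging Face CLI instead:

```bash
huggingface-cli login
```
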
## Load ELI5 dataset

Start by loading a smaller subset of the r/askscience subset of the ELI5 dataset from the 🤗 Datasets library. This'll
give you a chance to experiment and make sure everything works before spending more time training on the full dataset.

```py
>>> from datasets import load_dataset

>>> eli5 = load_dataset("eli5", split="train_asks[:5000]")
```

Split the dataset's `train_asks` split into a train and test set with the [`~datasets.Dataset.train_test_split`] method:

```py
>>> eli5 = eli5.train_test_split(test_size=0.2)
```

Then take a look at an example:

```py
>>> eli5["train"][0]
{'answers': {'a_id': ['c3d1aib', 'c3d4lya'],
  'score': [6, 3],
  'text': ["The velocity needed to remain in orbit is equal to the square root of Newton's constant times the mass of earth divided by the distance from the center of the earth. I don't know the altitude of that specific mission, but they're usually around 300 km. That means he's going 7-8 km/s.\n\nIn space there are no other forces acting on either the shuttle or the guy, so they stay in the same position relative to each other. If he were to become unable to return to the ship, he would presumably run out of oxygen, or slowly fall into the atmosphere and burn up.",
   "Hope you don't mind me asking another question, but why aren't there any stars visible in this photo?"]},
 'answers_urls': {'url': []},
 'document': '',
 'q_id': 'nyxfp',
 'selftext': '_URL_0_\n\nThis was on the front page earlier and I have a few questions about it. Is it possible to calculate how fast the astronaut would be orbiting the earth? Also how does he stay close to the shuttle so that he can return safely, i.e is he orbiting at the same speed and can therefore stay next to it? And finally if his propulsion system failed, would he eventually re-enter the atmosphere and presumably die?',
 'selftext_urls': {'url': ['http://apod.nasa.gov/apod/image/1201/freeflyer_nasa_3000.jpg']},
 'subreddit': 'askscience',
 'title': 'Few questions about this space walk photograph.',
 'title_urls': {'url': []}}
```

While this may look like a lot, you're only really interested in the `text` field. What's cool about language modeling
tasks is you don't need labels (also known as an unsupervised task) because the next word *is* the label.
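
As a quick sketch of what this means in practice (with made-up token ids, not from the original guide): for causal language modeling you pass `labels` as a plain copy of `input_ids`, and the model shifts them internally so that position *i* is trained to predict token *i + 1*:

```py
>>> # Hypothetical token ids for a short sentence
>>> input_ids = [464, 3290, 318, 922]
>>> # Labels are simply a copy; the model handles the one-position shift
>>> labels = input_ids.copy()
```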

## Preprocess

<Youtube id="ma1TrR7gE7I"/>

The next step is to load a DistilGPT2 tokenizer to process the `text` subfield:

```py
>>> from transformers import AutoTokenizer

>>> tokenizer = AutoTokenizer.from_pretrained("distilgpt2")
```

You'll notice from the example above that the `text` field is actually nested inside `answers`. This means you'll need to
extract the `text` subfield from its nested structure with the [`flatten`](https://huggingface.co/docs/datasets/process.html#flatten) method:

```py
>>> eli5 = eli5.flatten()
>>> eli5["train"][0]
{'answers.a_id': ['c3d1aib', 'c3d4lya'],
 'answers.score': [6, 3],
 'answers.text': ["The velocity needed to remain in orbit is equal to the square root of Newton's constant times the mass of earth divided by the distance from the center of the earth. I don't know the altitude of that specific mission, but they're usually around 300 km. That means he's going 7-8 km/s.\n\nIn space there are no other forces acting on either the shuttle or the guy, so they stay in the same position relative to each other. If he were to become unable to return to the ship, he would presumably run out of oxygen, or slowly fall into the atmosphere and burn up.",
  "Hope you don't mind me asking another question, but why aren't there any stars visible in this photo?"],
 'answers_urls.url': [],
 'document': '',
 'q_id': 'nyxfp',
 'selftext': '_URL_0_\n\nThis was on the front page earlier and I have a few questions about it. Is it possible to calculate how fast the astronaut would be orbiting the earth? Also how does he stay close to the shuttle so that he can return safely, i.e is he orbiting at the same speed and can therefore stay next to it? And finally if his propulsion system failed, would he eventually re-enter the atmosphere and presumably die?',
 'selftext_urls.url': ['http://apod.nasa.gov/apod/image/1201/freeflyer_nasa_3000.jpg'],
 'subreddit': 'askscience',
 'title': 'Few questions about this space walk photograph.',
 'title_urls.url': []}
```

Each subfield is now a separate column, as indicated by the `answers` prefix, and the `text` field is now a list. Instead
of tokenizing each sentence separately, convert the list to a string so you can jointly tokenize them.

Here is a first preprocessing function to join the list of strings for each example and tokenize the result:

```py
>>> def preprocess_function(examples):
...     return tokenizer([" ".join(x) for x in examples["answers.text"]])
```

To apply this preprocessing function over the entire dataset, use the 🤗 Datasets [`~datasets.Dataset.map`] method. You can speed up the `map` function by setting `batched=True` to process multiple elements of the dataset at once, and increasing the number of processes with `num_proc`. Remove any columns you don't need:

```py
>>> tokenized_eli5 = eli5.map(
...     preprocess_function,
...     batched=True,
...     num_proc=4,
...     remove_columns=eli5["train"].column_names,
... )
```

This dataset contains the token sequences, but some of these are longer than the maximum input length for the model.

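You can check the maximum input length through the tokenizer; for DistilGPT2 it should be 1024 tokens (a quick check, not part of the original guide):

```py
>>> tokenizer.model_max_length  # DistilGPT2 inherits GPT-2's 1024-token context window
1024
```
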
You can now use a second preprocessing function to
- concatenate all the sequences
- split the concatenated sequences into shorter chunks defined by `block_size`, which should be both shorter than the maximum input length and short enough for your GPU RAM. 

```py
>>> block_size = 128


>>> def group_texts(examples):
...     # Concatenate all texts.
...     concatenated_examples = {k: sum(examples[k], []) for k in examples.keys()}
...     total_length = len(concatenated_examples[list(examples.keys())[0]])
...     # We drop the small remainder; we could add padding instead if the model
...     # supported it. You can customize this part to your needs.
...     if total_length >= block_size:
...         total_length = (total_length // block_size) * block_size
...     # Split by chunks of block_size.
...     result = {
...         k: [t[i : i + block_size] for i in range(0, total_length, block_size)]
...         for k, t in concatenated_examples.items()
...     }
...     result["labels"] = result["input_ids"].copy()
...     return result
```

Apply the `group_texts` function over the entire dataset:

```py
>>> lm_dataset = tokenized_eli5.map(group_texts, batched=True, num_proc=4)
```

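To see what a chunk looks like, you can decode one back into text (a quick check, not from the original guide):

```py
>>> # Each example is now a block_size-token slice of the concatenated answers
>>> tokenizer.decode(lm_dataset["train"][0]["input_ids"])
```
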
Now create a batch of examples using [`DataCollatorForLanguageModeling`]. It's more efficient to *dynamically pad* the
sentences to the longest length in a batch during collation, instead of padding the whole dataset to the maximum length.

<frameworkcontent>
<pt>
Use the end-of-sequence token as the padding token and set `mlm=False`. This will use the inputs as labels shifted to the right by one element:

```py
>>> from transformers import DataCollatorForLanguageModeling

>>> tokenizer.pad_token = tokenizer.eos_token
>>> data_collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=False)
```

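To sanity-check the collator, you can pass it a couple of processed examples; each tensor should have shape `(batch_size, block_size)` (a quick check, not part of the original guide):

```py
>>> batch = data_collator([lm_dataset["train"][i] for i in range(2)])
>>> batch["input_ids"].shape
torch.Size([2, 128])
```
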
</pt>
<tf>
Use the end-of-sequence token as the padding token and set `mlm=False`. This will use the inputs as labels shifted to the right by one element:

```py
>>> from transformers import DataCollatorForLanguageModeling

>>> data_collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=False, return_tensors="tf")
```

</tf>
</frameworkcontent>

## Train

<frameworkcontent>
<pt>
<Tip>

If you aren't familiar with finetuning a model with the [`Trainer`], take a look at the [basic tutorial](../training#train-with-pytorch-trainer)!

</Tip>

You're ready to start training your model now! Load DistilGPT2 with [`AutoModelForCausalLM`]:

```py
>>> from transformers import AutoModelForCausalLM, TrainingArguments, Trainer

>>> model = AutoModelForCausalLM.from_pretrained("distilgpt2")
```

At this point, only three steps remain:

1. Define your training hyperparameters in [`TrainingArguments`]. The only required parameter is `output_dir` which specifies where to save your model. You'll push this model to the Hub by setting `push_to_hub=True` (you need to be signed in to Hugging Face to upload your model).
2. Pass the training arguments to [`Trainer`] along with the model, datasets, and data collator.
3. Call [`~Trainer.train`] to finetune your model.

```py
>>> training_args = TrainingArguments(
...     output_dir="my_awesome_eli5_clm-model",
...     evaluation_strategy="epoch",
...     learning_rate=2e-5,
...     weight_decay=0.01,
...     push_to_hub=True,
... )

>>> trainer = Trainer(
...     model=model,
...     args=training_args,
...     train_dataset=lm_dataset["train"],
...     eval_dataset=lm_dataset["test"],
...     data_collator=data_collator,
... )

>>> trainer.train()
```

Once training is completed, use the [`~transformers.Trainer.evaluate`] method to evaluate your model and get its perplexity:

```py
>>> import math

>>> eval_results = trainer.evaluate()
>>> print(f"Perplexity: {math.exp(eval_results['eval_loss']):.2f}")
Perplexity: 49.61
```

Then share your model to the Hub with the [`~transformers.Trainer.push_to_hub`] method so everyone can use your model:

```py
>>> trainer.push_to_hub()
```

</pt>
<tf>
<Tip>

If you aren't familiar with finetuning a model with Keras, take a look at the [basic tutorial](../training#train-a-tensorflow-model-with-keras)!

</Tip>

To finetune a model in TensorFlow, start by setting up an optimizer function, learning rate schedule, and some training hyperparameters:

```py
>>> from transformers import create_optimizer, AdamWeightDecay

>>> optimizer = AdamWeightDecay(learning_rate=2e-5, weight_decay_rate=0.01)
```

Then you can load DistilGPT2 with [`TFAutoModelForCausalLM`]:

```py
>>> from transformers import TFAutoModelForCausalLM

>>> model = TFAutoModelForCausalLM.from_pretrained("distilgpt2")
```

Convert your datasets to the `tf.data.Dataset` format with [`~transformers.TFPreTrainedModel.prepare_tf_dataset`]:

```py
>>> tf_train_set = model.prepare_tf_dataset(
...     lm_dataset["train"],
...     shuffle=True,
...     batch_size=16,
...     collate_fn=data_collator,
... )

>>> tf_test_set = model.prepare_tf_dataset(
...     lm_dataset["test"],
...     shuffle=False,
...     batch_size=16,
...     collate_fn=data_collator,
... )
```

Configure the model for training with [`compile`](https://keras.io/api/models/model_training_apis/#compile-method):

```py
>>> import tensorflow as tf

>>> model.compile(optimizer=optimizer)
```

You can push your model to the Hub while training by specifying where to push your model and tokenizer in the [`~transformers.PushToHubCallback`]:

```py
>>> from transformers.keras_callbacks import PushToHubCallback

>>> callback = PushToHubCallback(
...     output_dir="my_awesome_eli5_clm-model",
...     tokenizer=tokenizer,
... )
```

Finally, you're ready to start training your model! Call [`fit`](https://keras.io/api/models/model_training_apis/#fit-method) with your training and validation datasets, the number of epochs, and your callback to finetune the model:

```py
>>> model.fit(x=tf_train_set, validation_data=tf_test_set, epochs=3, callbacks=[callback])
```

Once training is completed, your model is automatically uploaded to the Hub so everyone can use it!

</tf>
</frameworkcontent>

<Tip>

For a more in-depth example of how to finetune a model for causal language modeling, take a look at the corresponding
[PyTorch notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/language_modeling.ipynb)
or [TensorFlow notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/language_modeling-tf.ipynb).

</Tip>

## Inference

Great, now that you've finetuned a model, you can use it for inference!

Come up with a prompt you'd like to generate text from:

```py
>>> prompt = "Somatic hypermutation allows the immune system to"
```

The simplest way to try out your finetuned model for inference is to use it in a [`pipeline`]. Instantiate a `pipeline` for text generation with your model, and pass your text to it:

```py
>>> from transformers import pipeline

>>> generator = pipeline("text-generation", model="my_awesome_eli5_clm-model")
>>> generator(prompt)
[{'generated_text': "Somatic hypermutation allows the immune system to be able to effectively reverse the damage caused by an infection.\n\n\nThe damage caused by an infection is caused by the immune system's ability to perform its own self-correcting tasks."}]
```

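You can also pass generation parameters directly in the pipeline call, for example to sample a longer completion (a quick variation, not from the original guide):

```py
>>> generator(prompt, max_new_tokens=50, do_sample=True, top_k=50)
```
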
<frameworkcontent>
<pt>
Tokenize the text and return the `input_ids` as PyTorch tensors:

```py
>>> from transformers import AutoTokenizer

>>> tokenizer = AutoTokenizer.from_pretrained("my_awesome_eli5_clm-model")
>>> inputs = tokenizer(prompt, return_tensors="pt").input_ids
```

Use the [`~transformers.generation_utils.GenerationMixin.generate`] method to generate text.
For more details about the different text generation strategies and parameters for controlling generation, check out the [Text generation strategies](../generation_strategies) page.

```py
>>> from transformers import AutoModelForCausalLM

>>> model = AutoModelForCausalLM.from_pretrained("my_awesome_eli5_clm-model")
>>> outputs = model.generate(inputs, max_new_tokens=100, do_sample=True, top_k=50, top_p=0.95)
```

Decode the generated token ids back into text:

```py
>>> tokenizer.batch_decode(outputs, skip_special_tokens=True)
["Somatic hypermutation allows the immune system to react to drugs with the ability to adapt to a different environmental situation. In other words, a system of 'hypermutation' can help the immune system to adapt to a different environmental situation or in some cases even a single life. In contrast, researchers at the University of Massachusetts-Boston have found that 'hypermutation' is much stronger in mice than in humans but can be found in humans, and that it's not completely unknown to the immune system. A study on how the immune system"]
```
</pt>
<tf>
Tokenize the text and return the `input_ids` as TensorFlow tensors:

```py
>>> from transformers import AutoTokenizer

>>> tokenizer = AutoTokenizer.from_pretrained("my_awesome_eli5_clm-model")
>>> inputs = tokenizer(prompt, return_tensors="tf").input_ids
```

Use the [`~transformers.generation_tf_utils.TFGenerationMixin.generate`] method to generate text.
For more details about the different text generation strategies and parameters for controlling generation, check out the [Text generation strategies](../generation_strategies) page.

```py
>>> from transformers import TFAutoModelForCausalLM

>>> model = TFAutoModelForCausalLM.from_pretrained("my_awesome_eli5_clm-model")
>>> outputs = model.generate(input_ids=inputs, max_new_tokens=100, do_sample=True, top_k=50, top_p=0.95)
```

Decode the generated token ids back into text:

```py
>>> tokenizer.batch_decode(outputs, skip_special_tokens=True)
['Somatic hypermutation allows the immune system to detect the presence of other viruses as they become more prevalent. Therefore, researchers have identified a high proportion of human viruses. The proportion of virus-associated viruses in our study increases with age. Therefore, we propose a simple algorithm to detect the presence of these new viruses in our samples as a sign of improved immunity. A first study based on this algorithm, which will be published in Science on Friday, aims to show that this finding could translate into the development of a better vaccine that is more effective for']
```
</tf>
</frameworkcontent>