<!--Copyright 2023 The HuggingFace Team. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.

⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.

-->

# Causal language modeling

[[open-in-colab]]

There are two types of language modeling: causal and masked. This guide illustrates causal language modeling.
Causal language models are frequently used for text generation. You can use these models for creative applications like
a choose-your-own text adventure or an intelligent coding assistant like Copilot or CodeParrot.

<Youtube id="Vpjb1lu0MDk"/>

Causal language modeling predicts the next token in a sequence of tokens, and the model can only attend to tokens on
the left. This means the model cannot see future tokens. GPT-2 is an example of a causal language model.

This guide will show you how to:

1. Finetune [DistilGPT2](https://huggingface.co/distilgpt2) on the [r/askscience](https://www.reddit.com/r/askscience/) subset of the [ELI5](https://huggingface.co/datasets/eli5) dataset.
2. Use your finetuned model for inference.

<Tip>

You can finetune other architectures for causal language modeling following the same steps in this guide.
Choose one of the following architectures:

<!--This tip is automatically generated by `make fix-copies`, do not fill manually!-->
[BART](../model_doc/bart), [BERT](../model_doc/bert), [Bert Generation](../model_doc/bert-generation), [BigBird](../model_doc/big_bird), [BigBird-Pegasus](../model_doc/bigbird_pegasus), [BioGpt](../model_doc/biogpt), [Blenderbot](../model_doc/blenderbot), [BlenderbotSmall](../model_doc/blenderbot-small), [BLOOM](../model_doc/bloom), [CamemBERT](../model_doc/camembert), [CodeLlama](../model_doc/code_llama), [CodeGen](../model_doc/codegen), [CPM-Ant](../model_doc/cpmant), [CTRL](../model_doc/ctrl), [Data2VecText](../model_doc/data2vec-text), [ELECTRA](../model_doc/electra), [ERNIE](../model_doc/ernie), [Falcon](../model_doc/falcon), [Fuyu](../model_doc/fuyu), [GIT](../model_doc/git), [GPT-Sw3](../model_doc/gpt-sw3), [OpenAI GPT-2](../model_doc/gpt2), [GPTBigCode](../model_doc/gpt_bigcode), [GPT Neo](../model_doc/gpt_neo), [GPT NeoX](../model_doc/gpt_neox), [GPT NeoX Japanese](../model_doc/gpt_neox_japanese), [GPT-J](../model_doc/gptj), [LLaMA](../model_doc/llama), [Marian](../model_doc/marian), [mBART](../model_doc/mbart), [MEGA](../model_doc/mega), [Megatron-BERT](../model_doc/megatron-bert), [Mistral](../model_doc/mistral), [MPT](../model_doc/mpt), [MusicGen](../model_doc/musicgen), [MVP](../model_doc/mvp), [OpenLlama](../model_doc/open-llama), [OpenAI GPT](../model_doc/openai-gpt), [OPT](../model_doc/opt), [Pegasus](../model_doc/pegasus), [Persimmon](../model_doc/persimmon), [PLBart](../model_doc/plbart), [ProphetNet](../model_doc/prophetnet), [QDQBert](../model_doc/qdqbert), [Reformer](../model_doc/reformer), [RemBERT](../model_doc/rembert), [RoBERTa](../model_doc/roberta), [RoBERTa-PreLayerNorm](../model_doc/roberta-prelayernorm), [RoCBert](../model_doc/roc_bert), [RoFormer](../model_doc/roformer), [RWKV](../model_doc/rwkv), [Speech2Text2](../model_doc/speech_to_text_2), [Transformer-XL](../model_doc/transfo-xl), [TrOCR](../model_doc/trocr), [Whisper](../model_doc/whisper), [XGLM](../model_doc/xglm), [XLM](../model_doc/xlm), [XLM-ProphetNet](../model_doc/xlm-prophetnet), [XLM-RoBERTa](../model_doc/xlm-roberta), [XLM-RoBERTa-XL](../model_doc/xlm-roberta-xl), [XLNet](../model_doc/xlnet), [X-MOD](../model_doc/xmod)

<!--End of the generated tip-->

</Tip>

Before you begin, make sure you have all the necessary libraries installed:

```bash
pip install transformers datasets evaluate
```
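
If you plan to follow the PyTorch sections of this guide you'll also need PyTorch installed, and likewise TensorFlow for the TensorFlow sections:

```bash
# install whichever backend you plan to use
pip install torch
# or
pip install tensorflow
```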

We encourage you to log in to your Hugging Face account so you can upload and share your model with the community. When prompted, enter your token to log in:

```py
>>> from huggingface_hub import notebook_login

>>> notebook_login()
```
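
If you're working in a terminal rather than a notebook, you can log in with the Hugging Face CLI instead:

```bash
huggingface-cli login
```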

## Load ELI5 dataset

Start by loading a smaller subset of the r/askscience subset of the ELI5 dataset from the 🤗 Datasets library.
This'll give you a chance to experiment and make sure everything works before spending more time training on the full dataset.

```py
>>> from datasets import load_dataset

>>> eli5 = load_dataset("eli5", split="train_asks[:5000]")
```
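
The split expression above loads only the first 5000 examples of the `train_asks` split, which you can confirm before moving on:

```py
>>> eli5.num_rows
5000
```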

Split the dataset's `train_asks` split into a train and test set with the [`~datasets.Dataset.train_test_split`] method:

```py
>>> eli5 = eli5.train_test_split(test_size=0.2)
```

Then take a look at an example:

```py
>>> eli5["train"][0]
{'answers': {'a_id': ['c3d1aib', 'c3d4lya'],
  'score': [6, 3],
  'text': ["The velocity needed to remain in orbit is equal to the square root of Newton's constant times the mass of earth divided by the distance from the center of the earth. I don't know the altitude of that specific mission, but they're usually around 300 km. That means he's going 7-8 km/s.\n\nIn space there are no other forces acting on either the shuttle or the guy, so they stay in the same position relative to each other. If he were to become unable to return to the ship, he would presumably run out of oxygen, or slowly fall into the atmosphere and burn up.",
   "Hope you don't mind me asking another question, but why aren't there any stars visible in this photo?"]},
 'answers_urls': {'url': []},
 'document': '',
 'q_id': 'nyxfp',
 'selftext': '_URL_0_\n\nThis was on the front page earlier and I have a few questions about it. Is it possible to calculate how fast the astronaut would be orbiting the earth? Also how does he stay close to the shuttle so that he can return safely, i.e is he orbiting at the same speed and can therefore stay next to it? And finally if his propulsion system failed, would he eventually re-enter the atmosphere and presumably die?',
 'selftext_urls': {'url': ['http://apod.nasa.gov/apod/image/1201/freeflyer_nasa_3000.jpg']},
 'subreddit': 'askscience',
 'title': 'Few questions about this space walk photograph.',
 'title_urls': {'url': []}}
```

While this may look like a lot, you're only really interested in the `text` field. What's cool about language modeling
tasks is you don't need labels (also known as an unsupervised task) because the next word *is* the label.

## Preprocess

<Youtube id="ma1TrR7gE7I"/>

The next step is to load a DistilGPT2 tokenizer to process the `text` subfield:

```py
>>> from transformers import AutoTokenizer

>>> tokenizer = AutoTokenizer.from_pretrained("distilgpt2")
```

You'll notice from the example above that the `text` field is actually nested inside `answers`. This means you'll need to
extract the `text` subfield from its nested structure with the [`flatten`](https://huggingface.co/docs/datasets/process.html#flatten) method:

```py
>>> eli5 = eli5.flatten()
>>> eli5["train"][0]
{'answers.a_id': ['c3d1aib', 'c3d4lya'],
 'answers.score': [6, 3],
 'answers.text': ["The velocity needed to remain in orbit is equal to the square root of Newton's constant times the mass of earth divided by the distance from the center of the earth. I don't know the altitude of that specific mission, but they're usually around 300 km. That means he's going 7-8 km/s.\n\nIn space there are no other forces acting on either the shuttle or the guy, so they stay in the same position relative to each other. If he were to become unable to return to the ship, he would presumably run out of oxygen, or slowly fall into the atmosphere and burn up.",
  "Hope you don't mind me asking another question, but why aren't there any stars visible in this photo?"],
 'answers_urls.url': [],
 'document': '',
 'q_id': 'nyxfp',
 'selftext': '_URL_0_\n\nThis was on the front page earlier and I have a few questions about it. Is it possible to calculate how fast the astronaut would be orbiting the earth? Also how does he stay close to the shuttle so that he can return safely, i.e is he orbiting at the same speed and can therefore stay next to it? And finally if his propulsion system failed, would he eventually re-enter the atmosphere and presumably die?',
 'selftext_urls.url': ['http://apod.nasa.gov/apod/image/1201/freeflyer_nasa_3000.jpg'],
 'subreddit': 'askscience',
 'title': 'Few questions about this space walk photograph.',
 'title_urls.url': []}
```

Each subfield is now a separate column, as indicated by the `answers` prefix, and the `text` field is now a list. Instead
of tokenizing each sentence separately, convert the list to a string so you can jointly tokenize them.

Here is a first preprocessing function to join the list of strings for each example and tokenize the result:

```py
>>> def preprocess_function(examples):
...     return tokenizer([" ".join(x) for x in examples["answers.text"]])
```

To apply this preprocessing function over the entire dataset, use the 🤗 Datasets [`~datasets.Dataset.map`] method. You can speed up the `map` function by setting `batched=True` to process multiple elements of the dataset at once, and increasing the number of processes with `num_proc`. Remove any columns you don't need:

```py
>>> tokenized_eli5 = eli5.map(
...     preprocess_function,
...     batched=True,
...     num_proc=4,
...     remove_columns=eli5["train"].column_names,
... )
```

This dataset contains the token sequences, but some of these are longer than the maximum input length for the model.

You can now use a second preprocessing function to
- concatenate all the sequences
- split the concatenated sequences into shorter chunks defined by `block_size`, which should be both shorter than the maximum input length and short enough for your GPU RAM.
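
If you're not sure what the maximum input length is, you can read it off the tokenizer (for DistilGPT2 this is 1024 tokens):

```py
>>> tokenizer.model_max_length
1024
```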

```py
>>> block_size = 128


>>> def group_texts(examples):
...     # Concatenate all texts.
...     concatenated_examples = {k: sum(examples[k], []) for k in examples.keys()}
...     total_length = len(concatenated_examples[list(examples.keys())[0]])
...     # We drop the small remainder; we could add padding instead if the model supported it.
...     # You can customize this part to your needs.
...     if total_length >= block_size:
...         total_length = (total_length // block_size) * block_size
...     # Split into chunks of block_size.
...     result = {
...         k: [t[i : i + block_size] for i in range(0, total_length, block_size)]
...         for k, t in concatenated_examples.items()
...     }
...     result["labels"] = result["input_ids"].copy()
...     return result
```

Apply the `group_texts` function over the entire dataset:

```py
>>> lm_dataset = tokenized_eli5.map(group_texts, batched=True, num_proc=4)
```
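
As a quick sanity check, you can decode one of the resulting examples; each example should now be a block of `block_size` tokens drawn from the concatenated answers:

```py
>>> tokenizer.decode(lm_dataset["train"][1]["input_ids"])
```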

Now create a batch of examples using [`DataCollatorForLanguageModeling`]. It's more efficient to *dynamically pad* the
sentences to the longest length in a batch during collation, instead of padding the whole dataset to the maximum length.

<frameworkcontent>
<pt>
Use the end-of-sequence token as the padding token and set `mlm=False`. This will use the inputs as labels shifted to the right by one element:

```py
>>> from transformers import DataCollatorForLanguageModeling

>>> tokenizer.pad_token = tokenizer.eos_token
>>> data_collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=False)
```
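
If you'd like to see what the collator produces, inspect a small batch. The `labels` mirror the `input_ids` because the model performs the one-position shift internally when computing the loss:

```py
>>> batch = data_collator([lm_dataset["train"][i] for i in range(2)])
>>> batch["labels"][0][:5]  # same token ids as batch["input_ids"][0][:5]
```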

</pt>
<tf>
Use the end-of-sequence token as the padding token and set `mlm=False`. This will use the inputs as labels shifted to the right by one element:

```py
>>> from transformers import DataCollatorForLanguageModeling

>>> data_collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=False, return_tensors="tf")
```

</tf>
</frameworkcontent>


## Train

<frameworkcontent>
<pt>
<Tip>

If you aren't familiar with finetuning a model with the [`Trainer`], take a look at the [basic tutorial](../training#train-with-pytorch-trainer)!

</Tip>

You're ready to start training your model now! Load DistilGPT2 with [`AutoModelForCausalLM`]:

```py
>>> from transformers import AutoModelForCausalLM, TrainingArguments, Trainer

>>> model = AutoModelForCausalLM.from_pretrained("distilgpt2")
```

At this point, only three steps remain:

1. Define your training hyperparameters in [`TrainingArguments`]. The only required parameter is `output_dir` which specifies where to save your model. You'll push this model to the Hub by setting `push_to_hub=True` (you need to be signed in to Hugging Face to upload your model).
2. Pass the training arguments to [`Trainer`] along with the model, datasets, and data collator.
3. Call [`~Trainer.train`] to finetune your model.

```py
>>> training_args = TrainingArguments(
...     output_dir="my_awesome_eli5_clm-model",
...     evaluation_strategy="epoch",
...     learning_rate=2e-5,
...     weight_decay=0.01,
...     push_to_hub=True,
... )

>>> trainer = Trainer(
...     model=model,
...     args=training_args,
...     train_dataset=lm_dataset["train"],
...     eval_dataset=lm_dataset["test"],
...     data_collator=data_collator,
... )

>>> trainer.train()
```

Once training is completed, use the [`~transformers.Trainer.evaluate`] method to evaluate your model and get its perplexity, which is the exponentiated cross-entropy loss (lower is better):

```py
>>> import math

>>> eval_results = trainer.evaluate()
>>> print(f"Perplexity: {math.exp(eval_results['eval_loss']):.2f}")
Perplexity: 49.61
```

Then share your model to the Hub with the [`~transformers.Trainer.push_to_hub`] method so everyone can use your model:

```py
>>> trainer.push_to_hub()
```
</pt>
<tf>
<Tip>

If you aren't familiar with finetuning a model with Keras, take a look at the [basic tutorial](../training#train-a-tensorflow-model-with-keras)!

</Tip>

To finetune a model in TensorFlow, start by setting up an optimizer function, learning rate schedule, and some training hyperparameters:

```py
>>> from transformers import create_optimizer, AdamWeightDecay

>>> optimizer = AdamWeightDecay(learning_rate=2e-5, weight_decay_rate=0.01)
```
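
The fixed learning rate above keeps things simple. If you'd prefer a warmup-and-decay schedule, `create_optimizer` (imported above) can build one; the step count below is an illustrative calculation assuming the batch size of 16 and 3 epochs used later in this guide:

```py
>>> num_train_steps = (len(lm_dataset["train"]) // 16) * 3  # steps per epoch * number of epochs
>>> optimizer, schedule = create_optimizer(
...     init_lr=2e-5,
...     num_warmup_steps=0,
...     num_train_steps=num_train_steps,
...     weight_decay_rate=0.01,
... )
```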

Then you can load DistilGPT2 with [`TFAutoModelForCausalLM`]:

```py
>>> from transformers import TFAutoModelForCausalLM

>>> model = TFAutoModelForCausalLM.from_pretrained("distilgpt2")
```

Convert your datasets to the `tf.data.Dataset` format with [`~transformers.TFPreTrainedModel.prepare_tf_dataset`]:

```py
>>> tf_train_set = model.prepare_tf_dataset(
...     lm_dataset["train"],
...     shuffle=True,
...     batch_size=16,
...     collate_fn=data_collator,
... )

>>> tf_test_set = model.prepare_tf_dataset(
...     lm_dataset["test"],
...     shuffle=False,
...     batch_size=16,
...     collate_fn=data_collator,
... )
```

Configure the model for training with [`compile`](https://keras.io/api/models/model_training_apis/#compile-method). Note that Transformers models all have a default task-relevant loss function, so you don't need to specify one unless you want to:

```py
>>> import tensorflow as tf

>>> model.compile(optimizer=optimizer)  # No loss argument!
```

Before you launch training, set up a way to push your model to the Hub. This can be done by specifying where to push your model and tokenizer in the [`~transformers.PushToHubCallback`]:

```py
>>> from transformers.keras_callbacks import PushToHubCallback

>>> callback = PushToHubCallback(
...     output_dir="my_awesome_eli5_clm-model",
...     tokenizer=tokenizer,
... )
```

Finally, you're ready to start training your model! Call [`fit`](https://keras.io/api/models/model_training_apis/#fit-method) with your training and validation datasets, the number of epochs, and your callback to finetune the model:

```py
>>> model.fit(x=tf_train_set, validation_data=tf_test_set, epochs=3, callbacks=[callback])
```

Once training is completed, your model is automatically uploaded to the Hub so everyone can use it!
</tf>
</frameworkcontent>

<Tip>

For a more in-depth example of how to finetune a model for causal language modeling, take a look at the corresponding
[PyTorch notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/language_modeling.ipynb)
or [TensorFlow notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/language_modeling-tf.ipynb).

</Tip>

## Inference

Great, now that you've finetuned a model, you can use it for inference!

Come up with a prompt you'd like to generate text from:

```py
>>> prompt = "Somatic hypermutation allows the immune system to"
```

The simplest way to try out your finetuned model for inference is to use it in a [`pipeline`]. Instantiate a `pipeline` for text generation with your model, and pass your text to it:

```py
>>> from transformers import pipeline

>>> generator = pipeline("text-generation", model="my_awesome_eli5_clm-model")
>>> generator(prompt)
[{'generated_text': "Somatic hypermutation allows the immune system to be able to effectively reverse the damage caused by an infection.\n\n\nThe damage caused by an infection is caused by the immune system's ability to perform its own self-correcting tasks."}]
```
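
You can also pass generation parameters directly to the pipeline call to control the output; the values below are illustrative:

```py
>>> generator(prompt, max_new_tokens=50, do_sample=True, top_p=0.95)
```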

<frameworkcontent>
<pt>
Tokenize the text and return the `input_ids` as PyTorch tensors:

```py
>>> from transformers import AutoTokenizer

>>> tokenizer = AutoTokenizer.from_pretrained("my_awesome_eli5_clm-model")
>>> inputs = tokenizer(prompt, return_tensors="pt").input_ids
```

Use the [`~transformers.generation_utils.GenerationMixin.generate`] method to generate text.
For more details about the different text generation strategies and parameters for controlling generation, check out the [Text generation strategies](../generation_strategies) page.

```py
>>> from transformers import AutoModelForCausalLM

>>> model = AutoModelForCausalLM.from_pretrained("my_awesome_eli5_clm-model")
>>> outputs = model.generate(inputs, max_new_tokens=100, do_sample=True, top_k=50, top_p=0.95)
```

Decode the generated token ids back into text:

```py
>>> tokenizer.batch_decode(outputs, skip_special_tokens=True)
["Somatic hypermutation allows the immune system to react to drugs with the ability to adapt to a different environmental situation. In other words, a system of 'hypermutation' can help the immune system to adapt to a different environmental situation or in some cases even a single life. In contrast, researchers at the University of Massachusetts-Boston have found that 'hypermutation' is much stronger in mice than in humans but can be found in humans, and that it's not completely unknown to the immune system. A study on how the immune system"]
```
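
If you only want the newly generated continuation without the prompt, you can slice off the prompt tokens before decoding (this assumes the `inputs` tensor from above):

```py
>>> tokenizer.decode(outputs[0][inputs.shape[-1] :], skip_special_tokens=True)
```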
</pt>
<tf>
Tokenize the text and return the `input_ids` as TensorFlow tensors:

```py
>>> from transformers import AutoTokenizer

>>> tokenizer = AutoTokenizer.from_pretrained("my_awesome_eli5_clm-model")
>>> inputs = tokenizer(prompt, return_tensors="tf").input_ids
```

Use the [`~transformers.generation_tf_utils.TFGenerationMixin.generate`] method to generate text. For more details about the different text generation strategies and parameters for controlling generation, check out the [Text generation strategies](../generation_strategies) page.

```py
>>> from transformers import TFAutoModelForCausalLM

>>> model = TFAutoModelForCausalLM.from_pretrained("my_awesome_eli5_clm-model")
>>> outputs = model.generate(input_ids=inputs, max_new_tokens=100, do_sample=True, top_k=50, top_p=0.95)
```

Decode the generated token ids back into text:

```py
>>> tokenizer.batch_decode(outputs, skip_special_tokens=True)
['Somatic hypermutation allows the immune system to detect the presence of other viruses as they become more prevalent. Therefore, researchers have identified a high proportion of human viruses. The proportion of virus-associated viruses in our study increases with age. Therefore, we propose a simple algorithm to detect the presence of these new viruses in our samples as a sign of improved immunity. A first study based on this algorithm, which will be published in Science on Friday, aims to show that this finding could translate into the development of a better vaccine that is more effective for']
```
</tf>
</frameworkcontent>