<!--Copyright 2022 The HuggingFace Team. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.

⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.

-->

# Question answering

[[open-in-colab]]

<Youtube id="ajPx5LwJD-I"/>

Question answering tasks return an answer given a question. If you've ever asked a virtual assistant like Alexa, Siri or Google what the weather is, then you've used a question answering model before. There are two common types of question answering tasks:

- Extractive: extract the answer from the given context.
- Abstractive: generate an answer from the context that correctly answers the question.

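Below is a minimal sketch of the difference using the [`pipeline`] API. The two checkpoints (`distilbert/distilbert-base-cased-distilled-squad` for extractive QA and `google/flan-t5-base` prompted as a generative answerer) are only illustrative choices and aren't used in the rest of this guide:

```py
>>> from transformers import pipeline

>>> question = "Where does Amy live?"
>>> context = "Amy is a software engineer who lives in Lisbon and works remotely."

>>> # Extractive: the answer is a span copied straight out of the context
>>> extractive = pipeline("question-answering", model="distilbert/distilbert-base-cased-distilled-squad")
>>> extractive(question=question, context=context)["answer"]

>>> # Abstractive: the answer is generated, conditioned on the question and context
>>> generative = pipeline("text2text-generation", model="google/flan-t5-base")
>>> generative(f"Answer the question from the context. question: {question} context: {context}")[0]["generated_text"]
```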
This guide will show you how to:

1. Finetune [DistilBERT](https://huggingface.co/distilbert/distilbert-base-uncased) on the [SQuAD](https://huggingface.co/datasets/squad) dataset for extractive question answering.
2. Use your finetuned model for inference.

<Tip>

To see all architectures and checkpoints compatible with this task, we recommend checking the [task page](https://huggingface.co/tasks/question-answering).

</Tip>

Before you begin, make sure you have all the necessary libraries installed:

```bash
pip install transformers datasets evaluate
```

We encourage you to log in to your Hugging Face account so you can upload and share your model with the community. When prompted, enter your token to log in:

```py
>>> from huggingface_hub import notebook_login

>>> notebook_login()
```

## Load SQuAD dataset

Start by loading a smaller subset of the SQuAD dataset from the 🤗 Datasets library. This'll give you a chance to experiment and make sure everything works before spending more time training on the full dataset.

```py
>>> from datasets import load_dataset

>>> squad = load_dataset("squad", split="train[:5000]")
```

Split the dataset's `train` split into a train and test set with the [`~datasets.Dataset.train_test_split`] method:

```py
>>> squad = squad.train_test_split(test_size=0.2)
```

Then take a look at an example:

```py
>>> squad["train"][0]
{'answers': {'answer_start': [515], 'text': ['Saint Bernadette Soubirous']},
 'context': 'Architecturally, the school has a Catholic character. Atop the Main Building\'s gold dome is a golden statue of the Virgin Mary. Immediately in front of the Main Building and facing it, is a copper statue of Christ with arms upraised with the legend "Venite Ad Me Omnes". Next to the Main Building is the Basilica of the Sacred Heart. Immediately behind the basilica is the Grotto, a Marian place of prayer and reflection. It is a replica of the grotto at Lourdes, France where the Virgin Mary reputedly appeared to Saint Bernadette Soubirous in 1858. At the end of the main drive (and in a direct line that connects through 3 statues and the Gold Dome), is a simple, modern stone statue of Mary.',
 'id': '5733be284776f41900661182',
 'question': 'To whom did the Virgin Mary allegedly appear in 1858 in Lourdes France?',
 'title': 'University_of_Notre_Dame'
}
```

There are several important fields here:

- `answers`: the starting character position of the answer in the `context` and the answer text.
- `context`: background information from which the model needs to extract the answer.
- `question`: the question a model should answer.

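A quick sanity check on the example above shows that slicing the `context` with `answer_start` recovers the answer text exactly:

```py
>>> example = squad["train"][0]
>>> start = example["answers"]["answer_start"][0]
>>> answer = example["answers"]["text"][0]
>>> example["context"][start : start + len(answer)]
'Saint Bernadette Soubirous'
```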
## Preprocess

<Youtube id="qgaM0weJHpA"/>

The next step is to load a DistilBERT tokenizer to process the `question` and `context` fields:

```py
>>> from transformers import AutoTokenizer

>>> tokenizer = AutoTokenizer.from_pretrained("distilbert/distilbert-base-uncased")
```

There are a few preprocessing steps particular to question answering tasks you should be aware of:

1. Some examples in a dataset may have a very long `context` that exceeds the maximum input length of the model. To deal with longer sequences, truncate only the `context` by setting `truncation="only_second"`.
2. Next, map the start and end positions of the answer to the original `context` by setting
   `return_offsets_mapping=True`.
3. With the mapping in hand, now you can find the start and end tokens of the answer. Use the [`~tokenizers.Encoding.sequence_ids`] method to
   find which part of the offset corresponds to the `question` and which corresponds to the `context`.

Here is how you can create a function to truncate and map the start and end tokens of the `answer` to the `context`:

```py
>>> def preprocess_function(examples):
...     questions = [q.strip() for q in examples["question"]]
...     inputs = tokenizer(
...         questions,
...         examples["context"],
...         max_length=384,
...         truncation="only_second",
...         return_offsets_mapping=True,
...         padding="max_length",
...     )

...     offset_mapping = inputs.pop("offset_mapping")
...     answers = examples["answers"]
...     start_positions = []
...     end_positions = []

...     for i, offset in enumerate(offset_mapping):
...         answer = answers[i]
...         start_char = answer["answer_start"][0]
...         end_char = answer["answer_start"][0] + len(answer["text"][0])
...         sequence_ids = inputs.sequence_ids(i)

...         # Find the start and end of the context
...         idx = 0
...         while sequence_ids[idx] != 1:
...             idx += 1
...         context_start = idx
...         while sequence_ids[idx] == 1:
...             idx += 1
...         context_end = idx - 1

...         # If the answer is not fully inside the context, label it (0, 0)
...         if offset[context_start][0] > end_char or offset[context_end][1] < start_char:
...             start_positions.append(0)
...             end_positions.append(0)
...         else:
...             # Otherwise it's the start and end token positions
...             idx = context_start
...             while idx <= context_end and offset[idx][0] <= start_char:
...                 idx += 1
...             start_positions.append(idx - 1)

...             idx = context_end
...             while idx >= context_start and offset[idx][1] >= end_char:
...                 idx -= 1
...             end_positions.append(idx + 1)

...     inputs["start_positions"] = start_positions
...     inputs["end_positions"] = end_positions
...     return inputs
```
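Before mapping this over the whole dataset, it can be worth sanity-checking the labels on a couple of examples. This is just a quick check: decoding the tokens between the labeled start and end positions should give back the original answer text (modulo lowercasing and tokenization):

```py
>>> sample = squad["train"][:2]
>>> encoded = preprocess_function(sample)

>>> for i in range(2):
...     start = encoded["start_positions"][i]
...     end = encoded["end_positions"][i]
...     # decode the labeled span and compare it with the reference answer
...     print(tokenizer.decode(encoded["input_ids"][i][start : end + 1]), "|", sample["answers"][i]["text"][0])
```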

To apply the preprocessing function over the entire dataset, use the 🤗 Datasets [`~datasets.Dataset.map`] function. You can speed up the `map` function by setting `batched=True` to process multiple elements of the dataset at once. Remove any columns you don't need:

```py
>>> tokenized_squad = squad.map(preprocess_function, batched=True, remove_columns=squad["train"].column_names)
```

Now create a batch of examples using [`DefaultDataCollator`]. Unlike other data collators in 🤗 Transformers, the [`DefaultDataCollator`] does not apply any additional preprocessing such as padding.

<frameworkcontent>
<pt>
```py
>>> from transformers import DefaultDataCollator

>>> data_collator = DefaultDataCollator()
```
</pt>
<tf>
```py
>>> from transformers import DefaultDataCollator

>>> data_collator = DefaultDataCollator(return_tensors="tf")
```
</tf>
</frameworkcontent>

## Train

<frameworkcontent>
<pt>
<Tip>

If you aren't familiar with finetuning a model with the [`Trainer`], take a look at the basic tutorial [here](../training#train-with-pytorch-trainer)!

</Tip>

You're ready to start training your model now! Load DistilBERT with [`AutoModelForQuestionAnswering`]:

```py
>>> from transformers import AutoModelForQuestionAnswering, TrainingArguments, Trainer

>>> model = AutoModelForQuestionAnswering.from_pretrained("distilbert/distilbert-base-uncased")
```

At this point, only three steps remain:

1. Define your training hyperparameters in [`TrainingArguments`]. The only required parameter is `output_dir` which specifies where to save your model. You'll push this model to the Hub by setting `push_to_hub=True` (you need to be signed in to Hugging Face to upload your model).
2. Pass the training arguments to [`Trainer`] along with the model, dataset, tokenizer, and data collator.
3. Call [`~Trainer.train`] to finetune your model.

```py
>>> training_args = TrainingArguments(
...     output_dir="my_awesome_qa_model",
...     eval_strategy="epoch",
...     learning_rate=2e-5,
...     per_device_train_batch_size=16,
...     per_device_eval_batch_size=16,
...     num_train_epochs=3,
...     weight_decay=0.01,
...     push_to_hub=True,
... )

>>> trainer = Trainer(
...     model=model,
...     args=training_args,
...     train_dataset=tokenized_squad["train"],
...     eval_dataset=tokenized_squad["test"],
...     tokenizer=tokenizer,
...     data_collator=data_collator,
... )

>>> trainer.train()
```

Once training is completed, share your model to the Hub with the [`~transformers.Trainer.push_to_hub`] method so everyone can use your model:

```py
>>> trainer.push_to_hub()
```
</pt>
<tf>
<Tip>

If you aren't familiar with finetuning a model with Keras, take a look at the basic tutorial [here](../training#train-a-tensorflow-model-with-keras)!

</Tip>

To finetune a model in TensorFlow, start by setting up an optimizer function, learning rate schedule, and some training hyperparameters:

```py
>>> from transformers import create_optimizer

>>> batch_size = 16
>>> num_epochs = 2
>>> total_train_steps = (len(tokenized_squad["train"]) // batch_size) * num_epochs
>>> optimizer, schedule = create_optimizer(
...     init_lr=2e-5,
...     num_warmup_steps=0,
...     num_train_steps=total_train_steps,
... )
```

Then you can load DistilBERT with [`TFAutoModelForQuestionAnswering`]:

```py
>>> from transformers import TFAutoModelForQuestionAnswering

>>> model = TFAutoModelForQuestionAnswering.from_pretrained("distilbert/distilbert-base-uncased")
```

Convert your datasets to the `tf.data.Dataset` format with [`~transformers.TFPreTrainedModel.prepare_tf_dataset`]:

```py
>>> tf_train_set = model.prepare_tf_dataset(
...     tokenized_squad["train"],
...     shuffle=True,
...     batch_size=16,
...     collate_fn=data_collator,
... )

>>> tf_validation_set = model.prepare_tf_dataset(
...     tokenized_squad["test"],
...     shuffle=False,
...     batch_size=16,
...     collate_fn=data_collator,
... )
```

Configure the model for training with [`compile`](https://keras.io/api/models/model_training_apis/#compile-method):

```py
>>> import tensorflow as tf

>>> model.compile(optimizer=optimizer)
```

The last thing to set up before you start training is to provide a way to push your model to the Hub. This can be done by specifying where to push your model and tokenizer in the [`~transformers.PushToHubCallback`]:

```py
>>> from transformers.keras_callbacks import PushToHubCallback

>>> callback = PushToHubCallback(
...     output_dir="my_awesome_qa_model",
...     tokenizer=tokenizer,
... )
```

Finally, you're ready to start training your model! Call [`fit`](https://keras.io/api/models/model_training_apis/#fit-method) with your training and validation datasets, the number of epochs, and your callback to finetune the model:

```py
>>> model.fit(x=tf_train_set, validation_data=tf_validation_set, epochs=3, callbacks=[callback])
```
Once training is completed, your model is automatically uploaded to the Hub so everyone can use it!
</tf>
</frameworkcontent>

<Tip>

For a more in-depth example of how to finetune a model for question answering, take a look at the corresponding
[PyTorch notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/question_answering.ipynb)
or [TensorFlow notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/question_answering-tf.ipynb).

</Tip>

## Evaluate

Evaluation for question answering requires a significant amount of postprocessing. To avoid taking up too much of your time, this guide skips the evaluation step. The [`Trainer`] still calculates the evaluation loss during training so you're not completely in the dark about your model's performance.

If you have more time and you're interested in how to evaluate your model for question answering, take a look at the [Question answering](https://huggingface.co/course/chapter7/7?fw=pt#post-processing) chapter from the 🤗 Hugging Face Course!

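If you just want a ballpark number without writing the full post-processing yourself, one shortcut is to let the [`pipeline`] handle the span extraction and score its predictions with the `squad` metric from 🤗 Evaluate. This is only a rough sketch: it assumes your finetuned `my_awesome_qa_model` checkpoint and the raw (untokenized) `squad["test"]` split from earlier, and it evaluates a small slice to keep things quick:

```py
>>> import evaluate
>>> from transformers import pipeline

>>> squad_metric = evaluate.load("squad")
>>> question_answerer = pipeline("question-answering", model="my_awesome_qa_model")

>>> examples = squad["test"].select(range(100))
>>> predictions = [
...     {"id": ex["id"], "prediction_text": question_answerer(question=ex["question"], context=ex["context"])["answer"]}
...     for ex in examples
... ]
>>> references = [{"id": ex["id"], "answers": ex["answers"]} for ex in examples]
>>> squad_metric.compute(predictions=predictions, references=references)
```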
## Inference

Great, now that you've finetuned a model, you can use it for inference!

Come up with a question and some context you'd like the model to predict:

```py
>>> question = "How many programming languages does BLOOM support?"
>>> context = "BLOOM has 176 billion parameters and can generate text in 46 languages natural languages and 13 programming languages."
```

The simplest way to try out your finetuned model for inference is to use it in a [`pipeline`]. Instantiate a `pipeline` for question answering with your model, and pass your text to it:

```py
>>> from transformers import pipeline

>>> question_answerer = pipeline("question-answering", model="my_awesome_qa_model")
>>> question_answerer(question=question, context=context)
{'score': 0.2058267742395401,
 'start': 10,
 'end': 95,
 'answer': '176 billion parameters and can generate text in 46 languages natural languages and 13'}
```

You can also manually replicate the results of the `pipeline` if you'd like:

<frameworkcontent>
<pt>
Tokenize the text and return PyTorch tensors:

```py
>>> from transformers import AutoTokenizer

>>> tokenizer = AutoTokenizer.from_pretrained("my_awesome_qa_model")
>>> inputs = tokenizer(question, context, return_tensors="pt")
```

Pass your inputs to the model and return the `logits`:

```py
>>> import torch
>>> from transformers import AutoModelForQuestionAnswering

>>> model = AutoModelForQuestionAnswering.from_pretrained("my_awesome_qa_model")
>>> with torch.no_grad():
...     outputs = model(**inputs)
```

Get the highest probability from the model output for the start and end positions:

```py
>>> answer_start_index = outputs.start_logits.argmax()
>>> answer_end_index = outputs.end_logits.argmax()
```

Decode the predicted tokens to get the answer:

```py
>>> predict_answer_tokens = inputs.input_ids[0, answer_start_index : answer_end_index + 1]
>>> tokenizer.decode(predict_answer_tokens)
'176 billion parameters and can generate text in 46 languages natural languages and 13'
```
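If you also want a confidence score like the one the `pipeline` reports, you can multiply the softmax probabilities of the chosen start and end positions. This is only an approximation of the pipeline's score (which applies extra normalization over valid spans), so don't expect the numbers to match exactly:

```py
>>> start_probs = torch.softmax(outputs.start_logits, dim=-1)
>>> end_probs = torch.softmax(outputs.end_logits, dim=-1)
>>> # probability of the selected start position times probability of the selected end position
>>> float(start_probs[0, answer_start_index] * end_probs[0, answer_end_index])
```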
</pt>
<tf>
Tokenize the text and return TensorFlow tensors:

```py
>>> from transformers import AutoTokenizer

>>> tokenizer = AutoTokenizer.from_pretrained("my_awesome_qa_model")
>>> inputs = tokenizer(question, context, return_tensors="tf")
```

Pass your inputs to the model and return the `logits`:

```py
>>> from transformers import TFAutoModelForQuestionAnswering

>>> model = TFAutoModelForQuestionAnswering.from_pretrained("my_awesome_qa_model")
>>> outputs = model(**inputs)
```

Get the highest probability from the model output for the start and end positions:

```py
>>> import tensorflow as tf

>>> answer_start_index = int(tf.math.argmax(outputs.start_logits, axis=-1)[0])
>>> answer_end_index = int(tf.math.argmax(outputs.end_logits, axis=-1)[0])
```

Decode the predicted tokens to get the answer:

```py
>>> predict_answer_tokens = inputs.input_ids[0, answer_start_index : answer_end_index + 1]
>>> tokenizer.decode(predict_answer_tokens)
'176 billion parameters and can generate text in 46 languages natural languages and 13'
```
</tf>
</frameworkcontent>