<!---
Copyright 2021 The HuggingFace Team. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
-->

# Question answering example

This folder contains the `run_qa.py` script, demonstrating *question answering* with the 🤗 Transformers library.
For straightforward use cases you may be able to use this script without modification, although we have also
included comments in the code to indicate areas that you may need to adapt to your own projects.

### Usage notes

Note that when contexts are long they may be split into multiple training cases, not all of which may contain
the answer span.
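
Under the hood, this splitting comes from the tokenizer's support for overlapping windows. As a minimal sketch of the idea (not the script's exact preprocessing code; the `max_length` and `stride` values here are just illustrative, and the checkpoint matches the example command below):

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("distilbert/distilbert-base-cased")

question = "What is being demonstrated?"
context = "a passage far longer than the model's maximum input length " * 100

encoded = tokenizer(
    question,
    context,
    truncation="only_second",        # truncate the context, never the question
    max_length=384,
    stride=128,                      # overlap between consecutive windows
    return_overflowing_tokens=True,  # emit one feature per window
    return_offsets_mapping=True,     # used later to locate the answer span
)

# Each window becomes a separate training case; windows that don't contain
# the answer span get labeled accordingly.
print(f"{len(encoded['input_ids'])} windows produced from one example")
```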

As-is, the example script will train on SQuAD or any other question-answering dataset formatted the same way, and can handle user
inputs as well.
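
For reference, a record in that format carries `id`, `title`, `context`, `question` and `answers` fields, where `answers` holds parallel lists of answer strings and their character offsets into the context. The record below is made up purely to illustrate the shape:

```python
# A hypothetical SQuAD-style record (invented data, illustrative only).
example = {
    "id": "example-0001",
    "title": "Example article",
    "context": "The Eiffel Tower was completed in 1889.",
    "question": "When was the Eiffel Tower completed?",
    "answers": {
        "text": ["1889"],
        "answer_start": [34],  # character offset of "1889" in the context
    },
}
```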

### Multi-GPU and TPU usage

By default, the script uses a `MirroredStrategy` and will use multiple GPUs effectively if they are available. TPUs
can also be used by passing the name of the TPU resource with the `--tpu` argument. There are some issues surrounding
these strategies and our models right now, which are most likely to appear in the evaluation/prediction steps. We're
actively working on better support for multi-GPU and TPU training in TF, but if you encounter problems a quick
workaround is to train in the multi-GPU or TPU context and then perform predictions outside of it.
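
A minimal, self-contained sketch of that workaround is below. The dummy features and label positions are stand-ins so the snippet runs end to end, not the script's real preprocessing, and the behaviour of `compile()` without an explicit loss assumes a recent version of Transformers:

```python
import numpy as np
import tensorflow as tf
from transformers import AutoTokenizer, TFAutoModelForQuestionAnswering

checkpoint = "distilbert/distilbert-base-cased"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)

# Dummy features purely so the sketch runs; real features come from
# SQuAD-style preprocessing as in run_qa.py.
enc = tokenizer(
    ["Who completed it?"] * 4,
    ["The tower was completed by Gustave Eiffel's company."] * 4,
    padding="max_length",
    max_length=32,
    return_tensors="np",
)
features = {
    "input_ids": enc["input_ids"],
    "attention_mask": enc["attention_mask"],
    # Putting the labels in the input dict lets the model compute its own loss.
    "start_positions": np.array([7] * 4),
    "end_positions": np.array([9] * 4),
}
train_dataset = tf.data.Dataset.from_tensor_slices(features).batch(2)

# Build and train inside the distribution strategy's scope.
strategy = tf.distribute.MirroredStrategy()
with strategy.scope():
    model = TFAutoModelForQuestionAnswering.from_pretrained(checkpoint)
    model.compile(optimizer="adam")  # no loss passed: the internal loss is used
    model.fit(train_dataset, epochs=1)
    model.save_pretrained("output")

# Reload outside any strategy scope, so prediction runs on a single device
# and sidesteps the evaluation/prediction issues mentioned above.
eval_model = TFAutoModelForQuestionAnswering.from_pretrained("output")
predictions = eval_model.predict(train_dataset)
```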

### Memory usage and data loading

One thing to note is that all data is loaded into memory in this script. Most question answering datasets are small
enough that this is not an issue, but if you have a very large dataset you will need to modify the script to handle
data streaming. This is particularly challenging for TPUs, given the stricter requirements and the sheer volume of data
required to keep them fed. A full explanation of all the possible pitfalls is a bit beyond this example script and
README, but for more information you can see the 'Input Datasets' section of
[this document](https://www.tensorflow.org/guide/tpu).
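
As a rough sketch of one streaming approach (not part of the script: 🤗 Datasets' streaming mode wrapped in a `tf.data` generator, which can work for GPU training; TPUs generally need the TPU-friendly input pipelines described in the guide linked above):

```python
import tensorflow as tf
from datasets import load_dataset

# Stream records lazily instead of materializing the dataset in memory.
streamed = load_dataset("squad", split="train", streaming=True)

def gen():
    for record in streamed:
        # Real code would tokenize into model features here, as run_qa.py
        # does for its in-memory data.
        yield record["question"], record["context"]

tf_dataset = tf.data.Dataset.from_generator(
    gen,
    output_signature=(
        tf.TensorSpec(shape=(), dtype=tf.string),
        tf.TensorSpec(shape=(), dtype=tf.string),
    ),
).batch(8)
```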

### Example command

```bash
python run_qa.py \
--model_name_or_path distilbert/distilbert-base-cased \
--output_dir output \
--dataset_name squad \
--do_train \
--do_eval
```