<!---
Copyright 2020 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
-->

# Examples

This folder contains actively maintained examples of the use of 🤗 Transformers, organized by NLP task. If you are looking for an example that used to be in this folder, it may have moved to our [research projects](https://github.com/huggingface/transformers/tree/master/examples/research_projects) subfolder (which contains frozen snapshots of research projects) or to the [legacy](https://github.com/huggingface/transformers/tree/master/examples/legacy) subfolder.

While we strive to present as many use cases as possible, the scripts in this folder are just examples. It is expected that they won't work out of the box on your specific problem and that you will need to change a few lines of code to adapt them to your needs. To help you with that, all the PyTorch versions of the examples fully expose the preprocessing of the data, so you can easily tweak them.

The same applies if you want the scripts to report a different metric from the one they currently use: look at the `compute_metrics` function inside the script. It takes the full arrays of predictions and labels, and has to return a dictionary of string keys and float values. Just change it to add (or replace) your own metric among the ones already reported.
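
For instance, here is a minimal sketch of what such a function could look like for a classification task (the exact shape of the predictions varies from script to script, so treat this as illustrative):

```python
import numpy as np

def compute_metrics(eval_pred):
    # eval_pred is a (predictions, labels) pair of numpy arrays.
    logits, labels = eval_pred
    predictions = np.argmax(logits, axis=-1)
    # Return a dictionary mapping string metric names to float values.
    return {"accuracy": float((predictions == labels).mean())}
```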

Before submitting a PR, please discuss any feature you would like to implement in an example on the [forum](https://discuss.huggingface.co/) or in an [issue](https://github.com/huggingface/transformers/issues): we welcome bug fixes, but since we want to keep the examples as simple as possible, it's unlikely we will merge a pull request adding more functionality at the cost of readability.

## Important note

To make sure you can successfully run the latest versions of the example scripts, you have to **install the library from source** and install some example-specific requirements. To do this, execute the following steps in a new virtual environment:
```bash
git clone https://github.com/huggingface/transformers
cd transformers
pip install .
```
Then `cd` into the example folder of your choice and run
```bash
pip install -r requirements.txt
```

To browse the examples corresponding to released versions of 🤗 Transformers, click on the line below and then on your desired version of the library:

<details>
  <summary>Examples for older versions of 🤗 Transformers</summary>

  - [v4.3.3](https://github.com/huggingface/transformers/tree/v4.3.3/examples)
  - [v4.2.2](https://github.com/huggingface/transformers/tree/v4.2.2/examples)
  - [v4.1.1](https://github.com/huggingface/transformers/tree/v4.1.1/examples)
  - [v4.0.1](https://github.com/huggingface/transformers/tree/v4.0.1/examples)
  - [v3.5.1](https://github.com/huggingface/transformers/tree/v3.5.1/examples)
  - [v3.4.0](https://github.com/huggingface/transformers/tree/v3.4.0/examples)
  - [v3.3.1](https://github.com/huggingface/transformers/tree/v3.3.1/examples)
  - [v3.2.0](https://github.com/huggingface/transformers/tree/v3.2.0/examples)
  - [v3.1.0](https://github.com/huggingface/transformers/tree/v3.1.0/examples)
  - [v3.0.2](https://github.com/huggingface/transformers/tree/v3.0.2/examples)
  - [v2.11.0](https://github.com/huggingface/transformers/tree/v2.11.0/examples)
  - [v2.10.0](https://github.com/huggingface/transformers/tree/v2.10.0/examples)
  - [v2.9.1](https://github.com/huggingface/transformers/tree/v2.9.1/examples)
  - [v2.8.0](https://github.com/huggingface/transformers/tree/v2.8.0/examples)
  - [v2.7.0](https://github.com/huggingface/transformers/tree/v2.7.0/examples)
  - [v2.6.0](https://github.com/huggingface/transformers/tree/v2.6.0/examples)
  - [v2.5.1](https://github.com/huggingface/transformers/tree/v2.5.1/examples)
  - [v2.4.0](https://github.com/huggingface/transformers/tree/v2.4.0/examples)
  - [v2.3.0](https://github.com/huggingface/transformers/tree/v2.3.0/examples)
  - [v2.2.0](https://github.com/huggingface/transformers/tree/v2.2.0/examples)
  - [v2.1.1](https://github.com/huggingface/transformers/tree/v2.1.1/examples)
  - [v2.0.0](https://github.com/huggingface/transformers/tree/v2.0.0/examples)
  - [v1.2.0](https://github.com/huggingface/transformers/tree/v1.2.0/examples)
  - [v1.1.0](https://github.com/huggingface/transformers/tree/v1.1.0/examples)
  - [v1.0.0](https://github.com/huggingface/transformers/tree/v1.0.0/examples)
</details>

Alternatively, you can switch your cloned 🤗 Transformers to a specific version (for instance v3.5.1) with
```bash
git checkout tags/v3.5.1
```
and run the example command as usual afterward.

## The Big Table of Tasks

Here is the list of all our examples:
- with information on whether they are **built on top of `Trainer`/`TFTrainer`** (if not, they still work, they might
  just lack some features),
- whether or not they leverage the [🤗 Datasets](https://github.com/huggingface/datasets) library,
- links to **Colab notebooks** to walk through the scripts and run them easily.
<!--
Coming soon!
- links to **Cloud deployments** to be able to deploy large-scale trainings in the Cloud with little to no setup.
-->

| Task | Example datasets | Trainer support | TFTrainer support | 🤗 Datasets | Colab |
|---|---|:---:|:---:|:---:|:---:|
| [**`language-modeling`**](https://github.com/huggingface/transformers/tree/master/examples/language-modeling)       | WikiText-2      | ✅ | -  | ✅ | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/huggingface/notebooks/blob/master/examples/language_modeling.ipynb)
| [**`multiple-choice`**](https://github.com/huggingface/transformers/tree/master/examples/multiple-choice)           | SWAG            | ✅ | ✅ | ✅ | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/huggingface/notebooks/blob/master/examples/multiple_choice.ipynb)
| [**`question-answering`**](https://github.com/huggingface/transformers/tree/master/examples/question-answering)     | SQuAD           | ✅ | ✅ | ✅ | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/huggingface/notebooks/blob/master/examples/question_answering.ipynb)
| [**`summarization`**](https://github.com/huggingface/transformers/tree/master/examples/seq2seq)                     |  XSum           | ✅ | -  | ✅ | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/huggingface/notebooks/blob/master/examples/summarization.ipynb)
| [**`text-classification`**](https://github.com/huggingface/transformers/tree/master/examples/text-classification)   | GLUE            | ✅ | ✅ | ✅ | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/huggingface/notebooks/blob/master/examples/text_classification.ipynb)
| [**`text-generation`**](https://github.com/huggingface/transformers/tree/master/examples/text-generation)           | -               | n/a | n/a | - | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/huggingface/blog/blob/master/notebooks/02_how_to_generate.ipynb)
| [**`token-classification`**](https://github.com/huggingface/transformers/tree/master/examples/token-classification) | CoNLL NER       | ✅ | ✅ | ✅ | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/huggingface/notebooks/blob/master/examples/token_classification.ipynb)
| [**`translation`**](https://github.com/huggingface/transformers/tree/master/examples/seq2seq)                       | WMT             | ✅  | - | ✅ | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/huggingface/notebooks/blob/master/examples/translation.ipynb)


## Running quick tests

Most examples are equipped with a mechanism to truncate the number of dataset samples to the desired length. This is useful for debugging purposes, for example to quickly check that all stages of the program can complete, before running the same setup on the full dataset, which may take hours to complete.

For example, here is how to truncate all three splits to just 50 samples each:
```bash
examples/token-classification/run_ner.py \
--max_train_samples 50 \
--max_val_samples 50 \
--max_test_samples 50 \
[...]
```

Most example scripts support the first two command-line arguments, and some also support the third one. You can quickly check whether a given example supports them by passing the `-h` option, e.g.:
```bash
examples/token-classification/run_ner.py -h
```

## Resuming training

You can resume training from a previous checkpoint like this:

1. Pass `--output_dir previous_output_dir` without `--overwrite_output_dir` to resume training from the latest checkpoint in `output_dir` (what you would use if the training was interrupted, for instance).
2. Pass `--model_name_or_path path_to_a_specific_checkpoint` to resume training from that checkpoint folder.

Should you want to turn an example into a notebook where you'd no longer have access to the command
line, 🤗 Trainer supports resuming from a checkpoint via `trainer.train(resume_from_checkpoint)`.

1. If `resume_from_checkpoint` is `True`, it will look for the last checkpoint in the value of `output_dir` passed via `TrainingArguments`.
2. If `resume_from_checkpoint` is a path to a specific checkpoint, it will resume training from that saved checkpoint folder.
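
Here is a minimal sketch, assuming `model` and `train_dataset` are already defined (the checkpoint folder name is illustrative):

```python
from transformers import Trainer, TrainingArguments

args = TrainingArguments(output_dir="my_output_dir")
trainer = Trainer(model=model, args=args, train_dataset=train_dataset)

# Resume from the last checkpoint found in `output_dir`:
trainer.train(resume_from_checkpoint=True)

# Or resume from a specific checkpoint folder:
# trainer.train(resume_from_checkpoint="my_output_dir/checkpoint-500")
```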


## Distributed training and mixed precision

All the PyTorch scripts mentioned above work out of the box with distributed training and mixed precision, thanks to
the [Trainer API](https://huggingface.co/transformers/main_classes/trainer.html). To launch one of them on _n_ GPUs,
use the following command:

```bash
python -m torch.distributed.launch \
    --nproc_per_node number_of_gpu_you_have path_to_script.py \
    --all_arguments_of_the_script
```

As an example, here is how you would fine-tune the BERT large model (with whole word masking) on the text
classification MNLI task using the `run_glue` script, with 8 GPUs:

```bash
python -m torch.distributed.launch \
    --nproc_per_node 8 text-classification/run_glue.py \
    --model_name_or_path bert-large-uncased-whole-word-masking \
    --task_name mnli \
    --do_train \
    --do_eval \
    --max_seq_length 128 \
    --per_device_train_batch_size 8 \
    --learning_rate 2e-5 \
    --num_train_epochs 3.0 \
    --output_dir /tmp/mnli_output/
```

If you have a GPU with mixed precision capabilities (architecture Pascal or more recent), you can use mixed precision
training with PyTorch 1.6.0 or later, or by installing the [Apex](https://github.com/NVIDIA/apex) library for previous
versions. Just add the flag `--fp16` to the command launching one of the scripts mentioned above!
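
For example, adding the flag to the distributed MNLI command shown above gives:

```bash
python -m torch.distributed.launch \
    --nproc_per_node 8 text-classification/run_glue.py \
    --fp16 \
    --model_name_or_path bert-large-uncased-whole-word-masking \
    --task_name mnli \
    --do_train \
    --do_eval \
    --max_seq_length 128 \
    --per_device_train_batch_size 8 \
    --learning_rate 2e-5 \
    --num_train_epochs 3.0 \
    --output_dir /tmp/mnli_output/
```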

Using mixed precision training usually results in a 2x speedup for training with the same final results (as shown in
[this table](https://github.com/huggingface/transformers/tree/master/examples/text-classification#mixed-precision-training)
for text classification).

## Running on TPUs

When using TensorFlow, TPUs are supported out of the box as a `tf.distribute.Strategy`.

When using PyTorch, we support TPUs thanks to `pytorch/xla`. For more context and information on how to set up your TPU environment, refer to Google's documentation and to the
very detailed [pytorch/xla README](https://github.com/pytorch/xla/blob/master/README.md).

In this repo, we provide a very simple launcher script named
[xla_spawn.py](https://github.com/huggingface/transformers/tree/master/examples/xla_spawn.py) that lets you run our
example scripts on multiple TPU cores without any boilerplate. Just pass a `--num_cores` flag to this script, then your
regular training script with its arguments (this is similar to the `torch.distributed.launch` helper for
`torch.distributed`):

```bash
python xla_spawn.py --num_cores num_tpu_you_have \
    path_to_script.py \
    --all_arguments_of_the_script
```

As an example, here is how you would fine-tune the BERT large model (with whole word masking) on the text
classification MNLI task using the `run_glue` script, with 8 TPUs:

```bash
python xla_spawn.py --num_cores 8 \
    text-classification/run_glue.py \
    --model_name_or_path bert-large-uncased-whole-word-masking \
    --task_name mnli \
    --do_train \
    --do_eval \
    --max_seq_length 128 \
    --per_device_train_batch_size 8 \
    --learning_rate 2e-5 \
    --num_train_epochs 3.0 \
    --output_dir /tmp/mnli_output/
```

## Logging & Experiment tracking

You can easily log and monitor your runs. The following integrations are currently supported:

* [TensorBoard](https://www.tensorflow.org/tensorboard)
* [Weights & Biases](https://docs.wandb.ai/integrations/huggingface)
* [Comet ML](https://www.comet.ml/docs/python-sdk/huggingface/)

### Weights & Biases

To use Weights & Biases, install the wandb package with:

```bash
pip install wandb
```

Then log in the command line:

```bash
wandb login
```

If you are in Jupyter or Colab, you should log in with:

```python
import wandb
wandb.login()
```

To enable logging to W&B, include `"wandb"` in the `report_to` argument of your `TrainingArguments` or script. Alternatively, just pass `--report_to all` if you have `wandb` installed.
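
Here is a minimal sketch (the argument values are illustrative):

```python
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="my_output_dir",
    report_to=["wandb"],  # enable logging to Weights & Biases
)
```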

Whenever you use the `Trainer` or `TFTrainer` classes, your losses, evaluation metrics, model topology and gradients (for `Trainer` only) will automatically be logged.

Advanced configuration is possible by setting environment variables:

| Environment Variable | Value |
|---|---|
| WANDB_LOG_MODEL | Log the model as an artifact at the end of training (`false` by default) |
| WANDB_WATCH | One of `gradients` (default) to log histograms of gradients, `all` to log histograms of both gradients and parameters, or `false` for no histogram logging |
| WANDB_PROJECT | Organize runs by project |
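
For example, you could set these before launching a script (a sketch; the values are illustrative):

```bash
export WANDB_PROJECT=my_project   # group runs under a W&B project
export WANDB_WATCH=all            # log histograms of gradients and parameters
export WANDB_LOG_MODEL=true       # upload the model as an artifact at the end of training
```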

Set run names with the `run_name` argument, which is present in the scripts and in `TrainingArguments`.

Additional configuration options are available through generic [wandb environment variables](https://docs.wandb.com/library/environment-variables).

Refer to related [documentation & examples](https://docs.wandb.ai/integrations/huggingface).

### Comet.ml

To use `comet_ml`, install the Python package with:

```bash
pip install comet_ml
```

or if in a Conda environment:

```bash
conda install -c comet_ml -c anaconda -c conda-forge comet_ml
```
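
Once installed, `comet_ml` should be picked up automatically by `Trainer` when you launch a script, provided your Comet credentials are configured; see the [Comet ML documentation](https://www.comet.ml/docs/python-sdk/huggingface/) linked above for details.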