<!---
Copyright 2020 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
-->

# Examples

This folder contains actively maintained examples of how to use 🤗 Transformers, organized by NLP task. If you are looking for an example that used to
be in this folder, it may have moved to our [research projects](https://github.com/huggingface/transformers/tree/master/examples/research_projects) subfolder (which contains frozen snapshots of research projects).

## Important note

To make sure you can successfully run the latest versions of the example scripts, you have to **install the library from source** and install some example-specific requirements. To do this, execute the following steps in a new virtual environment:
```bash
git clone https://github.com/huggingface/transformers
cd transformers
pip install .
```
Then cd into the example folder of your choice and run:
```bash
pip install -r requirements.txt
```
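
For instance, to set up the token-classification examples (using the folder names from the table below):
```bash
cd examples/token-classification
pip install -r requirements.txt
```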

To browse the examples corresponding to released versions of 🤗 Transformers, click on the line below and then on your desired version of the library:

<details>
  <summary>Examples for older versions of 🤗 Transformers</summary>

  - [v4.3.3](https://github.com/huggingface/transformers/tree/v4.3.3/examples)
  - [v4.2.2](https://github.com/huggingface/transformers/tree/v4.2.2/examples)
  - [v4.1.1](https://github.com/huggingface/transformers/tree/v4.1.1/examples)
  - [v4.0.1](https://github.com/huggingface/transformers/tree/v4.0.1/examples)
  - [v3.5.1](https://github.com/huggingface/transformers/tree/v3.5.1/examples)
  - [v3.4.0](https://github.com/huggingface/transformers/tree/v3.4.0/examples)
  - [v3.3.1](https://github.com/huggingface/transformers/tree/v3.3.1/examples)
  - [v3.2.0](https://github.com/huggingface/transformers/tree/v3.2.0/examples)
  - [v3.1.0](https://github.com/huggingface/transformers/tree/v3.1.0/examples)
  - [v3.0.2](https://github.com/huggingface/transformers/tree/v3.0.2/examples)
  - [v2.11.0](https://github.com/huggingface/transformers/tree/v2.11.0/examples)
  - [v2.10.0](https://github.com/huggingface/transformers/tree/v2.10.0/examples)
  - [v2.9.1](https://github.com/huggingface/transformers/tree/v2.9.1/examples)
  - [v2.8.0](https://github.com/huggingface/transformers/tree/v2.8.0/examples)
  - [v2.7.0](https://github.com/huggingface/transformers/tree/v2.7.0/examples)
  - [v2.6.0](https://github.com/huggingface/transformers/tree/v2.6.0/examples)
  - [v2.5.1](https://github.com/huggingface/transformers/tree/v2.5.1/examples)
  - [v2.4.0](https://github.com/huggingface/transformers/tree/v2.4.0/examples)
  - [v2.3.0](https://github.com/huggingface/transformers/tree/v2.3.0/examples)
  - [v2.2.0](https://github.com/huggingface/transformers/tree/v2.2.0/examples)
  - [v2.1.1](https://github.com/huggingface/transformers/tree/v2.1.1/examples)
  - [v2.0.0](https://github.com/huggingface/transformers/tree/v2.0.0/examples)
  - [v1.2.0](https://github.com/huggingface/transformers/tree/v1.2.0/examples)
  - [v1.1.0](https://github.com/huggingface/transformers/tree/v1.1.0/examples)
  - [v1.0.0](https://github.com/huggingface/transformers/tree/v1.0.0/examples)
</details>

Alternatively, you can switch your cloned 🤗 Transformers repository to a specific version (for instance v3.5.1) with:
```bash
git checkout tags/v3.5.1
```
and run the example command as usual afterward.
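
Note that if you installed the library with `pip install .` (rather than in editable mode), you may also want to reinstall it after switching tags, so that the installed library matches the examples you checked out:
```bash
git checkout tags/v3.5.1
pip install .  # reinstall so the library version matches the checked-out examples
```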

## The Big Table of Tasks

Here is the list of all our examples:
- with information on whether they are **built on top of `Trainer`/`TFTrainer`** (if not, they still work; they might just lack some features),
- whether or not they leverage the [🤗 Datasets](https://github.com/huggingface/datasets) library,
- with links to **Colab notebooks** to walk through the scripts and run them easily.
<!--
Coming soon!
- links to **Cloud deployments** to be able to deploy large-scale trainings in the Cloud with little to no setup.
-->

| Task | Example datasets | Trainer support | TFTrainer support | 🤗 Datasets | Colab
|---|---|:---:|:---:|:---:|:---:|
| [**`language-modeling`**](https://github.com/huggingface/transformers/tree/master/examples/language-modeling)       | Raw text        | ✅ | -  | ✅ | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/huggingface/blog/blob/master/notebooks/01_how_to_train.ipynb)
| [**`multiple-choice`**](https://github.com/huggingface/transformers/tree/master/examples/multiple-choice)           | SWAG, RACE, ARC | ✅ | ✅ | ✅ | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/ViktorAlm/notebooks/blob/master/MPC_GPU_Demo_for_TF_and_PT.ipynb)
| [**`question-answering`**](https://github.com/huggingface/transformers/tree/master/examples/question-answering)     | SQuAD           | ✅ | ✅ | ✅ | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/huggingface/notebooks/blob/master/examples/question_answering.ipynb)
| [**`summarization`**](https://github.com/huggingface/transformers/tree/master/examples/seq2seq)                     | CNN/Daily Mail  | ✅  | - | - | -
| [**`text-classification`**](https://github.com/huggingface/transformers/tree/master/examples/text-classification)   | GLUE, XNLI      | ✅ | ✅ | ✅ | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/huggingface/notebooks/blob/master/examples/text_classification.ipynb)
| [**`text-generation`**](https://github.com/huggingface/transformers/tree/master/examples/text-generation)           | -               | n/a | n/a | - | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/huggingface/blog/blob/master/notebooks/02_how_to_generate.ipynb)
| [**`token-classification`**](https://github.com/huggingface/transformers/tree/master/examples/token-classification) | CoNLL NER       | ✅ | ✅ | ✅ | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/huggingface/notebooks/blob/master/examples/token_classification.ipynb)
| [**`translation`**](https://github.com/huggingface/transformers/tree/master/examples/seq2seq)                       | WMT             | ✅  | - | - | -


## Running quick tests

Most examples are equipped with a mechanism to truncate the number of dataset samples to a desired length. This is useful for debugging, for example to quickly check that all stages of the program can complete before running the same setup on the full dataset, which may take hours.

For example, here is how to truncate all three splits to just 50 samples each:
```bash
examples/token-classification/run_ner.py \
--max_train_samples 50 \
--max_val_samples 50 \
--max_test_samples 50 \
[...]
```

Most example scripts support the first two command-line arguments, and some also support the third. You can quickly check whether a given example supports them by passing the `-h` option, e.g.:
```bash
examples/token-classification/run_ner.py -h
```
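
Under the hood, this truncation is typically a one-liner with 🤗 Datasets. Here is a minimal sketch of the pattern (the dataset choice and variable names are just illustrative assumptions):
```python
from datasets import load_dataset

max_train_samples = 50  # mirrors the --max_train_samples flag above

datasets = load_dataset("conll2003")  # illustrative dataset choice
train_dataset = datasets["train"]
if max_train_samples is not None:
    # Keep only the first max_train_samples examples for a quick debugging run
    train_dataset = train_dataset.select(range(max_train_samples))
```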

## Resuming training

You can resume training from a previous checkpoint like this:

1. Pass `--output_dir previous_output_dir` without `--overwrite_output_dir` to resume training from the latest checkpoint in `output_dir` (what you would use if the training was interrupted, for instance).
2. Pass `--model_name_or_path path_to_a_specific_checkpoint` to resume training from that checkpoint folder.

Should you want to turn an example into a notebook where you'd no longer have access to the command
line, 🤗 Trainer supports resuming from a checkpoint via `trainer.train(resume_from_checkpoint)`.

1. If `resume_from_checkpoint` is `True`, it will look for the last checkpoint in the value of `output_dir` passed via `TrainingArguments`.
2. If `resume_from_checkpoint` is a path to a specific checkpoint, it will resume training from that saved checkpoint folder (see the sketch below).
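
A minimal notebook sketch (the model, dataset, and `output_dir` are placeholders):
```python
from transformers import Trainer, TrainingArguments

# model and train_dataset are assumed to be defined earlier in the notebook
args = TrainingArguments(output_dir="my_output_dir")
trainer = Trainer(model=model, args=args, train_dataset=train_dataset)

# Resume from the latest checkpoint found in output_dir:
trainer.train(resume_from_checkpoint=True)

# Or resume from a specific saved checkpoint folder:
trainer.train(resume_from_checkpoint="my_output_dir/checkpoint-500")
```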


## Distributed training and mixed precision

All the PyTorch scripts mentioned above work out of the box with distributed training and mixed precision, thanks to
the [Trainer API](https://huggingface.co/transformers/main_classes/trainer.html). To launch one of them on _n_ GPUs,
use the following command:

```bash
python -m torch.distributed.launch \
    --nproc_per_node number_of_gpu_you_have path_to_script.py \
    --all_arguments_of_the_script
```

As an example, here is how you would fine-tune the BERT large model (with whole word masking) on the text
classification MNLI task using the `run_glue` script, with 8 GPUs:

```bash
python -m torch.distributed.launch \
    --nproc_per_node 8 text-classification/run_glue.py \
    --model_name_or_path bert-large-uncased-whole-word-masking \
    --task_name mnli \
    --do_train \
    --do_eval \
    --max_seq_length 128 \
    --per_device_train_batch_size 8 \
    --learning_rate 2e-5 \
    --num_train_epochs 3.0 \
    --output_dir /tmp/mnli_output/
```

If you have a GPU with mixed precision capabilities (architecture Pascal or more recent), you can use mixed precision
training with PyTorch 1.6.0 or later, or by installing the [Apex](https://github.com/NVIDIA/apex) library for previous
versions. Just add the flag `--fp16` to the command launching one of the scripts mentioned above!

Using mixed precision training usually results in a ~2x speedup with the same final results (as shown in
[this table](https://github.com/huggingface/transformers/tree/master/examples/text-classification#mixed-precision-training)
for text classification).
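
For instance, to rerun the distributed MNLI command above in mixed precision, just append the flag (the last line is a placeholder for the arguments listed earlier):

```bash
python -m torch.distributed.launch \
    --nproc_per_node 8 text-classification/run_glue.py \
    --fp16 \
    --all_other_arguments_as_above
```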

## Running on TPUs

When using TensorFlow, TPUs are supported out of the box via `tf.distribute.Strategy`.

When using PyTorch, we support TPUs thanks to `pytorch/xla`. For more context and information on how to set up your TPU environment, refer to Google's documentation and to the
very detailed [pytorch/xla README](https://github.com/pytorch/xla/blob/master/README.md).

In this repo, we provide a very simple launcher script named
[xla_spawn.py](https://github.com/huggingface/transformers/tree/master/examples/xla_spawn.py) that lets you run our
example scripts on multiple TPU cores without any boilerplate. Just pass a `--num_cores` flag to this script, then your
regular training script with its arguments (this is similar to the `torch.distributed.launch` helper for
`torch.distributed`):

```bash
python xla_spawn.py --num_cores num_tpu_you_have \
    path_to_script.py \
    --all_arguments_of_the_script
```

As an example, here is how you would fine-tune the BERT large model (with whole word masking) on the text
classification MNLI task using the `run_glue` script, with 8 TPUs:

```bash
python xla_spawn.py --num_cores 8 \
    text-classification/run_glue.py \
    --model_name_or_path bert-large-uncased-whole-word-masking \
    --task_name mnli \
    --do_train \
    --do_eval \
    --max_seq_length 128 \
    --per_device_train_batch_size 8 \
    --learning_rate 2e-5 \
    --num_train_epochs 3.0 \
    --output_dir /tmp/mnli_output/
```

## Logging & Experiment tracking

You can easily log and monitor your training runs. The following integrations are currently supported:

* [TensorBoard](https://www.tensorflow.org/tensorboard)
* [Weights & Biases](https://docs.wandb.ai/integrations/huggingface)
* [Comet ML](https://www.comet.ml/docs/python-sdk/huggingface/)

### Weights & Biases

To use Weights & Biases, install the wandb package with:

```bash
pip install wandb
```

Then log in the command line:

```bash
wandb login
```

If you are in Jupyter or Colab, you should log in with:

```python
import wandb
wandb.login()
```

To enable logging to W&B, include `"wandb"` in the `report_to` argument of your `TrainingArguments` or script, or just pass `--report_to all` if you have `wandb` installed.
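
In code, this looks like the following (a minimal sketch; the `output_dir` and `run_name` values are placeholders):
```python
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="my_output_dir",  # placeholder
    report_to=["wandb"],         # enable Weights & Biases logging
    run_name="my_experiment",    # optional: name of the W&B run
)
```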

Whenever you use the `Trainer` or `TFTrainer` classes, your losses, evaluation metrics, model topology, and gradients (for `Trainer` only) will automatically be logged.

Advanced configuration is possible by setting environment variables:

<table>
  <thead>
    <tr>
      <th style="text-align:left">Environment Variables</th>
      <th style="text-align:left">Options</th>
    </tr>
  </thead>
  <tbody>
    <tr>
      <td style="text-align:left">WANDB_LOG_MODEL</td>
      <td style="text-align:left">Log the model as artifact at the end of training (<b>false</b> by default)</td>
    </tr>
    <tr>
      <td style="text-align:left">WANDB_WATCH</td>
      <td style="text-align:left">
        <ul>
          <li><b>gradients</b> (default): Log histograms of the gradients</li>
          <li><b>all</b>: Log histograms of gradients and parameters</li>
          <li><b>false</b>: No gradient or parameter logging</li>
        </ul>
      </td>
    </tr>
    <tr>
      <td style="text-align:left">WANDB_PROJECT</td>
      <td style="text-align:left">Organize runs by project</td>
    </tr>
  </tbody>
</table>
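
For example, a run configured entirely through environment variables might look like this (the project name is a placeholder):
```bash
export WANDB_PROJECT=my_project  # placeholder project name
export WANDB_LOG_MODEL=true      # log the final model as an artifact
export WANDB_WATCH=all           # log histograms of gradients and parameters
python text-classification/run_glue.py --report_to wandb --all_arguments_of_the_script
```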

Set run names with the `run_name` argument, available in the example scripts or as part of `TrainingArguments`.

Additional configuration options are available through generic [wandb environment variables](https://docs.wandb.com/library/environment-variables).

Refer to related [documentation & examples](https://docs.wandb.ai/integrations/huggingface).

### Comet.ml

To use `comet_ml`, install the Python package with:

```bash
pip install comet_ml
```

or if in a Conda environment:

```bash
conda install -c comet_ml -c anaconda -c conda-forge comet_ml
```
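
As with W&B, logging happens automatically through `Trainer` once `comet_ml` is installed. A minimal sketch, assuming the environment variables described in the Comet documentation linked above:
```bash
export COMET_MODE=ONLINE              # assumption: see the Comet docs for supported values
export COMET_PROJECT_NAME=my_project  # placeholder project name
python text-classification/run_glue.py --all_arguments_of_the_script
```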