<!---
Copyright 2020 The HuggingFace Team. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
-->

# Token classification

## PyTorch version

Fine-tuning the library models for token classification tasks such as Named Entity Recognition (NER), part-of-speech
tagging (POS) or phrase extraction (CHUNKS). The main script `run_ner.py` leverages the 🤗 Datasets library and the Trainer API. You can easily
customize it to your needs if you need extra processing on your datasets.

It will either run on datasets hosted on our [hub](https://huggingface.co/datasets) or with your own text files for
training and validation; you might just need to add some tweaks in the data preprocessing.

The following example fine-tunes BERT on CoNLL-2003:

```bash
python run_ner.py \
  --model_name_or_path bert-base-uncased \
  --dataset_name conll2003 \
  --output_dir /tmp/test-ner \
  --do_train \
  --do_eval
```

or you can just run the bash script `run.sh`.
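
Once training finishes, you can try the fine-tuned model directly. Below is a minimal sketch (not part of the example scripts) that loads the checkpoint saved in `--output_dir` with the token-classification pipeline; the path and sentence are just illustrative:

```python
# Minimal inference sketch: load the model saved by run_ner.py from --output_dir
# and run it on a sample sentence. The path and the sentence are illustrative.
from transformers import pipeline

ner = pipeline("token-classification", model="/tmp/test-ner", aggregation_strategy="simple")
print(ner("My name is Sylvain and I work at Hugging Face in Brooklyn."))
```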

To run on your own training and validation files, use the following command:

```bash
python run_ner.py \
  --model_name_or_path bert-base-uncased \
  --train_file path_to_train_file \
  --validation_file path_to_validation_file \
  --output_dir /tmp/test-ner \
  --do_train \
  --do_eval
```
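
As a rough sketch of what such files can look like, the script reads CSV or JSON files; a common layout is one JSON object per line, with a list of tokens and a parallel list of labels. The field names below are assumptions, so adjust them to the column names your version of the script expects:

```python
# A minimal sketch: write a tiny JSON-lines training file with one example per
# line. The "tokens"/"ner_tags" field names are assumptions — check the column
# names your version of run_ner.py expects for your data.
import json

examples = [
    {"tokens": ["My", "name", "is", "Sylvain", "."],
     "ner_tags": ["O", "O", "O", "B-PER", "O"]},
    {"tokens": ["Hugging", "Face", "is", "based", "in", "New", "York", "."],
     "ner_tags": ["B-ORG", "I-ORG", "O", "O", "O", "B-LOC", "I-LOC", "O"]},
]

with open("path_to_train_file", "w") as f:
    for example in examples:
        f.write(json.dumps(example) + "\n")
```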

**Note:** This script only works with models that have a fast tokenizer (backed by the 🤗 Tokenizers library) as it
uses special features of those tokenizers. You can check if your favorite model has a fast tokenizer in
[this table](https://huggingface.co/transformers/index.html#supported-frameworks); if it doesn't, you can still use the old version
of the script.
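
If you prefer checking programmatically, `AutoTokenizer` loads the fast variant by default whenever one exists; the snippet below is a small sketch of that check:

```python
# Quick check for a fast tokenizer: AutoTokenizer returns the fast (🤗 Tokenizers)
# implementation by default whenever the checkpoint supports one.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-cased")
print(tokenizer.is_fast)  # True when the model is backed by a fast tokenizer
```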

## Old version of the script

You can find the old version of the PyTorch script [here](https://github.com/huggingface/transformers/blob/main/examples/legacy/token-classification/run_ner.py).

## PyTorch version, no Trainer

Based on the script [run_ner_no_trainer.py](https://github.com/huggingface/transformers/blob/main/examples/pytorch/token-classification/run_ner_no_trainer.py).

Like `run_ner.py`, this script allows you to fine-tune any of the models on the [hub](https://huggingface.co/models) on a
token classification task (NER, POS or CHUNKS), or on your own data in a CSV or JSON file. The main difference is that this
script exposes the bare training loop, to allow you to quickly experiment and add any customization you would like.
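
To give an idea of what that bare loop looks like, here is a minimal, self-contained sketch built on 🤗 Accelerate, with a toy model and dataset standing in for the token-classification model and the tokenized data (it is not the actual script):

```python
import torch
from torch.utils.data import DataLoader, TensorDataset
from accelerate import Accelerator

accelerator = Accelerator()

# Toy stand-ins for the tokenized dataset and the token-classification model.
dataset = TensorDataset(torch.randn(64, 16), torch.randint(0, 2, (64,)))
train_dataloader = DataLoader(dataset, batch_size=8, shuffle=True)
model = torch.nn.Linear(16, 2)
optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)

# prepare() wraps model, optimizer and dataloader so the same loop runs on CPU,
# one or several GPUs, or TPU, with optional mixed precision.
model, optimizer, train_dataloader = accelerator.prepare(model, optimizer, train_dataloader)

model.train()
for epoch in range(3):
    for inputs, labels in train_dataloader:
        logits = model(inputs)
        loss = torch.nn.functional.cross_entropy(logits, labels)
        accelerator.backward(loss)  # used instead of loss.backward()
        optimizer.step()
        optimizer.zero_grad()
```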

This version offers fewer options than the script with `Trainer` (for instance, you can easily change the options for the optimizer
or the dataloaders directly in the script), but it still runs in a distributed setup, on TPU, and supports mixed precision by
means of the [🤗 `Accelerate`](https://github.com/huggingface/accelerate) library. You can use the script normally
after installing it:

```bash
pip install accelerate
```

then

```bash
export TASK_NAME=ner

python run_ner_no_trainer.py \
  --model_name_or_path bert-base-cased \
  --dataset_name conll2003 \
  --task_name $TASK_NAME \
  --max_length 128 \
  --per_device_train_batch_size 32 \
  --learning_rate 2e-5 \
  --num_train_epochs 3 \
  --output_dir /tmp/$TASK_NAME/
```

You can then use your usual launchers to run it in a distributed environment, but the easiest way is to run

```bash
accelerate config
```

and reply to the questions asked. Then

```bash
accelerate test
```

which will check that everything is ready for training. Finally, you can launch training with

```bash
export TASK_NAME=ner

accelerate launch run_ner_no_trainer.py \
  --model_name_or_path bert-base-cased \
  --dataset_name conll2003 \
  --task_name $TASK_NAME \
  --max_length 128 \
  --per_device_train_batch_size 32 \
  --learning_rate 2e-5 \
  --num_train_epochs 3 \
  --output_dir /tmp/$TASK_NAME/
```

This command is the same and will work for:

- a CPU-only setup
- a setup with one GPU
- a distributed training with several GPUs (single or multi node)
- a training on TPUs

Note that this library is in alpha release, so your feedback is more than welcome if you encounter any problem using it.