# Image Classification

This folder contains TF 2.0 model examples for image classification:

* [MNIST](#mnist)
* [Classifier Trainer](#classifier-trainer), a framework that uses the Keras
compile/fit methods for image classification models, including:
  * ResNet
  * EfficientNet[^1]

[^1]: Currently a work in progress. We cannot match "AutoAugment (AA)" in [the original version](https://github.com/tensorflow/tpu/tree/master/models/official/efficientnet).

For more information about other types of models, please refer to this
[README file](../../README.md).

## Before you begin
Please make sure that you have the latest version of TensorFlow
installed and
[add the models folder to your Python path](/official/#running-the-models).
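
For example, a minimal way to do this (assuming the repository was cloned to
`/path/to/models`, a placeholder location) is to export `PYTHONPATH` before
running any of the commands below:

```bash
# Adjust /path/to/models to wherever you cloned the repository.
export PYTHONPATH=$PYTHONPATH:/path/to/models
```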

### ImageNet preparation

#### Using TFDS
`classifier_trainer.py` supports ImageNet with
[TensorFlow Datasets (TFDS)](https://www.tensorflow.org/datasets/overview).

Please see this [example snippet](https://github.com/tensorflow/datasets/blob/master/tensorflow_datasets/scripts/download_and_prepare.py)
for more information on how to use TFDS to download and prepare datasets, and
the [TFDS ImageNet readme](https://github.com/tensorflow/datasets/blob/master/docs/catalog/imagenet2012.md)
for manual download instructions.
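
As a rough sketch, ImageNet can be prepared with the TFDS script linked above.
The exact flag names depend on your TFDS version, and `$MANUAL_DIR` is assumed
to contain the manually downloaded ImageNet archives:

```bash
# Sketch only: flag names may differ between TFDS versions.
# $MANUAL_DIR should hold the manually downloaded ImageNet archives
# (see the TFDS ImageNet readme linked above).
python3 -m tensorflow_datasets.scripts.download_and_prepare \
  --datasets=imagenet2012 \
  --data_dir=$DATA_DIR \
  --manual_dir=$MANUAL_DIR
```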

#### Legacy TFRecords
Download the ImageNet dataset and convert it to TFRecord format.
The following [script](https://github.com/tensorflow/tpu/blob/master/tools/datasets/imagenet_to_gcs.py)
and [README](https://github.com/tensorflow/tpu/tree/master/tools/datasets#imagenet_to_gcspy)
provide a few options.
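
For example, one hedged invocation of the conversion script (flag names are
taken from the linked README and may have changed; the paths are placeholders)
is:

```bash
# Sketch only: see the linked README for the authoritative flags.
python3 imagenet_to_gcs.py \
  --raw_data_dir=$RAW_IMAGENET_DIR \
  --local_scratch_dir=$SCRATCH_DIR \
  --nogcs_upload
```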

Note that the legacy ResNet runners, e.g. [resnet/resnet_ctl_imagenet_main.py](resnet/resnet_ctl_imagenet_main.py),
require TFRecords, whereas `classifier_trainer.py` can use either format by setting the
builder to 'records' or 'tfds' in the configuration.
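
For example, assuming the example configs expose the builder under
`train_dataset.builder` and `validation_dataset.builder` (key names are an
assumption; check the YAML file you use), the builder can be switched without
editing the file by appending an override to the `classifier_trainer.py`
commands shown below:

```bash
# Appended to a classifier_trainer.py command; key names are assumptions.
  --params_override='train_dataset.builder=tfds,validation_dataset.builder=tfds'
```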

### Running on Cloud TPUs

Note: These models will **not** work with TPUs on Colab.

You can train image classification models on Cloud TPUs using
[tf.distribute.TPUStrategy](https://www.tensorflow.org/api_docs/python/tf.distribute.TPUStrategy?version=nightly).
If you are not familiar with Cloud TPUs, it is strongly recommended that you go
through the
[quickstart](https://cloud.google.com/tpu/docs/quickstart) to learn how to
create a TPU and GCE VM.
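
If you have not created a TPU yet, one possible starting point is the `ctpu`
tool. This is a sketch only; see the quickstart for the authoritative steps,
and treat the flag values as placeholders:

```bash
# Sketch only: creates a TPU and a companion GCE VM.
# $TPU_NAME and $ZONE are placeholders; pick the TPU size and TF version you need.
ctpu up --name=$TPU_NAME --zone=$ZONE --tpu-size=v3-8 --tf-version=2.1
```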

### Running on multiple GPU hosts

You can also train these models on multiple hosts, each with GPUs, using
[tf.distribute.experimental.MultiWorkerMirroredStrategy](https://www.tensorflow.org/api_docs/python/tf/distribute/experimental/MultiWorkerMirroredStrategy).

The easiest way to run multi-host benchmarks is to set the
[`TF_CONFIG`](https://www.tensorflow.org/guide/distributed_training#TF_CONFIG)
appropriately at each host.  e.g., to run using `MultiWorkerMirroredStrategy` on
2 hosts, the `cluster` in `TF_CONFIG` should have 2 `host:port` entries, and
host `i` should have the `task` in `TF_CONFIG` set to `{"type": "worker",
"index": i}`.  `MultiWorkerMirroredStrategy` will automatically use all the
available GPUs at each host.
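
For instance, a minimal sketch of the `TF_CONFIG` for the first of two workers
(hostnames and port are placeholders) looks like:

```bash
# Set on host 0; on host 1, change "index" to 1. Hostnames and port are placeholders.
export TF_CONFIG='{
  "cluster": {"worker": ["host1:12345", "host2:12345"]},
  "task": {"type": "worker", "index": 0}
}'
```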

## MNIST

To download the data and run the MNIST sample model locally for the first time,
run one of the following commands:

```bash
python3 mnist_main.py \
  --model_dir=$MODEL_DIR \
  --data_dir=$DATA_DIR \
  --train_epochs=10 \
  --distribution_strategy=one_device \
  --num_gpus=$NUM_GPUS \
  --download
```

To train the model on a Cloud TPU, run the following command:

```bash
python3 mnist_main.py \
  --tpu=$TPU_NAME \
  --model_dir=$MODEL_DIR \
  --data_dir=$DATA_DIR \
  --train_epochs=10 \
  --distribution_strategy=tpu \
  --download
```

Note: the `--download` flag is only required the first time you run the model.


## Classifier Trainer
The classifier trainer is a unified framework for running image classification
models using Keras's compile/fit methods. Experiments are defined in YAML
configuration files; see [configs/examples](./configs/examples) for example
configurations.

The provided configuration files use a per-replica batch size, which is scaled
by the number of devices. For instance, if `batch size` = 64, then for 1 GPU
the global batch size would be 64 * 1 = 64. For 8 GPUs, the global batch size
would be 64 * 8 = 512. Similarly, for a v3-8 TPU, the global batch size would
be 64 * 8 = 512, and for a v3-32, the global batch size is 64 * 32 = 2048.

### ResNet50

#### On GPU:
```bash
python3 classifier_trainer.py \
  --mode=train_and_eval \
  --model_type=resnet \
  --dataset=imagenet \
  --model_dir=$MODEL_DIR \
  --data_dir=$DATA_DIR \
  --config_file=configs/examples/resnet/imagenet/gpu.yaml \
  --params_override='runtime.num_gpus=$NUM_GPUS'
```

To train on multiple hosts, each with GPUs attached, using
[MultiWorkerMirroredStrategy](https://www.tensorflow.org/api_docs/python/tf/distribute/experimental/MultiWorkerMirroredStrategy),
please update the `runtime` section in `gpu.yaml`
(or override it using `--params_override`) with:

```YAML
# gpu.yaml
runtime:
  distribution_strategy: 'multi_worker_mirrored'
  worker_hosts: '$HOST1:port,$HOST2:port'
  num_gpus: $NUM_GPUS
  task_index: 0
```
Set `task_index: 0` on the first host, `task_index: 1` on the second host, and
so on. `$HOST1` and `$HOST2` are the IP addresses of the hosts, and `port` can
be any free port on the hosts. Only the first host will write TensorBoard
summaries and save checkpoints.
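
Equivalently, once `distribution_strategy` and `worker_hosts` are set in
`gpu.yaml` as above, the per-host `task_index` can be supplied on the command
line instead of editing the file. A sketch for the second host (all other
values as in the GPU example above):

```bash
# Run on the second host (task_index 1); the first host uses runtime.task_index=0.
python3 classifier_trainer.py \
  --mode=train_and_eval \
  --model_type=resnet \
  --dataset=imagenet \
  --model_dir=$MODEL_DIR \
  --data_dir=$DATA_DIR \
  --config_file=configs/examples/resnet/imagenet/gpu.yaml \
  --params_override='runtime.task_index=1'
```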

#### On TPU:
```bash
python3 classifier_trainer.py \
  --mode=train_and_eval \
  --model_type=resnet \
  --dataset=imagenet \
  --tpu=$TPU_NAME \
  --model_dir=$MODEL_DIR \
  --data_dir=$DATA_DIR \
  --config_file=configs/examples/resnet/imagenet/tpu.yaml
```

### EfficientNet
**Note: EfficientNet development is a work in progress.**
#### On GPU:
```bash
python3 classifier_trainer.py \
  --mode=train_and_eval \
  --model_type=efficientnet \
  --dataset=imagenet \
  --model_dir=$MODEL_DIR \
  --data_dir=$DATA_DIR \
  --config_file=configs/examples/efficientnet/imagenet/efficientnet-b0-gpu.yaml \
  --params_override='runtime.num_gpus=$NUM_GPUS'
```

#### On TPU:
```bash
python3 classifier_trainer.py \
  --mode=train_and_eval \
  --model_type=efficientnet \
  --dataset=imagenet \
  --tpu=$TPU_NAME \
  --model_dir=$MODEL_DIR \
  --data_dir=$DATA_DIR \
  --config_file=configs/examples/efficientnet/imagenet/efficientnet-b0-tpu.yaml
```

Note that the number of GPU devices can be overridden on the command line using
`--params_override`. The TPU does not need this override, as the device is
fixed by providing the TPU address or name with the `--tpu` flag.