# Image Classification

**Warning:** the features in the `image_classification/` folder have been fully
integrated into `vision/beta`. Please use the [new code base](../beta/README.md).

This folder contains TF 2.0 model examples for image classification:

* [MNIST](#mnist)
* [Classifier Trainer](#classifier-trainer), a framework that uses the Keras
compile/fit methods for image classification models, including:
  * ResNet
  * EfficientNet[^1]

[^1]: Currently a work in progress; we cannot yet match the "AutoAugment (AA)" results of [the original version](https://github.com/tensorflow/tpu/tree/master/models/official/efficientnet).
For more information about other types of models, please refer to this
[README file](../../README.md).

## Before you begin
Please make sure that you have the latest version of TensorFlow installed and
[add the models folder to your Python path](/official/#running-the-models).
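
For example, on a Linux or macOS shell this can be done by exporting
`PYTHONPATH` (the path below is a placeholder; point it at your local clone of
the models repository):

```bash
# Placeholder path: replace with the directory containing your clone of
# the TensorFlow Models repository.
export PYTHONPATH=$PYTHONPATH:/path/to/models
```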

### ImageNet preparation

#### Using TFDS
`classifier_trainer.py` supports ImageNet with
[TensorFlow Datasets (TFDS)](https://www.tensorflow.org/datasets/overview).

Please see the following [example snippet](https://github.com/tensorflow/datasets/blob/master/tensorflow_datasets/scripts/download_and_prepare.py)
for more information on how to use TFDS to download and prepare datasets, and
specifically the [TFDS ImageNet readme](https://github.com/tensorflow/datasets/blob/master/docs/catalog/imagenet2012.md)
for manual download instructions.
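
As a rough sketch, assuming the ImageNet tarballs have already been downloaded
manually into the TFDS manual directory (by default
`~/tensorflow_datasets/downloads/manual/`) and that the `download_and_prepare`
script accepts a `--datasets` flag as in recent TFDS releases, preparation
might look like:

```bash
# Sketch only: ImageNet requires a manual download (see the TFDS ImageNet
# readme linked above). Place the downloaded tarballs in the TFDS manual
# directory before running this command.
python3 -m tensorflow_datasets.scripts.download_and_prepare \
  --datasets=imagenet2012
```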

#### Legacy TFRecords
Download the ImageNet dataset and convert it to TFRecord format.
The following [script](https://github.com/tensorflow/tpu/blob/master/tools/datasets/imagenet_to_gcs.py)
and [README](https://github.com/tensorflow/tpu/tree/master/tools/datasets#imagenet_to_gcspy)
provide a few options.

Note that the legacy ResNet runners, e.g.
[resnet/resnet_ctl_imagenet_main.py](resnet/resnet_ctl_imagenet_main.py),
require TFRecords, whereas `classifier_trainer.py` can use either format by
setting the dataset builder to `'records'` or `'tfds'` in the configuration.
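
For example, assuming the builder is exposed under `train_dataset.builder` and
`validation_dataset.builder` in the YAML configuration (check the config file
for the exact keys), it could be switched to TFDS from the command line; the
full `classifier_trainer.py` invocation is described in the
[Classifier Trainer](#classifier-trainer) section below:

```bash
# Hypothetical override; verify the key names in the YAML config you are using.
python3 classifier_trainer.py \
  --mode=train_and_eval \
  --model_type=resnet \
  --dataset=imagenet \
  --model_dir=$MODEL_DIR \
  --data_dir=$DATA_DIR \
  --config_file=configs/examples/resnet/imagenet/gpu.yaml \
  --params_override='train_dataset.builder=tfds,validation_dataset.builder=tfds'
```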

### Running on Cloud TPUs

Note: These models will **not** work with TPUs on Colab.

You can train image classification models on Cloud TPUs using
[tf.distribute.TPUStrategy](https://www.tensorflow.org/api_docs/python/tf.distribute.TPUStrategy?version=nightly).
If you are not familiar with Cloud TPUs, it is strongly recommended that you go
through the
[quickstart](https://cloud.google.com/tpu/docs/quickstart) to learn how to
create a TPU and GCE VM.
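
The TPU commands below use a few environment variables as placeholders. For
Cloud TPUs, the model and data directories should live in Google Cloud Storage;
a hypothetical setup might look like this (TPU and bucket names are
placeholders):

```bash
# Placeholders only; substitute your own TPU name and GCS bucket.
export TPU_NAME=my-tpu
export MODEL_DIR=gs://my-bucket/model_dir
export DATA_DIR=gs://my-bucket/data
```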

### Running on multiple GPU hosts

You can also train these models on multiple hosts, each with GPUs, using
[tf.distribute.experimental.MultiWorkerMirroredStrategy](https://www.tensorflow.org/api_docs/python/tf/distribute/experimental/MultiWorkerMirroredStrategy).

The easiest way to run multi-host benchmarks is to set the
[`TF_CONFIG`](https://www.tensorflow.org/guide/distributed_training#TF_CONFIG)
environment variable appropriately on each host. For example, to run with
`MultiWorkerMirroredStrategy` on 2 hosts, the `cluster` in `TF_CONFIG` should
have 2 `host:port` entries, and host `i` should have the `task` in `TF_CONFIG`
set to `{"type": "worker", "index": i}`. `MultiWorkerMirroredStrategy` will
automatically use all the available GPUs on each host.
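
For example, a 2-host setup might export the following before launching the
training script on each host (host names and the port are placeholders):

```bash
# On host 0 (index 0 in the worker list):
export TF_CONFIG='{"cluster": {"worker": ["host1:12345", "host2:12345"]}, "task": {"type": "worker", "index": 0}}'
# On host 1, use the same cluster but index 1:
export TF_CONFIG='{"cluster": {"worker": ["host1:12345", "host2:12345"]}, "task": {"type": "worker", "index": 1}}'
```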

## MNIST

To download the data and run the MNIST sample model locally for the first time,
run the following command:

```bash
python3 mnist_main.py \
  --model_dir=$MODEL_DIR \
  --data_dir=$DATA_DIR \
  --train_epochs=10 \
  --distribution_strategy=one_device \
  --num_gpus=$NUM_GPUS \
  --download
```

To train the model on a Cloud TPU, run the following command:

```bash
python3 mnist_main.py \
  --tpu=$TPU_NAME \
  --model_dir=$MODEL_DIR \
  --data_dir=$DATA_DIR \
  --train_epochs=10 \
  --distribution_strategy=tpu \
  --download
```

Note: the `--download` flag is only required the first time you run the model.


## Classifier Trainer
The classifier trainer is a unified framework for running image classification
models using Keras's compile/fit methods. Experiments are defined by YAML
configuration files; see [configs/examples](./configs/examples) for example
configurations.

The provided configuration files specify a per-replica batch size, which is
scaled by the number of devices. For instance, with a per-replica batch size of
64, the global batch size on 1 GPU is 64 * 1 = 64, and on 8 GPUs it is
64 * 8 = 512. Similarly, a v3-8 TPU gives a global batch size of 64 * 8 = 512,
and a v3-32 gives 64 * 32 = 2048.
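
If a different per-replica batch size is needed, it can also be overridden on
the command line. The example below is a sketch that assumes the batch size is
exposed under `train_dataset.batch_size` and `validation_dataset.batch_size` in
the YAML config (check the config file for the exact keys):

```bash
# Hypothetical override; verify the key names in the YAML config you are using.
python3 classifier_trainer.py \
  --mode=train_and_eval \
  --model_type=resnet \
  --dataset=imagenet \
  --model_dir=$MODEL_DIR \
  --data_dir=$DATA_DIR \
  --config_file=configs/examples/resnet/imagenet/gpu.yaml \
  --params_override='train_dataset.batch_size=64,validation_dataset.batch_size=64'
```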

### ResNet50

#### On GPU:
```bash
python3 classifier_trainer.py \
  --mode=train_and_eval \
  --model_type=resnet \
  --dataset=imagenet \
  --model_dir=$MODEL_DIR \
  --data_dir=$DATA_DIR \
  --config_file=configs/examples/resnet/imagenet/gpu.yaml \
  --params_override="runtime.num_gpus=$NUM_GPUS"
```

To train on multiple hosts, each with GPUs attached, using
[MultiWorkerMirroredStrategy](https://www.tensorflow.org/api_docs/python/tf/distribute/experimental/MultiWorkerMirroredStrategy),
update the `runtime` section in `gpu.yaml`
(or override it using `--params_override`) with:

```YAML
# gpu.yaml
runtime:
  distribution_strategy: 'multi_worker_mirrored'
  worker_hosts: '$HOST1:port,$HOST2:port'
  num_gpus: $NUM_GPUS
  task_index: 0
```
Set `task_index: 0` on the first host, `task_index: 1` on the second, and so
on. `$HOST1` and `$HOST2` are the IP addresses of the hosts, and `port` can be
any free port on the hosts. Only the first host will write TensorBoard
summaries and save checkpoints.
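
For example, once the `runtime` section above is set in `gpu.yaml`, each host
can run the same command and override only its own `task_index` (a sketch;
adjust paths and values to your setup):

```bash
# On the second host (task_index 1); the first host keeps the YAML value of 0.
python3 classifier_trainer.py \
  --mode=train_and_eval \
  --model_type=resnet \
  --dataset=imagenet \
  --model_dir=$MODEL_DIR \
  --data_dir=$DATA_DIR \
  --config_file=configs/examples/resnet/imagenet/gpu.yaml \
  --params_override='runtime.task_index=1'
```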

#### On TPU:
```bash
python3 classifier_trainer.py \
  --mode=train_and_eval \
  --model_type=resnet \
  --dataset=imagenet \
  --tpu=$TPU_NAME \
  --model_dir=$MODEL_DIR \
  --data_dir=$DATA_DIR \
  --config_file=configs/examples/resnet/imagenet/tpu.yaml
```

### EfficientNet
**Note: EfficientNet development is a work in progress.**
#### On GPU:
```bash
python3 classifier_trainer.py \
  --mode=train_and_eval \
  --model_type=efficientnet \
  --dataset=imagenet \
  --model_dir=$MODEL_DIR \
  --data_dir=$DATA_DIR \
  --config_file=configs/examples/efficientnet/imagenet/efficientnet-b0-gpu.yaml \
  --params_override="runtime.num_gpus=$NUM_GPUS"
```


#### On TPU:
```bash
python3 classifier_trainer.py \
  --mode=train_and_eval \
  --model_type=efficientnet \
  --dataset=imagenet \
  --tpu=$TPU_NAME \
  --model_dir=$MODEL_DIR \
  --data_dir=$DATA_DIR \
  --config_file=configs/examples/efficientnet/imagenet/efficientnet-b0-tpu.yaml
```

Note that the number of GPU devices can be overridden on the command line using
`--params_override`. A TPU run does not need this override, since the device is
determined by passing the TPU address or name with the `--tpu` flag.