Commit abeb0356 authored by Toby Boyd

Merge branch 'master' into cifar_mkl

parents aadf299d da62bb0b

CIFAR-10 is a common benchmark in machine learning for image recognition.

http://www.cs.toronto.edu/~kriz/cifar.html

Code in this directory focuses on how to use TensorFlow Estimators to train and
evaluate a CIFAR-10 ResNet model on:

* A single host with one CPU;
* A single host with multiple GPUs;
* Multiple hosts with CPU or multiple GPUs;

Before trying to run the model we highly encourage you to read all of this
README.

## Prerequisite

1. [Install](https://www.tensorflow.org/install/) TensorFlow version 1.2.1 or
later.

2. Download the CIFAR-10 dataset and generate TFRecord files using the provided
script. The script and associated command below will download the CIFAR-10
dataset and then generate TFRecord files for the training, validation, and
evaluation datasets.

```shell
python generate_cifar10_tfrecords.py --data-dir=${PWD}/cifar-10-data
```

After running the command above, you should see the following files in the
`--data-dir` (`ls -R cifar-10-data`):

* train.tfrecords
* validation.tfrecords
* eval.tfrecords

## Training on a single machine with GPUs or CPU

Run the training on CPU only. After training, it runs the evaluation.

```
python cifar10_main.py --data-dir=${PWD}/cifar-10-data \
                       ...
                       --train-steps=1000
```

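The middle flags of this command are collapsed in the view above. A complete
CPU-only invocation would presumably look like the following; the `--job-dir`
path and `--num-gpus=0` are assumptions based on the other examples in this
README:

```shell
# CPU-only training: 0 GPUs, checkpoints written to the job directory.
python cifar10_main.py --data-dir=${PWD}/cifar-10-data \
                       --job-dir=/tmp/cifar10 \
                       --num-gpus=0 \
                       --train-steps=1000
```
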
Run the model on 2 GPUs using CPU as parameter server. After training, it runs
the evaluation.

```
python cifar10_main.py --data-dir=${PWD}/cifar-10-data \
                       --job-dir=/tmp/cifar10 \
                       ...
```

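The trailing flags are again collapsed above. Since the text calls for 2 GPUs,
the full command is presumably the same as the CPU one with `--num-gpus=2`; the
`--train-steps` value is an assumption carried over from the previous example:

```shell
# Data-parallel training on 2 GPUs, with variables kept on the CPU.
python cifar10_main.py --data-dir=${PWD}/cifar-10-data \
                       --job-dir=/tmp/cifar10 \
                       --num-gpus=2 \
                       --train-steps=1000
```
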
Run the model on 2 GPUs using GPU as parameter server.
It will run an experiment, which for a local setting basically means it will
stop training a couple of times to perform evaluation.

```
python cifar10_main.py --data-dir=${PWD}/cifar-10-data \
                       ...
                       --num-gpus=2
```

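Only `--num-gpus=2` is visible above; the flag that places the variables
(parameters) on a GPU is collapsed. Assuming the script exposes a
`--variable-strategy` option for this purpose (an assumption, not confirmed by
the visible text; check `python cifar10_main.py --help` for the real flag), the
command might look like:

```shell
# Same 2-GPU run, but with the parameter-server role placed on a GPU.
# --variable-strategy is an assumed flag name; verify it with --help.
python cifar10_main.py --data-dir=${PWD}/cifar-10-data \
                       --job-dir=/tmp/cifar10 \
                       --variable-strategy GPU \
                       --num-gpus=2
```
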
There are more command line flags to play with; run
`python cifar10_main.py --help` for details.

## Run distributed training

### (Optional) Running on Google Cloud Machine Learning Engine

This example can be run on Google Cloud Machine Learning Engine (ML Engine),
which will configure the environment and take care of running workers,
parameter servers, and masters in a fault-tolerant way.

To install the command line tool and set up a project and billing, see the
quickstart [here](https://cloud.google.com/ml-engine/docs/quickstarts/command-line).

You'll also need a Google Cloud Storage bucket for the data. If you followed
the instructions above, you can just run:

```
MY_BUCKET=gs://<my-bucket-name>
gsutil cp -r ${PWD}/cifar-10-data $MY_BUCKET/
```

Then run the following command from the `tutorials/image` directory of this
repository (the parent directory of this README):

```
gcloud ml-engine jobs submit training cifarmultigpu \
    ...
```

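The rest of the `gcloud` command is collapsed above. A sketch of what a complete
submission might look like; the flag names are standard `gcloud ml-engine`
options, but the config file, package path, module name, and paths below are all
assumptions:

```shell
# Submit the training job to ML Engine; config file, module and paths are placeholders.
gcloud ml-engine jobs submit training cifarmultigpu \
    --runtime-version 1.2 \
    --job-dir=$MY_BUCKET/model_dirs/cifarmultigpu \
    --config cmle_config.yaml \
    --package-path cifar10_estimator/ \
    --module-name cifar10_estimator.cifar10_main \
    -- \
    --data-dir=$MY_BUCKET/cifar-10-data \
    --num-gpus=4 \
    --train-steps=1000
```
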
### Set TF_CONFIG

Considering that you already have multiple hosts configured, all you need is a
`TF_CONFIG` environment variable on each host. You can set up the hosts manually
or check [tensorflow/ecosystem](https://github.com/tensorflow/ecosystem) for
instructions about how to set up a cluster.

The `TF_CONFIG` will be used by the `RunConfig` to know the existing hosts and
their tasks: `master`, `ps`, or `worker`.

Here's an example of `TF_CONFIG`.

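The example itself is collapsed in this view. What follows is a minimal sketch,
assuming placeholder host addresses and the `cluster`, `task`, `model_dir`, and
`environment` fields described in the subsections below:

```python
import json

# One host per role; replace the addresses with your own.
cluster = {'master': ['master-ip:8000'],
           'ps': ['ps-ip:8000'],
           'worker': ['worker-ip:8000']}

# TF_CONFIG for the master node (task index 0 in the cluster spec).
TF_CONFIG = json.dumps(
    {'cluster': cluster,
     'task': {'type': 'master', 'index': 0},
     'model_dir': 'gs://<my-bucket-name>/model_dir',
     'environment': 'cloud'})
```

Export this JSON string as the `TF_CONFIG` environment variable on the
corresponding host before launching `cifar10_main.py`.
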
*Cluster*

A cluster spec, which is basically a dictionary that describes all of the tasks
in the cluster. More about it [here](https://www.tensorflow.org/deploy/distributed).

In this cluster spec we are defining a cluster with 1 master, 1 ps and 1 worker.

* `ps`: stores the parameters shared among all workers. All workers can
read/write/update the parameters of the model via the ps. As some models are
extremely large, the parameters are sharded across the ps nodes (each ps stores
a subset).
* `worker`: does the training.
* `master`: basically a special worker; it does training, but also saves and
restores checkpoints and runs evaluation.

*Task*

The Task defines the role of the current node; in this example the node is the
master, at index 0 in the cluster spec. The task will be different for each
node. An example of the `TF_CONFIG` for a worker would be:

```python
cluster = {'master': ['master-ip:8000'],
           ...
```

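The rest of this example is collapsed; a sketch of how the worker's `TF_CONFIG`
presumably continues, with the same cluster dict and only the `task` entry
changed (the worker index is an assumption):

```python
import json

cluster = {'master': ['master-ip:8000'],
           'ps': ['ps-ip:8000'],
           'worker': ['worker-ip:8000']}

# Identical cluster spec; only the task entry identifies this node as a worker.
TF_CONFIG = json.dumps(
    {'cluster': cluster,
     'task': {'type': 'worker', 'index': 0},
     'model_dir': 'gs://<my-bucket-name>/model_dir',
     'environment': 'cloud'})
```
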
*Model_dir*

This is the path where the master will save the checkpoints, graph, and
TensorBoard files. For a multi-host environment you may want to use a
distributed file system; Google Storage and DFS are supported.

*Environment*

By default the environment is *local*; for a distributed setting we need to
change it to *cloud*.

### Running script

Once you have a `TF_CONFIG` configured properly on each host you're ready to
run in a distributed setting.

#### Master

Run this on the master:

Runs an Experiment in sync mode on 4 GPUs using CPU as parameter server for
40000 steps. It will run evaluation a couple of times during training. The
num_workers argument is used only to update the learning rate correctly. Make
sure the model_dir is the same as defined in the TF_CONFIG.

```shell
python cifar10_main.py --data-dir=gs://path/cifar-10-data \
                       ...
```

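The remaining flags are collapsed above. Based on the description (sync mode, 4
GPUs, 40000 steps, num_workers used for the learning-rate adjustment), a
complete master invocation might look like the following; the `--job-dir` path
is a placeholder, and the `--num-workers` spelling and value are inferred from
the num_workers argument mentioned above:

```shell
# Master: trains, evaluates periodically, and writes checkpoints to the shared model_dir.
python cifar10_main.py --data-dir=gs://path/cifar-10-data \
                       --job-dir=gs://path/model_dir/ \
                       --num-gpus=4 \
                       --train-steps=40000 \
                       --sync \
                       --num-workers=2
```
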
#### Worker

Run this on the worker:

Runs an Experiment in sync mode on 4 GPUs using CPU as parameter server for
40000 steps. It will run evaluation a couple of times during training. Make
sure the model_dir is the same as defined in the TF_CONFIG.

```shell
python cifar10_main.py --data-dir=gs://path/cifar-10-data \
                       ...
```

## Visualizing results with TensorBoard

When using Estimators you can also visualize your data in TensorBoard, with no
changes in your code. You can use TensorBoard to visualize your TensorFlow
graph, plot quantitative metrics about the execution of your graph, and show
additional data like images that pass through it.

You'll see something similar to this if you "point" TensorBoard to the
`model_dir` you used to train or evaluate your model.

Check TensorBoard during training or after it. Just point TensorBoard to the
`model_dir` you chose in the previous step; for the local examples above this
is the `--job-dir`, `/tmp/cifar10`:

```shell
tensorboard --logdir=/tmp/cifar10
```

## Warnings

When running `cifar10_main.py` with the `--sync` argument you may see an error
similar to:

```python
File "cifar10_main.py", line 538, in <module>
...
```