**NOTE**: For the most part, you will find a newer version of this code at [models/slim](https://github.com/tensorflow/models/tree/master/slim). In particular:

*   `inception_train.py` and `imagenet_train.py` should no longer be used. The slim editions for running on multiple GPUs are the current best examples.
*   `inception_distributed_train.py` and `imagenet_distributed_train.py` are still valid examples of distributed training (see the invocation sketch below).
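
As a rough illustration of the distributed case, each parameter-server and worker task is launched separately with flags describing the cluster. The authoritative flag set lives in `inception_distributed_train.py`; the host names, ports, and values below are placeholders, so treat this as a sketch rather than a verified command line:

```shell
# Start parameter server task 0 on its host (placeholder addresses; the
# --ps_hosts/--worker_hosts lists must be identical across all tasks).
bazel-bin/inception/imagenet_distributed_train \
  --job_name='ps' \
  --task_id=0 \
  --ps_hosts='ps0.example.com:2222' \
  --worker_hosts='worker0.example.com:2222,worker1.example.com:2222'

# Start worker task 0 on its host; task_id ranges over [0, num_workers).
bazel-bin/inception/imagenet_distributed_train \
  --batch_size=32 \
  --data_dir=$HOME/imagenet-data \
  --job_name='worker' \
  --task_id=0 \
  --ps_hosts='ps0.example.com:2222' \
  --worker_hosts='worker0.example.com:2222,worker1.example.com:2222'
```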
For performance benchmarking, please see https://www.tensorflow.org/performance/benchmarks.
# Inception in TensorFlow
[ImageNet](http://www.image-net.org/) is a common academic data set in machine learning for training an image recognition system.
...
## Description of Code
The code base provides three core binaries for:
* Training an Inception v3 network from scratch across multiple GPUs and/or