^ ResNet V2 models use Inception pre-processing and input image size of 299 (use
`--preprocessing_name inception --eval_image_size 299` when using
`eval_image_classifier.py`). Performance numbers for ResNet V2 models are
reported on the ImageNet validation set.
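
For example, a ResNet V2 152 checkpoint could be evaluated with a command along the following lines (the checkpoint and dataset paths are placeholders, and the remaining flags assume the standard `eval_image_classifier.py` flags used elsewhere in this README):

```shell
$ python eval_image_classifier.py \
    --alsologtostderr \
    --checkpoint_path=${CHECKPOINT_FILE} \
    --dataset_dir=${DATASET_DIR} \
    --dataset_name=imagenet \
    --dataset_split_name=validation \
    --model_name=resnet_v2_152 \
    --preprocessing_name=inception \
    --eval_image_size=299
```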
(#) More details about the NASNet architectures are available in this [README](nets/nasnet/README.md).
All 16 MobileNet models reported in the [MobileNet Paper](https://arxiv.org/abs/1704.04861) can be found [here](https://github.com/tensorflow/models/tree/master/research/slim/nets/mobilenet_v1.md).
(\*): Results quoted from the [paper](https://arxiv.org/abs/1603.05027).
See the [evaluation module example](https://github.com/tensorflow/tensorflow/tree/master/tensorflow/contrib/slim#evaluation-loop) for an example of how to evaluate a model at multiple checkpoints during or after training.
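
A minimal sketch along the lines of that example is shown below. It assumes a TF 1.x environment with `tf.contrib.slim` available; `load_validation_batch` and `build_model` are hypothetical stand-ins for your own input pipeline and network:

```python
import tensorflow as tf

slim = tf.contrib.slim

# Hypothetical helpers: replace with your own input pipeline and network.
images, labels = load_validation_batch()
logits = build_model(images)
predictions = tf.argmax(logits, 1)

# Streaming metrics accumulate across the `num_evals` batches of each run.
names_to_values, names_to_updates = slim.metrics.aggregate_metric_map({
    'accuracy': slim.metrics.streaming_accuracy(predictions, labels),
})

# Re-evaluate every 10 minutes, picking up the newest checkpoint written
# to `checkpoint_dir` by the (possibly still running) training job.
slim.evaluation.evaluation_loop(
    master='',
    checkpoint_dir='/tmp/my_train_dir',
    logdir='/tmp/my_eval_dir',
    num_evals=100,
    eval_op=list(names_to_updates.values()),
    eval_interval_secs=600)
```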