Commit 477ed41e authored by Jonathan Huang's avatar Jonathan Huang Committed by Sergio Guadarrama

Replace Oxford-IIT by Oxford-IIIT. (#1708)

parent c4ba26b4
@@ -2,7 +2,7 @@
Tensorflow Object Detection API reads data using the TFRecord file format. Two
sample scripts (`create_pascal_tf_record.py` and `create_pet_tf_record.py`) are
provided to convert from the PASCAL VOC dataset and Oxford-IIIT Pet dataset to
TFRecords.
## Generating the PASCAL VOC TFRecord files.
@@ -26,9 +26,9 @@ pascal_val.record in the tensorflow/models/object_detection directory.
The label map for the PASCAL VOC data set can be found at
data/pascal_label_map.pbtxt.
## Generating the Oxford-IIIT Pet TFRecord files.
The Oxford-IIIT Pet data set can be downloaded from
[their website](http://www.robots.ox.ac.uk/~vgg/data/pets/). Extract the tar
file and run the `create_pet_tf_record` script to generate TFRecords.
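What the conversion script does at this step can be sketched in plain Python: pair each image with its annotation file, then split the pairs into train and validation sets. `split_pet_examples` is a hypothetical helper written for illustration only; the real script additionally serializes each pair into a `tf.train.Example` and writes it to a TFRecord file.

```python
import os
import random

def split_pet_examples(image_dir, annotation_dir, train_fraction=0.7, seed=42):
    """Pair each .jpg image with its .xml annotation and split the
    pairs into (train, val) lists. Images with no matching annotation
    are skipped. Illustrative sketch only, not the API's code."""
    examples = []
    for name in sorted(os.listdir(image_dir)):
        stem, ext = os.path.splitext(name)
        if ext.lower() != ".jpg":
            continue
        xml = os.path.join(annotation_dir, stem + ".xml")
        if os.path.exists(xml):  # only keep images that have groundtruth
            examples.append((os.path.join(image_dir, name), xml))
    # deterministic shuffle so the split is reproducible
    random.Random(seed).shuffle(examples)
    n_train = int(train_fraction * len(examples))
    return examples[:n_train], examples[n_train:]
```

The deterministic shuffle matters: without a fixed seed, rerunning the conversion would move examples between the train and validation sets.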
......
@@ -10,7 +10,7 @@ dependencies, compiling the configuration protobufs and setting up the Python
environment.
2. A valid data set has been created. See [this page](preparing_inputs.md) for
instructions on how to generate a dataset for the PASCAL VOC challenge or the
Oxford-IIIT Pet dataset.
3. An Object Detection pipeline configuration has been written. See
[this page](configuring_jobs.md) for details on how to write a pipeline configuration.
......
@@ -11,7 +11,7 @@ See [the Cloud ML quick start guide](https://cloud.google.com/ml-engine/docs/qui
in the [installation instructions](installation.md).
3. The reader has a valid data set and has stored it in a Google Cloud Storage
bucket. See [this page](preparing_inputs.md) for instructions on how to generate
a dataset for the PASCAL VOC challenge or the Oxford-IIIT Pet dataset.
4. The reader has configured a valid Object Detection pipeline and stored it
in a Google Cloud Storage bucket. See [this page](configuring_jobs.md) for
details on how to write a pipeline configuration.
......
# Quick Start: Distributed Training on the Oxford-IIIT Pets Dataset on Google Cloud
This page is a walkthrough for training an object detector using the Tensorflow
Object Detection API. In this tutorial, we'll be training on the Oxford-IIIT Pets
dataset to build a system to detect various breeds of cats and dogs. The output
of the detector will look like the following:
@@ -43,11 +43,11 @@ Please run through the [installation instructions](installation.md) to install
Tensorflow and all its dependencies. Ensure the Protobuf libraries are
compiled and the library directories are added to `PYTHONPATH`.
## Getting the Oxford-IIIT Pets Dataset and Uploading it to Google Cloud Storage
In order to train a detector, we require a dataset of images, bounding boxes and
classifications. For this demo, we'll use the Oxford-IIIT Pets dataset. The raw
dataset for Oxford-IIIT Pets lives
[here](http://www.robots.ox.ac.uk/~vgg/data/pets/). You will need to download
both the image dataset [`images.tar.gz`](http://www.robots.ox.ac.uk/~vgg/data/pets/data/images.tar.gz)
and the groundtruth data [`annotations.tar.gz`](http://www.robots.ox.ac.uk/~vgg/data/pets/data/annotations.tar.gz)
@@ -65,7 +65,7 @@ the tarballs, your object_detection directory should appear as follows:
The Tensorflow Object Detection API expects data to be in the TFRecord format,
so we'll now run the _create_pet_tf_record_ script to convert from the raw
Oxford-IIIT Pet dataset into TFRecords. Run the following commands from the
object_detection directory:
``` bash
......
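A TFRecord file is a simple framed container: each record is a little-endian uint64 payload length, a 4-byte CRC of the length, the payload bytes, and a 4-byte CRC of the payload. As a quick sanity check on files produced by the conversion step above, that framing can be walked in pure Python. `count_tfrecords` is an illustrative helper, not part of the API, and it skips CRC verification entirely.

```python
import struct

def count_tfrecords(path):
    """Count records in a TFRecord file by walking its framing:
    <uint64 length><4-byte crc><payload><4-byte crc> per record.
    The CRCs are skipped, not verified."""
    count = 0
    with open(path, "rb") as f:
        while True:
            header = f.read(8)
            if len(header) < 8:
                break  # clean end of file (or a truncated final record)
            (length,) = struct.unpack("<Q", header)
            f.seek(4 + length + 4, 1)  # skip length CRC, payload, payload CRC
            count += 1
    return count
```

Comparing the count against the number of annotated images is a cheap way to catch a conversion run that silently dropped examples.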
# Faster R-CNN with Inception Resnet v2, Atrous version;
# Configured for Oxford-IIIT Pets Dataset.
# Users should configure the fine_tune_checkpoint field in the train config as
# well as the label_map_path and input_path fields in the train_input_reader and
# eval_input_reader. Search for "PATH_TO_BE_CONFIGURED" to find the fields that
......
# Faster R-CNN with Resnet-101 (v1) configured for the Oxford-IIIT Pet Dataset.
# Users should configure the fine_tune_checkpoint field in the train config as
# well as the label_map_path and input_path fields in the train_input_reader and
# eval_input_reader. Search for "PATH_TO_BE_CONFIGURED" to find the fields that
......
# Faster R-CNN with Resnet-152 (v1), configured for Oxford-IIIT Pets Dataset.
# Users should configure the fine_tune_checkpoint field in the train config as
# well as the label_map_path and input_path fields in the train_input_reader and
# eval_input_reader. Search for "PATH_TO_BE_CONFIGURED" to find the fields that
......
# Faster R-CNN with Resnet-50 (v1), configured for Oxford-IIIT Pets Dataset.
# Users should configure the fine_tune_checkpoint field in the train config as
# well as the label_map_path and input_path fields in the train_input_reader and
# eval_input_reader. Search for "PATH_TO_BE_CONFIGURED" to find the fields that
......
# R-FCN with Resnet-101 (v1), configured for Oxford-IIIT Pets Dataset.
# Users should configure the fine_tune_checkpoint field in the train config as
# well as the label_map_path and input_path fields in the train_input_reader and
# eval_input_reader. Search for "PATH_TO_BE_CONFIGURED" to find the fields that
......
# SSD with Inception v2 configured for Oxford-IIIT Pets Dataset.
# Users should configure the fine_tune_checkpoint field in the train config as
# well as the label_map_path and input_path fields in the train_input_reader and
# eval_input_reader. Search for "PATH_TO_BE_CONFIGURED" to find the fields that
......
# SSD with Mobilenet v1, configured for Oxford-IIIT Pets Dataset.
# Users should configure the fine_tune_checkpoint field in the train config as
# well as the label_map_path and input_path fields in the train_input_reader and
# eval_input_reader. Search for "PATH_TO_BE_CONFIGURED" to find the fields that
......
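All of the config headers above point at the same placeholder fields. A hedged sketch of the relevant fragments of such a pipeline config, with the placeholders in place, looks roughly like this (field names follow the Object Detection API's pipeline schema as I understand it; verify them against the sample configs shipped with the repository):

```
train_config {
  fine_tune_checkpoint: "PATH_TO_BE_CONFIGURED/model.ckpt"
}
train_input_reader {
  label_map_path: "PATH_TO_BE_CONFIGURED/pet_label_map.pbtxt"
  tf_record_input_reader {
    input_path: "PATH_TO_BE_CONFIGURED/pet_train.record"
  }
}
eval_input_reader {
  label_map_path: "PATH_TO_BE_CONFIGURED/pet_label_map.pbtxt"
  tf_record_input_reader {
    input_path: "PATH_TO_BE_CONFIGURED/pet_val.record"
  }
}
```

Replacing every `PATH_TO_BE_CONFIGURED` with a concrete path (local or `gs://` bucket) is the only edit these sample configs require before training.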