ModelZoo / ResNet50_tensorflow · Commit 477ed41e

Replace Oxford-IIT by Oxford-IIIT. (#1708)

Authored Jun 20, 2017 by Jonathan Huang; committed by Sergio Guadarrama, Jun 20, 2017.
Parent: c4ba26b4
11 changed files with 18 additions and 18 deletions (+18 -18):

object_detection/g3doc/preparing_inputs.md                                          +3 -3
object_detection/g3doc/running_locally.md                                           +1 -1
object_detection/g3doc/running_on_cloud.md                                          +1 -1
object_detection/g3doc/running_pets.md                                              +6 -6
object_detection/samples/configs/faster_rcnn_inception_resnet_v2_atrous_pets.config +1 -1
object_detection/samples/configs/faster_rcnn_resnet101_pets.config                  +1 -1
object_detection/samples/configs/faster_rcnn_resnet152_pets.config                  +1 -1
object_detection/samples/configs/faster_rcnn_resnet50_pets.config                   +1 -1
object_detection/samples/configs/rfcn_resnet101_pets.config                         +1 -1
object_detection/samples/configs/ssd_inception_v2_pets.config                       +1 -1
object_detection/samples/configs/ssd_mobilenet_v1_pets.config                       +1 -1
object_detection/g3doc/preparing_inputs.md

@@ -2,7 +2,7 @@
 Tensorflow Object Detection API reads data using the TFRecord file format. Two
 sample scripts (`create_pascal_tf_record.py` and `create_pet_tf_record.py`) are
-provided to convert from the PASCAL VOC dataset and Oxford-IIT Pet dataset to
+provided to convert from the PASCAL VOC dataset and Oxford-IIIT Pet dataset to
 TFRecords.

 ## Generating the PASCAL VOC TFRecord files.

@@ -26,9 +26,9 @@ pascal_val.record in the tensorflow/models/object_detection directory.
 The label map for the PASCAL VOC data set can be found at
 data/pascal_label_map.pbtxt.

-## Generation the Oxford-IIT Pet TFRecord files.
+## Generation the Oxford-IIIT Pet TFRecord files.

-The Oxford-IIT Pet data set can be downloaded from
+The Oxford-IIIT Pet data set can be downloaded from
 [their website](http://www.robots.ox.ac.uk/~vgg/data/pets/). Extract the tar
 file and run the `create_pet_tf_record` script to generate TFRecords.
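The conversion step described in this hunk ("run the `create_pet_tf_record` script") can be sketched as a command line. The flag names below follow the script's documented usage, but treat them as assumptions and check the real script's `--help`; a tiny stub stands in for `object_detection/create_pet_tf_record.py` here so the sketch runs without the TensorFlow models repo installed.

```shell
# Stand-in for object_detection/create_pet_tf_record.py so this sketch is
# runnable anywhere; flag names mirror the documented usage (assumption).
cat > /tmp/create_pet_tf_record.py <<'EOF'
import argparse
p = argparse.ArgumentParser()
p.add_argument('--label_map_path')
p.add_argument('--data_dir')
p.add_argument('--output_dir')
a = p.parse_args()
print('converting %s -> TFRecords in %s' % (a.data_dir, a.output_dir))
EOF

# Shape of the documented invocation: point the script at the extracted
# dataset directory and a label map, and pick an output directory.
python3 /tmp/create_pet_tf_record.py \
    --label_map_path=data/pet_label_map.pbtxt \
    --data_dir=/tmp/pets \
    --output_dir=/tmp/pets
```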
object_detection/g3doc/running_locally.md

@@ -10,7 +10,7 @@ dependencies, compiling the configuration protobufs and setting up the Python
 environment.
 2. A valid data set has been created. See [this page](preparing_inputs.md) for
 instructions on how to generate a dataset for the PASCAL VOC challenge or the
-Oxford-IIT Pet dataset.
+Oxford-IIIT Pet dataset.
 3. A Object Detection pipeline configuration has been written. See
 [this page](configuring_jobs.md) for details on how to write a pipeline configuration.
object_detection/g3doc/running_on_cloud.md

@@ -11,7 +11,7 @@ See [the Cloud ML quick start guide](https://cloud.google.com/ml-engine/docs/qui
 in the [installation instructions](installation.md).
 3. The reader has a valid data set and stored it in a Google Cloud Storage
 bucket. See [this page](preparing_inputs.md) for instructions on how to generate
-a dataset for the PASCAL VOC challenge or the Oxford-IIT Pet dataset.
+a dataset for the PASCAL VOC challenge or the Oxford-IIIT Pet dataset.
 4. The reader has configured a valid Object Detection pipeline, and stored it
 in a Google Cloud Storage bucket. See [this page](configuring_jobs.md) for
 details on how to write a pipeline configuration.
object_detection/g3doc/running_pets.md

-# Quick Start: Distributed Training on the Oxford-IIT Pets Dataset on Google Cloud
+# Quick Start: Distributed Training on the Oxford-IIIT Pets Dataset on Google Cloud

 This page is a walkthrough for training an object detector using the Tensorflow
-Object Detection API. In this tutorial, we'll be training on the Oxford-IIT Pets
+Object Detection API. In this tutorial, we'll be training on the Oxford-IIIT Pets
 dataset to build a system to detect various breeds of cats and dogs. The output
 of the detector will look like the following:

@@ -43,11 +43,11 @@ Please run through the [installation instructions](installation.md) to install
 Tensorflow and all it dependencies. Ensure the Protobuf libraries are
 compiled and the library directories are added to `PYTHONPATH`.

-## Getting the Oxford-IIT Pets Dataset and Uploading it to Google Cloud Storage
+## Getting the Oxford-IIIT Pets Dataset and Uploading it to Google Cloud Storage

 In order to train a detector, we require a dataset of images, bounding boxes and
-classifications. For this demo, we'll use the Oxford-IIT Pets dataset. The raw
-dataset for Oxford-IIT Pets lives
+classifications. For this demo, we'll use the Oxford-IIIT Pets dataset. The raw
+dataset for Oxford-IIIT Pets lives
 [here](http://www.robots.ox.ac.uk/~vgg/data/pets/). You will need to download
 both the image dataset [`images.tar.gz`](http://www.robots.ox.ac.uk/~vgg/data/pets/data/images.tar.gz)
 and the groundtruth data [`annotations.tar.gz`](http://www.robots.ox.ac.uk/~vgg/data/pets/data/annotations.tar.gz)

@@ -65,7 +65,7 @@ the tarballs, your object_detection directory should appear as follows:
 The Tensorflow Object Detection API expects data to be in the TFRecord format,
 so we'll now run the _create_pet_tf_record_ script to convert from the raw
-Oxford-IIT Pet dataset into TFRecords. Run the following commands from the
+Oxford-IIIT Pet dataset into TFRecords. Run the following commands from the
 object_detection directory:

 ```bash
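The download-and-extract step in this walkthrough can be sketched end to end. The real data comes from the `images.tar.gz` and `annotations.tar.gz` URLs shown in the diff; the sketch below builds tiny local stand-in tarballs (one fake image, one fake annotation file) so it runs offline, but the extraction commands and the resulting side-by-side `images/` and `annotations/` layout match what the tutorial describes.

```shell
# Build tiny stand-ins for images.tar.gz and annotations.tar.gz (the real
# tarballs come from the Oxford-IIIT Pets site linked in the diff).
mkdir -p /tmp/pets_src/images /tmp/pets_src/annotations
touch /tmp/pets_src/images/Abyssinian_1.jpg
touch /tmp/pets_src/annotations/list.txt
tar -czf /tmp/images.tar.gz -C /tmp/pets_src images
tar -czf /tmp/annotations.tar.gz -C /tmp/pets_src annotations

# The tutorial's actual step: extract both tarballs inside object_detection/,
# leaving images/ and annotations/ next to each other.
mkdir -p /tmp/object_detection
tar -xzf /tmp/images.tar.gz -C /tmp/object_detection
tar -xzf /tmp/annotations.tar.gz -C /tmp/object_detection
ls /tmp/object_detection
```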
object_detection/samples/configs/faster_rcnn_inception_resnet_v2_atrous_pets.config

 # Faster R-CNN with Inception Resnet v2, Atrous version;
-# Configured for Oxford-IIT Pets Dataset.
+# Configured for Oxford-IIIT Pets Dataset.
 # Users should configure the fine_tune_checkpoint field in the train config as
 # well as the label_map_path and input_path fields in the train_input_reader and
 # eval_input_reader. Search for "PATH_TO_BE_CONFIGURED" to find the fields that
object_detection/samples/configs/faster_rcnn_resnet101_pets.config

-# Faster R-CNN with Resnet-101 (v1) configured for the Oxford-IIT Pet Dataset.
+# Faster R-CNN with Resnet-101 (v1) configured for the Oxford-IIIT Pet Dataset.
 # Users should configure the fine_tune_checkpoint field in the train config as
 # well as the label_map_path and input_path fields in the train_input_reader and
 # eval_input_reader. Search for "PATH_TO_BE_CONFIGURED" to find the fields that
object_detection/samples/configs/faster_rcnn_resnet152_pets.config

-# Faster R-CNN with Resnet-152 (v1), configured for Oxford-IIT Pets Dataset.
+# Faster R-CNN with Resnet-152 (v1), configured for Oxford-IIIT Pets Dataset.
 # Users should configure the fine_tune_checkpoint field in the train config as
 # well as the label_map_path and input_path fields in the train_input_reader and
 # eval_input_reader. Search for "PATH_TO_BE_CONFIGURED" to find the fields that
object_detection/samples/configs/faster_rcnn_resnet50_pets.config

-# Faster R-CNN with Resnet-50 (v1), configured for Oxford-IIT Pets Dataset.
+# Faster R-CNN with Resnet-50 (v1), configured for Oxford-IIIT Pets Dataset.
 # Users should configure the fine_tune_checkpoint field in the train config as
 # well as the label_map_path and input_path fields in the train_input_reader and
 # eval_input_reader. Search for "PATH_TO_BE_CONFIGURED" to find the fields that
object_detection/samples/configs/rfcn_resnet101_pets.config

-# R-FCN with Resnet-101 (v1), configured for Oxford-IIT Pets Dataset.
+# R-FCN with Resnet-101 (v1), configured for Oxford-IIIT Pets Dataset.
 # Users should configure the fine_tune_checkpoint field in the train config as
 # well as the label_map_path and input_path fields in the train_input_reader and
 # eval_input_reader. Search for "PATH_TO_BE_CONFIGURED" to find the fields that
object_detection/samples/configs/ssd_inception_v2_pets.config

-# SSD with Inception v2 configured for Oxford-IIT Pets Dataset.
+# SSD with Inception v2 configured for Oxford-IIIT Pets Dataset.
 # Users should configure the fine_tune_checkpoint field in the train config as
 # well as the label_map_path and input_path fields in the train_input_reader and
 # eval_input_reader. Search for "PATH_TO_BE_CONFIGURED" to find the fields that
object_detection/samples/configs/ssd_mobilenet_v1_pets.config

-# SSD with Mobilenet v1, configured for Oxford-IIT Pets Dataset.
+# SSD with Mobilenet v1, configured for Oxford-IIIT Pets Dataset.
 # Users should configure the fine_tune_checkpoint field in the train config as
 # well as the label_map_path and input_path fields in the train_input_reader and
 # eval_input_reader. Search for "PATH_TO_BE_CONFIGURED" to find the fields that
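All seven config files above carry the same `PATH_TO_BE_CONFIGURED` placeholders their header comments mention. One common way to fill them is a single `sed` pass. The snippet below demonstrates this on a hypothetical two-line stand-in for a pets config; the field names (`fine_tune_checkpoint`, `label_map_path`) come from the comments in the diff, while the file names and target path are made up for illustration.

```shell
# Two fields standing in for a full pets config; the real files use the same
# PATH_TO_BE_CONFIGURED placeholder wherever a user-specific path is needed.
cat > /tmp/pets_demo.config <<'EOF'
fine_tune_checkpoint: "PATH_TO_BE_CONFIGURED/model.ckpt"
label_map_path: "PATH_TO_BE_CONFIGURED/pet_label_map.pbtxt"
EOF

# Rewrite every placeholder to a concrete directory in one pass; '|' is used
# as the s-command delimiter so the substitute path may contain slashes.
sed -i 's|PATH_TO_BE_CONFIGURED|/home/user/pets|g' /tmp/pets_demo.config

cat /tmp/pets_demo.config
```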