ModelZoo / ResNet50_tensorflow / Commits

Commit dac4fbab
Authored Jun 27, 2017 by Jonathan Huang; committed by GitHub, Jun 27, 2017
Parents: e648b94a, c9bb58f2

    Merge pull request #1774 from korrawat/obj_detect_typo

    Fix typos/style in object_detection's markdown docs
Changes: 2 changed files, with 33 additions and 33 deletions

  object_detection/g3doc/preparing_inputs.md   +9   -9
  object_detection/g3doc/running_pets.md       +24  -24
object_detection/g3doc/preparing_inputs.md
@@ -11,20 +11,20 @@ The raw 2012 PASCAL VOC data set can be downloaded
 [here](http://host.robots.ox.ac.uk/pascal/VOC/voc2012/VOCtrainval_11-May-2012.tar).
 Extract the tar file and run the `create_pascal_tf_record` script:
-```
+```bash
 # From tensorflow/models/object_detection
 tar -xvf VOCtrainval_11-May-2012.tar
 python create_pascal_tf_record.py --data_dir=VOCdevkit \
     --year=VOC2012 --set=train --output_path=pascal_train.record
-python create_pascal_tf_record.py --data_dir=/home/user/VOCdevkit \
+python create_pascal_tf_record.py --data_dir=VOCdevkit \
     --year=VOC2012 --set=val --output_path=pascal_val.record
 ```
 
-You should end up with two TFRecord files named pascal_train.record and
-pascal_val.record in the tensorflow/models/object_detection directory.
+You should end up with two TFRecord files named `pascal_train.record` and
+`pascal_val.record` in the `tensorflow/models/object_detection` directory.
 
 The label map for the PASCAL VOC data set can be found at
-data/pascal_label_map.pbtxt.
+`data/pascal_label_map.pbtxt`.
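The hunk above assumes the tarball is already on disk. A minimal sketch of the missing fetch step, using the URL from the doc itself (the choice of `wget` is an assumption, any downloader works):

```bash
# Fetch the raw PASCAL VOC 2012 tarball before extracting it.
wget http://host.robots.ox.ac.uk/pascal/VOC/voc2012/VOCtrainval_11-May-2012.tar
```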
 
 ## Generation the Oxford-IIIT Pet TFRecord files.
@@ -32,14 +32,14 @@ The Oxford-IIIT Pet data set can be downloaded from
 [their website](http://www.robots.ox.ac.uk/~vgg/data/pets/). Extract the tar
 file and run the `create_pet_tf_record` script to generate TFRecords.
-```
+```bash
 # From tensorflow/models/object_detection
 tar -xvf annotations.tar.gz
 tar -xvf images.tar.gz
 python create_pet_tf_record.py --data_dir=`pwd` --output_dir=`pwd`
 ```
 
-You should end up with two TFRecord files named pet_train.record and
-pet_val.record in the tensorflow/models/object_detection directory.
+You should end up with two TFRecord files named `pet_train.record` and
+`pet_val.record` in the `tensorflow/models/object_detection` directory.
 
-The label map for the Pet dataset can be found at data/pet_label_map.pbtxt.
+The label map for the Pet dataset can be found at `data/pet_label_map.pbtxt`.
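A quick, hedged way to confirm the script produced what the doc promises, assuming you ran it from `tensorflow/models/object_detection` (`tf.python_io.tf_record_iterator` is the TF 1.x record reader):

```bash
# Verify the two TFRecord files exist and are non-empty.
ls -lh pet_train.record pet_val.record
# Count serialized examples in the training set (TF 1.x API).
python -c "import tensorflow as tf; print(sum(1 for _ in tf.python_io.tf_record_iterator('pet_train.record')))"
```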
object_detection/g3doc/running_pets.md
@@ -51,8 +51,8 @@ dataset for Oxford-IIIT Pets lives
 [here](http://www.robots.ox.ac.uk/~vgg/data/pets/). You will need to download
 both the image dataset [`images.tar.gz`](http://www.robots.ox.ac.uk/~vgg/data/pets/data/images.tar.gz)
 and the groundtruth data [`annotations.tar.gz`](http://www.robots.ox.ac.uk/~vgg/data/pets/data/annotations.tar.gz)
-to the tensorflow/models directory. This may take some time. After downloading
-the tarballs, your object_detection directory should appear as follows:
+to the `tensorflow/models` directory. This may take some time. After downloading
+the tarballs, your `object_detection` directory should appear as follows:
 
 ```lang-none
 + object_detection/
@@ -64,9 +64,9 @@ the tarballs, your object_detection directory should appear as follows:
 ```
 
 The Tensorflow Object Detection API expects data to be in the TFRecord format,
-so we'll now run the _create_pet_tf_record_ script to convert from the raw
+so we'll now run the `create_pet_tf_record` script to convert from the raw
 Oxford-IIIT Pet dataset into TFRecords. Run the following commands from the
-object_detection directory:
+`object_detection` directory:
 
 ```bash
 # From tensorflow/models/
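The command block opened above is collapsed in this view; a plausible sketch of the invocation, with flags inferred from `preparing_inputs.md` and the next hunk header (the `--label_map_path` flag is my assumption):

```bash
# From tensorflow/models/ (sketch; the tutorial's exact flags are collapsed above)
python object_detection/create_pet_tf_record.py \
    --label_map_path=object_detection/data/pet_label_map.pbtxt \
    --data_dir=`pwd` \
    --output_dir=`pwd`
```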
@@ -83,12 +83,12 @@ python object_detection/create_pet_tf_record.py \
 Note: It is normal to see some warnings when running this script. You may ignore
 them.
 
-Two TFRecord files named pet_train.record and pet_val.record should be generated
-in the object_detection/ directory.
+Two TFRecord files named `pet_train.record` and `pet_val.record` should be generated
+in the `object_detection` directory.
 
 Now that the data has been generated, we'll need to upload it to Google Cloud
 Storage so the data can be accessed by ML Engine. Run the following command to
-copy the files into your GCS bucket (substituting ${YOUR_GCS_BUCKET}):
+copy the files into your GCS bucket (substituting `${YOUR_GCS_BUCKET}`):
 
 ```bash
 # From tensorflow/models/
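The copy command itself is collapsed in this view; a minimal sketch, assuming the records and label map sit where the previous steps left them:

```bash
# From tensorflow/models/ (sketch): push the generated data into the bucket.
gsutil cp pet_train.record pet_val.record gs://${YOUR_GCS_BUCKET}/data/
gsutil cp object_detection/data/pet_label_map.pbtxt gs://${YOUR_GCS_BUCKET}/data/
```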
@@ -109,7 +109,7 @@ parameters to initialize our new model.
 Download our [COCO-pretrained Faster R-CNN with Resnet-101
 model](http://storage.googleapis.com/download.tensorflow.org/models/object_detection/faster_rcnn_resnet101_coco_11_06_2017.tar.gz).
-Unzip the contents of the folder and copy the model.ckpt* files into your GCS
+Unzip the contents of the folder and copy the `model.ckpt*` files into your GCS
 Bucket.
 
 ```bash
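The download-and-copy commands are collapsed above; a hedged sketch, where the name of the extracted folder is my assumption based on the tarball name:

```bash
# Sketch: fetch the COCO-pretrained model, unpack it, and stage the
# checkpoint files in the bucket.
wget http://storage.googleapis.com/download.tensorflow.org/models/object_detection/faster_rcnn_resnet101_coco_11_06_2017.tar.gz
tar -xvf faster_rcnn_resnet101_coco_11_06_2017.tar.gz
gsutil cp faster_rcnn_resnet101_coco_11_06_2017/model.ckpt* \
    gs://${YOUR_GCS_BUCKET}/data/
```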
@@ -127,14 +127,14 @@ In the Tensorflow Object Detection API, the model parameters, training
 parameters and eval parameters are all defined by a config file. More details
 can be found [here](configuring_jobs.md). For this tutorial, we will use some
 predefined templates provided with the source code. In the
-object_detection/samples/configs folder, there are skeleton object_detection
+`object_detection/samples/configs` folder, there are skeleton object_detection
 configuration files. We will use `faster_rcnn_resnet101_pets.config` as a
 starting point for configuring the pipeline. Open the file with your favourite
 text editor.
 
 We'll need to configure some paths in order for the template to work. Search the
 file for instances of `PATH_TO_BE_CONFIGURED` and replace them with the
-appropriate value (typically "gs://${YOUR_GCS_BUCKET}/data/"). Afterwards
+appropriate value (typically `gs://${YOUR_GCS_BUCKET}/data/`). Afterwards
 upload your edited file onto GCS, making note of the path it was uploaded to
 (we'll need it when starting the training/eval jobs).
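A minimal sketch of that search-and-replace plus upload; the use of `sed` is my choice, the doc only says to edit the file (no trailing slash in the replacement, since the template typically appends filenames itself):

```bash
# Replace every PATH_TO_BE_CONFIGURED with the bucket data path in one pass.
sed -i "s|PATH_TO_BE_CONFIGURED|gs://${YOUR_GCS_BUCKET}/data|g" \
    object_detection/samples/configs/faster_rcnn_resnet101_pets.config
# Upload the edited config and note its GCS path for the training/eval jobs.
gsutil cp object_detection/samples/configs/faster_rcnn_resnet101_pets.config \
    gs://${YOUR_GCS_BUCKET}/data/faster_rcnn_resnet101_pets.config
```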
@@ -171,7 +171,7 @@ the following:
 ```
 
 You can inspect your bucket using the [Google Cloud Storage
-browser](pantheon.corp.google.com/storage).
+browser](https://console.cloud.google.com/storage/browser).
 
 ## Starting Training and Evaluation Jobs on Google Cloud ML Engine
@@ -181,7 +181,7 @@ Before we can start a job on Google Cloud ML Engine, we must:
 2. Write a cluster configuration for our Google Cloud ML job.
 
 To package the Tensorflow Object Detection code, run the following commands from
-the tensorflow/models/ directory:
+the `tensorflow/models/` directory:
 
 ```bash
 # From tensorflow/models/
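The packaging commands are collapsed above; judging from the next hunk header they produce `dist/object_detection-0.1.tar.gz` and `slim/dist/slim-0.1.tar.gz`, which a sketch like this would yield (it assumes both `setup.py` files exist at these locations):

```bash
# From tensorflow/models/ (sketch): build the two sdist packages ML Engine needs.
python setup.py sdist
(cd slim && python setup.py sdist)
```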
@@ -194,9 +194,9 @@ and `slim/dist/slim-0.1.tar.gz`.
 For running the training Cloud ML job, we'll configure the cluster to use 10
 training jobs (1 master + 9 workers) and three parameters servers. The
-configuration file can be found at object_detection/samples/cloud/cloud.yml.
+configuration file can be found at `object_detection/samples/cloud/cloud.yml`.
 
-To start training, execute the following command from the tensorflow/models/
+To start training, execute the following command from the `tensorflow/models/`
 directory:
 
 ```bash
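The actual `gcloud` invocation is collapsed above; one plausible shape follows, with the job name and region as my assumptions (the `submit training` fragment in a later hunk header confirms the command family):

```bash
# From tensorflow/models/ (sketch): submit the distributed training job.
gcloud ml-engine jobs submit training object_detection_`date +%s` \
    --job-dir=gs://${YOUR_GCS_BUCKET}/train \
    --packages dist/object_detection-0.1.tar.gz,slim/dist/slim-0.1.tar.gz \
    --module-name object_detection.train \
    --region us-central1 \
    --config object_detection/samples/cloud/cloud.yml \
    -- \
    --train_dir=gs://${YOUR_GCS_BUCKET}/train \
    --pipeline_config_path=gs://${YOUR_GCS_BUCKET}/data/faster_rcnn_resnet101_pets.config
```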
@@ -233,7 +233,7 @@ submit training` command is correct. ML Engine does not distinguish between
 training and evaluation jobs.
 
 Users can monitor and stop training and evaluation jobs on the [ML Engine
-Dasboard](https://console.cloud.google.com/mlengine/jobs).
+Dashboard](https://console.cloud.google.com/mlengine/jobs).
 
 ## Monitoring Progress with Tensorboard
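Beyond the Dashboard, jobs and training curves can also be watched from the command line; a hedged sketch (the bucket path is assumed, and TensorBoard must be able to read from GCS):

```bash
# List running jobs, and cancel one if needed.
gcloud ml-engine jobs list
gcloud ml-engine jobs cancel ${JOB_ID}
# Point TensorBoard at the bucket to watch loss and eval metrics.
tensorboard --logdir=gs://${YOUR_GCS_BUCKET}
```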
@@ -263,35 +263,35 @@ Note: It takes roughly 10 minutes for a job to get started on ML Engine, and
 roughly an hour for the system to evaluate the validation dataset. It may take
 some time to populate the dashboards. If you do not see any entries after half
 an hour, check the logs from the [ML Engine
-Dasboard](https://pantheon.corp.google.com/mlengine/jobs).
+Dashboard](https://console.cloud.google.com/mlengine/jobs).
 
 ## Exporting the Tensorflow Graph
 
 After your model has been trained, you should export it to a Tensorflow
 graph proto. First, you need to identify a candidate checkpoint to export. You
 can search your bucket using the [Google Cloud Storage
-Browser](https://pantheon.corp.google.com/storage/browser). The file should be
-stored under ${YOUR_GCS_BUCKET}/train. The checkpoint will typically consist of
+Browser](https://console.cloud.google.com/storage/browser). The file should be
+stored under `${YOUR_GCS_BUCKET}/train`. The checkpoint will typically consist of
 three files:
-* model.ckpt-${CHECKPOINT_NUMBER}.data-00000-of-00001,
-* model.ckpt-${CHECKPOINT_NUMBER}.index
-* model.ckpt-${CHECKPOINT_NUMBER}.meta
+* `model.ckpt-${CHECKPOINT_NUMBER}.data-00000-of-00001`
+* `model.ckpt-${CHECKPOINT_NUMBER}.index`
+* `model.ckpt-${CHECKPOINT_NUMBER}.meta`
 
 After you've identified a candidate checkpoint to export, run the following
-command from tensorflow/models/object_detection:
+command from `tensorflow/models/object_detection`:
 
 ```bash
 # From tensorflow/models
 gsutil cp gs://${YOUR_GCS_BUCKET}/train/model.ckpt-${CHECKPOINT_NUMBER}.* .
-python object_detection/export_inference_graph \
+python object_detection/export_inference_graph.py \
     --input_type image_tensor \
     --pipeline_config_path object_detection/samples/configs/faster_rcnn_resnet101_pets.config \
     --checkpoint_path model.ckpt-${CHECKPOINT_NUMBER} \
     --inference_graph_path output_inference_graph.pb
 ```
 
-Afterwards, you should see a graph named output_inference_graph.pb.
+Afterwards, you should see a graph named `output_inference_graph.pb`.
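A hedged sketch for picking `${CHECKPOINT_NUMBER}` before running the export above: list what training has written to the bucket and choose the newest step.

```bash
# List candidate checkpoints in the training directory (sketch).
gsutil ls gs://${YOUR_GCS_BUCKET}/train/model.ckpt-*
```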
 
 ## What's Next