Commit 801f892a authored by James Pruegsanusak

Fix typos and inline code style

parent 1e093b26
@@ -64,7 +64,7 @@ the tarballs, your object_detection directory should appear as follows:
```
The Tensorflow Object Detection API expects data to be in the TFRecord format,
so we'll now run the `create_pet_tf_record` script to convert from the raw
Oxford-IIIT Pet dataset into TFRecords. Run the following commands from the
`tensorflow/models` directory:
@@ -83,12 +83,12 @@ python object_detection/create_pet_tf_record.py \
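
For reference, a minimal sketch of the full conversion command, assuming the
script takes `--label_map_path`, `--data_dir`, and `--output_dir` flags:

``` bash
# From tensorflow/models/
# Convert the raw Oxford-IIIT Pet images and annotations into TFRecords
# (flag names and label map path are assumptions).
python object_detection/create_pet_tf_record.py \
    --label_map_path=object_detection/data/pet_label_map.pbtxt \
    --data_dir=`pwd` \
    --output_dir=`pwd`
```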
Note: It is normal to see some warnings when running this script. You may
ignore them.

Two TFRecord files named `pet_train.record` and `pet_val.record` should be
generated in the `object_detection/` directory.

Now that the data has been generated, we'll need to upload it to Google Cloud
Storage so the data can be accessed by ML Engine. Run the following command to
copy the files into your GCS bucket (substituting `${YOUR_GCS_BUCKET}`):
``` bash
# From tensorflow/models/
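# A sketch of the upload itself (exact record paths assumed from the previous
# step; gsutil must be installed and authenticated):
gsutil cp object_detection/pet_train.record gs://${YOUR_GCS_BUCKET}/data/pet_train.record
gsutil cp object_detection/pet_val.record gs://${YOUR_GCS_BUCKET}/data/pet_val.record
```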
@@ -109,7 +109,7 @@ parameters to initialize our new model.
Download our [COCO-pretrained Faster R-CNN with Resnet-101
model](http://storage.googleapis.com/download.tensorflow.org/models/object_detection/faster_rcnn_resnet101_coco_11_06_2017.tar.gz).
Unpack the tarball and copy the `model.ckpt*` files into your GCS
bucket.
``` bash
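# A sketch of this step (command sequence assumed; the URL is the one linked
# above):
wget http://storage.googleapis.com/download.tensorflow.org/models/object_detection/faster_rcnn_resnet101_coco_11_06_2017.tar.gz
tar -xvf faster_rcnn_resnet101_coco_11_06_2017.tar.gz
gsutil cp faster_rcnn_resnet101_coco_11_06_2017/model.ckpt* gs://${YOUR_GCS_BUCKET}/data/
```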
@@ -134,7 +134,7 @@ text editor.
We'll need to configure some paths in order for the template to work. Search the
file for instances of `PATH_TO_BE_CONFIGURED` and replace them with the
appropriate value (typically `gs://${YOUR_GCS_BUCKET}/data/`). Afterwards,
upload your edited file to GCS, making note of the path it was uploaded to
(we'll need it when starting the training/eval jobs).
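
A sketch of the substitution and upload, assuming the sample config for this
model is named `faster_rcnn_resnet101_pets.config` (the filename is an
assumption):

``` bash
# Point every placeholder at the data directory in the bucket
# (adjust the trailing slash to match how the placeholder is used).
sed -i "s|PATH_TO_BE_CONFIGURED|gs://${YOUR_GCS_BUCKET}/data|g" \
    object_detection/samples/configs/faster_rcnn_resnet101_pets.config

# Upload the edited config alongside the data.
gsutil cp object_detection/samples/configs/faster_rcnn_resnet101_pets.config \
    gs://${YOUR_GCS_BUCKET}/data/faster_rcnn_resnet101_pets.config
```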
@@ -171,7 +171,7 @@ the following:
```
You can inspect your bucket using the [Google Cloud Storage
browser](https://console.cloud.google.com/storage/browser).

## Starting Training and Evaluation Jobs on Google Cloud ML Engine
@@ -194,7 +194,7 @@ and `slim/dist/slim-0.1.tar.gz`.
For running the training Cloud ML job, we'll configure the cluster to use 10
training machines (1 master + 9 workers) and three parameter servers. The
configuration file can be found at `object_detection/samples/cloud/cloud.yml`.

To start training, execute the following command from the `tensorflow/models/`
directory:
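
A sketch of the submission, assuming the training entry point is the
`object_detection.train` module and the region is `us-central1` (both
assumptions), with the two package tarballs built earlier:

``` bash
gcloud ml-engine jobs submit training `whoami`_object_detection_`date +%s` \
    --job-dir=gs://${YOUR_GCS_BUCKET}/train \
    --packages dist/object_detection-0.1.tar.gz,slim/dist/slim-0.1.tar.gz \
    --module-name object_detection.train \
    --region us-central1 \
    --config object_detection/samples/cloud/cloud.yml \
    -- \
    --train_dir=gs://${YOUR_GCS_BUCKET}/train \
    --pipeline_config_path=gs://${YOUR_GCS_BUCKET}/data/faster_rcnn_resnet101_pets.config
```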
@@ -233,7 +233,7 @@ submit training` command is correct. ML Engine does not distinguish between
training and evaluation jobs.
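
Concretely, an evaluation job can be submitted through the same `submit
training` entry point; a sketch (module name and eval directory are
assumptions):

``` bash
gcloud ml-engine jobs submit training `whoami`_object_detection_eval_`date +%s` \
    --job-dir=gs://${YOUR_GCS_BUCKET}/train \
    --packages dist/object_detection-0.1.tar.gz,slim/dist/slim-0.1.tar.gz \
    --module-name object_detection.eval \
    --region us-central1 \
    -- \
    --checkpoint_dir=gs://${YOUR_GCS_BUCKET}/train \
    --eval_dir=gs://${YOUR_GCS_BUCKET}/eval \
    --pipeline_config_path=gs://${YOUR_GCS_BUCKET}/data/faster_rcnn_resnet101_pets.config
```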
Users can monitor and stop training and evaluation jobs on the [ML Engine
Dashboard](https://console.cloud.google.com/mlengine/jobs).

## Monitoring Progress with Tensorboard
@@ -263,15 +263,15 @@ Note: It takes roughly 10 minutes for a job to get started on ML Engine, and
roughly an hour for the system to evaluate the validation dataset. It may take
some time to populate the dashboards. If you do not see any entries after half
an hour, check the logs from the [ML Engine
Dashboard](https://console.cloud.google.com/mlengine/jobs).
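
A sketch of pointing Tensorboard at the bucket (assumes Tensorboard can read
GCS paths once application-default credentials are set):

``` bash
# Make GCS readable by Tensorboard, then launch it against the bucket.
gcloud auth application-default login
tensorboard --logdir=gs://${YOUR_GCS_BUCKET}
```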

## Exporting the Tensorflow Graph
After your model has been trained, you should export it to a Tensorflow
graph proto. First, you need to identify a candidate checkpoint to export. You
can search your bucket using the [Google Cloud Storage
Browser](https://console.cloud.google.com/storage/browser). The file should be
stored under `${YOUR_GCS_BUCKET}/train`. The checkpoint will typically consist
of three files:
* `model.ckpt-${CHECKPOINT_NUMBER}.data-00000-of-00001`,
* `model.ckpt-${CHECKPOINT_NUMBER}.index`, and
* `model.ckpt-${CHECKPOINT_NUMBER}.meta`.
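
To export from a local machine, one option is to copy the candidate checkpoint
out of the bucket first; a sketch:

``` bash
# CHECKPOINT_NUMBER is whichever training step you selected above.
gsutil cp "gs://${YOUR_GCS_BUCKET}/train/model.ckpt-${CHECKPOINT_NUMBER}.*" .
```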
@@ -291,7 +291,7 @@ python object_detection/export_inference_graph \
--inference_graph_path output_inference_graph.pb
```
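
Pieced together, the full export call looks roughly like the sketch below; the
`--input_type`, `--pipeline_config_path`, and `--checkpoint_path` flags are
assumptions about the script's remaining arguments:

``` bash
# A sketch of the full export invocation (flag names other than
# --inference_graph_path are assumed).
python object_detection/export_inference_graph \
    --input_type image_tensor \
    --pipeline_config_path faster_rcnn_resnet101_pets.config \
    --checkpoint_path model.ckpt-${CHECKPOINT_NUMBER} \
    --inference_graph_path output_inference_graph.pb
```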
Afterwards, you should see a graph named `output_inference_graph.pb`.

## What's Next