Commit c0b5d11d authored by derekjchow, committed by GitHub

Merge pull request #1893 from derekjchow/master

Update directories in running_pets.md
parents fdb70c22 50e427ad
@@ -51,29 +51,35 @@ dataset for Oxford-IIIT Pets lives
 [here](http://www.robots.ox.ac.uk/~vgg/data/pets/). You will need to download
 both the image dataset [`images.tar.gz`](http://www.robots.ox.ac.uk/~vgg/data/pets/data/images.tar.gz)
 and the groundtruth data [`annotations.tar.gz`](http://www.robots.ox.ac.uk/~vgg/data/pets/data/annotations.tar.gz)
-to the `tensorflow/models` directory. This may take some time. After downloading
-the tarballs, your `object_detection` directory should appear as follows:
+to the `tensorflow/models` directory and unzip them. This may take some time.
+``` bash
+# From tensorflow/models/
+wget http://www.robots.ox.ac.uk/~vgg/data/pets/data/images.tar.gz
+wget http://www.robots.ox.ac.uk/~vgg/data/pets/data/annotations.tar.gz
+tar -xvf images.tar.gz
+tar -xvf annotations.tar.gz
+```
+After downloading the tarballs, your `tensorflow/models` directory should appear
+as follows:
 ```lang-none
+- images.tar.gz
+- annotations.tar.gz
++ images/
++ annotations/
 + object_detection/
-+ data/
-- images.tar.gz
-- annotations.tar.gz
-- create_pet_tf_record.py
-... other files and directories
+... other files and directories
 ```
 The Tensorflow Object Detection API expects data to be in the TFRecord format,
 so we'll now run the `create_pet_tf_record` script to convert from the raw
 Oxford-IIIT Pet dataset into TFRecords. Run the following commands from the
-`object_detection` directory:
+`tensorflow/models` directory:
 ``` bash
 # From tensorflow/models/
-wget http://www.robots.ox.ac.uk/~vgg/data/pets/data/images.tar.gz
-wget http://www.robots.ox.ac.uk/~vgg/data/pets/data/annotations.tar.gz
-tar -xvf annotations.tar.gz
-tar -xvf images.tar.gz
 python object_detection/create_pet_tf_record.py \
     --label_map_path=object_detection/data/pet_label_map.pbtxt \
     --data_dir=`pwd` \
@@ -84,7 +90,7 @@ Note: It is normal to see some warnings when running this script. You may ignore
 them.
 Two TFRecord files named `pet_train.record` and `pet_val.record` should be generated
-in the `object_detection` directory.
+in the `tensorflow/models` directory.
 Now that the data has been generated, we'll need to upload it to Google Cloud
 Storage so the data can be accessed by ML Engine. Run the following command to
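The upload command itself falls outside the lines shown in this hunk. As a rough sketch only (the `${YOUR_GCS_BUCKET}` path is a placeholder, not taken from the diff), copying the generated TFRecords and the pets label map to Cloud Storage with `gsutil` could look like this:

``` bash
# Sketch only: ${YOUR_GCS_BUCKET} is a placeholder for your own bucket, e.g. gs://my-pets-bucket.
export YOUR_GCS_BUCKET=gs://<your-bucket-name>

# Copy the generated TFRecords and the pets label map into a data/ folder on the bucket.
gsutil cp pet_train.record ${YOUR_GCS_BUCKET}/data/pet_train.record
gsutil cp pet_val.record ${YOUR_GCS_BUCKET}/data/pet_val.record
gsutil cp object_detection/data/pet_label_map.pbtxt ${YOUR_GCS_BUCKET}/data/pet_label_map.pbtxt
```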
@@ -279,7 +285,7 @@ three files:
 * `model.ckpt-${CHECKPOINT_NUMBER}.meta`
 After you've identified a candidate checkpoint to export, run the following
-command from `tensorflow/models/object_detection`:
+command from `tensorflow/models`:
 ``` bash
 # From tensorflow/models
 ...
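As an illustrative sketch only (the `train/` directory on the bucket and the step number are placeholders, not taken from the diff), one way to pick a `${CHECKPOINT_NUMBER}` and pull the three checkpoint files down before running the export command:

``` bash
# Sketch only: assumes training wrote checkpoints under ${YOUR_GCS_BUCKET}/train.
export YOUR_GCS_BUCKET=gs://<your-bucket-name>

# List checkpoints; the number in each filename is the global step.
gsutil ls "${YOUR_GCS_BUCKET}/train/model.ckpt-*"

# Copy the .data, .index and .meta files for the chosen step into tensorflow/models.
export CHECKPOINT_NUMBER=<chosen-step-number>
gsutil cp "${YOUR_GCS_BUCKET}/train/model.ckpt-${CHECKPOINT_NUMBER}.*" .
```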