Commit 2662da2c authored by Suharsh Sivakumar's avatar Suharsh Sivakumar

improve docs and make train flag match eval flag

parent 1abcb9c3
@@ -96,13 +96,13 @@ $ bazel build -c opt --config=cuda mobilenet_v1_{eval,train}
Train:
```
-$ ./bazel-bin/mobilenet_v1_train
+$ ./bazel-bin/mobilenet_v1_train --dataset_dir "path/to/dataset" --checkpoint_dir "path/to/checkpoints"
```
Eval:
```
-$ ./bazel-bin/mobilenet_v1_eval
+$ ./bazel-bin/mobilenet_v1_eval --dataset_dir "path/to/dataset" --checkpoint_dir "path/to/checkpoints"
```
#### Quantized Training and Eval
@@ -110,19 +110,20 @@ $ ./bazel-bin/mobilenet_v1_eval
Train from preexisting float checkpoint:
```
-$ ./bazel-bin/mobilenet_v1_train --quantize=True --fine_tune_checkpoint=checkpoint-name
+$ ./bazel-bin/mobilenet_v1_train --dataset_dir "path/to/dataset" --checkpoint_dir "path/to/checkpoints" \
+    --quantize=True --fine_tune_checkpoint=float/checkpoint/path
```
Train from scratch:
```
-$ ./bazel-bin/mobilenet_v1_train --quantize=True
+$ ./bazel-bin/mobilenet_v1_train --dataset_dir "path/to/dataset" --checkpoint_dir "path/to/checkpoints" --quantize=True
```
Eval:
```
-$ ./bazel-bin/mobilenet_v1_eval --quantize=True
+$ ./bazel-bin/mobilenet_v1_eval --dataset_dir "path/to/dataset" --checkpoint_dir "path/to/checkpoints" --quantize=True
```
The resulting float and quantized models can be run on-device via [TensorFlow Lite](https://www.tensorflow.org/mobile/tflite/).
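As an illustrative sketch (not part of this commit), a frozen MobileNet v1 graph can typically be converted for on-device use with the `tflite_convert` tool that ships with TensorFlow 1.x. The file name `frozen_mobilenet_v1.pb` and the input/output array names below are assumptions; substitute the names from your own export:

```shell
# Assumed graph file and tensor names -- adjust to match your frozen export.
tflite_convert \
  --graph_def_file=frozen_mobilenet_v1.pb \
  --output_file=mobilenet_v1.tflite \
  --input_arrays=input \
  --output_arrays=MobilenetV1/Predictions/Reshape_1
```

For a model trained with `--quantize=True`, the converter additionally needs a quantized inference type and input statistics; see the TensorFlow Lite converter documentation for the relevant flags.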
@@ -40,7 +40,8 @@ flags.DEFINE_float('depth_multiplier', 1.0, 'Depth multiplier for mobilenet')
flags.DEFINE_bool('quantize', False, 'Quantize training')
flags.DEFINE_string('fine_tune_checkpoint', '',
                    'Checkpoint from which to start finetuning.')
-flags.DEFINE_string('logdir', '', 'Directory for writing training event logs')
+flags.DEFINE_string('checkpoint_dir', '',
+                    'Directory for writing training checkpoints and logs')
flags.DEFINE_string('dataset_dir', '', 'Location of dataset')
flags.DEFINE_integer('log_every_n_steps', 100, 'Number of steps per log')
flags.DEFINE_integer('save_summaries_secs', 100,
@@ -191,7 +192,7 @@ def train_model():
with g.as_default():
  slim.learning.train(
      train_tensor,
-      FLAGS.logdir,
+      FLAGS.checkpoint_dir,
      is_chief=(FLAGS.task == 0),
      master=FLAGS.master,
      log_every_n_steps=FLAGS.log_every_n_steps,