Commit f2b80157 authored by Xin Pan, committed by GitHub

Merge pull request #1408 from tensorflow/readme-clarifications

Improvements to several READMEs
parents f94f1637 00c9b3aa
@@ -46,7 +46,7 @@ https://github.com/panyx0718/models/tree/master/slim
# Download the data to the data/ directory.
# List the codes.
-ls -R differential_privacy/
+$ ls -R differential_privacy/
differential_privacy/:
dp_sgd __init__.py privacy_accountant README.md
@@ -72,16 +72,16 @@ differential_privacy/privacy_accountant/tf:
accountant.py accountant_test.py BUILD
# List the data.
-ls -R data/
+$ ls -R data/
./data:
mnist_test.tfrecord mnist_train.tfrecord
# Build the codes.
-bazel build -c opt differential_privacy/...
+$ bazel build -c opt differential_privacy/...
# Run the MNIST differential privacy training code.
-bazel-bin/differential_privacy/dp_sgd/dp_mnist/dp_mnist \
+$ bazel-bin/differential_privacy/dp_sgd/dp_mnist/dp_mnist \
--training_data_path=data/mnist_train.tfrecord \
--eval_data_path=data/mnist_test.tfrecord \
--save_path=/tmp/mnist_dir
@@ -102,6 +102,6 @@ train_accuracy: 0.53
eval_accuracy: 0.53
...
-ls /tmp/mnist_dir/
+$ ls /tmp/mnist_dir/
checkpoint ckpt ckpt.meta results-0.json
```
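The run directory listed above contains a results-0.json file next to the checkpoints. A minimal sketch for inspecting it after training; the file's exact schema is not documented here, so this simply pretty-prints whatever the run recorded (such as the train/eval accuracies shown in the log):

```python
import json

# Path from the --save_path flag used above.
with open("/tmp/mnist_dir/results-0.json") as f:
    results = json.load(f)

# Pretty-print whatever metrics dp_mnist recorded for this run.
print(json.dumps(results, indent=2, sort_keys=True))
```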
@@ -73,7 +73,7 @@ LSTM-8192-2048 (50\% Dropout) | 32.2 | 3.3
<b>How To Run</b>
-Pre-requesite:
+Prerequisites:
* Install TensorFlow.
* Install Bazel.
@@ -97,7 +97,7 @@ Pre-requesite:
[link](http://download.tensorflow.org/models/LM_LSTM_CNN/vocab-2016-09-10.txt)
* test dataset: link
[link](http://download.tensorflow.org/models/LM_LSTM_CNN/test/news.en.heldout-00000-of-00050)
-* It is recommended to run on modern desktop instead of laptop.
+* It is recommended to run on a modern desktop instead of a laptop.
```shell
# 1. Clone the code to your workspace.
@@ -105,7 +105,7 @@ Pre-requesite:
# 3. Create an empty WORKSPACE file in your workspace.
# 4. Create an empty output directory in your workspace.
# Example directory structure below:
-ls -R
+$ ls -R
.:
data lm_1b output WORKSPACE
@@ -121,9 +121,9 @@ BUILD data_utils.py lm_1b_eval.py README.md
./output:
# Build the codes.
-bazel build -c opt lm_1b/...
+$ bazel build -c opt lm_1b/...
# Run sample mode:
-bazel-bin/lm_1b/lm_1b_eval --mode sample \
+$ bazel-bin/lm_1b/lm_1b_eval --mode sample \
--prefix "I love that I" \
--pbtxt data/graph-2016-09-10.pbtxt \
--vocab_file data/vocab-2016-09-10.txt \
@@ -138,7 +138,7 @@ I love that I find that amazing
...(omitted)
# Run eval mode:
-bazel-bin/lm_1b/lm_1b_eval --mode eval \
+$ bazel-bin/lm_1b/lm_1b_eval --mode eval \
--pbtxt data/graph-2016-09-10.pbtxt \
--vocab_file data/vocab-2016-09-10.txt \
--input_data data/news.en.heldout-00000-of-00050 \
@@ -166,7 +166,7 @@ Eval Step: 4531, Average Perplexity: 29.285674.
...(omitted. At convergence, it should be around 30.)
# Run dump_emb mode:
-bazel-bin/lm_1b/lm_1b_eval --mode dump_emb \
+$ bazel-bin/lm_1b/lm_1b_eval --mode dump_emb \
--pbtxt data/graph-2016-09-10.pbtxt \
--vocab_file data/vocab-2016-09-10.txt \
--ckpt 'data/ckpt-*' \
@@ -177,17 +177,17 @@ Finished word embedding 0/793471
Finished word embedding 1/793471
Finished word embedding 2/793471
...(omitted)
-ls output/
+$ ls output/
embeddings_softmax.npy ...
# Run dump_lstm_emb mode:
-bazel-bin/lm_1b/lm_1b_eval --mode dump_lstm_emb \
+$ bazel-bin/lm_1b/lm_1b_eval --mode dump_lstm_emb \
--pbtxt data/graph-2016-09-10.pbtxt \
--vocab_file data/vocab-2016-09-10.txt \
--ckpt 'data/ckpt-*' \
--sentence "I love who I am ." \
--save_dir output
-ls output/
+$ ls output/
lstm_emb_step_0.npy lstm_emb_step_2.npy lstm_emb_step_4.npy
lstm_emb_step_6.npy lstm_emb_step_1.npy lstm_emb_step_3.npy
lstm_emb_step_5.npy
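The dump_emb and dump_lstm_emb outputs listed above are plain NumPy .npy files, so they can be inspected directly. A minimal sketch, assuming one softmax-embedding row per vocabulary word (793471 words, per the progress messages) and one lstm_emb_step_*.npy file per token of the input sentence:

```python
import glob
import numpy as np

# Softmax embeddings written by dump_emb mode.
emb = np.load("output/embeddings_softmax.npy")
print("softmax embeddings:", emb.shape)

# Per-step LSTM states written by dump_lstm_emb mode for the
# sentence "I love who I am ." (one file per step).
for path in sorted(glob.glob("output/lstm_emb_step_*.npy")):
    print(path, np.load(path).shape)
```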
@@ -34,7 +34,7 @@ to tf.SequenceExample.
<b>How to run:</b>
```shell
-ls -R
+$ ls -R
.:
data next_frame_prediction WORKSPACE
@@ -52,14 +52,14 @@ cross_conv2.png cross_conv3.png cross_conv.png
# Build everything.
-bazel build -c opt next_frame_prediction/...
+$ bazel build -c opt next_frame_prediction/...
# The following example runs the generated 2d objects.
# For Sprites dataset, image_size should be 60, norm_scale should be 255.0.
# Batch size is normally 16~64, depending on your memory size.
#
# Run training.
-bazel-bin/next_frame_prediction/cross_conv/train \
+$ bazel-bin/next_frame_prediction/cross_conv/train \
--batch_size=1 \
--data_filepattern=data/tfrecords \
--image_size=64 \
@@ -75,9 +75,9 @@ step: 7, loss: 1.747665
step: 8, loss: 1.572436
step: 9, loss: 1.586816
step: 10, loss: 1.434191
#
# Run eval.
-bazel-bin/next_frame_prediction/cross_conv/eval \
+$ bazel-bin/next_frame_prediction/cross_conv/eval \
--batch_size=1 \
--data_filepattern=data/tfrecords_test \
--image_size=64 \
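The next_frame_prediction data is described above as being converted to tf.SequenceExample records. As a rough illustration only (the repository presumably ships its own conversion script; the feature key "image/encoded" and the use of already-encoded frame strings are assumptions), a clip could be packed like this:

```python
import tensorflow as tf

def write_clip(encoded_frames, path):
    """Write one clip (a list of encoded frame strings) as a
    tf.SequenceExample TFRecord. Feature names are illustrative only."""
    frames = tf.train.FeatureList(feature=[
        tf.train.Feature(bytes_list=tf.train.BytesList(value=[f]))
        for f in encoded_frames])
    example = tf.train.SequenceExample(
        feature_lists=tf.train.FeatureLists(
            feature_list={"image/encoded": frames}))
    with tf.python_io.TFRecordWriter(path) as writer:
        writer.write(example.SerializeToString())
```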
@@ -23,7 +23,7 @@ https://arxiv.org/pdf/1605.07146v1.pdf
<b>Settings:</b>
* Randomly split the 50k training set into a 45k/5k train/eval split.
-* Pad to 36x36 and random crop. Horizontal flip. Per-image whitenting.
+* Pad to 36x36 and random crop. Horizontal flip. Per-image whitening.
* Momentum optimizer 0.9.
* Learning rate schedule: 0.1 (40k), 0.01 (60k), 0.001 (>60k).
* L2 weight decay: 0.002.
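A small sketch of the piecewise-constant learning-rate schedule listed above; the actual schedule lives inside resnet_main.py and may differ in detail:

```python
def learning_rate(global_step):
    # 0.1 for the first 40k steps, 0.01 until 60k steps, 0.001 afterwards.
    if global_step < 40000:
        return 0.1
    if global_step < 60000:
        return 0.01
    return 0.001
```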
@@ -65,37 +65,37 @@ curl -o cifar-100-binary.tar.gz https://www.cs.toronto.edu/~kriz/cifar-100-binar
<b>How to run:</b>
```shell
-# cd to the your workspace.
+# cd to your workspace.
# It contains an empty WORKSPACE file, the resnet code and the cifar10 dataset.
# Note: you can split 5k examples off the train set to use as an eval set.
-ls -R
+$ ls -R
.:
cifar10 resnet WORKSPACE
./cifar10:
data_batch_1.bin data_batch_2.bin data_batch_3.bin data_batch_4.bin
data_batch_5.bin test_batch.bin
./resnet:
BUILD cifar_input.py g3doc README.md resnet_main.py resnet_model.py
# Build everything for GPU.
-bazel build -c opt --config=cuda resnet/...
+$ bazel build -c opt --config=cuda resnet/...
# Train the model.
-bazel-bin/resnet/resnet_main --train_data_path=cifar10/data_batch* \
+$ bazel-bin/resnet/resnet_main --train_data_path=cifar10/data_batch* \
--log_root=/tmp/resnet_model \
--train_dir=/tmp/resnet_model/train \
--dataset='cifar10' \
--num_gpus=1
# While the model is training, you can also check on its progress using tensorboard:
-tensorboard --logdir=/tmp/resnet_model
+$ tensorboard --logdir=/tmp/resnet_model
# Evaluate the model.
# Avoid running on the same GPU as the training job at the same time,
# otherwise, you might run out of memory.
-bazel-bin/resnet/resnet_main --eval_data_path=cifar10/test_batch.bin \
+$ bazel-bin/resnet/resnet_main --eval_data_path=cifar10/test_batch.bin \
--log_root=/tmp/resnet_model \
--eval_dir=/tmp/resnet_model/test \
--mode=eval \
@@ -16,7 +16,7 @@ The results described below are based on model trained on multi-gpu and
multi-machine settings. It has been simplified to run on a single machine
for the open-source release.
-<b>DataSet</b>
+<b>Dataset</b>
We used the Gigaword dataset described in [Rush et al. A Neural Attention Model
for Sentence Summarization](https://arxiv.org/abs/1509.00685).