Commit 3bbc5d2f authored by Neal Wu

Make the README for textsum a little clearer

parent 29881fb4
@@ -70,9 +70,7 @@ vocabulary size: Most frequent 200k words from dataset's article and summaries.
 <b>How To Run</b>
-Pre-requesite:
-Install TensorFlow and Bazel.
+Prerequisite: install TensorFlow and Bazel.
 ```shell
 # cd to your workspace
@@ -83,7 +81,7 @@ Install TensorFlow and Bazel.
 # If your data files have different names, update the --data_path.
 # If you don't have data but want to try out the model, copy the toy
 # data from the textsum/data/data to the data/ directory in the workspace.
-ls -R
+$ ls -R
 .:
 data textsum WORKSPACE
@@ -97,10 +95,10 @@ data.py seq2seq_attention_decode.py seq2seq_attention.py seq2seq_lib.py
 ./textsum/data:
 data vocab
-bazel build -c opt --config=cuda textsum/...
+$ bazel build -c opt --config=cuda textsum/...
 # Run the training.
-bazel-bin/textsum/seq2seq_attention \
+$ bazel-bin/textsum/seq2seq_attention \
 --mode=train \
 --article_key=article \
 --abstract_key=abstract \
@@ -110,7 +108,7 @@ bazel-bin/textsum/seq2seq_attention \
 --train_dir=textsum/log_root/train
 # Run the eval. Try to avoid running on the same machine as training.
-bazel-bin/textsum/seq2seq_attention \
+$ bazel-bin/textsum/seq2seq_attention \
 --mode=eval \
 --article_key=article \
 --abstract_key=abstract \
@@ -120,7 +118,7 @@ bazel-bin/textsum/seq2seq_attention \
 --eval_dir=textsum/log_root/eval
 # Run the decode. Run it when the model is mostly converged.
-bazel-bin/textsum/seq2seq_attention \
+$ bazel-bin/textsum/seq2seq_attention \
 --mode=decode \
 --article_key=article \
 --abstract_key=abstract \
......
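Outside the diff itself: the `ls -R` step in the README assumes a workspace laid out with `data/`, `textsum/`, and a `WORKSPACE` file side by side. The sketch below recreates that layout from scratch; the directory and file names come from the diff above, while the toy-data copy is left commented because it assumes an existing checkout of the textsum model code.

```shell
#!/bin/sh
# Sketch: recreate the workspace layout the README's `ls -R` output shows.
set -e
mkdir -p workspace/data workspace/textsum/data
touch workspace/WORKSPACE
# cp textsum/data/data workspace/data/   # toy data, if you have the checkout
ls workspace
```

With this layout in place, `bazel build` and the training/eval/decode commands from the diff are run from inside `workspace/`.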