Commit a80158ed authored by Neal Wu, committed by GitHub

Merge pull request #1482 from alexgorban/master

attention_ocr: Update checkpoint and instructions.
parents 71bf3d47 b7253ccd
@@ -22,16 +22,25 @@ Pull requests:
 ## Requirements
-1. Installed TensorFlow library ([instructions][TF]).
-2. At least 158Gb of free disk space to download FSNS dataset:
+1. Install the TensorFlow library ([instructions][TF]). For example:
 ```
-aria2c -c -j 20 -i ../street/python/fsns_urls.txt
+virtualenv --system-site-packages ~/.tensorflow
+source ~/.tensorflow/bin/activate
+pip install --upgrade pip
+pip install --upgrade tensorflow_gpu
 ```
-3. 16Gb of RAM or more, 32Gb is recommended.
-4. The train.py works with in both modes CPU and GPU, using GPU is preferable.
-The GPU mode was tested with Titan X and GTX980.
+2. At least 158GB of free disk space to download the FSNS dataset:
+```
+cd models/attention_ocr/python/datasets
+aria2c -c -j 20 -i ../../../street/python/fsns_urls.txt
+cd ..
+```
+3. 16GB of RAM or more; 32GB is recommended.
+4. `train.py` works with both CPU and GPU, though using GPU is preferable. It has been tested with a Titan X and with a GTX980.
 [TF]: https://www.tensorflow.org/install/
 [FSNS]: https://github.com/tensorflow/models/tree/master/street
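Before starting a long download-plus-training run, it is worth sanity-checking the TensorFlow install and one of the downloaded FSNS shards. Below is a minimal sketch, assuming TF 1.x and that aria2c placed the shards under `datasets/data/fsns/`; the shard name and path are assumptions, so adjust them to your layout.

```python
# Minimal sanity check (sketch, not part of the repository).
# Assumptions: TF 1.x is installed and at least one FSNS training shard
# was downloaded to the path below; adjust the path to your layout.
import tensorflow as tf

print('TensorFlow version:', tf.__version__)
print('GPU available:', tf.test.is_gpu_available())

shard = 'datasets/data/fsns/train/train-00000-of-00512'  # assumed location
num_records = sum(1 for _ in tf.python_io.tf_record_iterator(shard))
print('Records in %s: %d' % (shard, num_records))
```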
@@ -50,7 +59,8 @@ To train from scratch:
 python train.py
 ```
-To train a model using a pre-trained inception weights as initialization:
+To train a model using pre-trained Inception weights as initialization:
 ```
 wget http://download.tensorflow.org/models/inception_v3_2016_08_28.tar.gz
 tar xf inception_v3_2016_08_28.tar.gz
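The `--checkpoint_inception` path shown next initializes only the convolutional tower from this classification checkpoint. For background, the general slim mechanism behind that kind of partial restore looks roughly like the sketch below; this is not the repository's exact code, and the `InceptionV3` scope name is illustrative rather than the scope train.py actually uses.

```python
# Sketch of restoring only the Inception V3 backbone from a classification
# checkpoint, leaving OCR-specific variables at their fresh initialization.
# Scope name and checkpoint path are illustrative assumptions.
import tensorflow as tf

slim = tf.contrib.slim

def inception_init_fn(checkpoint_path='inception_v3.ckpt'):
  # Collect only variables under the (assumed) Inception scope.
  variables_to_restore = slim.get_variables_to_restore(include=['InceptionV3'])
  # Returns a callable(session) suitable as init_fn for slim.learning.train.
  return slim.assign_from_checkpoint_fn(
      checkpoint_path, variables_to_restore, ignore_missing_vars=True)
```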
@@ -60,16 +70,17 @@ python train.py --checkpoint_inception=inception_v3.ckpt
 To fine tune the Attention OCR model using a checkpoint:
 ```
-wget http://download.tensorflow.org/models/attention_ocr_2017_05_01.tar.gz
-tar xf attention_ocr_2017_05_01.tar.gz
-python train.py --checkpoint=model.ckpt-232572
+wget http://download.tensorflow.org/models/attention_ocr_2017_05_17.tar.gz
+tar xf attention_ocr_2017_05_17.tar.gz
+python train.py --checkpoint=model.ckpt-399731
 ```
 ## Disclaimer
 This code is a modified version of the internal model we used for our paper.
-Currently it reaches 82.71% full sequence accuracy after 215k steps of training.
+Currently it reaches 83.79% full sequence accuracy after 400k steps of training.
 The main difference between this version and the version used in the paper is
 that for the paper we used distributed training with 50 GPU (K80) workers and
 asynchronous updates, while
-the provided checkpoint was created using this code after ~60 hours of
-training on a single GPU (Titan X).
+the provided checkpoint was created using this code after ~6 days of
+training on a single GPU (Titan X); it reached 81% after 24 hours of training.
+The coordinate encoding described in the paper is still missing (TODO(alexgorban@)).
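To confirm the checkpoint tarball unpacked correctly before fine-tuning, the variables it stores can be listed with the standard TF 1.x checkpoint reader. A minimal sketch: `model.ckpt-399731` is the prefix from the attention_ocr_2017_05_17 archive above, and the path assumes you run from the directory where it was extracted.

```python
# Sketch: list a few variables stored in the downloaded checkpoint to verify
# the archive extracted correctly. Adjust the path if you extracted elsewhere.
import tensorflow as tf

reader = tf.train.NewCheckpointReader('model.ckpt-399731')
var_shapes = reader.get_variable_to_shape_map()
for name in sorted(var_shapes)[:10]:
  print(name, var_shapes[name])
```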