Commit e41999f8 authored by Damien Vincent's avatar Damien Vincent

Entropy coder for images: remove deprecated functions and update README.

parent c9244885
......@@ -8,6 +8,7 @@ code for the following papers:
## Organization
[Image Encoder](image_encoder/): Encoding and decoding images into their binary representation.
[Entropy Coder](entropy_coder/): Lossless compression of the binary representation.
## Contact Info
Model repository maintained by Nick Johnston ([nickj-google](https://github.com/nickj-google)).
......@@ -14,6 +14,11 @@ the width of the binary codes,
sliced into N groups of K, where each additional group is used by the image
decoder to add more details to the reconstructed image.
The code in this directory contains only the underlying code probability model
and does not perform the actual compression using arithmetic coding.
The code probability model is sufficient to compute the theoretical compression
ratio.
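The theoretical compression ratio follows directly from the probability model: each binary symbol costs `-log2(q)` bits, where `q` is the probability the model assigned to the symbol that actually occurred. A minimal sketch in plain Python (the helper name and inputs are hypothetical, not part of this repository):

```python
import math

def theoretical_code_length(bits, probs):
    """Average theoretical code length in bits per binary symbol.

    bits:  binary symbols (0/1) produced by the image encoder.
    probs: the model's predicted probability that each symbol is 1.
    """
    total = 0.0
    for b, p in zip(bits, probs):
        q = p if b == 1 else 1.0 - p  # probability assigned to the observed symbol
        total += -math.log2(q)
    return total / len(bits)

# A model matching the true symbol statistics approaches the source entropy:
avg_bits = theoretical_code_length([1, 1, 1, 0], [0.75] * 4)
```

An arithmetic coder driven by the same probabilities would achieve this average length to within a small constant overhead, which is why the probability model alone suffices for the ratio.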
## Prerequisites
The only software requirement for running the encoder and decoder is having
......@@ -22,7 +27,7 @@ Tensorflow installed.
You will also need to add the top level source directory of the entropy coder
to your `PYTHONPATH`, for example:
`export PYTHONPATH=${PYTHONPATH}:/tmp/compression/entropy_coder`
`export PYTHONPATH=${PYTHONPATH}:/tmp/models/compression`
## Training the entropy coder
......@@ -38,6 +43,8 @@ less.
To generate a synthetic dataset with 20000 samples:
`mkdir -p /tmp/dataset`
`python ./dataset/gen_synthetic_dataset.py --dataset_dir=/tmp/dataset/
--count=20000`
......
......@@ -111,7 +111,7 @@ def train():
decay_steps=decay_steps,
decay_rate=decay_rate,
staircase=True)
tf.contrib.deprecated.scalar_summary('Learning Rate', learning_rate)
tf.summary.scalar('Learning Rate', learning_rate)
optimizer = tf.train.AdamOptimizer(learning_rate=learning_rate,
epsilon=1.0)
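For reference, with `staircase=True` the learning rate produced by `tf.train.exponential_decay` drops in discrete intervals rather than continuously: `lr * decay_rate ** floor(step / decay_steps)`. A small framework-free sketch of that schedule (hypothetical values, not taken from this training config):

```python
def staircase_exponential_decay(initial_lr, step, decay_steps, decay_rate):
    # Integer division reproduces staircase=True: the exponent only
    # increases once every `decay_steps` global steps.
    return initial_lr * decay_rate ** (step // decay_steps)

# E.g. with decay_steps=1000 and decay_rate=0.5, the rate halves
# at step 1000, halves again at step 2000, and so on.
lr_at_2500 = staircase_exponential_decay(0.1, 2500, 1000, 0.5)
```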
......
......@@ -202,11 +202,10 @@ class ProgressiveModel(entropy_coder_model.EntropyCoderModel):
code_length.append(code_length_block(
blocks.ConvertSignCodeToZeroOneCode(x),
blocks.ConvertSignCodeToZeroOneCode(predicted_x)))
tf.contrib.deprecated.scalar_summary('code_length_layer_{:02d}'.format(k),
code_length[-1])
tf.summary.scalar('code_length_layer_{:02d}'.format(k), code_length[-1])
code_length = tf.stack(code_length)
self.loss = tf.reduce_mean(code_length)
tf.contrib.deprecated.scalar_summary('loss', self.loss)
tf.summary.scalar('loss', self.loss)
# Loop over all the remaining layers just to make sure they are
# instantiated. Otherwise, loading model params could fail.
......