ModelZoo / ResNet50_tensorflow, commit e41999f8
Authored May 24, 2017 by Damien Vincent
Parent: c9244885

Entropy coder for images: remove deprecated functions and update README.
Showing 4 changed files with 12 additions and 5 deletions:

- compression/README.md (+1, -0)
- compression/entropy_coder/README.md (+8, -1)
- compression/entropy_coder/core/entropy_coder_train.py (+1, -1)
- compression/entropy_coder/progressive/progressive.py (+2, -3)
compression/README.md

@@ -8,6 +8,7 @@ code for the following papers:

## Organization

[Image Encoder](image_encoder/): Encoding and decoding images into their binary representation.

[Entropy Coder](entropy_coder/): Lossless compression of the binary representation.

## Contact Info

Model repository maintained by Nick Johnston ([nickj-google](https://github.com/nickj-google)).
compression/entropy_coder/README.md

@@ -14,6 +14,11 @@ the width of the binary codes,
sliced into N groups of K, where each additional group is used by the image
decoder to add more details to the reconstructed image.

The code in this directory only contains the underlying code probability model
but does not perform the actual compression using arithmetic coding. The code
probability model is enough to compute the theoretical compression ratio.
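To illustrate the point the README makes, here is a minimal plain-Python sketch (function name hypothetical, not from the repo) of how a code probability model alone yields a theoretical compression figure: the expected ideal code length is the cross-entropy of the data under the model, in bits per binary symbol, with no arithmetic coder involved.

```python
import math

def cross_entropy_bits(bits, probs):
    """Average ideal code length (bits per symbol) when coding `bits`
    with an entropy coder driven by `probs`, the model's P(bit == 1).
    This is the cross-entropy of the data under the model."""
    total = 0.0
    for b, p in zip(bits, probs):
        total += -math.log2(p if b == 1 else 1.0 - p)
    return total / len(bits)

# A model that predicts a fair coin needs exactly 1 bit per symbol:
print(cross_entropy_bits([0, 1, 0, 1], [0.5, 0.5, 0.5, 0.5]))  # -> 1.0

# A confident, correct model compresses below 1 bit per symbol:
print(cross_entropy_bits([1, 1, 1, 1], [0.9, 0.9, 0.9, 0.9]))  # ~0.152
```

Since the uncompressed binary representation costs 1 bit per symbol, the theoretical compression ratio is simply 1 divided by this cross-entropy.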
## Prerequisites

The only software requirement for running the encoder and decoder is having
TensorFlow installed.

@@ -22,7 +27,7 @@ TensorFlow installed.
You will also need to add the top level source directory of the entropy coder
to your `PYTHONPATH`, for example:

-`export PYTHONPATH=${PYTHONPATH}:/tmp/compression/entropy_coder`
+`export PYTHONPATH=${PYTHONPATH}:/tmp/models/compression`
## Training the entropy coder

@@ -38,6 +43,8 @@ less.

To generate a synthetic dataset with 20000 samples:

`mkdir -p /tmp/dataset`
`python ./dataset/gen_synthetic_dataset.py --dataset_dir=/tmp/dataset/ --count=20000`
compression/entropy_coder/core/entropy_coder_train.py

```diff
@@ -111,7 +111,7 @@ def train():
         decay_steps=decay_steps,
         decay_rate=decay_rate,
         staircase=True)
-    tf.contrib.deprecated.scalar_summary('Learning Rate', learning_rate)
+    tf.summary.scalar('Learning Rate', learning_rate)

     optimizer = tf.train.AdamOptimizer(learning_rate=learning_rate,
                                        epsilon=1.0)
```
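The context of this hunk shows a learning rate built from an exponential decay schedule with `staircase=True`. A plain-Python sketch of that schedule (function name hypothetical; the shape mirrors TensorFlow's `tf.train.exponential_decay`):

```python
def exponential_decay(base_lr, step, decay_steps, decay_rate, staircase=True):
    """lr = base_lr * decay_rate ** (step / decay_steps).
    With staircase=True the exponent is truncated to an integer, so the
    learning rate drops in discrete jumps every `decay_steps` steps
    instead of decaying continuously."""
    exponent = step / decay_steps
    if staircase:
        exponent = step // decay_steps
    return base_lr * decay_rate ** exponent

print(exponential_decay(0.1, 0, 1000, 0.5))     # -> 0.1
print(exponential_decay(0.1, 999, 1000, 0.5))   # -> 0.1 (still on the first stair)
print(exponential_decay(0.1, 1000, 1000, 0.5))  # -> 0.05
```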
compression/entropy_coder/progressive/progressive.py

```diff
@@ -202,11 +202,10 @@ class ProgressiveModel(entropy_coder_model.EntropyCoderModel):
       code_length.append(code_length_block(
           blocks.ConvertSignCodeToZeroOneCode(x),
           blocks.ConvertSignCodeToZeroOneCode(predicted_x)))
-      tf.contrib.deprecated.scalar_summary(
-          'code_length_layer_{:02d}'.format(k), code_length[-1])
+      tf.summary.scalar('code_length_layer_{:02d}'.format(k), code_length[-1])
     code_length = tf.stack(code_length)
     self.loss = tf.reduce_mean(code_length)
-    tf.contrib.deprecated.scalar_summary('loss', self.loss)
+    tf.summary.scalar('loss', self.loss)

     # Loop over all the remaining layers just to make sure they are
     # instantiated. Otherwise, loading model params could fail.
```
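The unchanged code around this hunk computes one code length per progressive layer, stacks them, and takes the mean as the training loss, so every layer contributes equally. A minimal plain-Python analogue (function name and the sample numbers are invented for illustration):

```python
def mean_code_length(per_layer_bits):
    """The loss pattern in the hunk above: average the per-layer code
    lengths so no single progressive layer dominates the objective."""
    return sum(per_layer_bits) / len(per_layer_bits)

# e.g. three progressive layers, each extra layer cheaper to predict:
print(mean_code_length([0.9, 0.6, 0.3]))  # -> 0.6
```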