Commit c9244885
Authored May 22, 2017 by Nick Johnston; committed by GitHub on May 22, 2017
Merge pull request #1499 from damienv-gh/master
Image compression: initial version of the entropy coder.
Parents: c9397c90, 18e5ce8d
Changes: 20 of 37 changed files shown, with 1248 additions and 0 deletions (+1248, -0).
compression/entropy_coder/README.md (+102, -0)
compression/entropy_coder/__init__.py (+0, -0)
compression/entropy_coder/all_models/__init__.py (+0, -0)
compression/entropy_coder/all_models/all_models.py (+19, -0)
compression/entropy_coder/all_models/all_models_test.py (+68, -0)
compression/entropy_coder/configs/gru_prime3/model_config.json (+4, -0)
compression/entropy_coder/configs/synthetic/input_config.json (+4, -0)
compression/entropy_coder/configs/synthetic/model_config.json (+4, -0)
compression/entropy_coder/configs/synthetic/train_config.json (+6, -0)
compression/entropy_coder/core/code_loader.py (+73, -0)
compression/entropy_coder/core/config_helper.py (+52, -0)
compression/entropy_coder/core/entropy_coder_single.py (+116, -0)
compression/entropy_coder/core/entropy_coder_train.py (+184, -0)
compression/entropy_coder/dataset/gen_synthetic_dataset.py (+88, -0)
compression/entropy_coder/dataset/gen_synthetic_single.py (+72, -0)
compression/entropy_coder/dataset/synthetic_model.py (+74, -0)
compression/entropy_coder/lib/__init__.py (+0, -0)
compression/entropy_coder/lib/block_base.py (+258, -0)
compression/entropy_coder/lib/block_util.py (+100, -0)
compression/entropy_coder/lib/blocks.py (+24, -0)
compression/entropy_coder/README.md
0 → 100644
# Neural net based entropy coding
This is a [TensorFlow](http://www.tensorflow.org/) model for additional
lossless compression of bitstreams generated by neural net based image
encoders as described in
[https://arxiv.org/abs/1703.10114](https://arxiv.org/abs/1703.10114).
More specifically, the entropy coder aims at further compressing binary
codes which have a 3D tensor structure with:

* the first two dimensions of the tensors corresponding to the height and
  the width of the binary codes,
* the last dimension being the depth of the codes. The last dimension can be
  sliced into N groups of K, where each additional group is used by the image
  decoder to add more details to the reconstructed image. A short sketch of
  this layout follows the list.
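
To make the layout concrete, here is a minimal illustrative sketch (the
shape values are hypothetical, not taken from this commit):

    import numpy as np

    # Hypothetical binary code emitted by an image encoder:
    # height x width x (N groups of K bits each).
    height, width, n_groups, k = 32, 48, 8, 2
    codes = np.random.randint(0, 2, size=(height, width, n_groups * k))

    # The image decoder uses the first group for a coarse reconstruction
    # and each additional group to add more detail.
    coarse = codes[:, :, :k]
    finer = codes[:, :, :2 * k]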
## Prerequisites

The only software requirement for running the encoder and decoder is having
TensorFlow installed.

You will also need to add the top level source directory of the entropy coder
to your `PYTHONPATH`, for example:

`export PYTHONPATH=${PYTHONPATH}:/tmp/compression/entropy_coder`
## Training the entropy coder

### Synthetic dataset

If you do not have a training dataset, there is a simple code generative model
that you can use to generate a dataset and play with the entropy coder.
The generative model is located under `dataset/gen_synthetic_dataset.py`. Note
that this simple generative model is not going to give good results on real
images, as it is not supposed to be close to the statistics of the binary
representation of encoded images. Consider it as a toy dataset, no more, no
less.

To generate a synthetic dataset with 20000 samples:

`python ./dataset/gen_synthetic_dataset.py --dataset_dir=/tmp/dataset/ --count=20000`

Note that the generator has not been optimized at all; generating the
synthetic dataset is currently pretty slow.
### Training

If you just want to play with the entropy coder trainer, here is the command
line that can be used to train the entropy coder on the synthetic dataset:

`mkdir -p /tmp/entropy_coder_train`

`python ./core/entropy_coder_train.py --task=0 --train_dir=/tmp/entropy_coder_train/ --model=progressive --model_config=./configs/synthetic/model_config.json --train_config=./configs/synthetic/train_config.json --input_config=./configs/synthetic/input_config.json`
Training is configured using 3 JSON files (a loading sketch follows this
list):

* One file is used to configure the underlying entropy coder model.
  Currently, only the *progressive* model is supported.
  This model takes 2 mandatory parameters and an optional one:
  * `layer_depth`: the number of bits per layer (a.k.a. iteration).
    Background: the image decoder takes each layer to add more detail
    to the image.
  * `layer_count`: the maximum number of layers that should be supported
    by the model. This should be equal to or greater than the maximum number
    of layers in the input binary codes.
  * `coded_layer_count`: this can be used to consider only partial codes,
    keeping only the first `coded_layer_count` layers and ignoring the
    remaining layers. If left empty, the binary codes are left unchanged.
* One file is used to configure the training, including the learning rate and
  related parameters. The meaning of each parameter is pretty straightforward.
  Note that this file is only used during training and is not needed during
  inference.
* One file is used to specify the input dataset to use during training.
  The dataset is formatted using tf.RecordIO.
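
For reference, here is a minimal sketch of loading these files the way the
trainer does (the paths are the synthetic examples from this commit, relative
to `compression/entropy_coder`):

    import json

    with open('./configs/synthetic/model_config.json') as f:
      model_config = json.load(f)
    with open('./configs/synthetic/train_config.json') as f:
      train_config = json.load(f)
    with open('./configs/synthetic/input_config.json') as f:
      input_config = json.load(f)

    print(model_config['layer_depth'], model_config['layer_count'])   # 2 8
    print(train_config['batch_size'], train_config['learning_rate'])  # 4 0.1
    print(input_config['data'])  # /tmp/dataset/synthetic_dataset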
## Inference: file size after entropy coding

### Using a synthetic sample

Here is the command line to generate a single synthetic sample formatted
in the same way as what is provided by the image encoder:

`python ./dataset/gen_synthetic_single.py --sample_filename=/tmp/dataset/sample_0000.npz`

To actually compute the additional compression ratio using the entropy coder
trained in the previous step:

`python ./core/entropy_coder_single.py --model=progressive --model_config=./configs/synthetic/model_config.json --input_codes=/tmp/dataset/sample_0000.npz --checkpoint=/tmp/entropy_coder_train/model.ckpt-209078`

where the checkpoint number should be adjusted accordingly.
compression/entropy_coder/__init__.py
0 → 100644
compression/entropy_coder/all_models/__init__.py
0 → 100644
compression/entropy_coder/all_models/all_models.py
0 → 100644
# Copyright 2017 The TensorFlow Authors All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
"""Import and register all the entropy coder models."""
# pylint: disable=unused-import
from entropy_coder.progressive import progressive
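
The file above registers models purely through an import side effect:
importing `entropy_coder.progressive.progressive` is what makes the
*progressive* model available through `model_factory.GetModelRegistry()`.
The registry itself is not included in this page of the diff, so the
following is only a hedged sketch of the pattern, with hypothetical names:

    # Hypothetical sketch of an import-side-effect model registry; the real
    # model_factory module is not shown on this page of the diff.
    _MODEL_REGISTRY = {}

    def RegisterModel(name):
      def _decorator(cls):
        _MODEL_REGISTRY[name] = cls  # runs when the defining module is imported
        return cls
      return _decorator

    @RegisterModel('progressive')
    class ProgressiveModel(object):
      """Placeholder standing in for the real progressive model."""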
compression/entropy_coder/all_models/all_models_test.py
0 → 100644
# Copyright 2017 The TensorFlow Authors All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
"""Basic test of all registered models."""
import tensorflow as tf

# pylint: disable=unused-import
import all_models
# pylint: enable=unused-import
from entropy_coder.model import model_factory


class AllModelsTest(tf.test.TestCase):

  def testBuildModelForTraining(self):
    factory = model_factory.GetModelRegistry()
    model_names = factory.GetAvailableModels()

    for m in model_names:
      tf.reset_default_graph()

      global_step = tf.Variable(tf.zeros([], dtype=tf.int64),
                                trainable=False,
                                name='global_step')
      optimizer = tf.train.GradientDescentOptimizer(learning_rate=0.1)

      batch_size = 3
      height = 40
      width = 20
      depth = 5
      binary_codes = tf.placeholder(dtype=tf.float32,
                                    shape=[batch_size, height, width, depth])

      # Create a model with the default configuration.
      print('Creating model: {}'.format(m))
      model = factory.CreateModel(m)
      model.Initialize(global_step,
                       optimizer,
                       model.GetConfigStringForUnitTest())
      self.assertTrue(model.loss is None, 'model: {}'.format(m))
      self.assertTrue(model.train_op is None, 'model: {}'.format(m))
      self.assertTrue(model.average_code_length is None,
                      'model: {}'.format(m))

      # Build the Tensorflow graph corresponding to the model.
      model.BuildGraph(binary_codes)
      self.assertTrue(model.loss is not None, 'model: {}'.format(m))
      self.assertTrue(model.average_code_length is not None,
                      'model: {}'.format(m))
      if model.train_op is None:
        print('Model {} is not trainable'.format(m))


if __name__ == '__main__':
  tf.test.main()
compression/entropy_coder/configs/gru_prime3/model_config.json
0 → 100644
{
  "layer_count": 16,
  "layer_depth": 32
}
compression/entropy_coder/configs/synthetic/input_config.json
0 → 100644
{
  "data": "/tmp/dataset/synthetic_dataset",
  "unique_code_size": true
}
compression/entropy_coder/configs/synthetic/model_config.json
0 → 100644
{
  "layer_depth": 2,
  "layer_count": 8
}
compression/entropy_coder/configs/synthetic/train_config.json
0 → 100644
{
  "batch_size": 4,
  "learning_rate": 0.1,
  "decay_rate": 0.9,
  "samples_per_decay": 20000
}
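
As a quick check of how these training parameters interact (the formula comes
from entropy_coder_train.py below, which decays the learning rate as
decay_rate ** floor(step / decay_steps) with decay_steps =
samples_per_decay / batch_size):

    batch_size = 4
    samples_per_decay = 20000
    decay_steps = max(samples_per_decay / batch_size, 1)  # 5000 steps
    # After 10000 training steps:
    #   learning_rate = 0.1 * 0.9 ** (10000 // 5000) = 0.081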
compression/entropy_coder/core/code_loader.py
0 → 100644
# Copyright 2017 The TensorFlow Authors All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
"""Load binary codes stored as tf.Example in a TFRecord table."""
import tensorflow as tf


def ReadFirstCode(dataset):
  """Read the first example from a binary code RecordIO table."""
  for record in tf.python_io.tf_record_iterator(dataset):
    tf_example = tf.train.Example()
    tf_example.ParseFromString(record)
    break
  return tf_example


def LoadBinaryCode(input_config, batch_size):
  """Load a batch of binary codes from a tf.Example dataset.

  Args:
    input_config: An InputConfig proto containing the input configuration.
    batch_size: Output batch size of examples.

  Returns:
    A batched tensor of binary codes.
  """
  data = input_config.data

  # TODO: Possibly use multiple files (instead of just one).
  file_list = [data]
  filename_queue = tf.train.string_input_producer(file_list, capacity=4)
  reader = tf.TFRecordReader()
  _, values = reader.read(filename_queue)

  serialized_example = tf.reshape(values, shape=[1])
  serialized_features = {
      'code_shape': tf.FixedLenFeature([3], dtype=tf.int64),
      'code': tf.VarLenFeature(tf.float32),
  }
  example = tf.parse_example(serialized_example, serialized_features)

  # 3D shape: height x width x binary_code_depth
  z = example['code_shape']
  code_shape = tf.reshape(tf.cast(z, tf.int32), [3])
  # Un-flatten the binary codes.
  code = tf.reshape(tf.sparse_tensor_to_dense(example['code']), code_shape)

  queue_size = 10
  queue = tf.PaddingFIFOQueue(
      queue_size + 3 * batch_size,
      dtypes=[code.dtype],
      shapes=[[None, None, None]])
  enqueue_op = queue.enqueue([code])
  dequeue_code = queue.dequeue_many(batch_size)
  queue_runner = tf.train.queue_runner.QueueRunner(queue, [enqueue_op])
  tf.add_to_collection(tf.GraphKeys.QUEUE_RUNNERS, queue_runner)

  return dequeue_code
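
A minimal sketch of driving this loader with the standard TF1 queue-runner
boilerplate, assuming the synthetic dataset from the README has already been
generated (the InputConfig helper comes from config_helper.py below):

    import tensorflow as tf

    import code_loader
    import config_helper

    input_config = config_helper.InputConfig(
        '{"data": "/tmp/dataset/synthetic_dataset", "unique_code_size": true}')
    codes = code_loader.LoadBinaryCode(input_config=input_config, batch_size=4)

    with tf.Session() as sess:
      coord = tf.train.Coordinator()
      threads = tf.train.start_queue_runners(sess=sess, coord=coord)
      batch = sess.run(codes)  # shape: [4, height, width, depth]
      coord.request_stop()
      coord.join(threads)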
compression/entropy_coder/core/config_helper.py
0 → 100644
# Copyright 2017 The TensorFlow Authors All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
"""Helper functions used in both train and inference."""
import json
import os.path

import tensorflow as tf


def GetConfigString(config_file):
  config_string = ''
  if config_file is not None:
    config_string = open(config_file).read()
  return config_string


class InputConfig(object):

  def __init__(self, config_string):
    config = json.loads(config_string)
    self.data = config["data"]
    self.unique_code_size = config["unique_code_size"]


class TrainConfig(object):

  def __init__(self, config_string):
    config = json.loads(config_string)
    self.batch_size = config["batch_size"]
    self.learning_rate = config["learning_rate"]
    self.decay_rate = config["decay_rate"]
    self.samples_per_decay = config["samples_per_decay"]


def SaveConfig(directory, filename, config_string):
  path = os.path.join(directory, filename)
  with tf.gfile.Open(path, mode='w') as f:
    f.write(config_string)
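
Usage is straightforward; for example, with the synthetic training config
from this commit (assuming the working directory is
`compression/entropy_coder`):

    import config_helper

    config_string = config_helper.GetConfigString(
        './configs/synthetic/train_config.json')
    train_config = config_helper.TrainConfig(config_string)
    print(train_config.batch_size, train_config.learning_rate)  # 4 0.1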
compression/entropy_coder/core/entropy_coder_single.py
0 → 100644
# Copyright 2017 The TensorFlow Authors All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
"""Compute the additional compression ratio after entropy coding."""
import io
import os

import numpy as np
import tensorflow as tf

import config_helper

# pylint: disable=unused-import
from entropy_coder.all_models import all_models
# pylint: enable=unused-import
from entropy_coder.model import model_factory

# Checkpoint used to restore the model parameters.
tf.app.flags.DEFINE_string('checkpoint', None,
                           """Model checkpoint.""")

# Model selection and configuration.
tf.app.flags.DEFINE_string('model', None,
                           """Underlying encoder model.""")
tf.app.flags.DEFINE_string('model_config', None,
                           """Model config protobuf given as text file.""")

# File holding the binary codes.
tf.flags.DEFINE_string('input_codes', None,
                       'Location of binary code file.')

FLAGS = tf.flags.FLAGS


def main(_):
  if (FLAGS.input_codes is None or FLAGS.model is None):
    print('\nUsage: python entropy_coder_single.py --model=progressive '
          '--model_config=model_config.json '
          '--iteration=15\n\n')
    return

  #if FLAGS.iteration < -1 or FLAGS.iteration > 15:
  #  print ('\n--iteration must be between 0 and 15 inclusive, or -1 to infer '
  #         'from file.\n')
  #  return
  #iteration = FLAGS.iteration

  if not tf.gfile.Exists(FLAGS.input_codes):
    print('\nInput codes not found.\n')
    return

  with tf.gfile.FastGFile(FLAGS.input_codes, 'rb') as code_file:
    contents = code_file.read()
    loaded_codes = np.load(io.BytesIO(contents))
    # Both the packed codes and their shape must be present in the file.
    assert 'codes' in loaded_codes.files and 'shape' in loaded_codes.files
    loaded_shape = loaded_codes['shape']
    loaded_array = loaded_codes['codes']

    # Unpack and recover code shapes.
    unpacked_codes = np.reshape(np.unpackbits(loaded_array)
                                [:np.prod(loaded_shape)],
                                loaded_shape)

    numpy_int_codes = unpacked_codes.transpose([1, 2, 3, 0, 4])
    numpy_int_codes = numpy_int_codes.reshape([numpy_int_codes.shape[0],
                                               numpy_int_codes.shape[1],
                                               numpy_int_codes.shape[2],
                                               -1])
    numpy_codes = numpy_int_codes.astype(np.float32) * 2.0 - 1.0

  with tf.Graph().as_default() as graph:
    # TF tensor to hold the binary codes to losslessly compress.
    batch_size = 1
    codes = tf.placeholder(tf.float32, shape=numpy_codes.shape)

    # Create the entropy coder model.
    global_step = None
    optimizer = None
    model = model_factory.GetModelRegistry().CreateModel(FLAGS.model)
    model_config_string = config_helper.GetConfigString(FLAGS.model_config)
    model.Initialize(global_step, optimizer, model_config_string)
    model.BuildGraph(codes)

    saver = tf.train.Saver(sharded=True,
                           keep_checkpoint_every_n_hours=12.0)

    with tf.Session(graph=graph) as sess:
      # Initialize local variables.
      sess.run(tf.local_variables_initializer())

      # Restore model variables.
      saver.restore(sess, FLAGS.checkpoint)

      tf_tensors = {'code_length': model.average_code_length}
      feed_dict = {codes: numpy_codes}
      np_tensors = sess.run(tf_tensors, feed_dict=feed_dict)

      print('Additional compression ratio: {}'.format(
          np_tensors['code_length']))


if __name__ == '__main__':
  tf.app.run()
compression/entropy_coder/core/entropy_coder_train.py
0 → 100644
# Copyright 2017 The TensorFlow Authors All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
"""Train an entropy coder model."""
import time

import tensorflow as tf

import code_loader
import config_helper

# pylint: disable=unused-import
from entropy_coder.all_models import all_models
# pylint: enable=unused-import
from entropy_coder.model import model_factory

FLAGS = tf.app.flags.FLAGS

# Hardware resources configuration.
tf.app.flags.DEFINE_string('master', '',
                           """Name of the TensorFlow master to use.""")
tf.app.flags.DEFINE_string('train_dir', None,
                           """Directory where to write event logs.""")
tf.app.flags.DEFINE_integer('task', None,
                            """Task id of the replica running the training.""")
tf.app.flags.DEFINE_integer('ps_tasks', 0,
                            """Number of tasks in the ps job.
                            If 0 no ps job is used.""")

# Model selection and configuration.
tf.app.flags.DEFINE_string('model', None,
                           """Underlying encoder model.""")
tf.app.flags.DEFINE_string('model_config', None,
                           """Model config protobuf given as text file.""")

# Training data and parameters configuration.
tf.app.flags.DEFINE_string('input_config', None,
                           """Path to the training input config file.""")
tf.app.flags.DEFINE_string('train_config', None,
                           """Path to the training experiment config file.""")


def train():
  if FLAGS.train_dir is None:
    raise ValueError('Parameter train_dir must be provided')
  if FLAGS.task is None:
    raise ValueError('Parameter task must be provided')
  if FLAGS.model is None:
    raise ValueError('Parameter model must be provided')

  input_config_string = config_helper.GetConfigString(FLAGS.input_config)
  input_config = config_helper.InputConfig(input_config_string)

  # Training parameters.
  train_config_string = config_helper.GetConfigString(FLAGS.train_config)
  train_config = config_helper.TrainConfig(train_config_string)

  batch_size = train_config.batch_size
  initial_learning_rate = train_config.learning_rate
  decay_rate = train_config.decay_rate
  samples_per_decay = train_config.samples_per_decay

  # Parameters for learning-rate decay.
  # The formula is decay_rate ** floor(steps / decay_steps).
  decay_steps = samples_per_decay / batch_size
  decay_steps = max(decay_steps, 1)

  first_code = code_loader.ReadFirstCode(input_config.data)
  first_code_height = (
      first_code.features.feature['code_shape'].int64_list.value[0])
  first_code_width = (
      first_code.features.feature['code_shape'].int64_list.value[1])
  max_bit_depth = (
      first_code.features.feature['code_shape'].int64_list.value[2])
  print('Maximum code depth: {}'.format(max_bit_depth))

  with tf.Graph().as_default():
    ps_ops = ["Variable", "VariableV2", "AutoReloadVariable", "VarHandleOp"]
    with tf.device(tf.train.replica_device_setter(FLAGS.ps_tasks,
                                                  ps_ops=ps_ops)):
      codes = code_loader.LoadBinaryCode(input_config=input_config,
                                         batch_size=batch_size)
      if input_config.unique_code_size:
        print('Input code size: {} x {}'.format(first_code_height,
                                                first_code_width))
        codes.set_shape([batch_size, first_code_height, first_code_width,
                         max_bit_depth])
      else:
        codes.set_shape([batch_size, None, None, max_bit_depth])
      codes_effective_shape = tf.shape(codes)

      global_step = tf.contrib.framework.create_global_step()

      # Apply learning-rate decay.
      learning_rate = tf.train.exponential_decay(
          learning_rate=initial_learning_rate,
          global_step=global_step,
          decay_steps=decay_steps,
          decay_rate=decay_rate,
          staircase=True)
      tf.contrib.deprecated.scalar_summary('Learning Rate', learning_rate)
      optimizer = tf.train.AdamOptimizer(learning_rate=learning_rate,
                                         epsilon=1.0)

      # Create the entropy coder model.
      model = model_factory.GetModelRegistry().CreateModel(FLAGS.model)
      model_config_string = config_helper.GetConfigString(FLAGS.model_config)
      model.Initialize(global_step, optimizer, model_config_string)
      model.BuildGraph(codes)

      summary_op = tf.summary.merge_all()

      # Verify that the model can actually be trained.
      if model.train_op is None:
        raise ValueError('Input model {} is not trainable'.format(FLAGS.model))

      # We disable the summary thread run by Supervisor class by passing
      # summary_op=None. We still pass save_summaries_secs because it is
      # used by the global step counter thread.
      is_chief = (FLAGS.task == 0)
      sv = tf.train.Supervisor(logdir=FLAGS.train_dir,
                               is_chief=is_chief,
                               global_step=global_step,
                               # saver=model.saver,
                               summary_op=None,
                               save_summaries_secs=120,
                               save_model_secs=600,
                               recovery_wait_secs=30)
      sess = sv.PrepareSession(FLAGS.master)
      sv.StartQueueRunners(sess)

      step = sess.run(global_step)
      print('Trainer initial step: {}.'.format(step))

      # Once everything has been set up properly, save the configs.
      if is_chief:
        config_helper.SaveConfig(FLAGS.train_dir, 'input_config.json',
                                 input_config_string)
        config_helper.SaveConfig(FLAGS.train_dir, 'model_config.json',
                                 model_config_string)
        config_helper.SaveConfig(FLAGS.train_dir, 'train_config.json',
                                 train_config_string)

      # Train the model.
      next_summary_time = time.time()
      while not sv.ShouldStop():
        feed_dict = None

        # Once in a while, update the summaries on the chief worker.
        if is_chief and next_summary_time < time.time():
          summary_str = sess.run(summary_op, feed_dict=feed_dict)
          sv.SummaryComputed(sess, summary_str)
          next_summary_time = time.time() + sv.save_summaries_secs
        else:
          tf_tensors = {
              'train': model.train_op,
              'code_length': model.average_code_length
          }
          np_tensors = sess.run(tf_tensors, feed_dict=feed_dict)
          print(np_tensors['code_length'])

      sv.Stop()


def main(argv=None):  # pylint: disable=unused-argument
  train()


if __name__ == '__main__':
  tf.app.run()
compression/entropy_coder/dataset/gen_synthetic_dataset.py
0 → 100644
# Copyright 2017 The TensorFlow Authors All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
"""Generate a synthetic dataset."""
import os

import numpy as np
import tensorflow as tf

import synthetic_model

FLAGS = tf.app.flags.FLAGS

tf.app.flags.DEFINE_string('dataset_dir', None,
                           """Directory where to write the dataset and the
                           configs.""")
tf.app.flags.DEFINE_integer('count', 1000,
                            """Number of samples to generate.""")


def int64_feature(values):
  """Returns a TF-Feature of int64s.

  Args:
    values: A scalar or list of values.

  Returns:
    A TF-Feature.
  """
  if not isinstance(values, (tuple, list)):
    values = [values]
  return tf.train.Feature(int64_list=tf.train.Int64List(value=values))


def float_feature(values):
  """Returns a TF-Feature of floats.

  Args:
    values: A scalar or list of values.

  Returns:
    A TF-Feature.
  """
  if not isinstance(values, (tuple, list)):
    values = [values]
  return tf.train.Feature(float_list=tf.train.FloatList(value=values))


def AddToTFRecord(code, tfrecord_writer):
  example = tf.train.Example(features=tf.train.Features(feature={
      'code_shape': int64_feature(code.shape),
      'code': float_feature(code.flatten().tolist()),
  }))
  tfrecord_writer.write(example.SerializeToString())


def GenerateDataset(filename, count, code_shape):
  with tf.python_io.TFRecordWriter(filename) as tfrecord_writer:
    for _ in xrange(count):
      code = synthetic_model.GenerateSingleCode(code_shape)
      # Convert {0,1} codes to {-1,+1} codes.
      code = 2.0 * code - 1.0
      AddToTFRecord(code, tfrecord_writer)


def main(argv=None):  # pylint: disable=unused-argument
  GenerateDataset(os.path.join(FLAGS.dataset_dir + '/synthetic_dataset'),
                  FLAGS.count,
                  [35, 48, 8])


if __name__ == '__main__':
  tf.app.run()
compression/entropy_coder/dataset/gen_synthetic_single.py
0 → 100644
# Copyright 2016 The TensorFlow Authors All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
"""Generate a single synthetic sample."""
import io
import os

import numpy as np
import tensorflow as tf

import synthetic_model

FLAGS = tf.app.flags.FLAGS

tf.app.flags.DEFINE_string('sample_filename', None,
                           """Output file to store the generated binary
                           code.""")


def GenerateSample(filename, code_shape, layer_depth):
  # {0, +1} binary codes.
  # No conversion since the output file is expected to store
  # codes using {0, +1} codes (and not {-1, +1}).
  code = synthetic_model.GenerateSingleCode(code_shape)
  code = np.round(code)

  # Reformat the code so as to be compatible with what is generated
  # by the image encoder.
  # The image encoder generates a tensor of size:
  # iteration_count x batch_size x height x width x iteration_depth.
  # Here: batch_size = 1
  if code_shape[-1] % layer_depth != 0:
    raise ValueError('Number of layers is not an integer')
  height = code_shape[0]
  width = code_shape[1]
  code = code.reshape([1, height, width, -1, layer_depth])
  code = np.transpose(code, [3, 0, 1, 2, 4])

  int_codes = code.astype(np.int8)
  exported_codes = np.packbits(int_codes.reshape(-1))

  output = io.BytesIO()
  np.savez_compressed(output, shape=int_codes.shape, codes=exported_codes)
  with tf.gfile.FastGFile(filename, 'wb') as code_file:
    code_file.write(output.getvalue())


def main(argv=None):  # pylint: disable=unused-argument
  # Note: the height and the width are different from the training dataset.
  # The main purpose is to show that the entropy coder model is fully
  # convolutional and can be used on any image size.
  layer_depth = 2
  GenerateSample(FLAGS.sample_filename, [31, 36, 8], layer_depth)


if __name__ == '__main__':
  tf.app.run()
compression/entropy_coder/dataset/synthetic_model.py
0 → 100644
# Copyright 2016 The TensorFlow Authors All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
"""Binary code sample generator."""
import numpy as np

_CRC_LINE = [
    [0, 1, 0],
    [1, 1, 0],
    [1, 0, 0]
]

_CRC_DEPTH = [1, 1, 0, 1]


def ComputeLineCrc(code, width, y, x, d):
  crc = 0
  for dy in xrange(len(_CRC_LINE)):
    i = y - 1 - dy
    if i < 0:
      continue
    for dx in xrange(len(_CRC_LINE[dy])):
      j = x - 2 + dx
      if j < 0 or j >= width:
        continue
      crc += 1 if (code[i, j, d] != _CRC_LINE[dy][dx]) else 0
  return crc


def ComputeDepthCrc(code, y, x, d):
  crc = 0
  for delta in xrange(len(_CRC_DEPTH)):
    k = d - 1 - delta
    if k < 0:
      continue
    crc += 1 if (code[y, x, k] != _CRC_DEPTH[delta]) else 0
  return crc


def GenerateSingleCode(code_shape):
  code = np.zeros(code_shape, dtype=np.int)

  keep_value_proba = 0.8

  height = code_shape[0]
  width = code_shape[1]
  depth = code_shape[2]

  for d in xrange(depth):
    for y in xrange(height):
      for x in xrange(width):
        v1 = ComputeLineCrc(code, width, y, x, d)
        v2 = ComputeDepthCrc(code, y, x, d)
        v = 1 if (v1 + v2 >= 6) else 0
        if np.random.rand() < keep_value_proba:
          code[y, x, d] = v
        else:
          code[y, x, d] = 1 - v

  return code
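
A quick sketch of exercising the generator (the shape matches the one used by
gen_synthetic_dataset.py above):

    import synthetic_model

    # Generate one binary code tensor; entries are in {0, 1}.
    code = synthetic_model.GenerateSingleCode([35, 48, 8])
    print(code.shape)                              # (35, 48, 8)
    print(set(code.flatten().tolist()) <= {0, 1})  # True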
compression/entropy_coder/lib/__init__.py
0 → 100644
compression/entropy_coder/lib/block_base.py
0 → 100644
# Copyright 2017 The TensorFlow Authors All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
"""Base class for Tensorflow building blocks."""
import collections
import contextlib
import itertools

import tensorflow as tf

_block_stacks = collections.defaultdict(lambda: [])


class BlockBase(object):
  """Base class for transform wrappers of Tensorflow.

  To implement a Tensorflow transform block, inherit this class.

  1. To create a variable, use NewVar() method. Do not overload this method!
     For example, use as follows.
       a_variable = self.NewVar(initial_value)
  2. All Tensorflow-related code must be done inside 'with self._BlockScope().'
     Otherwise, name scoping and block hierarchy will not work. An exception
     is _Apply() method, which is already called inside the context manager
     by __call__() method.
  3. Override and implement _Apply() method. This method is called by
     __call__() method.

  The users would use blocks like the following.
    nn1 = NN(128, bias=Bias(0), act=tf.nn.relu)
    y = nn1(x)

  Some things to consider.
  - Use lazy-initialization if possible. That is, initialize at first Apply()
    rather than at __init__().

  Note: if needed, the variables can be created on a specific parameter
  server by creating blocks in a scope like:
    with g.device(device):
      linear = Linear(...)
  """

  def __init__(self, name):
    self._variables = []
    self._subblocks = []
    self._called = False

    # Intentionally distinguishing empty string and None.
    # If name is an empty string, then do not use name scope.
    self.name = name if name is not None else self.__class__.__name__
    self._graph = tf.get_default_graph()

    if self.name:
      # Capture the scope string at the init time.
      with self._graph.name_scope(self.name) as scope:
        self._scope_str = scope
    else:
      self._scope_str = ''

    # Maintain hierarchy structure of blocks.
    self._stack = _block_stacks[self._graph]
    if self.__class__ is BlockBase:
      # This code is only executed to create the root, which starts in the
      # initialized state.
      assert not self._stack
      self._parent = None
      self._called = True  # The root is initialized.
      return

    # Create a fake root if a root is not already present.
    if not self._stack:
      self._stack.append(BlockBase('NoOpRoot'))

    self._parent = self._stack[-1]
    self._parent._subblocks.append(self)  # pylint: disable=protected-access

  def __repr__(self):
    return '"{}" ({})'.format(self._scope_str, self.__class__.__name__)

  @contextlib.contextmanager
  def _OptionalNameScope(self, scope_str):
    if scope_str:
      with self._graph.name_scope(scope_str):
        yield
    else:
      yield

  @contextlib.contextmanager
  def _BlockScope(self):
    """Context manager that handles graph, namescope, and nested blocks."""
    self._stack.append(self)
    try:
      with self._graph.as_default():
        with self._OptionalNameScope(self._scope_str):
          yield self
    finally:
      # Pop from the stack no matter exception is raised or not.
      # The following line is executed when leaving 'with self._BlockScope()'
      self._stack.pop()

  def __call__(self, *args, **kwargs):
    assert self._stack is _block_stacks[self._graph]
    with self._BlockScope():
      ret = self._Apply(*args, **kwargs)
    self._called = True
    return ret

  def _Apply(self, *args, **kwargs):
    """Implementation of __call__()."""
    raise NotImplementedError()

  # Redirect all variable creation to this single function, so that we can
  # switch to better variable creation scheme.
  def NewVar(self, value, **kwargs):
    """Creates a new variable.

    This function creates a variable, then returns a local copy created by
    Identity operation. To get the Variable class object, use LookupRef()
    method.

    Note that each time Variable class object is used as an input to an
    operation, Tensorflow will create a new Send/Recv pair. This hurts
    performance.

    If not for assign operations, use the local copy returned by this method.

    Args:
      value: Initialization value of the variable. The shape and the data type
        of the variable is determined by this initial value.
      **kwargs: Extra named arguments passed to Variable.__init__().

    Returns:
      A local copy of the new variable.
    """
    v = tf.Variable(value, **kwargs)
    self._variables.append(v)
    return v

  @property
  def initialized(self):
    """Returns bool if the block is initialized.

    By default, BlockBase assumes that a block is initialized when __call__()
    is executed for the first time. If this is an incorrect assumption for
    some subclasses, override this property in those subclasses.

    Returns:
      True if initialized, False otherwise.
    """
    return self._called

  def AssertInitialized(self):
    """Asserts initialized property."""
    if not self.initialized:
      raise RuntimeError('{} has not been initialized.'.format(self))

  def VariableList(self):
    """Returns the list of all tensorflow variables used inside this block."""
    variables = list(itertools.chain(
        itertools.chain.from_iterable(
            t.VariableList() for t in self._subblocks),
        self._VariableList()))
    return variables

  def _VariableList(self):
    """Returns the list of all tensorflow variables owned by this block."""
    self.AssertInitialized()
    return self._variables

  def CreateWeightLoss(self):
    """Returns L2 loss list of (almost) all variables used inside this block.

    When this method needs to be overridden, there are two choices.

    1. Override CreateWeightLoss() to change the weight loss of all variables
       that belong to this block, both directly and indirectly.
    2. Override _CreateWeightLoss() to change the weight loss of all
       variables that directly belong to this block but not to the sub-blocks.

    Returns:
      A Tensor object or None.
    """
    losses = list(itertools.chain(
        itertools.chain.from_iterable(
            t.CreateWeightLoss() for t in self._subblocks),
        self._CreateWeightLoss()))
    return losses

  def _CreateWeightLoss(self):
    """Returns weight loss list of variables that belong to this block."""
    self.AssertInitialized()
    with self._BlockScope():
      return [tf.nn.l2_loss(v) for v in self._variables]

  def CreateUpdateOps(self):
    """Creates update operations for this block and its sub-blocks."""
    ops = list(itertools.chain(
        itertools.chain.from_iterable(
            t.CreateUpdateOps() for t in self._subblocks),
        self._CreateUpdateOps()))
    return ops

  def _CreateUpdateOps(self):
    """Creates update operations for this block."""
    self.AssertInitialized()
    return []

  def MarkAsNonTrainable(self):
    """Mark all the variables of this block as non-trainable.

    All the variables owned directly or indirectly (through subblocks) are
    marked as non trainable.

    This function along with CheckpointInitOp can be used to load a
    pretrained model that consists in only one part of the whole graph.
    """
    assert self._called

    all_variables = self.VariableList()
    collection = tf.get_collection_ref(tf.GraphKeys.TRAINABLE_VARIABLES)
    for v in all_variables:
      if v in collection:
        collection.remove(v)


def CreateWeightLoss():
  """Returns all weight losses from the blocks in the graph."""
  stack = _block_stacks[tf.get_default_graph()]
  if not stack:
    return []
  return stack[0].CreateWeightLoss()


def CreateBlockUpdates():
  """Combines all updates from the blocks in the graph."""
  stack = _block_stacks[tf.get_default_graph()]
  if not stack:
    return []
  return stack[0].CreateUpdateOps()
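
Following the class docstring, here is a minimal sketch of a concrete block
(this Bias block is illustrative only, not part of this commit):

    import tensorflow as tf

    import block_base

    class Bias(block_base.BlockBase):
      """Illustrative block that adds a learned bias to its input."""

      def __init__(self, initial_value, name=None):
        super(Bias, self).__init__(name)
        self._initial_value = initial_value
        self._bias = None

      def _Apply(self, x):
        # Lazy initialization: create the variable on the first call.
        if self._bias is None:
          depth = int(x.get_shape()[-1])
          self._bias = self.NewVar(
              tf.fill([depth], float(self._initial_value)))
        return x + self._bias

    bias = Bias(0.0)
    y = bias(tf.zeros([4, 16]))  # variables are created under scope "Bias"
    print(bias.VariableList())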
compression/entropy_coder/lib/block_util.py
0 → 100644
# Copyright 2017 The TensorFlow Authors All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
"""Utility functions for blocks."""
from __future__ import division
from __future__ import unicode_literals

import math

import numpy as np
import tensorflow as tf


class RsqrtInitializer(object):
  """Gaussian initializer with standard deviation 1/sqrt(n).

  Note that tf.truncated_normal is used internally. Therefore any random
  sample outside two-sigma will be discarded and re-sampled.
  """

  def __init__(self, dims=(0,), **kwargs):
    """Creates an initializer.

    Args:
      dims: Dimension(s) index to compute standard deviation:
        1.0 / sqrt(product(shape[dims]))
      **kwargs: Extra keyword arguments to pass to tf.truncated_normal.
    """
    if isinstance(dims, (int, long)):
      self._dims = [dims]
    else:
      self._dims = dims
    self._kwargs = kwargs

  def __call__(self, shape, dtype):
    stddev = 1.0 / np.sqrt(np.prod([shape[x] for x in self._dims]))
    return tf.truncated_normal(
        shape=shape, dtype=dtype, stddev=stddev, **self._kwargs)


class RectifierInitializer(object):
  """Gaussian initializer with standard deviation sqrt(2/fan_in).

  Note that tf.random_normal is used internally to ensure the expected
  weight distribution. This is intended to be used with ReLU activations,
  especially in ResNets.

  For details please refer to:
  Delving Deep into Rectifiers: Surpassing Human-Level Performance on
  ImageNet Classification
  """

  def __init__(self, dims=(0,), scale=2.0, **kwargs):
    """Creates an initializer.

    Args:
      dims: Dimension(s) index to compute standard deviation:
        sqrt(scale / product(shape[dims]))
      scale: A constant scaling for the initialization used as
        sqrt(scale / product(shape[dims])).
      **kwargs: Extra keyword arguments to pass to tf.random_normal.
    """
    if isinstance(dims, (int, long)):
      self._dims = [dims]
    else:
      self._dims = dims
    self._kwargs = kwargs
    self._scale = scale

  def __call__(self, shape, dtype):
    stddev = np.sqrt(self._scale / np.prod([shape[x] for x in self._dims]))
    return tf.random_normal(
        shape=shape, dtype=dtype, stddev=stddev, **self._kwargs)


class GaussianInitializer(object):
  """Gaussian initializer with a given standard deviation.

  Note that tf.truncated_normal is used internally. Therefore any random
  sample outside two-sigma will be discarded and re-sampled.
  """

  def __init__(self, stddev=1.0):
    self._stddev = stddev

  def __call__(self, shape, dtype):
    return tf.truncated_normal(
        shape=shape, dtype=dtype, stddev=self._stddev)
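
A short sketch of plugging one of these initializers into a weight variable
(the shape values are arbitrary):

    import tensorflow as tf

    import block_util

    # Weight matrix [fan_in, fan_out]; stddev = 1/sqrt(fan_in) since dims=(0,).
    init = block_util.RsqrtInitializer(dims=(0,))
    w = tf.Variable(init([128, 64], tf.float32))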
compression/entropy_coder/lib/blocks.py
0 → 100644
# Copyright 2017 The TensorFlow Authors All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
from block_base import *
from block_util import *
from blocks_binarizer import *
from blocks_entropy_coding import *
from blocks_lstm import *
from blocks_masked_conv2d import *
from blocks_masked_conv2d_lstm import *
from blocks_operator import *
from blocks_std import *