Unverified Commit 3653ef1b authored by David Andersen, committed by GitHub

Merge pull request #2603 from charlesreid1/master

Consistent, fixed-width printing from adversarial crypto example.
parents 6827532c a0d771b6
@@ -16,16 +16,16 @@ Cryptography"](https://arxiv.org/abs/1610.06918).
 > encryption and decryption, and also how to apply these operations
 > selectively in order to meet confidentiality goals.
-This code allows you to train an encoder/decoder/adversary triplet
+This code allows you to train encoder/decoder/adversary network triplets
 and evaluate their effectiveness on randomly generated input and key
 pairs.
 ## Prerequisites
 The only software requirements for running the encoder and decoder is having
-Tensorflow installed.
+TensorFlow installed.
-Requires Tensorflow r0.12 or later.
+Requires TensorFlow r0.12 or later.
 ## Training and evaluating
@@ -49,8 +49,8 @@ of two. In the version in the paper, there was a nonlinear unit
 after the fully-connected layer; that nonlinear has been removed
 here. These changes improve the robustness of training. The
 initializer for the convolution layers has switched to the
-tf.contrib.layers default of xavier_initializer instead of
-a simpler truncated_normal.
+`tf.contrib.layers` default of `xavier_initializer` instead of
+a simpler `truncated_normal`.
 ## Contact information
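For readers skimming the README change above: the initializer switch it describes would look roughly like the sketch below in TF-1.x-era `tf.contrib.layers` code. The layer depth and kernel size here are placeholders for illustration, not the values this example actually uses.

```python
import tensorflow as tf

# Sketch of the initializer switch described in the README hunk above.
# The output depth and kernel size below are placeholder values, not the
# configuration this example actually uses.

# Previously: a simple truncated-normal initializer for the conv weights.
old_init = tf.truncated_normal_initializer(stddev=0.1)

# Now: the tf.contrib.layers default, Xavier (Glorot) initialization.
new_init = tf.contrib.layers.xavier_initializer()

def conv_block(inputs):
  """One convolution whose weights use the Xavier initializer."""
  return tf.contrib.layers.conv2d(
      inputs,
      num_outputs=2,
      kernel_size=[1, 4],
      weights_initializer=new_init)  # previously: weights_initializer=old_init
```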
@@ -117,7 +117,7 @@ class AdversarialCrypto(object):
     return in_m, in_k
   def model(self, collection, message, key=None):
-    """The model for Alice, Bob, and Eve. If key=None, the first FC layer
+    """The model for Alice, Bob, and Eve. If key=None, the first fully connected layer
     takes only the message as inputs. Otherwise, it uses both the key
     and the message.
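The docstring edit above is cosmetic, but the behaviour it describes is worth spelling out: Alice and Bob receive a key, so their first fully connected layer sees the message and key concatenated, while Eve is built with key=None and sees the message alone. A minimal sketch of that branching, with a placeholder layer width and activation rather than the example's actual choices:

```python
import tensorflow as tf

def first_layer(message, key=None):
  """Sketch of the behaviour the docstring describes: the first fully
  connected layer sees message and key concatenated when a key is given,
  and the message alone when key=None. The layer width and activation
  below are illustrative placeholders, not the example's actual choices."""
  if key is not None:
    # TF 1.x argument order; under r0.12 this was tf.concat(1, [message, key]).
    inputs = tf.concat([message, key], axis=1)
  else:
    inputs = message
  return tf.contrib.layers.fully_connected(inputs, 32, activation_fn=tf.nn.tanh)
```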
@@ -206,7 +206,7 @@ def doeval(s, ac, n, itercount):
     itercount: Iteration count label for logging.
   Returns:
-    Bob and eve's loss, as a percent of bits incorrect.
+    Bob and Eve's loss, as a percent of bits incorrect.
   """
   bob_loss_accum = 0
@@ -217,7 +217,7 @@ def doeval(s, ac, n, itercount):
     eve_loss_accum += el
   bob_loss_percent = bob_loss_accum / (n * FLAGS.batch_size)
   eve_loss_percent = eve_loss_accum / (n * FLAGS.batch_size)
-  print('%d %.2f %.2f' % (itercount, bob_loss_percent, eve_loss_percent))
+  print('%10d\t%20.2f\t%20.2f'%(itercount, bob_loss_percent, eve_loss_percent))
   sys.stdout.flush()
   return bob_loss_percent, eve_loss_percent
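The point of this pull request is that the per-evaluation rows printed here and the header row printed in train_and_evaluate (next hunk) share the same fixed column widths, so the log lines up. A self-contained sketch of the same formatting, with made-up numbers used only to show the alignment:

```python
# Same fixed-width format as above: a 10-character integer column and two
# 20-character float columns, separated by tabs. The values are made-up
# placeholders, purely to demonstrate that the header and rows align.
print('# %10s\t%20s\t%20s' % ('Iter', 'Bob_Recon_Error', 'Eve_Recon_Error'))
for itercount, bob_loss, eve_loss in [(0, 3.74, 4.02), (2000, 0.51, 3.95)]:
  print('%10d\t%20.2f\t%20.2f' % (itercount, bob_loss, eve_loss))
```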
@@ -245,7 +245,7 @@ def train_and_evaluate():
   with tf.Session() as s:
     s.run(init)
     print('# Batch size: ', FLAGS.batch_size)
-    print('# Iter Bob_Recon_Error Eve_Recon_Error')
+    print('# %10s\t%20s\t%20s'%("Iter","Bob_Recon_Error","Eve_Recon_Error"))
     if train_until_thresh(s, ac):
       for _ in xrange(EVE_EXTRA_ROUNDS):