ModelZoo / ResNet50_tensorflow / Commits

Commit 3653ef1b (unverified)
Authored Nov 10, 2017 by David Andersen; committed by GitHub on Nov 10, 2017
Parents: 6827532c, a0d771b6

Merge pull request #2603 from charlesreid1/master

Consistent, fixed-width printing from adversarial crypto example.

Showing 2 changed files with 9 additions and 9 deletions (+9 -9):

  research/adversarial_crypto/README.md      +5 -5
  research/adversarial_crypto/train_eval.py  +4 -4
research/adversarial_crypto/README.md (view file @ 3653ef1b)

...
@@ -16,16 +16,16 @@ Cryptography"](https://arxiv.org/abs/1610.06918).
 > encryption and decryption, and also how to apply these operations
 > selectively in order to meet confidentiality goals.
 
-This code allows you to train an encoder/decoder/adversary triplet
+This code allows you to train encoder/decoder/adversary network triplets
 and evaluate their effectiveness on randomly generated input and key
 pairs.
 
 ## Prerequisites
 
 The only software requirements for running the encoder and decoder is having
-Tensorflow installed.
+TensorFlow installed.
 
-Requires Tensorflow r0.12 or later.
+Requires TensorFlow r0.12 or later.
 
 ## Training and evaluating
...
@@ -49,8 +49,8 @@ of two. In the version in the paper, there was a nonlinear unit
 after the fully-connected layer; that nonlinear has been removed
 here. These changes improve the robustness of training. The
 initializer for the convolution layers has switched to the
-tf.contrib.layers default of xavier_initializer instead of
-a simpler truncated_normal.
+`tf.contrib.layers default` of `xavier_initializer` instead of
+a simpler `truncated_normal`.
 
 ## Contact information
...
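The README hunk above only wraps the initializer names in code formatting; the underlying point is unchanged: the convolution layers now use the tf.contrib.layers default weights initializer (Xavier/Glorot) rather than a plain truncated normal. A minimal, hypothetical sketch of the two initializers side by side (TF 1.x API; the variable names, kernel shape, and stddev below are made up for illustration and are not taken from train_eval.py):

import tensorflow as tf

# Hypothetical convolution-kernel shape, for illustration only.
kernel_shape = [4, 1, 1, 2]

# Default weights initializer used by tf.contrib.layers (Xavier/Glorot).
w_xavier = tf.get_variable(
    'w_xavier', kernel_shape,
    initializer=tf.contrib.layers.xavier_initializer())

# The simpler alternative the README mentions.
w_trunc = tf.get_variable(
    'w_trunc', kernel_shape,
    initializer=tf.truncated_normal_initializer(stddev=0.1))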
research/adversarial_crypto/train_eval.py (view file @ 3653ef1b)

...
@@ -117,7 +117,7 @@ class AdversarialCrypto(object):
     return in_m, in_k
 
   def model(self, collection, message, key=None):
-    """The model for Alice, Bob, and Eve. If key=None, the first FC layer
+    """The model for Alice, Bob, and Eve. If key=None, the first fully connected layer
     takes only the message as inputs. Otherwise, it uses both the key
     and the message.
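The docstring edit above simply spells out "FC layer" as "fully connected layer"; the behavior it describes is that the key, when present, is fed into the first fully connected layer together with the message. A rough sketch of that pattern, not the repository's actual implementation, assuming the TF 1.x tf.contrib API and illustrative names:

import tensorflow as tf

def first_fc(message, key=None, num_outputs=32):
  # If a key is given, concatenate it with the message along the feature
  # axis so the first fully connected layer sees both; otherwise the layer
  # takes only the message. Names and sizes here are illustrative only.
  inputs = message if key is None else tf.concat([message, key], axis=1)
  return tf.contrib.layers.fully_connected(inputs, num_outputs)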
...
@@ -206,7 +206,7 @@ def doeval(s, ac, n, itercount):
     itercount: Iteration count label for logging.
 
   Returns:
-    Bob and eve's loss, as a percent of bits incorrect.
+    Bob and Eve's loss, as a percent of bits incorrect.
   """
   bob_loss_accum = 0
...
@@ -217,7 +217,7 @@ def doeval(s, ac, n, itercount):
     eve_loss_accum += el
   bob_loss_percent = bob_loss_accum / (n * FLAGS.batch_size)
   eve_loss_percent = eve_loss_accum / (n * FLAGS.batch_size)
-  print('%d %.2f %.2f' % (itercount, bob_loss_percent, eve_loss_percent))
+  print('%10d\t%20.2f\t%20.2f' % (itercount, bob_loss_percent, eve_loss_percent))
   sys.stdout.flush()
   return bob_loss_percent, eve_loss_percent
...
@@ -245,7 +245,7 @@ def train_and_evaluate():
   with tf.Session() as s:
     s.run(init)
     print('# Batch size: ', FLAGS.batch_size)
-    print('# Iter Bob_Recon_Error Eve_Recon_Error')
+    print('# %10s\t%20s\t%20s' % ("Iter", "Bob_Recon_Error", "Eve_Recon_Error"))
     if train_until_thresh(s, ac):
       for _ in xrange(EVE_EXTRA_ROUNDS):
...
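Taken together, the two print changes above are what give the commit its title: the header line and every evaluation line now share the same tab-separated, right-aligned, fixed-width columns, and the header keeps the leading '#' that already prefixes the '# Batch size:' line. A minimal standalone sketch of the resulting output format; the iteration counts and error values below are made-up sample numbers, not results from the model:

# '%10s'/'%10d' right-align the iteration column to 10 characters;
# '%20s'/'%20.2f' right-align the two error columns to 20 characters,
# so the header and the data rows line up column for column.
print('# %10s\t%20s\t%20s' % ("Iter", "Bob_Recon_Error", "Eve_Recon_Error"))
for itercount, bob_err, eve_err in [(2000, 3.17, 7.62), (4000, 0.02, 7.51)]:
  print('%10d\t%20.2f\t%20.2f' % (itercount, bob_err, eve_err))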