Commit 93f77b38 authored by Neal Wu

README updates for adversarial_crypto

parent e89f891e
@@ -11,8 +11,8 @@ running TensorFlow 0.12 or earlier, please
## Models
-- [adversarial_text](adversarial_text): semi-supervised sequence learning with
-  adversarial training.
+- [adversarial_crypto](adversarial_crypto): protecting communications with adversarial neural cryptography.
+- [adversarial_text](adversarial_text): semi-supervised sequence learning with adversarial training.
- [attention_ocr](attention_ocr): a model for real-world image text extraction.
- [autoencoder](autoencoder): various autoencoders.
- [cognitive_mapping_and_planning](cognitive_mapping_and_planning): implementation of a spatial memory based mapping and planning architecture for visual navigation.
@@ -4,15 +4,15 @@ This is a slightly-updated model used for the paper
["Learning to Protect Communications with Adversarial Neural ["Learning to Protect Communications with Adversarial Neural
Cryptography"](https://arxiv.org/abs/1610.06918). Cryptography"](https://arxiv.org/abs/1610.06918).
> We ask whether neural networks can learn to use secret keys to protect > We ask whether neural networks can learn to use secret keys to protect
> information from other neural networks. Specifically, we focus on ensuring > information from other neural networks. Specifically, we focus on ensuring
> confidentiality properties in a multiagent system, and we specify those > confidentiality properties in a multiagent system, and we specify those
> properties in terms of an adversary. Thus, a system may consist of neural > properties in terms of an adversary. Thus, a system may consist of neural
> networks named Alice and Bob, and we aim to limit what a third neural > networks named Alice and Bob, and we aim to limit what a third neural
> network named Eve learns from eavesdropping on the communication between > network named Eve learns from eavesdropping on the communication between
> Alice and Bob. We do not prescribe specific cryptographic algorithms to > Alice and Bob. We do not prescribe specific cryptographic algorithms to
> these neural networks; instead, we train end-to-end, adversarially. > these neural networks; instead, we train end-to-end, adversarially.
> We demonstrate that the neural networks can learn how to perform forms of > We demonstrate that the neural networks can learn how to perform forms of
> encryption and decryption, and also how to apply these operations > encryption and decryption, and also how to apply these operations
> selectively in order to meet confidentiality goals. > selectively in order to meet confidentiality goals.
@@ -22,7 +22,7 @@ pairs.
## Prerequisites
The only software requirement for running the encoder and decoder is having
TensorFlow installed.
Requires TensorFlow r0.12 or later.
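A quick way to confirm that the installed TensorFlow meets this requirement is to check the version from Python; this is a minimal sketch, not part of the repository:

```python
import tensorflow as tf

# The model needs TensorFlow r0.12 or later; print the installed version.
print(tf.__version__)
```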
@@ -32,8 +32,10 @@ Requires TensorFlow r0.12 or later.
After installing TensorFlow and ensuring that your paths are configured
appropriately:
-python train_eval.py
+```
+python train_eval.py
+```
This will begin training a fresh model. If and when the model becomes
sufficiently well-trained, it will reset the Eve model multiple times
and retrain it from scratch, outputting the accuracy thus obtained
@@ -46,7 +48,7 @@ the paper - the convolutional layer width was reduced by a factor
of two. In the version in the paper, there was a nonlinear unit
after the fully-connected layer; that nonlinearity has been removed
here. These changes improve the robustness of training. The
initializer for the convolution layers has switched to the
tf.contrib.layers default of xavier_initializer instead of
a simpler truncated_normal.
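For illustration only, here is a rough sketch of that initializer difference using the `tf.contrib.layers` API; the layer shapes and parameters below are hypothetical and are not the ones used in `train_eval.py`:

```python
import tensorflow as tf

# Hypothetical input: a batch of length-32 bit vectors arranged as a
# rank-4 tensor so it can feed a "1-D" convolution.
messages = tf.placeholder(tf.float32, [None, 32, 1, 1])

# Current behavior: tf.contrib.layers.conv2d defaults to xavier_initializer
# for its weights, so no explicit initializer is needed.
conv_xavier = tf.contrib.layers.conv2d(
    messages, num_outputs=2, kernel_size=[4, 1], stride=1)

# The configuration described in the paper instead used a simpler
# truncated normal initializer for the convolution weights.
conv_truncated = tf.contrib.layers.conv2d(
    messages, num_outputs=2, kernel_size=[4, 1], stride=1,
    weights_initializer=tf.truncated_normal_initializer(stddev=0.1))
```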