- 31 Jan, 2018 2 commits
Mark Daoust authored
The `sparse` version is more efficient anyway. I'm returning labels with shape [1] instead of [] because `tf.metrics.accuracy` fails otherwise.
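The `sparse` loss in question is presumably `tf.losses.sparse_softmax_cross_entropy`, which consumes integer class indices directly instead of one-hot vectors. A minimal sketch of the difference, assuming the TF 1.x losses API (the tensors here are illustrative, not the repository's code):

```python
import tensorflow as tf

# Illustrative logits and integer class labels for a batch of 4 MNIST examples.
logits = tf.random_normal([4, 10])
labels = tf.constant([3, 0, 7, 9], dtype=tf.int32)

# Dense variant: needs the labels expanded to one-hot vectors first.
dense_loss = tf.losses.softmax_cross_entropy(
    onehot_labels=tf.one_hot(labels, depth=10), logits=logits)

# Sparse variant: takes the integer class indices directly,
# skipping the one-hot conversion.
sparse_loss = tf.losses.sparse_softmax_cross_entropy(
    labels=labels, logits=logits)
```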
Neal Wu authored
- 26 Jan, 2018 2 commits
Karmel Allison authored
Add multi-GPU option to MNIST
Mark Daoust authored
The `sparse` version is more efficient anyway. I'm returning labels with shape [1] instead of [] because `tf.metrics.accuracy` fails otherwise.
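On the shape detail: the commit keeps each label at shape [1] rather than a bare scalar. A minimal sketch of shapes that work together, assuming the TF 1.x metrics API with illustrative values:

```python
import tensorflow as tf

# Label kept with an explicit [1] dimension, matching the shape of the
# argmax prediction for a single example.
labels = tf.constant([7], dtype=tf.int64)
logits = tf.constant([[0.0, 0.1, 0.0, 0.0, 0.0, 0.0, 0.0, 0.9, 0.0, 0.0]])
predictions = tf.argmax(logits, axis=1)  # shape [1]

accuracy, update_op = tf.metrics.accuracy(labels=labels, predictions=predictions)

with tf.Session() as sess:
    # The accuracy metric accumulates its counts in local variables.
    sess.run(tf.local_variables_initializer())
    sess.run(update_op)
    print(sess.run(accuracy))  # 1.0 for this single correct prediction
```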
- 24 Jan, 2018 1 commit
- 22 Jan, 2018 1 commit
Karmel Allison authored
Add a multi-GPU flag to MNIST and allow setting a replicated optimizer and model_fn
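A minimal sketch of what a replicated optimizer and model_fn can look like, assuming the `tf.contrib.estimator.replicate_model_fn` / `TowerOptimizer` API of that TF 1.x era; `my_model_fn`, the single dense layer, and the optimizer choice are illustrative rather than the repository's actual code:

```python
import tensorflow as tf

def my_model_fn(features, labels, mode):
    logits = tf.layers.dense(features, units=10)
    loss = tf.losses.sparse_softmax_cross_entropy(labels=labels, logits=logits)
    # TowerOptimizer aggregates the gradients computed on each GPU tower
    # before applying a single update.
    optimizer = tf.contrib.estimator.TowerOptimizer(
        tf.train.GradientDescentOptimizer(learning_rate=0.01))
    train_op = optimizer.minimize(loss, global_step=tf.train.get_global_step())
    return tf.estimator.EstimatorSpec(mode=mode, loss=loss, train_op=train_op)

# replicate_model_fn builds one copy (tower) of the model per visible GPU
# and splits each input batch across the towers.
estimator = tf.estimator.Estimator(
    model_fn=tf.contrib.estimator.replicate_model_fn(my_model_fn))
```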
- 20 Dec, 2017 1 commit
Asim Shankar authored
- 19 Dec, 2017 1 commit
Asim Shankar authored
- Use the object-oriented tf.layers API instead of the functional one. The object-oriented API is particularly useful when using the model with eager execution.
- Update the unit test to train, evaluate, and predict using the model.
- Add a micro-benchmark for measuring step time. The parameters (batch_size, num_steps, etc.) have NOT been tuned; the purpose of this code is mostly to illustrate how model benchmarks may be written.

These changes are made as a step towards consolidating model definitions for different TensorFlow features (like eager execution and support for TPUs in https://github.com/tensorflow/tensorflow/tree/master/tensorflow/contrib/eager/python/examples/mnist and https://github.com/tensorflow/tpu-demos/tree/master/cloud_tpu/models/mnist).
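A minimal sketch of the functional vs. object-oriented `tf.layers` styles described above, assuming TF 1.x (the layer sizes are illustrative):

```python
import tensorflow as tf

inputs = tf.placeholder(tf.float32, shape=[None, 784])

# Functional style: variables are created implicitly at the call site and
# tied to the surrounding graph/variable scope.
hidden_functional = tf.layers.dense(inputs, units=128, activation=tf.nn.relu)

# Object-oriented style: the Dense object owns its variables, so the same
# weights can be reused across calls and the layer carries over cleanly to
# eager execution.
dense = tf.layers.Dense(units=128, activation=tf.nn.relu)
hidden_object = dense(inputs)
hidden_again = dense(inputs)  # second call reuses the layer's existing weights
```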
- 27 Oct, 2017 1 commit
Neal Wu authored
- 25 Oct, 2017 1 commit
Neal Wu authored
- 04 Oct, 2017 1 commit
Neal Wu authored
- 28 Sep, 2017 1 commit
Neal Wu authored
- 21 Sep, 2017 1 commit
Neal Wu authored