- 17 Feb, 2018 2 commits
  - Asim Shankar authored
  - Asim Shankar authored
- 16 Feb, 2018 4 commits
  - Asim Shankar authored
  - Asim Shankar authored
  - Asim Shankar authored
  - Asim Shankar authored: Add an example showing how to train the MNIST model with eager execution enabled. (This change depends on TensorFlow changes made after the 1.6 release branch was cut, so it requires either a build from source or TensorFlow 1.7+.) A sketch of the pattern appears below.
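
A minimal sketch of the eager-execution training pattern this commit describes, using the TF 1.7-era API. The model architecture and the random stand-in data are illustrative, not the repo's actual code.

```python
import numpy as np
import tensorflow as tf
import tensorflow.contrib.eager as tfe

tfe.enable_eager_execution()

# Illustrative stand-in for the real MNIST data.
images_np = np.random.rand(256, 28, 28).astype(np.float32)
labels_np = np.random.randint(0, 10, size=(256,)).astype(np.int64)
dataset = tf.data.Dataset.from_tensor_slices((images_np, labels_np)).batch(32)

model = tf.keras.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28)),
    tf.keras.layers.Dense(128, activation=tf.nn.relu),
    tf.keras.layers.Dense(10),
])
optimizer = tf.train.GradientDescentOptimizer(learning_rate=0.01)

# With eager execution there is no Session: iterate over the dataset
# directly and compute gradients with a GradientTape.
for images, labels in tfe.Iterator(dataset):
  with tfe.GradientTape() as tape:
    logits = model(images)
    loss = tf.losses.sparse_softmax_cross_entropy(labels=labels, logits=logits)
  grads = tape.gradient(loss, model.variables)
  optimizer.apply_gradients(zip(grads, model.variables))
```
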
- 08 Feb, 2018 2 commits
- 06 Feb, 2018 3 commits
  - Frank Chen authored: The long import path doesn't work for TensorFlow 1.6 because the `.python` package under `cluster_resolver` doesn't exist there. (See the sketch below.)
  - Frank Chen authored
  - Neal Wu authored
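
An illustration of the import-path issue described above, assuming the symbol in question is something like TPUClusterResolver; the exact class depends on the calling code.

```python
# Short path: works on TensorFlow 1.6, where the contrib package
# re-exports its public symbols at the top level.
from tensorflow.contrib.cluster_resolver import TPUClusterResolver

# Long path: fails on TensorFlow 1.6 with an ImportError, because the
# internal `.python` subpackage is not shipped there:
# from tensorflow.contrib.cluster_resolver.python.training import TPUClusterResolver
```
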
- 31 Jan, 2018 3 commits
  - Mark Daoust authored
  - Mark Daoust authored: The `sparse` version is more efficient anyway. I'm returning the labels with shape [1] instead of [] because `tf.metrics.accuracy` fails otherwise. (See the sketch below.)
  - Neal Wu authored
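
A hedged sketch of the pattern this commit describes (shapes and values are illustrative): with integer class labels, the sparse cross-entropy loss takes class indices directly instead of one-hot vectors, and each label keeps shape [1] rather than [].

```python
import tensorflow as tf

labels = tf.constant([[3], [1]], dtype=tf.int64)  # per-example shape [1], not []
logits = tf.random_normal([2, 10])

# Sparse loss: consumes class indices directly; no one-hot encoding needed.
loss = tf.losses.sparse_softmax_cross_entropy(
    labels=tf.squeeze(labels, axis=1), logits=logits)

# tf.metrics.accuracy compares labels against predictions of the same shape.
predictions = tf.argmax(logits, axis=1)
accuracy, update_op = tf.metrics.accuracy(
    labels=tf.squeeze(labels, axis=1), predictions=predictions)
```
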
- 26 Jan, 2018 2 commits
  - Karmel Allison authored: Add multi-GPU option to MNIST.
  - Mark Daoust authored: The `sparse` version is more efficient anyway. I'm returning the labels with shape [1] instead of [] because `tf.metrics.accuracy` fails otherwise.
- 24 Jan, 2018 1 commit
- 22 Jan, 2018 2 commits
  - Neal Wu authored
  - Karmel Allison authored: Add a multi-GPU flag to MNIST and allow setting a replicated optimizer and model_fn. (A sketch of the pattern follows this list.)
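
A hedged sketch of the multi-GPU Estimator pattern of this era, using `tf.contrib.estimator.replicate_model_fn` and `TowerOptimizer` (available in roughly TF 1.5-1.8). The flag name, model_fn body, and sizes are illustrative, not necessarily what the repo uses.

```python
import tensorflow as tf

multi_gpu = True  # illustrative stand-in for the command-line flag

def model_fn(features, labels, mode, params):
  logits = tf.layers.dense(features, units=10)
  loss = tf.losses.sparse_softmax_cross_entropy(labels=labels, logits=logits)
  optimizer = tf.train.GradientDescentOptimizer(learning_rate=0.01)
  if params.get('multi_gpu'):
    # Aggregate gradients across the per-GPU towers.
    optimizer = tf.contrib.estimator.TowerOptimizer(optimizer)
  train_op = optimizer.minimize(
      loss, global_step=tf.train.get_or_create_global_step())
  return tf.estimator.EstimatorSpec(mode=mode, loss=loss, train_op=train_op)

run_model_fn = model_fn
if multi_gpu:
  # Replicate the model_fn across all available GPUs.
  run_model_fn = tf.contrib.estimator.replicate_model_fn(model_fn)

estimator = tf.estimator.Estimator(
    model_fn=run_model_fn, params={'multi_gpu': multi_gpu})
```
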
- 18 Jan, 2018 1 commit
  - Asim Shankar authored
- 12 Jan, 2018 1 commit
  - Mikalai Drabovich authored: Opening gzipped datasets in binary, read-only mode fixes the issue. (See the sketch below.)
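
A minimal sketch of the fix described above; the filename is illustrative. Gzip archives must be opened in binary mode, since decoding them as text corrupts the stream.

```python
import gzip

# 'rb' (binary, read-only) is the mode that works; a text-mode read of
# the MNIST archives is what broke here.
with gzip.open('train-images-idx3-ubyte.gz', 'rb') as f:
    data = f.read()  # decompressed bytes
```
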
- 10 Jan, 2018 1 commit
  - Asim Shankar authored
- 06 Jan, 2018 1 commit
  - Asim Shankar authored: This is a step towards merging the example in https://github.com/tensorflow/tpu-demos/tree/master/cloud_tpu/models/mnist into this repository, so that we have a single model definition for training across CPU/GPU/eager execution/TPU. The change to dataset.py is so that the raw data can be read from cloud storage systems (like GCS and S3). (See the sketch below.)
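
A hedged sketch of the cloud-storage-friendly reading pattern: `tf.gfile.Open` accepts gs:// paths (and, with the corresponding filesystem support compiled in, s3:// paths) as well as local ones, so the same dataset code serves both cases. The bucket path is illustrative.

```python
import tensorflow as tf

path = 'gs://some-bucket/mnist/train-images-idx3-ubyte.gz'  # or a local path
with tf.gfile.Open(path, 'rb') as f:
    header = f.read(16)  # magic number plus dimensions in the IDX format
```
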
- 03 Jan, 2018 1 commit
  - Asim Shankar authored
- 02 Jan, 2018 2 commits
  - Asim Shankar authored
  - Asim Shankar authored:
    - Prior to this change, the use of tf.data.Dataset essentially embedded the entire training/evaluation dataset into the graph as a constant, leading to unnecessarily huge graphs. (Fixes #3017)
    - Also, use batching on the evaluation dataset to allow evaluation on GPUs that cannot fit the entire evaluation dataset in memory. (Fixes #3046)
    A sketch of both fixes appears below.
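
A hedged sketch of the two fixes (array contents and filename are illustrative): `from_tensor_slices` on a large numpy array bakes the whole array into the GraphDef as a constant, whereas streaming records from a file keeps the graph small, and batching the evaluation input bounds GPU memory use.

```python
import numpy as np
import tensorflow as tf

eval_images = np.random.rand(10000, 784).astype(np.float32)

# Before: the full array becomes a graph constant, and evaluation runs
# over the entire dataset at once.
embedded = tf.data.Dataset.from_tensor_slices(eval_images)

# After: stream fixed-size records from disk and evaluate in bounded
# batches (the file and record size are illustrative).
streamed = tf.data.FixedLengthRecordDataset(
    'eval-images.bin', record_bytes=784 * 4).batch(100)
```
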
- 21 Dec, 2017 1 commit
  - Asim Shankar authored: This will make it easier to share the model definition with the eager execution and TPU demos, without the side effect of running unnecessary code on module import. (See the sketch below.)
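
A minimal sketch of the import-hygiene pattern this commit describes (function names are illustrative): keep the model definition importable with no side effects, and run training only when the file is executed as a script.

```python
import tensorflow as tf

def mnist_model():
  """Builds and returns the model; safe to call from other modules."""
  return tf.keras.Sequential([tf.keras.layers.Dense(10)])

def main(_):
  model = mnist_model()
  # ... training and evaluation live here, not at module scope ...

if __name__ == '__main__':
  tf.app.run(main)
```
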
- 20 Dec, 2017 2 commits
  - Asim Shankar authored
  - Asim Shankar authored
- 19 Dec, 2017 3 commits
  - Asim Shankar authored
  - Asim Shankar authored:
    - Use the object-oriented tf.layers API instead of the functional one. The object-oriented API is particularly useful when using the model with eager execution. (See the sketch below.)
    - Update the unit test to train, evaluate, and predict using the model.
    - Add a micro-benchmark for measuring step time. The parameters (batch_size, num_steps, etc.) have NOT been tuned; the purpose of this code is mostly to illustrate how model benchmarks may be written.
    These changes are made as a step towards consolidating the model definitions for different TensorFlow features (like eager execution and support for TPUs) in https://github.com/tensorflow/tensorflow/tree/master/tensorflow/contrib/eager/python/examples/mnist and https://github.com/tensorflow/tpu-demos/tree/master/cloud_tpu/models/mnist.
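
A hedged sketch of the functional vs. object-oriented tf.layers styles named in this commit (shapes and sizes are illustrative).

```python
import tensorflow as tf

x = tf.zeros([32, 784])

# Functional style: variables are created implicitly per call, and
# sharing them requires variable scopes.
y = tf.layers.dense(x, units=10)

# Object-oriented style: the layer object owns its variables, so calling
# it again reuses them, which is what makes this style convenient under
# eager execution.
dense = tf.layers.Dense(units=10)
y_train = dense(x)  # variables created on the first call
y_eval = dense(x)   # the same variables, reused
```
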
- 18 Dec, 2017 1 commit
  - Changming Sun authored: With examples, and updates to the README.
- 08 Dec, 2017 1 commit
  - Asim Shankar authored:
    - Remove `convert_to_records.py` and instead create `tf.data.Dataset` objects directly from the numpy arrays. (See the sketch below.)
    - Format to the Google Python Style (https://github.com/google/yapf/).
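
A brief sketch of the change described above (array contents are illustrative): instead of first converting the numpy arrays to TFRecord files, tf.data can consume the arrays directly.

```python
import numpy as np
import tensorflow as tf

images = np.random.rand(60000, 28, 28).astype(np.float32)
labels = np.random.randint(0, 10, size=(60000,)).astype(np.int32)

# One (image, label) pair per element; shuffle and batch as usual.
dataset = (tf.data.Dataset.from_tensor_slices((images, labels))
           .shuffle(60000)
           .batch(32))
```
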
- 22 Nov, 2017 1 commit
  - Neal Wu authored
- 06 Nov, 2017 3 commits
- 27 Oct, 2017 1 commit
  - Neal Wu authored
- 25 Oct, 2017 1 commit
  - Neal Wu authored