"docs/source/en/api/schedulers/ipndm.mdx" did not exist on "df80ccf7de4cd7409141fe881fd4d630cd69fc4c"
- 06 Feb, 2018 (1 commit)
  - Neal Wu authored
- 03 Feb, 2018 (1 commit)
  - Karmel Allison authored
- 02 Feb, 2018 (1 commit)
  - Karmel Allison authored
- 31 Jan, 2018 (3 commits)
  - Mark Daoust authored
  - Mark Daoust authored: The `sparse` version is more efficient anyway. I'm returning the labels with shape [1] instead of [] because tf.metrics.accuracy fails otherwise. (See the sketch after this list.)
  - Neal Wu authored
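A minimal sketch of the change described above, assuming the TF 1.x estimator-style MNIST code; the placeholder shapes and names are illustrative stand-ins, not the commit's exact code:

```python
import tensorflow as tf  # TF 1.x API

# Integer class labels, one per example; shapes here are illustrative.
labels = tf.placeholder(tf.int64, shape=[None])
logits = tf.placeholder(tf.float32, shape=[None, 10])

# The sparse variant consumes integer labels directly, so no one-hot
# encoding of the labels is needed.
loss = tf.losses.sparse_softmax_cross_entropy(labels=labels, logits=logits)

# tf.metrics.accuracy returns a (value, update_op) pair; per the commit
# message, it failed when each label had scalar shape [] rather than [1].
accuracy, update_op = tf.metrics.accuracy(
    labels=labels, predictions=tf.argmax(logits, axis=1))
```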
- 26 Jan, 2018 (4 commits)
  - Karmel Allison authored: Add multi-GPU option to MNIST
  - Karmel Allison authored: Use the nightly TF Docker image; explicitly pull the image
  - Karmel Allison authored: Update the README; respond to code review
  - Mark Daoust authored: The `sparse` version is more efficient anyway. I'm returning the labels with shape [1] instead of [] because tf.metrics.accuracy fails otherwise. (Same change as the 31 Jan entry above; see the sketch there.)
- 24 Jan, 2018 (1 commit)
- 22 Jan, 2018 (2 commits)
  - Neal Wu authored
  - Karmel Allison authored: Add a multi-GPU flag to MNIST and allow setting a replicated optimizer and model_fn (see the sketch after this list)
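A hedged sketch of how such a flag might wire up the replicated model_fn and optimizer, using the TF 1.x contrib utilities for in-graph replication (tf.contrib.estimator.replicate_model_fn and TowerOptimizer). The flag handling, model, and parameter names below are illustrative, not the commit's exact code:

```python
import tensorflow as tf  # TF 1.x API

def model_fn(features, labels, mode, params):
    logits = tf.layers.dense(features, units=10)
    loss = tf.losses.sparse_softmax_cross_entropy(labels=labels, logits=logits)
    optimizer = tf.train.GradientDescentOptimizer(learning_rate=0.01)
    if params.get('multi_gpu'):
        # TowerOptimizer aggregates gradients across the per-GPU towers.
        optimizer = tf.contrib.estimator.TowerOptimizer(optimizer)
    train_op = optimizer.minimize(loss, global_step=tf.train.get_global_step())
    return tf.estimator.EstimatorSpec(mode=mode, loss=loss, train_op=train_op)

multi_gpu = True  # in the real code this would come from a command-line flag
fn = tf.contrib.estimator.replicate_model_fn(model_fn) if multi_gpu else model_fn
estimator = tf.estimator.Estimator(model_fn=fn, params={'multi_gpu': multi_gpu})
```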
- 18 Jan, 2018 (1 commit)
  - Asim Shankar authored
- 12 Jan, 2018 (1 commit)
  - Mikalai Drabovich authored: Opening gzipped datasets in binary, read-only mode fixes the issue (see the sketch after this list)
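A minimal sketch of the fix, assuming the gzipped MNIST IDX files; the file name is illustrative:

```python
import gzip
import struct

# 'rb' = binary, read-only; text mode would mangle the byte stream.
with gzip.open('train-images-idx3-ubyte.gz', 'rb') as f:
    # The IDX header is four big-endian 32-bit integers.
    magic, num_images, rows, cols = struct.unpack('>IIII', f.read(16))
```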
- 10 Jan, 2018 (1 commit)
  - Asim Shankar authored
- 06 Jan, 2018 (1 commit)
  - Asim Shankar authored: This is a step towards merging the example in https://github.com/tensorflow/tpu-demos/tree/master/cloud_tpu/models/mnist with this repository, so we have a single model definition for training across CPU/GPU/eager execution/TPU. The change to dataset.py is so that the raw data can be read from cloud storage systems (like GCS and S3). (See the sketch after this list.)
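A hedged sketch of what reading through TensorFlow's filesystem layer looks like, so gs:// and s3:// paths work like local ones; the helper name and bucket path are illustrative, not the commit's exact dataset.py change:

```python
import gzip
import tensorflow as tf  # TF 1.x API

def read_gzipped(path):
    # tf.gfile.Open routes through TensorFlow's filesystem layer, which
    # understands local paths as well as gs:// and s3:// URLs.
    with tf.gfile.Open(path, 'rb') as f:
        return gzip.GzipFile(fileobj=f).read()

raw = read_gzipped('gs://some-bucket/mnist/train-images-idx3-ubyte.gz')
```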
- 03 Jan, 2018 (1 commit)
  - Asim Shankar authored
- 02 Jan, 2018 (2 commits)
  - Asim Shankar authored
  - Asim Shankar authored:
    - Prior to this change, the use of tf.data.Dataset essentially embedded the entire training/evaluation dataset into the graph as a constant, leading to unnecessarily humongous graphs (Fixes #3017).
    - Also use batching on the evaluation dataset to allow evaluation on GPUs that cannot fit the entire evaluation dataset in memory (Fixes #3046). (See the sketch after this list.)
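A hedged sketch of the two fixes, assuming the MNIST IDX files on disk; reading records lazily keeps the data out of the GraphDef, and batching the evaluation input avoids materializing the whole evaluation set at once. File names and sizes are illustrative:

```python
import tensorflow as tf  # TF 1.x API

# Before: slicing in-memory numpy arrays bakes the full dataset into the
# graph as constants.
#   dataset = tf.data.Dataset.from_tensor_slices((images, labels))

# After: stream fixed-length records from the file instead (28*28 bytes
# per MNIST image, after a 16-byte IDX header).
images = tf.data.FixedLengthRecordDataset(
    'train-images-idx3-ubyte', 28 * 28, header_bytes=16)

# Batch the evaluation dataset rather than feeding it in one shot.
eval_images = images.batch(100)
```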
- 21 Dec, 2017 (1 commit)
  - Asim Shankar authored: This will make it easier to share the model definition with eager execution and TPU demos without any side effects of running unnecessary code on module import. (See the sketch after this list.)
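A minimal sketch of the standard pattern for this: put the driver code behind a __main__ guard so importing the module defines functions only and trains nothing. The model here is a stand-in:

```python
import tensorflow as tf

def create_model():
    # Importing this module builds no graph or model as a side effect;
    # construction only happens when this function is called.
    return tf.keras.Sequential([tf.keras.layers.Dense(10)])

def main():
    model = create_model()
    # ... training / evaluation driver code would go here ...
    print(model)

if __name__ == '__main__':
    main()
```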
- 20 Dec, 2017 (2 commits)
  - Asim Shankar authored
  - Asim Shankar authored
- 19 Dec, 2017 (3 commits)
  - Asim Shankar authored
  - Asim Shankar authored:
    - Use the object-oriented tf.layers API instead of the functional one. The object-oriented API is particularly useful when using the model with eager execution. (See the sketch after this list.)
    - Update the unit test to train, evaluate, and predict using the model.
    - Add a micro-benchmark for measuring step time. The parameters (batch_size, num_steps, etc.) have NOT been tuned; the purpose of this code is mostly to illustrate how model benchmarks may be written.
    - These changes are made as a step towards consolidating model definitions for different TensorFlow features (like eager execution and support for TPUs in https://github.com/tensorflow/tensorflow/tree/master/tensorflow/contrib/eager/python/examples/mnist and https://github.com/tensorflow/tpu-demos/tree/master/cloud_tpu/models/mnist).
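A hedged sketch of the functional-to-object-oriented tf.layers switch (TF 1.x); layer objects own their variables, which is what makes the same model definition reusable under eager execution. The shapes are illustrative:

```python
import tensorflow as tf  # TF 1.x API

# Functional style: variables are tied to the call site and variable
# scopes must be managed by hand to reuse them.
#   y = tf.layers.dense(x, units=10)

# Object-oriented style: the layer object owns its variables and can be
# applied to different inputs, reusing the same weights.
dense = tf.layers.Dense(units=10)

x = tf.placeholder(tf.float32, shape=[None, 784])
y1 = dense(x)   # first call creates the variables
y2 = dense(x)   # subsequent calls reuse them
```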
- 18 Dec, 2017 (1 commit)
  - Changming Sun authored: With examples, and updates to the README
- 14 Dec, 2017 (1 commit)
  - Neal Wu authored
- 08 Dec, 2017 (1 commit)
  - Asim Shankar authored:
    - Remove `convert_to_records.py` and instead create `tf.data.Dataset` objects directly from the numpy arrays. (See the sketch after this list.)
    - Format to the Google Python style (https://github.com/google/yapf/).
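A minimal sketch of building the tf.data.Dataset straight from numpy arrays, which removes the need for a separate TFRecord conversion step; the array shapes are stand-ins for the MNIST data:

```python
import numpy as np
import tensorflow as tf

images = np.zeros((60000, 28, 28), dtype=np.float32)  # stand-in arrays
labels = np.zeros((60000,), dtype=np.int32)

dataset = (tf.data.Dataset.from_tensor_slices((images, labels))
           .shuffle(buffer_size=60000)
           .batch(32))
```

Note that this approach embeds the arrays in the graph as constants; the 02 Jan, 2018 entry above later reworks it to stream records from disk instead.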
- 06 Dec, 2017 (1 commit)
  - 田传武 authored
- 22 Nov, 2017 (1 commit)
  - Neal Wu authored
- 09 Nov, 2017 (2 commits)
- 08 Nov, 2017 (3 commits)
- 07 Nov, 2017 (4 commits)