- 20 Mar, 2018 2 commits
Karmel Allison authored
* Lint everything
* Add rcfile and pylinting
* Extra newline
* A few last lints
Katherine Wu authored
Use the util functions hooks_helper and parser in mnist and wide_deep, and rename the epochs_per_eval flag to epochs_between_eval (#3650)
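For context, a minimal sketch of the renamed flag, using plain argparse as a stand-in for the shared parser utility mentioned in the message; the default value and help text below are made up for illustration.

```python
# Hypothetical illustration of the flag rename; plain argparse stands in for
# the models' shared parser utility, and default/help values are invented.
import argparse

parser = argparse.ArgumentParser()
parser.add_argument(
    "--epochs_between_eval",  # previously --epochs_per_eval
    type=int, default=1,
    help="Number of training epochs to run between evaluations.")

args = parser.parse_args(["--epochs_between_eval", "2"])
print(args.epochs_between_eval)  # prints 2
```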
- 15 Mar, 2018 1 commit
Brennan Saeta authored
- 12 Mar, 2018 1 commit
yhliang2018 authored
* Add logging utils
* Restore utils
* Delete old file
* Update inputs and docstrings
* Make /official a Python module
* Remove /utils directory
* Update readme for Python path setting (see the sketch after this list)
* Change readme texts
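The Python path item above is about making the models checkout importable so that `official.*` resolves as a package. A minimal sketch of an equivalent in-process setup; the checkout location and the exact module path are placeholders, not taken from the commit.

```python
# Sketch of the path setup the readme describes; /path/to/models is a
# placeholder for wherever the repository is checked out (the readme itself
# recommends setting PYTHONPATH instead).
import sys

sys.path.insert(0, "/path/to/models")  # directory containing the official/ package

# With official/ importable as a package, shared utilities can be used as
# ordinary modules, e.g. (module path is an assumption):
# from official.utils.logs import hooks_helper
```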
- 06 Mar, 2018 1 commit
Allen Lavoie authored
- 02 Mar, 2018 2 commits
hsm207 authored
Brennan Saeta authored
In TensorFlow 1.7, the TPUClusterResolver and the TPU RunConfig are changed to reduce the amount of boilerplate required.
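A hedged sketch of the reduced setup, assuming the TF 1.7-era contrib APIs; the constructor argument names, TPU name, zone, project, and model_dir values are placeholders/assumptions rather than code from the commit.

```python
# Sketch only: argument names follow the TF 1.7-era contrib API as understood
# here; all string values are placeholders.
import tensorflow as tf

resolver = tf.contrib.cluster_resolver.TPUClusterResolver(
    tpu="my-tpu", zone="us-central1-b", project="my-project")

config = tf.contrib.tpu.RunConfig(
    cluster=resolver,  # TF 1.7 lets the TPU RunConfig take the resolver directly
    model_dir="gs://my-bucket/mnist",
    tpu_config=tf.contrib.tpu.TPUConfig(iterations_per_loop=100))
```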
- 01 Mar, 2018 1 commit
Neal Wu authored
- 24 Feb, 2018 1 commit
Neal Wu authored
- 17 Feb, 2018 2 commits
Asim Shankar authored
Asim Shankar authored
- 16 Feb, 2018 4 commits
Asim Shankar authored
Asim Shankar authored
Asim Shankar authored
Asim Shankar authored
Add an example showing how to train the MNIST model with eager execution enabled. (This requires changes to TensorFlow made after the 1.6 release branch was cut, i.e., it needs a build from source or TensorFlow 1.7+.)
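A minimal sketch of a single eager-mode training step on TensorFlow 1.7+; the toy linear model and the zero-valued batch are stand-ins, not the example's actual code.

```python
import tensorflow as tf

tf.enable_eager_execution()

# Toy linear "model" standing in for the MNIST network.
weights = tf.contrib.eager.Variable(tf.zeros([784, 10]))
bias = tf.contrib.eager.Variable(tf.zeros([10]))

images = tf.zeros([32, 784])             # stand-in batch of flattened images
labels = tf.zeros([32], dtype=tf.int64)  # stand-in integer labels

optimizer = tf.train.GradientDescentOptimizer(learning_rate=0.01)

with tf.GradientTape() as tape:
    logits = tf.matmul(images, weights) + bias
    loss = tf.reduce_mean(
        tf.nn.sparse_softmax_cross_entropy_with_logits(
            labels=labels, logits=logits))

grads = tape.gradient(loss, [weights, bias])
optimizer.apply_gradients(zip(grads, [weights, bias]))
```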
- 08 Feb, 2018 2 commits
- 06 Feb, 2018 3 commits
Frank Chen authored
The long import path doesn't work for TensorFlow 1.6 because the .python package under cluster_resolver doesn't exist in that release.
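For reference, the shorter import path that does work on 1.6, shown as a hedged sketch; the class location is recalled from the contrib layout of that era, not taken from the commit itself.

```python
# The public, shorter path; the deeper
# tensorflow.contrib.cluster_resolver.python.... path is what TF 1.6 lacks.
from tensorflow.contrib.cluster_resolver import TPUClusterResolver
```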
Frank Chen authored
Neal Wu authored
- 31 Jan, 2018 3 commits
Mark Daoust authored
Mark Daoust authored
The `sparse` version is more efficient anyway. I'm returning labels with shape [1] instead of [] because tf.metrics.accuracy fails otherwise.
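A hedged sketch of the two points in that message: integer labels so the sparse cross-entropy can be used instead of one-hot labels, and a per-example label shape of [1] rather than a scalar so the accuracy metric is satisfied. All tensor values are stand-ins.

```python
import tensorflow as tf

logits = tf.zeros([2, 10])                        # [batch, num_classes]
labels = tf.constant([[3], [7]], dtype=tf.int32)  # per-example shape [1], not []

# Integer labels feed the sparse cross-entropy directly (no one-hot needed).
loss = tf.reduce_mean(
    tf.nn.sparse_softmax_cross_entropy_with_logits(
        labels=tf.squeeze(labels, axis=1), logits=logits))

# The metric accepts labels of shape [batch, 1] alongside [batch] predictions.
accuracy = tf.metrics.accuracy(
    labels=labels, predictions=tf.argmax(logits, axis=1))
```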
Neal Wu authored
- 26 Jan, 2018 2 commits
Karmel Allison authored
Add multi-GPU option to MNIST
Mark Daoust authored
The `sparse` version is more efficient anyway. I'm returning labels with shape [1] instead of [] because tf.metrics.accuracy fails otherwise.
- 24 Jan, 2018 1 commit
- 22 Jan, 2018 2 commits
Neal Wu authored
Karmel Allison authored
Add a multi-GPU flag to MNIST and allow setting a replicated optimizer and model_fn
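A hedged sketch of what a replicated optimizer and model_fn look like in terms of the tf.contrib.estimator helpers available at the time; the model body, learning rate, and flag handling are placeholders.

```python
import tensorflow as tf

def model_fn(features, labels, mode):
    logits = tf.layers.dense(features, 10)
    loss = tf.losses.sparse_softmax_cross_entropy(labels=labels, logits=logits)
    # Wrapping the optimizer lets gradients be aggregated across GPU towers.
    optimizer = tf.contrib.estimator.TowerOptimizer(
        tf.train.GradientDescentOptimizer(0.01))
    train_op = optimizer.minimize(loss, global_step=tf.train.get_global_step())
    return tf.estimator.EstimatorSpec(mode=mode, loss=loss, train_op=train_op)

# With a hypothetical --multi_gpu flag set, the model_fn is replicated over
# the available GPUs:
estimator = tf.estimator.Estimator(
    model_fn=tf.contrib.estimator.replicate_model_fn(model_fn))
```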
- 18 Jan, 2018 1 commit
Asim Shankar authored
- 12 Jan, 2018 1 commit
Mikalai Drabovich authored
Opening gzipped datasets in binary, read-only mode fixes the issue
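A minimal sketch of that fix with the standard library's gzip module; the filename is a placeholder.

```python
import gzip

# Binary, read-only mode ("rb") so the compressed bytes are not text-decoded.
with gzip.open("train-images-idx3-ubyte.gz", "rb") as f:
    header = f.read(16)
```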
- 10 Jan, 2018 1 commit
Asim Shankar authored
- 06 Jan, 2018 1 commit
Asim Shankar authored
This is a step towards merging the example in https://github.com/tensorflow/tpu-demos/tree/master/cloud_tpu/models/mnist with this repository, so that we have a single model definition for training across CPU, GPU, eager execution, and TPU. The change to dataset.py lets the raw data be read from cloud storage systems (like GCS and S3).
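A hedged sketch of reading through TensorFlow's file-system layer, which is what lets the same code accept local paths as well as gs:// (and, where built in, s3://) URLs; the bucket path and record sizes below are placeholders.

```python
import tensorflow as tf

path = "gs://my-bucket/mnist/train-images-idx3-ubyte"

# tf.gfile routes reads through TensorFlow's file systems (local, GCS, ...).
with tf.gfile.Open(path, "rb") as f:
    header = f.read(16)

# tf.data readers accept the same kinds of paths, e.g. fixed-length records:
images = tf.data.FixedLengthRecordDataset(
    path, record_bytes=28 * 28, header_bytes=16)
```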
- 03 Jan, 2018 1 commit
Asim Shankar authored
- 02 Jan, 2018 2 commits
Asim Shankar authored
Asim Shankar authored
* Prior to this change, the use of tf.data.Dataset essentially embedded the entire training/evaluation dataset into the graph as a constant, leading to unnecessarily humongous graphs (Fixes #3017)
* Also, use batching on the evaluation dataset to allow evaluation on GPUs that cannot fit the entire evaluation dataset in memory (Fixes #3046)
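A hedged sketch of the contrast these two points describe; the array sizes, file name, and batch size are placeholders.

```python
import numpy as np
import tensorflow as tf

eval_images = np.zeros([10000, 784], dtype=np.float32)  # stand-in data

# Before: slicing an in-memory array bakes it into the GraphDef as a constant,
# so the graph grows with the dataset.
embedded = tf.data.Dataset.from_tensor_slices(eval_images)

# After: stream records from files and batch evaluation so the whole eval set
# never has to fit on the GPU at once.
streamed = tf.data.FixedLengthRecordDataset(
    "t10k-images-idx3-ubyte", record_bytes=28 * 28, header_bytes=16)
eval_batches = streamed.batch(100)
```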
- 21 Dec, 2017 1 commit
Asim Shankar authored
This will make it easier to share the model definition with eager execution and TPU demos without any side effects of running unnecessary code on module import.
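A minimal sketch of the pattern: importing the module only defines things, while flag parsing and training run under the main guard. The flag shown is illustrative, not the model's actual argument list.

```python
import argparse
import sys

def model_fn(features, labels, mode):
    # Model definition stays importable with no side effects.
    pass

def main(argv):
    parser = argparse.ArgumentParser()
    parser.add_argument("--data_dir", default="/tmp/mnist_data")  # illustrative flag
    flags = parser.parse_args(argv[1:])
    # ... build the estimator from model_fn using flags and train here ...

if __name__ == "__main__":
    main(sys.argv)
```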
- 20 Dec, 2017 2 commits
Asim Shankar authored
Asim Shankar authored
- 19 Dec, 2017 2 commits
Asim Shankar authored