- 09 Feb, 2019 1 commit
Yuefeng Zhou authored
* Add pure synthetic data to keras resnet model.
* Add imports.
* Address comments.
* Update comment.
* Undo set up synthetic data for real data path.
* Update comment.
* Address comment.
* Remove trailing whitespaces.
* s/make_data_set_iterator/make_dataset_iterator/
- 01 Feb, 2019 1 commit
guptapriya authored
- 27 Dec, 2018 1 commit
Shining Sun authored
- 24 Dec, 2018 1 commit
Toby Boyd authored
- 21 Dec, 2018 1 commit
Shining Sun authored
- 20 Dec, 2018 2 commits
Shining Sun authored
Shining Sun authored
- 21 Nov, 2018 1 commit
josh11b authored
We've deprecated the "tower" terminology in DistributionStrategy, so the "cross_tower_ops" argument is now "cross_device_ops", matching the current name of "AllReduceCrossDeviceOps".
- 25 Oct, 2018 1 commit
josh11b authored
- 24 Oct, 2018 1 commit
josh11b authored
- 12 Oct, 2018 1 commit
Toby Boyd authored
- 12 Jun, 2018 1 commit
Katherine Wu authored
* Add DistributionStrategy to transformer model.
* Add num_gpu flag.
* Calculate per device batch size for transformer.
* Remove reference to flags_core.
* Add synthetic data option to transformer.
* Fix typo.
* Add import back in.
* Use hierarchical copy.
* Address PR comments.
* Lint.
* Fix spaces.
* Group train op together to fix single GPU error.
* Fix translate bug (sorted_keys is a dict, not a list).
* Change params to a default dict (translate.py was throwing errors because params didn't have the TPU parameters).
* Address PR comments. Removed multi gpu flag + more.
* Fix lint.
* Fix more lints.
* Add todo for synthetic dataset.
* Update docs.