- 21 Feb, 2019 1 commit
Ayush Dubey authored
* Update official resnet for multi-worker training with distribution strategies.
* Fixes for multi-worker training.
* Fix call to `get_distribution_strategy`.
* Undo test change.
* Fix spacing.
* Move cluster configuration to distribution_utils.
* Move train_and_evaluate out of the loop. Also, update docstrings for multi-worker flags and add a use_train_and_evaluate flag.
* Update the distribution_strategy flag to match the exported name for the collective strategy.
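The multi-worker setup this commit describes can be illustrated with a small sketch. This is a minimal example assuming the TF 2.x `tf.distribute` API and hypothetical worker addresses; in the repo itself the cluster configuration lives in `distribution_utils`.

```python
import json
import os

import tensorflow as tf

# Each worker advertises the full cluster plus its own slot via TF_CONFIG.
# The host addresses below are hypothetical placeholders.
os.environ["TF_CONFIG"] = json.dumps({
    "cluster": {"worker": ["host1:2222", "host2:2222"]},
    "task": {"type": "worker", "index": 0},
})

# The collective all-reduce (multi-worker) strategy reads TF_CONFIG at creation.
strategy = tf.distribute.experimental.MultiWorkerMirroredStrategy()

with strategy.scope():
    # Variables created under the scope are replicated across workers.
    model = tf.keras.Sequential([tf.keras.layers.Dense(10)])
    model.compile(optimizer="sgd", loss="mse")
```

In the Estimator-based code path, `tf.estimator.train_and_evaluate` is the loop driver the commit refers to moving out of the per-epoch loop.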
- 14 Feb, 2019 1 commit
Toby Boyd authored
* One device from contrib to core.
* Remove test code.
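After the move out of contrib, the one-device strategy is addressed under the core namespace; a minimal sketch assuming the TF 2.x name:

```python
import tensorflow as tf

# Core replacement for the old tf.contrib.distribute.OneDeviceStrategy.
strategy = tf.distribute.OneDeviceStrategy(device="/gpu:0")

with strategy.scope():
    v = tf.Variable(1.0)  # Placed on the single configured device.
```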
- 13 Feb, 2019 1 commit
Yuefeng Zhou authored
* Add a flag to specify distribution strategies.
* Fix a small error.
* Address comments.
* Address comments.
* Fix typos.
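A flag of this shape could be wired up as in the sketch below; the flag name follows the commit, while `get_strategy` is a hypothetical simplification of the repo's `distribution_utils.get_distribution_strategy` helper.

```python
from absl import flags

import tensorflow as tf

flags.DEFINE_string(
    "distribution_strategy", "mirrored",
    "Which distribution strategy to use: 'off', 'one_device', or 'mirrored'.")

def get_strategy(name, num_gpus=1):
    """Hypothetical dispatcher from a flag value to a strategy instance."""
    if name == "off":
        return None
    if name == "one_device" or num_gpus == 1:
        device = "/gpu:0" if num_gpus else "/cpu:0"
        return tf.distribute.OneDeviceStrategy(device)
    return tf.distribute.MirroredStrategy()
```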
- 12 Feb, 2019 1 commit
Toby Boyd authored
* Remove contrib thread pool.
* Remove commented-out contrib import.
* Fix lint issues.
* Move tf.data.options higher. Tweak line breaks.
* Do not monkey patch (on or off) if dist_strat is off.
* Do not monkey patch if no_dist_strat.
* Fix file permissions.
* Fix file permissions.
* Revert change to main. Add hasattr(tf, 'contrib') to utils.
* compat.v1.logging.
* tf.compat.v1.get_local_variables.
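The contrib thread pool's core replacement is the threading knobs on `tf.data.Options`; a minimal sketch, assuming the TF 2.x `experimental_threading` names:

```python
import tensorflow as tf

dataset = tf.data.Dataset.range(1000).batch(32)

# Core replacement for the removed contrib thread pool: configure
# threading through tf.data.Options and attach it to the dataset.
options = tf.data.Options()
options.experimental_threading.private_threadpool_size = 16
options.experimental_threading.max_intra_op_parallelism = 1
dataset = dataset.with_options(options)
```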
- 09 Feb, 2019 1 commit
Yuefeng Zhou authored
* Add pure synthetic data to the Keras ResNet model.
* Add imports.
* Address comments.
* Update comment.
* Undo setting up synthetic data for the real-data path.
* Update comment.
* Address comment.
* Remove trailing whitespaces.
* s/make_data_set_iterator/make_dataset_iterator/
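"Pure synthetic data" here means an input pipeline built from random in-memory tensors, so benchmarks exercise the model without any file I/O. A minimal sketch; the ImageNet-style shapes are assumptions:

```python
import tensorflow as tf

def synthetic_dataset(batch_size, height=224, width=224, num_classes=1000):
    """Returns a dataset that repeats one random batch forever (no disk I/O)."""
    images = tf.random.uniform([batch_size, height, width, 3])
    labels = tf.random.uniform(
        [batch_size], minval=0, maxval=num_classes, dtype=tf.int32)
    return tf.data.Dataset.from_tensors((images, labels)).repeat()
```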
- 01 Feb, 2019 1 commit
guptapriya authored
- 27 Dec, 2018 1 commit
Shining Sun authored
- 24 Dec, 2018 1 commit
Toby Boyd authored
- 21 Dec, 2018 1 commit
Shining Sun authored
- 20 Dec, 2018 2 commits
Shining Sun authored
Shining Sun authored
- 21 Nov, 2018 1 commit
josh11b authored
We've deprecated the "tower" terminology in `DistributionStrategy`, so the `cross_tower_ops` argument is now `cross_device_ops`, matching the current name of `AllReduceCrossDeviceOps`.
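In today's API the renamed argument is passed like this; a minimal sketch using the current `tf.distribute` class names rather than the contrib-era `AllReduceCrossDeviceOps`:

```python
import tensorflow as tf

# Formerly `cross_tower_ops`; the keyword is now `cross_device_ops`.
strategy = tf.distribute.MirroredStrategy(
    cross_device_ops=tf.distribute.HierarchicalCopyAllReduce())
```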
- 25 Oct, 2018 1 commit
josh11b authored
- 24 Oct, 2018 1 commit
josh11b authored
- 12 Oct, 2018 1 commit
Toby Boyd authored
- 12 Jun, 2018 1 commit
Katherine Wu authored
* Add DistributionStrategy to transformer model.
* Add num_gpu flag.
* Calculate per-device batch size for transformer.
* Remove reference to flags_core.
* Add synthetic data option to transformer.
* Fix typo.
* Add import back in.
* Use hierarchical copy.
* Address PR comments.
* Lint.
* Fix spaces.
* Group train ops together to fix single-GPU error.
* Fix translate bug (sorted_keys is a dict, not a list).
* Change params to a default dict (translate.py was throwing errors because params didn't have the TPU parameters).
* Address PR comments. Removed multi-GPU flag + more.
* Fix lint.
* Fix more lints.
* Add TODO for synthetic dataset.
* Update docs.
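The per-device batch-size calculation mentioned above divides the global batch evenly across GPUs; a hypothetical sketch mirroring the shape of the repo's `distribution_utils` helper:

```python
def per_device_batch_size(batch_size, num_gpus):
    """Splits a global batch size evenly across GPUs.

    Raises if the batch cannot be split evenly, since unevenly sized
    per-device batches would skew all-reduce gradient averaging.
    """
    if num_gpus <= 1:
        return batch_size
    if batch_size % num_gpus:
        raise ValueError(
            "Batch size %d is not divisible by the number of GPUs (%d)."
            % (batch_size, num_gpus))
    return batch_size // num_gpus
```

For example, a global batch of 1024 on 8 GPUs yields 128 examples per device, while 1000 on 8 raises an error rather than silently truncating.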