- 19 Aug, 2019 1 commit
-
-
Reed Wanderman-Milne authored
Only the V1 resnet model uses --max_train_steps. This unexposes the flag in the keras_application_models, mnist, keras resnet, and CTL resnet models. Before this change, those models allowed the flag to be specified but ignored it. I also removed the "max_train" argument from the run_synthetic function, since it only had meaning for the V1 resnet model; instead, the V1 resnet model now passes --max_train_steps=1 directly to run_synthetic.

PiperOrigin-RevId: 264269836
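A minimal sketch of the flag-gating pattern this describes, using absl.flags; the helper name define_max_train_steps is illustrative, not the repo's actual API:

```python
from absl import flags

def define_max_train_steps():
  # Hypothetical helper: only models that honor the flag call this, so other
  # models reject --max_train_steps instead of silently ignoring it.
  flags.DEFINE_integer(
      'max_train_steps', None,
      'If set, stop training after this many steps.')
```

Models that never call the helper will fail fast on an unknown flag, which matches the intent of unexposing it.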
-
- 06 Aug, 2019 1 commit
-
-
Toby Boyd authored
* force_v2_in_keras_compile FLAG now defaults to None; added a separate temp path.
* Switch to force-testing the v1 path, not the v2 path.
* Rename function force_v1_path.
-
- 23 Jul, 2019 1 commit
-
-
Toby Boyd authored
* Add force_run_distributed tests.
* Added enable_eager.
* r/force_run_distributed/force_v2_in_keras_compile
* Adding force_v2 tests and FLAGs.
* Rename method to avoid conflict.
* Add cpu force_v2 tests.
* Fix lint, wrap line.
* Change to force_v2_in_keras_compile.
* Update method name.
* Lower mlperf target to 0.736.
-
- 19 Jun, 2019 1 commit
-
-
Toby Boyd authored
* Set default steps to 300K.
* Log flags to PerfZero.
* Add XLA support to transformer:
  - Moved config logic to keras_utils.
  - Added enable_xla flag to _performance flags.
  - Did not refactor the enable_xla flag out of keras resnet, due to its reliance on calling FLAGS in estimator keras; that refactor is needed another time.
* Fix g3 lint complaint.
* Refactor set-config logic into keras_utils.
* Move flags out of main.
* Pipe through enable_xla.
* Update official/transformer/v2/misc.py.

Co-Authored-By: Reed <reedwm@google.com>
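A minimal sketch of what an --enable_xla flag typically toggles in TensorFlow 2.x; the function name set_config_from_flags and the flag wiring are illustrative, not the repo's exact code:

```python
from absl import flags
import tensorflow as tf

flags.DEFINE_boolean('enable_xla', False,
                     'Whether to enable XLA auto-jit compilation.')
FLAGS = flags.FLAGS

def set_config_from_flags():
  # Turn on XLA auto-clustering for eligible ops when requested.
  if FLAGS.enable_xla:
    tf.config.optimizer.set_jit(True)
```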
-
- 06 Jun, 2019 1 commit
-
-
Reed authored
Before this change, there was a single global default loss scale shared by all models; each model can now define its own default. Currently, only resnet uses loss scaling, but a per-model default will be useful once more models support it.
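One way a per-model default could be wired with absl.flags; the helper name define_loss_scale and the flag type are assumptions for illustration only:

```python
from absl import flags

def define_loss_scale(default=None):
  # Each model passes its own default (e.g. resnet might pass 'dynamic');
  # models that do not support loss scaling simply keep None.
  flags.DEFINE_string(
      'loss_scale', default,
      'Loss scale for fp16 training: "dynamic" or a fixed number.')
```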
-
- 15 May, 2019 1 commit
-
-
Rachel Lim authored
* Added a 'tfdata_exp' version of all benchmarks, which sets FLAGS.tf_data_experimental_slack = True.
* Renamed `data_prefetch_with_slack` to `data_delay_prefetch` (haoyu's change) to make the names more distinct.
* Add the flag to the resnet input pipeline and surface it through keras_imagenet_main.py.
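A minimal sketch of the tf.data slack option this flag controls; the helper name apply_slack is illustrative rather than the repo's function:

```python
import tensorflow as tf

def apply_slack(dataset, tf_data_experimental_slack):
  """Optionally enable prefetch slack on an input pipeline."""
  if tf_data_experimental_slack:
    options = tf.data.Options()
    options.experimental_slack = True  # allow tf.data to relax prefetch timing
    dataset = dataset.with_options(options)
  return dataset
```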
-
- 11 May, 2019 1 commit
-
-
Toby Boyd authored
* Add FP16 and benchmarks.
* Add missing run and report.
* Add loss_scale as an option not bundled with dtype.
* Move loss_scale validation under the dtype conditional.
* Add loss_scale to the flags tested.
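A sketch of validating loss_scale only when fp16 is selected, using an absl multi-flag validator; the flag types, defaults, and error message are assumptions, not the repo's exact definitions:

```python
from absl import flags

flags.DEFINE_string('dtype', 'fp32', 'Compute dtype: fp16 or fp32.')
flags.DEFINE_float('loss_scale', None, 'Static loss scale for fp16 training.')

# Reject --loss_scale unless fp16 is in use (illustrative check).
@flags.multi_flags_validator(
    ['dtype', 'loss_scale'],
    message='--loss_scale is only valid with --dtype=fp16')
def _check_loss_scale(flag_values):
  return flag_values['loss_scale'] is None or flag_values['dtype'] == 'fp16'
```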
-
- 01 May, 2019 1 commit
-
-
Reed authored
This option allows the new tf.train.experimental.enable_mixed_precision_graph_rewrite() function to be used for fp16, instead of manual casts.
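A sketch of how a model might opt into the graph rewrite named above; the 'graph_rewrite' flag value and the wrapper function are illustrative:

```python
import tensorflow as tf

def maybe_enable_graph_rewrite(optimizer, fp16_implementation):
  # The rewrite inserts fp16 casts into the graph and wraps the optimizer
  # with dynamic loss scaling, replacing manual casts in the model code.
  if fp16_implementation == 'graph_rewrite':
    optimizer = tf.train.experimental.enable_mixed_precision_graph_rewrite(
        optimizer, loss_scale='dynamic')
  return optimizer
```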
-
- 26 Apr, 2019 1 commit
-
-
Ayush Dubey authored
* Add num_packs flag for MirroredStrategy's cross-device ops.
* Fix parens.
* Fix lint errors and make all_reduce_alg more robust.
* Set default num_packs to 1.
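A minimal sketch of how a num_packs flag can feed MirroredStrategy's cross-device ops; the factory function and the set of all_reduce_alg values are assumptions for illustration:

```python
import tensorflow as tf

def make_mirrored_strategy(all_reduce_alg, num_packs=1):
  """Build a MirroredStrategy with configurable cross-device ops."""
  if all_reduce_alg == 'nccl':
    cross_device_ops = tf.distribute.NcclAllReduce(num_packs=num_packs)
  elif all_reduce_alg == 'hierarchical_copy':
    cross_device_ops = tf.distribute.HierarchicalCopyAllReduce(num_packs=num_packs)
  else:
    cross_device_ops = None  # fall back to the library default
  return tf.distribute.MirroredStrategy(cross_device_ops=cross_device_ops)
```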
-
- 03 Apr, 2019 1 commit
-
-
Reed authored
-
- 20 Mar, 2019 1 commit
-
-
Haoyu Zhang authored
-
- 07 Mar, 2019 1 commit
-
-
Ayush Dubey authored
* s/CollectiveAllReduceStrategy/MultiWorkerMirroredStrategy
* More s/contrib.distribute/distribute.experimental
* Collective communication options in MultiWorkerMirroredStrategy.
* Minor fixes.
* No checkpointing if multi-worker.
* Turn off checkpointing.
* Fix lint.
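A sketch of selecting a collective communication option for MultiWorkerMirroredStrategy, using the experimental TF 2.x API of that era; the mapping from flag values to enum members is an assumption:

```python
import tensorflow as tf

def make_multi_worker_strategy(all_reduce_alg=None):
  """Map an all-reduce choice onto the collective communication option."""
  communication = tf.distribute.experimental.CollectiveCommunication.AUTO
  if all_reduce_alg == 'nccl':
    communication = tf.distribute.experimental.CollectiveCommunication.NCCL
  elif all_reduce_alg == 'ring':
    communication = tf.distribute.experimental.CollectiveCommunication.RING
  return tf.distribute.experimental.MultiWorkerMirroredStrategy(
      communication=communication)
```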
-
- 13 Oct, 2018 1 commit
-
-
Toby Boyd authored
-
- 12 Oct, 2018 1 commit
-
-
Toby Boyd authored
-
- 12 Jun, 2018 1 commit
-
-
Katherine Wu authored
* Add DistributionStrategy to the transformer model.
* Add num_gpu flag.
* Calculate per-device batch size for transformer.
* Remove reference to flags_core.
* Add synthetic data option to transformer.
* Fix typo.
* Add import back in.
* Use hierarchical copy.
* Address PR comments.
* Lint; fix spaces.
* Group train ops together to fix a single-GPU error.
* Fix translate bug (sorted_keys is a dict, not a list).
* Change params to a defaultdict (translate.py was throwing errors because params didn't have the TPU parameters).
* Address PR comments; removed the multi-GPU flag and more.
* Fix lint; fix more lint errors.
* Add TODO for synthetic dataset.
* Update docs.
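A minimal sketch of the per-device batch size calculation under a DistributionStrategy, as referenced in the list above; build_with_strategy and model_fn are illustrative names, not the transformer code's API:

```python
import tensorflow as tf

def build_with_strategy(global_batch_size, model_fn):
  """Split the global batch across replicas and build the model under scope."""
  strategy = tf.distribute.MirroredStrategy()
  per_replica_batch_size = global_batch_size // strategy.num_replicas_in_sync
  with strategy.scope():
    model = model_fn()
  return strategy, model, per_replica_batch_size
```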
-
- 03 May, 2018 1 commit
-
-
Taylor Robie authored
* Squash of modular absl usage commits.
* Delint.
* Address PR comments.
* Change hooks to a comma-separated list, as absl behavior for space-separated lists is not as expected.
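A sketch of the comma-separated hooks flag mentioned above, using absl's DEFINE_list; the default value and hook names are illustrative:

```python
from absl import flags

# absl parses DEFINE_list values as comma-separated, which is why the hooks
# flag moved to this form rather than a space-separated multi-value flag.
flags.DEFINE_list(
    'hooks', 'LoggingTensorHook',
    'Comma-separated list of training hooks to attach, e.g. '
    '"LoggingTensorHook,ProfilerHook".')
```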
-