- 21 Oct, 2019 1 commit
minoring authored
arparse -> argparse
- 16 Oct, 2019 1 commit
Reed Wanderman-Milne authored
To test, I did 50 fp32 runs and 50 fp16 runs, using the following command:

python ncf_keras_main.py --dataset=ml-20m --num_gpus=1 --train_epochs=10 --clean --batch_size=99000 --learning_rate=0.00382059 --beta1=0.783529 --beta2=0.909003 --epsilon=1.45439e-7 --layers=256,256,128,64 --num_factors=64 --hr_threshold=0.635 --ml_perf --nouse_synthetic_data --data_dir ~/ncf_data_dir_python3 --model_dir ~/tmp_model_dir --keras_use_ctl

For the fp16 runs, I added --dtype=fp16. The average hit rate for both fp16 and fp32 was 0.6365. I also did 50 runs with the mixed precision graph rewrite; the average hit rate was 0.6363. The difference is likely due to noise.

PiperOrigin-RevId: 275059871
- 07 Oct, 2019 1 commit
A. Unique TensorFlower authored
PiperOrigin-RevId: 273371605
- 09 Sep, 2019 1 commit
Reed Wanderman-Milne authored
--stop_threshold, --num_gpu, --hooks, --export_dir, and --distribution_strategy are no longer exposed by models that do not use them.

PiperOrigin-RevId: 268032080
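For context, these "unexpose" changes follow a conditional flag-definition pattern. A minimal sketch, assuming a hypothetical helper named define_base; the real helper and its keyword arguments may differ:

```python
from absl import flags

def define_base(stop_threshold=False, export_dir=False):
  """Defines only the flags a given model actually consumes."""
  if stop_threshold:
    flags.DEFINE_float(
        "stop_threshold", None,
        "Stop training once this evaluation metric is reached.")
  if export_dir:
    flags.DEFINE_string(
        "export_dir", None,
        "Directory to export a SavedModel to after training.")

# A model that uses neither flag calls define_base() with the defaults,
# so --stop_threshold and --export_dir are simply never defined for it.
```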
- 04 Sep, 2019 1 commit
Reed Wanderman-Milne authored
--clean, --train_epochs, and --epochs_between_evals are no longer exposed by models that do not use them.

PiperOrigin-RevId: 267065651
- 30 Aug, 2019 1 commit
Reed Wanderman-Milne authored
PiperOrigin-RevId: 266376708
- 26 Aug, 2019 1 commit
Reed Wanderman-Milne authored
--synthetic_data, --dtype, --all_reduce_alg, and --num_packs are no longer exposed by models that do not use them.

PiperOrigin-RevId: 265483564
- 23 Aug, 2019 1 commit
Reed Wanderman-Milne authored
--num_parallel_calls, --inter_op_parallelism_threads, and --intra_op_parallelism_threads are no longer exposed by models that do not use them.

PiperOrigin-RevId: 264965788
- 20 Aug, 2019 2 commits
Vinh Nguyen authored
Vinh Nguyen authored
- 19 Aug, 2019 1 commit
Reed Wanderman-Milne authored
Only the V1 resnet model uses --max_train_steps. This unexposes the flag in the keras_application_models, mnist, Keras resnet, and CTL resnet models. Before this change, those models allowed the flag to be specified but ignored it. I also removed the "max_train" argument from the run_synthetic function, since it only had meaning for the V1 resnet model; the V1 resnet model now passes --max_train_steps=1 directly to run_synthetic.

PiperOrigin-RevId: 264269836
- 16 Aug, 2019 1 commit
Ayush Dubey authored
Also add `worker_hosts` and `task_index` flags. These flags enable running the model across multiple hosts by passing the cluster information on the command line. Setting `TF_CONFIG` will continue to work.

PiperOrigin-RevId: 263825245
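As an illustration of the two equivalent ways to describe the cluster (the host addresses and script invocation here are hypothetical):

```python
import json
import os

# Existing behavior: describe the cluster via the TF_CONFIG environment variable.
os.environ["TF_CONFIG"] = json.dumps({
    "cluster": {"worker": ["host1:2222", "host2:2222"]},
    "task": {"type": "worker", "index": 0},
})

# New flags: pass the same information on the command line instead, e.g.
#   python keras_imagenet_main.py --worker_hosts=host1:2222,host2:2222 --task_index=0
```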
- 06 Aug, 2019 1 commit
Toby Boyd authored
* force_v2_in_keras_compile FLAG defaults to None; added a separate temp path.
* Switch to force testing the v1 path, not the v2 path.
* Rename function force_v1_path.
- 23 Jul, 2019 1 commit
Toby Boyd authored
* Add force_run_distributed tests.
* Added enable_eager.
* r/force_run_distributed/force_v2_in_keras_compile
* Adding force_v2 tests and FLAGs.
* Rename method to avoid conflict.
* Add cpu force_v2 tests.
* Fix lint, wrap line.
* Change to force_v2_in_keras_compile.
* Update method name.
* Lower mlperf target to 0.736.
- 21 Jun, 2019 2 commits
Neil authored
Toby Boyd authored
* XLA FP32 and first test.
* More XLA benchmarks FP32.
* Add eager to NCF and refactor resnet.
* Fix v2_0 calls and more flag refactor.
* Remove extra flag args.
* 90 epoch default.
* Add return.
* Remove xla not used by estimator.
* Remove duplicate run_eagerly.
* Fix flag defaults.
* Remove fp16_implementation flag option.
* Remove stop early on mlperf test.
* Remove unneeded args.
* Load flags from keras mains.
- 19 Jun, 2019 1 commit
Toby Boyd authored
* Set default steps to 300K.
* Log flags to perfzero.
* Add XLA support to transformer (see the sketch below):
  - Moved config logic to keras_utils.
  - Added enable_xla flag to _performance flags.
  - Did not refactor the enable_xla flag out of Keras resnet, due to reliance on calling FLAGs in estimator keras; that is a needed refactor for another time.
* Fix g3 lint complaint.
* Refactor set config into keras_utils.
* Move flags out of main.
* Pipe through enable_xla.
* Update official/transformer/v2/misc.py.

Co-Authored-By: Reed <reedwm@google.com>
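For reference, the kind of setting an --enable_xla flag typically toggles; a sketch using the TF 2.x API, not necessarily this repo's exact plumbing:

```python
import tensorflow as tf

# Turn on XLA JIT compilation process-wide, as an --enable_xla flag might do.
tf.config.optimizer.set_jit(True)
```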
- 06 Jun, 2019 1 commit
Reed authored
Before, there was a global default loss scale for all models. Currently, only resnet uses loss scaling, but this will be useful once more models support it.
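A minimal sketch of the per-model default idea, assuming a hypothetical define_performance helper; the real flag definitions in flags_core may differ:

```python
from absl import flags

def define_performance(loss_scale_default=None):
  # Each model registers its own default rather than sharing one global value.
  flags.DEFINE_string(
      "loss_scale", loss_scale_default,
      "Loss scale for fp16 training: 'dynamic' or a fixed power of two.")

# A model with fp16 support might call define_performance("dynamic");
# a model without it would pass no default at all.
```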
- 18 May, 2019 1 commit
Reed authored
This will make it easy to reproduce a benchmark by re-running with the same flags.
- 15 May, 2019 1 commit
Rachel Lim authored
* Added a 'tfdata_exp' version of all benchmarks, which sets FLAGS.tf_data_experimental_slack = True (see the sketch below). Renamed `data_prefetch_with_slack` to `data_delay_prefetch` (haoyu's change) to make the names more distinct.
* Add the flag to the resnet input pipeline and surface it through keras_imagenet_main.py.
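For context, the tf.data option the new flag turns on looks roughly like this (a sketch; the real input pipeline is resnet's, not a toy range dataset):

```python
import tensorflow as tf

dataset = tf.data.Dataset.range(10)
options = tf.data.Options()
options.experimental_slack = True  # what FLAGS.tf_data_experimental_slack enables
dataset = dataset.with_options(options)
```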
- 11 May, 2019 1 commit
Toby Boyd authored
* Add FP16 and benchmarks.
* Add missing run and report.
* Add loss_scale as an option not included with dtype.
* Move loss_scale validation under the dtype conditional.
* Add loss_scale to the flags tested.
- 01 May, 2019 1 commit
Reed authored
This option allows the new tf.train.experimental.enable_mixed_precision_graph_rewrite() function to be used for fp16, instead of manual casts.
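A minimal sketch of the graph-rewrite path this option enables, using the TF 2.0-era API named in the message:

```python
import tensorflow as tf

opt = tf.keras.optimizers.SGD(learning_rate=0.01)
# Rewrites the graph to compute in fp16 where safe and adds loss scaling,
# so the model code needs no manual casts.
opt = tf.train.experimental.enable_mixed_precision_graph_rewrite(opt)
```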
- 26 Apr, 2019 2 commits
Ayush Dubey authored
* Add num_packs flag for MirroredStrategy's cross device ops (see the sketch below).
* Fix parens.
* Fix lint errors and make all_reduce_alg more robust.
* Set default num_packs to 1.
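A sketch of what the num_packs flag plumbs through to; the strategy construction here is illustrative:

```python
import tensorflow as tf

# num_packs controls how many buckets gradients are packed into before the
# cross-device all-reduce; 1 means a single fused pack.
strategy = tf.distribute.MirroredStrategy(
    cross_device_ops=tf.distribute.NcclAllReduce(num_packs=1))
```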
Gaurav Jain authored
tf.test.is_gpu_available() should not be called when defining flags, since flag definitions run before app.main() and the runtime has not yet been initialized.
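A hedged sketch of the fix pattern (the flag name is hypothetical): define a neutral default and defer the GPU check until the runtime is up inside main():

```python
import tensorflow as tf
from absl import app, flags

FLAGS = flags.FLAGS

# Bad: tf.test.is_gpu_available() evaluated here would run at import time,
# before app.run() has initialized the TensorFlow runtime.
flags.DEFINE_boolean("use_gpu", None, "Use a GPU; autodetected when unset.")

def main(_):
  use_gpu = FLAGS.use_gpu
  if use_gpu is None:
    use_gpu = tf.test.is_gpu_available()  # safe: the runtime is up now
  print("Using GPU:", use_gpu)

if __name__ == "__main__":
  app.run(main)
```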
- 03 Apr, 2019 1 commit
Reed authored
- 28 Mar, 2019 1 commit
Shining Sun authored
* Initial commit; bug fix.
* Move build_stats from common to keras main, because it is only applicable in keras.
* Remove trailing blank line.
* Add test for synth data.
* Add kwargs to init, add kwargs to function invocation, and correctly pass kwargs.
* Debug (several rounds).
* Fix super init; bug fix.
* Fix local_flags; fix import; bug fix.
* Fix log_steps flag; bug fixes.
* Bug fix: add missing return value.
* Resolve double-defined flags.
* Move log_steps flag to benchmark flags.
* Lint fixes (several rounds).
* Try flag core default values; bug fixes.
* Debug, then remove debug prints.
* Rename benchmark methods.
* Flag bug fix for synth benchmark.
- 20 Mar, 2019 1 commit
Haoyu Zhang authored
- 07 Mar, 2019 1 commit
Ayush Dubey authored
* s/CollectiveAllReduceStrategy/MultiWorkerMirroredStrategy
* More s/contrib.distribute/distribute.experimental
* Collective communication options in MultiWorkerMirroredStrategy (see the sketch below).
* Minor fixes.
* No checkpointing if multi worker; turn off checkpointing.
* Fix lint.
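For reference, the renamed strategy with an explicit collective communication option; a sketch of the TF 2.0-era experimental API:

```python
import tensorflow as tf

strategy = tf.distribute.experimental.MultiWorkerMirroredStrategy(
    communication=tf.distribute.experimental.CollectiveCommunication.RING)
```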
- 21 Feb, 2019 1 commit
Ayush Dubey authored
* Update official resnet for multi-worker training with distribution strategies.
* Fixes for multi-worker training.
* Fix call to `get_distribution_strategy`.
* Undo test change.
* Fix spacing.
* Move cluster configuration to distribution_utils.
* Move train_and_evaluate out of the loop. Also update docstrings for multi-worker flags and add a use_train_and_evaluate flag.
* Update the distribution_strategy flag to match the exported name for the collective strategy.
- 13 Feb, 2019 1 commit
Yuefeng Zhou authored
* Add a flag to specify distribution strategies (see the sketch below).
* Fix a small error.
* Address comments (two rounds).
* Fix typos.
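A minimal sketch of how such a flag might map to a strategy object; the helper name and accepted values are illustrative, not the repo's exact distribution_utils API:

```python
import tensorflow as tf

def get_distribution_strategy(distribution_strategy="mirrored"):
  # Map a --distribution_strategy flag value onto a tf.distribute strategy.
  if distribution_strategy == "off":
    return None
  if distribution_strategy == "mirrored":
    return tf.distribute.MirroredStrategy()
  if distribution_strategy == "multi_worker_mirrored":
    return tf.distribute.experimental.MultiWorkerMirroredStrategy()
  raise ValueError(
      "Unknown distribution strategy: %s" % distribution_strategy)
```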
- 08 Feb, 2019 1 commit
Goldie Gadde authored
This reverts commit 57e07520.
- 06 Feb, 2019 1 commit
Goldie Gadde authored
This reverts commit d6b2b83c.
- 05 Feb, 2019 1 commit
Goldie Gadde authored
* Add resnet56 short tests. (#6101)
  - Created a base benchmark module.
  - Renamed the accuracy test class to contain the word Accuracy, which will require updating all the jobs and lose history, but is worth it.
  - Short tests are mostly copied from shining, with an OSS refactor.
* Address feedback.
* Move flag_methods to init, to address setting default flags repeatedly.
* Rename accuracy tests.
* Lint errors resolved.
* Fix model_dir set to flags.data_dir.
* Fixed not fully pulling out flag_methods.
* Use core mirrored strategy in official models. (#6126)
* Imagenet short tests (#6132): add short imagenet tests (taken from seemuch) and rename to match go-forward naming; fix method name; update docstrings; fix GPU number.
* Point the default data_dir to a child folder. (#6131) The failed test was Python 2 and was a Kokoro failure.
* Imagenet short tests (#6136): repeats the #6132 items, plus add fill_objects, fix calling the wrong class in super, and fix a lint issue.
* Flag (#6121): fix the turn_off_ds flag problem; add param names to all args.
* Export benchmark stats using tf.test.Benchmark.report_benchmark() (#6103); fix Python style using pyformat. (See the sketch below.)
* Typos. (#6120)
* Log verbosity=2: logs every epoch, no progress bars. (#6142)
* tf_upgrade_v2 on the resnet and utils folders.
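For the report_benchmark() item above, a minimal sketch of how benchmark stats are exported; the class name and numbers are placeholders:

```python
import tensorflow as tf

class Resnet56ShortBenchmark(tf.test.Benchmark):

  def benchmark_synthetic_1_gpu(self):
    # A real benchmark would time a training run; these values are placeholders.
    self.report_benchmark(
        iters=100,
        wall_time=12.3,
        extras={"examples_per_sec": 1024.0})
```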
- 13 Oct, 2018 1 commit
Toby Boyd authored
- 12 Oct, 2018 1 commit
Toby Boyd authored
- 20 Aug, 2018 1 commit
Taylor Robie authored
* Perform a codecs check and remove the Unicode BOM (\ufeff) if UTF-8 is not present (see the sketch below).
* Delint.
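A small sketch of the codecs check described above; the function name is illustrative:

```python
import codecs

def strip_utf8_bom(raw: bytes) -> bytes:
  """Removes a leading byte order mark (U+FEFF) if one is present."""
  if raw.startswith(codecs.BOM_UTF8):
    return raw[len(codecs.BOM_UTF8):]
  return raw
```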
- 20 Jun, 2018 1 commit
Taylor Robie authored
* Begin branch.
* Finish download script.
* Rename download to dataset.
* Intermediate commits (misc tweaks along the way).
* Delint and update census test.
* Add movie tests; delint.
* Fix py2 issue.
* Address PR comments.
* Intermediate commits.
* Finish wide deep transition to vanilla movielens; delint.
* Intermediate commits; fix import.
* Add default ncf csv construction.
* Change default on download_if_missing.
* Shard and vectorize example serialization.
* Fix import.
* Update ncf data unittests.
* Delint (several rounds).
* Fix wide-deep movielens serialization.
* Address PR comments.
* Add file_io tests.
* Investigate wide-deep test failure.
* Remove hard-coded path and properly use flags.
* Address file_io test PR comments.
* Missed a hash_bucket_size.
- 12 Jun, 2018 2 commits
Katherine Wu authored
* Add checklist for official models. Remove file access from flag validator (it was causing issues with BUILD).
* Spelling.
* Address PR comments.
Katherine Wu authored
* Add DistributionStrategy to the transformer model.
* Add num_gpu flag.
* Calculate per-device batch size for transformer.
* Remove reference to flags_core.
* Add synthetic data option to transformer.
* Fix typo; add import back in.
* Use hierarchical copy.
* Address PR comments; lint; fix spaces.
* Group train op together to fix single-GPU error.
* Fix translate bug (sorted_keys is a dict, not a list).
* Change params to a default dict (translate.py was throwing errors because params didn't have the TPU parameters).
* Address PR comments; removed multi-gpu flag + more.
* Fix lint (more lints).
* Add TODO for synthetic dataset.
* Update docs.
- 04 Jun, 2018 1 commit
Taylor Robie authored
* Port changes from previous branch now that transformer util changes are in master.
* Fix incorrect count.
* Correct (hopefully) treatment of batch_size.
* Set eval_metrics to a dummy function for now.
* Add some comments.
* Start bringing metrics to transformer TPU.
* Resolve logits shape.
* Metrics are now working, except for tf.py_func metrics.
* Increase batch_size for TPU, and create a summary host call; fix host call.
* Reduce TPU default batch size further; tune batch sizes.
* Add minibatch loss to summary.
* Handle the case of single_iteration_train_steps > number of points in an epoch.
* Begin to incorporate hooks; add sleep workarounds; disable hooks altogether.
* Generalize the host call function and move it to the newly created tpu utils module.
* Remove all traces of params as an object.
* Switch from to address some PR comments, and change the number of data points.
* Minor tweaks.
* Add TPU dry run for testing, and use matmul for TPU embedding.
* Infeed/outfeed queue issue is fixed; sleeps are no longer necessary.
* Add some documentation.
* Cleanup and address PR comments; delint.
* Add accelerator __init__.
* Fix embedding; missed PR comment.
* Address PR comments.
* Fix validator bug.
* Rewrite cloud storage validator, and add oauth dependency to requirements.txt.
* Delint.