- 14 Apr, 2020 1 commit
Hongkun Yu authored
PiperOrigin-RevId: 306521269
- 05 Apr, 2020 1 commit
Hongkun Yu authored
PiperOrigin-RevId: 304908157
- 17 Mar, 2020 1 commit
ayushmankumar7 authored
- 05 Mar, 2020 1 commit
Hongkun Yu authored
PiperOrigin-RevId: 299160422
- 17 Dec, 2019 1 commit
Hongkun Yu authored
PiperOrigin-RevId: 285930545
- 03 Dec, 2019 1 commit
A. Unique TensorFlower authored
PiperOrigin-RevId: 283449673
- 25 Nov, 2019 1 commit
Sai Ganesh Bandiatmakuri authored
This will help general debugging by enabling custom execution with --benchmark_method_steps. For example, --benchmark_method_steps=train_steps=7 will run the benchmark for only 7 steps without modifying the benchmark code. PiperOrigin-RevId: 282396875
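As a concrete illustration of the override described above, here is a minimal sketch of how a --benchmark_method_steps style flag could be parsed into keyword overrides; the flag name comes from the commit, while the helper function and the way a benchmark would consume the result are assumptions.

```python
# Hypothetical sketch: turn "--benchmark_method_steps=train_steps=7" into
# keyword overrides so a benchmark can run fewer steps without code changes.
from absl import flags

FLAGS = flags.FLAGS
flags.DEFINE_string(
    "benchmark_method_steps", None,
    "Comma-separated key=value overrides, e.g. 'train_steps=7'.")


def parse_step_overrides(spec):
  """Parses 'train_steps=7,log_steps=1' into {'train_steps': 7, 'log_steps': 1}."""
  if not spec:
    return {}
  overrides = {}
  for pair in spec.split(","):
    key, _, value = pair.partition("=")
    overrides[key.strip()] = int(value)
  return overrides


# Illustrative usage inside a benchmark method (names are placeholders):
#   kwargs = parse_step_overrides(FLAGS.benchmark_method_steps)
#   run_benchmark(train_steps=kwargs.get("train_steps", 1000))
```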
- 16 Oct, 2019 1 commit
Reed Wanderman-Milne authored
To test, I did 50 fp32 runs and 50 fp16 runs. I used the following command:
python ncf_keras_main.py --dataset=ml-20m --num_gpus=1 --train_epochs=10 --clean --batch_size=99000 --learning_rate=0.00382059 --beta1=0.783529 --beta2=0.909003 --epsilon=1.45439e-7 --layers=256,256,128,64 --num_factors=64 --hr_threshold=0.635 --ml_perf --nouse_synthetic_data --data_dir ~/ncf_data_dir_python3 --model_dir ~/tmp_model_dir --keras_use_ctl
For the fp16 runs, I added --dtype=fp16. The average hit rate for both fp16 and fp32 was 0.6365. I also did 50 runs with the mixed precision graph rewrite, and the average hit rate was 0.6363. The difference is likely due to noise. PiperOrigin-RevId: 275059871
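For context on what --dtype=fp16 switches on, here is a minimal sketch of Keras mixed precision written against the current tf.keras API; the 2019-era NCF code used the experimental predecessors of these calls, so treat the exact entry points as assumptions. The "mixed precision graph rewrite" mentioned above is a separate, graph-level mechanism not shown here.

```python
import tensorflow as tf

# Compute in float16 while keeping variables in float32 ("mixed_float16").
tf.keras.mixed_precision.set_global_policy("mixed_float16")

# With a custom training loop (--keras_use_ctl), wrap the optimizer so loss
# scaling is applied and small fp16 gradients do not underflow.
optimizer = tf.keras.optimizers.Adam(learning_rate=0.00382059)
optimizer = tf.keras.mixed_precision.LossScaleOptimizer(optimizer)
```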
- 10 Oct, 2019 2 commits
A. Unique TensorFlower authored
Change the benchmark's log verbosity to logging.INFO. It seems that DEBUG maps to --v=1 internally, which is far too verbose for the purpose of benchmarking. PiperOrigin-RevId: 274040907
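A minimal sketch of the verbosity change described above, assuming the benchmark configures absl logging (which the official models use); everything beyond the single set_verbosity call is illustrative.

```python
from absl import logging

# DEBUG roughly corresponds to --v=1 and floods benchmark output with
# per-step detail; INFO keeps only high-level progress messages.
logging.set_verbosity(logging.INFO)
```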
Hongkun Yu authored
PiperOrigin-RevId: 273966871
- 23 Sep, 2019 1 commit
Hongkun Yu authored
PiperOrigin-RevId: 270749832
- 10 Sep, 2019 1 commit
Tomasz Grel authored
- 09 Sep, 2019 1 commit
Adrian Kuegel authored
PiperOrigin-RevId: 267946336
- 29 Aug, 2019 1 commit
A. Unique TensorFlower authored
PiperOrigin-RevId: 266242786
- 28 Aug, 2019 1 commit
David Chen authored
PiperOrigin-RevId: 266002056
- 19 Aug, 2019 1 commit
Toby Boyd authored
- 14 Aug, 2019 4 commits
A. Unique TensorFlower authored
PiperOrigin-RevId: 263401952
Hongkun Yu authored
PiperOrigin-RevId: 263257133
tf-models-copybara-bot authored
PiperOrigin-RevId: 263204353
Nimit Nigania authored
- 13 Aug, 2019 3 commits
Hongkun Yu authored
PiperOrigin-RevId: 263217684
Hongkun Yu authored
PiperOrigin-RevId: 263158478
Toby Boyd authored
- 09 Aug, 2019 4 commits
Nimit Nigania authored
Nimit Nigania authored
Nimit Nigania authored
Toby Boyd authored
* Add run_eagerly for ctl.
* Fix test name and do not set "default".
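A minimal sketch of what a run_eagerly option usually means for a custom training loop (ctl): the train step is left as plain Python instead of being compiled with tf.function. The option name comes from the commit above; the surrounding training-step code is an assumption.

```python
import tensorflow as tf

def build_train_step(model, optimizer, loss_fn, run_eagerly=False):
  def train_step(features, labels):
    with tf.GradientTape() as tape:
      loss = loss_fn(labels, model(features, training=True))
    grads = tape.gradient(loss, model.trainable_variables)
    optimizer.apply_gradients(zip(grads, model.trainable_variables))
    return loss

  # Eager execution is easier to debug; tf.function compiles the step into a
  # graph for speed when run_eagerly is False.
  return train_step if run_eagerly else tf.function(train_step)
```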
- 07 Aug, 2019 1 commit
Nimit Nigania authored
- 06 Aug, 2019 1 commit
Toby Boyd authored
* force_v2_in_keras_compile FLAG defaults to None; added a separate temp path.
* Switch to force-testing the v1 path, not the v2 path.
* Rename function force_v1_path.
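A minimal sketch of a flag that defaults to None so it is only forwarded when explicitly set, matching the first bullet above. The compile keyword shown (experimental_run_tf_function) is the TF 2.0/2.1-era switch between the v1 and v2 Keras execution paths; the wiring around it is an assumption.

```python
from absl import flags

FLAGS = flags.FLAGS
flags.DEFINE_boolean(
    "force_v2_in_keras_compile", None,
    "If set, explicitly choose the v2 (tf.function) path in Keras compile; "
    "leave as None to keep the framework default.")


def extra_compile_kwargs(force_v2):
  # Forward the option only when the flag was actually set, so an unset flag
  # preserves whatever the installed TF version does by default.
  if force_v2 is None:
    return {}
  return {"experimental_run_tf_function": force_v2}


# Illustrative usage:
#   model.compile(optimizer, loss,
#                 **extra_compile_kwargs(FLAGS.force_v2_in_keras_compile))
```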
- 01 Aug, 2019 1 commit
Haoyu Zhang authored
- 30 Jul, 2019 1 commit
Igor authored
- 29 Jul, 2019 1 commit
Hongjun Choi authored
260228553 by priyag<priyag@google.com>: Enable transformer and NCF official model tests. Also fix some minor issues so that all tests pass with TF 1 + enable_v2_behavior.
--
260043210 by A. Unique TensorFlower<gardener@tensorflow.org>: Add logic to train NCF model using offline generated data.
--
259778607 by priyag<priyag@google.com>: Internal change
--
259656389 by hongkuny<hongkuny@google.com>: Internal change
PiperOrigin-RevId: 260228553
- 25 Jul, 2019 1 commit
Toby Boyd authored
- 24 Jul, 2019 2 commits
- 23 Jul, 2019 1 commit
Toby Boyd authored
* Add force_run_distributed tests.
* Added enable_eager.
* r/force_run_distributed/force_v2_in_keras_compile
* Adding force_v2 tests and FLAGs.
* Rename method to avoid conflict.
* Add cpu force_v2 tests.
* Fix lint, wrap line.
* Change to force_v2_in_keras_compile.
* Update method name.
* Lower mlperf target to 0.736.
- 08 Jul, 2019 1 commit
Toby Boyd authored
- 24 Jun, 2019 1 commit
nnigania authored
- 21 Jun, 2019 1 commit
Toby Boyd authored
* XLA FP32 and first test.
* More XLA benchmarks FP32.
* Add eager to NCF and refactor resnet.
* Fix v2_0 calls and more flag refactor.
* Remove extra flag args.
* 90 epoch default.
* Add return.
* Remove xla not used by estimator.
* Remove duplicate run_eagerly.
* Fix flag defaults.
* Remove fp16_implementation flag option.
* Remove stop early on mlperf test.
* Remove unneeded args.
* Load flags from keras mains.
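For the XLA benchmark variants listed above, a minimal sketch of how XLA auto-clustering is typically enabled in this kind of Keras benchmark; tf.config.optimizer.set_jit is a real TensorFlow API, but the flag plumbing around it is an assumption.

```python
import tensorflow as tf

def configure_xla(enable_xla: bool):
  # Enable XLA auto-clustering so eligible ops are fused and compiled.
  # The FP32 variants simply run with this on and the default float32 dtype.
  if enable_xla:
    tf.config.optimizer.set_jit(True)
```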
- 18 Jun, 2019 1 commit
nnigania authored
* Adding a new perf test for NCF, and changing some names.
* Added a change to make NCF use the data from the GCP bucket, and removed the need to re-download data more than 1 day old. Reorganized the perf-zero tests.
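A minimal sketch of the caching behaviour described in the second bullet: once the data has been copied locally from the bucket, it is reused instead of being fetched again because it is more than a day old. The path and the download helper are placeholders, not the actual test code.

```python
import os

def ensure_ncf_data(local_path, download_from_gcs):
  """Fetches the dataset once; an existing local copy is reused regardless of age."""
  if not os.path.exists(local_path):
    download_from_gcs(local_path)  # e.g. copy from the shared GCS bucket.
  return local_path
```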