  1. 03 Dec, 2019 1 commit
  2. 25 Nov, 2019 1 commit
    • Inject enable_runtime_flags into benchmarks. · bcce419a
      Sai Ganesh Bandiatmakuri authored
      This will help general debugging by enabling custom execution with --benchmark_method_steps.
      
      E.g., --benchmark_method_steps=train_steps=7 will run the benchmark for only 7 training steps without modifying the benchmark code.
      
      PiperOrigin-RevId: 282396875
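      
      A minimal sketch of how a step-count override like this could be parsed and injected into a benchmark method. The key=value flag format is taken from the example above, but the parse_overrides helper and the run_ncf_benchmark target below are purely illustrative, not the actual benchmark harness code.
      
      def parse_overrides(spec):
          # Turn 'train_steps=7,eval_steps=2' into {'train_steps': 7, 'eval_steps': 2}.
          overrides = {}
          for pair in spec.split(','):
              key, _, value = pair.partition('=')
              overrides[key.strip()] = int(value)
          return overrides
      
      def run_ncf_benchmark(train_steps=10000, eval_steps=100):
          # Stand-in for the real benchmark method; it just reports what it would run.
          print('Running %d train steps, %d eval steps' % (train_steps, eval_steps))
      
      # E.g. the value passed via --benchmark_method_steps=train_steps=7:
      run_ncf_benchmark(**parse_overrides('train_steps=7'))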
  3. 16 Oct, 2019 1 commit
    • Add support for the tf.keras.mixed_precision API in NCF · cb913691
      Reed Wanderman-Milne authored
      To test, I did 50 fp32 runs and 50 fp16 runs. I used the following command:
      
      python ncf_keras_main.py --dataset=ml-20m --num_gpus=1 --train_epochs=10 --clean --batch_size=99000 --learning_rate=0.00382059 --beta1=0.783529 --beta2=0.909003 --epsilon=1.45439e-7 --layers=256,256,128,64 --num_factors=64 --hr_threshold=0.635 --ml_perf --nouse_synthetic_data --data_dir ~/ncf_data_dir_python3 --model_dir ~/tmp_model_dir --keras_use_ctl
      
      For the fp16 runs, I added --dtype=fp16. The average hit-rate for both fp16 and fp32 was 0.6365. I also did 50 runs with the mixed precision graph rewrite, and the average hit-rate was 0.6363. The difference is likely due to noise.
      
      PiperOrigin-RevId: 275059871
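      
      For reference, a minimal sketch of turning on the tf.keras.mixed_precision API for a Keras model, independent of the NCF code. The toy model below is an illustration only; also note that in the TF releases this commit targeted, the API lived under tf.keras.mixed_precision.experimental, whereas current TF exposes it as shown.
      
      import tensorflow as tf
      
      # Compute in float16 while keeping variables in float32.
      tf.keras.mixed_precision.set_global_policy('mixed_float16')
      
      model = tf.keras.Sequential([
          tf.keras.layers.Dense(256, activation='relu', input_shape=(64,)),
          # Keep the final layer in float32 so the loss stays numerically stable.
          tf.keras.layers.Dense(10, dtype='float32'),
      ])
      
      # Loss scaling guards against float16 gradient underflow; a custom training
      # loop (as with --keras_use_ctl) wraps the optimizer explicitly like this.
      opt = tf.keras.mixed_precision.LossScaleOptimizer(tf.keras.optimizers.Adam())
      model.compile(optimizer=opt,
                    loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True))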
  4. 10 Oct, 2019 2 commits
  5. 23 Sep, 2019 1 commit
  6. 10 Sep, 2019 1 commit
  7. 09 Sep, 2019 1 commit
  8. 29 Aug, 2019 1 commit
  9. 28 Aug, 2019 1 commit
  10. 19 Aug, 2019 1 commit
  11. 14 Aug, 2019 4 commits
  12. 13 Aug, 2019 3 commits
  13. 09 Aug, 2019 4 commits
  14. 07 Aug, 2019 1 commit
  15. 06 Aug, 2019 1 commit
  16. 01 Aug, 2019 1 commit
  17. 30 Jul, 2019 1 commit
  18. 29 Jul, 2019 1 commit
    • Merged commit includes the following changes: (#7322) · 803f833c
      Hongjun Choi authored
      260228553  by priyag<priyag@google.com>:
      
          Enable transformer and NCF official model tests. Also fix some minor issues so that all tests pass with TF 1 + enable_v2_behavior.
      
      --
      260043210  by A. Unique TensorFlower<gardener@tensorflow.org>:
      
          Add logic to train NCF model using offline generated data.
      
      --
      259778607  by priyag<priyag@google.com>:
      
          Internal change
      
      259656389  by hongkuny<hongkuny@google.com>:
      
          Internal change
      
      PiperOrigin-RevId: 260228553
  19. 25 Jul, 2019 1 commit
  20. 24 Jul, 2019 2 commits
  21. 23 Jul, 2019 1 commit
  22. 08 Jul, 2019 1 commit
  23. 24 Jun, 2019 1 commit
  24. 21 Jun, 2019 1 commit
    • NCF XLA and Eager tests with a refactor of resnet flags to make this cleaner. (#7067) · a68f65f8
      Toby Boyd authored
      * XLA FP32 and first test.
      
      * More XLA benchmarks for FP32.
      
      * Add eager execution to NCF and refactor ResNet.
      
      * Fix v2_0 calls and further flag refactoring.
      
      * Remove extra flag args.
      
      * Default to 90 epochs.
      
      * Add missing return.
      
      * Remove XLA option not used by the Estimator path.
      
      * Remove duplicate run_eagerly.
      
      * Fix flag defaults.
      
      * Remove the fp16_implementation flag option.
      
      * Remove early stopping from the MLPerf test.
      
      * Remove unneeded args.
      
      * Load flags from the Keras mains.
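      
      A minimal sketch of the two toggles these refactored flags expose, XLA JIT compilation and eager execution of the Keras training step. The parameter names below are illustrative rather than the exact flags added by this PR.
      
      import tensorflow as tf
      
      def configure_and_build(enable_xla=False, run_eagerly=False):
          if enable_xla:
              # Let the graph optimizer JIT-compile eligible ops with XLA.
              tf.config.optimizer.set_jit(True)
          model = tf.keras.Sequential([tf.keras.layers.Dense(1, input_shape=(8,))])
          # run_eagerly=True executes the train step op by op: slower, but much
          # easier to debug than the traced tf.function path.
          model.compile(optimizer='sgd', loss='mse', run_eagerly=run_eagerly)
          return model
      
      model = configure_and_build(enable_xla=True, run_eagerly=False)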
  25. 18 Jun, 2019 1 commit
  26. 13 Jun, 2019 1 commit
  27. 05 Jun, 2019 1 commit
  28. 03 Jun, 2019 1 commit
  29. 24 May, 2019 1 commit
  30. 23 May, 2019 1 commit
    • Change batch size and epochs for NCF benchmarks · e8f97a1d
      guptapriya authored
      The current batch size of 160000 does not converge to the desired HR, so we decrease it to 99k, which is known to converge. Tested locally and reached 63.5 at epoch 7. Also decreasing the number of epochs, since there is no improvement after epochs 7-8.