1. 30 Mar, 2023 1 commit
  2. 06 Jun, 2021 4 commits
  3. 10 Apr, 2021 1 commit
      Remove dynamic_loss_scale argument to define_performance. · 3803472a
      Reed Wanderman-Milne authored
      All models that support loss scaling also support dynamic loss scaling, so the argument serves no purpose. Previously, some models scaled the loss manually instead of using a LossScaleOptimizer and therefore did not support dynamic loss scaling.
      
      PiperOrigin-RevId: 367719521
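The dynamic loss scaling this commit relies on can be sketched without TensorFlow. The following is a minimal pure-Python sketch of the usual update rule (halve the scale on overflow, double it after a run of finite steps); the class name and the `growth_interval` default are illustrative, not the LossScaleOptimizer implementation.

```python
class DynamicLossScale:
    """Sketch of a dynamic loss-scale update rule (illustrative)."""

    def __init__(self, initial_scale=2.0 ** 15, growth_interval=2000):
        self.scale = initial_scale
        self.growth_interval = growth_interval
        self._good_steps = 0  # consecutive steps with finite gradients

    def update(self, grads_finite):
        if grads_finite:
            self._good_steps += 1
            if self._good_steps >= self.growth_interval:
                # Long stretch without overflow: try a larger scale.
                self.scale *= 2.0
                self._good_steps = 0
        else:
            # Overflow: halve the scale (the step's gradients are skipped).
            self.scale = max(self.scale / 2.0, 1.0)
            self._good_steps = 0


scale = DynamicLossScale()
scale.update(grads_finite=True)   # a normal training step
scale.update(grads_finite=False)  # an overflow step halves the scale
```

Because the scale adapts automatically, any model that scales its loss through such an optimizer gets dynamic behavior for free, which is why a separate `dynamic_loss_scale` flag is redundant.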
  4. 09 Apr, 2021 1 commit
      Remove dynamic_loss_scale argument to define_performance. · e353e4e5
      Reed Wanderman-Milne authored
      All models that support loss scaling also support dynamic loss scaling, so the argument serves no purpose. Previously, some models scaled the loss manually instead of using a LossScaleOptimizer and therefore did not support dynamic loss scaling.
      
      PiperOrigin-RevId: 367719521
  5. 06 Apr, 2021 2 commits
  6. 28 Feb, 2021 4 commits
  7. 24 Jan, 2021 2 commits
  8. 12 Aug, 2020 2 commits
  9. 29 Apr, 2020 1 commit
  10. 25 Apr, 2020 1 commit
  11. 22 Apr, 2020 1 commit
  12. 17 Mar, 2020 1 commit
  13. 05 Mar, 2020 1 commit
  14. 02 Mar, 2020 1 commit
  15. 25 Feb, 2020 1 commit
  16. 27 Nov, 2019 1 commit
  17. 28 Oct, 2019 1 commit
  18. 21 Oct, 2019 1 commit
  19. 16 Oct, 2019 1 commit
      Add support for the tf.keras.mixed_precision API in NCF · cb913691
      Reed Wanderman-Milne authored
      To test, I did 50 fp32 runs and 50 fp16 runs, using the following command:
      
      python ncf_keras_main.py --dataset=ml-20m --num_gpus=1 --train_epochs=10 --clean --batch_size=99000 --learning_rate=0.00382059 --beta1=0.783529 --beta2=0.909003 --epsilon=1.45439e-7 --layers=256,256,128,64 --num_factors=64 --hr_threshold=0.635 --ml_perf --nouse_synthetic_data --data_dir ~/ncf_data_dir_python3 --model_dir ~/tmp_model_dir --keras_use_ctl
      
      For the fp16 runs, I added --dtype=fp16. The average hit-rate for both fp16 and fp32 was 0.6365. I also did 50 runs with the mixed precision graph rewrite, and the average hit-rate was 0.6363. The difference is likely due to noise.
      
      PiperOrigin-RevId: 275059871
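The reason `--dtype=fp16` needs the mixed precision API's loss scaling at all can be shown without TensorFlow: small gradients that are fine in fp32 underflow to zero in half precision unless the loss is scaled up first. The sketch below round-trips values through IEEE 754 binary16 using the standard-library `struct` module; the loss-scale value is illustrative.

```python
import struct

def to_fp16(x):
    # Round-trip a float through IEEE 754 binary16 (half precision).
    return struct.unpack('<e', struct.pack('<e', x))[0]

grad = 2.0 ** -27            # a small gradient, representable in fp32
assert to_fp16(grad) == 0.0  # but it underflows to zero in fp16

scale = 2.0 ** 10                # illustrative loss scale
scaled = to_fp16(grad * scale)   # 2**-17 is representable in fp16
assert scaled != 0.0
true_grad = scaled / scale       # unscale in fp32 to recover the gradient
assert true_grad == grad
```

Scaling the loss (and hence the gradients) by a power of two and dividing it back out in fp32 is lossless, which is why the fp16 and fp32 hit-rates above match to within noise.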
  20. 07 Oct, 2019 1 commit
  21. 09 Sep, 2019 1 commit
  22. 04 Sep, 2019 1 commit
  23. 30 Aug, 2019 1 commit
  24. 26 Aug, 2019 1 commit
  25. 23 Aug, 2019 1 commit
  26. 20 Aug, 2019 2 commits
  27. 19 Aug, 2019 1 commit
      Do not expose --max_train_steps in models that do not use it. · 824ff2d6
      Reed Wanderman-Milne authored
      Only the V1 resnet model uses --max_train_steps. This change stops exposing the flag in the keras_application_models, mnist, keras resnet, and CTL resnet models. Before this change, those models accepted the flag but ignored it.
      
      I also removed the "max_train" argument from the run_synthetic function, since it was only meaningful for the V1 resnet model. Instead, the V1 resnet model now passes --max_train_steps=1 directly to run_synthetic.
      
      PiperOrigin-RevId: 264269836
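The flag-exposure pattern this commit describes can be sketched with `argparse` (the actual code uses a different flags module, and the helper and flag names besides `--max_train_steps` are illustrative): a model only defines the flags it consumes, so an unused flag is rejected instead of silently ignored.

```python
import argparse

def define_flags(expose_max_train_steps=False):
    # Hypothetical helper: only models that actually consume
    # --max_train_steps define it, mirroring the change above.
    parser = argparse.ArgumentParser()
    parser.add_argument("--train_epochs", type=int, default=1)
    if expose_max_train_steps:
        parser.add_argument("--max_train_steps", type=int, default=None)
    return parser

# A model that exposes the flag parses it normally.
args = define_flags(expose_max_train_steps=True).parse_args(
    ["--max_train_steps=1"])
assert args.max_train_steps == 1

# A model that does not expose it no longer accepts the value silently.
_, unknown = define_flags().parse_known_args(["--max_train_steps=1"])
assert unknown == ["--max_train_steps=1"]
```

Failing loudly on an unsupported flag is the point: previously a user could set --max_train_steps on a model that ignored it and get misleading results.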
  28. 16 Aug, 2019 1 commit
      Add multi-worker benchmarks to Keras ResNet model. · ff6c3b1e
      Ayush Dubey authored
      Also add `worker_hosts` and `task_index` flags.  These flags enable running the
      model across multiple hosts by passing the cluster information on the command line.
      
      Setting `TF_CONFIG` will continue to work.
      
      PiperOrigin-RevId: 263825245
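The relationship between the new flags and `TF_CONFIG` can be sketched as follows. The `make_tf_config` helper and the host addresses are hypothetical, but the `cluster`/`task` layout is TensorFlow's standard `TF_CONFIG` format, so the flags are essentially a command-line route to the same structure.

```python
import json
import os

def make_tf_config(worker_hosts, task_index):
    # Build the TF_CONFIG structure that the worker_hosts and
    # task_index flags stand in for (hypothetical helper).
    return {
        "cluster": {"worker": worker_hosts.split(",")},
        "task": {"type": "worker", "index": task_index},
    }

# Flag-style input: a comma-separated host list plus this host's index.
cfg = make_tf_config("host0:2222,host1:2222", task_index=1)

# Setting TF_CONFIG directly remains the equivalent env-var route.
os.environ["TF_CONFIG"] = json.dumps(cfg)

assert cfg["cluster"]["worker"] == ["host0:2222", "host1:2222"]
assert cfg["task"]["index"] == 1
```

Each worker in the cluster would run with the same `worker_hosts` value but its own `task_index`.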
  29. 06 Aug, 2019 1 commit
  30. 23 Jul, 2019 1 commit