1. 06 Apr, 2021 1 commit
  2. 28 Feb, 2021 2 commits
  3. 24 Jan, 2021 1 commit
  4. 12 Aug, 2020 2 commits
  5. 29 Apr, 2020 1 commit
  6. 25 Apr, 2020 1 commit
  7. 22 Apr, 2020 1 commit
  8. 17 Mar, 2020 1 commit
  9. 05 Mar, 2020 1 commit
  10. 02 Mar, 2020 1 commit
  11. 25 Feb, 2020 1 commit
  12. 27 Nov, 2019 1 commit
  13. 28 Oct, 2019 1 commit
  14. 21 Oct, 2019 1 commit
  15. 16 Oct, 2019 1 commit
    • Add support for the tf.keras.mixed_precision API in NCF · cb913691
      Reed Wanderman-Milne authored
      To test, I did 50 fp32 runs and 50 fp16 runs. I used the following command:
      
      python ncf_keras_main.py --dataset=ml-20m --num_gpus=1 --train_epochs=10 --clean --batch_size=99000 --learning_rate=0.00382059 --beta1=0.783529 --beta2=0.909003 --epsilon=1.45439e-7 --layers=256,256,128,64 --num_factors=64 --hr_threshold=0.635 --ml_perf --nouse_synthetic_data --data_dir ~/ncf_data_dir_python3 --model_dir ~/tmp_model_dir --keras_use_ctl
      
      For the fp16 runs, I added --dtype=fp16. The average hit-rate for both fp16 and fp32 was 0.6365. I also did 50 runs with the mixed precision graph rewrite, and the average hit-rate was 0.6363. The difference is likely due to noise.
      
      PiperOrigin-RevId: 275059871
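For readers unfamiliar with why --dtype=fp16 needs mixed-precision machinery at all: small gradients underflow to zero in fp16, which is what loss scaling prevents. A minimal, framework-free sketch of the idea — the gradient value and the fixed scale of 2**10 are illustrative assumptions, not NCF's actual values — using Python's stdlib half-precision `struct` format:

```python
import struct

def round_to_fp16(x):
    # Round a Python float to the nearest IEEE half-precision value
    # ('e' is the stdlib half-float struct format, Python 3.6+).
    return struct.unpack('<e', struct.pack('<e', x))[0]

LOSS_SCALE = 2.0 ** 10   # illustrative fixed loss scale
tiny_grad = 1e-8         # a gradient too small to represent in fp16

# Without scaling, the gradient underflows to zero in fp16:
vanished = round_to_fp16(tiny_grad)

# With loss scaling: multiply the loss (and hence its gradients) by the
# scale before the fp16 cast, then divide the scale back out in fp32.
scaled = round_to_fp16(tiny_grad * LOSS_SCALE)
recovered = scaled / LOSS_SCALE
```

The recovered gradient is close to (not bit-identical to) the original, since fp16 still rounds the scaled value; the point is that it no longer vanishes.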
  16. 07 Oct, 2019 1 commit
  17. 09 Sep, 2019 1 commit
  18. 04 Sep, 2019 1 commit
  19. 30 Aug, 2019 1 commit
  20. 26 Aug, 2019 1 commit
  21. 23 Aug, 2019 1 commit
  22. 20 Aug, 2019 2 commits
  23. 19 Aug, 2019 1 commit
    • Do not expose --max_train_steps in models that do not use it. · 824ff2d6
      Reed Wanderman-Milne authored
       Only the V1 resnet model uses --max_train_steps. This change removes the flag from the keras_application_models, mnist, keras resnet, and CTL resnet models. Before this change, those models accepted the flag but silently ignored it.
      
       I also removed the "max_train" argument from the run_synthetic function, since it was only meaningful for the V1 resnet model. Instead, the V1 resnet model now directly passes --max_train_steps=1 to run_synthetic.
      
      PiperOrigin-RevId: 264269836
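The pattern the commit describes — register a flag only for the models that actually read it, so a stray flag fails loudly instead of being silently ignored — can be sketched with stdlib argparse. The helper below is hypothetical, not the official-models flag machinery:

```python
import argparse

def define_base_flags(parser, include_max_train_steps=False):
    # Flags every model shares.
    parser.add_argument("--train_epochs", type=int, default=1)
    # Only a model that actually consumes --max_train_steps registers it.
    if include_max_train_steps:
        parser.add_argument("--max_train_steps", type=int, default=None)

# V1-resnet-style model: flag exposed and honored.
resnet_parser = argparse.ArgumentParser()
define_base_flags(resnet_parser, include_max_train_steps=True)
resnet_args = resnet_parser.parse_args(["--max_train_steps", "1"])

# Keras-style model: flag not registered, so passing it is now an
# error rather than a silent no-op.
keras_parser = argparse.ArgumentParser()
define_base_flags(keras_parser)
```

Rejecting the unknown flag at parse time is the behavioral change the commit is after: users find out immediately that the flag has no effect on that model.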
  24. 16 Aug, 2019 1 commit
    • Add multi-worker benchmarks to Keras ResNet model. · ff6c3b1e
      Ayush Dubey authored
       Also add `worker_hosts` and `task_index` flags. These flags enable running the
       model over multiple hosts by passing the cluster information via the command line.
      
      Setting `TF_CONFIG` will continue to work.
      
      PiperOrigin-RevId: 263825245
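For context, `TF_CONFIG` is a JSON environment variable describing the cluster, so the new flags simply offer a second way to supply the same information. A sketch of the translation — the helper name is hypothetical, though the JSON layout follows the standard `TF_CONFIG` schema:

```python
import json
import os

def tf_config_from_flags(worker_hosts, task_index):
    """Build a TF_CONFIG-style dict from --worker_hosts and --task_index."""
    workers = worker_hosts.split(",")
    return {
        "cluster": {"worker": workers},
        "task": {"type": "worker", "index": task_index},
    }

# Equivalent of --worker_hosts=host0:2222,host1:2222 --task_index=1
config = tf_config_from_flags("host0:2222,host1:2222", task_index=1)
os.environ["TF_CONFIG"] = json.dumps(config)
```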
  25. 06 Aug, 2019 1 commit
  26. 23 Jul, 2019 1 commit
  27. 21 Jun, 2019 2 commits
  28. 19 Jun, 2019 1 commit
    • Add XLA to transformer (#7048) · 269581dc
      Toby Boyd authored
      
      
      * set default steps to 300K.
      
      * Log flags to perfzero.
      
      * Add XLA support to transformer
      
      - Moved config logic to keras_utils
      - Added enable_xla flag to _performance flags
       - Did not refactor the enable_xla flag out of keras resnet: the
         estimator keras path relies on reading FLAGS directly, and
         untangling that is a refactor for another time.
      
       * Fix g3 lint complaint.
      
      * Refactor set config into keras_utils.
      
      * Move flags out of main.
      
      * pipe through enable_xla
      
      * Update official/transformer/v2/misc.py
       Co-Authored-By: Reed <reedwm@google.com>
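A rough sketch of the flag plumbing described above: an `--enable_xla` switch defined alongside the other performance flags and piped through to config setup. The config function is a stand-in (in TensorFlow the actual toggle would be something like `tf.config.optimizer.set_jit`), so the sketch runs without TensorFlow:

```python
import argparse

def define_performance_flags(parser):
    # enable_xla lives alongside the other performance flags.
    parser.add_argument("--enable_xla", action="store_true",
                        help="Compile parts of the model with XLA JIT.")

def set_config(enable_xla):
    # Stand-in for the real toggle so this runs without TensorFlow.
    return {"xla_jit_enabled": enable_xla}

parser = argparse.ArgumentParser()
define_performance_flags(parser)
args = parser.parse_args(["--enable_xla"])
session_config = set_config(args.enable_xla)
```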
  29. 06 Jun, 2019 1 commit
  30. 18 May, 2019 1 commit
  31. 15 May, 2019 1 commit
  32. 11 May, 2019 1 commit
  33. 01 May, 2019 1 commit
    • Add --fp16_implementation option. (#6703) · b691578c
      Reed authored
       This option allows the new tf.train.experimental.enable_mixed_precision_graph_rewrite() function to be used for fp16 instead of manual casts.
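A hedged sketch of how such a switch might dispatch between the two fp16 paths. The function and its return convention are illustrative only; the real graph-rewrite path would call the tf.train.experimental.enable_mixed_precision_graph_rewrite() function the commit names:

```python
def select_fp16_implementation(fp16_implementation, optimizer):
    """Return (path_name, optimizer) for the chosen fp16 implementation."""
    if fp16_implementation == "graph_rewrite":
        # Real code would wrap the optimizer via
        # tf.train.experimental.enable_mixed_precision_graph_rewrite(optimizer).
        return "graph_rewrite", optimizer
    if fp16_implementation == "casting":
        # Default path: the model performs manual fp16 casts itself.
        return "casting", optimizer
    raise ValueError("Unknown --fp16_implementation: %r" % fp16_implementation)

impl, opt = select_fp16_implementation("graph_rewrite", optimizer="sgd")
```

Raising on an unrecognized value mirrors how an enum-valued flag would behave: the two spellings are the only supported implementations.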
  34. 26 Apr, 2019 2 commits
  35. 03 Apr, 2019 1 commit