1. 30 Mar, 2020 1 commit
  2. 17 Mar, 2020 1 commit
  3. 27 Jan, 2020 1 commit
  4. 04 Jan, 2020 1 commit
  5. 14 Dec, 2019 2 commits
  6. 11 Oct, 2019 3 commits
  7. 09 Sep, 2019 1 commit
  8. 04 Sep, 2019 1 commit
  9. 26 Aug, 2019 1 commit
  10. 23 Aug, 2019 1 commit
  11. 20 Aug, 2019 1 commit
  12. 19 Aug, 2019 1 commit
      Do not expose --max_train_steps in models that do not use it. · 824ff2d6
      Reed Wanderman-Milne authored
      Only the V1 resnet model uses --max_train_steps. This change unexposes the flag in the keras_application_models, mnist, keras resnet, and CTL resnet models. Before this change, those models accepted the flag but ignored it.
      
      I also removed the "max_train" argument from the run_synthetic function, since it only had meaning for the V1 resnet model. Instead, the V1 resnet model now passes --max_train_steps=1 directly to run_synthetic.
      
      PiperOrigin-RevId: 264269836
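      A minimal sketch of the gating pattern this commit describes, using absl flags; define_resnet_flags and its keyword argument are illustrative names, not taken from the repo:

          from absl import flags

          def define_resnet_flags(expose_max_train_steps=False):
            # Only the V1 ResNet entry point would pass True here, so other
            # models never register the flag and can no longer silently
            # accept-and-ignore it.
            if expose_max_train_steps:
              flags.DEFINE_integer(
                  'max_train_steps', None,
                  'Stop training after this many steps (V1 ResNet only).')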
  13. 16 Aug, 2019 1 commit
      Add multi-worker benchmarks to Keras ResNet model. · ff6c3b1e
      Ayush Dubey authored
      Also add `worker_hosts` and `task_index` flags. These flags enable running the
      model over multiple hosts by passing the cluster information via the command line.
      
      Setting `TF_CONFIG` will continue to work.
      
      PiperOrigin-RevId: 263825245
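      A sketch of how such flags typically map onto TF_CONFIG, which is also why setting TF_CONFIG directly keeps working; the helper below is illustrative, not necessarily the commit's implementation:

          import json
          import os

          def configure_cluster(worker_hosts=None, task_index=-1):
            """Builds TF_CONFIG from a comma-separated host list; returns worker count."""
            if not worker_hosts:
              return 1  # Single-host run; leave any existing TF_CONFIG alone.
            workers = worker_hosts.split(',')
            if len(workers) > 1 and task_index < 0:
              raise ValueError('--task_index is required with multiple --worker_hosts.')
            os.environ['TF_CONFIG'] = json.dumps({
                'cluster': {'worker': workers},
                'task': {'type': 'worker', 'index': task_index},
            })
            return len(workers)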
  14. 01 Aug, 2019 1 commit
  15. 21 Jun, 2019 1 commit
      NCF XLA and Eager tests with a refactor of resnet flags to make this cleaner. (#7067) · a68f65f8
      Toby Boyd authored
      * XLA FP32 and first test
      
      * More XLA benchmarks FP32.
      
      * Add eager to NCF and refactor resnet.
      
      * fix v2_0 calls and more flag refactor.
      
      * Remove extra flag args.
      
      * 90 epoch default
      
      * add return
      
      * remove xla not used by estimator.
      
      * Remove duplicate run_eagerly.
      
      * fix flag defaults.
      
      * Remove fp16_implementation flag option.
      
      * Remove stop early on mlperf test.
      
      * remove unneeded args.
      
      * load flags from keras mains.
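      As a hedged sketch of what an enable_xla flag usually toggles on the TF 2.x eager/Keras path (the wiring below is illustrative, not this PR's code):

          import tensorflow as tf

          def apply_performance_flags(enable_xla=False):
            if enable_xla:
              # Compile supported ops with the XLA JIT.
              tf.config.optimizer.set_jit(True)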
  16. 19 Jun, 2019 1 commit
      Add XLA to transformer (#7048) · 269581dc
      Toby Boyd authored
      * set default steps to 300K.
      
      * Log flags to perfzero.
      
      * Add XLA support to transformer
      
      - Moved config logic to keras_utils
      - Added enable_xla flag to _performance flags
      - Did not refactor the enable_xla flag out of keras resnet, because
        estimator keras relies on reading FLAGS directly; that refactor is
        left for another time (a session-config sketch follows this entry).
      
      * fix g3 lint complaint.
      
      * Refactor set config into keras_utils.
      
      * Move flags out of main.
      
      * pipe through enable_xla
      
      * Update official/transformer/v2/misc.py
      Co-Authored-By: Reed <reedwm@google.com>
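      The estimator/graph path mentioned above configures XLA through the session config rather than per-op; a minimal sketch of that keras_utils-style helper, assuming the TF 1.x-compatible API (the function name is illustrative):

          import tensorflow as tf

          def get_config_proto(enable_xla=False):
            config = tf.compat.v1.ConfigProto()
            if enable_xla:
              # Raise the global JIT level so XLA compiles eligible op clusters.
              config.graph_options.optimizer_options.global_jit_level = (
                  tf.compat.v1.OptimizerOptions.ON_2)
            return config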
  17. 14 Jun, 2019 1 commit
  18. 06 Jun, 2019 1 commit
  19. 18 May, 2019 1 commit
  20. 15 May, 2019 2 commits
  21. 11 May, 2019 1 commit
  22. 01 May, 2019 1 commit
      Add --fp16_implementation option. (#6703) · b691578c
      Reed authored
      This option allows the new tf.train.experimental.enable_mixed_precision_graph_rewrite() function to be used for fp16 instead of manual casts.
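      A minimal sketch of how such a flag could select between the two fp16 paths, assuming a TF release where the graph-rewrite API is still available (it was later deprecated); the wrapper function and the 'casting' value are illustrative:

          import tensorflow as tf

          def wrap_optimizer(opt, fp16_implementation='casting'):
            if fp16_implementation == 'graph_rewrite':
              # The rewrite inserts float16 casts and loss scaling into the
              # graph automatically, replacing the manual-cast code path.
              opt = tf.train.experimental.enable_mixed_precision_graph_rewrite(opt)
            return opt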
  23. 29 Apr, 2019 1 commit
  24. 26 Apr, 2019 2 commits
  25. 18 Apr, 2019 1 commit
  26. 17 Apr, 2019 1 commit
  27. 11 Apr, 2019 1 commit
  28. 03 Apr, 2019 2 commits
  29. 02 Apr, 2019 1 commit
  30. 30 Mar, 2019 1 commit
  31. 28 Mar, 2019 2 commits
  32. 19 Mar, 2019 1 commit
  33. 12 Mar, 2019 1 commit