1. 24 Jul, 2019 2 commits
  2. 23 Jul, 2019 1 commit
  3. 20 Jul, 2019 1 commit
  4. 11 Jul, 2019 3 commits
  5. 08 Jul, 2019 1 commit
  6. 03 Jul, 2019 1 commit
      Unit tests pass TF 2.0 GPU and CPU locally. (#7101) · 49097655
      Toby Boyd authored
      * Fix unit tests failures.
      
      * 96% of TF 2.0 tests on GPU are passing.
      
      * All tests currently passing on TF 2.0 GPU and CPU.
      
      * Address code comments.
      
      * use tf 2.0 cast.
      
      * Comment about working on TF 2.0 CPU
      
      * Use the contrib turn-off for TF 2.0.
      
      * Fix wide_deep and add keras_common_tests.
      
      * use context to get num_gpus.
      
      * Switch to tf.keras.metrics
  7. 28 Jun, 2019 1 commit
  8. 22 Jun, 2019 1 commit
  9. 21 Jun, 2019 2 commits
  10. 20 Jun, 2019 2 commits
  11. 19 Jun, 2019 2 commits
      Add mixed precision support to Transformer (#7011) · f8ec01ae
      Reed authored
      Add XLA to transformer (#7048) · 269581dc
      Toby Boyd authored
      * set default steps to 300K.
      
      * Log flags to perfzero.
      
      * Add XLA support to transformer
      
      - Moved config logic to keras_utils
      - Added enable_xla flag to _performance flags
      - Did not refactor the enable_xla flag out of Keras ResNet, because it
        relies on reading FLAGS in the Estimator Keras path; that refactor is
        needed but left for another time.
      
      * fix g3 lint complaint.
      
      * Refactor set config into keras_utils.
      
      * Move flags out of main.
      
      * pipe through enable_xla
      
      * Update official/transformer/v2/misc.py
      Co-Authored-By: Reed <reedwm@google.com>
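The enable_xla flag described in the commit above can be sketched roughly as follows. This is a hypothetical minimal version using argparse; the actual repo defines its flags with absl and wires them through keras_utils, and the flag ultimately toggles XLA JIT compilation (e.g. via tf.config.optimizer.set_jit).

```python
import argparse

# Hypothetical sketch of the `--enable_xla` performance flag; the flag
# name mirrors the commit message, but this is not the repo's real code.
def define_performance_flags(parser):
    parser.add_argument(
        "--enable_xla",
        action="store_true",
        default=False,
        help="If set, compile the model with XLA JIT "
             "(e.g. via tf.config.optimizer.set_jit).",
    )
    return parser

parser = define_performance_flags(argparse.ArgumentParser())
args = parser.parse_args(["--enable_xla"])
print(args.enable_xla)  # True
```

Keeping the flag definition in a shared helper like this is what lets both the Transformer and ResNet entry points reuse it, which is the refactor the commit message alludes to.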
  12. 18 Jun, 2019 1 commit
  13. 11 Jun, 2019 1 commit
  14. 06 Jun, 2019 3 commits
  15. 05 Jun, 2019 7 commits
  16. 31 May, 2019 2 commits
  17. 29 May, 2019 4 commits
  18. 28 May, 2019 5 commits
      Add static batch benchmarks to estimator (#6886) · 383c6e30
      guptapriya authored
      * Add static batch benchmarks to estimator 
      
      So we can distinguish how much static vs. dynamic batching matters.
      
      * change max_length for static_batch tests
      
      * Add flag for max length
      Make 'off' a string literal. · 3928d481
      Igor authored
      Turn dist strat off for 1 GPU benchmarks · 2be9ba5b
      guptapriya authored
      undo shuffle change · df523d91
      guptapriya authored
      This is not going to help with current tf.data semantics, so removing it.
      Add distribute strategies to transformer. (#6883) · b9c1d1ca
      Igor authored
      * Fixes that make transformer run.
      
      * Remove debug print statements.
      
      * Changed the permissions to 644.
      
      * Fix the rest of the permissions.
      
      * enable static batch in all benchmarks
      
      * Restrict dist strat hack to training mode
      
      For now we will do predict/eval without dist strat, so remove that hack in non-training cases.
      
      * Use `inputs` instead of `x` as arg name for call
      
      Keras behaves differently depending on whether the argument is named `inputs`. Using `inputs` gives the expected behavior.
      
      * Avoid extra map fn on input in dist strat case
      
      * Update how we handle custom metrics
      
      This new approach works with and without dist strat. The previous one didn't work with dist strat; we need to fix that, but this is reasonable in the meantime (b/133724664).
      
      * Update benchmarks
      
      * Fix typo in metrics code.
      
      * Revert metrics change
      
      Didn't actually work in the distributed case.
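The `inputs`-naming point in the commit above can be illustrated with a small signature check. This is a hypothetical stand-in for what Keras does internally when it inspects a layer's `call` method, not the real Keras code; it only shows the shape of the check.

```python
import inspect

# Hypothetical illustration: a framework can inspect a layer's `call`
# signature and special-case the conventional `inputs` argument name.
def first_arg_named_inputs(call_fn):
    params = [p for p in inspect.signature(call_fn).parameters
              if p != "self"]
    return bool(params) and params[0] == "inputs"

# Two candidate `call` signatures, as in the commit's before/after.
def call_with_inputs(inputs, training=False):
    return inputs

def call_with_x(x, training=False):
    return x

print(first_arg_named_inputs(call_with_inputs))  # True
print(first_arg_named_inputs(call_with_x))       # False
```

Because the framework keys behavior off the argument name, renaming `x` to `inputs` in the Transformer's `call` is enough to opt into the expected code path.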