1. 01 Aug, 2019 3 commits
    • Move the official ResNet (estimator version) under `official/r1` (#7355) · 87542800
      Haoyu Zhang authored
      * Restructure the ResNet estimator code to live under official/r1
      
      * Continue moving resnet code...
      
      * Improved README.md
    • Merged commit includes the following changes: (#7354) · dc4c5f1a
      Haoyu Zhang authored
      261171038  by gjn<gjn@google.com>:
      
          Remove weight_decay_rate 0 early exit check
      
          Removing this code path is fine because it never did what it was
          meant to do. Since weight_decay_rate is actually a tensor, the
          equality check only compared the object's id against 0, which can
          never be true. Evaluating the tensor is also not what we want to do
          at this point in the code, so the check can simply be removed.
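
          For illustration, here is a minimal sketch of the kind of guard being
          removed (hypothetical function and argument names, not the actual
          optimizer code). With a plain Python number the early exit works; with
          a tf.Tensor compared by object identity, `weight_decay_rate == 0` is
          never True, so the branch is dead code:

          def apply_weight_decay(grad, var, weight_decay_rate):
              # The early exit only behaves as intended when weight_decay_rate
              # is a Python number; for a Tensor compared by id() this is never
              # True, and evaluating the tensor here is not wanted anyway.
              if weight_decay_rate == 0:
                  return grad
              return grad + weight_decay_rate * var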
      
      --
      261169862  by haoyuzhang<haoyuzhang@google.com>:
      
          Internal change
      
      261153520  by haoyuzhang<haoyuzhang@google.com>:
      
          Internal change
      
      261140302  by hongkuny<hongkuny@google.com>:
      
          Clean up
      
      --
      
      PiperOrigin-RevId: 261171038
    • Remove whitespaces from empty lines (#7353) · 144bc3c2
      Haoyu Zhang authored
  2. 31 Jul, 2019 1 commit
  3. 30 Jul, 2019 1 commit
  4. 25 Jul, 2019 1 commit
  5. 24 Jul, 2019 2 commits
  6. 23 Jul, 2019 1 commit
  7. 19 Jul, 2019 2 commits
    • Merged commit includes the following changes: (#7264) · 6f47c378
      Igor authored
      259030078  by isaprykin<isaprykin@google.com>:
      
          Clean up the --clone_model_in_keras_dist_strat flag from Keras ResNet.

          The cloning flag has been removed. The current rule is that cloning
          only happens in graph mode, which made the eager+no-cloning and
          eager+cloning benchmarks duplicates of each other, so the
          eager+cloning ones were removed.
      
      --
      259026454  by isaprykin<isaprykin@google.com>:
      
          Internal change
      
      PiperOrigin-RevId: 259030078
    • Merged commit includes the following changes: (#7263) · c5a4978d
      Jing Li authored
      * Merged commit includes the following changes:
      258867180  by jingli<jingli@google.com>:
      
          Add new folders for upcoming reorg in model garden.
      
      --
      258893811  by hongkuny<hongkuny@google.com>:
      
          Adds summaries for metrics, allowing metrics inside keras.model.
      
      --
      258893048  by isaprykin<isaprykin@google.com>:
      
          Remove the `cloning` argument to `compile()`.
      
          As of change 258652546, Keras models are distributed by cloning in
          graph mode and without cloning in eager mode.
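
          As a minimal sketch (not the model-garden code itself), compiling a
          Keras model under a distribution strategy now takes no cloning
          argument; the behavior is picked automatically:

          import tensorflow as tf

          strategy = tf.distribute.MirroredStrategy()
          with strategy.scope():
              model = tf.keras.Sequential([tf.keras.layers.Dense(10)])
              # No `cloning` argument: cloning happens in graph mode and is
              # skipped in eager mode, per the change described above.
              model.compile(optimizer="sgd", loss="mse")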
      
      --
      258881002  by hongkuny<hongkuny@google.com>:
      
          Fix lint.
      
      --
      258874998  by hongkuny<hongkuny@google.com>:
      
          Internal
      
      --
      258872662  by hongkuny<hongkuny@google.com>:
      
          Fix doc
      
      --
      
      PiperOrigin-RevId: 258867180
      
      * Create __init__.py
      
      * Update __init__.py
      
      * Update __init__.py
      
      * Update __init__.py
  8. 18 Jul, 2019 1 commit
    • Improve Keras graph performance for ResNet56 (#7241) · dd5a91d3
      Haoyu Zhang authored
      * Configure the threadpool, cuDNN persistent batch norm, and the Grappler
        layout optimizer properly for ResNet56

      * Add tweaked tests for ResNet56

      * Avoid the last-partial-batch overhead by explicitly dropping the
        remainder
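
      As a minimal sketch of the last point (illustrative values, not the
      ResNet56 input pipeline), dropping the remainder keeps every batch at the
      full size, so the final partial batch never triggers the extra overhead:

      import tensorflow as tf

      # Hypothetical dataset; batch(..., drop_remainder=True) discards the
      # final incomplete batch so every step sees a fixed batch size.
      dataset = tf.data.Dataset.range(1000)
      dataset = dataset.batch(128, drop_remainder=True)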
  9. 11 Jul, 2019 2 commits
  10. 09 Jul, 2019 1 commit
  11. 03 Jul, 2019 1 commit
    • Unit tests pass on TF 2.0 GPU and CPU locally (#7101) · 49097655
      Toby Boyd authored
      * Fix unit test failures.
      
      * 96% of TF 2.0 tests on GPU are passing.
      
      * Currently all tests passing on GPU and CPU with TF 2.0
      
      * Address code comments.
      
      * use tf 2.0 cast.
      
      * Comment about working on TF 2.0 CPU
      
      * Uses contrib turn off for TF 2.0.
      
      * Fix wide_deep and add keras_common_tests.
      
      * use context to get num_gpus.
      
      * Switch to tf.keras.metrics
  12. 22 Jun, 2019 1 commit
  13. 21 Jun, 2019 2 commits
  14. 20 Jun, 2019 4 commits
  15. 19 Jun, 2019 4 commits
    • Add XLA to transformer (#7048) · 269581dc
      Toby Boyd authored
      * set default steps to 300K.
      
      * Log flags to perfzero.
      
      * Add XLA support to transformer (see the sketch at the end of this entry)

      - Moved config logic to keras_utils
      - Added enable_xla flag to _performance flags
      - Did not refactor the enable_xla flag out of Keras ResNet, because it
        relies on reading FLAGS in estimator Keras; that refactor is left for
        another time.
      
      * fix g3 lint complaint.
      
      * Refactor set config into keras_utils.
      
      * Move flags out of main.
      
      * pipe through enable_xla
      
      * Update official/transformer/v2/misc.py
      Co-Authored-By: Reed <reedwm@google.com>
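
      A minimal sketch of what an enable_xla flag typically toggles (the flag
      wiring here is an assumption, not the transformer code itself): turning
      on XLA JIT compilation globally via tf.config.optimizer.set_jit:

      import tensorflow as tf

      def maybe_enable_xla(enable_xla):
          # Assumed wiring: when the flag is set, enable XLA JIT compilation
          # for the whole program.
          if enable_xla:
              tf.config.optimizer.set_jit(True)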
    • Add flags info when reporting benchmarks (#7056) · 1e527fb5
      anj-s authored
      * first version of ctl
      
      * fix indent
      
      * remove monkey patching for core
      
      * add dtype arg
      
      * fix dtype arg
      
      * add logging lib
      
      * remove compat.v1.logging
      
      * add datetime import
      
      * fix FLAGS import
      
      * add constant vals
      
      * move to using as tf import
      
      * move to using as tf import
      
      * remove steps per epoch = 1
      
      * test train and test for one step
      
      * test train and test for one step
      
      * test train and test for one step
      
      * test train and test for the entire dataset
      
      * use an iterator for test
      
      * pass tensors instead of an iterator
      
      * add stats dict
      
      * fix list declaration
      
      * fix list declaration
      
      * fix elapsed time calc
      
      * print lr at epoch boundary alone
      
      * Use regular tf import instead of compat
      
      * remove tensorboard chkpts
      
      * add correct logging import
      
      * add correct logging import
      
      * add benchmark configs
      
      * add tests and configs
      
      * add tests and configs
      
      * add keras flags import
      
      * add keras flags import
      
      * fix eval ds creation cond
      
      * return numpy value of train_loss
      
      * return numpy value of loss and acc values
      
      * add option for full eager mode
      
      * fix lint errors
      
      * add ctl flags
      
      * add ctl import
      
      * add the xla flag
      
      * enable v2 behavior in unit tests
      
      * rename dataset var
      
      * add synthetic dataset without monkey patching
      
      * add ctl local constants
      
      * add ctl local constants
      
      * change to using v2 imports
      
      * change to using v2 imports
      
      * change to using v2 imports
      
      * change to using keras synthetic input fn
      
      * remove enable_eager flag from benchmarks
      
      * remove enable_eager flag from benchmarks
      
      * remove enable_eager flag from benchmarks
      
      * add option for no dist strat
      
      * add lambda for flags
      
      * remove no_func benchmarks due to OOM error
      
      * remove README
      
      * remove unused comments
      
      * remove unchanged file
      
      * remove unchanged file
      
      * remove unused drop_remainder_arg
      
      * use keras.common lr function
      
      * address PR comments
      
      * remove reference to deleted file
      
      * .
      
      * fix lint errors
      
      * .
      
      * add flags info
    • Add benchmarks for custom training loops + tf.distribute (#6980) · 65636099 (see the sketch after this entry)
      anj-s authored
      * first version of ctl
      
      * fix indent
      
      * remove monkey patching for core
      
      * add dtype arg
      
      * fix dtype arg
      
      * add logging lib
      
      * remove compat.v1.logging
      
      * add datetime import
      
      * fix FLAGS import
      
      * add constant vals
      
      * move to using as tf import
      
      * move to using as tf import
      
      * remove steps per epoch = 1
      
      * test train and test for one step
      
      * test train and test for one step
      
      * test train and test for one step
      
      * test train and test for the entire dataset
      
      * use an iterator for test
      
      * pass tensors instead of an iterator
      
      * add stats dict
      
      * fix list declaration
      
      * fix list declaration
      
      * fix elapsed time calc
      
      * print lr at epoch boundary alone
      
      * Use regular tf import instead of compat
      
      * remove tensorboard chkpts
      
      * add correct logging import
      
      * add correct logging import
      
      * add benchmark configs
      
      * add tests and configs
      
      * add tests and configs
      
      * add keras flags import
      
      * add keras flags import
      
      * fix eval ds creation cond
      
      * return numpy value of train_loss
      
      * return numpy value of loss and acc values
      
      * add option for full eager mode
      
      * fix lint errors
      
      * add ctl flags
      
      * add ctl import
      
      * add the xla flag
      
      * enable v2 behavior in unit tests
      
      * rename dataset var
      
      * add synthetic dataset without monkey patching
      
      * add ctl local constants
      
      * add ctl local constants
      
      * change to using v2 imports
      
      * change to using v2 imports
      
      * change to using v2 imports
      
      * change to using keras synthetic input fn
      
      * remove enable_eager flag from benchmarks
      
      * remove enable_eager flag from benchmarks
      
      * remove enable_eager flag from benchmarks
      
      * add option for no dist strat
      
      * add lambda for flags
      
      * remove no_func benchmarks due to OOM error
      
      * remove README
      
      * remove unused comments
      
      * remove unchanged file
      
      * remove unchanged file
      
      * remove unused drop_remainder_arg
      
      * use keras.common lr function
      
      * address PR comments
      
      * remove reference to deleted file
      
      * .
      
      * fix lint errors
      
      * .
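
      A minimal sketch of the custom-training-loop + tf.distribute pattern
      benchmarked in this entry (illustrative model, data, and names; assumes
      the current tf.distribute API rather than the exact 2019 one):

      import tensorflow as tf

      strategy = tf.distribute.MirroredStrategy()
      GLOBAL_BATCH = 64

      with strategy.scope():
          model = tf.keras.Sequential([tf.keras.layers.Dense(10)])
          optimizer = tf.keras.optimizers.SGD(0.01)
          loss_obj = tf.keras.losses.MeanSquaredError(
              reduction=tf.keras.losses.Reduction.NONE)

      # Synthetic data; drop_remainder keeps per-step batch sizes fixed.
      features = tf.random.uniform([256, 20])
      labels = tf.random.uniform([256, 10])
      dataset = tf.data.Dataset.from_tensor_slices((features, labels))
      dataset = dataset.batch(GLOBAL_BATCH, drop_remainder=True)
      dist_dataset = strategy.experimental_distribute_dataset(dataset)

      @tf.function
      def train_step(dist_inputs):
          def step_fn(inputs):
              x, y = inputs
              with tf.GradientTape() as tape:
                  preds = model(x, training=True)
                  per_example_loss = loss_obj(y, preds)
                  loss = tf.nn.compute_average_loss(
                      per_example_loss, global_batch_size=GLOBAL_BATCH)
              grads = tape.gradient(loss, model.trainable_variables)
              optimizer.apply_gradients(zip(grads, model.trainable_variables))
              return loss
          per_replica_loss = strategy.run(step_fn, args=(dist_inputs,))
          return strategy.reduce(
              tf.distribute.ReduceOp.SUM, per_replica_loss, axis=None)

      # One pass over the distributed dataset.
      for batch in dist_dataset:
          loss = train_step(batch)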
    • Use PerfZeroBenchmark and log Flags. (#7052) · d3610769
      Toby Boyd authored
  16. 14 Jun, 2019 3 commits
  17. 13 Jun, 2019 1 commit
  18. 10 Jun, 2019 1 commit
  19. 06 Jun, 2019 3 commits
  20. 05 Jun, 2019 1 commit
  21. 04 Jun, 2019 1 commit
  22. 03 Jun, 2019 2 commits
  23. 31 May, 2019 1 commit