1. 08 Aug, 2019 1 commit
    • Reed
      Fix fp16 Transformer model. (#7402) · 58340818
      Reed authored
      Also run Transformer inference in fp16, not just training, when --dtype=fp16. In TF 2, a layer can no longer run in multiple different dtypes, so we must use the same dtype for training and inference.
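      A minimal sketch of the constraint (a standalone Dense layer, not the Transformer itself): a TF 2 Keras layer is built with a single dtype, so training and inference must share it.

        import tensorflow as tf

        # Stand-in for --dtype=fp16; the layer's variables and compute
        # dtype are fixed to float16 when the layer is built.
        layer = tf.keras.layers.Dense(4, dtype="float16")
        x = tf.random.uniform((2, 3), dtype=tf.float16)

        y_train = layer(x, training=True)   # fp16 forward pass in training
        y_infer = layer(x, training=False)  # inference runs in the same dtype
        assert y_train.dtype == tf.float16 and y_infer.dtype == tf.float16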
  2. 07 Aug, 2019 1 commit
    • Hongkun Yu
      Merged commit includes the following changes: (#7398) · 45b708d4
      Hongkun Yu authored
      262039434  by A. Unique TensorFlower<gardener@tensorflow.org>:
      
          Internal change

      --
      262024241  by hongkuny<hongkuny@google.com>:
      
          Adds __init__.py
      
      --
      262021128  by isaprykin<isaprykin@google.com>:
      
          Internal change

      --

      PiperOrigin-RevId: 262039434
  3. 06 Aug, 2019 1 commit
  4. 05 Aug, 2019 1 commit
    • Igor
      Fix the ValueError: Error when checking model input on the new codepath (#7382) · ca7d215d
      Igor authored
      * Fix the ValueError: Error when checking model input on the new codepath
      
      Fixes the following error:
      
        File "/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/keras/engine/training.py", line 2428, in _standardize_user_data
          exception_prefix='input')
        File "/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/keras/engine/training_utils.py", line 530, in standardize_input_data
          str(len(data)) + ' arrays: ' + str(data)[:200] + '...')
      ValueError: Error when checking model input: the list of Numpy arrays that you are passing to your model is not the size the model expected. Expected to see 2 array(s), but instead got the following list of 1 arrays: [<tf.Tensor 'cond_8/Identity:0' shape=(None, None) dtype=int64>]..
      
      Tested and reproduced by running transformer_main_test (thanks to whoever wrote it, phew!)
      
      * Remove the unnecessary TODO.
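      A minimal sketch of the failure class (hypothetical two-input model, not the Transformer): feeding a single array to a model that declares two inputs raises the "Expected to see 2 array(s)" ValueError quoted above.

        import numpy as np
        import tensorflow as tf

        inputs = tf.keras.Input(shape=(8,), name="inputs")
        targets = tf.keras.Input(shape=(8,), name="targets")
        outputs = tf.keras.layers.Add()([inputs, targets])
        model = tf.keras.Model([inputs, targets], outputs)
        model.compile(optimizer="adam", loss="mse")

        x = np.ones((4, 8), dtype=np.float32)
        try:
            model.fit(x, x, epochs=1, verbose=0)  # one input array, two expected
        except ValueError as e:
            print(e)  # "...Expected to see 2 array(s)..."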
  5. 01 Aug, 2019 1 commit
    • Haoyu Zhang
      Merged commit includes the following changes: (#7354) · dc4c5f1a
      Haoyu Zhang authored
      261171038  by gjn<gjn@google.com>:
      
          Remove weight_decay_rate 0 early exit check
      
          Removing this code path should be fine since it was not doing what
          it was meant to do. weight_decay_rate is actually a tensor, so the
          equality check only compared the object's id against 0, which can
          never be true. Evaluating the tensor is also not what we want to do
          at this point in the code, so it is safe to simply remove it.
      
      --
      261169862  by haoyuzhang<haoyuzhang@google.com>:
      
          Internal change

      --
      261153520  by haoyuzhang<haoyuzhang@google.com>:
      
          Internal change
      
      261140302  by hongkuny<hongkuny@google.com>:
      
          Clean up
      
      --
      
      PiperOrigin-RevId: 261171038
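      A minimal sketch of the weight_decay_rate pitfall described above (hypothetical values, not the optimizer code): comparing a tensor to 0 with == is not a value comparison.

        import tensorflow as tf

        rate = tf.constant(0.0)

        # Under TF 1.x semantics, Tensor.__eq__ fell back to object identity,
        # so `rate == 0` was always False even for a zero rate; in TF 2 eager
        # it instead builds an elementwise comparison tensor.
        print(rate == 0)

        # In graph code the tensor cannot be evaluated at trace time, so the
        # early exit cannot be a plain Python check, hence the removal.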
  6. 24 Jul, 2019 5 commits
  7. 23 Jul, 2019 1 commit
  8. 20 Jul, 2019 1 commit
  9. 11 Jul, 2019 3 commits
  10. 08 Jul, 2019 1 commit
  11. 03 Jul, 2019 1 commit
    • Toby Boyd
      Unit tests pass TF 2.0 GPU and CPU locally. (#7101) · 49097655
      Toby Boyd authored
      * Fix unit test failures.

      * 96% of TF 2.0 tests on GPU are passing.

      * All TF 2.0 tests currently passing on GPU and CPU.

      * Address code comments.

      * Use TF 2.0 cast.

      * Add comment about working on TF 2.0 CPU.

      * Use contrib turn-off for TF 2.0.

      * Fix wide_deep and add keras_common_tests.

      * Use context to get num_gpus.

      * Switch to tf.keras.metrics.
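      A minimal sketch of the TF 2 idioms the bullets mention (assumed equivalents, not the models-repo code itself):

        import tensorflow as tf

        # "Use TF 2.0 cast": tf.cast instead of removed contrib helpers.
        logits = tf.cast(tf.constant([1, 0, 1]), tf.float32)

        # "Switch to tf.keras.metrics": Keras metric objects.
        accuracy = tf.keras.metrics.BinaryAccuracy()
        accuracy.update_state([1.0, 0.0, 1.0], logits)

        # "Use context to get num_gpus": ask the runtime for visible GPUs.
        num_gpus = len(tf.config.experimental.list_physical_devices("GPU"))
        print(accuracy.result().numpy(), num_gpus)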
  12. 28 Jun, 2019 1 commit
  13. 22 Jun, 2019 1 commit
  14. 21 Jun, 2019 2 commits
  15. 20 Jun, 2019 2 commits
  16. 19 Jun, 2019 2 commits
    • Reed
      Add mixed precision support to Transformer (#7011) · f8ec01ae
      Reed authored
    • Toby Boyd
      Add XLA to transformer (#7048) · 269581dc
      Toby Boyd authored

      * Set default steps to 300K.

      * Log flags to PerfZero.

      * Add XLA support to Transformer.

      - Moved config logic to keras_utils
      - Added enable_xla flag to the _performance flags
      - Did not refactor the enable_xla flag out of Keras ResNet, because it
        relies on reading FLAGS in the Estimator/Keras code; that refactor is
        left for another time.

      * Fix g3 lint complaint.

      * Refactor set config into keras_utils.

      * Move flags out of main.

      * Pipe through enable_xla.

      * Update official/transformer/v2/misc.py
      Co-Authored-By: Reed <reedwm@google.com>
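      A minimal sketch of the kind of helper this moves into keras_utils (the function name is hypothetical; only the enable_xla flag comes from the commit message):

        import tensorflow as tf

        def set_session_config(enable_xla=False):
            # Globally enable XLA JIT compilation in TF 2.
            if enable_xla:
                tf.config.optimizer.set_jit(True)

        set_session_config(enable_xla=True)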
  17. 18 Jun, 2019 1 commit
  18. 06 Jun, 2019 1 commit
  19. 05 Jun, 2019 7 commits
  20. 31 May, 2019 2 commits
  21. 29 May, 2019 3 commits
  22. 28 May, 2019 1 commit