  1. 10 Mar, 2021 2 commits
  2. 26 Oct, 2020 2 commits
  3. 06 Sep, 2020 2 commits
  4. 12 Aug, 2020 2 commits
  5. 17 Jun, 2020 1 commit
  6. 24 May, 2020 1 commit
  7. 14 May, 2020 1 commit
  8. 06 May, 2020 1 commit
  9. 27 Apr, 2020 1 commit
  10. 22 Apr, 2020 1 commit
  11. 25 Mar, 2020 1 commit
  12. 10 Mar, 2020 1 commit
  13. 05 Mar, 2020 1 commit
  14. 25 Feb, 2020 1 commit
  15. 13 Feb, 2020 1 commit
  16. 19 Dec, 2019 1 commit
  17. 12 Dec, 2019 1 commit
  18. 27 Nov, 2019 2 commits
  19. 25 Nov, 2019 1 commit
  20. 24 Oct, 2019 1 commit
  21. 09 Sep, 2019 1 commit
  22. 04 Sep, 2019 1 commit
  23. 30 Aug, 2019 1 commit
  24. 22 Aug, 2019 1 commit
  25. 20 Aug, 2019 1 commit
  26. 16 Aug, 2019 1 commit
  27. 15 Aug, 2019 1 commit
  28. 09 Aug, 2019 1 commit
  29. 23 Jul, 2019 1 commit
  30. 21 Jun, 2019 1 commit
  31. 19 Jun, 2019 2 commits
    • Add mixed precision support to Transformer (#7011) · f8ec01ae
      Reed authored
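
      The mixed-precision commit above amounts to opting the model into a
      float16 compute policy. A minimal sketch of what that looks like in
      TF 2.x Keras (an assumption about the approach, not the PR's actual
      code):

      ```python
      import tensorflow as tf

      # Run compute-heavy ops in float16 while keeping variables in
      # float32 for numeric stability; a loss-scaled optimizer is applied
      # automatically by Keras under this policy.
      tf.keras.mixed_precision.set_global_policy("mixed_float16")

      policy = tf.keras.mixed_precision.global_policy()
      # policy.compute_dtype is "float16"; policy.variable_dtype is "float32"
      ```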
    • Add XLA to transformer (#7048) · 269581dc
      Toby Boyd authored
      * Set default steps to 300K.
      
      * Log flags to perfzero.
      
      * Add XLA support to Transformer:
      
      - Moved config logic to keras_utils
      - Added enable_xla flag to _performance flags
      - Did not refactor the enable_xla flag out of Keras ResNet, because
        the Estimator Keras path still relies on reading FLAGS directly;
        that refactor is left for another time.
      
      * Fix g3 lint complaint.
      
      * Refactor set config into keras_utils.
      
      * Move flags out of main.
      
      * Pipe through enable_xla.
      
      * Update official/transformer/v2/misc.py
      Co-Authored-By: Reed <reedwm@google.com>
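
      The XLA change above reduces to gating a global JIT-compilation
      switch behind a flag. A rough sketch of the assumed shape (`set_config`
      here is a hypothetical stand-in for the helper moved into keras_utils,
      not the PR's actual function):

      ```python
      import tensorflow as tf

      def set_config(enable_xla=False):
          # Hypothetical stand-in for the keras_utils config helper: when
          # the --enable_xla flag is passed, turn on XLA auto-clustering
          # so TensorFlow JIT-compiles eligible op clusters.
          if enable_xla:
              tf.config.optimizer.set_jit(True)
      ```

      Keeping this in a shared helper, rather than in each model's main,
      is what lets Transformer and ResNet reuse the same flag handling.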
  32. 18 Jun, 2019 1 commit
  33. 29 May, 2019 1 commit
  34. 24 May, 2019 1 commit
    • Transformer v2 benchmark (#6860) · f2ea2f53
      Toby Boyd authored
      * Moved common Keras code to utils.
      
      * Initial 1-GPU benchmark:
      
      - Aligned flags with the ResNet example
      - Removed code/features that are not broadly useful
      - Run eval as part of training if a BLEU source/ref is provided
      - Added an exp_per_second hook
      
      * Rename benchmark classes; pass batch-size and log_steps.
      
      * Fix docstring.
      
      * Predict done with checkpoints inline:
      
      - perfzero base class
      
      * Use steps, not epochs, with a smoother training loop.
      
      * Do not initialize history outside the loop.
      
      * Run 5000 steps between evals, not 500.
      
      * Port from Estimator to Keras.
      
      * Remove the epochs var.
      
      * Use range, not xrange.
      
      * 200K steps for 1 GPU.
      
      * Fix the global step.
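
      The exp_per_second hook mentioned above boils down to timing a window
      of log_steps training steps and dividing the number of trained
      examples by the elapsed time. A dependency-free sketch (class and
      method names are illustrative, not the benchmark's actual code):

      ```python
      import time

      class ExamplesPerSecondLogger:
          """Hypothetical sketch of an examples-per-second hook: records
          batch_size * log_steps / elapsed for each window of log_steps
          steps."""

          def __init__(self, batch_size, log_steps):
              self.batch_size = batch_size
              self.log_steps = log_steps
              self._window_start = None
              self.history = []  # one throughput sample per window

          def on_step_begin(self, step):
              # Start timing at the first step of each window.
              if step % self.log_steps == 0:
                  self._window_start = time.perf_counter()

          def on_step_end(self, step):
              # Close the window at its last step and record throughput.
              if (step + 1) % self.log_steps == 0 and self._window_start is not None:
                  elapsed = time.perf_counter() - self._window_start
                  self.history.append(self.batch_size * self.log_steps / elapsed)
      ```

      In the real benchmark this logic would live in a Keras callback, with
      the recorded throughput forwarded to the perfzero reporting path.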