1. 01 Apr, 2022 1 commit
  2. 28 Mar, 2022 2 commits
  3. 25 Mar, 2022 1 commit
  4. 16 Mar, 2022 1 commit
  5. 15 Feb, 2022 1 commit
  6. 14 Feb, 2022 1 commit
  7. 26 Jan, 2022 1 commit
  8. 18 Jan, 2022 1 commit
  9. 29 Dec, 2021 1 commit
  10. 17 Nov, 2021 2 commits
  11. 16 Nov, 2021 1 commit
  12. 10 Nov, 2021 1 commit
  13. 02 Nov, 2021 1 commit
  14. 26 Oct, 2021 2 commits
  15. 15 Oct, 2021 1 commit
  16. 12 Oct, 2021 1 commit
  17. 24 Sep, 2021 2 commits
  18. 20 Sep, 2021 1 commit
  19. 10 Sep, 2021 1 commit
  20. 27 Aug, 2021 1 commit
  21. 23 Aug, 2021 1 commit
  22. 12 Aug, 2021 1 commit
  23. 11 Aug, 2021 1 commit
  24. 06 Aug, 2021 1 commit
  25. 05 Aug, 2021 1 commit
  26. 03 Aug, 2021 1 commit
  27. 02 Aug, 2021 1 commit
  28. 01 Aug, 2021 1 commit
  29. 24 Jul, 2021 1 commit
  30. 09 Jul, 2021 1 commit
      With float16, always use LossScaleOptimizer. · 21286f77
      Reed Wanderman-Milne authored
      Before, it was too easy to accidentally forget to set runtime.loss_scale, which always had to be set when mixed precision was used; otherwise the model would converge to worse accuracy. Now, enabling mixed precision only requires setting runtime.mixed_precision_dtype=float16.
      
      PiperOrigin-RevId: 383767033
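      The behavior this commit describes can be sketched as a small selection rule: whenever the mixed-precision dtype is float16, wrap the optimizer in a loss-scale wrapper automatically instead of relying on the user to set a separate loss-scale option. The names below (`configure_optimizer`, the stand-in `LossScaleOptimizer` class) are illustrative, not the actual model-garden API; in TensorFlow the real wrapper is `tf.keras.mixed_precision.LossScaleOptimizer`.

      ```python
      class LossScaleOptimizer:
          """Stand-in for tf.keras.mixed_precision.LossScaleOptimizer.

          Wraps an inner optimizer and (in the real class) scales the loss
          up before computing gradients, then scales gradients back down,
          so small float16 gradients do not underflow to zero.
          """
          def __init__(self, inner_optimizer):
              self.inner_optimizer = inner_optimizer


      def configure_optimizer(optimizer, mixed_precision_dtype=None):
          # With float16, always wrap in a LossScaleOptimizer so users
          # cannot forget loss scaling and silently lose accuracy.
          if mixed_precision_dtype == "float16":
              return LossScaleOptimizer(optimizer)
          # For float32 (or bfloat16, which does not need loss scaling),
          # return the optimizer unchanged.
          return optimizer
      ```

      With this rule in place, there is no separate loss-scale knob for users to forget: the single dtype setting drives both the compute precision and the loss scaling.
      
      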
  31. 05 Jul, 2021 1 commit
  32. 01 Jul, 2021 1 commit
  33. 30 Jun, 2021 1 commit
  34. 24 Jun, 2021 2 commits
  35. 07 Jun, 2021 1 commit