1. 19 Nov, 2021 1 commit
  2. 17 Nov, 2021 1 commit
  3. 02 Jun, 2021 1 commit
  4. 13 Apr, 2021 1 commit
    • Use nonexperimental mixed precision API. · c0ac8d1c
      Reed Wanderman-Milne authored
      This replaces symbols in tf.keras.mixed_precision.experimental with the corresponding nonexperimental symbols. In some cases, passing a Policy is replaced with passing a policy name for conciseness.
      
      Additionally, for the Shakespeare model, the loss_scale flag is removed, since supporting it with the non-experimental API is slightly more verbose and the default loss scale is the recommended setting anyway. The migration is sketched after this entry.
      
      PiperOrigin-RevId: 368123944
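      
      A minimal sketch of the migration this commit describes, using only core TensorFlow symbols (the Shakespeare model's actual flag plumbing is not reproduced here):
      
        import tensorflow as tf
        
        # Before: the experimental API took a Policy object (and, optionally, a loss scale).
        #   policy = tf.keras.mixed_precision.experimental.Policy('mixed_float16')
        #   tf.keras.mixed_precision.experimental.set_policy(policy)
        
        # After: the non-experimental API accepts the policy name directly, for conciseness.
        tf.keras.mixed_precision.set_global_policy('mixed_float16')
        
        # No loss_scale flag: the default dynamic loss scale is the recommended setting.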
  5. 12 Apr, 2021 1 commit
    • Use nonexperimental mixed precision API for official models. · e6cda015
      Reed Wanderman-Milne authored
      For all modified calls to set_mixed_precision_policy(), the loss_scale argument was removed, as it cannot be passed if the nonexperimental API is used. For all such callers, the loss_scale is later used to explicitly create a LossScaleOptimizer, so removing the argument has no impact.
      
      Switching to the non-experimental LossScaleOptimizer has no behavioral effect: the two classes behave near-identically, and all isinstance checks within the official models already check for the non-experimental version. The resulting pattern is sketched after this entry.
      
      PiperOrigin-RevId: 368101975
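      
      set_mixed_precision_policy() is a helper inside the official models repository; the sketch below approximates the pattern this commit describes using core Keras calls, with the loss scale configured on the optimizer rather than passed alongside the policy:
      
        import tensorflow as tf
        
        # The policy no longer carries a loss_scale argument.
        tf.keras.mixed_precision.set_global_policy('mixed_float16')
        
        # The loss scale is created explicitly by wrapping the optimizer;
        # dynamic loss scaling is the default.
        optimizer = tf.keras.optimizers.SGD(learning_rate=0.1)
        optimizer = tf.keras.mixed_precision.LossScaleOptimizer(optimizer)
        
        # isinstance checks against the non-experimental class still pass.
        assert isinstance(optimizer, tf.keras.mixed_precision.LossScaleOptimizer)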
  6. 09 Apr, 2021 1 commit
    • Remove dynamic_loss_scale argument to define_performance. · 20ed8cf0
      Reed Wanderman-Milne authored
      All models that support loss scaling also support dynamic loss scaling, so the argument serves no purpose. It dates from a time when some models scaled the loss manually instead of using a LossScaleOptimizer and therefore did not support dynamic loss scaling. A short sketch of the default behavior follows this entry.
      
      PiperOrigin-RevId: 367719521
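      
      define_performance() is a flag helper in the official models repository, so the sketch below only illustrates why the flag was redundant: the non-experimental LossScaleOptimizer is dynamic by default, and a fixed scale remains expressible without any model-level flag (the optimizer and values are illustrative):
      
        import tensorflow as tf
        
        # Dynamic loss scaling is the default behavior of LossScaleOptimizer.
        dynamic_opt = tf.keras.mixed_precision.LossScaleOptimizer(
            tf.keras.optimizers.SGD(learning_rate=0.1))
        
        # A fixed loss scale is still possible, but needs no dedicated model flag.
        fixed_opt = tf.keras.mixed_precision.LossScaleOptimizer(
            tf.keras.optimizers.SGD(learning_rate=0.1), dynamic=False, initial_scale=128)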
  7. 06 Apr, 2021 1 commit
  8. 23 Mar, 2021 1 commit
  9. 03 Mar, 2021 1 commit
  10. 13 Sep, 2020 1 commit
  11. 10 Sep, 2020 1 commit
  12. 09 Sep, 2020 1 commit
  13. 26 Aug, 2020 5 commits
  14. 12 Aug, 2020 1 commit
  15. 10 Aug, 2020 1 commit
  16. 11 Jun, 2020 1 commit
  17. 19 May, 2020 1 commit
  18. 14 May, 2020 1 commit
  19. 26 Apr, 2020 1 commit
  20. 20 Apr, 2020 1 commit
  21. 17 Apr, 2020 2 commits
  22. 10 Apr, 2020 1 commit
  23. 04 Apr, 2020 1 commit
  24. 25 Mar, 2020 1 commit
  25. 17 Mar, 2020 1 commit
  26. 16 Mar, 2020 2 commits
  27. 12 Mar, 2020 1 commit
  28. 11 Mar, 2020 1 commit
  29. 05 Mar, 2020 1 commit
  30. 26 Feb, 2020 1 commit
  31. 24 Feb, 2020 1 commit
  32. 28 Jan, 2020 1 commit
  33. 29 Aug, 2019 1 commit
    • Use new mixed_float16 policy for resnet. · dcd0e7ad
      Reed Wanderman-Milne authored
      The old infer_float32_policies policy will be removed from TensorFlow soon.
      
      To test convergence, I ran the Resnet50KerasAccuracy.benchmark_8_gpu_fp16 benchmark. I got an accuracy of 0.76037 and an exp_per_second of 6908.
      
      PiperOrigin-RevId: 266191126
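      
      A minimal sketch of enabling the mixed_float16 policy through the experimental Keras API of that era (these experimental symbols were themselves replaced later, as the 2021 commits above show):
      
        import tensorflow as tf
        
        # mixed_float16 runs layer computations in float16 while keeping float32
        # variables, replacing the older infer-style policy mentioned in the commit.
        policy = tf.keras.mixed_precision.experimental.Policy('mixed_float16')
        tf.keras.mixed_precision.experimental.set_policy(policy)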
  34. 22 Aug, 2019 1 commit