1. 22 May, 2021 2 commits
  2. 21 May, 2021 2 commits
  3. 20 May, 2021 3 commits
  4. 19 May, 2021 1 commit
  5. 18 May, 2021 1 commit
  6. 17 May, 2021 2 commits
  7. 14 May, 2021 1 commit
  8. 13 May, 2021 1 commit
  9. 11 May, 2021 2 commits
  10. 05 May, 2021 2 commits
  11. 26 Apr, 2021 2 commits
  12. 23 Apr, 2021 2 commits
  13. 20 Apr, 2021 2 commits
  14. 19 Apr, 2021 2 commits
  15. 17 Apr, 2021 2 commits
    • 
      Catch tf.text NotFoundError: · 550e4f21
      Chen Chen authored
      tensorflow.python.framework.errors_impl.NotFoundError: /usr/local/lib/python3.6/dist-packages/tensorflow_text/python/metrics/_text_similarity_metric_ops.so: undefined symbol: _ZN10tensorflow6StatusC1ENS_5error4CodeEN4absl12lts_2021032411string_viewEOSt6vectorINS_10StackFrameESaIS7_EE
      
      PiperOrigin-RevId: 368978629
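The change this commit describes can be sketched as a guarded import: loading tensorflow_text can fail with tf.errors.NotFoundError (not just ImportError) when its compiled ops hit an undefined-symbol ABI mismatch, as in the traceback above. A minimal sketch, assuming a helper name of our own (`try_import_tf_text` is illustrative, not the repository's actual code):

```python
import importlib


def try_import_tf_text():
    """Return the tensorflow_text module, or None when it is absent or its
    compiled kernels fail to load (e.g. tf.errors.NotFoundError on an
    undefined-symbol ABI mismatch, as in the traceback above)."""
    try:
        import tensorflow as tf
        # tf.text can raise NotFoundError while loading its .so files.
        errors = (ImportError, tf.errors.NotFoundError)
    except ImportError:
        # TensorFlow itself is absent; only ImportError can occur below.
        errors = (ImportError,)
    try:
        return importlib.import_module("tensorflow_text")
    except errors:
        return None


tf_text = try_import_tf_text()  # None when tf.text is unusable
```

Callers then check `tf_text is None` before using any tf.text op, so a broken tf.text install degrades features instead of crashing at import time.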
    • 
      Catch tf.text NotFoundError: · ffaa4035
      Chen Chen authored
      tensorflow.python.framework.errors_impl.NotFoundError: /usr/local/lib/python3.6/dist-packages/tensorflow_text/python/metrics/_text_similarity_metric_ops.so: undefined symbol: _ZN10tensorflow6StatusC1ENS_5error4CodeEN4absl12lts_2021032411string_viewEOSt6vectorINS_10StackFrameESaIS7_EE
      
      PiperOrigin-RevId: 368978629
  16. 15 Apr, 2021 2 commits
  17. 13 Apr, 2021 4 commits
    • 
      Internal change · b1aa44d9
      A. Unique TensorFlower authored
      PiperOrigin-RevId: 368260712
    • 
      Internal change · f4bd58dd
      A. Unique TensorFlower authored
      PiperOrigin-RevId: 368260712
    • 
      Use nonexperimental mixed precision API. · 9a4d14a9
      Reed Wanderman-Milne authored
      This replaces symbols in tf.keras.mixed_precision.experimental with the corresponding nonexperimental symbols. In some cases, passing a Policy is replaced with passing a policy name for conciseness.
      
      Additionally, for the Shakespeare model, the loss_scale flag is removed, since supporting it with the nonexperimental API is slightly more verbose and it is recommended that users use the default loss scale.
      
      PiperOrigin-RevId: 368123944
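The symbol swap this commit describes can be illustrated with a small helper. This is a sketch, guarded so it degrades when TensorFlow is not installed; `enable_mixed_precision` is an illustrative name, not a function from the repository:

```python
try:
    import tensorflow as tf
except ImportError:
    tf = None  # let the sketch degrade gracefully without TensorFlow


def enable_mixed_precision(policy_name="mixed_float16"):
    """Set the global Keras dtype policy by name.

    Before (experimental API):
        policy = tf.keras.mixed_precision.experimental.Policy(policy_name)
        tf.keras.mixed_precision.experimental.set_policy(policy)
    After (nonexperimental API, TF >= 2.4): the policy name alone suffices.
    """
    if tf is None:
        return None
    tf.keras.mixed_precision.set_global_policy(policy_name)
    return tf.keras.mixed_precision.global_policy().name
```

Passing the name string directly is the "policy name for conciseness" change the message mentions: no Policy object needs to be constructed by the caller.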
    • 
      Use nonexperimental mixed precision API. · 4334a892
      Reed Wanderman-Milne authored
      This replaces symbols in tf.keras.mixed_precision.experimental with the corresponding nonexperimental symbols. In some cases, passing a Policy is replaced with passing a policy name for conciseness.
      
      Additionally, for the Shakespeare model, the loss_scale flag is removed, since supporting it with the nonexperimental API is slightly more verbose and it is recommended that users use the default loss scale.
      
      PiperOrigin-RevId: 368123944
  18. 12 Apr, 2021 2 commits
    • 
      Use nonexperimental mixed precision API for official models. · ba8ad4f5
      Reed Wanderman-Milne authored
      For all modified calls to set_mixed_precision_policy(), the loss_scale argument was removed, as it cannot be passed if the nonexperimental API is used. For all such callers, the loss_scale is later used to explicitly create a LossScaleOptimizer, so removing the argument has no impact.
      
      Switching to the nonexperimental LossScaleOptimizer has no effect, as it has near-identical behavior and all isinstance checks within the official models check for the nonexperimental version.
      
      PiperOrigin-RevId: 368101975
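The pattern the message describes, sketched under the same guarded-import assumption as above (`wrap_with_loss_scaling` is an illustrative name): rather than passing loss_scale to set_mixed_precision_policy(), the optimizer is wrapped explicitly.

```python
try:
    import tensorflow as tf
except ImportError:
    tf = None  # let the sketch degrade gracefully without TensorFlow


def wrap_with_loss_scaling(optimizer):
    """Wrap an optimizer in the nonexperimental LossScaleOptimizer.

    Dynamic loss scaling is the default, so no loss_scale argument needs
    to reach the policy call; the wrapping is explicit at the call site.
    """
    if tf is None:
        return None
    return tf.keras.mixed_precision.LossScaleOptimizer(optimizer)
```

Because every caller already created a LossScaleOptimizer explicitly, dropping the loss_scale argument from the policy call changes nothing observable, which is the "no impact" claim in the message.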
    • 
      Use nonexperimental mixed precision API for official models. · 0d8f9807
      Reed Wanderman-Milne authored
      For all modified calls to set_mixed_precision_policy(), the loss_scale argument was removed, as it cannot be passed if the nonexperimental API is used. For all such callers, the loss_scale is later used to explicitly create a LossScaleOptimizer, so removing the argument has no impact.
      
      Switching to the nonexperimental LossScaleOptimizer has no effect, as it has near-identical behavior and all isinstance checks within the official models check for the nonexperimental version.
      
      PiperOrigin-RevId: 368101975
  19. 10 Apr, 2021 1 commit
    • 
      Remove dynamic_loss_scale argument to define_performance. · 3803472a
      Reed Wanderman-Milne authored
      All models that support loss scaling also support dynamic loss scaling, so the argument has no purpose. It used to be that some models scaled the loss manually instead of using a LossScaleOptimizer, and so did not support dynamic loss scaling.
      
      PiperOrigin-RevId: 367719521
  20. 09 Apr, 2021 1 commit
    • 
      Remove dynamic_loss_scale argument to define_performance. · e353e4e5
      Reed Wanderman-Milne authored
      All models that support loss scaling also support dynamic loss scaling, so the argument has no purpose. It used to be that some models scaled the loss manually instead of using a LossScaleOptimizer, and so did not support dynamic loss scaling.
      
      PiperOrigin-RevId: 367719521
  21. 08 Apr, 2021 2 commits
  22. 07 Apr, 2021 1 commit