1. 18 May, 2021 1 commit
  2. 17 May, 2021 2 commits
  3. 14 May, 2021 1 commit
  4. 13 May, 2021 1 commit
  5. 11 May, 2021 2 commits
  6. 05 May, 2021 2 commits
  7. 26 Apr, 2021 2 commits
  8. 23 Apr, 2021 2 commits
  9. 20 Apr, 2021 2 commits
  10. 19 Apr, 2021 2 commits
  11. 17 Apr, 2021 2 commits
    • Catch tf.text NotFoundError: · 550e4f21
      Chen Chen authored
      tensorflow.python.framework.errors_impl.NotFoundError: /usr/local/lib/python3.6/dist-packages/tensorflow_text/python/metrics/_text_similarity_metric_ops.so: undefined symbol: _ZN10tensorflow6StatusC1ENS_5error4CodeEN4absl12lts_2021032411string_viewEOSt6vectorINS_10StackFrameESaIS7_EE
      
      PiperOrigin-RevId: 368978629
    • Catch tf.text NotFoundError: · ffaa4035
      Chen Chen authored
      tensorflow.python.framework.errors_impl.NotFoundError: /usr/local/lib/python3.6/dist-packages/tensorflow_text/python/metrics/_text_similarity_metric_ops.so: undefined symbol: _ZN10tensorflow6StatusC1ENS_5error4CodeEN4absl12lts_2021032411string_viewEOSt6vectorINS_10StackFrameESaIS7_EE
      
      PiperOrigin-RevId: 368978629
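      A minimal sketch of the guarded import these two commits describe, assuming the failure surfaces as tf.errors.NotFoundError when the tensorflow_text shared object cannot be loaded (the tf_text alias and the None fallback are illustrative, not the exact patch):

          import tensorflow as tf

          try:
              # Importing tensorflow_text can raise NotFoundError when its .so was
              # built against an incompatible TensorFlow (undefined-symbol errors).
              import tensorflow_text as tf_text
          except (ImportError, tf.errors.NotFoundError):
              tf_text = None  # illustrative fallback; callers must check before using text ops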
  12. 15 Apr, 2021 2 commits
  13. 13 Apr, 2021 4 commits
    • Internal change · b1aa44d9
      A. Unique TensorFlower authored
      PiperOrigin-RevId: 368260712
    • Internal change · f4bd58dd
      A. Unique TensorFlower authored
      PiperOrigin-RevId: 368260712
    • Use nonexperimental mixed precision API. · 9a4d14a9
      Reed Wanderman-Milne authored
      This replaces symbols in tf.keras.mixed_precision.experimental with the corresponding nonexperimental symbols. In some cases, passing a Policy is replaced with passing a policy name for conciseness.
      
      Additionally, for the Shakespeare model, the loss_scale flag is removed, since supporting it with the nonexperimental API is slightly more verbose and it is recommended that users use the default loss scale.
      
      PiperOrigin-RevId: 368123944
    • Use nonexperimental mixed precision API. · 4334a892
      Reed Wanderman-Milne authored
      This replaces symbols in tf.keras.mixed_precision.experimental with the corresponding nonexperimental symbols. In some cases, passing a Policy is replaced with passing a policy name for conciseness.
      
      Additionally, for the Shakespeare model, the loss_scale flag is removed, since supporting it with the nonexperimental API is slightly more verbose and it is recommended that users use the default loss scale.
      
      PiperOrigin-RevId: 368123944
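      As a rough illustration of the migration described above (assuming TF 2.4+, where the nonexperimental API exists), the experimental policy setup is replaced by set_global_policy, which accepts a policy name directly:

          import tensorflow as tf

          # Old, experimental style (what this change removes):
          #   policy = tf.keras.mixed_precision.experimental.Policy('mixed_float16')
          #   tf.keras.mixed_precision.experimental.set_policy(policy)

          # New, nonexperimental style: pass the policy name for conciseness.
          tf.keras.mixed_precision.set_global_policy('mixed_float16')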
  14. 12 Apr, 2021 2 commits
    • Use nonexperimental mixed precision API for official models. · ba8ad4f5
      Reed Wanderman-Milne authored
      For all modified calls to set_mixed_precision_policy(), the loss_scale argument was removed, as it cannot be passed if the nonexperimental API is used. For all such callers, the loss_scale is later used to explicitly create a LossScaleOptimizer, so removing the argument has no impact.
      
      Switching to the non-experimental LossScaleOptimizer has no effect, as it has near-identical behavior and all isinstance checks within the official models check for the non-experimental version.
      
      PiperOrigin-RevId: 368101975
    • Use nonexperimental mixed precision API for official models. · 0d8f9807
      Reed Wanderman-Milne authored
      For all modified calls to set_mixed_precision_policy(), the loss_scale argument was removed, as it cannot be passed if the nonexperimental API is used. For all such callers, the loss_scale is later used to explicitly create a LossScaleOptimizer, so removing the argument has no impact.
      
      Switching to the non-experimental LossScaleOptimizer has no effect, as it has near-identical behavior and all isinstance checks within the official models check for the non-experimental version.
      
      PiperOrigin-RevId: 368101975
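      A hedged sketch of the resulting pattern (the helper functions in official/modeling are not reproduced here, so exact call sites may differ): the policy is set without a loss_scale argument, and loss scaling is attached by wrapping the optimizer explicitly:

          import tensorflow as tf

          # The nonexperimental policy API takes no loss_scale argument.
          tf.keras.mixed_precision.set_global_policy('mixed_float16')

          # Loss scaling is configured separately by wrapping the optimizer.
          base_optimizer = tf.keras.optimizers.SGD(learning_rate=0.1)
          optimizer = tf.keras.mixed_precision.LossScaleOptimizer(base_optimizer)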
  15. 10 Apr, 2021 1 commit
    • Remove dynamic_loss_scale argument to define_performance. · 3803472a
      Reed Wanderman-Milne authored
      All models that support loss scaling support dynamic loss scaling, so the argument has no purpose. It used to be that some models scaled the loss manually instead of using a LossScaleOptimizer, and so did not support dynamic loss scaling.
      
      PiperOrigin-RevId: 367719521
  16. 09 Apr, 2021 1 commit
    • Remove dynamic_loss_scale argument to define_performance. · e353e4e5
      Reed Wanderman-Milne authored
      All models that support loss scaling support dynamic loss scaling, so the argument has no purpose. It used to be that some models scaled the loss manually instead of using a LossScaleOptimizer, and so did not support dynamic loss scaling.
      
      PiperOrigin-RevId: 367719521
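      For context on why the flag became redundant (a sketch, not the models' own flag handling): the nonexperimental LossScaleOptimizer performs dynamic loss scaling by default, and a fixed scale has to be requested explicitly:

          import tensorflow as tf

          # Default behavior: dynamic loss scaling, no extra arguments needed.
          dynamic_opt = tf.keras.mixed_precision.LossScaleOptimizer(
              tf.keras.optimizers.SGD(learning_rate=0.1))

          # A fixed loss scale must be requested explicitly (rarely needed).
          fixed_opt = tf.keras.mixed_precision.LossScaleOptimizer(
              tf.keras.optimizers.SGD(learning_rate=0.1), dynamic=False, initial_scale=1024)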
  17. 08 Apr, 2021 2 commits
  18. 07 Apr, 2021 2 commits
  19. 06 Apr, 2021 4 commits
    • Clarify the deprecation warning for bert/ readme. · 8ccc242c
      Hongkun Yu authored
      PiperOrigin-RevId: 367101911
    • Clarify the deprecation warning for bert/ readme. · 882f8259
      Hongkun Yu authored
      PiperOrigin-RevId: 367101911
    • Disable temperature scaling during training. · fab47e9e
      Jeremiah Liu authored
      For the `GaussianProcessClassificationHead`, temperature scaling needs to be disabled during training to avoid an unintended rescaling of the effective learning rate, which harms model quality. (Unfortunately, this seems to require adding `training` to the `call` method.)
      
      Also set the default of `gp_cov_ridge_penalty` in `RandomFeatureGaussianProcess` to 1 to be consistent with that in the `GaussianProcessClassificationHead`.
      
      PiperOrigin-RevId: 366917075
    • Disable temperature scaling during training. · ff3ed4cc
      Jeremiah Liu authored
      For the `GaussianProcessClassificationHead`, temperature scaling needs to be disabled during training to avoid an unintended rescaling of the effective learning rate, which harms model quality. (Unfortunately, this seems to require adding `training` to the `call` method.)
      
      Also set the default of `gp_cov_ridge_penalty` in `RandomFeatureGaussianProcess` to 1 to be consistent with that in the `GaussianProcessClassificationHead`.
      
      PiperOrigin-RevId: 366917075
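      An illustrative sketch of the behavior these commits describe (the class below is a simplified stand-in, not the official/nlp GaussianProcessClassificationHead): temperature scaling is applied only at inference time, so it cannot act as an unintended rescaling of gradients during training:

          import tensorflow as tf

          class GaussianProcessHeadSketch(tf.keras.layers.Layer):
              """Toy classification head that skips temperature scaling while training."""

              def __init__(self, num_classes=2, temperature=2.0, **kwargs):
                  super().__init__(**kwargs)
                  self.temperature = temperature
                  self.output_layer = tf.keras.layers.Dense(num_classes)

              def call(self, inputs, training=None):
                  logits = self.output_layer(inputs)
                  if training:
                      # Return raw logits during training; dividing by the temperature
                      # here would effectively rescale the learning rate.
                      return logits
                  # At inference, apply temperature scaling for calibration.
                  return logits / self.temperature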
  20. 05 Apr, 2021 3 commits