  1. 19 Apr, 2021 2 commits
  2. 17 Apr, 2021 2 commits
    • Catch tf.text NotFoundError: · 550e4f21
      Chen Chen authored
      tensorflow.python.framework.errors_impl.NotFoundError: /usr/local/lib/python3.6/dist-packages/tensorflow_text/python/metrics/_text_similarity_metric_ops.so: undefined symbol: _ZN10tensorflow6StatusC1ENS_5error4CodeEN4absl12lts_2021032411string_viewEOSt6vectorINS_10StackFrameESaIS7_EE
      
      PiperOrigin-RevId: 368978629
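      A minimal sketch (not this commit's actual diff) of the kind of import guard the message implies, assuming tensorflow_text should be treated as optional when its native ops fail to load against the installed TensorFlow build:

      ```python
      import tensorflow as tf

      try:
        # Importing tensorflow_text loads native op libraries such as
        # _text_similarity_metric_ops.so; an ABI mismatch surfaces as the
        # "undefined symbol" NotFoundError quoted above.
        import tensorflow_text as tf_text
      except (ImportError, tf.errors.NotFoundError):
        tf_text = None  # Callers must check for None before using tf.text ops.
      ```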
    • Internal change to image classification. · c52a287f
      Fan Yang authored
      PiperOrigin-RevId: 368957441
  3. 16 Apr, 2021 3 commits
  4. 15 Apr, 2021 3 commits
  5. 14 Apr, 2021 2 commits
  6. 13 Apr, 2021 8 commits
    • Internal change · f4bd58dd
      A. Unique TensorFlower authored
      PiperOrigin-RevId: 368260712
    • Fix parse_configuration using "in" with ParseConfigOptions · f61d51ee
      Hongkun Yu authored
      PiperOrigin-RevId: 368257425
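      The title suggests letting ParseConfigOptions be used with Python's "in" operator. A hedged sketch of one way a dataclass-style options holder can support membership tests via __contains__ (field names here are illustrative, not taken from the repository):

      ```python
      import dataclasses
      from typing import List, Optional

      @dataclasses.dataclass
      class ParseConfigOptions:
        # Illustrative fields only; the real class may differ.
        experiment: str
        config_file: Optional[List[str]] = None

        def __contains__(self, name: str) -> bool:
          # Allows callers to write `if 'config_file' in options:` the same
          # way they would test membership in a dict of parsed flags.
          return name in dataclasses.asdict(self)
      ```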
    • Internal change · fbc1a8c2
      Rebecca Chen authored
      PiperOrigin-RevId: 368157180
    • Internal change · 516f0402
      Jaehong Kim authored
      PiperOrigin-RevId: 368130039
    • Add more pytype checking. · 615609b8
      Fan Yang authored
      PiperOrigin-RevId: 368129317
    • Use nonexperimental mixed precision API. · 9a4d14a9
      Reed Wanderman-Milne authored
      This replaces symbols in tf.keras.mixed_precision.experimental with the corresponding nonexperimental symbols. In some cases, passing a Policy is replaced with passing a policy name for conciseness.
      
      Additionally, for the Shakespeare model, the loss_scale flag is removed, since supporting it with the nonexperimental API is slightly more verbose and it is recommended that users use the default loss scale.
      
      PiperOrigin-RevId: 368123944
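      For context, a hedged before/after sketch of the migration this message describes, using only the public Keras mixed precision API:

      ```python
      import tensorflow as tf

      # Before: experimental API, passing a Policy object.
      # policy = tf.keras.mixed_precision.experimental.Policy('mixed_float16')
      # tf.keras.mixed_precision.experimental.set_policy(policy)

      # After: nonexperimental API, passing the policy name for conciseness.
      tf.keras.mixed_precision.set_global_policy('mixed_float16')
      ```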
    • Use nonexperimental mixed precision API. · c0ac8d1c
      Reed Wanderman-Milne authored
      This replaces symbols in tf.keras.mixed_precision.experimental with the corresponding nonexperimental symbols. In some cases, passing a Policy is replaced with passing a policy name for conciseness.
      
      Additionally, for the Shakespeare model, the loss_scale flag is removed, since supporting it with the nonexperimental API is slightly more verbose and it is recommended that users use the default loss scale.
      
      PiperOrigin-RevId: 368123944
    • Internal change · 08fe7f0a
      Hongkun Yu authored
      PiperOrigin-RevId: 368122127
  7. 12 Apr, 2021 6 commits
    • Use nonexperimental mixed precision API for official models. · ba8ad4f5
      Reed Wanderman-Milne authored
      For all modified calls to set_mixed_precision_policy(), the loss_scale argument was removed, as it cannot be passed if the nonexperimental API is used. For all such callers, the loss_scale is later used to explicitly create a LossScaleOptimizer, so removing the argument has no impact.
      
      Switching to the nonexperimental LossScaleOptimizer has no effect, as it has near-identical behavior and all isinstance checks within the official models check for the nonexperimental version.
      
      PiperOrigin-RevId: 368101975
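      A hedged sketch of the pattern described above, using the public Keras API rather than the repository's set_mixed_precision_policy() helper; the wrapped optimizer is illustrative:

      ```python
      import tensorflow as tf

      tf.keras.mixed_precision.set_global_policy('mixed_float16')

      optimizer = tf.keras.optimizers.SGD(learning_rate=0.1)
      # The loss scale is no longer passed when setting the policy; instead the
      # optimizer is wrapped explicitly. The nonexperimental LossScaleOptimizer
      # applies dynamic loss scaling by default.
      optimizer = tf.keras.mixed_precision.LossScaleOptimizer(optimizer)
      ```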
    • Use nonexperimental mixed precision API for official models. · e6cda015
      Reed Wanderman-Milne authored
      For all modified calls to set_mixed_precision_policy(), the loss_scale argument was removed, as it cannot be passed if the nonexperimental API is used. For all such callers, the loss_scale is later used to explicitly create a LossScaleOptimizer, so removing the argument has no impact.
      
      Switching to the nonexperimental LossScaleOptimizer has no effect, as it has near-identical behavior and all isinstance checks within the official models check for the nonexperimental version.
      
      PiperOrigin-RevId: 368101975
    • Internal change · b71df5b1
      Tianjian Meng authored
      PiperOrigin-RevId: 368070382
    • Internal change · 1e1353da
      Reed Wanderman-Milne authored
      PiperOrigin-RevId: 368067415
    • Internal change · ec90b1e7
      A. Unique TensorFlower authored
      PiperOrigin-RevId: 368040234
    • Internal change · 425c2f52
      Abdullah Rashwan authored
      PiperOrigin-RevId: 368036370
  8. 10 Apr, 2021 1 commit
    • Remove dynamic_loss_scale argument to define_performance. · 3803472a
      Reed Wanderman-Milne authored
      All models that support loss scaling support dynamic loss scaling, so the argument has no purpose. Previously, some models scaled the loss manually instead of using a LossScaleOptimizer and so did not support dynamic loss scaling.
      
      PiperOrigin-RevId: 367719521
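      As a point of reference, a minimal sketch (assuming the public Keras API) of why a separate dynamic_loss_scale switch is redundant once models go through LossScaleOptimizer: dynamic loss scaling is the default, and a fixed scale remains available explicitly.

      ```python
      import tensorflow as tf

      # Dynamic loss scaling is the default when wrapping an optimizer.
      dynamic_opt = tf.keras.mixed_precision.LossScaleOptimizer(
          tf.keras.optimizers.SGD(learning_rate=0.1))

      # A fixed loss scale can still be requested explicitly if ever needed.
      fixed_opt = tf.keras.mixed_precision.LossScaleOptimizer(
          tf.keras.optimizers.SGD(learning_rate=0.1), dynamic=False, initial_scale=128)
      ```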
  9. 09 Apr, 2021 2 commits
  10. 08 Apr, 2021 2 commits
  11. 07 Apr, 2021 1 commit
  12. 06 Apr, 2021 8 commits