  1. 26 May, 2021 1 commit
  2. 25 May, 2021 1 commit
  3. 24 May, 2021 1 commit
  4. 22 May, 2021 2 commits
  5. 21 May, 2021 2 commits
  6. 20 May, 2021 3 commits
  7. 19 May, 2021 1 commit
  8. 18 May, 2021 1 commit
  9. 17 May, 2021 2 commits
  10. 14 May, 2021 1 commit
  11. 13 May, 2021 1 commit
  12. 11 May, 2021 2 commits
  13. 05 May, 2021 1 commit
  14. 26 Apr, 2021 1 commit
  15. 23 Apr, 2021 1 commit
  16. 20 Apr, 2021 1 commit
  17. 19 Apr, 2021 1 commit
  18. 17 Apr, 2021 1 commit
    • Catch tf.text NotFoundError: · ffaa4035
      Chen Chen authored
      tensorflow.python.framework.errors_impl.NotFoundError: /usr/local/lib/python3.6/dist-packages/tensorflow_text/python/metrics/_text_similarity_metric_ops.so: undefined symbol: _ZN10tensorflow6StatusC1ENS_5error4CodeEN4absl12lts_2021032411string_viewEOSt6vectorINS_10StackFrameESaIS7_EE
      
      PiperOrigin-RevId: 368978629
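The fix above is the usual guarded-import pattern for an optional native-extension dependency; a minimal sketch (the helper name is hypothetical — the real commit catches `tf.errors.NotFoundError` around its own import site):

```python
# tensorflow_text loads a native .so at import time; when that extension was
# built against an incompatible TensorFlow, the import raises NotFoundError
# for an undefined native symbol. Wrapping the import makes the dependency
# optional instead of fatal.
try:
    import tensorflow_text as tf_text  # may raise NotFoundError / ImportError
except Exception:  # the actual fix catches tf.errors.NotFoundError specifically
    tf_text = None

def tf_text_available():
    """Hypothetical helper: report whether tensorflow_text loaded cleanly."""
    return tf_text is not None
```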
  19. 15 Apr, 2021 1 commit
  20. 13 Apr, 2021 2 commits
    • Internal change · b1aa44d9
      A. Unique TensorFlower authored
      PiperOrigin-RevId: 368260712
    • Use nonexperimental mixed precision API. · 4334a892
      Reed Wanderman-Milne authored
      This replaces symbols in tf.keras.mixed_precision.experimental with the corresponding nonexperimental symbols. In some cases, passing a Policy is replaced with passing a policy name for conciseness.
      
      Additionally, for the Shakespeare model, the loss_scale flag is removed, since supporting it with the nonexperimental API is slightly more verbose and it is recommended that users use the default loss scale.
      
      PiperOrigin-RevId: 368123944
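      For reference, the experimental-to-nonexperimental rename looks roughly like this — a sketch assuming the TF 2.4+ public API, guarded so it is a no-op where TensorFlow is not installed:

```python
import importlib.util

# Guarded so the snippet runs (trivially) even without TensorFlow installed.
policy_ok = True
if importlib.util.find_spec("tensorflow"):
    import tensorflow as tf

    # Before: tf.keras.mixed_precision.experimental.set_policy(
    #     tf.keras.mixed_precision.experimental.Policy("mixed_float16"))
    # After: the nonexperimental API accepts the policy name directly.
    tf.keras.mixed_precision.set_global_policy("mixed_float16")

    policy = tf.keras.mixed_precision.global_policy()
    # mixed_float16 computes in float16 but keeps variables in float32.
    policy_ok = (policy.compute_dtype == "float16"
                 and policy.variable_dtype == "float32")

    tf.keras.mixed_precision.set_global_policy("float32")  # restore default
```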
  21. 12 Apr, 2021 1 commit
    • Use nonexperimental mixed precision API for official models. · 0d8f9807
      Reed Wanderman-Milne authored
      For all modified calls to set_mixed_precision_policy(), the loss_scale argument was removed, as it cannot be passed if the nonexperimental API is used. For all such callers, the loss_scale is later used to explicitly create a LossScaleOptimizer, so removing the argument has no impact.
      
      Switching to the non-experimental LossScaleOptimizer has no effect, as it has near identical behavior and all isinstance checks within the official models check for the non-experimental version.
      
      PiperOrigin-RevId: 368101975
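      The explicit wrapping described above can be sketched as follows (TF 2.4+ names, guarded for environments without TensorFlow):

```python
import importlib.util

wrapped_ok = True
if importlib.util.find_spec("tensorflow"):
    import tensorflow as tf

    opt = tf.keras.optimizers.SGD()
    # Instead of threading a loss_scale argument through
    # set_mixed_precision_policy(), the wrapper is created explicitly.
    # The nonexperimental LossScaleOptimizer defaults to dynamic scaling.
    opt = tf.keras.mixed_precision.LossScaleOptimizer(opt)
    wrapped_ok = isinstance(opt, tf.keras.mixed_precision.LossScaleOptimizer)
```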
  22. 09 Apr, 2021 1 commit
    • Remove dynamic_loss_scale argument to define_performance. · e353e4e5
      Reed Wanderman-Milne authored
      All models which support loss scaling support dynamic loss scaling, so the argument has no purpose. It used to be that some models scaled the loss manually instead of using a LossScaleOptimizer, and so did not support dynamic loss scaling.
      
      PiperOrigin-RevId: 367719521
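      Dynamic loss scaling, which the commit notes every loss-scaling model now uses, adjusts the scale at run time: it doubles after a window of finite-gradient steps and halves on overflow. A hypothetical pure-Python sketch of that rule (illustrative only, not the TensorFlow implementation):

```python
class DynamicLossScale:
    """Toy model of dynamic loss scaling (illustrative, not TF's code)."""

    def __init__(self, initial_scale=2.0 ** 15, growth_interval=2000):
        self.scale = initial_scale
        self.growth_interval = growth_interval
        self.good_steps = 0

    def update(self, grads_finite):
        if grads_finite:
            # Count consecutive overflow-free steps; double when enough seen.
            self.good_steps += 1
            if self.good_steps >= self.growth_interval:
                self.scale *= 2
                self.good_steps = 0
        else:
            # Non-finite gradients mean the scale was too large: halve it.
            self.scale = max(self.scale / 2, 1.0)
            self.good_steps = 0

scaler = DynamicLossScale(initial_scale=4.0, growth_interval=2)
scaler.update(True)
scaler.update(True)   # two finite steps: scale doubles to 8.0
scaler.update(False)  # overflow: scale halves back to 4.0
assert scaler.scale == 4.0
```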
  23. 08 Apr, 2021 1 commit
  24. 07 Apr, 2021 1 commit
  25. 06 Apr, 2021 2 commits
    • Clarify the deprecation warning for bert/ readme. · 8ccc242c
      Hongkun Yu authored
      PiperOrigin-RevId: 367101911
    • Disable temperature scaling during training. · fab47e9e
      Jeremiah Liu authored
      For the `GaussianProcessClassificationHead`, the temperature scaling needs to be disabled during training to avoid unexpected modification to the learning rate, which harms model quality. (Unfortunately, this seems to require adding `training` to the `call` method).
      
      Also set the default of `gp_cov_ridge_penalty` in `RandomFeatureGaussianProcess` to 1 to be consistent with that in the `GaussianProcessClassificationHead`.
      
      PiperOrigin-RevId: 366917075
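      The training-time bypass can be sketched as follows — plain Python with a hypothetical class name, not the actual `GaussianProcessClassificationHead`:

```python
class TemperatureScaledHead:
    """Hypothetical head that applies temperature scaling only at inference."""

    def __init__(self, temperature=1.5):
        self.temperature = temperature

    def call(self, logits, training=False):
        # During training, return raw logits: dividing by the temperature
        # would also rescale gradients, silently changing the effective
        # learning rate and harming model quality.
        if training:
            return logits
        # At inference, scale logits down to calibrate confidence.
        return [l / self.temperature for l in logits]

head = TemperatureScaledHead(temperature=2.0)
assert head.call([4.0], training=True) == [4.0]   # untouched while training
assert head.call([4.0], training=False) == [2.0]  # scaled at inference
```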
  26. 05 Apr, 2021 2 commits
  27. 01 Apr, 2021 5 commits