- 13 Apr, 2021 6 commits
-
-
Rebecca Chen authored
PiperOrigin-RevId: 368157180
-
Jaehong Kim authored
PiperOrigin-RevId: 368130039
-
Fan Yang authored
PiperOrigin-RevId: 368129317
-
Reed Wanderman-Milne authored
This replaces symbols in tf.keras.mixed_precision.experimental with the corresponding non-experimental symbols. In some cases, passing a Policy is replaced with passing a policy name for conciseness. Additionally, for the Shakespeare model, the loss_scale flag is removed, since supporting it with the non-experimental API is slightly more verbose and it is recommended that users use the default loss scale.
PiperOrigin-RevId: 368123944
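The migration described above can be sketched as follows; a minimal example assuming TF 2.4+, where the policy name is passed directly instead of a Policy object:

```python
import tensorflow as tf

# Experimental API (being replaced):
#   policy = tf.keras.mixed_precision.experimental.Policy('mixed_float16')
#   tf.keras.mixed_precision.experimental.set_policy(policy)
# Non-experimental replacement: pass the policy name directly.
tf.keras.mixed_precision.set_global_policy('mixed_float16')
print(tf.keras.mixed_precision.global_policy().name)

tf.keras.mixed_precision.set_global_policy('float32')  # restore the default
```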
-
Reed Wanderman-Milne authored
This replaces symbols in tf.keras.mixed_precision.experimental with the corresponding non-experimental symbols. In some cases, passing a Policy is replaced with passing a policy name for conciseness. Additionally, for the Shakespeare model, the loss_scale flag is removed, since supporting it with the non-experimental API is slightly more verbose and it is recommended that users use the default loss scale.
PiperOrigin-RevId: 368123944
-
Hongkun Yu authored
PiperOrigin-RevId: 368122127
-
- 12 Apr, 2021 7 commits
-
-
Reed Wanderman-Milne authored
For all modified calls to set_mixed_precision_policy(), the loss_scale argument was removed, as it cannot be passed when the non-experimental API is used. In every such caller, the loss_scale is later used to explicitly create a LossScaleOptimizer, so removing the argument has no impact. Switching to the non-experimental LossScaleOptimizer also has no effect: its behavior is nearly identical, and all isinstance checks within the official models already check for the non-experimental version.
PiperOrigin-RevId: 368101975
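A sketch of the pattern this commit describes, assuming the TF 2 Keras mixed-precision API (set_mixed_precision_policy is the models' own helper and not shown): the non-experimental policy carries no loss_scale, so the loss scale is applied by wrapping the optimizer explicitly, and dynamic loss scaling is the wrapper's default:

```python
import tensorflow as tf

# The non-experimental policy takes no loss_scale argument.
tf.keras.mixed_precision.set_global_policy('mixed_float16')

# The loss scale is instead applied by explicitly wrapping the optimizer;
# the wrapper defaults to dynamic loss scaling.
opt = tf.keras.mixed_precision.LossScaleOptimizer(tf.keras.optimizers.SGD())

tf.keras.mixed_precision.set_global_policy('float32')  # restore the default
```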
-
Tianjian Meng authored
PiperOrigin-RevId: 368070382
-
Reed Wanderman-Milne authored
PiperOrigin-RevId: 368067415
-
A. Unique TensorFlower authored
PiperOrigin-RevId: 368040234
-
Abdullah Rashwan authored
PiperOrigin-RevId: 368036370
-
Ronny Votel authored
PiperOrigin-RevId: 368024164
-
Melanie Buehler authored
-
- 10 Apr, 2021 1 commit
-
-
Hongkun Yu authored
Use sparse_categorical_crossentropy for the test, as the loss object's default reduction does not work with TPUStrategy, and the single-task trainer already handles the reduction.
PiperOrigin-RevId: 367757677
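The distinction matters because the loss *object* applies a default reduction, while the functional form returns per-example losses and leaves the reduction to the trainer; a small sketch:

```python
import tensorflow as tf

y_true = tf.constant([1, 2])
y_pred = tf.constant([[0.1, 2.0, 0.1], [0.2, 0.2, 3.0]])  # logits

# Functional form: one loss value per example, no built-in reduction,
# so the single-task trainer can perform the reduction itself.
per_example = tf.keras.losses.sparse_categorical_crossentropy(
    y_true, y_pred, from_logits=True)
print(per_example.shape)  # (2,)
```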
-
- 09 Apr, 2021 3 commits
-
-
Reed Wanderman-Milne authored
All models that support loss scaling support dynamic loss scaling, so the argument serves no purpose. Previously, some models scaled the loss manually instead of using a LossScaleOptimizer, and so did not support dynamic loss scaling.
PiperOrigin-RevId: 367719521
-
Fan Yang authored
PiperOrigin-RevId: 367679732
-
Hongkun Yu authored
PiperOrigin-RevId: 367564187
-
- 08 Apr, 2021 2 commits
-
-
Fan Yang authored
PiperOrigin-RevId: 367522154
-
A. Unique TensorFlower authored
PiperOrigin-RevId: 367463455
-
- 07 Apr, 2021 1 commit
-
-
A. Unique TensorFlower authored
PiperOrigin-RevId: 367244904
-
- 06 Apr, 2021 7 commits
-
-
Reed Wanderman-Milne authored
PiperOrigin-RevId: 367105004
-
Hongkun Yu authored
PiperOrigin-RevId: 367101911
-
Xianzhi Du authored
PiperOrigin-RevId: 367083514
-
Reed Wanderman-Milne authored
This has no functional impact since the default is currently True, but I plan on changing the default to False soon.
PiperOrigin-RevId: 367049942
-
Reed Wanderman-Milne authored
The function `tf.train.experimental.enable_mixed_precision_graph_rewrite` will be removed from the TF2 namespace soon, at which point it will only be accessible under tf.compat.v1.
PiperOrigin-RevId: 367046393
-
A. Unique TensorFlower authored
PiperOrigin-RevId: 366939051
-
Jeremiah Liu authored
For the `GaussianProcessClassificationHead`, temperature scaling needs to be disabled during training to avoid an unexpected modification of the learning rate, which harms model quality. (Unfortunately, this seems to require adding a `training` argument to the `call` method.) Also set the default of `gp_cov_ridge_penalty` in `RandomFeatureGaussianProcess` to 1, for consistency with the `GaussianProcessClassificationHead`.
PiperOrigin-RevId: 366917075
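A hypothetical sketch of the pattern (the class name and temperature value are illustrative, not the actual implementation): a head whose `call` takes a `training` argument so that temperature scaling runs only at inference:

```python
import tensorflow as tf

class TemperatureScaledHead(tf.keras.layers.Layer):
    """Hypothetical sketch: skip temperature scaling while training so the
    scaled logits cannot interfere with the training dynamics."""

    def __init__(self, temperature=2.0, **kwargs):
        super().__init__(**kwargs)
        self.temperature = temperature

    def call(self, logits, training=None):
        if training:
            return logits                     # untouched during training
        return logits / self.temperature      # scaled only at inference
```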
-
- 05 Apr, 2021 12 commits
-
-
Scott Zhu authored
PiperOrigin-RevId: 366900579
-
Reed Wanderman-Milne authored
This shouldn't break any official models, since I changed all LossScaleOptimizer isinstance checks to use the nonexperimental version (the experimental LSO subclasses the nonexperimental LSO, so changing isinstance checks in this way is always safe). PiperOrigin-RevId: 366891847
-
Jeremiah Liu authored
This change allows `GaussianProcessClassificationHead` to output only the predictive logits during training and evaluation (instead of outputting a tuple `(logits, covmat)`). The goal is to make the layer more compatible with `SentencePredictionTask` and Keras' `model.fit()` API.
PiperOrigin-RevId: 366891298
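A hypothetical sketch of the interface change (class name, flag name, and the placeholder covariance are illustrative, not the real layer): returning a plain logits tensor by default keeps the head compatible with `model.fit()`, with the covariance exposed through an opt-in flag:

```python
import tensorflow as tf

class SketchGPHead(tf.keras.layers.Layer):
    """Hypothetical sketch: return only the logits by default so the head
    composes with Keras training loops; expose the covariance via a flag
    instead of always returning a (logits, covmat) tuple."""

    def __init__(self, units, return_covariance=False, **kwargs):
        super().__init__(**kwargs)
        self.dense = tf.keras.layers.Dense(units)
        self.return_covariance = return_covariance

    def call(self, inputs):
        logits = self.dense(inputs)
        if self.return_covariance:
            covmat = tf.eye(tf.shape(inputs)[0])  # placeholder covariance
            return logits, covmat
        return logits
```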
-
Fan Yang authored
PiperOrigin-RevId: 366889393
-
Allen Wang authored
PiperOrigin-RevId: 366883662
-
Hongkun Yu authored
PiperOrigin-RevId: 366883391
-
Hongkun Yu authored
PiperOrigin-RevId: 366883351
-
A. Unique TensorFlower authored
PiperOrigin-RevId: 366879385
-
A. Unique TensorFlower authored
Rewrite the GIOU loss op in matrix form to avoid map_fn, which can be prohibitively costly when dealing with a large number of bounding boxes.
PiperOrigin-RevId: 366854232
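A minimal NumPy sketch of the idea (illustrative, not the actual op): compute GIoU for matched box pairs with vectorized array ops over whole columns, instead of a per-box map_fn-style loop:

```python
import numpy as np

def giou_matrix(b1, b2):
    """Hypothetical vectorized GIoU between matched box pairs.

    Boxes are [ymin, xmin, ymax, xmax]; row i of b1 pairs with row i of b2.
    Every step operates on whole columns, so no per-box Python loop is needed.
    """
    # Intersection rectangle and area (clipped at zero when boxes are disjoint).
    ih = np.clip(np.minimum(b1[:, 2], b2[:, 2]) - np.maximum(b1[:, 0], b2[:, 0]), 0, None)
    iw = np.clip(np.minimum(b1[:, 3], b2[:, 3]) - np.maximum(b1[:, 1], b2[:, 1]), 0, None)
    inter = ih * iw
    area1 = (b1[:, 2] - b1[:, 0]) * (b1[:, 3] - b1[:, 1])
    area2 = (b2[:, 2] - b2[:, 0]) * (b2[:, 3] - b2[:, 1])
    union = area1 + area2 - inter
    iou = inter / union
    # Smallest axis-aligned box enclosing both boxes.
    eh = np.maximum(b1[:, 2], b2[:, 2]) - np.minimum(b1[:, 0], b2[:, 0])
    ew = np.maximum(b1[:, 3], b2[:, 3]) - np.minimum(b1[:, 1], b2[:, 1])
    enclose = eh * ew
    return iou - (enclose - union) / enclose
```

For identical boxes the result is 1; for disjoint boxes it goes negative, approaching -1 as the boxes move apart.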
-
Vighnesh Birodkar authored
PiperOrigin-RevId: 366850188
-
A. Unique TensorFlower authored
PiperOrigin-RevId: 366819679
-
A. Unique TensorFlower authored
PiperOrigin-RevId: 366817283
-
- 04 Apr, 2021 1 commit
-
-
Pablo Ribalta Lorenzo authored
Signed-off-by: Pablo Ribalta Lorenzo <pribalta@nvidia.com>
-