- 27 Apr, 2021 (2 commits)
  - Dan Kondratyuk authored (PiperOrigin-RevId: 370717047)
  - Xianzhi Du authored (PiperOrigin-RevId: 370677512)
- 26 Apr, 2021 (1 commit)
  - A. Unique TensorFlower authored (PiperOrigin-RevId: 370548032)
- 24 Apr, 2021 (1 commit)
  - Yeqing Li authored (PiperOrigin-RevId: 370198074)
- 22 Apr, 2021 (2 commits)
- 21 Apr, 2021 (4 commits)
  - Dan Kondratyuk authored (PiperOrigin-RevId: 369747553)
  - Yeqing Li authored (PiperOrigin-RevId: 369741002)
  - Yeqing Li authored (PiperOrigin-RevId: 369712987)
  - A. Unique TensorFlower authored (PiperOrigin-RevId: 369697787)
- 19 Apr, 2021 (1 commit)
  - Yeqing Li authored (PiperOrigin-RevId: 369249071)
- 17 Apr, 2021 (1 commit)
  - Fan Yang authored (PiperOrigin-RevId: 368957441)
- 16 Apr, 2021 (1 commit)
  - Fan Yang authored (PiperOrigin-RevId: 368935233)
- 15 Apr, 2021 (2 commits)
- 14 Apr, 2021 (1 commit)
  - Jaehong Kim authored: This CL changes ResidualBlock and InvertedBottleneckBlock. (PiperOrigin-RevId: 368383954)
- 13 Apr, 2021 (4 commits)
  - Rebecca Chen authored (PiperOrigin-RevId: 368157180)
  - Jaehong Kim authored (PiperOrigin-RevId: 368130039)
  - Fan Yang authored (PiperOrigin-RevId: 368129317)
  - Reed Wanderman-Milne authored: This replaces symbols in tf.keras.mixed_precision.experimental with the corresponding nonexperimental symbols. In some cases, passing a Policy is replaced with passing a policy name for conciseness. Additionally, for the Shakespeare model, the loss_scale flag is removed, since supporting it with the nonexperimental API is slightly more verbose and it is recommended that users use the default loss scale. (PiperOrigin-RevId: 368123944)
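The mixed-precision commit just above describes replacing the experimental Keras mixed-precision symbols with the stable ones and passing policy names instead of Policy objects. A minimal sketch of that migration, assuming TF 2.4 or later; the optimizer choice here is illustrative, not taken from the commit:

```python
import tensorflow as tf

# Old experimental style, roughly:
#   policy = tf.keras.mixed_precision.experimental.Policy('mixed_float16')
#   tf.keras.mixed_precision.experimental.set_policy(policy)
#
# Nonexperimental style: a policy *name* is enough, which is the
# "passing a policy name for conciseness" change the commit mentions.
tf.keras.mixed_precision.set_global_policy('mixed_float16')

# Dynamic loss scaling is the default, so no loss_scale flag is needed.
optimizer = tf.keras.mixed_precision.LossScaleOptimizer(
    tf.keras.optimizers.SGD(learning_rate=0.1))
```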
- 12 Apr, 2021 (4 commits)
  - Tianjian Meng authored (PiperOrigin-RevId: 368070382)
  - Reed Wanderman-Milne authored (PiperOrigin-RevId: 368067415)
  - A. Unique TensorFlower authored (PiperOrigin-RevId: 368040234)
  - Abdullah Rashwan authored (PiperOrigin-RevId: 368036370)
- 09 Apr, 2021 (2 commits)
  - Reed Wanderman-Milne authored: All models which support loss scaling support dynamic loss scaling, so the argument has no purpose. It used to be that some models scaled the loss manually instead of using a LossScaleOptimizer, and so did not support dynamic loss scaling. (PiperOrigin-RevId: 367719521; see the sketch after this list)
  - Fan Yang authored (PiperOrigin-RevId: 367679732)
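The loss-scaling commit in the list above removes the loss_scale argument because a LossScaleOptimizer already defaults to dynamic loss scaling. A hedged sketch of the pattern that replaces manual loss scaling in a custom training step, assuming TF 2.4 or later; the model and loss here are illustrative only:

```python
import tensorflow as tf

# A LossScaleOptimizer with no extra arguments already uses *dynamic*
# loss scaling, which is why a separate loss_scale argument adds nothing.
opt = tf.keras.mixed_precision.LossScaleOptimizer(tf.keras.optimizers.SGD())

model = tf.keras.Sequential([tf.keras.layers.Dense(1)])  # illustrative model

@tf.function
def train_step(x, y):
    with tf.GradientTape() as tape:
        loss = tf.reduce_mean(tf.square(model(x) - y))
        # Instead of scaling the loss by hand, let the optimizer do it.
        scaled_loss = opt.get_scaled_loss(loss)
    scaled_grads = tape.gradient(scaled_loss, model.trainable_variables)
    grads = opt.get_unscaled_gradients(scaled_grads)
    opt.apply_gradients(zip(grads, model.trainable_variables))
    return loss
```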
- 08 Apr, 2021 (1 commit)
  - Fan Yang authored (PiperOrigin-RevId: 367522154)
- 06 Apr, 2021 (3 commits)
  - Reed Wanderman-Milne authored (PiperOrigin-RevId: 367105004)
  - Xianzhi Du authored (PiperOrigin-RevId: 367083514)
  - A. Unique TensorFlower authored (PiperOrigin-RevId: 366939051)
- 05 Apr, 2021 (5 commits)
  - Scott Zhu authored (PiperOrigin-RevId: 366900579)
  - Reed Wanderman-Milne authored: This shouldn't break any official models, since I changed all LossScaleOptimizer isinstance checks to use the nonexperimental version (the experimental LSO subclasses the nonexperimental LSO, so changing isinstance checks in this way is always safe). (PiperOrigin-RevId: 366891847; see the sketch after this list)
  - Fan Yang authored (PiperOrigin-RevId: 366889393)
  - Allen Wang authored (PiperOrigin-RevId: 366883662)
  - A. Unique TensorFlower authored (PiperOrigin-RevId: 366817283)
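The isinstance commit in the list above relies on subclassing: because the experimental LossScaleOptimizer derives from the nonexperimental one, a check against the nonexperimental class also matches experimental instances, while the reverse would not hold. A minimal sketch, assuming a TF release where the experimental namespace still exists (roughly TF 2.4/2.5, contemporaneous with these commits):

```python
import tensorflow as tf

mp = tf.keras.mixed_precision

# Wrap an optimizer with the *experimental* LossScaleOptimizer.
exp_lso = mp.experimental.LossScaleOptimizer(
    tf.keras.optimizers.SGD(), loss_scale='dynamic')

# An isinstance check against the nonexperimental class still matches it,
# because the experimental class subclasses the nonexperimental one.
assert isinstance(exp_lso, mp.LossScaleOptimizer)
```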
- 03 Apr, 2021 (1 commit)
  - Yeqing Li authored (PiperOrigin-RevId: 366540340)
- 02 Apr, 2021 (4 commits)
  - A. Unique TensorFlower authored (PiperOrigin-RevId: 366496850)
  - A. Unique TensorFlower authored (PiperOrigin-RevId: 366395416)
  - Abdullah Rashwan authored (PiperOrigin-RevId: 366391425)
  - A. Unique TensorFlower authored (PiperOrigin-RevId: 366382220)