- 19 Apr, 2021 2 commits
-
-
Yeqing Li authored
PiperOrigin-RevId: 369249071
-
A. Unique TensorFlower authored
Add `include_example_field` into `SentencePredictionTextDataLoader` so that we can use the data loader in the predict step of the `SentencePrediction` task.
PiperOrigin-RevId: 369215827
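A minimal sketch of what carrying the raw example through the loader enables at predict time; the parse function and field names below are illustrative assumptions, not the task's actual data-loader API.

```python
import tensorflow as tf

# Hypothetical parse step: optionally keep the untouched example alongside the
# model inputs so the predict step can emit human-readable output.
def parse_fn(raw_text, label, include_example_field=True):
    features = {
        "input_text": tf.strings.lower(raw_text),  # stand-in for real preprocessing
        "label": label,
    }
    if include_example_field:
        features["example"] = raw_text  # carried through untouched for prediction
    return features

ds = tf.data.Dataset.from_tensor_slices((["good movie", "weak plot"], [1, 0]))
ds = ds.map(lambda text, label: parse_fn(text, label, include_example_field=True))

for features in ds.take(1):
    print(features["example"].numpy(), features["label"].numpy())
```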
-
- 17 Apr, 2021 2 commits
-
-
Chen Chen authored
tensorflow.python.framework.errors_impl.NotFoundError: /usr/local/lib/python3.6/dist-packages/tensorflow_text/python/metrics/_text_similarity_metric_ops.so: undefined symbol: _ZN10tensorflow6StatusC1ENS_5error4CodeEN4absl12lts_2021032411string_viewEOSt6vectorINS_10StackFrameESaIS7_EE
PiperOrigin-RevId: 368978629
-
Fan Yang authored
PiperOrigin-RevId: 368957441
-
- 16 Apr, 2021 5 commits
-
-
Fan Yang authored
PiperOrigin-RevId: 368935233
-
Pablo Ribalta Lorenzo authored
Signed-off-by: Pablo Ribalta Lorenzo <pribalta@nvidia.com>
-
Jekaterina Jaroslavceva authored
* Dataset utilities added.
* Global model definition
* Dataset modules added.
* Dataset modules fix.
* Global features model training added
* Global features fix
* Test dataset update
* PR fixes
* Repo sync
* Repo sync
* Syncing 2
* Syncing 2
* Added global model supporting modules
* Code style fixes
* Minor style fixes
-
A. Unique TensorFlower authored
PiperOrigin-RevId: 368868424
-
Yeqing Li authored
PiperOrigin-RevId: 368778443
-
- 15 Apr, 2021 3 commits
- 14 Apr, 2021 5 commits
-
-
Leandro Schelb authored
-
Hongkun Yu authored
PiperOrigin-RevId: 368535232
-
A. Unique TensorFlower authored
Call set_keypoint_visibilities on the non-expanded versions of the detection and groundtruth keypoints. set_keypoint_visibilities expects a rank-3 tensor, and was being provided a rank-4 tensor. This had the unintended effect of creating a keypoint visibilities tensor of the wrong shape, resulting in only 2 keypoints being visualized.
PiperOrigin-RevId: 368458273
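A hedged illustration of the shape mismatch, using a plain NumPy stand-in rather than the Object Detection API itself: visibilities derived from rank-3 keypoints come out as [num_instances, num_keypoints], while a rank-4, batch-expanded input adds an extra leading dimension that downstream visualization code does not expect.

```python
import numpy as np

# Hypothetical stand-in for inferring visibilities from NaN keypoint coordinates.
def infer_visibilities(keypoints):
    # Expects rank-3 keypoints: [num_instances, num_keypoints, 2].
    return ~np.any(np.isnan(keypoints), axis=-1)

keypoints = np.random.rand(3, 17, 2)     # non-expanded: rank 3
expanded = keypoints[np.newaxis, ...]    # batch-expanded: rank 4, [1, 3, 17, 2]

print(infer_visibilities(keypoints).shape)  # (3, 17) -- one flag per keypoint
print(infer_visibilities(expanded).shape)   # (1, 3, 17) -- wrong shape downstream
```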
-
Jaehong Kim authored
This CL changes ResidualBlock and InvertedBottleneckBlock.
PiperOrigin-RevId: 368383954
-
Liangzhe Yuan authored
Fix the misspelled name "remove_unecessary_ema" -> "remove_unnecessary_ema" and re-implement it with a deep dictionary copy.
PiperOrigin-RevId: 368338501
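A hedged sketch of the renamed helper, assuming the checkpoint items are held in a plain nested dict; the key-matching rule below is an assumption, since the commit does not show the actual implementation.

```python
import copy

def remove_unnecessary_ema(checkpoint_items):
    """Returns a deep copy of `checkpoint_items` with EMA entries dropped."""
    items = copy.deepcopy(checkpoint_items)  # deep copy: never mutate the caller's dict
    for key in [k for k in items if "ema" in k.lower()]:  # assumed matching rule
        del items[key]
    return items

original = {"model": {"w": [1.0]}, "ema_model": {"w": [1.0]}}
cleaned = remove_unnecessary_ema(original)
assert "ema_model" not in cleaned   # EMA entry removed from the copy
assert "ema_model" in original      # original left untouched
```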
-
- 13 Apr, 2021 11 commits
-
-
Jekaterina Jaroslavceva authored
* Dataset utilities added.
* Global model definition
* Dataset modules added.
* Dataset modules fix.
* Global features model training added
* Global features fix
* Test dataset update
* PR fixes
* Repo sync
* Repo sync
* Modules moving
* Removed unnecessary modules
* Removed unnecessary files
-
A. Unique TensorFlower authored
PiperOrigin-RevId: 368260712
-
Hongkun Yu authored
PiperOrigin-RevId: 368257425
-
Ronny Votel authored
PiperOrigin-RevId: 368217758
-
Vighnesh Birodkar authored
PiperOrigin-RevId: 368215027
-
Rebecca Chen authored
PiperOrigin-RevId: 368157180
-
Jaehong Kim authored
PiperOrigin-RevId: 368130039
-
Fan Yang authored
PiperOrigin-RevId: 368129317
-
Reed Wanderman-Milne authored
This replaces symbols in tf.keras.mixed_precision.experimental with the corresponding non-experimental symbols. In some cases, passing a Policy is replaced with passing a policy name for conciseness. Additionally, for the Shakespeare model, the loss_scale flag is removed, since supporting it with the non-experimental API is slightly more verbose and users are encouraged to use the default loss scale.
PiperOrigin-RevId: 368123944
-
Reed Wanderman-Milne authored
This replaces symbols in tf.keras.mixed_precision.experimental with the corresponding non-experimental symbols. In some cases, passing a Policy is replaced with passing a policy name for conciseness. Additionally, for the Shakespeare model, the loss_scale flag is removed, since supporting it with the non-experimental API is slightly more verbose and users are encouraged to use the default loss scale.
PiperOrigin-RevId: 368123944
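A hedged before/after sketch of the migration described in the two commits above, using the public Keras API; whether a given model passes a policy name or a Policy object depends on the call site.

```python
import tensorflow as tf

# Before (experimental API, pre-TF 2.4 style): a Policy object, optionally with
# a loss scale baked in.
# policy = tf.keras.mixed_precision.experimental.Policy('mixed_float16',
#                                                        loss_scale='dynamic')
# tf.keras.mixed_precision.experimental.set_policy(policy)

# After (non-experimental API): a policy name is enough, and dynamic loss
# scaling lives in the optimizer wrapper instead of the policy.
tf.keras.mixed_precision.set_global_policy('mixed_float16')
optimizer = tf.keras.mixed_precision.LossScaleOptimizer(
    tf.keras.optimizers.SGD(0.1))  # dynamic loss scaling by default
```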
-
Hongkun Yu authored
PiperOrigin-RevId: 368122127
-
- 12 Apr, 2021 7 commits
-
-
Reed Wanderman-Milne authored
For all modified calls to set_mixed_precision_policy(), the loss_scale argument was removed, as it cannot be passed when the non-experimental API is used. For all such callers, the loss_scale is later used to explicitly create a LossScaleOptimizer, so removing the argument has no impact. Switching to the non-experimental LossScaleOptimizer has no effect, as it has near-identical behavior and all isinstance checks within the official models check for the non-experimental version.
PiperOrigin-RevId: 368101975
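A hedged sketch of why dropping the loss_scale argument is a no-op here: the non-experimental LossScaleOptimizer defaults to dynamic loss scaling, so explicitly wrapping the optimizer reproduces what the removed argument used to configure. set_mixed_precision_policy is the models' own helper; the calls below use only the public Keras API.

```python
import tensorflow as tf

# Loss scaling is configured on the optimizer wrapper, not via a loss_scale
# argument on the policy helper. With no extra arguments, the non-experimental
# wrapper already performs dynamic loss scaling.
base_optimizer = tf.keras.optimizers.SGD(0.1)
optimizer = tf.keras.mixed_precision.LossScaleOptimizer(base_optimizer)

print(optimizer.dynamic)  # True -- dynamic loss scaling is the default
print(isinstance(optimizer, tf.keras.mixed_precision.LossScaleOptimizer))  # True
```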
-
Tianjian Meng authored
PiperOrigin-RevId: 368070382
-
Reed Wanderman-Milne authored
PiperOrigin-RevId: 368067415
-
A. Unique TensorFlower authored
PiperOrigin-RevId: 368040234
-
Abdullah Rashwan authored
PiperOrigin-RevId: 368036370
-
Ronny Votel authored
PiperOrigin-RevId: 368024164
-
Melanie Buehler authored
-
- 10 Apr, 2021 1 commit
-
-
Hongkun Yu authored
Use sparse_categorical_crossentropy for the test, since the loss object's default reduction does not work under TPUStrategy and the single-task trainer already handles the reduction.
PiperOrigin-RevId: 367757677
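A short hedged illustration of the distinction: the functional loss returns per-example values and leaves the reduction to the trainer, whereas a loss object with the default reduction tries to average over the global batch itself, which fails in custom training loops under a distribution strategy unless the reduction is set to NONE explicitly.

```python
import tensorflow as tf

y_true = tf.constant([1, 2])
logits = tf.random.normal([2, 5])

# Functional form: one loss value per example; the trainer does the reduction.
per_example = tf.keras.losses.sparse_categorical_crossentropy(
    y_true, logits, from_logits=True)
print(per_example.shape)  # (2,)

# Loss object: the default reduction is not usable inside a custom training
# loop under a strategy such as TPUStrategy, so it would need Reduction.NONE.
loss_obj = tf.keras.losses.SparseCategoricalCrossentropy(
    from_logits=True, reduction=tf.keras.losses.Reduction.NONE)
print(loss_obj(y_true, logits).shape)  # (2,)
```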
-
- 09 Apr, 2021 3 commits
-
-
Reed Wanderman-Milne authored
All models that support loss scaling support dynamic loss scaling, so the argument has no purpose. It used to be that some models scaled the loss manually instead of using a LossScaleOptimizer, and therefore did not support dynamic loss scaling.
PiperOrigin-RevId: 367719521
-
Fan Yang authored
PiperOrigin-RevId: 367679732
-
Hongkun Yu authored
PiperOrigin-RevId: 367564187
-
- 08 Apr, 2021 1 commit
-
-
Fan Yang authored
PiperOrigin-RevId: 367522154
-