- 23 Nov, 2021 1 commit

  Frederick Liu authored
  PiperOrigin-RevId: 411729044

- 10 Mar, 2021 1 commit

  Frederick Liu authored
  PiperOrigin-RevId: 361957289

- 17 Feb, 2021 1 commit

  Hanhan Li authored
  PiperOrigin-RevId: 357974971

- 04 Nov, 2020 1 commit

  Hongkun Yu authored
  PiperOrigin-RevId: 340580428

- 26 Oct, 2020 1 commit

  Hongkun Yu authored
  PiperOrigin-RevId: 339071563

- 24 Aug, 2020 2 commits

  Hongkun Yu authored
  Keras model serialization causes a lot of problems.
  PiperOrigin-RevId: 328162551

  Hongkun Yu authored
  Keras model serialization causes a lot of problems.
  PiperOrigin-RevId: 328162551

- 12 Aug, 2020 2 commits

  Hongkun Yu authored
  PiperOrigin-RevId: 326286926

  Hongkun Yu authored
  PiperOrigin-RevId: 326286926

- 11 Aug, 2020 1 commit

  xinliupitt authored

- 08 Aug, 2020 2 commits

  xinliupitt authored

  xinliupitt authored

- 17 Jul, 2020 2 commits

  Hongkun Yu authored
  PiperOrigin-RevId: 321817352

  Hongkun Yu authored
  PiperOrigin-RevId: 321817352

- 08 Jul, 2020 2 commits

  Hongkun Yu authored
  PiperOrigin-RevId: 320124801

  Hongkun Yu authored
  PiperOrigin-RevId: 320124801

- 03 Jun, 2020 4 commits

  Hongkun Yu authored
  This reverts commit 4bb13e61.

  Hongkun Yu authored
  This reverts commit c3c2386c.

  xinliupitt authored
  * root dir
  * zone updated
  * print mask
  * preview emb
  * tf print
  * input only
  * emb
  * tf print
  * emb after mask
  * masked_softmax print
  * print scores
  * multi folder
  * first pos emb
  * check input shape
  * add test temp
  * import math
  * two classes
  * prints
  * all get_pos replace
  * make time scale private
  * pos emb comments
  * print input
  * embedding_inputs
  * tf shape
  * dimention list
  * tf_util
  * print tf_util
  * concise
  * transformer pos change to layer
  * keep length var
  * length as input
  * None as input
  * print time signal
  * print time signal
  * remove print
  * test input shape
  * double check shape
  * double check shape
  * double check shape
  * more test
  * shape check
  * shape check
  * print 97 info
  * print 97 info new
  * test if sam
  * assert same
  * remove assert
  * tf print same
  * tf print diff
  * output example
  * output example
  * output example
  * formal test
  * formal test length
  * raise valurerror
  * test valurerror
  * double check
  * comments
  * remove prints
  * rename relative
  * delet naive test
  * delete docs in xinliu branch
  * code reformat
  * import order
  * indentation fix
  * more files
  * adjust char number
  * disable not callable
  * comment to length
  * error of length unequal to input_shape
  * root dir
  * zone updated
  * print mask
  * preview emb
  * tf print
  * input only
  * emb
  * tf print
  * emb after mask
  * masked_softmax print
  * print scores
  * multi folder
  * remove docs
  * remove prints
  * root dir
  * zone updated
  * print mask
  * preview emb
  * tf print
  * input only
  * emb
  * tf print
  * emb after mask
  * masked_softmax print
  * print scores
  * multi folder
  * remove docs
  * apply revised 3 files
  * rm prints

  Tianqi Liu authored
  PiperOrigin-RevId: 314451720

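The positional-embedding work in the xinliupitt commits above (moving the Transformer position signal into its own layer, taking `length` as an input) is built on the standard sinusoidal timing signal. Below is a minimal NumPy sketch of that signal, assuming the formulation from the original Transformer paper; the function name `get_timing_signal` echoes the `get_pos`/time-signal names in the commit messages but is otherwise hypothetical, as are the defaults:

```python
import numpy as np

def get_timing_signal(length, hidden_size, min_timescale=1.0, max_timescale=1.0e4):
    """Sinusoidal position signal of shape [length, hidden_size].

    Hypothetical re-implementation of the timing-signal idea referenced
    in the commits above; names and defaults are assumptions.
    """
    position = np.arange(length, dtype=np.float32)
    num_timescales = hidden_size // 2
    log_timescale_increment = (
        np.log(max_timescale / min_timescale) / max(num_timescales - 1, 1))
    inv_timescales = min_timescale * np.exp(
        np.arange(num_timescales, dtype=np.float32) * -log_timescale_increment)
    # Outer product of positions and inverse timescales: [length, num_timescales]
    scaled_time = position[:, None] * inv_timescales[None, :]
    # Concatenate sin and cos halves -> [length, hidden_size]
    return np.concatenate([np.sin(scaled_time), np.cos(scaled_time)], axis=1)

signal = get_timing_signal(length=50, hidden_size=64)
```

Making `length` an explicit input (rather than reading it from the input tensor's static shape) is what lets the layer handle inputs whose sequence length is only known at run time, which matches the "length as input" / "None as input" checkpoints in the commit message.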
- 02 Jun, 2020 1 commit

  xinliupitt authored
  * root dir
  * zone updated
  * print mask
  * preview emb
  * tf print
  * input only
  * emb
  * tf print
  * emb after mask
  * masked_softmax print
  * print scores
  * multi folder
  * first pos emb
  * check input shape
  * add test temp
  * import math
  * two classes
  * prints
  * all get_pos replace
  * make time scale private
  * pos emb comments
  * print input
  * embedding_inputs
  * tf shape
  * dimention list
  * tf_util
  * print tf_util
  * concise
  * transformer pos change to layer
  * keep length var
  * length as input
  * None as input
  * print time signal
  * print time signal
  * remove print
  * test input shape
  * double check shape
  * double check shape
  * double check shape
  * more test
  * shape check
  * shape check
  * print 97 info
  * print 97 info new
  * test if sam
  * assert same
  * remove assert
  * tf print same
  * tf print diff
  * output example
  * output example
  * output example
  * formal test
  * formal test length
  * raise valurerror
  * test valurerror
  * double check
  * comments
  * remove prints
  * rename relative
  * delet naive test
  * delete docs in xinliu branch
  * code reformat
  * import order
  * indentation fix
  * more files
  * adjust char number
  * disable not callable
  * comment to length
  * error of length unequal to input_shape

- 13 Feb, 2020 1 commit

  Hongkun Yu authored
  PiperOrigin-RevId: 294997928

- 19 Dec, 2019 1 commit

  Hongkun Yu authored
  PiperOrigin-RevId: 286325224

- 02 Dec, 2019 1 commit

  Hongkun Yu authored
  PiperOrigin-RevId: 283266705

- 22 Nov, 2019 1 commit

  Hongkun Yu authored
  PiperOrigin-RevId: 281872406

- 09 Oct, 2019 1 commit

  Reed Wanderman-Milne authored
  Instead of needing to ensure variables are float32, cast inputs to float32, etc., dtype="float32" is passed to the layer constructor, which handles all that logic automatically. The only difference is that the output of LayerNorm is now float32 instead of float16, so an extra cast is needed elsewhere.
  PiperOrigin-RevId: 273833286

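The commit above keeps layer normalization in float32 even when the rest of the model runs in float16, then casts the result back down. A minimal NumPy sketch of why that works (the function name and shapes are illustrative assumptions, not the repository's actual code):

```python
import numpy as np

def layer_norm_fp32(x_fp16, gamma, beta, eps=1e-6):
    """Layer normalization computed in float32 on float16 inputs.

    Illustrative sketch of the commit's idea: keep the normalization
    math and its parameters in float32 for numerical stability, and
    let the caller cast the float32 output back to float16 if needed.
    """
    x = x_fp16.astype(np.float32)           # cast inputs up, as the layer would
    mean = x.mean(axis=-1, keepdims=True)
    var = x.var(axis=-1, keepdims=True)
    normed = (x - mean) / np.sqrt(var + eps)
    return normed * gamma + beta            # float32 output

x = np.random.randn(2, 8).astype(np.float16)
out = layer_norm_fp32(x, gamma=np.ones(8, np.float32), beta=np.zeros(8, np.float32))
back = out.astype(np.float16)               # the "extra cast needed elsewhere"
```

Computing the mean/variance in float16 risks overflow and precision loss; doing the reduction in float32 and paying for one extra cast on the output is the trade the commit describes.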
- 07 Oct, 2019 1 commit

  A. Unique TensorFlower authored
  PiperOrigin-RevId: 273371605

- 05 Sep, 2019 1 commit

  A. Unique TensorFlower authored
  PiperOrigin-RevId: 267435985

- 22 Aug, 2019 1 commit

  A. Unique TensorFlower authored
  PiperOrigin-RevId: 264853703

- 21 Aug, 2019 1 commit

  Reed authored

- 20 Aug, 2019 1 commit

  Reed authored
  The old infer_float32_policies policy will be removed from TensorFlow soon.

- 08 Aug, 2019 1 commit

  Reed authored
  Also, do Transformer inference in fp16, as well as training, when --dtype=fp16. In TF 2, layers now cannot run in multiple different dtypes, so we must use the same dtype for training and inference.

- 24 Jul, 2019 1 commit

  guptapriya authored

- 21 Jun, 2019 1 commit

  guptapriya authored
  * trying fake merge call
  * make metrics optional
  * Remove extra print

- 19 Jun, 2019 1 commit

  Reed authored

- 28 May, 2019 1 commit

  Igor authored
  * Fixes that make transformer run.
  * Remove debug print statements.
  * Changed the permissions to 644.
  * Fix the rest of the permissions.
  * enable static batch in all benchmarks
  * Restrict dist strat hack to training mode
    For now we will do predict/eval without dist strat, so remove that hack in non training cases.
  * Use `inputs` instead of `x` as arg name for call
    Keras has different behavior based on whether the inputs are called `inputs` or not. Using `inputs` gives expected behaviors.
  * Avoid extra map fn on input in dist strat case
  * Update how we handle custom metrics
    This new approach works with and without dist strat. The previous one didn't work with dist strat. We need to fix that but this is reasonable in meantime (b/133724664).
  * Update benchmarks
  * typo in metrics code
  * Revert metrics change
    Didn't actually work in distributed case..

- 24 May, 2019 1 commit

  Tian Lin authored
  * Merged commit includes the following changes:
    249776315 by tianlin<tianlin@google.com>: Internal change
    249763206 by tianlin<tianlin@google.com>: For TF 2.0 (related to Beam Search), expand cond dims in tf.where(cond, x, y) to make all parameters broadcastable.
    --
    249392724 by hongkuny<hongkuny@google.com>: Internal change
    PiperOrigin-RevId: 249776315
  * Merged commit includes the following changes:
    249823043 by tianlin<tianlin@google.com>: Bring back v2 test for predict and eval.
    --
    PiperOrigin-RevId: 249823043

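The beam-search change above expands the dimensions of `cond` so that all arguments to `tf.where(cond, x, y)` line up. A small NumPy sketch of the same idea (`np.where` has analogous elementwise semantics; the names and shapes below are illustrative assumptions, not the repository's actual beam-search code):

```python
import numpy as np

# Beam-search-style selection: a per-row condition choosing between two
# full score tensors. Shapes here are illustrative assumptions.
finished = np.array([True, False])             # cond: [batch]
finished_scores = np.full((2, 4), -1e9)        # x: [batch, vocab]
running_scores = np.arange(8.0).reshape(2, 4)  # y: [batch, vocab]

# Expand cond from [batch] to [batch, 1] so it broadcasts against x and y;
# this mirrors the fix described in the commit above, since the TF 1 form
# of tf.where required the condition's shape to match the other arguments.
cond = finished[:, None]
scores = np.where(cond, finished_scores, running_scores)
```

Row 0 (finished) takes every value from `finished_scores`, row 1 from `running_scores`; without the expansion, a rank-1 condition against rank-2 arguments would be rejected or misinterpreted.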
- 22 May, 2019 1 commit

  Tian Lin authored
  * Merged commit includes the following changes:
    249218656 by tianlin<tianlin@google.com>: Deal with imports, fix a typo and make unit tests fast.
    --
    249198645 by tianlin<tianlin@google.com>: Trivial: Remove one empty line before "import tensorflow"
    --
    249195490 by tianlin<tianlin@google.com>: Initialize Transformer TF V2 Model with Keras subclassing implementation. (Compatible with TF V1)
    --
    249195008 by tianlin<tianlin@google.com>: Internal change
    249173564 by hongkuny<hongkuny@google.com>: Internal change
    249079258 by hongkuny<hongkuny@google.com>: Internal change
    247691534 by haoyuzhang<haoyuzhang@google.com>: Internal change
    247533725 by haoyuzhang<haoyuzhang@google.com>: Internal change
    247509295 by haoyuzhang<haoyuzhang@google.com>: Internal change
    247311355 by wangtz<wangtz@google.com>: Internal change
    247303127 by wangtz<wangtz@google.com>: ...

- 04 Jun, 2018 1 commit

  Taylor Robie authored
  * port changes from previous branch now that transformer util changes are in master
    fix incorrect count
    correct (hopefully) treatment of batch_size
    set eval_metrics to a dummy function for now
    add some comments
    start bringing metrics to transformer TPU
    resolve logits shape
    metrics are now working except for tf.py_func metrics
    increase batch_size for tpu, and create summary host call
    fix host call
    reduce tpu default batch size further
    tune batch sizes
    add minibatch loss to summary
    handle case of single_iteration_train_steps > number points in an epoch
    begin to incorporate hooks
    add sleep workarounds
    disable hooks altogether
    generalize host call function and move to newly created tpu utils module
    remove all traces of params as an object
    switch from to address some PR comments, and change the number of data points.
    minor tweaks
    add tpu dry run for testing, and use matmul for TPU embedding
    infeed/outfeed queue issue is fixed. Sleeps are no longer necessary
    add some documentation.
    cleanup and address PR comments
    delint
    add accelerator __init__
    fix embedding
    missed PR comment
    address PR comments
    fix validator bug
    rewrite cloud storage validator, and add oauth dependency to requirements.txt
  * delint

- 02 May, 2018 1 commit

  Katherine Wu authored