- 23 Nov, 2021 1 commit
  - Frederick Liu authored
    PiperOrigin-RevId: 411729044
- 10 Mar, 2021 2 commits
  - Frederick Liu authored
    PiperOrigin-RevId: 361957289
  - Frederick Liu authored
    PiperOrigin-RevId: 361957289
- 17 Feb, 2021 2 commits
- 04 Nov, 2020 2 commits
  - Hongkun Yu authored
    PiperOrigin-RevId: 340580428
  - Hongkun Yu authored
    PiperOrigin-RevId: 340580428
- 26 Oct, 2020 2 commits
  - Hongkun Yu authored
    PiperOrigin-RevId: 339071563
  - Hongkun Yu authored
    PiperOrigin-RevId: 339071563
- 24 Aug, 2020 2 commits
  - Hongkun Yu authored
    Keras model serialization causes a lot of problems.
    PiperOrigin-RevId: 328162551
  - Hongkun Yu authored
    Keras model serialization causes a lot of problems.
    PiperOrigin-RevId: 328162551
- 12 Aug, 2020 2 commits
  - Hongkun Yu authored
    PiperOrigin-RevId: 326286926
  - Hongkun Yu authored
    PiperOrigin-RevId: 326286926
- 11 Aug, 2020 1 commit
  - xinliupitt authored
- 08 Aug, 2020 2 commits
  - xinliupitt authored
  - xinliupitt authored
- 17 Jul, 2020 2 commits
  - Hongkun Yu authored
    PiperOrigin-RevId: 321817352
  - Hongkun Yu authored
    PiperOrigin-RevId: 321817352
- 08 Jul, 2020 2 commits
  - Hongkun Yu authored
    PiperOrigin-RevId: 320124801
  - Hongkun Yu authored
    PiperOrigin-RevId: 320124801
- 03 Jun, 2020 4 commits
  - Hongkun Yu authored
    This reverts commit 4bb13e61.
  - Hongkun Yu authored
    This reverts commit c3c2386c.
  - xinliupitt authored
    * root dir * zone updated * print mask * preview emb * tf print * input only * emb * tf print * emb after mask * masked_softmax print * print scores * multi folder * first pos emb * check input shape * add test temp * import math * two classes * prints * all get_pos replace * make time scale private * pos emb comments * print input * embedding_inputs * tf shape * dimension list * tf_util * print tf_util * concise * transformer pos change to layer * keep length var * length as input * None as input * print time signal * print time signal * remove print * test input shape * double check shape * double check shape * double check shape * more test * shape check * shape check * print 97 info * print 97 info new * test if same * assert same * remove assert * tf print same * tf print diff * output example * output example * output example * formal test * formal test length * raise ValueError * test ValueError * double check * comments * remove prints * rename relative * delete naive test * delete docs in xinliu branch * code reformat * import order * indentation fix * more files * adjust char number * disable not callable * comment to length * error of length unequal to input_shape * root dir * zone updated * print mask * preview emb * tf print * input only * emb * tf print * emb after mask * masked_softmax print * print scores * multi folder * remove docs * remove prints * root dir * zone updated * print mask * preview emb * tf print * input only * emb * tf print * emb after mask * masked_softmax print * print scores * multi folder * remove docs * apply revised 3 files * rm prints
  - Tianqi Liu authored
    PiperOrigin-RevId: 314451720
- 02 Jun, 2020 1 commit
  - xinliupitt authored
    * root dir * zone updated * print mask * preview emb * tf print * input only * emb * tf print * emb after mask * masked_softmax print * print scores * multi folder * first pos emb * check input shape * add test temp * import math * two classes * prints * all get_pos replace * make time scale private * pos emb comments * print input * embedding_inputs * tf shape * dimension list * tf_util * print tf_util * concise * transformer pos change to layer * keep length var * length as input * None as input * print time signal * print time signal * remove print * test input shape * double check shape * double check shape * double check shape * more test * shape check * shape check * print 97 info * print 97 info new * test if same * assert same * remove assert * tf print same * tf print diff * output example * output example * output example * formal test * formal test length * raise ValueError * test ValueError * double check * comments * remove prints * rename relative * delete naive test * delete docs in xinliu branch * code reformat * import order * indentation fix * more files * adjust char number * disable not callable * comment to length * error of length unequal to input_shape
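The squashed messages above trace the conversion of the Transformer's positional encoding into a standalone Keras layer that can take either an input tensor or an explicit length, and that raises a ValueError when the two disagree. Below is a minimal sketch of that idea; the class name `SinusoidalPositionEmbedding` and its arguments are hypothetical illustrations, not the layer that actually landed in the repository.

```python
import math
import tensorflow as tf


class SinusoidalPositionEmbedding(tf.keras.layers.Layer):
  """Hypothetical sketch: sinusoidal position signal with an optional explicit length."""

  def __init__(self, hidden_size, min_timescale=1.0, max_timescale=1.0e4, **kwargs):
    super().__init__(**kwargs)
    self._hidden_size = hidden_size
    self._min_timescale = min_timescale
    self._max_timescale = max_timescale

  def call(self, inputs=None, length=None):
    # "length as input": accept a tensor, an explicit length, or both.
    if inputs is None and length is None:
      raise ValueError("Either `inputs` or `length` must be provided.")
    if inputs is not None:
      static_length = inputs.shape[1]
      if length is not None and static_length is not None and static_length != length:
        # "error of length unequal to input_shape"
        raise ValueError("`length` must match the sequence dimension of `inputs`.")
      length = tf.shape(inputs)[1]
    position = tf.cast(tf.range(length), tf.float32)
    num_timescales = self._hidden_size // 2
    log_timescale_increment = (
        math.log(self._max_timescale / self._min_timescale) /
        (float(num_timescales) - 1.0))
    inv_timescales = self._min_timescale * tf.exp(
        tf.cast(tf.range(num_timescales), tf.float32) * -log_timescale_increment)
    scaled_time = position[:, tf.newaxis] * inv_timescales[tf.newaxis, :]
    # Returns a [length, hidden_size] signal that broadcasts over the batch.
    return tf.concat([tf.sin(scaled_time), tf.cos(scaled_time)], axis=1)


# Example usage: add position information to a batch of embeddings.
embeddings = tf.random.uniform((2, 10, 64))
pos_signal = SinusoidalPositionEmbedding(hidden_size=64)(embeddings)
outputs = embeddings + pos_signal  # broadcasts over the batch dimension
```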
- 13 Feb, 2020 1 commit
  - Hongkun Yu authored
    PiperOrigin-RevId: 294997928
- 19 Dec, 2019 1 commit
  - Hongkun Yu authored
    PiperOrigin-RevId: 286325224
- 02 Dec, 2019 1 commit
  - Hongkun Yu authored
    PiperOrigin-RevId: 283266705
- 22 Nov, 2019 1 commit
  - Hongkun Yu authored
    PiperOrigin-RevId: 281872406
- 09 Oct, 2019 1 commit
  - Reed Wanderman-Milne authored
    Instead of manually ensuring variables are float32, casting inputs to float32, and so on, dtype="float32" is now passed to the layer constructor, which handles all of that logic automatically. The only difference is that the output of LayerNorm is now float32 instead of float16, so an extra cast is needed elsewhere.
    PiperOrigin-RevId: 273833286
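The commit above describes forcing a numerically sensitive layer to float32 by passing dtype to its constructor rather than hand-casting variables and inputs. A minimal sketch of that pattern, assuming the TF 2.4+ mixed-precision API; the surrounding toy model is illustrative, not the code from the commit:

```python
import tensorflow as tf

# Global policy: float16 compute with float32 variables (TF 2.4+ API).
tf.keras.mixed_precision.set_global_policy("mixed_float16")

inputs = tf.keras.Input(shape=(128, 64))
x = tf.keras.layers.Dense(64)(inputs)  # computes in float16 under the global policy
# Passing dtype="float32" to the constructor keeps this one layer's variables
# and computation in float32 and casts its inputs up automatically, instead of
# the caller hand-casting everything.
x = tf.keras.layers.LayerNormalization(dtype="float32")(x)
# The layer's output is now float32; this explicit cast mirrors the "extra
# cast" the commit mentions (later Keras layers would also auto-cast inputs to
# their own float16 compute dtype).
x = tf.cast(x, tf.float16)
outputs = tf.keras.layers.Dense(1)(x)
model = tf.keras.Model(inputs, outputs)
```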
- 07 Oct, 2019 1 commit
  - A. Unique TensorFlower authored
    PiperOrigin-RevId: 273371605
- 05 Sep, 2019 1 commit
  - A. Unique TensorFlower authored
    PiperOrigin-RevId: 267435985
- 22 Aug, 2019 1 commit
  - A. Unique TensorFlower authored
    PiperOrigin-RevId: 264853703
- 21 Aug, 2019 1 commit
  - Reed authored
- 20 Aug, 2019 1 commit
  - Reed authored
    The old infer_float32_policies policy will be removed from TensorFlow soon.
- 08 Aug, 2019 1 commit
  - Reed authored
    Also run Transformer inference in fp16, not just training, when --dtype=fp16. In TF 2, layers can no longer run in multiple different dtypes, so the same dtype must be used for training and inference.
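The note above reflects a TF 2 constraint: once a layer is built under a given dtype policy it cannot later run in a different dtype, so one policy serves both training and inference. A minimal sketch of that usage, assuming the TF 2.4+ mixed-precision API and a toy model; the translation of --dtype=fp16 into a mixed_float16 policy is an illustrative assumption:

```python
import numpy as np
import tensorflow as tf

# Rough stand-in for --dtype=fp16: choose the dtype policy once, before the
# model is built, and keep it for both training and inference.
tf.keras.mixed_precision.set_global_policy("mixed_float16")

model = tf.keras.Sequential([
    tf.keras.layers.Dense(32, activation="relu", input_shape=(16,)),
    tf.keras.layers.Dense(1),
])
# Under mixed_float16, compile() wraps the optimizer in a loss-scale optimizer.
model.compile(optimizer="adam", loss="mse")

x = np.random.rand(256, 16).astype("float32")
y = np.random.rand(256, 1).astype("float32")

# Same model, same dtype policy for fit and predict: a built layer cannot be
# re-run under a different dtype, so train and infer with the same policy.
model.fit(x, y, epochs=1, batch_size=32, verbose=0)
preds = model.predict(x[:8], verbose=0)
```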
- 24 Jul, 2019 1 commit
  - guptapriya authored
- 21 Jun, 2019 1 commit
  - guptapriya authored
    * trying fake merge call
    * make metrics optional
    * Remove extra print
- 19 Jun, 2019 1 commit
  - Reed authored
- 28 May, 2019 1 commit
  - Igor authored
    * Fixes that make transformer run.
    * Remove debug print statements.
    * Changed the permissions to 644.
    * Fix the rest of the permissions.
    * Enable static batch in all benchmarks.
    * Restrict dist strat hack to training mode. For now we will do predict/eval without dist strat, so remove that hack in non-training cases.
    * Use `inputs` instead of `x` as the arg name for call. Keras has different behavior based on whether the inputs are called `inputs` or not; using `inputs` gives the expected behaviors.
    * Avoid extra map fn on input in the dist strat case.
    * Update how we handle custom metrics. This new approach works with and without dist strat. The previous one didn't work with dist strat. We need to fix that, but this is reasonable in the meantime (b/133724664).
    * Update benchmarks.
    * Fix typo in metrics code.
    * Revert metrics change. Didn't actually work in the distributed case.
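The "Use `inputs` instead of `x` as the arg name for call" item above refers to Keras treating the first argument of a layer or model's call method as the data argument; naming it `inputs` keeps the framework's input handling on the expected path. A minimal sketch of the convention, using an illustrative subclassed model rather than the Transformer from the commit:

```python
import tensorflow as tf


class TinyClassifier(tf.keras.Model):
  """Illustrative subclassed model following the `inputs` naming convention."""

  def __init__(self, num_classes=2):
    super().__init__()
    self.dense = tf.keras.layers.Dense(64, activation="relu")
    self.out = tf.keras.layers.Dense(num_classes)

  # Naming the first argument `inputs` (rather than `x`) matches the name
  # Keras expects for the data argument of call().
  def call(self, inputs, training=None):
    hidden = self.dense(inputs)
    return self.out(hidden)


model = TinyClassifier()
logits = model(tf.random.uniform((4, 16)))  # builds and runs the model
```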