- 03 Jun, 2020 5 commits
-
-
Hongkun Yu authored
This reverts commit 4bb13e61.
-
Hongkun Yu authored
This reverts commit c3c2386c.
-
xinliupitt authored
* root dir * zone updated * print mask * preview emb * tf print * input only * emb * tf print * emb after mask * masked_softmax print * print scores * multi folder * first pos emb * check input shape * add test temp * import math * two classes * prints * all get_pos replace * make time scale private * pos emb comments * print input * embedding_inputs * tf shape * dimension list * tf_util * print tf_util * concise * transformer pos change to layer * keep length var * length as input * None as input * print time signal * print time signal * remove print * test input shape * double check shape * double check shape * double check shape * more test * shape check * shape check * print 97 info * print 97 info new * test if same * assert same * remove assert * tf print same * tf print diff * output example * output example * output example * formal test * formal test length * raise ValueError * test ValueError * double check * comments * remove prints * rename relative * delete naive test * delete docs in xinliu branch * code reformat * import order * indentation fix * more files * adjust char number * disable not callable * comment to length * error of length unequal to input_shape * root dir * zone updated * print mask * preview emb * tf print * input only * emb * tf print * emb after mask * masked_softmax print * print scores * multi folder * remove docs * remove prints * root dir * zone updated * print mask * preview emb * tf print * input only * emb * tf print * emb after mask * masked_softmax print * print scores * multi folder * remove docs * apply revised 3 files * rm prints
-
Maxim Neumann authored
PiperOrigin-RevId: 314486753
-
Tianqi Liu authored
PiperOrigin-RevId: 314451720
-
- 02 Jun, 2020 2 commits
-
-
xinliupitt authored
* root dir * zone updated * print mask * preview emb * tf print * input only * emb * tf print * emb after mask * masked_softmax print * print scores * multi folder * first pos emb * check input shape * add test temp * import math * two classes * prints * all get_pos replace * make time scale private * pos emb comments * print input * embedding_inputs * tf shape * dimension list * tf_util * print tf_util * concise * transformer pos change to layer * keep length var * length as input * None as input * print time signal * print time signal * remove print * test input shape * double check shape * double check shape * double check shape * more test * shape check * shape check * print 97 info * print 97 info new * test if same * assert same * remove assert * tf print same * tf print diff * output example * output example * output example * formal test * formal test length * raise ValueError * test ValueError * double check * comments * remove prints * rename relative * delete naive test * delete docs in xinliu branch * code reformat * import order * indentation fix * more files * adjust char number * disable not callable * comment to length * error of length unequal to input_shape
-
Chen Chen authored
PiperOrigin-RevId: 314373769
-
- 30 May, 2020 1 commit
-
-
Hongkun Yu authored
PiperOrigin-RevId: 313906815
-
- 29 May, 2020 2 commits
-
-
Hongkun Yu authored
Proposes the full functionality of the MultiHeadAttention layer. This change goes first into the model garden NLP library. PiperOrigin-RevId: 313847485
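A minimal usage sketch of such an attention layer, assuming a Keras-style API like the one that later shipped as tf.keras.layers.MultiHeadAttention (shapes are illustrative; the model garden constructor at the time may have differed):

```python
import tensorflow as tf

layer = tf.keras.layers.MultiHeadAttention(num_heads=8, key_dim=64)
query = tf.random.normal([2, 16, 512])    # (batch, target_len, dim)
value = tf.random.normal([2, 32, 512])    # (batch, source_len, dim)
output = layer(query=query, value=value)  # attends each query position over value
print(output.shape)                       # (2, 16, 512): projected back to query dim
```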
-
Chen Chen authored
PiperOrigin-RevId: 313812017
-
- 28 May, 2020 1 commit
-
-
Reed Wanderman-Milne authored
Float32 is used if the model uses mixed precision with bfloat16; float16 activations are unchanged. The motivation is that BERT with the LAMB optimizer and a gelu activation has an unstable loss when gelu runs in bfloat16. Unfortunately, it is not easy to check whether the LAMB optimizer and gelu are used, and there may be other cases that work better with float32 activations instead of bfloat16 activations, so we always compute the activation in float32 instead of bfloat16. PiperOrigin-RevId: 313618322
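A minimal sketch of the pattern described above, as a standalone helper (an illustration of the idea, not the commit's actual code):

```python
import tensorflow as tf

def gelu_float32(x):
  """Compute gelu in float32 even under bfloat16 mixed precision.

  The numerically sensitive activation runs in float32, then the result is
  cast back to the input's compute dtype.
  """
  y = tf.keras.activations.gelu(tf.cast(x, tf.float32))
  return tf.cast(y, x.dtype)

# Example: bfloat16 activations pass through a float32 gelu.
x = tf.cast(tf.random.normal([2, 4]), tf.bfloat16)
print(gelu_float32(x).dtype)  # bfloat16 output, computed internally in float32
```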
-
- 21 May, 2020 2 commits
-
-
A. Unique TensorFlower authored
PiperOrigin-RevId: 312751112
-
Hongkun Yu authored
Transformer Encoder: when the embedding width differs from the hidden size, add a projection to the hidden size. PiperOrigin-RevId: 312708922
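A minimal sketch of that projection logic, assuming a plain Dense layer and the name "embedding_projection" (both are illustrative assumptions):

```python
import tensorflow as tf

# With factorized embeddings (e.g., ALBERT-style), the embedding width can be
# smaller than the Transformer hidden size.
embedding_width, hidden_size = 128, 768
embeddings = tf.random.normal([2, 16, embedding_width])

if embedding_width != hidden_size:
  # Project the narrower embeddings up to the hidden size the encoder expects.
  embeddings = tf.keras.layers.Dense(
      hidden_size, name="embedding_projection")(embeddings)

print(embeddings.shape)  # (2, 16, 768)
```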
-
- 19 May, 2020 1 commit
-
-
Chen Chen authored
PiperOrigin-RevId: 312366167
-
- 18 May, 2020 1 commit
-
-
Chen Chen authored
PiperOrigin-RevId: 312116965
-
- 17 May, 2020 1 commit
-
-
A. Unique TensorFlower authored
Update nlp.modeling.layers.ReZeroTransformer to have the same interface as nlp.modeling.layers.Transformer. PiperOrigin-RevId: 311937563
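For context, a minimal sketch of the ReZero residual idea the layer implements (not the model garden implementation): the sublayer output is scaled by a learnable alpha initialized to zero, so each block starts out as the identity.

```python
import tensorflow as tf

class ReZeroResidual(tf.keras.layers.Layer):
  """Illustrative ReZero residual: y = x + alpha * f(x), with alpha init 0."""

  def __init__(self, sublayer, **kwargs):
    super().__init__(**kwargs)
    self.sublayer = sublayer
    # Learnable scalar gate, initialized to zero so training starts stable.
    self.alpha = self.add_weight(name="alpha", shape=[], initializer="zeros")

  def call(self, x):
    return x + self.alpha * self.sublayer(x)
```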
-
- 15 May, 2020 1 commit
-
-
Hongkun Yu authored
PiperOrigin-RevId: 311773503
-
- 14 May, 2020 2 commits
-
-
Chen Chen authored
PiperOrigin-RevId: 311597242
-
Jeremiah Harmsen authored
Add a network and BERT model to perform per-token classification (e.g., for named entity recognition tasks). PiperOrigin-RevId: 311480326
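A minimal sketch of what a per-token classification head looks like, with assumed names and shapes (not the model garden network itself):

```python
import tensorflow as tf

# A Dense layer applied to every position of the encoder's sequence output,
# e.g. for NER tagging.
num_classes = 9                                    # e.g., a small NER tag set
sequence_output = tf.random.normal([2, 16, 768])   # (batch, seq_len, hidden)
logits = tf.keras.layers.Dense(num_classes)(sequence_output)
print(logits.shape)  # (2, 16, 9): one class distribution per token
```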
-
- 13 May, 2020 2 commits
-
-
Chen Chen authored
PiperOrigin-RevId: 311428193
-
Scott Zhu authored
The intention of this change is to reduce code complexity within the Keras class hierarchy, especially for Network, which currently contains logic for both subclass models and functional models. After this change, the subclass model and the functional model become individual, self-contained classes.
1. Model is now the base class for subclass models. It doesn't contain network-structure management; the topology is created within __init__ and __call__, which the user implements. It also contains compile/fit/eval/predict, the basic functionality for model training.
2. Functional is created from the existing Network class. It extends Model, which lets it leverage compile/fit/eval/predict. In addition, it takes inputs/outputs as init parameters and manages the network topology.
3. Sequential is now a subclass of Functional, since it uses Functional's methods to manage its topology (layer stacking).
Model(inputs, outputs) will create a Functional under the hood and behave the same way as before. PiperOrigin-RevId: 311232972
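A minimal sketch of the two user-facing styles described above:

```python
import tensorflow as tf

# Functional: tf.keras.Model(inputs, outputs) builds a Functional model under
# the hood; the topology is managed from the inputs/outputs graph.
inputs = tf.keras.Input(shape=(4,))
outputs = tf.keras.layers.Dense(1)(inputs)
functional_model = tf.keras.Model(inputs, outputs)

# Subclass: the topology is created in user code via __init__ and call.
class SubclassedModel(tf.keras.Model):
  def __init__(self):
    super().__init__()
    self.dense = tf.keras.layers.Dense(1)

  def call(self, x):
    return self.dense(x)

subclassed_model = SubclassedModel()
```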
-
- 12 May, 2020 3 commits
-
-
A. Unique TensorFlower authored
PiperOrigin-RevId: 311196489
-
Hongkun Yu authored
PiperOrigin-RevId: 311165658
-
Chen Chen authored
PiperOrigin-RevId: 311072125
-
- 10 May, 2020 1 commit
-
-
Hongkun Yu authored
PiperOrigin-RevId: 310767440
-
- 09 May, 2020 1 commit
-
-
A. Unique TensorFlower authored
Add an interface to nlp.modeling.networks.encoder_scaffold.EncoderScaffold and nlp.modeling.networks.transformer_encoder.TransformerEncoder to get a reference to the pooler Dense layer. PiperOrigin-RevId: 310675150
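For context, a generic sketch of what the pooler is (an illustration, not the model garden API): a Dense layer applied to the first ([CLS]) token of the sequence output. The new interface exposes a reference to that layer.

```python
import tensorflow as tf

sequence_output = tf.random.normal([2, 16, 768])   # (batch, seq_len, hidden)
pooler = tf.keras.layers.Dense(768, activation="tanh", name="pooler")
pooled_output = pooler(sequence_output[:, 0, :])   # (2, 768)
# Holding a reference to `pooler` lets callers inspect or reuse its weights.
```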
-
- 08 May, 2020 1 commit
-
-
A. Unique TensorFlower authored
Add interfaces to nlp.modeling.networks.encoder_scaffold.EncoderScaffold, allowing it to output references to all hidden layers and all intermediate outputs. PiperOrigin-RevId: 310509202
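A generic sketch of the idea (not the EncoderScaffold API): keep a reference to each block and collect every intermediate output.

```python
import tensorflow as tf

# Three stand-in blocks; in the encoder these would be Transformer layers.
blocks = [tf.keras.layers.Dense(8, activation="relu") for _ in range(3)]
x = tf.random.normal([2, 8])

all_outputs = []
for block in blocks:
  x = block(x)
  all_outputs.append(x)   # one intermediate output reference per layer

print(len(all_outputs), [o.shape for o in all_outputs])
```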
-
- 05 May, 2020 1 commit
-
-
Hongkun Yu authored
PiperOrigin-RevId: 310032518
-
- 25 Apr, 2020 1 commit
-
-
Sergey Mironov authored
-
- 21 Apr, 2020 1 commit
-
-
Hongkun Yu authored
PiperOrigin-RevId: 307689094
-
- 20 Apr, 2020 2 commits
-
-
Chen Chen authored
PiperOrigin-RevId: 307500045
-
A. Unique TensorFlower authored
PiperOrigin-RevId: 307425903
-
- 19 Apr, 2020 1 commit
-
-
Le Hou authored
PiperOrigin-RevId: 307297217
-
- 17 Apr, 2020 4 commits
- 15 Apr, 2020 1 commit
-
-
Hongkun Yu authored
PiperOrigin-RevId: 306748161
-
- 13 Apr, 2020 1 commit
-
-
Chen Chen authored
PiperOrigin-RevId: 306182576
-
- 08 Apr, 2020 1 commit
-
-
Chen Chen authored
Rename arguments: num_output_classes -> pooled_output_dim, classification_layer_initializer -> pooler_layer_initializer. PiperOrigin-RevId: 305550415
-