1. 21 Jul, 2020 2 commits
  2. 20 Jul, 2020 1 commit
  3. 08 Jul, 2020 1 commit
  4. 25 Jun, 2020 1 commit
  5. 22 Jun, 2020 1 commit
  6. 19 Jun, 2020 1 commit
  7. 16 Jun, 2020 1 commit
  8. 12 Jun, 2020 2 commits
  9. 10 Jun, 2020 3 commits
  10. 09 Jun, 2020 1 commit
  11. 08 Jun, 2020 1 commit
  12. 03 Jun, 2020 4 commits
    • Hongkun Yu · a6c0e677
    • Hongkun Yu · 4bb13e61
    • Add relative positional embedding to KerasBERT (#8617) · c3c2386c
      xinliupitt authored
      * root dir
      * zone updated
      * print mask
      * preview emb
      * tf print
      * input only
      * emb
      * tf print
      * emb after mask
      * masked_softmax print
      * print scores
      * multi folder
      * first pos emb
      * check input shape
      * add test temp
      * import math
      * two classes
      * prints
      * all get_pos replace
      * make time scale private
      * pos emb comments
      * print input
      * embedding_inputs
      * tf shape
      * dimension list
      * tf_util
      * print tf_util
      * concise
      * transformer pos change to layer
      * keep length var
      * length as input
      * None as input
      * print time signal
      * print time signal
      * remove print
      * test input shape
      * double check shape
      * double check shape
      * double check shape
      * more test
      * shape check
      * shape check
      * print 97 info
      * print 97 info new
      * test if same
      * assert same
      * remove assert
      * tf print same
      * tf print diff
      * output example
      * output example
      * output example
      * formal test
      * formal test length
      * raise ValueError
      * test ValueError
      * double check
      * comments
      * remove prints
      * rename relative
      * delete naive test
      * delete docs in xinliu branch
      * code reformat
      * import order
      * indentation fix
      * more files
      * adjust char number
      * disable not callable
      * comment to length
      * error of length unequal to input_shape
      * root dir
      * zone updated
      * print mask
      * preview emb
      * tf print
      * input only
      * emb
      * tf print
      * emb after mask
      * masked_softmax print
      * print scores
      * multi folder
      * remove docs
      * remove prints
      * root dir
      * zone updated
      * print mask
      * preview emb
      * tf print
      * input only
      * emb
      * tf print
      * emb after mask
      * masked_softmax print
      * print scores
      * multi folder
      * remove docs
      * apply revised 3 files
      * rm prints
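      Since the squashed history above records only WIP commit titles, here is a
      minimal sketch of the Transformer-style sinusoidal signal that a relative
      position embedding layer of this kind computes. The class name, constructor
      arguments, and the inputs/length contract are inferred from titles such as
      "make time scale private", "length as input", and "error of length unequal
      to input_shape"; they are illustrative, not necessarily the merged API.

      import math
      import tensorflow as tf

      class RelativePositionEmbedding(tf.keras.layers.Layer):
        """Sinusoidal position encoding over a geometric range of timescales."""

        def __init__(self, hidden_size, min_timescale=1.0, max_timescale=1.0e4,
                     **kwargs):
          super().__init__(**kwargs)
          self._hidden_size = hidden_size
          self._min_timescale = min_timescale
          self._max_timescale = max_timescale

        def call(self, inputs, length=None):
          # Take the sequence length from inputs when given, otherwise from the
          # explicit length argument; at least one source must be provided.
          if inputs is None and length is None:
            raise ValueError("If inputs is None, length must not be None.")
          if inputs is not None:
            length = tf.shape(inputs)[1]
          position = tf.cast(tf.range(length), tf.float32)
          num_timescales = self._hidden_size // 2
          log_timescale_increment = (
              math.log(self._max_timescale / self._min_timescale) /
              (num_timescales - 1))
          inv_timescales = self._min_timescale * tf.exp(
              tf.cast(tf.range(num_timescales), tf.float32) *
              -log_timescale_increment)
          scaled_time = position[:, tf.newaxis] * inv_timescales[tf.newaxis, :]
          # Shape [length, hidden_size]; callers broadcast it over the batch.
          return tf.concat([tf.sin(scaled_time), tf.cos(scaled_time)], axis=1)

      Returning a [length, hidden_size] tensor rather than a batched one lets
      callers add the same signal to every example in the batch.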
    • Internal change · 20897493
      Tianqi Liu authored
      PiperOrigin-RevId: 314451720
  13. 02 Jun, 2020 2 commits
    • Add relative positional embedding to KerasBERT (#8606) · 2db2501b
      xinliupitt authored
      * root dir
      * zone updated
      * print mask
      * preview emb
      * tf print
      * input only
      * emb
      * tf print
      * emb after mask
      * masked_softmax print
      * print scores
      * multi folder
      * first pos emb
      * check input shape
      * add test temp
      * import math
      * two classes
      * prints
      * all get_pos replace
      * make time scale private
      * pos emb comments
      * print input
      * embedding_inputs
      * tf shape
      * dimension list
      * tf_util
      * print tf_util
      * concise
      * transformer pos change to layer
      * keep length var
      * length as input
      * None as input
      * print time signal
      * print time signal
      * remove print
      * test input shape
      * double check shape
      * double check shape
      * double check shape
      * more test
      * shape check
      * shape check
      * print 97 info
      * print 97 info new
      * test if same
      * assert same
      * remove assert
      * tf print same
      * tf print diff
      * output example
      * output example
      * output example
      * formal test
      * formal test length
      * raise ValueError
      * test ValueError
      * double check
      * comments
      * remove prints
      * rename relative
      * delete naive test
      * delete docs in xinliu branch
      * code reformat
      * import order
      * indentation fix
      * more files
      * adjust char number
      * disable not callable
      * comment to length
      * error of length unequal to input_shape
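      #8606 is an earlier iteration of the same change as #8617 above. Its
      "length as input", "None as input", and "test ValueError" titles suggest a
      calling contract along these lines, reusing the illustrative sketch given
      under #8617:

      emb = RelativePositionEmbedding(hidden_size=64)
      signal = emb(inputs=None, length=128)      # shape [128, 64]
      batched = tf.zeros([8, 128, 64]) + signal  # broadcasts over the batch
      emb(inputs=None, length=None)              # raises ValueError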
    • Internal change · 8eb91073
      Chen Chen authored
      PiperOrigin-RevId: 314373769
  14. 30 May, 2020 1 commit
  15. 29 May, 2020 2 commits
  16. 28 May, 2020 1 commit
    • Use float32 activation in Transformer. · 94b1efc1
      Reed Wanderman-Milne authored
      Float32 is used if the model uses mixed precision with bfloat16; float16 activations are unchanged.

      The motivation is that BERT with the LAMB optimizer and a gelu activation has an unstable loss when gelu is computed in bfloat16. Unfortunately, it is not easy to check whether the LAMB optimizer and gelu are in use, and there may be other cases that work better with float32 activations than with bfloat16 activations, so the activation is always done in float32.

      PiperOrigin-RevId: 313618322
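      As a minimal sketch of the idea (the layer and names below are
      illustrative, not the actual Transformer code), under a bfloat16
      mixed-precision policy the intermediate tensor is cast to float32 before
      the activation, while float16 is left untouched:

      import tensorflow as tf

      class TransformerFeedForward(tf.keras.layers.Layer):

        def __init__(self, intermediate_size, hidden_size, **kwargs):
          super().__init__(**kwargs)
          self._intermediate = tf.keras.layers.Dense(intermediate_size)
          self._output = tf.keras.layers.Dense(hidden_size)

        def call(self, x):
          h = self._intermediate(x)
          if h.dtype == tf.bfloat16:
            # gelu destabilized BERT+LAMB in bfloat16, and similar cases are
            # hard to detect, so always run the activation in float32.
            h = tf.cast(tf.keras.activations.gelu(tf.cast(h, tf.float32)),
                        tf.bfloat16)
          else:
            h = tf.keras.activations.gelu(h)
          return self._output(h)

      With tf.keras.mixed_precision.set_global_policy("mixed_bfloat16"), the
      Dense layers compute in bfloat16, so the casts ensure that gelu itself
      always runs in float32.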
  17. 19 May, 2020 1 commit
  18. 18 May, 2020 1 commit
  19. 17 May, 2020 1 commit
  20. 12 May, 2020 3 commits
  21. 10 May, 2020 1 commit
  22. 05 May, 2020 1 commit
  23. 21 Apr, 2020 1 commit
  24. 20 Apr, 2020 1 commit
  25. 19 Apr, 2020 1 commit
  26. 17 Apr, 2020 4 commits