- 11 Oct, 2019 2 commits
Hongkun Yu authored
* Revert "Update tf.contrib.data to tf.data.experimental. (#7650)" This reverts commit faf4bbb3. * revert research
Derek Murray authored
- 03 Jul, 2019 1 commit
Toby Boyd authored
* Fix unit tests failures.
* 96% of TF 2.0 tests on GPU are passing.
* Currently all passing GPU and CPU TF 2.0
* Address code comments.
* use tf 2.0 cast.
* Comment about working on TF 2.0 CPU
* Uses contrib turn off for TF 2.0.
* Fix wide_deep and add keras_common_tests.
* use context to get num_gpus.
* Switch to tf.keras.metrics
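A quick sketch of the TF 2.0-style idioms these bullets point at (the tensors and counts are placeholders, and `tf.config` is just one way to count GPUs, not necessarily the one the commit used):

```python
import tensorflow as tf

# tf.cast is the TF 2.0 way to change dtypes.
labels = tf.constant([0, 1, 1])
labels_float = tf.cast(labels, tf.float32)
print(labels_float.dtype)  # float32

# tf.keras.metrics replaces tf.metrics.* for code that runs eagerly under TF 2.0.
accuracy = tf.keras.metrics.SparseCategoricalAccuracy()
predictions = tf.constant([[0.9, 0.1], [0.2, 0.8], [0.6, 0.4]])
accuracy.update_state(labels, predictions)
print(float(accuracy.result()))  # 2 of 3 predictions match

# Counting visible GPUs from the runtime configuration.
num_gpus = len(tf.config.experimental.list_physical_devices("GPU"))
```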
- 11 Jun, 2019 1 commit
saberkun authored
252534787 by hongkuny<hongkuny@google.com>: Transformer vocab fix to strip correctly in py2 -- PiperOrigin-RevId: 252534787
- 06 Jun, 2019 1 commit
saberkun authored
251762562 by hongkuny<hongkuny@google.com>: Fix BLEU score inconsistency -- PiperOrigin-RevId: 251762562
- 22 May, 2019 2 commits
Toby Boyd authored
Tian Lin authored
* Merged commit includes the following changes:
  249218656 by tianlin<tianlin@google.com>: Deal with imports, fix a typo and make unit tests fast.
  249198645 by tianlin<tianlin@google.com>: Trivial: Remove one empty line before "import tensorflow"
  249195490 by tianlin<tianlin@google.com>: Initialize Transformer TF V2 Model with Keras subclassing implementation. (Compatible with TF V1)
  249195008 by tianlin<tianlin@google.com>: Internal change
  249173564 by hongkuny<hongkuny@google.com>: Internal change
  249079258 by hongkuny<hongkuny@google.com>: Internal change
  247691534 by haoyuzhang<haoyuzhang@google.com>: Internal change
  247533725 by haoyuzhang<haoyuzhang@google.com>: Internal change
  247509295 by haoyuzhang<haoyuzhang@google.com>: Internal change
  247311355 by wangtz<wangtz@google.com>: Internal change
  247303127 by wangtz<wangtz@google.com>: ...
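Change 249195490 mentions a Keras-subclassing implementation; the pattern it refers to looks roughly like the toy model below (an illustrative stand-in, not the actual Transformer):

```python
import tensorflow as tf

class ToySeq2SeqModel(tf.keras.Model):
  """Hypothetical stand-in showing the tf.keras.Model subclassing pattern."""

  def __init__(self, vocab_size, hidden_size):
    super(ToySeq2SeqModel, self).__init__()
    self.embedding = tf.keras.layers.Embedding(vocab_size, hidden_size)
    self.projection = tf.keras.layers.Dense(vocab_size)

  def call(self, inputs):
    # inputs: [batch, length] int token ids -> [batch, length, vocab] logits.
    return self.projection(self.embedding(inputs))

model = ToySeq2SeqModel(vocab_size=100, hidden_size=32)
logits = model(tf.constant([[1, 2, 3]]))
```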
- 09 May, 2019 1 commit
Toby Boyd authored
* Add first benchmark and return stats.
* Remove print statements update training steps.
* Revert print T: in print statement.
* Remove print(stats)
* add 2 gpu accuracy test for base.
* Fixed total_batch_size when using gpu + gFile deprecations.
* 8 GPU test name fix
* Add 4 and 8 GPU tests.
* typo fixes.
* Clean up test names and methods.
* bleu uncased. docstring format fix.
- 17 Dec, 2018 1 commit
bananabowl authored
Explicitly pass values kwarg to tf.name_scope as it is currently being treated as the default_name kwarg instead. This causes an exception to be thrown in eager mode.
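A sketch of the fix described above, assuming the TF 1.x signature `tf.name_scope(name, default_name=None, values=None)` (the tensor and scope names are made up):

```python
import tensorflow as tf

inputs = tf.constant([1.0, 2.0])

# Positional call: the list of tensors lands in default_name rather than values,
# which is the mix-up described above and raises in eager mode.
# with tf.name_scope("scale", [inputs]):
#     ...

# Passing values by keyword keeps each argument where it belongs.
with tf.name_scope("scale", values=[inputs]):
    scaled = tf.multiply(inputs, 2.0, name="scaled")
```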
- 16 Aug, 2018 1 commit
Jules Gagnon-Marchand authored
* Deterministic dataset order fix
  In order for the order of the files to be deterministic, in `tf.data.Dataset.list_files(..., shuffle)`, shuffle needs to be False; otherwise different iterator inits will yield different file orders.
* removed unnecessary shuffle of filenames
* Removed the `_FILE_SHUFFLE_BUFFER` definition
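A minimal sketch of the `list_files` behaviour being pinned down (the file pattern is hypothetical):

```python
import tensorflow as tf

file_pattern = "/tmp/translate_ende/*train*"  # hypothetical path

# shuffle defaults to True, so each new iterator may see the files in a
# different random order; shuffle=False makes the listing deterministic.
train_files = tf.data.Dataset.list_files(file_pattern, shuffle=False)
```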
- 11 Jul, 2018 1 commit
cclauss authored
* Use six and feature detection in string conversion
  Leverage [`six.ensure_text()`](https://github.com/benjaminp/six/blob/master/six.py#L890) to deliver Unicode text in both Python 2 and Python 3. Follow Python porting best practice [use feature detection instead of version detection](https://docs.python.org/3/howto/pyporting.html#use-feature-detection-instead-of-version-detection) in `_unicode_to_native()`.
* Revert the use of six.ensure_text()
  Thanks for catching that! I jumped the gun. It is I who have brought shame...
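A rough sketch of the feature-detection idea (checking for a capability rather than `sys.version_info`); the helper name `_unicode_to_native` comes from the message above, but this body is an assumption, not the repository's code:

```python
import six

try:
    # Feature detection: the built-in name `unicode` only exists on Python 2.
    unicode
    def _unicode_to_native(s):
        # On Python 2 the "native" string type is bytes, so encode text.
        return s.encode("utf-8") if isinstance(s, unicode) else s
except NameError:
    def _unicode_to_native(s):
        # On Python 3, str is already Unicode text; nothing to do.
        return s

# six.ensure_text is the helper the first commit leaned on before the revert.
text = six.ensure_text(b"hello", encoding="utf-8")
```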
- 18 Jun, 2018 1 commit
Taylor Robie authored
* remove unused imports and lint
* fix schedule.py
* address PR comments
- 12 Jun, 2018 1 commit
Katherine Wu authored
* Add DistributionStrategy to transformer model
* add num_gpu flag
* Calculate per device batch size for transformer
* remove reference to flags_core
* Add synthetic data option to transformer
* fix typo
* add import back in
* Use hierarchical copy
* address PR comments
* lint
* fix spaces
* group train op together to fix single GPU error
* Fix translate bug (sorted_keys is a dict, not a list)
* Change params to a default dict (translate.py was throwing errors because params didn't have the TPU parameters.)
* Address PR comments. Removed multi gpu flag + more
* fix lint
* fix more lints
* add todo for Synthetic dataset
* Update docs
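A sketch of two of the pieces mentioned above (per-device batch size plus a DistributionStrategy), using the contrib-era TF 1.x API that existed at the time; the numbers are made up:

```python
import tensorflow as tf

num_gpus = 2                      # would normally come from a --num_gpus flag
global_batch_size = 4096

# Each replica sees an equal slice of the global batch.
per_device_batch_size = global_batch_size // num_gpus

# TF 1.x contrib-era MirroredStrategy, wired into an Estimator via RunConfig.
distribution = tf.contrib.distribute.MirroredStrategy(num_gpus=num_gpus)
run_config = tf.estimator.RunConfig(train_distribute=distribution)
# run_config would then be passed to the tf.estimator.Estimator constructor.
```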
- 07 Jun, 2018 1 commit
Katherine Wu authored
- 06 Jun, 2018 1 commit
Taylor Robie authored
* add tests for matmul embedding and schedule manager, as well as some minor cleanup
* delint
* address PR comments
- 04 Jun, 2018 1 commit
Taylor Robie authored
* port changes from previous branch now that transformer util changes are in master
  fix incorrect count
  correct (hopefully) treatment of batch_size
  set eval_metrics to a dummy function for now
  add some comments
  start bringing metrics to transformer TPU
  resolve logits shape
  metrics are now working except for tf.py_func metrics
  increase batch_size for tpu, and create summary host call
  fix host call
  reduce tpu default batch size further
  tune batch sizes
  add minibatch loss to summary
  handle case of single_iteration_train_steps > number points in an epoch
  begin to incorporate hooks
  add sleep workarounds
  disable hooks altogether
  generalize host call function and move to newly created tpu utils module
  remove all traces of params as an object
  switch from to address some PR comments, and change the number of data points.
  minor tweaks
  add tpu dry run for testing, and use matmul for TPU embedding
  infeed/outfeed queue issue is fixed. Sleeps are no longer necessary
  add some documentation.
  cleanup and address PR comments
  delint
  add accelerator __init__
  fix embedding
  missed PR comment
  address PR comments
  fix validator bug
  rewrite cloud storage validator, and add oauth dependency to requirements.txt
* delint
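The "use matmul for TPU embedding" line refers to the common TPU-era trick of replacing `tf.gather` with a one-hot matmul; a self-contained sketch under that assumption (not the repository's exact helper):

```python
import tensorflow as tf

def embedding_matmul(token_ids, embedding_table):
  """Sketch: embedding lookup as a one-hot matmul instead of tf.gather."""
  vocab_size = tf.shape(embedding_table)[0]
  # [batch, length] ids -> [batch, length, vocab] one-hot indicators.
  one_hot_ids = tf.one_hot(token_ids, depth=vocab_size,
                           dtype=embedding_table.dtype)
  # Contract the vocab axis against the table: result is [batch, length, hidden].
  return tf.tensordot(one_hot_ids, embedding_table, axes=[[2], [0]])
```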
- 15 May, 2018 1 commit
Katherine Wu authored
- 11 May, 2018 1 commit
Katherine Wu authored
- 02 May, 2018 1 commit
Katherine Wu authored