- 22 Jul, 2019 1 commit
-
Hongkun Yu authored
* Update pylint.rcfile
* Update pylint.rcfile
* Update pylint.rcfile
* add new sanity check script for lint to replace current lint script.
* Revert "Update pylint.rcfile". This reverts commit f6036cd7e7c4b9e3eeb47bb56a63927a040a2761.
* Revert "Update pylint.rcfile". This reverts commit e3af497342e26bbbbecfc8c8f79cb0e24a2ef960.
* Revert "Update pylint.rcfile". This reverts commit 6136636eee6e90fd191ebbb4ccaa9fb89c0290f4.
* update scripts
* disable trailing-newlines
-
- 21 Jul, 2019 1 commit
-
Zongwei Zhou authored
-
- 20 Jul, 2019 3 commits
-
Zongwei Zhou authored
-
Toby Boyd authored
-
Toby Boyd authored
-
- 19 Jul, 2019 8 commits
-
Igor authored
259030078 by isaprykin<isaprykin@google.com>: Clean up the --clone_model_in_keras_dist_strat flag from Keras ResNet. The cloning flag has been removed; the current rule is that cloning is only done in graph mode. That resulted in duplicate benchmarks (eager+no-cloning vs eager+cloning), so the eager+cloning ones were removed.
259026454 by isaprykin<isaprykin@google.com>: Internal change
PiperOrigin-RevId: 259030078
Jing Li authored
* Merged commit includes the following changes:
  258867180 by jingli<jingli@google.com>: Add new folders for upcoming reorg in model garden.
  258893811 by hongkuny<hongkuny@google.com>: Adds summaries for metrics, allowing metrics inside keras.model.
  258893048 by isaprykin<isaprykin@google.com>: Remove the `cloning` argument to `compile()`. Keras models are distributed by cloning in graph mode and without cloning in eager mode as of change 258652546.
  258881002 by hongkuny<hongkuny@google.com>: Fix lint.
  258874998 by hongkuny<hongkuny@google.com>: Internal
  258872662 by hongkuny<hongkuny@google.com>: Fix doc
  PiperOrigin-RevId: 258867180
* Create __init__.py
* Update __init__.py
* Update __init__.py
* Update __init__.py
guptapriya authored
-
guptapriya authored
-
guptapriya authored
This combination does not yet work. Fail early with an explicit message instead of throwing an error later on.
-
guptapriya authored
The current approach checks for the presence of contrib. Sometimes this is not sufficient (e.g. when testing TF 1 with enable_v2_behavior=True, which is what internal tests currently do).
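A minimal sketch of the contrib-presence heuristic described above, assuming detection via importlib; the helper name is hypothetical. As the commit notes, TF 1 with enable_v2_behavior=True still ships tf.contrib, so this check alone cannot distinguish that case.

```python
import importlib.util

def contrib_available():
    # Hypothetical helper: TF 1.x ships tf.contrib, TF 2.x does not.
    # TF 1 with enable_v2_behavior=True still has contrib, which is
    # exactly why the commit calls this heuristic insufficient.
    try:
        return importlib.util.find_spec("tensorflow.contrib") is not None
    except (ImportError, AttributeError, ValueError):
        # TensorFlow (or contrib) not importable in this environment.
        return False
```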
-
Hongkun Yu authored
258881002 by hongkuny<hongkuny@google.com>: Fix lint.
258874998 by hongkuny<hongkuny@google.com>: Internal
258872662 by hongkuny<hongkuny@google.com>: Fix doc
258871624 by hongkuny<hongkuny@google.com>: Internal change
PiperOrigin-RevId: 258881002
-
- 18 Jul, 2019 3 commits
-
Hongkun Yu authored
258597234 by rxsang<rxsang@google.com>: Update all the TPUStrategy examples to use the new v2 APIs, i.e.:
  make_dataset_iterator -> experimental_distribute_dataset
  make_input_fn_iterator -> experimental_distribute_datasets_from_function
  unwrap -> experimental_local_results
  experimental_run -> experimental_run_v2
258581998 by taylorrobie<taylorrobie@google.com>: Update keras v2 optimizers to reuse coefficients which are shared across all updates, which reduces the total number of ops created by between 5% (for simple optimizers such as SGD and Adagrad) and 25% (for complicated optimizers such as Adam and NAdam). Separate copies are made for each device and dtype. The effect of this change on run time is fairly minimal, since Grappler is expected to consolidate most of these ops; however, it does improve graph construction time.
PiperOrigin-RevId: 258597234
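The four renames in the commit above amount to a simple name mapping; the table and `migrate_call` helper below are illustrative only, not code from the commit:

```python
# v1 tf.distribute.Strategy method -> v2 replacement, as listed in the commit.
V1_TO_V2 = {
    "make_dataset_iterator": "experimental_distribute_dataset",
    "make_input_fn_iterator": "experimental_distribute_datasets_from_function",
    "unwrap": "experimental_local_results",
    "experimental_run": "experimental_run_v2",
}

def migrate_call(method_name):
    # Hypothetical helper: return the v2 name, or the input unchanged
    # if the method was not renamed.
    return V1_TO_V2.get(method_name, method_name)
```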
Toby Boyd authored
* Added benchmarks and common flags.
* Add cpu tests.
* Add tracking epoch times.
* fix transformer.
* Add examples_per_second.
* fix pylint
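The examples_per_second metric added above is plain wall-clock throughput; a hypothetical sketch, not the benchmark's actual code:

```python
def examples_per_second(num_examples, start_time, end_time):
    # Throughput: examples processed divided by elapsed wall time.
    elapsed = end_time - start_time
    if elapsed <= 0:
        return 0.0
    return num_examples / elapsed
```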
-
Haoyu Zhang authored
* Config threadpool, cuDNN persistent BN, and grappler layout optimizer properly for ResNet56
* Add tweaked tests for ResNet56
* Avoid triggering the last partial-batch overhead by explicitly dropping the remainder
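Dropping the remainder means keeping only full batches, as tf.data's drop_remainder=True does; here is a pure-Python sketch of that behavior (the function name is made up):

```python
def batch_drop_remainder(examples, batch_size):
    # Split into full batches only, discarding the trailing partial batch,
    # mirroring tf.data.Dataset.batch(batch_size, drop_remainder=True).
    n_full = len(examples) // batch_size
    return [examples[i * batch_size:(i + 1) * batch_size] for i in range(n_full)]
```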
-
- 16 Jul, 2019 2 commits
-
Hongkun Yu authored
258208153 by hongkuny<hongkuny@google.com>: Adds run_eagerly option for bert.
PiperOrigin-RevId: 258208153
nnigania authored
* NCF perf changes:
  1) Exclude the metric layer from the CTL train step.
  2) Dataset optimization to fix the size of the sample_weights, preventing a costly broadcast during loss calculation in the multi-GPU case.
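Point 2 is about sizing sample_weights to match the per-example losses up front, so no broadcast of a smaller weights tensor is needed at loss time; a hypothetical pure-Python illustration:

```python
def weighted_loss(per_example_losses, sample_weights):
    # Weights are already sized to the batch, so each loss pairs with a
    # weight directly and no broadcasting is required.
    assert len(sample_weights) == len(per_example_losses)
    total = sum(l * w for l, w in zip(per_example_losses, sample_weights))
    return total / max(sum(sample_weights), 1e-12)
```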
-
- 15 Jul, 2019 2 commits
-
Bruce Fontaine authored
* Initial implementation of Shakespeare character LSTM.
* Fix import order
-
Hongkun Yu authored
257883986 by hongkuny<hongkuny@google.com>: Adds tf.summary for bert training
PiperOrigin-RevId: 257883986
-
- 11 Jul, 2019 5 commits
-
Toby Boyd authored
-
Toby Boyd authored
* Record highest uncased bleu found.
* change to bleu_best_score_iteration
-
Toby Boyd authored
-
saberkun authored
257314238 by hongkuny<hongkuny@google.com>: Creates transformer v2 README. Remove contents that are not implemented.
PiperOrigin-RevId: 257314238
Toby Boyd authored
* Move to global_step.
* Hook to use global_step.
* Fix comment: start at step 1, not step 0.
* Remove hack used for testing.
* Add docstring.
-
- 09 Jul, 2019 1 commit
-
Haoyu Zhang authored
* Improve performance for Cifar ResNet benchmarks
* Revert batch size changes to benchmarks
-
- 08 Jul, 2019 2 commits
- 03 Jul, 2019 1 commit
-
Toby Boyd authored
* Fix unit test failures.
* 96% of TF 2.0 tests on GPU are passing.
* Currently all passing on GPU and CPU with TF 2.0.
* Address code comments.
* Use tf 2.0 cast.
* Comment about working on TF 2.0 CPU.
* Use contrib turn-off for TF 2.0.
* Fix wide_deep and add keras_common_tests.
* Use context to get num_gpus.
* Switch to tf.keras.metrics.
-
- 02 Jul, 2019 3 commits
-
saberkun authored
256204636 by hongkuny<hongkuny@google.com>: Internal
256079834 by hongkuny<hongkuny@google.com>: Clean up: move common flags together for further refactoring. Enable steps_per_loop option for all applications.
PiperOrigin-RevId: 256204636
Yuefeng Zhou authored
* Add StepCounterHook to hooks_helper.py
* Update symbol.
-
Yuefeng Zhou authored
when there are multiple workers.
-
- 28 Jun, 2019 4 commits
-
Toby Boyd authored
-
nnigania authored
* Borrowing a tf1.x optimization which converts gradients from sparse to dense for better perf
* Cleanup after code review
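Sparse-to-dense gradient conversion scatters (index, row) gradient slices into a dense buffer, accumulating rows that hit the same index; a hypothetical pure-Python sketch of the idea (the actual TF optimization converts IndexedSlices gradients to dense tensors):

```python
def densify_gradient(indices, values, num_rows, row_width):
    # Scatter sparse (IndexedSlices-style) gradient rows into a dense
    # list-of-lists, summing duplicate indices as gradient aggregation does.
    dense = [[0.0] * row_width for _ in range(num_rows)]
    for idx, row in zip(indices, values):
        for j, v in enumerate(row):
            dense[idx][j] += v
    return dense
```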
-
saberkun authored
* Merged commit includes the following changes:
  255493073 by hongkuny<hongkuny@google.com>: BERT initial OSS readme update.
  255470372 by dmchen<dmchen@google.com>: Slightly expand expected range for F1 score in BERT SQuAD accuracy test
  255109240 by hongkuny<hongkuny@google.com>: Update eval/predict batch sizes.
  255010016 by hongkuny<hongkuny@google.com>: Internal
  254874613 by hongkuny<hongkuny@google.com>: Update glue tasks enum to match directory name
  254866171 by taylorrobie<taylorrobie@google.com>: Internal change
  254785517 by zongweiz<zongweiz@google.com>: Use train_single_step for BERT GPU models to temporarily work around some performance bugs in GPU runs
  254497647 by hongkuny<hongkuny@google.com>: Fix device placement for TPU export model.
  PiperOrigin-RevId: 255493073
* Update README.md
David M. Chen authored
255493073 by hongkuny<hongkuny@google.com>: BERT initial OSS readme update.
255470372 by dmchen<dmchen@google.com>: Slightly expand expected range for F1 score in BERT SQuAD accuracy test
255109240 by hongkuny<hongkuny@google.com>: Update eval/predict batch sizes.
255010016 by hongkuny<hongkuny@google.com>: Internal
PiperOrigin-RevId: 255493073
-
- 25 Jun, 2019 1 commit
-
saberkun authored
254874613 by hongkuny<hongkuny@google.com>: Update glue tasks enum to match directory name
254866171 by taylorrobie<taylorrobie@google.com>: Internal change
PiperOrigin-RevId: 254874613
-
- 24 Jun, 2019 2 commits
-
saberkun authored
254785517 by A. Unique TensorFlower<gardener@tensorflow.org>: Use train_single_step for BERT GPU models to temporarily work around some performance bugs in GPU runs
254497647 by hongkuny<hongkuny@google.com>: Fix device placement for TPU export model.
PiperOrigin-RevId: 254785517
nnigania authored
-
- 22 Jun, 2019 1 commit
-
Toby Boyd authored
-