- 10 Mar, 2021 1 commit
Frederick Liu authored
PiperOrigin-RevId: 362075728
-
- 17 Nov, 2020 2 commits
Hongkun Yu authored
PiperOrigin-RevId: 342770296
-
Hongkun Yu authored
PiperOrigin-RevId: 342770296
-
- 13 Sep, 2020 2 commits
Hongkun Yu authored
PiperOrigin-RevId: 331359058
-
Hongkun Yu authored
PiperOrigin-RevId: 331359058
-
- 12 Aug, 2020 2 commits
Hongkun Yu authored
PiperOrigin-RevId: 326286926
-
Hongkun Yu authored
PiperOrigin-RevId: 326286926
-
- 18 Jul, 2020 2 commits
Yuefeng Zhou authored
PiperOrigin-RevId: 321890251
-
Yuefeng Zhou authored
PiperOrigin-RevId: 321890251
-
- 09 May, 2020 1 commit
A. Unique TensorFlower authored
PiperOrigin-RevId: 310658964
-
- 17 Apr, 2020 1 commit
Hongkun Yu authored
PiperOrigin-RevId: 306994199
-
- 14 Apr, 2020 1 commit
Hongkun Yu authored
PiperOrigin-RevId: 306453144
-
- 05 Mar, 2020 1 commit
Hongkun Yu authored
PiperOrigin-RevId: 299160422
-
- 16 Dec, 2019 1 commit
Hongkun Yu authored
Remove an unmaintained code path. PiperOrigin-RevId: 285869559
-
- 26 Nov, 2019 1 commit
A. Unique TensorFlower authored
PiperOrigin-RevId: 282650416
-
- 16 Oct, 2019 1 commit
Reed Wanderman-Milne authored
To test, I did 50 fp32 runs and 50 fp16 runs, using the following command:
python ncf_keras_main.py --dataset=ml-20m --num_gpus=1 --train_epochs=10 --clean --batch_size=99000 --learning_rate=0.00382059 --beta1=0.783529 --beta2=0.909003 --epsilon=1.45439e-7 --layers=256,256,128,64 --num_factors=64 --hr_threshold=0.635 --ml_perf --nouse_synthetic_data --data_dir ~/ncf_data_dir_python3 --model_dir ~/tmp_model_dir --keras_use_ctl
For the fp16 runs, I added --dtype=fp16. The average hit-rate for both fp16 and fp32 was 0.6365. I also did 50 runs with the mixed precision graph rewrite, and the average hit-rate was 0.6363; the difference is likely due to noise.
PiperOrigin-RevId: 275059871
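For orientation, here is a hedged sketch of how a --dtype=fp16 style switch can map onto TF 2.x-era Keras mixed-precision APIs. It is illustrative only, not the model's actual flag plumbing, and `configure_optimizer` is a made-up helper name.

```python
# Illustrative only: float16 compute with float32 variables plus dynamic loss
# scaling, in the spirit of the --dtype=fp16 runs described above.
import tensorflow as tf


def configure_optimizer(dtype, optimizer):
  """Optionally enables mixed precision and wraps the optimizer (sketch)."""
  if dtype == "fp16":
    tf.keras.mixed_precision.experimental.set_policy("mixed_float16")
    # Dynamic loss scaling guards against float16 gradient underflow.
    optimizer = tf.keras.mixed_precision.experimental.LossScaleOptimizer(
        optimizer, loss_scale="dynamic")
  return optimizer


optimizer = configure_optimizer("fp16", tf.keras.optimizers.Adam(0.00382059))
```

The graph-rewrite runs mentioned above use a different mechanism (rewriting the graph rather than setting a Keras layer policy), which is why comparing the two hit-rates is a useful sanity check.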
-
- 17 Sep, 2019 1 commit
Hongkun Yu authored
Move movielens to recommendation. PiperOrigin-RevId: 269680664
-
- 09 Sep, 2019 1 commit
Reed Wanderman-Milne authored
--stop_threshold, --num_gpu, --hooks, --export_dir, and --distribution_strategy have been unexposed from models that do not use them. PiperOrigin-RevId: 268032080
-
- 04 Sep, 2019 1 commit
Reed Wanderman-Milne authored
--clean, --train_epochs, and --epochs_between_evals have been unexposed from models that do not use them. PiperOrigin-RevId: 267065651
-
- 12 Aug, 2019 1 commit
Hongjun Choi authored
262988559 by A. Unique TensorFlower <gardener@tensorflow.org>: Enable NCF TF 2.0 model to run on TPUStrategy (a generic setup sketch follows below).
262971756 by A. Unique TensorFlower <gardener@tensorflow.org>: Internal change.
262967691 by hongkuny <hongkuny@google.com>: Internal.
PiperOrigin-RevId: 262988559
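As a hedged aside, TPUStrategy setup in TF 2.x generally looks like the sketch below; it is not taken from the NCF code, and the empty `tpu=""` argument assumes the TPU address is discoverable from the environment.

```python
# Generic TF 2.x TPU setup sketch (not the NCF code); a toy model stands in
# for the real one.
import tensorflow as tf

resolver = tf.distribute.cluster_resolver.TPUClusterResolver(tpu="")
tf.config.experimental_connect_to_cluster(resolver)
tf.tpu.experimental.initialize_tpu_system(resolver)
strategy = tf.distribute.experimental.TPUStrategy(resolver)

with strategy.scope():
  # Variables must be created under the strategy scope.
  model = tf.keras.Sequential([tf.keras.layers.Dense(1)])
```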
-
- 09 Aug, 2019 1 commit
Nimit Nigania authored
-
- 29 Jul, 2019 1 commit
Hongjun Choi authored
260228553 by priyag <priyag@google.com>: Enable transformer and NCF official model tests. Also fix some minor issues so that all tests pass with TF 1 + enable_v2_behavior.
260043210 by A. Unique TensorFlower <gardener@tensorflow.org>: Add logic to train NCF model using offline generated data.
259778607 by priyag <priyag@google.com>: Internal change.
259656389 by hongkuny <hongkuny@google.com>: Internal change.
PiperOrigin-RevId: 260228553
-
- 23 Jul, 2019 2 commits
Toby Boyd authored
* Add force_run_distributed tests.
* Added enable_eager.
* r/force_run_distributed/force_v2_in_keras_compile
* Adding force_v2 tests and FLAGs.
* Rename method to avoid conflict.
* Add cpu force_v2 tests.
* Fix lint, wrap line.
* Change to force_v2_in_keras_compile.
* Update method name.
* Lower mlperf target to 0.736.
-
Hongjun Choi authored
* Merged commit includes the following changes:
  259442882 by hongkuny <hongkuny@google.com>: Internal
  259377621 by A. Unique TensorFlower <gardener@tensorflow.org>: Fix NCF serialization/de-serialization logic in the NCF input pipeline to use tf.FixedLenFeature instead of raw string/binary decoding (a generic parsing sketch follows after this list).
  259373183 by A. Unique TensorFlower <gardener@tensorflow.org>: Create binary to generate NCF training/evaluation dataset offline.
  259026454 by isaprykin <isaprykin@google.com>: Internal change
  258871624 by hongkuny <hongkuny@google.com>: Internal change
  257285772 by haoyuzhang <haoyuzhang@google.com>: Internal change
  256202287 by A. Unique TensorFlower <gardener@tensorflow.org>: Internal change.
  254069984 by hongkuny <hongkuny@google.com>: Automated rollback of changelist 254060732.
  254060732 by yifeif <yifeif@google.com>: Automated rollback of changelist 254027750.
  254027750 by hongkuny <hongkuny@google.com>: Internal change
  253118910 by hongkuny <hongkuny@google.com>: Internal change
  251906769 by hongkuny <hongkuny@google.com>: Internal change
  251303452 by haoyuzhang <haoyuzhang@google.com>: Internal change
  PiperOrigin-RevId: 259442882
* Update ncf_keras_main.py
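As a hedged illustration of the FixedLenFeature change above, the pattern is roughly the following; the feature names and dtypes are invented, not the pipeline's real schema.

```python
# Invented schema; shows tf.io.FixedLenFeature parsing instead of raw
# string/binary decoding of serialized examples.
import tensorflow as tf

FEATURE_SPEC = {
    "user_id": tf.io.FixedLenFeature([], tf.int64),
    "item_id": tf.io.FixedLenFeature([], tf.int64),
    "label": tf.io.FixedLenFeature([], tf.float32),
}


def parse_record(serialized_example):
  """Parses one serialized tf.train.Example into a dict of dense tensors."""
  return tf.io.parse_single_example(serialized_example, FEATURE_SPEC)


dataset = tf.data.TFRecordDataset(["/tmp/ncf_train.tfrecord"]).map(parse_record)
```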
-
- 21 Jun, 2019 1 commit
Toby Boyd authored
* XLA FP32 and first test.
* More XLA benchmarks FP32.
* Add eager to NCF and refactor resnet.
* Fix v2_0 calls and more flag refactor.
* Remove extra flag args.
* 90 epoch default.
* Add return.
* Remove xla not used by estimator.
* Remove duplicate run_eagerly.
* Fix flag defaults.
* Remove fp16_implementation flag option.
* Remove stop early on mlperf test.
* Remove unneeded args.
* Load flags from keras mains.
-
- 13 Jun, 2019 2 commits
guptapriya authored
-
guptapriya authored
-
- 03 Jun, 2019 1 commit
guptapriya authored
-
- 31 May, 2019 2 commits
Haoyu Zhang authored
-
Haoyu Zhang authored
* Fix various lint errors.
* Fix logging format.
-
- 29 May, 2019 1 commit
Bruce Fontaine authored
* Add flag to use custom training loop for keras NCF model.
* Add error check to NCF model for custom training loop + tf1.0.
-
- 28 May, 2019 1 commit
Bruce Fontaine authored
* Add a custom training loop for NCF model with TF 2.0 (a generic sketch of the pattern follows below).
* Fix long line in ncf_keras_main.py.
* Remove dataset repeat when using custom training loop.
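For context, a minimal, generic TF 2.x custom-training-loop sketch; the tiny model, loss, and synthetic data are stand-ins for the real NCF pieces, and only the overall GradientTape pattern (and the no-repeat dataset note) comes from the change above.

```python
# Stand-in model and data; shows the generic GradientTape training loop.
import tensorflow as tf

model = tf.keras.Sequential([tf.keras.layers.Dense(1)])
optimizer = tf.keras.optimizers.Adam()
loss_fn = tf.keras.losses.BinaryCrossentropy(from_logits=True)


@tf.function
def train_step(features, labels):
  with tf.GradientTape() as tape:
    logits = model(features, training=True)
    loss = loss_fn(labels, logits)
  grads = tape.gradient(loss, model.trainable_variables)
  optimizer.apply_gradients(zip(grads, model.trainable_variables))
  return loss


features = tf.random.uniform([64, 8])
labels = tf.cast(tf.random.uniform([64, 1], maxval=2, dtype=tf.int32), tf.float32)
# No .repeat(): the outer loop, not the dataset, controls how long we train.
dataset = tf.data.Dataset.from_tensor_slices((features, labels)).batch(16)

for x, y in dataset:
  train_step(x, y)
```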
-
- 24 May, 2019 1 commit
Priya Gupta authored
Add early stopping logic to NCF Keras when the desired threshold is met. Also change the default batch size to match the tuned hyperparameters.
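The early stopping described above amounts to something like the sketch below; the helper and callable names are hypothetical, with only the threshold-check idea (cf. the existing --hr_threshold flag) taken from the change itself.

```python
# Hypothetical helper; stops training once the evaluated hit rate (HR) reaches
# the desired threshold instead of always running every epoch.
def train_until_threshold(train_one_epoch, evaluate_hit_rate,
                          max_epochs, hr_threshold):
  hit_rate = 0.0
  for epoch in range(max_epochs):
    train_one_epoch(epoch)
    hit_rate = evaluate_hit_rate()
    if hit_rate >= hr_threshold:
      print("HR %.4f >= %.4f at epoch %d; stopping early."
            % (hit_rate, hr_threshold, epoch))
      break
  return hit_rate
```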
-
- 29 Apr, 2019 1 commit
Igor authored
* Add benchmarks with the --cloning flag to Resnet and NCF.
* Renamed cloning to clone_model_in_keras_dist_strat. Dropped a few tests that aren't essential.
* Fixed up the formatting after renaming the flag to a much longer name. Thanks, lint.
* Fixed the lint error in ncf_common.py.
-
- 20 Apr, 2019 1 commit
Shining Sun authored
* Remove contrib imports, or move them inline
* Use exposed API for FixedLenFeature
* Replace tf.logging with absl logging
* Change GFile to v2 APIs
* Replace tf.logging with absl logging in movielens
* Fix an import bug
* Change gfile to v2 APIs in code
* Swap to keras optimizer v2
* Bug fix for optimizer
* Change tf.log to tf.keras.backend.log
* Change the loss function to keras loss
* Convert another loss to keras loss
* Resolve comments and fix lint
* Add a doc string
* Fix existing tests and add new tests for DS
* Added tests for multi-replica
* Fix lint
* Resolve comments
* Make estimator run in tf2.0
* Use compat v1 loss
* Fix lint issue
-
- 01 Mar, 2019 1 commit
Shining Sun authored
* tmp commit
* tmp commit
* first attempt (without eval)
* Bug fixes
* bug fixes
* training done
* Loss NAN, no eval
* Loss weight problem solved
* resolve the NAN loss problem
* Problem solved. Clean up needed
* Added a todo
* Remove debug prints
* Extract get_optimizer to ncf_common
* Move metrics computation back to neumf; use DS.scope api (a generic sketch of the pattern follows below)
* Extract DS.scope code to utils
* lint fixes
* Move obtaining DS above producer.start to avoid race condition
* move pt 1
* move pt 2
* Update the run script
* Wrap keras_model related code into functions
* Update the doc for softmax_logitfy and change the method name
* Resolve PR comments
* working version with: eager, DS, batch and no masks
* Remove git conflict indicator
* move reshape to neumf_model
* working version, not converged
* converged
* fix a test
* more lint fix
* more lint fix
* more lint fixes
* more lint fix
* Removed unused imports
* fix test
* dummy commit for kicking off checks
* fix lint issue
* dummy input to kick off checks
* dummy input to kick off checks
* add collective to dist strat
* addressed review comments
* add a doc string
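The DS.scope pattern referenced in the list above is, generically, the following; MirroredStrategy and the toy model are placeholders rather than the actual NCF wiring.

```python
# Placeholders only: create the model and optimizer under strategy.scope() so
# their variables are mirrored across replicas.
import tensorflow as tf

strategy = tf.distribute.MirroredStrategy()

with strategy.scope():
  model = tf.keras.Sequential([tf.keras.layers.Dense(1)])
  model.compile(
      optimizer=tf.keras.optimizers.Adam(),
      loss=tf.keras.losses.BinaryCrossentropy(from_logits=True))
```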
-
- 08 Jan, 2019 1 commit
Taylor Robie authored
-
- 07 Jan, 2019 3 commits
Taylor Robie authored
-
Taylor Robie authored
Add a bisection-based producer for increased scalability, enable fully deterministic data production, and use the materialized and bisection producers to check each other (via expected output MD5s).
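The cross-checking idea reads roughly like the sketch below; the producer outputs are assumed to be iterables of NumPy arrays, and only the hash-and-compare pattern comes from the description above.

```python
# Assumed producer output: iterables of NumPy arrays produced in a fixed order.
import hashlib

import numpy as np


def batches_md5(batches):
  """Returns an MD5 digest over a deterministic stream of batches."""
  md5 = hashlib.md5()
  for batch in batches:
    md5.update(np.ascontiguousarray(batch).tobytes())
  return md5.hexdigest()


def check_producers_agree(materialized_batches, bisection_batches):
  if batches_md5(materialized_batches) != batches_md5(bisection_batches):
    raise ValueError("Producers disagree: data generation is not deterministic.")
```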
-
Taylor Robie authored
-