"vscode:/vscode.git/clone" did not exist on "919280aaa1a1085f246fe04f6f8ecc761df6b23a"
- 17 Oct, 2019 (3 commits)
-
Hongkun Yu authored
PiperOrigin-RevId: 275288636
-
Hongkun Yu authored
PiperOrigin-RevId: 275192365
-
Tyler authored
After Eager was moved into core TensorFlow, this notebook gives the error: AttributeError: module 'tensorflow.contrib.eager' has no attribute 'Variable'. This commit fixes it.
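A minimal sketch of the kind of change this implies, assuming the notebook created variables through tf.contrib.eager (the actual notebook diff is not shown in this log):

```python
# Illustrative only: tf.contrib (including tf.contrib.eager) was removed once
# eager execution became part of core TensorFlow, so variables are created
# with tf.Variable directly.
import tensorflow as tf

# Old TF 1.x spelling (no longer available):
#   import tensorflow.contrib.eager as tfe
#   w = tfe.Variable(5.0)

# Core TensorFlow spelling:
w = tf.Variable(5.0)
print(w.numpy())  # 5.0
```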
-
- 16 Oct, 2019 (5 commits)
-
A. Unique TensorFlower authored
PiperOrigin-RevId: 275142626
-
David Chen authored
PiperOrigin-RevId: 275103426
-
Yeqing Li authored
PiperOrigin-RevId: 275080469
-
Reed Wanderman-Milne authored
To test, I did 50 fp32 runs and 50 fp16 runs. I used the following command:

python ncf_keras_main.py --dataset=ml-20m --num_gpus=1 --train_epochs=10 --clean --batch_size=99000 --learning_rate=0.00382059 --beta1=0.783529 --beta2=0.909003 --epsilon=1.45439e-7 --layers=256,256,128,64 --num_factors=64 --hr_threshold=0.635 --ml_perf --nouse_synthetic_data --data_dir ~/ncf_data_dir_python3 --model_dir ~/tmp_model_dir --keras_use_ctl

For the fp16 runs, I added --dtype=fp16. The average hit-rate for both fp16 and fp32 was 0.6365. I also did 50 runs with the mixed precision graph rewrite, and the average hit-rate was 0.6363. The difference is likely due to noise.

PiperOrigin-RevId: 275059871
-
Yeqing Li authored
PiperOrigin-RevId: 274921478
-
- 15 Oct, 2019 (6 commits)
-
Yeqing Li authored
PiperOrigin-RevId: 274918820
-
Hongkun Yu authored
PiperOrigin-RevId: 274917111
-
Jing Li authored
Add an option to init checkpoint from a transformer-xl model. PiperOrigin-RevId: 274875006
-
Hongkun Yu authored
PiperOrigin-RevId: 274844449
-
Yeqing Li authored
PiperOrigin-RevId: 274807747
-
Jing Li authored
PiperOrigin-RevId: 274699918
-
- 14 Oct, 2019 (1 commit)
-
Yeqing Li authored
PiperOrigin-RevId: 274642627
-
- 13 Oct, 2019 (2 commits)
-
A. Unique TensorFlower authored
PiperOrigin-RevId: 274460885
-
Hongkun Yu authored
PiperOrigin-RevId: 274386468
-
- 12 Oct, 2019 (3 commits)
-
A. Unique TensorFlower authored
PiperOrigin-RevId: 274347990
-
Rajagopal Ananthanarayanan authored
PiperOrigin-RevId: 274281911
-
A. Unique TensorFlower authored
PiperOrigin-RevId: 274278626
-
- 11 Oct, 2019 (10 commits)
-
Hongkun Yu authored
This reverts commit b4e560dc.
-
Hongkun Yu authored
* Revert "Update tf.contrib.data to tf.data.experimental. (#7650)" This reverts commit faf4bbb3. * revert research
-
Derek Murray authored
-
Yeqing Li authored
PiperOrigin-RevId: 274241934
-
A. Unique TensorFlower authored
Change the summary directory and the model checkpoint directory so that training via Keras Compile/Fit() and training via the custom training loop are consistent. PiperOrigin-RevId: 274202793
-
Hongkun Yu authored
PiperOrigin-RevId: 274201399
-
Gideão Pelegrino de Abreu authored
-
Tao authored
-
Hongkun Yu authored
PiperOrigin-RevId: 274090672
-
Reed Wanderman-Milne authored
PiperOrigin-RevId: 274090348
-
- 10 Oct, 2019 (9 commits)
-
A. Unique TensorFlower authored
Change the benchmark's log verbosity to logging.INFO. It seems that DEBUG maps to --v=1 internally, which is way too verbose for the purpose of benchmarking. PiperOrigin-RevId: 274040907
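A minimal sketch of the described change, assuming the benchmark logs through absl.logging as the official models do (the exact call site is not shown in this log):

```python
# Cap benchmark logging at INFO so DEBUG/vlog-level output is suppressed.
from absl import logging

logging.set_verbosity(logging.INFO)

logging.info("visible at INFO verbosity")
logging.debug("suppressed at INFO verbosity")
```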
-
Yeqing Li authored
PiperOrigin-RevId: 274035928
-
A. Unique TensorFlower authored
PiperOrigin-RevId: 274028786
-
Yeqing Li authored
PiperOrigin-RevId: 274023277
-
Hongkun Yu authored
PiperOrigin-RevId: 274015143
-
Yeqing Li authored
PiperOrigin-RevId: 274010788
-
Navid Lambert-Shirzad authored
-
Hongkun Yu authored
PiperOrigin-RevId: 273966871
-
Hongkun Yu authored
PiperOrigin-RevId: 273861263
-
- 09 Oct, 2019 (1 commit)
-
Reed Wanderman-Milne authored
Instead of needing to ensure variables are float32, casting inputs to float32, etc., dtype="float32" is now passed to the layer constructor, which does all of that logic automatically. The only difference is that the output of LayerNorm is now float32 instead of float16, so an extra cast is needed elsewhere. PiperOrigin-RevId: 273833286
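A minimal sketch of this pattern, assuming the current tf.keras.mixed_precision API; the layers below are illustrative, not the model's actual code:

```python
# Under a mixed_float16 policy, a layer constructed with dtype="float32"
# keeps its variables and compute in float32 and auto-casts its inputs, so
# the manual float32 bookkeeping goes away; its float32 output may still
# need a cast back to float16 downstream.
import tensorflow as tf

tf.keras.mixed_precision.set_global_policy("mixed_float16")

dense = tf.keras.layers.Dense(16)                                 # float16 compute
layer_norm = tf.keras.layers.LayerNormalization(dtype="float32")  # float32 layer

x = tf.random.normal([2, 16])
h = dense(x)                # float16 output
h = layer_norm(h)           # input auto-cast to float32; output is float32
h = tf.cast(h, tf.float16)  # the "extra cast" mentioned above
print(h.dtype)              # <dtype: 'float16'>
```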
-