- 16 Oct, 2019 5 commits
-
-
A. Unique TensorFlower authored
PiperOrigin-RevId: 275142626
-
David Chen authored
PiperOrigin-RevId: 275103426
-
Yeqing Li authored
PiperOrigin-RevId: 275080469
-
Reed Wanderman-Milne authored
To test, I did 50 fp32 runs and 50 fp16 runs. I used the following command:
python ncf_keras_main.py --dataset=ml-20m --num_gpus=1 --train_epochs=10 --clean --batch_size=99000 --learning_rate=0.00382059 --beta1=0.783529 --beta2=0.909003 --epsilon=1.45439e-7 --layers=256,256,128,64 --num_factors=64 --hr_threshold=0.635 --ml_perf --nouse_synthetic_data --data_dir ~/ncf_data_dir_python3 --model_dir ~/tmp_model_dir --keras_use_ctl
For the fp16 runs, I added --dtype=fp16. The average hit-rate for both fp16 and fp32 was 0.6365. I also did 50 runs with the mixed precision graph rewrite, and the average hit-rate was 0.6363. The difference is likely due to noise.
PiperOrigin-RevId: 275059871
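For comparison, a hedged sketch of the "mixed precision graph rewrite" variant mentioned above, using the TF 2.0-era tf.train.experimental API rather than the --dtype=fp16 flag; the optimizer choice and learning rate simply mirror the command's values and are not taken from the model code.

```python
import tensorflow as tf

# Wrap the optimizer so the graph rewrite casts eligible ops to float16
# and applies loss scaling automatically (TF 2.0-era API).
opt = tf.keras.optimizers.Adam(learning_rate=0.00382059)
opt = tf.train.experimental.enable_mixed_precision_graph_rewrite(opt)
```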
-
Yeqing Li authored
PiperOrigin-RevId: 274921478
-
- 15 Oct, 2019 6 commits
-
-
Yeqing Li authored
PiperOrigin-RevId: 274918820
-
Hongkun Yu authored
PiperOrigin-RevId: 274917111
-
Jing Li authored
Add option to init checkpoint from transformer-xl model. PiperOrigin-RevId: 274875006
-
Hongkun Yu authored
PiperOrigin-RevId: 274844449
-
Yeqing Li authored
PiperOrigin-RevId: 274807747
-
Jing Li authored
PiperOrigin-RevId: 274699918
-
- 14 Oct, 2019 1 commit
-
-
Yeqing Li authored
PiperOrigin-RevId: 274642627
-
- 13 Oct, 2019 2 commits
-
-
A. Unique TensorFlower authored
PiperOrigin-RevId: 274460885
-
Hongkun Yu authored
PiperOrigin-RevId: 274386468
-
- 12 Oct, 2019 3 commits
-
-
A. Unique TensorFlower authored
PiperOrigin-RevId: 274347990
-
Rajagopal Ananthanarayanan authored
PiperOrigin-RevId: 274281911
-
A. Unique TensorFlower authored
PiperOrigin-RevId: 274278626
-
- 11 Oct, 2019 10 commits
-
-
Hongkun Yu authored
This reverts commit b4e560dc.
-
Hongkun Yu authored
* Revert "Update tf.contrib.data to tf.data.experimental. (#7650)" This reverts commit faf4bbb3. * revert research
-
Derek Murray authored
-
Yeqing Li authored
PiperOrigin-RevId: 274241934
-
A. Unique TensorFlower authored
Change summary directory and model checkpoint directory so that training via Keras Compile/Fit() and custom training loop is consistent. PiperOrigin-RevId: 274202793
-
Hongkun Yu authored
PiperOrigin-RevId: 274201399
-
Gideão Pelegrino de Abreu authored
-
Tao authored
-
Hongkun Yu authored
PiperOrigin-RevId: 274090672
-
Reed Wanderman-Milne authored
PiperOrigin-RevId: 274090348
-
- 10 Oct, 2019 9 commits
-
-
A. Unique TensorFlower authored
Change the benchmark's log verbosity to logging.INFO. It seems that DEBUG maps to --v=1 internally, which is far too verbose for the purpose of benchmarking. PiperOrigin-RevId: 274040907
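A minimal sketch of the verbosity change described above, assuming the benchmark uses absl logging (the specific call site is not shown in this log):

```python
from absl import logging

# DEBUG is the most verbose level; INFO keeps benchmark output readable.
logging.set_verbosity(logging.INFO)
```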
-
Yeqing Li authored
PiperOrigin-RevId: 274035928
-
A. Unique TensorFlower authored
PiperOrigin-RevId: 274028786
-
Yeqing Li authored
PiperOrigin-RevId: 274023277
-
Hongkun Yu authored
PiperOrigin-RevId: 274015143
-
Yeqing Li authored
PiperOrigin-RevId: 274010788
-
Navid Lambert-Shirzad authored
-
Hongkun Yu authored
PiperOrigin-RevId: 273966871
-
Hongkun Yu authored
PiperOrigin-RevId: 273861263
-
- 09 Oct, 2019 4 commits
-
-
Reed Wanderman-Milne authored
Instead of needing to ensure variables are float32, cast inputs to float32, etc., dtype="float32" is now passed to the layer constructor, which handles all of that logic automatically. The only difference is that the output of LayerNorm is now float32 instead of float16, so an extra cast is needed elsewhere. PiperOrigin-RevId: 273833286
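A minimal sketch of the pattern described above, using tf.keras.layers.LayerNormalization as a stand-in for the model's own layer norm (the actual layer and surrounding model code are not shown in this log):

```python
import tensorflow as tf

# Passing dtype="float32" keeps the layer's variables and computation in
# float32 even when the surrounding model runs in float16.
layer_norm = tf.keras.layers.LayerNormalization(dtype="float32")

x = tf.random.uniform([8, 128], dtype=tf.float16)
y = layer_norm(tf.cast(x, tf.float32))  # output is float32
y = tf.cast(y, tf.float16)              # the extra cast mentioned above
```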
-
Pooya Davoodi authored
* Updating Python API to use the CombinedNonMaxSuppression TF operator.
  1. Adds a unit test for the post_processing Python API.
  2. Currently sets clip_window to None, as the kernel uses the default clip_window of [0, 0, 1, 1].
  3. Added use_static_shapes to the API. In the old API, if use_static_shapes is true, outputs are padded/clipped to max_total_size if specified; if not specified, they are padded to num_classes*max_size_per_class. If use_static_shapes is false, outputs are always padded/clipped to max_total_size.
  Update unit test to account for clipped bounding boxes. Changed the name to CombinedNonMaxSuppression based on feedback from Google. Added additional parameters to the combinedNMS Python function; they are currently unused and required for networks like FasterRCNN and MaskRCNN.
* Delete selected_indices from API, because it was removed from CombinedNMS recently in the PR.
* Improve doc of function combined_non_max_suppression.
* Enable CombinedNonMaxSuppression for first_stage_nms.
* Fix bug.
* Ensure agnostic_nms is not used with combined_nms; remove redundant arguments from combined_nms.
* Fix pylint.
* Add checks for unsupported args.
* Fix pylint.
* Move combined_non_max_suppression to batch_multiclass_non_max_suppression. Also rename combined_nms to use_combined_nms.
* Delete combined_nms for first_stage_nms because it does not work.
* Revert "Delete combined_nms for first_stage_nms because it does not work". This reverts commit 2a3cc5145f17cee630a67ddedd20e90c2920fa9f.
* Use nmsed_additional_fields.get to avoid error.
* Merge combined_non_max_suppression with main NMS function.
* Rename combined_nms for first stage NMS.
* Improve docs.
* Use assertListEqual for numpy arrays.
* Fix pylint errors.
* End comments with period.
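For reference, a hedged sketch of the underlying TF operator this change wires up, tf.image.combined_non_max_suppression; the tensor shapes and thresholds below are illustrative, not taken from the detection API code:

```python
import tensorflow as tf

# boxes: [batch, num_boxes, q, 4] with q=1 for class-agnostic boxes;
# scores: [batch, num_boxes, num_classes].
boxes = tf.random.uniform([2, 100, 1, 4])
scores = tf.random.uniform([2, 100, 3])

nmsed_boxes, nmsed_scores, nmsed_classes, valid_detections = (
    tf.image.combined_non_max_suppression(
        boxes=boxes,
        scores=scores,
        max_output_size_per_class=10,
        max_total_size=20,
        iou_threshold=0.5,
        score_threshold=0.05,
        clip_boxes=True))  # boxes are clipped to the default [0, 0, 1, 1] window
```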
-
A. Unique TensorFlower authored
PiperOrigin-RevId: 273795511
-
A. Unique TensorFlower authored
PiperOrigin-RevId: 273653001
-