"git@developer.sourcefind.cn:lacacy/qwen_lmdeploy.git" did not exist on "902a3e16f937d4f142968dea265c9e03c8559bb8"
  1. 02 Mar, 2020 1 commit
  2. 28 Feb, 2020 1 commit
  3. 25 Feb, 2020 1 commit
  4. 21 Feb, 2020 2 commits
  5. 20 Feb, 2020 1 commit
  6. 13 Feb, 2020 1 commit
  7. 29 Jan, 2020 1 commit
  8. 27 Jan, 2020 1 commit
  9. 21 Jan, 2020 1 commit
  10. 17 Jan, 2020 1 commit
  11. 15 Dec, 2019 1 commit
  12. 14 Dec, 2019 2 commits
  13. 11 Dec, 2019 1 commit
  14. 06 Dec, 2019 1 commit
  15. 05 Dec, 2019 1 commit
  16. 04 Dec, 2019 1 commit
  17. 27 Nov, 2019 1 commit
  18. 25 Nov, 2019 1 commit
  19. 21 Nov, 2019 1 commit
  20. 19 Nov, 2019 1 commit
  21. 18 Nov, 2019 1 commit
  22. 11 Nov, 2019 1 commit
  23. 28 Oct, 2019 1 commit
  24. 21 Oct, 2019 1 commit
  25. 16 Oct, 2019 1 commit
    • Add support for the tf.keras.mixed_precision API in NCF · cb913691
      Authored by Reed Wanderman-Milne
      To test, I did 50 fp32 runs and 50 fp16 runs, using the following command:
      
      python ncf_keras_main.py --dataset=ml-20m --num_gpus=1 --train_epochs=10 --clean --batch_size=99000 --learning_rate=0.00382059 --beta1=0.783529 --beta2=0.909003 --epsilon=1.45439e-7 --layers=256,256,128,64 --num_factors=64 --hr_threshold=0.635 --ml_perf --nouse_synthetic_data --data_dir ~/ncf_data_dir_python3 --model_dir ~/tmp_model_dir --keras_use_ctl
      
      For the fp16 runs, I added --dtype=fp16. The average hit-rate for both fp16 and fp32 was 0.6365. I also did 50 runs with the mixed precision graph rewrite, and the average hit-rate was 0.6363. The difference is likely due to noise. (A sketch of enabling this API appears after the commit list below.)
      
      PiperOrigin-RevId: 275059871
  26. 10 Oct, 2019 1 commit
  27. 07 Oct, 2019 1 commit
  28. 24 Sep, 2019 1 commit
  29. 09 Sep, 2019 1 commit
  30. 04 Sep, 2019 1 commit
  31. 30 Aug, 2019 1 commit
  32. 26 Aug, 2019 1 commit
  33. 23 Aug, 2019 2 commits
  34. 21 Aug, 2019 1 commit
  35. 20 Aug, 2019 3 commits
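
A minimal sketch of what the tf.keras.mixed_precision API referenced in the 16 Oct, 2019 commit looks like in use. This is not the NCF code from ncf_keras_main.py: the two-layer model below is a stand-in, and the sketch assumes TF 2.4+, where the API lives directly under tf.keras.mixed_precision rather than the experimental namespace the commit predates. The Adam hyperparameters are copied from the command in the commit message.

    import numpy as np
    import tensorflow as tf

    # Compute in float16 while keeping variables in float32.
    tf.keras.mixed_precision.set_global_policy('mixed_float16')

    # Stand-in model; the real NCF model uses user/item embeddings.
    model = tf.keras.Sequential([
        tf.keras.Input(shape=(64,)),
        tf.keras.layers.Dense(256, activation='relu'),
        tf.keras.layers.Dense(64, activation='relu'),
        tf.keras.layers.Dense(1),
        # Keep the output in float32 so the loss stays numerically stable.
        tf.keras.layers.Activation('sigmoid', dtype='float32'),
    ])

    # Dynamic loss scaling keeps small fp16 gradients from underflowing.
    optimizer = tf.keras.mixed_precision.LossScaleOptimizer(
        tf.keras.optimizers.Adam(learning_rate=0.00382059, beta_1=0.783529,
                                 beta_2=0.909003, epsilon=1.45439e-7))

    model.compile(optimizer=optimizer, loss='binary_crossentropy')

    # Tiny random batch just to show the mixed-precision model trains.
    x = np.random.rand(256, 64).astype('float32')
    y = np.random.randint(0, 2, size=(256, 1)).astype('float32')
    model.fit(x, y, batch_size=64, epochs=1, verbose=0)

With the policy set, layer computations run in float16 while variables stay in float32; the LossScaleOptimizer wrapper is what keeps the fp16 runs in the commit's experiment numerically comparable to the fp32 baseline.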