"src/diffusers/schedulers/scheduling_ddim_inverse.py" did not exist on "bd8df2da89d99f630e5aa2ddb8f8cb45456561f1"
  1. 09 Apr, 2021 1 commit
    • Remove dynamic_loss_scale argument to define_performance. · e353e4e5
      Reed Wanderman-Milne authored
      All models which support loss scaling support dynamic loss scaling, so the argument has no purpose. It used to be that some models scaled the loss manually instead of using a LossScaleOptimizer, and so did not support dynamic loss scaling.
      
      PiperOrigin-RevId: 367719521
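
      A minimal sketch of the distinction described above, assuming a toy model and optimizer (both placeholders, not taken from this repository): wrapping the optimizer in tf.keras.mixed_precision.LossScaleOptimizer provides dynamic loss scaling automatically, which is what made a separate dynamic_loss_scale argument unnecessary.

      import tensorflow as tf

      # Placeholder model and optimizer, for illustration only.
      model = tf.keras.Sequential([tf.keras.layers.Dense(10)])
      opt = tf.keras.optimizers.SGD()

      # The wrapper maintains and adjusts the loss scale dynamically,
      # so models no longer need to scale the loss by hand.
      opt = tf.keras.mixed_precision.LossScaleOptimizer(opt)

      @tf.function
      def train_step(x, y):
          with tf.GradientTape() as tape:
              loss = tf.reduce_mean(tf.keras.losses.mse(y, model(x)))
              scaled_loss = opt.get_scaled_loss(loss)        # scale up before backprop
          scaled_grads = tape.gradient(scaled_loss, model.trainable_variables)
          grads = opt.get_unscaled_gradients(scaled_grads)   # undo the scaling
          opt.apply_gradients(zip(grads, model.trainable_variables))
          return loss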
  2. 06 Apr, 2021 1 commit
  3. 28 Feb, 2021 2 commits
  4. 12 Aug, 2020 2 commits
  5. 05 Mar, 2020 1 commit
  6. 25 Feb, 2020 1 commit
  7. 28 Oct, 2019 1 commit
  8. 16 Oct, 2019 1 commit
    • Add support for the tf.keras.mixed_precision API in NCF · cb913691
      Reed Wanderman-Milne authored
      To test, I did 50 fp32 runs and 50 fp16 runs. I used the following command:
      
      python ncf_keras_main.py --dataset=ml-20m --num_gpus=1 --train_epochs=10 --clean --batch_size=99000 --learning_rate=0.00382059 --beta1=0.783529 --beta2=0.909003 --epsilon=1.45439e-7 --layers=256,256,128,64 --num_factors=64 --hr_threshold=0.635 --ml_perf --nouse_synthetic_data --data_dir ~/ncf_data_dir_python3 --model_dir ~/tmp_model_dir --keras_use_ctl
      
      For the fp16 runs, I added --dtype=fp16. The average hit-rate for both fp16 and fp32 was 0.6365. I also did 50 runs with the mixed precision graph rewrite, and the average hit-rate was 0.6363. The difference is likely due to noise.
      
      PiperOrigin-RevId: 275059871
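
      As a rough sketch of what enabling the tf.keras.mixed_precision API looks like in a Keras model (using today's non-experimental calls; at the time of this commit they lived under mixed_precision.experimental), with a toy two-layer network standing in for NCF:

      import tensorflow as tf

      # Compute in float16 while keeping variables in float32.
      tf.keras.mixed_precision.set_global_policy('mixed_float16')

      inputs = tf.keras.Input(shape=(64,))
      x = tf.keras.layers.Dense(256, activation='relu')(inputs)
      # Keep the final activation in float32 for numerical stability.
      outputs = tf.keras.layers.Dense(1, activation='sigmoid', dtype='float32')(x)
      model = tf.keras.Model(inputs, outputs)

      # With the mixed_float16 policy active, compile() wraps the optimizer
      # in a LossScaleOptimizer automatically.
      model.compile(optimizer='adam', loss='binary_crossentropy')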
  9. 07 Oct, 2019 1 commit
  10. 30 Aug, 2019 1 commit
  11. 26 Aug, 2019 1 commit
  12. 23 Aug, 2019 1 commit
  13. 20 Aug, 2019 2 commits
  14. 19 Aug, 2019 1 commit
    • Do not expose --max_train_steps in models that do not use it. · 824ff2d6
      Reed Wanderman-Milne authored
      Only the V1 resnet model uses --max_train_steps. This change stops exposing the flag in the keras_application_models, mnist, keras resnet, and CTL resnet models. Before this change, those models allowed the flag to be specified but ignored it.
      
      I also removed the "max_train" argument from the run_synthetic function, since it only had meaning for the V1 resnet model. Instead, the V1 resnet model now directly passes --max_train_steps=1 to run_synthetic.
      
      PiperOrigin-RevId: 264269836
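
      The pattern being described, sketched with a hypothetical helper (define_train_flags is not the repository's real function; the real definitions live in the official/utils/flags helpers): a flag is only defined for models that explicitly ask for it, so other models reject --max_train_steps instead of silently ignoring it.

      from absl import app, flags

      FLAGS = flags.FLAGS

      def define_train_flags(expose_max_train_steps=False):
          # Hypothetical helper: define the flag only for models that consume it.
          if expose_max_train_steps:
              flags.DEFINE_integer(
                  'max_train_steps', None,
                  'If set, stop training after this many steps.')

      def main(_):
          # Models that did not ask for the flag simply do not have it.
          if 'max_train_steps' in FLAGS:
              print('max_train_steps:', FLAGS.max_train_steps)

      if __name__ == '__main__':
          define_train_flags(expose_max_train_steps=True)
          app.run(main)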
  15. 06 Aug, 2019 1 commit
  16. 23 Jul, 2019 1 commit
  17. 19 Jun, 2019 1 commit
    • Add XLA to transformer (#7048) · 269581dc
      Toby Boyd authored
      
      * set default steps to 300K.
      
      * Log flags to perfzero.
      
      * Add XLA support to transformer
      
      - Moved config logic to keras_utils
      - Added enable_xla flag to _performance flags
      - Did not refactor the enable_xla flag out of keras resnet, because it
        relies on reading FLAGS in estimator keras; that refactor is left
        for another time.
      
      * fix g3 lint complaint.
      
      * Refactor set config into keras_utils.
      
      * Move flags out of main.
      
      * pipe through enable_xla
      
      * Update official/transformer/v2/misc.py
      Co-Authored-By: Reed <reedwm@google.com>
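
      A minimal sketch of what gating XLA behind a performance flag can look like (the helper name here is illustrative, not necessarily the one added to keras_utils):

      import tensorflow as tf
      from absl import flags

      flags.DEFINE_boolean('enable_xla', False, 'Whether to enable XLA JIT compilation.')

      def set_performance_config(enable_xla=False):
          # Illustrative helper: a single performance flag flips on XLA JIT
          # compilation for the whole program.
          if enable_xla:
              tf.config.optimizer.set_jit(True)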
  18. 06 Jun, 2019 1 commit
  19. 15 May, 2019 1 commit
  20. 11 May, 2019 1 commit
  21. 01 May, 2019 1 commit
    • Add --fp16_implementation option. (#6703) · b691578c
      Reed authored
      This option allows the new tf.train.experimental.enable_mixed_precision_graph_rewrite() function to be used for fp16, instead of manual casts.
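
      A sketch of the two paths such a flag can select between, written against TF 2.4-era APIs (the graph-rewrite function has since been deprecated and removed, and the flag value names below are assumptions, not the repository's exact choices):

      import tensorflow as tf

      def build_optimizer(fp16_implementation='keras'):
          opt = tf.keras.optimizers.SGD()
          if fp16_implementation == 'graph_rewrite':
              # The graph rewrite inserts fp16 casts and a dynamic loss scale itself.
              opt = tf.train.experimental.enable_mixed_precision_graph_rewrite(opt)
          else:
              # Otherwise the model does its own casts and uses a loss-scale wrapper.
              opt = tf.keras.mixed_precision.LossScaleOptimizer(opt)
          return opt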
  22. 26 Apr, 2019 1 commit
  23. 03 Apr, 2019 1 commit
  24. 20 Mar, 2019 1 commit
  25. 07 Mar, 2019 1 commit
  26. 13 Oct, 2018 1 commit
  27. 12 Oct, 2018 1 commit
  28. 12 Jun, 2018 1 commit
    • Transformer multi gpu, remove multi_gpu flag, distribution helper functions (#4457) · 29c9f985
      Katherine Wu authored
      * Add DistributionStrategy to transformer model
      
      * add num_gpu flag
      
      * Calculate per device batch size for transformer
      
      * remove reference to flags_core
      
      * Add synthetic data option to transformer
      
      * fix typo
      
      * add import back in
      
      * Use hierarchical copy
      
      * address PR comments
      
      * lint
      
      * fix spaces
      
      * group train op together to fix single GPU error
      
      * Fix translate bug (sorted_keys is a dict, not a list)
      
      * Change params to a default dict (translate.py was throwing errors because params didn't have the TPU parameters.)
      
      * Address PR comments. Removed multi gpu flag + more
      
      * fix lint
      
      * fix more lints
      
      * add todo for Synthetic dataset
      
      * Update docs
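
      A rough sketch of the distribution pattern this commit describes, using today's tf.distribute API rather than the tf.contrib.distribute API of that era; the helper name and batch size are placeholders:

      import tensorflow as tf

      def make_strategy(global_batch_size, num_gpus=1):
          # Derive devices from a num_gpus flag instead of a separate multi_gpu flag.
          devices = ['/gpu:%d' % i for i in range(num_gpus)] if num_gpus else ['/cpu:0']
          strategy = tf.distribute.MirroredStrategy(
              devices=devices,
              cross_device_ops=tf.distribute.HierarchicalCopyAllReduce())
          if global_batch_size % strategy.num_replicas_in_sync:
              raise ValueError('Batch size must divide evenly across replicas.')
          # Per-device batch size, as computed for the transformer model.
          per_replica_batch_size = global_batch_size // strategy.num_replicas_in_sync
          return strategy, per_replica_batch_size

      strategy, per_replica_batch_size = make_strategy(4096, num_gpus=1)
      with strategy.scope():
          model = tf.keras.Sequential([tf.keras.layers.Dense(64)])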
  29. 03 May, 2018 1 commit