    With float16, always use LossScaleOptimizer. · be3575f5
    Reed Wanderman-Milne authored
    Before, it was easy to accidentally forget to set runtime.loss_scale, which always had to be set when mixed precision was used; otherwise the model would converge to worse accuracy. Now, all that is needed to enable mixed precision is to set runtime.mixed_precision_dtype=float16.
    
    PiperOrigin-RevId: 383767033
performance.py 2.19 KB
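
    A minimal sketch of the behavior described in the commit message, using the public tf.keras.mixed_precision API. The helper name configure_optimizer and its signature are illustrative here, not a reproduction of the actual contents of performance.py:

    import tensorflow as tf

    def configure_optimizer(optimizer, mixed_precision_dtype=None):
      """Wraps the optimizer in a LossScaleOptimizer when float16 is used.

      Illustrative sketch; the real performance.py may differ in names and
      arguments.
      """
      if mixed_precision_dtype == 'float16':
        # LossScaleOptimizer uses dynamic loss scaling by default, so the
        # user no longer has to choose a loss_scale value themselves.
        optimizer = tf.keras.mixed_precision.LossScaleOptimizer(optimizer)
      return optimizer

    # Usage: setting the mixed-precision dtype is the only step required.
    tf.keras.mixed_precision.set_global_policy('mixed_float16')
    optimizer = configure_optimizer(tf.keras.optimizers.SGD(0.1),
                                    mixed_precision_dtype='float16')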