tf.keras.mixed_precision.LossScaleOptimizer(
    inner_optimizer, dynamic=True, initial_scale=None, dynamic_growth_steps=None
)
An optimizer that applies loss scaling to prevent numeric underflow.
Loss scaling is a technique to prevent numeric underflow in intermediate gradients when float16 is used. To prevent underflow, the loss is multiplied (or "scaled") by a certain factor called the "loss scale", which causes intermediate gradients to be scaled by the loss scale as well. The final gradients are divided (or "unscaled") by the loss scale to bring them back to their original value.
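For intuition, here is a minimal hand-rolled sketch of that round trip (the fixed loss_scale of 1024 is purely illustrative; LossScaleOptimizer chooses and updates the scale for you):
>>> import tensorflow as tf
>>> loss_scale = 1024.0  # illustrative value only
>>> var = tf.Variable(1.)
>>> with tf.GradientTape() as tape:
...   loss = var ** 2
...   scaled_loss = loss * loss_scale  # scale the loss up
>>> scaled_grad = tape.gradient(scaled_loss, var)  # intermediate gradient is scaled too
>>> grad = scaled_grad / loss_scale  # unscale to recover the true gradient
>>> grad.numpy()
2.0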
LossScaleOptimizer wraps another optimizer and applies loss scaling to it.
By default, the loss scale is dynamically updated over time so you do not have to choose the loss scale. The minimize method automatically scales the loss, unscales the gradients, and updates the loss scale, so all you have to do is wrap your optimizer with a LossScaleOptimizer if you use minimize. For example:
>>> opt = tf.keras.optimizers.SGD(0.25)
>>> opt = tf.keras.mixed_precision.LossScaleOptimizer(opt)
>>> var = tf.Variable(1.)
>>> loss_fn = lambda: var ** 2
>>> # 'minimize' applies loss scaling and updates the loss scale.
>>> opt.minimize(loss_fn, var_list=var)
>>> var.numpy()
0.5
If a tf.GradientTape is used to compute gradients instead of minimize, you must scale the loss and gradients manually. This can be done with the LossScaleOptimizer.get_scaled_loss and LossScaleOptimizer.get_unscaled_gradients methods. For example:
>>> with tf.GradientTape() as tape:
...   loss = loss_fn()
...   scaled_loss = opt.get_scaled_loss(loss)
>>> scaled_grad = tape.gradient(scaled_loss, var)
>>> (grad,) = opt.get_unscaled_gradients([scaled_grad])
>>> opt.apply_gradients([(grad, var)])  # Loss scale is updated here
>>> var.numpy()
0.25
Warning: If you forget to call get_scaled_loss or get_unscaled_gradients (or both) when using a tf.GradientTape, the model will likely converge to a worse quality. Please make sure you call each function exactly once.
When mixed precision with float16 is used, there is typically no risk of underflow affecting model quality if loss scaling is properly used. See the mixed precision guide for more information on how to use mixed precision.
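For example, with the 'mixed_float16' global policy, Keras's Model.compile wraps the optimizer in a LossScaleOptimizer automatically, so explicit wrapping is typically only needed in custom training loops. A minimal sketch (the tiny model is illustrative only):
>>> tf.keras.mixed_precision.set_global_policy('mixed_float16')
>>> model = tf.keras.Sequential([tf.keras.Input(shape=(4,)), tf.keras.layers.Dense(1)])
>>> model.compile(optimizer=tf.keras.optimizers.SGD(0.1), loss='mse')
>>> isinstance(model.optimizer, tf.keras.mixed_precision.LossScaleOptimizer)
True
>>> tf.keras.mixed_precision.set_global_policy('float32')  # restore the default policy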
Args:
    inner_optimizer: The tf.keras.optimizers.Optimizer instance to wrap.
    dynamic: Bool indicating whether dynamic loss scaling is used. Defaults to True. If False, a single fixed loss scale is used and initial_scale must be specified, which is used as the loss scale. Recommended to keep as True, as choosing a fixed loss scale can be tricky. Currently, there is a small performance overhead to dynamic loss scaling compared to fixed loss scaling. (A usage sketch for the fixed-scale case follows this list.)
    initial_scale: The initial loss scale. If dynamic is True, this defaults to 2 ** 15. If dynamic is False, this must be specified and acts as the sole loss scale, as the loss scale does not change over time. When dynamic loss scaling is used, it is better for this to be a very high number, because a loss scale that is too high gets lowered far more quickly than a loss scale that is too low gets raised.
    dynamic_growth_steps: With dynamic loss scaling, every dynamic_growth_steps steps with finite gradients, the loss scale is doubled. Defaults to 2000. If a nonfinite gradient is encountered, the count is reset back to zero, gradients are skipped that step, and the loss scale is halved. The count can be queried with LossScaleOptimizer.dynamic_counter. This argument can only be specified if dynamic is True.
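As a sketch of the non-default, fixed-scale configuration (the scale of 1024 is illustrative only):
>>> opt = tf.keras.mixed_precision.LossScaleOptimizer(
...     tf.keras.optimizers.SGD(0.25), dynamic=False, initial_scale=1024)
>>> opt.dynamic
False
>>> opt.loss_scale.numpy()
1024.0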
LossScaleOptimizer will occasionally skip applying gradients to the
variables, in which case the trainable variables will not change that step.
This is done because the dynamic loss scale will sometimes be raised too
high, causing overflow in the gradients. Typically, the first 2 to 15 steps of
the model are skipped as the initial loss scale is very high, but afterwards
steps will only be skipped on average 0.05% of the time (the fraction of steps
skipped is 1 / dynamic_growth_steps).
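The current scale and step count can be inspected through the loss_scale and dynamic_counter properties. The sketch below uses an artificially small dynamic_growth_steps so the doubling is visible after two finite-gradient steps:
>>> opt = tf.keras.mixed_precision.LossScaleOptimizer(
...     tf.keras.optimizers.SGD(1.0), dynamic_growth_steps=2)
>>> var = tf.Variable(1.)
>>> loss_fn = lambda: var ** 2
>>> opt.loss_scale.numpy()
32768.0
>>> opt.minimize(loss_fn, var_list=var)  # first finite-gradient step
>>> opt.dynamic_counter.numpy()
1
>>> opt.minimize(loss_fn, var_list=var)  # second finite-gradient step: scale doubles
>>> opt.loss_scale.numpy()
65536.0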
LossScaleOptimizer delegates all public Optimizer methods to the inner optimizer. Additionally, in the methods minimize and get_gradients, it scales the loss and unscales the gradients. In the methods minimize and apply_gradients, it additionally updates the loss scale and skips applying gradients if any gradient has a nonfinite value.
Hyperparameters can be accessed and set on the LossScaleOptimizer, which will be delegated to the wrapped optimizer.
>>> opt = tf.keras.optimizers.Adam(beta_1=0.8, epsilon=1e-5)
>>> opt = tf.keras.mixed_precision.LossScaleOptimizer(opt)
>>> opt.beta_1  # Equivalent to `opt.inner_optimizer.beta_1`
0.8
>>> opt.beta_1 = 0.7  # Equivalent to `opt.inner_optimizer.beta_1 = 0.7`
>>> opt.beta_1
0.7
>>> opt.inner_optimizer.beta_1
0.7
However, accessing or setting non-hyperparameters is not delegated to the LossScaleOptimizer. In an Adam optimizer, beta_1 is a hyperparameter but epsilon is not, as the Adam optimizer only calls Optimizer._set_hyper on beta_1:
>>> opt.inner_optimizer.epsilon
1e-05
>>> opt.epsilon
Traceback (most recent call last):
...
AttributeError: 'LossScaleOptimizer' object has no attribute 'epsilon'
>>> opt.epsilon = 1e-4  # This does NOT set epsilon on `opt.inner_optimizer`
>>> opt.inner_optimizer.epsilon
1e-05
In the above example, despite epsilon being set on the LossScaleOptimizer, the old epsilon value will still be used when training as epsilon was not set on the inner optimizer.
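To change such a non-hyperparameter, set it on the inner optimizer directly; continuing the example above:
>>> opt.inner_optimizer.epsilon = 1e-4
>>> opt.inner_optimizer.epsilon
0.0001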