MeanSquaredError
keras.losses.MeanSquaredError(
reduction="sum_over_batch_size", name="mean_squared_error", dtype=None
)
Computes the mean of squares of errors between labels and predictions.
Formula:
loss = mean(square(y_true - y_pred))
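Example
A minimal standalone-usage sketch (not from the original docs; the input values are illustrative):
>>> import numpy as np
>>> import keras
>>> y_true = np.array([[0., 1.], [0., 0.]])
>>> y_pred = np.array([[1., 1.], [1., 0.]])
>>> mse = keras.losses.MeanSquaredError()
>>> loss = mse(y_true, y_pred)  # 0.5 under the default "sum_over_batch_size" reduction
The same instance can also be passed directly to model.compile(loss=keras.losses.MeanSquaredError()).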
Arguments
reduction: Type of reduction to apply to the loss. In almost all cases this should be "sum_over_batch_size". Supported options are "sum", "sum_over_batch_size", "mean", "mean_with_sample_weight" or None. "sum" sums the loss, "sum_over_batch_size" and "mean" sum the loss and divide by the sample size, and "mean_with_sample_weight" sums the loss and divides by the sum of the sample weights. "none" and None perform no aggregation. Defaults to "sum_over_batch_size".
name: Optional name for the loss instance.
dtype: The dtype of the loss's computations. Defaults to None, which means using keras.backend.floatx(). keras.backend.floatx() is a "float32" unless set to a different value (via keras.backend.set_floatx()). If a keras.DTypePolicy is provided, then the compute_dtype will be utilized.
MeanAbsoluteError
keras.losses.MeanAbsoluteError(
reduction="sum_over_batch_size", name="mean_absolute_error", dtype=None
)
Computes the mean of absolute difference between labels and predictions.
Formula:
loss = mean(abs(y_true - y_pred))
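Example
A minimal standalone-usage sketch (not from the original docs; the input values are illustrative):
>>> import numpy as np
>>> import keras
>>> y_true = np.array([[1., 2.], [3., 4.]])
>>> y_pred = np.array([[1., 3.], [5., 4.]])
>>> mae = keras.losses.MeanAbsoluteError()
>>> loss = mae(y_true, y_pred)  # (0.5 + 1.0) / 2 = 0.75 under the default reduction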
Arguments
reduction: Type of reduction to apply to the loss. In almost all cases this should be "sum_over_batch_size". Supported options are "sum", "sum_over_batch_size", "mean", "mean_with_sample_weight" or None. "sum" sums the loss, "sum_over_batch_size" and "mean" sum the loss and divide by the sample size, and "mean_with_sample_weight" sums the loss and divides by the sum of the sample weights. "none" and None perform no aggregation. Defaults to "sum_over_batch_size".
name: Optional name for the loss instance.
dtype: The dtype of the loss's computations. Defaults to None, which means using keras.backend.floatx(). keras.backend.floatx() is a "float32" unless set to a different value (via keras.backend.set_floatx()). If a keras.DTypePolicy is provided, then the compute_dtype will be utilized.
MeanAbsolutePercentageError
keras.losses.MeanAbsolutePercentageError(
reduction="sum_over_batch_size", name="mean_absolute_percentage_error", dtype=None
)
Computes the mean absolute percentage error between y_true
& y_pred
.
Formula:
loss = 100 * mean(abs((y_true - y_pred) / y_true))
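Example
A minimal standalone-usage sketch (not from the original docs; the values are illustrative and keep y_true away from zero to avoid the epsilon clamp described for mean_absolute_percentage_error below):
>>> import numpy as np
>>> import keras
>>> y_true = np.array([[2., 4.]])
>>> y_pred = np.array([[1., 5.]])
>>> mape = keras.losses.MeanAbsolutePercentageError()
>>> loss = mape(y_true, y_pred)  # 100 * mean([0.5, 0.25]) = 37.5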
Arguments
reduction: Type of reduction to apply to the loss. In almost all cases this should be "sum_over_batch_size". Supported options are "sum", "sum_over_batch_size", "mean", "mean_with_sample_weight" or None. "sum" sums the loss, "sum_over_batch_size" and "mean" sum the loss and divide by the sample size, and "mean_with_sample_weight" sums the loss and divides by the sum of the sample weights. "none" and None perform no aggregation. Defaults to "sum_over_batch_size".
name: Optional name for the loss instance.
dtype: The dtype of the loss's computations. Defaults to None, which means using keras.backend.floatx(). keras.backend.floatx() is a "float32" unless set to a different value (via keras.backend.set_floatx()). If a keras.DTypePolicy is provided, then the compute_dtype will be utilized.
MeanSquaredLogarithmicError
keras.losses.MeanSquaredLogarithmicError(
reduction="sum_over_batch_size", name="mean_squared_logarithmic_error", dtype=None
)
Computes the mean squared logarithmic error between y_true
& y_pred
.
Formula:
loss = mean(square(log(y_true + 1) - log(y_pred + 1)))
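Example
A minimal standalone-usage sketch (not from the original docs; the input values are illustrative and strictly positive):
>>> import numpy as np
>>> import keras
>>> y_true = np.array([[1., 3.]])
>>> y_pred = np.array([[1., 1.]])
>>> msle = keras.losses.MeanSquaredLogarithmicError()
>>> loss = msle(y_true, y_pred)  # mean([0., (log(4) - log(2))**2]) ≈ 0.24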
Arguments
reduction: Type of reduction to apply to the loss. In almost all cases this should be "sum_over_batch_size". Supported options are "sum", "sum_over_batch_size", "mean", "mean_with_sample_weight" or None. "sum" sums the loss, "sum_over_batch_size" and "mean" sum the loss and divide by the sample size, and "mean_with_sample_weight" sums the loss and divides by the sum of the sample weights. "none" and None perform no aggregation. Defaults to "sum_over_batch_size".
name: Optional name for the loss instance.
dtype: The dtype of the loss's computations. Defaults to None, which means using keras.backend.floatx(). keras.backend.floatx() is a "float32" unless set to a different value (via keras.backend.set_floatx()). If a keras.DTypePolicy is provided, then the compute_dtype will be utilized.
CosineSimilarity
keras.losses.CosineSimilarity(
axis=-1, reduction="sum_over_batch_size", name="cosine_similarity", dtype=None
)
Computes the cosine similarity between y_true
& y_pred
.
Note that it is a number between -1 and 1. When it is a negative number
between -1 and 0, 0 indicates orthogonality and values closer to -1
indicate greater similarity. This makes it usable as a loss function in a
setting where you try to maximize the proximity between predictions and
targets. If either y_true
or y_pred
is a zero vector, cosine similarity
will be 0 regardless of the proximity between predictions and targets.
Formula:
loss = -sum(l2_norm(y_true) * l2_norm(y_pred))
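Example
A minimal standalone-usage sketch (not from the original docs; illustrative values): the loss is the negated cosine similarity, so lower values mean more similar vectors.
>>> import numpy as np
>>> import keras
>>> y_true = np.array([[0., 1.]])
>>> y_pred = np.array([[1., 1.]])
>>> cosine_loss = keras.losses.CosineSimilarity(axis=-1)
>>> loss = cosine_loss(y_true, y_pred)  # approximately -0.71 (cosine of 45 degrees, negated)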
Arguments
axis: The axis along which the cosine similarity is computed (the features axis). Defaults to -1.
reduction: Type of reduction to apply to the loss. In almost all cases this should be "sum_over_batch_size". Supported options are "sum", "sum_over_batch_size", "mean", "mean_with_sample_weight" or None. "sum" sums the loss, "sum_over_batch_size" and "mean" sum the loss and divide by the sample size, and "mean_with_sample_weight" sums the loss and divides by the sum of the sample weights. "none" and None perform no aggregation. Defaults to "sum_over_batch_size".
name: Optional name for the loss instance.
dtype: The dtype of the loss's computations. Defaults to None, which means using keras.backend.floatx(). keras.backend.floatx() is a "float32" unless set to a different value (via keras.backend.set_floatx()). If a keras.DTypePolicy is provided, then the compute_dtype will be utilized.
Huber
keras.losses.Huber(
delta=1.0, reduction="sum_over_batch_size", name="huber_loss", dtype=None
)
Computes the Huber loss between y_true
& y_pred
.
Formula:
for x in error:
if abs(x) <= delta:
loss.append(0.5 * x^2)
elif abs(x) > delta:
loss.append(delta * abs(x) - 0.5 * delta^2)
loss = mean(loss, axis=-1)
See: Huber loss.
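Example
A minimal standalone-usage sketch, reusing the illustrative values from the huber function example further below:
>>> import numpy as np
>>> import keras
>>> y_true = np.array([[0., 1.], [0., 0.]])
>>> y_pred = np.array([[0.6, 0.4], [0.4, 0.6]])
>>> h = keras.losses.Huber(delta=1.0)
>>> loss = h(y_true, y_pred)  # all errors fall below delta, so the result is 0.5 * mean(error**2) = 0.155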
Arguments
delta: A float, the point where the Huber loss function changes from a quadratic to linear.
reduction: Type of reduction to apply to the loss. In almost all cases this should be "sum_over_batch_size". Supported options are "sum", "sum_over_batch_size", "mean", "mean_with_sample_weight" or None. "sum" sums the loss, "sum_over_batch_size" and "mean" sum the loss and divide by the sample size, and "mean_with_sample_weight" sums the loss and divides by the sum of the sample weights. "none" and None perform no aggregation. Defaults to "sum_over_batch_size".
name: Optional name for the loss instance.
dtype: The dtype of the loss's computations. Defaults to None, which means using keras.backend.floatx(). keras.backend.floatx() is a "float32" unless set to a different value (via keras.backend.set_floatx()). If a keras.DTypePolicy is provided, then the compute_dtype will be utilized.
LogCosh
keras.losses.LogCosh(reduction="sum_over_batch_size", name="log_cosh", dtype=None)
Computes the logarithm of the hyperbolic cosine of the prediction error.
Formula:
error = y_pred - y_true
logcosh = mean(log((exp(error) + exp(-error))/2), axis=-1)
where error is the prediction error y_pred - y_true.
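Example
A minimal standalone-usage sketch, reusing the illustrative values from the log_cosh function example further below:
>>> import numpy as np
>>> import keras
>>> y_true = np.array([[0., 1.], [0., 0.]])
>>> y_pred = np.array([[1., 1.], [0., 0.]])
>>> log_cosh = keras.losses.LogCosh()
>>> loss = log_cosh(y_true, y_pred)  # per-sample mean of log(cosh(error)), then averaged: ≈ 0.108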
Arguments
reduction: Type of reduction to apply to the loss. In almost all cases this should be "sum_over_batch_size". Supported options are "sum", "sum_over_batch_size", "mean", "mean_with_sample_weight" or None. "sum" sums the loss, "sum_over_batch_size" and "mean" sum the loss and divide by the sample size, and "mean_with_sample_weight" sums the loss and divides by the sum of the sample weights. "none" and None perform no aggregation. Defaults to "sum_over_batch_size".
name: Optional name for the loss instance.
dtype: The dtype of the loss's computations. Defaults to None, which means using keras.backend.floatx(). keras.backend.floatx() is a "float32" unless set to a different value (via keras.backend.set_floatx()). If a keras.DTypePolicy is provided, then the compute_dtype will be utilized.
Tversky
keras.losses.Tversky(
alpha=0.5, beta=0.5, reduction="sum_over_batch_size", name="tversky", dtype=None
)
Computes the Tversky loss value between y_true
and y_pred
.
This loss function is weighted by the alpha and beta coefficients, which penalize false positives and false negatives, respectively. With alpha=0.5 and beta=0.5, the loss value becomes equivalent to Dice Loss.
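Example
A sketch of typical usage for binary segmentation (the model and the alpha/beta values are illustrative assumptions, not from the original docs):
>>> import keras
>>> # Penalize false negatives more heavily than false positives:
>>> tversky_loss = keras.losses.Tversky(alpha=0.3, beta=0.7)
>>> # model.compile(optimizer="adam", loss=tversky_loss)  # assuming `model` outputs per-pixel probabilities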
Arguments
alpha: The coefficient controlling incidence of false positives. Defaults to 0.5.
beta: The coefficient controlling incidence of false negatives. Defaults to 0.5.
reduction: Type of reduction to apply to the loss. In almost all cases this should be "sum_over_batch_size". Supported options are "sum", "sum_over_batch_size", "mean", "mean_with_sample_weight" or None. "sum" sums the loss, "sum_over_batch_size" and "mean" sum the loss and divide by the sample size, and "mean_with_sample_weight" sums the loss and divides by the sum of the sample weights. "none" and None perform no aggregation. Defaults to "sum_over_batch_size".
name: Optional name for the loss instance.
dtype: The dtype of the loss's computations. Defaults to None, which means using keras.backend.floatx(). keras.backend.floatx() is a "float32" unless set to a different value (via keras.backend.set_floatx()). If a keras.DTypePolicy is provided, then the compute_dtype will be utilized.
Returns
Tversky loss value.
Reference
Dice
keras.losses.Dice(
reduction="sum_over_batch_size", name="dice", axis=None, dtype=None
)
Computes the Dice loss value between y_true
and y_pred
.
Formula:
loss = 1 - (2 * sum(y_true * y_pred)) / (sum(y_true) + sum(y_pred))
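Example
A sketch of typical usage in compile() (the model is a hypothetical segmentation model; see the numeric example further below for standalone usage):
>>> import keras
>>> dice_loss = keras.losses.Dice(axis=(1, 2, 3))
>>> # model.compile(optimizer="adam", loss=dice_loss)  # assuming `model` outputs per-pixel probabilities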
Arguments
reduction: Type of reduction to apply to the loss. In almost all cases this should be "sum_over_batch_size". Supported options are "sum", "sum_over_batch_size", "mean", "mean_with_sample_weight" or None. "sum" sums the loss, "sum_over_batch_size" and "mean" sum the loss and divide by the sample size, and "mean_with_sample_weight" sums the loss and divides by the sum of the sample weights. "none" and None perform no aggregation. Defaults to "sum_over_batch_size".
name: Optional name for the loss instance.
axis: Tuple for which dimensions the loss is calculated. Defaults to None.
dtype: The dtype of the loss's computations. Defaults to None, which means using keras.backend.floatx(). keras.backend.floatx() is a "float32" unless set to a different value (via keras.backend.set_floatx()). If a keras.DTypePolicy is provided, then the compute_dtype will be utilized.
Returns
Dice loss value.
Example
>>> y_true = [[[[1.0], [1.0]], [[0.0], [0.0]]],
... [[[1.0], [1.0]], [[0.0], [0.0]]]]
>>> y_pred = [[[[0.0], [1.0]], [[0.0], [1.0]]],
... [[[0.4], [0.0]], [[0.0], [0.9]]]]
>>> axis = (1, 2, 3)
>>> loss = keras.losses.dice(y_true, y_pred, axis=axis)
>>> assert loss.shape == (2,)
>>> loss
array([0.5, 0.75757575], shape=(2,), dtype=float32)
>>> loss = keras.losses.dice(y_true, y_pred)
>>> assert loss.shape == ()
>>> loss
array(0.6164384, shape=(), dtype=float32)
>>> y_true = np.array(y_true)
>>> y_pred = np.array(y_pred)
>>> loss = keras.losses.Dice(axis=axis, reduction=None)(y_true, y_pred)
>>> assert loss.shape == (2,)
>>> loss
array([0.5, 0.75757575], shape=(2,), dtype=float32)
mean_squared_error
keras.losses.mean_squared_error(y_true, y_pred)
Computes the mean squared error between labels and predictions.
Formula:
loss = mean(square(y_true - y_pred), axis=-1)
Example
>>> y_true = np.random.randint(0, 2, size=(2, 3))
>>> y_pred = np.random.random(size=(2, 3))
>>> loss = keras.losses.mean_squared_error(y_true, y_pred)
Arguments
y_true: Ground truth values with shape = [batch_size, d0, .. dN].
y_pred: The predicted values with shape = [batch_size, d0, .. dN].
Returns
Mean squared error values with shape = [batch_size, d0, .. dN-1].
mean_absolute_error
keras.losses.mean_absolute_error(y_true, y_pred)
Computes the mean absolute error between labels and predictions.
loss = mean(abs(y_true - y_pred), axis=-1)
Arguments
y_true: Ground truth values with shape = [batch_size, d0, .. dN].
y_pred: The predicted values with shape = [batch_size, d0, .. dN].
Returns
Mean absolute error values with shape = [batch_size, d0, .. dN-1].
Example
>>> y_true = np.random.randint(0, 2, size=(2, 3))
>>> y_pred = np.random.random(size=(2, 3))
>>> loss = keras.losses.mean_absolute_error(y_true, y_pred)
mean_absolute_percentage_error
keras.losses.mean_absolute_percentage_error(y_true, y_pred)
Computes the mean absolute percentage error between y_true
& y_pred
.
Formula:
loss = 100 * mean(abs((y_true - y_pred) / y_true), axis=-1)
Division by zero is prevented by dividing by maximum(y_true, epsilon), where epsilon = keras.backend.epsilon() (defaults to 1e-7).
Arguments
y_true: Ground truth values with shape = [batch_size, d0, .. dN].
y_pred: The predicted values with shape = [batch_size, d0, .. dN].
Returns
Mean absolute percentage error values with shape = [batch_size, d0, .. dN-1].
Example
>>> y_true = np.random.random(size=(2, 3))
>>> y_pred = np.random.random(size=(2, 3))
>>> loss = keras.losses.mean_absolute_percentage_error(y_true, y_pred)
mean_squared_logarithmic_error
keras.losses.mean_squared_logarithmic_error(y_true, y_pred)
Computes the mean squared logarithmic error between y_true
& y_pred
.
Formula:
loss = mean(square(log(y_true + 1) - log(y_pred + 1)), axis=-1)
Note that y_pred and y_true cannot be less than or equal to 0. Negative values and 0 values will be replaced with keras.backend.epsilon() (defaults to 1e-7).
Arguments
y_true: Ground truth values with shape = [batch_size, d0, .. dN].
y_pred: The predicted values with shape = [batch_size, d0, .. dN].
Returns
Mean squared logarithmic error values with shape = [batch_size, d0, .. dN-1].
Example
>>> y_true = np.random.randint(0, 2, size=(2, 3))
>>> y_pred = np.random.random(size=(2, 3))
>>> loss = keras.losses.mean_squared_logarithmic_error(y_true, y_pred)
cosine_similarity
keras.losses.cosine_similarity(y_true, y_pred, axis=-1)
Computes the cosine similarity between labels and predictions.
Formula:
loss = -sum(l2_norm(y_true) * l2_norm(y_pred))
Note that it is a number between -1 and 1. When it is a negative number
between -1 and 0, 0 indicates orthogonality and values closer to -1
indicate greater similarity. This makes it usable as a loss function in a
setting where you try to maximize the proximity between predictions and
targets. If either y_true
or y_pred
is a zero vector, cosine
similarity will be 0 regardless of the proximity between predictions
and targets.
Arguments
y_true: Tensor of true targets.
y_pred: Tensor of predicted targets.
axis: Axis along which to determine similarity. Defaults to -1.
Returns
Cosine similarity tensor.
Example
>>> y_true = [[0., 1.], [1., 1.], [1., 1.]]
>>> y_pred = [[1., 0.], [1., 1.], [-1., -1.]]
>>> loss = keras.losses.cosine_similarity(y_true, y_pred, axis=-1)
[-0., -0.99999994, 0.99999994]
huber
keras.losses.huber(y_true, y_pred, delta=1.0)
Computes Huber loss value.
Formula:
for x in error:
if abs(x) <= delta:
loss.append(0.5 * x^2)
elif abs(x) > delta:
loss.append(delta * abs(x) - 0.5 * delta^2)
loss = mean(loss, axis=-1)
See: Huber loss.
Example
>>> y_true = [[0, 1], [0, 0]]
>>> y_pred = [[0.6, 0.4], [0.4, 0.6]]
>>> loss = keras.losses.huber(y_true, y_pred)
0.155
Arguments
y_true: Tensor of true targets.
y_pred: Tensor of predicted targets.
delta: A float, the point where the Huber loss function changes from a quadratic to linear. Defaults to 1.0.
Returns
Tensor with one scalar loss entry per sample.
log_cosh
keras.losses.log_cosh(y_true, y_pred)
Logarithm of the hyperbolic cosine of the prediction error.
Formula:
loss = mean(log(cosh(y_pred - y_true)), axis=-1)
Note that log(cosh(x))
is approximately equal to (x ** 2) / 2
for small
x
and to abs(x) - log(2)
for large x
. This means that 'logcosh' works
mostly like the mean squared error, but will not be so strongly affected by
the occasional wildly incorrect prediction.
Example
>>> y_true = [[0., 1.], [0., 0.]]
>>> y_pred = [[1., 1.], [0., 0.]]
>>> loss = keras.losses.log_cosh(y_true, y_pred)
0.108
Arguments
y_true: Ground truth values with shape = [batch_size, d0, .. dN].
y_pred: The predicted values with shape = [batch_size, d0, .. dN].
Returns
Logcosh error values with shape = [batch_size, d0, .. dN-1].
tversky
keras.losses.tversky(y_true, y_pred, alpha=0.5, beta=0.5)
Computes the Tversky loss value between y_true
and y_pred
.
This loss function is weighted by the alpha and beta coefficients, which penalize false positives and false negatives, respectively. With alpha=0.5 and beta=0.5, the loss value becomes equivalent to Dice Loss.
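Example
A minimal standalone-usage sketch (not from the original docs; the inputs are illustrative soft segmentation masks):
>>> import numpy as np
>>> import keras
>>> y_true = np.array([[1., 0., 1., 0.]])
>>> y_pred = np.array([[0.8, 0.2, 0.6, 0.1]])
>>> loss = keras.losses.tversky(y_true, y_pred)  # with the default alpha=beta=0.5 this matches the Dice loss for these inputs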
Arguments
y_true: Tensor of true targets.
y_pred: Tensor of predicted targets.
alpha: The coefficient controlling incidence of false positives. Defaults to 0.5.
beta: The coefficient controlling incidence of false negatives. Defaults to 0.5.
Returns
Tversky loss value.
Reference
dice
keras.losses.dice(y_true, y_pred, axis=None)
Computes the Dice loss value between y_true
and y_pred
.
Formula:
loss = 1 - (2 * sum(y_true * y_pred)) / (sum(y_true) + sum(y_pred))
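Example
A minimal standalone-usage sketch (not from the original docs; a flat pair of illustrative masks, summed globally since axis defaults to None):
>>> import numpy as np
>>> import keras
>>> y_true = np.array([[1., 1., 0., 0.]])
>>> y_pred = np.array([[1., 0.5, 0., 0.]])
>>> loss = keras.losses.dice(y_true, y_pred)  # 1 - (2 * 1.5) / (2 + 1.5) ≈ 0.143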
Arguments
y_true: Tensor of true targets.
y_pred: Tensor of predicted targets.
axis: Tuple for which dimensions the loss is calculated. Defaults to None.
Returns
Dice loss value.