Hinge losses for "maximum-margin" classification

Hinge class

tf_keras.losses.Hinge(reduction="auto", name="hinge")

Computes the hinge loss between y_true and y_pred.

loss = maximum(1 - y_true * y_pred, 0)

y_true values are expected to be -1 or 1. If binary (0 or 1) labels are provided, they will be converted to -1 or 1.
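
To make the conversion concrete, here is a minimal NumPy sketch (not the library implementation) that reproduces the per-sample values in the example below; the mapping 2 * y_true - 1 sends {0, 1} labels to {-1, 1}:

import numpy as np

y_true = np.array([[0., 1.], [0., 0.]])
y_pred = np.array([[0.6, 0.4], [0.4, 0.6]])

# Map binary {0, 1} labels to {-1, 1} before applying the margin.
y_true_signed = 2. * y_true - 1.

per_sample = np.mean(np.maximum(1. - y_true_signed * y_pred, 0.), axis=-1)
print(per_sample)         # [1.1 1.5]
print(per_sample.mean())  # 1.3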

Standalone usage:

>>> y_true = [[0., 1.], [0., 0.]]
>>> y_pred = [[0.6, 0.4], [0.4, 0.6]]
>>> # Using 'auto'/'sum_over_batch_size' reduction type.
>>> h = tf.keras.losses.Hinge()
>>> h(y_true, y_pred).numpy()
1.3
>>> # Calling with 'sample_weight'.
>>> h(y_true, y_pred, sample_weight=[1, 0]).numpy()
0.55
>>> # Using 'sum' reduction type.
>>> h = tf.keras.losses.Hinge(
...     reduction=tf.keras.losses.Reduction.SUM)
>>> h(y_true, y_pred).numpy()
2.6
>>> # Using 'none' reduction type.
>>> h = tf.keras.losses.Hinge(
...     reduction=tf.keras.losses.Reduction.NONE)
>>> h(y_true, y_pred).numpy()
array([1.1, 1.5], dtype=float32)
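
The reduction types differ only in how the per-sample losses [1.1, 1.5] above are combined; a short sketch of the arithmetic:

per_sample = [1.1, 1.5]                   # Reduction.NONE output
print(sum(per_sample))                    # 2.6 -> Reduction.SUM
print(sum(per_sample) / len(per_sample))  # 1.3 -> 'auto'/'sum_over_batch_size'

# sample_weight scales each per-sample loss before the batch average:
weights = [1, 0]
print(sum(l * w for l, w in zip(per_sample, weights)) / len(per_sample))  # 0.55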

Usage with the compile() API:

model.compile(optimizer='sgd', loss=tf.keras.losses.Hinge())
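
For a complete toy training run, a hedged sketch that pairs the loss with a tanh output so predictions land in the [-1, 1] range the loss expects (the data and architecture here are made up for illustration):

import numpy as np
import tensorflow as tf

x = np.random.random((64, 10)).astype("float32")
y = np.random.choice([-1., 1.], size=(64, 1)).astype("float32")  # {-1, 1} labels

model = tf.keras.Sequential([
    tf.keras.layers.Dense(16, activation="relu", input_shape=(10,)),
    tf.keras.layers.Dense(1, activation="tanh"),  # outputs in [-1, 1]
])
model.compile(optimizer="sgd", loss=tf.keras.losses.Hinge())
model.fit(x, y, epochs=2, batch_size=16)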

SquaredHinge class

tf_keras.losses.SquaredHinge(reduction="auto", name="squared_hinge")

Computes the squared hinge loss between y_true and y_pred.

loss = square(maximum(1 - y_true * y_pred, 0))

y_true values are expected to be -1 or 1. If binary (0 or 1) labels are provided, they will be converted to -1 or 1.
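
To see how squaring changes the penalty, this short sketch compares per-sample hinge and squared hinge losses on the example inputs used below; squaring leaves the loss at zero inside the margin but penalizes violations more sharply:

import tensorflow as tf

y_true = [[0., 1.], [0., 0.]]
y_pred = [[0.6, 0.4], [0.4, 0.6]]

print(tf.keras.losses.hinge(y_true, y_pred).numpy())          # [1.1  1.5 ]
print(tf.keras.losses.squared_hinge(y_true, y_pred).numpy())  # [1.46 2.26]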

Standalone usage:

>>> y_true = [[0., 1.], [0., 0.]]
>>> y_pred = [[0.6, 0.4], [0.4, 0.6]]
>>> # Using 'auto'/'sum_over_batch_size' reduction type.
>>> h = tf.keras.losses.SquaredHinge()
>>> h(y_true, y_pred).numpy()
1.86
>>> # Calling with 'sample_weight'.
>>> h(y_true, y_pred, sample_weight=[1, 0]).numpy()
0.73
>>> # Using 'sum' reduction type.
>>> h = tf.keras.losses.SquaredHinge(
...     reduction=tf.keras.losses.Reduction.SUM)
>>> h(y_true, y_pred).numpy()
3.72
>>> # Using 'none' reduction type.
>>> h = tf.keras.losses.SquaredHinge(
...     reduction=tf.keras.losses.Reduction.NONE)
>>> h(y_true, y_pred).numpy()
array([1.46, 2.26], dtype=float32)

Usage with the compile() API:

model.compile(optimizer='sgd', loss=tf.keras.losses.SquaredHinge())

CategoricalHinge class

tf_keras.losses.CategoricalHinge(reduction="auto", name="categorical_hinge")

Computes the categorical hinge loss between y_true and y_pred.

loss = maximum(neg - pos + 1, 0), where neg = max((1 - y_true) * y_pred, axis=-1) and pos = sum(y_true * y_pred, axis=-1)
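
Applied to the example inputs below, the formula works out as follows (a NumPy sketch of the same computation shown for the categorical_hinge function further down):

import numpy as np

y_true = np.array([[0., 1.], [0., 0.]])
y_pred = np.array([[0.6, 0.4], [0.4, 0.6]])

pos = np.sum(y_true * y_pred, axis=-1)          # [0.4 0. ]
neg = np.amax((1. - y_true) * y_pred, axis=-1)  # [0.6 0.6]
print(np.maximum(neg - pos + 1., 0.))           # [1.2 1.6]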

Standalone usage:

>>> y_true = [[0, 1], [0, 0]]
>>> y_pred = [[0.6, 0.4], [0.4, 0.6]]
>>> # Using 'auto'/'sum_over_batch_size' reduction type.
>>> h = tf.keras.losses.CategoricalHinge()
>>> h(y_true, y_pred).numpy()
1.4
>>> # Calling with 'sample_weight'.
>>> h(y_true, y_pred, sample_weight=[1, 0]).numpy()
0.6
>>> # Using 'sum' reduction type.
>>> h = tf.keras.losses.CategoricalHinge(
...     reduction=tf.keras.losses.Reduction.SUM)
>>> h(y_true, y_pred).numpy()
2.8
>>> # Using 'none' reduction type.
>>> h = tf.keras.losses.CategoricalHinge(
...     reduction=tf.keras.losses.Reduction.NONE)
>>> h(y_true, y_pred).numpy()
array([1.2, 1.6], dtype=float32)

Usage with the compile() API:

model.compile(optimizer='sgd', loss=tf.keras.losses.CategoricalHinge())
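
A toy multi-class training sketch (the data and model are hypothetical), with one-hot targets produced by to_categorical:

import numpy as np
import tensorflow as tf

x = np.random.random((64, 10)).astype("float32")
labels = np.random.randint(0, 3, size=(64,))
y = tf.keras.utils.to_categorical(labels, num_classes=3)  # one-hot targets

model = tf.keras.Sequential([
    tf.keras.layers.Dense(16, activation="relu", input_shape=(10,)),
    tf.keras.layers.Dense(3),  # linear class scores; the margin applies to these
])
model.compile(optimizer="sgd", loss=tf.keras.losses.CategoricalHinge())
model.fit(x, y, epochs=2, batch_size=16)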

hinge function

tf_keras.losses.hinge(y_true, y_pred)

Computes the hinge loss between y_true and y_pred.

loss = mean(maximum(1 - y_true * y_pred, 0), axis=-1)

Standalone usage:

>>> y_true = np.random.choice([-1, 1], size=(2, 3))
>>> y_pred = np.random.random(size=(2, 3))
>>> loss = tf.keras.losses.hinge(y_true, y_pred)
>>> assert loss.shape == (2,)
>>> assert np.array_equal(
...     loss.numpy(),
...     np.mean(np.maximum(1. - y_true * y_pred, 0.), axis=-1))

Arguments

  • y_true: The ground truth values. y_true values are expected to be -1 or 1. If binary (0 or 1) labels are provided, they will be converted to -1 or 1. shape = [batch_size, d0, .. dN].
  • y_pred: The predicted values. shape = [batch_size, d0, .. dN].

Returns

Hinge loss values. shape = [batch_size, d0, .. dN-1].
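
Unlike the Hinge class, the function form applies no batch reduction, so combining the per-sample values is up to the caller; a small sketch:

import tensorflow as tf

y_true = tf.constant([[0., 1.], [0., 0.]])
y_pred = tf.constant([[0.6, 0.4], [0.4, 0.6]])

per_sample = tf.keras.losses.hinge(y_true, y_pred)  # shape (2,): [1.1 1.5]
loss = tf.reduce_mean(per_sample)                   # 1.3, matching the class default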


squared_hinge function

tf_keras.losses.squared_hinge(y_true, y_pred)

Computes the squared hinge loss between y_true and y_pred.

loss = mean(square(maximum(1 - y_true * y_pred, 0)), axis=-1)

Standalone usage:

>>> y_true = np.random.choice([-1, 1], size=(2, 3))
>>> y_pred = np.random.random(size=(2, 3))
>>> loss = tf.keras.losses.squared_hinge(y_true, y_pred)
>>> assert loss.shape == (2,)
>>> assert np.array_equal(
...     loss.numpy(),
...     np.mean(np.square(np.maximum(1. - y_true * y_pred, 0.)), axis=-1))

Arguments

  • y_true: The ground truth values. y_true values are expected to be -1 or 1. If binary (0 or 1) labels are provided, they will be converted to -1 or 1. shape = [batch_size, d0, .. dN].
  • y_pred: The predicted values. shape = [batch_size, d0, .. dN].

Returns

Squared hinge loss values. shape = [batch_size, d0, .. dN-1].


categorical_hinge function

tf_keras.losses.categorical_hinge(y_true, y_pred)

Computes the categorical hinge loss between y_true and y_pred.

loss = maximum(neg - pos + 1, 0), where neg = max((1 - y_true) * y_pred, axis=-1) and pos = sum(y_true * y_pred, axis=-1)

Standalone usage:

>>> y_true = np.random.randint(0, 3, size=(2,))
>>> y_true = tf.keras.utils.to_categorical(y_true, num_classes=3)
>>> y_pred = np.random.random(size=(2, 3))
>>> loss = tf.keras.losses.categorical_hinge(y_true, y_pred)
>>> assert loss.shape == (2,)
>>> pos = np.sum(y_true * y_pred, axis=-1)
>>> neg = np.amax((1. - y_true) * y_pred, axis=-1)
>>> assert np.array_equal(loss.numpy(), np.maximum(0., neg - pos + 1.))

Arguments

  • y_true: The ground truth values. y_true values are expected to be either {-1, +1} or {0, 1} (i.e. a one-hot-encoded tensor).
  • y_pred: The predicted values.

Returns

Categorical hinge loss values.