Accuracy class
tf.keras.metrics.Accuracy(name="accuracy", dtype=None)
Calculates how often predictions equal labels.
This metric creates two local variables, total and count, that are used to compute the frequency with which y_pred matches y_true. This frequency is ultimately returned as accuracy: an idempotent operation that simply divides total by count.
If sample_weight is None, weights default to 1. Use sample_weight of 0 to mask values.
Arguments
name: (Optional) string name of the metric instance.
dtype: (Optional) data type of the metric result.
Standalone usage:
>>> m = tf.keras.metrics.Accuracy()
>>> m.update_state([[1], [2], [3], [4]], [[0], [2], [3], [4]])
>>> m.result().numpy()
0.75
>>> m.reset_states()
>>> m.update_state([[1], [2], [3], [4]], [[0], [2], [3], [4]],
... sample_weight=[1, 1, 0, 0])
>>> m.result().numpy()
0.5
Usage with compile() API:
model.compile(optimizer='sgd',
              loss='mse',
              metrics=[tf.keras.metrics.Accuracy()])
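Since Accuracy checks exact equality between y_true and y_pred, probability outputs are usually converted to class indices (for example with tf.argmax) before the metric is updated. A minimal sketch, with illustrative values:
import tensorflow as tf

# Accuracy compares y_true and y_pred elementwise for exact equality,
# so probability-style outputs are argmax-ed into class indices first.
probs = tf.constant([[0.1, 0.9], [0.8, 0.2], [0.3, 0.7]])  # illustrative model outputs
preds = tf.argmax(probs, axis=-1)                          # [1, 0, 1]

m = tf.keras.metrics.Accuracy()
m.update_state([1, 0, 0], preds)
print(m.result().numpy())  # 0.6666667 -- two of the three predictions equal the labels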
BinaryAccuracy class
tf.keras.metrics.BinaryAccuracy(name="binary_accuracy", dtype=None, threshold=0.5)
Calculates how often predictions match binary labels.
This metric creates two local variables, total and count, that are used to compute the frequency with which y_pred matches y_true. This frequency is ultimately returned as binary accuracy: an idempotent operation that simply divides total by count.
If sample_weight is None, weights default to 1. Use sample_weight of 0 to mask values.
Arguments
name: (Optional) string name of the metric instance.
dtype: (Optional) data type of the metric result.
threshold: (Optional) Float representing the threshold for deciding whether prediction values are 1 or 0.
Standalone usage:
>>> m = tf.keras.metrics.BinaryAccuracy()
>>> m.update_state([[1], [1], [0], [0]], [[0.98], [1], [0], [0.6]])
>>> m.result().numpy()
0.75
>>> m.reset_states()
>>> m.update_state([[1], [1], [0], [0]], [[0.98], [1], [0], [0.6]],
... sample_weight=[1, 0, 0, 1])
>>> m.result().numpy()
0.5
Usage with compile() API:
model.compile(optimizer='sgd',
              loss='mse',
              metrics=[tf.keras.metrics.BinaryAccuracy()])
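A small sketch of how the threshold argument changes the result, using the same inputs as the standalone example above:
import tensorflow as tf

# Raising the threshold to 0.7 means the 0.6 prediction is now rounded
# down to 0, which matches its label, so all four examples are correct.
m = tf.keras.metrics.BinaryAccuracy(threshold=0.7)
m.update_state([[1], [1], [0], [0]], [[0.98], [1], [0], [0.6]])
print(m.result().numpy())  # 1.0 (vs. 0.75 with the default threshold=0.5)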
CategoricalAccuracy class
tf.keras.metrics.CategoricalAccuracy(name="categorical_accuracy", dtype=None)
Calculates how often predictions match one-hot labels.
You can provide logits of classes as y_pred, since the argmax of logits and probabilities are the same.
This metric creates two local variables, total and count, that are used to compute the frequency with which y_pred matches y_true. This frequency is ultimately returned as categorical accuracy: an idempotent operation that simply divides total by count.
y_pred and y_true should be passed in as vectors of probabilities, rather than as labels. If necessary, use tf.one_hot to expand y_true as a vector.
If sample_weight is None, weights default to 1. Use sample_weight of 0 to mask values.
Arguments
name: (Optional) string name of the metric instance.
dtype: (Optional) data type of the metric result.
Standalone usage:
>>> m = tf.keras.metrics.CategoricalAccuracy()
>>> m.update_state([[0, 0, 1], [0, 1, 0]], [[0.1, 0.9, 0.8],
... [0.05, 0.95, 0]])
>>> m.result().numpy()
0.5
>>> m.reset_states()
>>> m.update_state([[0, 0, 1], [0, 1, 0]], [[0.1, 0.9, 0.8],
... [0.05, 0.95, 0]],
... sample_weight=[0.7, 0.3])
>>> m.result().numpy()
0.3
Usage with compile() API:
model.compile(
    optimizer='sgd',
    loss='mse',
    metrics=[tf.keras.metrics.CategoricalAccuracy()])
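Because only the argmax of y_pred matters, unscaled logits and softmax probabilities give the same categorical accuracy; a short sketch with illustrative values:
import tensorflow as tf

logits = tf.constant([[2.0, 1.0, 0.1], [0.5, 2.5, 1.0]])  # illustrative raw scores
probs = tf.nn.softmax(logits)                             # softmax preserves the argmax

m = tf.keras.metrics.CategoricalAccuracy()
m.update_state([[1, 0, 0], [0, 0, 1]], logits)
print(m.result().numpy())  # 0.5

m.reset_states()
m.update_state([[1, 0, 0], [0, 0, 1]], probs)
print(m.result().numpy())  # 0.5 -- identical to the logits result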
TopKCategoricalAccuracy class
tf.keras.metrics.TopKCategoricalAccuracy(k=5, name="top_k_categorical_accuracy", dtype=None)
Computes how often targets are in the top K predictions.
Arguments
k: (Optional) Number of top elements to look at for computing accuracy. Defaults to 5.
name: (Optional) string name of the metric instance.
dtype: (Optional) data type of the metric result.
Standalone usage:
>>> m = tf.keras.metrics.TopKCategoricalAccuracy(k=1)
>>> m.update_state([[0, 0, 1], [0, 1, 0]],
... [[0.1, 0.9, 0.8], [0.05, 0.95, 0]])
>>> m.result().numpy()
0.5
>>> m.reset_states()
>>> m.update_state([[0, 0, 1], [0, 1, 0]],
... [[0.1, 0.9, 0.8], [0.05, 0.95, 0]],
... sample_weight=[0.7, 0.3])
>>> m.result().numpy()
0.3
Usage with compile() API:
model.compile(optimizer='sgd',
              loss='mse',
              metrics=[tf.keras.metrics.TopKCategoricalAccuracy()])
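For comparison with the k=1 example above, the same predictions scored with k=2; a minimal sketch:
import tensorflow as tf

# With k=2, a prediction counts as correct whenever the true class is
# among the two highest-scored classes, so both examples now match.
m = tf.keras.metrics.TopKCategoricalAccuracy(k=2)
m.update_state([[0, 0, 1], [0, 1, 0]],
               [[0.1, 0.9, 0.8], [0.05, 0.95, 0]])
print(m.result().numpy())  # 1.0 (vs. 0.5 with k=1)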
SparseTopKCategoricalAccuracy class
tf.keras.metrics.SparseTopKCategoricalAccuracy(k=5, name="sparse_top_k_categorical_accuracy", dtype=None)
Computes how often integer targets are in the top K predictions.
Arguments
k: (Optional) Number of top elements to look at for computing accuracy. Defaults to 5.
name: (Optional) string name of the metric instance.
dtype: (Optional) data type of the metric result.
Standalone usage:
>>> m = tf.keras.metrics.SparseTopKCategoricalAccuracy(k=1)
>>> m.update_state([2, 1], [[0.1, 0.9, 0.8], [0.05, 0.95, 0]])
>>> m.result().numpy()
0.5
>>> m.reset_states()
>>> m.update_state([2, 1], [[0.1, 0.9, 0.8], [0.05, 0.95, 0]],
... sample_weight=[0.7, 0.3])
>>> m.result().numpy()
0.3
Usage with compile() API:
model.compile(
    optimizer='sgd',
    loss='mse',
    metrics=[tf.keras.metrics.SparseTopKCategoricalAccuracy()])
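The sparse variant takes integer class indices directly, so y_true does not need to be one-hot encoded; a minimal sketch mirroring the k=1 example above, here with k=2:
import tensorflow as tf

# Integer labels [2, 1] play the role of the one-hot rows [[0, 0, 1], [0, 1, 0]].
m = tf.keras.metrics.SparseTopKCategoricalAccuracy(k=2)
m.update_state([2, 1], [[0.1, 0.9, 0.8], [0.05, 0.95, 0]])
print(m.result().numpy())  # 1.0 -- both true classes are within the top 2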