Accuracy
class tf_keras.metrics.Accuracy(name="accuracy", dtype=None)
Calculates how often predictions equal labels.

This metric creates two local variables, total and count, that are used to compute the frequency with which y_pred matches y_true. This frequency is ultimately returned as binary accuracy: an idempotent operation that simply divides total by count.

If sample_weight is None, weights default to 1. Use sample_weight of 0 to mask values.
Arguments

- name: (Optional) String name of the metric instance.
- dtype: (Optional) Data type of the metric result.
Standalone usage:
>>> m = tf.keras.metrics.Accuracy()
>>> m.update_state([[1], [2], [3], [4]], [[0], [2], [3], [4]])
>>> m.result().numpy()
0.75
>>> m.reset_state()
>>> m.update_state([[1], [2], [3], [4]], [[0], [2], [3], [4]],
... sample_weight=[1, 1, 0, 0])
>>> m.result().numpy()
0.5
Usage with compile() API:

model.compile(optimizer='sgd',
              loss='mse',
              metrics=[tf.keras.metrics.Accuracy()])
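Under the hood, the result is just the (optionally weighted) mean of the element-wise matches between y_true and y_pred. A minimal NumPy sketch of the equivalent computation, reproducing the standalone example above (an illustration of the arithmetic, not the TF implementation):

```python
import numpy as np

y_true = np.array([1, 2, 3, 4])
y_pred = np.array([0, 2, 3, 4])

# total / count: matches summed over samples, divided by the number of samples
matches = (y_true == y_pred).astype(float)
acc = matches.mean()  # 3 of 4 match -> 0.75

# a sample_weight of 0 masks a sample from both total and count
w = np.array([1, 1, 0, 0], dtype=float)
weighted_acc = (matches * w).sum() / w.sum()  # 1 of 2 unmasked match -> 0.5
```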
BinaryAccuracy
class tf_keras.metrics.BinaryAccuracy(name="binary_accuracy", dtype=None, threshold=0.5)
Calculates how often predictions match binary labels.

This metric creates two local variables, total and count, that are used to compute the frequency with which y_pred matches y_true. This frequency is ultimately returned as binary accuracy: an idempotent operation that simply divides total by count.

If sample_weight is None, weights default to 1. Use sample_weight of 0 to mask values.
Arguments

- name: (Optional) String name of the metric instance.
- dtype: (Optional) Data type of the metric result.
- threshold: (Optional) Float representing the threshold for deciding whether prediction values are 1 or 0.
Standalone usage:
>>> m = tf.keras.metrics.BinaryAccuracy()
>>> m.update_state([[1], [1], [0], [0]], [[0.98], [1], [0], [0.6]])
>>> m.result().numpy()
0.75
>>> m.reset_state()
>>> m.update_state([[1], [1], [0], [0]], [[0.98], [1], [0], [0.6]],
... sample_weight=[1, 0, 0, 1])
>>> m.result().numpy()
0.5
Usage with compile() API:

model.compile(optimizer='sgd',
              loss='mse',
              metrics=[tf.keras.metrics.BinaryAccuracy()])
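The threshold argument controls how probabilities are binarized before comparison: predictions above the threshold count as 1, the rest as 0. A NumPy sketch of that step on the values from the standalone example (a simplified view; the actual op also handles broadcasting and dtypes):

```python
import numpy as np

y_true = np.array([1.0, 1.0, 0.0, 0.0])
y_pred = np.array([0.98, 1.0, 0.0, 0.6])

# binarize predictions at the threshold, then compare with the labels
binarized = (y_pred > 0.5).astype(float)          # [1, 1, 0, 1]
matches = (binarized == y_true).astype(float)
acc = matches.mean()  # 0.6 > 0.5 counts as 1, so 3 of 4 match -> 0.75
```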
CategoricalAccuracy
class tf_keras.metrics.CategoricalAccuracy(name="categorical_accuracy", dtype=None)
Calculates how often predictions match one-hot labels.

You can provide logits of classes as y_pred, since the argmax of logits and probabilities are the same.

This metric creates two local variables, total and count, that are used to compute the frequency with which y_pred matches y_true. This frequency is ultimately returned as categorical accuracy: an idempotent operation that simply divides total by count.

y_pred and y_true should be passed in as vectors of probabilities, rather than as labels. If necessary, use tf.one_hot to expand y_true as a vector.

If sample_weight is None, weights default to 1. Use sample_weight of 0 to mask values.
Arguments

- name: (Optional) String name of the metric instance.
- dtype: (Optional) Data type of the metric result.
Standalone usage:
>>> m = tf.keras.metrics.CategoricalAccuracy()
>>> m.update_state([[0, 0, 1], [0, 1, 0]], [[0.1, 0.9, 0.8],
... [0.05, 0.95, 0]])
>>> m.result().numpy()
0.5
>>> m.reset_state()
>>> m.update_state([[0, 0, 1], [0, 1, 0]], [[0.1, 0.9, 0.8],
... [0.05, 0.95, 0]],
... sample_weight=[0.7, 0.3])
>>> m.result().numpy()
0.3
Usage with compile() API:

model.compile(optimizer='sgd',
              loss='mse',
              metrics=[tf.keras.metrics.CategoricalAccuracy()])
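A prediction counts as correct when the argmax of y_pred equals the argmax of the one-hot y_true, which is why raw logits work as well as probabilities. A NumPy sketch reproducing the weighted standalone example above:

```python
import numpy as np

y_true = np.array([[0, 0, 1], [0, 1, 0]])               # one-hot labels (classes 2 and 1)
y_pred = np.array([[0.1, 0.9, 0.8], [0.05, 0.95, 0.0]])

# compare the argmax of each row; only the second sample matches (class 1)
matches = (np.argmax(y_true, axis=1) == np.argmax(y_pred, axis=1)).astype(float)

w = np.array([0.7, 0.3])
weighted_acc = (matches * w).sum() / w.sum()  # 0.3 / 1.0 -> 0.3
```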
SparseCategoricalAccuracy
class tf_keras.metrics.SparseCategoricalAccuracy(name="sparse_categorical_accuracy", dtype=None)
Calculates how often predictions match integer labels.

acc = np.dot(sample_weight, np.equal(y_true, np.argmax(y_pred, axis=1)))

You can provide logits of classes as y_pred, since the argmax of logits and probabilities are the same.

This metric creates two local variables, total and count, that are used to compute the frequency with which y_pred matches y_true. This frequency is ultimately returned as sparse categorical accuracy: an idempotent operation that simply divides total by count.

If sample_weight is None, weights default to 1. Use sample_weight of 0 to mask values.
Arguments

- name: (Optional) String name of the metric instance.
- dtype: (Optional) Data type of the metric result.
Standalone usage:
>>> m = tf.keras.metrics.SparseCategoricalAccuracy()
>>> m.update_state([[2], [1]], [[0.1, 0.6, 0.3], [0.05, 0.95, 0]])
>>> m.result().numpy()
0.5
>>> m.reset_state()
>>> m.update_state([[2], [1]], [[0.1, 0.6, 0.3], [0.05, 0.95, 0]],
... sample_weight=[0.7, 0.3])
>>> m.result().numpy()
0.3
Usage with compile() API:

model.compile(optimizer='sgd',
              loss='mse',
              metrics=[tf.keras.metrics.SparseCategoricalAccuracy()])
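Since argmax is unchanged by a monotonic transformation such as softmax, the metric gives the same result whether y_pred holds logits or probabilities. A NumPy sketch demonstrating that, with made-up logits:

```python
import numpy as np

y_true = np.array([2, 1])                                # integer labels
logits = np.array([[-1.0, 0.5, 0.2], [0.1, 3.0, -2.0]])
probs = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)  # softmax

# argmax of logits and of probabilities picks the same class per row
assert (np.argmax(logits, axis=1) == np.argmax(probs, axis=1)).all()

matches = (y_true == np.argmax(logits, axis=1)).astype(float)  # [0, 1]
acc = matches.mean()  # 0.5
```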
TopKCategoricalAccuracy
class tf_keras.metrics.TopKCategoricalAccuracy(k=5, name="top_k_categorical_accuracy", dtype=None)
Computes how often targets are in the top K predictions.

Arguments

- k: (Optional) Number of top elements to look at for computing accuracy. Defaults to 5.
- name: (Optional) String name of the metric instance.
- dtype: (Optional) Data type of the metric result.

Standalone usage:
>>> m = tf.keras.metrics.TopKCategoricalAccuracy(k=1)
>>> m.update_state([[0, 0, 1], [0, 1, 0]],
... [[0.1, 0.9, 0.8], [0.05, 0.95, 0]])
>>> m.result().numpy()
0.5
>>> m.reset_state()
>>> m.update_state([[0, 0, 1], [0, 1, 0]],
... [[0.1, 0.9, 0.8], [0.05, 0.95, 0]],
... sample_weight=[0.7, 0.3])
>>> m.result().numpy()
0.3
Usage with compile() API:

model.compile(optimizer='sgd',
              loss='mse',
              metrics=[tf.keras.metrics.TopKCategoricalAccuracy()])
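A sample counts as correct when the true class (the argmax of the one-hot row) appears among the k highest-scoring predictions. A NumPy sketch of that check on the standalone example's data with k=2 (an illustration of the idea, not the TF kernel, which also defines tie handling):

```python
import numpy as np

y_true = np.array([[0, 0, 1], [0, 1, 0]])               # one-hot labels (classes 2 and 1)
y_pred = np.array([[0.1, 0.9, 0.8], [0.05, 0.95, 0.0]])

k = 2
true_class = np.argmax(y_true, axis=1)                  # [2, 1]
top_k = np.argsort(y_pred, axis=1)[:, -k:]              # indices of the k largest scores
matches = np.array([c in row for c, row in zip(true_class, top_k)], dtype=float)
acc = matches.mean()  # both true classes appear in the top 2 -> 1.0
```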
SparseTopKCategoricalAccuracy
class tf_keras.metrics.SparseTopKCategoricalAccuracy(k=5, name="sparse_top_k_categorical_accuracy", dtype=None)
Computes how often integer targets are in the top K predictions.

Arguments

- k: (Optional) Number of top elements to look at for computing accuracy. Defaults to 5.
- name: (Optional) String name of the metric instance.
- dtype: (Optional) Data type of the metric result.

Standalone usage:
>>> m = tf.keras.metrics.SparseTopKCategoricalAccuracy(k=1)
>>> m.update_state([2, 1], [[0.1, 0.9, 0.8], [0.05, 0.95, 0]])
>>> m.result().numpy()
0.5
>>> m.reset_state()
>>> m.update_state([2, 1], [[0.1, 0.9, 0.8], [0.05, 0.95, 0]],
... sample_weight=[0.7, 0.3])
>>> m.result().numpy()
0.3
Usage with compile() API:

model.compile(optimizer='sgd',
              loss='mse',
              metrics=[tf.keras.metrics.SparseTopKCategoricalAccuracy()])
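The sparse variant applies the same top-k membership test but takes the integer labels directly, with no one-hot step. A minimal NumPy sketch; note that with k=1 it reduces to ordinary sparse categorical accuracy:

```python
import numpy as np

y_true = np.array([2, 1])                               # integer labels
y_pred = np.array([[0.1, 0.9, 0.8], [0.05, 0.95, 0.0]])

top1 = np.argsort(y_pred, axis=1)[:, -1:]               # index of the single largest score
matches = np.array([c in row for c, row in zip(y_true, top1)], dtype=float)
acc = matches.mean()  # only the second sample's label is the top prediction -> 0.5
```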