Accuracy
keras.metrics.Accuracy(name="accuracy", dtype=None)
Calculates how often predictions equal labels.
This metric creates two local variables, `total` and `count`, that are used to compute the frequency with which `y_pred` matches `y_true`. This frequency is ultimately returned as binary accuracy: an idempotent operation that simply divides `total` by `count`.
If `sample_weight` is `None`, weights default to 1. Use `sample_weight` of 0 to mask values.
Arguments
- name: (Optional) string name of the metric instance.
- dtype: (Optional) data type of the metric result.
Examples
>>> m = keras.metrics.Accuracy()
>>> m.update_state([[1], [2], [3], [4]], [[0], [2], [3], [4]])
>>> m.result()
0.75
>>> m.reset_state()
>>> m.update_state([[1], [2], [3], [4]], [[0], [2], [3], [4]],
... sample_weight=[1, 1, 0, 0])
>>> m.result()
0.5
Usage with compile() API:
model.compile(optimizer='sgd',
loss='binary_crossentropy',
metrics=[keras.metrics.Accuracy()])
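The `total`/`count` mechanics described above can be mirrored with plain numpy. This is a hedged sketch of the arithmetic, not the actual Keras implementation:

```python
import numpy as np

# Weighted accuracy as a running total/count, mirroring the description
# above: total accumulates weighted matches, count accumulates weights.
y_true = np.array([[1], [2], [3], [4]])
y_pred = np.array([[0], [2], [3], [4]])
sample_weight = np.array([1, 1, 0, 0], dtype=float)

matches = (y_true == y_pred).astype(float).reshape(-1)
total = float(np.sum(matches * sample_weight))  # weighted matches
count = float(np.sum(sample_weight))            # sum of weights
print(total / count)  # 0.5, same as the weighted doctest above
```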
BinaryAccuracy
keras.metrics.BinaryAccuracy(name="binary_accuracy", dtype=None, threshold=0.5)
Calculates how often predictions match binary labels.
This metric creates two local variables, `total` and `count`, that are used to compute the frequency with which `y_pred` matches `y_true`. This frequency is ultimately returned as binary accuracy: an idempotent operation that simply divides `total` by `count`.
If `sample_weight` is `None`, weights default to 1. Use `sample_weight` of 0 to mask values.
Arguments
- name: (Optional) string name of the metric instance.
- dtype: (Optional) data type of the metric result.
- threshold: (Optional) Float representing the threshold for deciding whether prediction values are 1 or 0. Defaults to 0.5.
Example
>>> m = keras.metrics.BinaryAccuracy()
>>> m.update_state([[1], [1], [0], [0]], [[0.98], [1], [0], [0.6]])
>>> m.result()
0.75
>>> m.reset_state()
>>> m.update_state([[1], [1], [0], [0]], [[0.98], [1], [0], [0.6]],
... sample_weight=[1, 0, 0, 1])
>>> m.result()
0.5
Usage with compile() API:
model.compile(optimizer='sgd',
loss='binary_crossentropy',
metrics=[keras.metrics.BinaryAccuracy()])
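The role of the `threshold` argument can be sketched in numpy: probabilities are binarized before comparison (a sketch assumed equivalent to the metric's thresholding step, not the Keras source):

```python
import numpy as np

# Binarize probabilities at threshold=0.5, then compare with labels.
y_true = np.array([1, 1, 0, 0])
y_prob = np.array([0.98, 1.0, 0.0, 0.6])

binarized = (y_prob > 0.5).astype(int)  # -> [1, 1, 0, 1]
print(float(np.mean(binarized == y_true)))  # 0.75, matching the doctest
```

A stricter threshold (e.g. 0.7) would flip the last prediction to 0 and raise the unweighted accuracy to 1.0.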
CategoricalAccuracy
keras.metrics.CategoricalAccuracy(name="categorical_accuracy", dtype=None)
Calculates how often predictions match one-hot labels.
You can provide logits of classes as `y_pred`, since the argmax of logits and probabilities is the same.
This metric creates two local variables, `total` and `count`, that are used to compute the frequency with which `y_pred` matches `y_true`. This frequency is ultimately returned as categorical accuracy: an idempotent operation that simply divides `total` by `count`.
`y_pred` and `y_true` should be passed in as vectors of probabilities, rather than as labels. If necessary, use `ops.one_hot` to expand `y_true` as a vector.
If `sample_weight` is `None`, weights default to 1. Use `sample_weight` of 0 to mask values.
Arguments
- name: (Optional) string name of the metric instance.
- dtype: (Optional) data type of the metric result.
Example
>>> m = keras.metrics.CategoricalAccuracy()
>>> m.update_state([[0, 0, 1], [0, 1, 0]], [[0.1, 0.9, 0.8],
... [0.05, 0.95, 0]])
>>> m.result()
0.5
>>> m.reset_state()
>>> m.update_state([[0, 0, 1], [0, 1, 0]], [[0.1, 0.9, 0.8],
... [0.05, 0.95, 0]],
... sample_weight=[0.7, 0.3])
>>> m.result()
0.3
Usage with compile() API:
model.compile(optimizer='sgd',
loss='categorical_crossentropy',
metrics=[keras.metrics.CategoricalAccuracy()])
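The comparison reduces to matching argmaxes, which can be sketched in numpy (a hedged illustration of the idea, not the Keras implementation):

```python
import numpy as np

# Categorical accuracy compares the argmax of one-hot labels with the
# argmax of predictions; scores or logits both work, since a softmax
# is monotonic and leaves the argmax unchanged.
y_true = np.array([[0, 0, 1], [0, 1, 0]])             # one-hot labels
y_pred = np.array([[0.1, 0.9, 0.8], [0.05, 0.95, 0]])  # scores

matches = np.argmax(y_true, axis=-1) == np.argmax(y_pred, axis=-1)
print(matches.astype(float).mean())  # 0.5, matching the doctest
```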
SparseCategoricalAccuracy
keras.metrics.SparseCategoricalAccuracy(name="sparse_categorical_accuracy", dtype=None)
Calculates how often predictions match integer labels.
acc = np.dot(sample_weight, np.equal(y_true, np.argmax(y_pred, axis=1)))
You can provide logits of classes as `y_pred`, since the argmax of logits and probabilities is the same.
This metric creates two local variables, `total` and `count`, that are used to compute the frequency with which `y_pred` matches `y_true`. This frequency is ultimately returned as sparse categorical accuracy: an idempotent operation that simply divides `total` by `count`.
If `sample_weight` is `None`, weights default to 1. Use `sample_weight` of 0 to mask values.
Arguments
- name: (Optional) string name of the metric instance.
- dtype: (Optional) data type of the metric result.
Example
>>> m = keras.metrics.SparseCategoricalAccuracy()
>>> m.update_state([[2], [1]], [[0.1, 0.6, 0.3], [0.05, 0.95, 0]])
>>> m.result()
0.5
>>> m.reset_state()
>>> m.update_state([[2], [1]], [[0.1, 0.6, 0.3], [0.05, 0.95, 0]],
... sample_weight=[0.7, 0.3])
>>> m.result()
0.3
Usage with compile() API:
model.compile(optimizer='sgd',
loss='sparse_categorical_crossentropy',
metrics=[keras.metrics.SparseCategoricalAccuracy()])
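The formula quoted above can be made runnable with numpy, normalizing by the weight sum so the result matches the weighted doctest (a sketch of the arithmetic, not the Keras source):

```python
import numpy as np

# Weighted sparse categorical accuracy: dot product of sample weights
# with per-sample matches, normalized by the total weight.
y_true = np.array([2, 1])
y_pred = np.array([[0.1, 0.6, 0.3], [0.05, 0.95, 0]])
sample_weight = np.array([0.7, 0.3])

matches = np.equal(y_true, np.argmax(y_pred, axis=1)).astype(float)
acc = np.dot(sample_weight, matches) / np.sum(sample_weight)
print(acc)  # 0.3, matching the weighted doctest above
```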
TopKCategoricalAccuracy
keras.metrics.TopKCategoricalAccuracy(k=5, name="top_k_categorical_accuracy", dtype=None)
Computes how often targets are in the top `K` predictions.
Arguments
- k: (Optional) Number of top elements to look at for computing accuracy. Defaults to 5.
- name: (Optional) string name of the metric instance.
- dtype: (Optional) data type of the metric result.
Example
>>> m = keras.metrics.TopKCategoricalAccuracy(k=1)
>>> m.update_state([[0, 0, 1], [0, 1, 0]],
... [[0.1, 0.9, 0.8], [0.05, 0.95, 0]])
>>> m.result()
0.5
>>> m.reset_state()
>>> m.update_state([[0, 0, 1], [0, 1, 0]],
... [[0.1, 0.9, 0.8], [0.05, 0.95, 0]],
... sample_weight=[0.7, 0.3])
>>> m.result()
0.3
Usage with compile() API:
model.compile(optimizer='sgd',
loss='categorical_crossentropy',
metrics=[keras.metrics.TopKCategoricalAccuracy()])
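The top-K check can be thought of as membership of the true index among the k highest-scoring predictions. A hedged numpy sketch of that idea (the helper `top_k_match` is illustrative, not part of the Keras API):

```python
import numpy as np

def top_k_match(y_true_onehot, y_pred, k):
    """Return per-sample booleans: is the true index in the top k scores?"""
    true_idx = np.argmax(y_true_onehot, axis=-1)
    top_k = np.argsort(y_pred, axis=-1)[:, -k:]  # indices of k largest scores
    return np.array([t in row for t, row in zip(true_idx, top_k)])

y_true = np.array([[0, 0, 1], [0, 1, 0]])
y_pred = np.array([[0.1, 0.9, 0.8], [0.05, 0.95, 0]])
print(top_k_match(y_true, y_pred, k=1).mean())  # 0.5, as in the doctest
print(top_k_match(y_true, y_pred, k=2).mean())  # 1.0: both targets in top 2
```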
SparseTopKCategoricalAccuracy
keras.metrics.SparseTopKCategoricalAccuracy(k=5, name="sparse_top_k_categorical_accuracy", dtype=None, from_sorted_ids=False)
Computes how often integer targets are in the top `K` predictions.
By default, the arguments expected by update_state() are:
- y_true: a tensor of shape (batch_size) representing indices of true categories.
- y_pred: a tensor of shape (batch_size, num_categories) containing the scores for each sample for all possible categories.
With from_sorted_ids=True, the arguments expected by update_state() are:
- y_true: a tensor of shape (batch_size) representing indices or IDs of true categories.
- y_pred: a tensor of shape (batch_size, N) containing the indices or IDs of the top N categories sorted in order from highest score to lowest score. N must be greater than or equal to k.
The from_sorted_ids=True option can be more efficient when the set of categories is very large and the model has an optimized way to retrieve the top ones, either without scoring or without maintaining the scores for all the possible categories.
Arguments
- k: (Optional) Number of top elements to look at for computing accuracy. Defaults to 5.
- name: (Optional) string name of the metric instance.
- dtype: (Optional) data type of the metric result.
- from_sorted_ids: (Optional) When False, the default, the tensor passed in y_pred contains the unsorted scores of all possible categories. When True, y_pred contains the indices or IDs of the top categories.
Example
>>> m = keras.metrics.SparseTopKCategoricalAccuracy(k=1)
>>> m.update_state([2, 1], [[0.1, 0.9, 0.8], [0.05, 0.95, 0]])
>>> m.result()
0.5
>>> m.reset_state()
>>> m.update_state([2, 1], [[0.1, 0.9, 0.8], [0.05, 0.95, 0]],
... sample_weight=[0.7, 0.3])
>>> m.result()
0.3
>>> m = keras.metrics.SparseTopKCategoricalAccuracy(k=1,
... from_sorted_ids=True)
>>> m.update_state([2, 1], [[1, 0, 3], [1, 2, 3]])
>>> m.result()
0.5
Usage with compile() API:
model.compile(optimizer='sgd',
loss='sparse_categorical_crossentropy',
metrics=[keras.metrics.SparseTopKCategoricalAccuracy()])
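Under the from_sorted_ids=True convention, y_pred already holds the IDs of the top-N categories sorted best first, so the check reduces to membership of the true ID among the first k columns. A hedged numpy sketch of that convention (not the Keras implementation):

```python
import numpy as np

# y_pred_ids holds top-3 category IDs per sample, best first; a sample
# counts as correct if its true ID appears in the first k columns.
y_true = np.array([2, 1])
y_pred_ids = np.array([[1, 0, 3],
                       [1, 2, 3]])

k = 1
matches = np.array([t in row[:k] for t, row in zip(y_true, y_pred_ids)])
print(matches.astype(float).mean())  # 0.5, matching the doctest above
```

With k=2, the first sample's true ID 2 is still absent from [1, 0], but 1 is in [1, 2], so the result stays 0.5 until k reaches 3.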