COCOMeanAveragePrecision class

keras_cv.metrics.COCOMeanAveragePrecision(
    class_ids,
    bounding_box_format,
    recall_thresholds=None,
    iou_thresholds=None,
    area_range=None,
    max_detections=100,
    num_buckets=10000,
    **kwargs
)
COCOMeanAveragePrecision computes an approximation of MaP.
A usage guide is available on keras.io: Using KerasCV COCO metrics. Full implementation details are available in the KerasCV COCO metrics whitepaper.
Arguments

- class_ids: The class IDs to evaluate the metric for. To evaluate for all classes in a set of sequentially labeled classes, pass range(classes).
- bounding_box_format: The format of the incoming bounding boxes, e.g. "xyxy" or "xywh".
- recall_thresholds: List of recall thresholds to average over in the MaP computation. Defaults to None (evenly spaced thresholds from 0 to 1).
- iou_thresholds: List of IoU thresholds over which to evaluate the metric. Defaults to None (thresholds from 0.5 to 0.95 in increments of 0.05).
- area_range: Area range restricting which boxes are considered. Defaults to None, which makes the metric count all bounding boxes. Must be a tuple of floats. The first number in the tuple represents a lower bound for areas, while the second value represents an upper bound. For example, when (0, 32**2) is passed to the metric, recall is only evaluated for objects with areas less than 32*32. If (32**2, 1000000**2) is passed, the metric will only be evaluated for boxes with areas larger than 32**2 and smaller than 1000000**2.
- max_detections: The maximum number of detections a model is allowed to make per image. Defaults to 100.
- num_buckets: The number of buckets used to quantize prediction confidence scores. Increasing the number of buckets improves the accuracy of the approximation at the cost of performance. Defaults to 10000.
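To make the area_range semantics above concrete, here is an illustrative NumPy sketch of the filtering it describes. The helper name filter_boxes_by_area is hypothetical and not part of the KerasCV API:

```python
import numpy as np

def filter_boxes_by_area(boxes, area_range):
    """Keep only corners-format boxes whose area falls in [lower, upper).

    `boxes` is an (N, 4) array of [x1, y1, x2, y2] rows. This mirrors the
    area_range behavior described above; it is an illustrative helper only.
    """
    areas = (boxes[:, 2] - boxes[:, 0]) * (boxes[:, 3] - boxes[:, 1])
    lower, upper = area_range
    mask = (areas >= lower) & (areas < upper)
    return boxes[mask]

boxes = np.array([[0, 0, 10, 10], [0, 0, 100, 100]], dtype=np.float32)
# With area_range=(0, 32**2), only the 10x10 box (area 100) survives;
# the 100x100 box (area 10000) exceeds the upper bound.
small = filter_boxes_by_area(boxes, (0, 32**2))
```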
Usage:
COCOMeanAveragePrecision accepts two Tensors as input to its update_state() method. These Tensors represent bounding boxes in corners format. Utilities to convert Tensors from xywh to corners format can be found in keras_cv.utils.bounding_box.
Each image in a dataset may have a different number of bounding boxes, both in the ground truth dataset and the prediction set. To account for this, you may either pass a tf.RaggedTensor, or pad Tensors with -1s to indicate unused boxes. A utility function to perform this padding is available at keras_cv.bounding_box.pad_batch_to_shape().
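A minimal NumPy sketch of this -1 padding convention (a hypothetical helper, analogous in spirit to pad_batch_to_shape but not the KerasCV implementation):

```python
import numpy as np

def pad_boxes_to_shape(boxes, target_rows):
    """Pad an (N, K) box array with rows of -1 up to (target_rows, K).

    Mirrors the -1 sentinel convention described above; illustrative only.
    """
    n, k = boxes.shape
    padded = np.full((target_rows, k), -1.0, dtype=boxes.dtype)
    padded[:n] = boxes
    return padded

boxes = np.array([[0.0, 0.0, 10.0, 10.0, 1.0]])
padded = pad_boxes_to_shape(boxes, 4)  # shape (4, 5); rows 1-3 are all -1
```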
import numpy as np
import keras_cv

coco_map = keras_cv.metrics.COCOMeanAveragePrecision(
    bounding_box_format='xyxy',
    max_detections=100,
    class_ids=[1]
)

# Ground truth boxes are [x1, y1, x2, y2, class_id];
# predictions additionally carry a confidence score.
y_true = np.array([[[0, 0, 10, 10, 1], [20, 20, 10, 10, 1]]]).astype(np.float32)
y_pred = np.array([[[0, 0, 10, 10, 1, 1.0], [5, 5, 10, 10, 1, 0.9]]]).astype(
    np.float32
)
coco_map.update_state(y_true, y_pred)
coco_map.result()
# 0.24752477