
COCORecall metric


COCORecall class

keras_cv.metrics.COCORecall(
    class_ids,
    bounding_box_format,
    iou_thresholds=None,
    area_range=None,
    max_detections=100,
    **kwargs
)

COCORecall computes the COCO recall metric.

A usage guide is available on keras.io: Using KerasCV COCO metrics. Full implementation details are available in the KerasCV COCO metrics whitepaper.

Arguments

  • class_ids: The class IDs to evaluate the metric for. To evaluate the metric for every class in a set of sequentially labelled classes, pass range(classes).
  • bounding_box_format: Format of the incoming bounding boxes. Supported values are "xywh", "center_xywh", "xyxy".
  • iou_thresholds: IoU thresholds over which to evaluate the recall. Must be a tuple of floats. Defaults to the standard COCO range [0.5:0.05:0.95], i.e. thresholds from 0.5 to 0.95 in steps of 0.05.
  • area_range: area range used to restrict the bounding boxes considered in metric computation. Must be a tuple of floats: the first value is a lower bound on box area and the second an upper bound. For example, passing (0, 32**2) evaluates recall only for objects with area smaller than 32*32, while (32**2, 1000000**2) evaluates it only for boxes with area larger than 32**2 and smaller than 1000000**2. Defaults to None, in which case all bounding boxes are counted. (A constructor example using these arguments follows this list.)
  • max_detections: the maximum number of detections a model is allowed to make. Must be an integer, defaults to 100.
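
As a sketch of how these arguments combine, the constructor call below restricts evaluation to small objects at a single IoU threshold; the class count, threshold, and area bound are illustrative values, not library defaults:

import keras_cv

# Recall for ten sequentially labelled classes, evaluated only at IoU 0.5 and
# only for "small" objects (area below 32*32), keeping at most 100 detections.
small_object_recall = keras_cv.metrics.COCORecall(
    class_ids=range(10),
    bounding_box_format="xywh",
    iou_thresholds=(0.5,),
    area_range=(0, 32**2),
    max_detections=100,
)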

Usage:

COCORecall accepts two Tensors as input to its update_state method. These Tensors represent bounding boxes in the format specified by the bounding_box_format argument. Utilities to convert Tensors between bounding box formats (for example, from xywh to xyxy) can be found in keras_cv.bounding_box.
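
For instance, a batch of xywh boxes could be converted before being handed to the metric; convert_format and its source/target arguments are assumed here from keras_cv.bounding_box, so check the module in your installed version:

import numpy as np

import keras_cv

# One image with one box in xywh format: [x, y, width, height].
boxes_xywh = np.array([[[10.0, 20.0, 30.0, 40.0]]], dtype=np.float32)

# Convert to xyxy (corners) format; the class and confidence columns used by
# COCORecall would be appended afterwards.
boxes_xyxy = keras_cv.bounding_box.convert_format(
    boxes_xywh, source="xywh", target="xyxy"
)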

Each image in a dataset may have a different number of bounding boxes, both in the ground truth dataset and the prediction set. In order to account for this, you may either pass a tf.RaggedTensor, or pad Tensors with -1s to indicate unused boxes. A utility function to perform this padding is available at keras_cv.bounding_box.pad_batch_to_shape.
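
As a minimal sketch using plain TensorFlow (the -1 padding convention is the one described above; keras_cv.bounding_box.pad_batch_to_shape achieves a similar result):

import tensorflow as tf

# Two images with different numbers of ground-truth boxes,
# each box encoded as [x1, y1, x2, y2, class_id].
ragged_boxes = tf.ragged.constant(
    [
        [[0.0, 0.0, 10.0, 10.0, 1.0]],
        [[0.0, 0.0, 10.0, 10.0, 1.0], [20.0, 20.0, 30.0, 30.0, 1.0]],
    ],
    ragged_rank=1,
)

# Either pass the RaggedTensor directly to update_state, or densify it,
# padding the unused rows with -1 so the metric ignores them.
padded_boxes = ragged_boxes.to_tensor(default_value=-1)  # shape (2, 2, 5)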

import numpy as np

import keras_cv

coco_recall = keras_cv.metrics.COCORecall(
    bounding_box_format='xyxy',
    max_detections=100,
    class_ids=[1]
)

# Ground truth boxes: [x1, y1, x2, y2, class_id]
y_true = np.array([[[0, 0, 10, 10, 1], [20, 20, 30, 30, 1]]]).astype(np.float32)
# Predicted boxes: [x1, y1, x2, y2, class_id, confidence]
y_pred = np.array([[[0, 0, 10, 10, 1, 1.0], [5, 5, 10, 10, 1, 0.9]]]).astype(
    np.float32
)
coco_recall.update_state(y_true, y_pred)
coco_recall.result()
# 0.5
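
COCORecall follows the standard keras.metrics.Metric interface, so state can be accumulated across many batches and cleared between evaluations. The loop below is a minimal sketch that reuses the arrays from the example above in place of a real dataset:

# Streaming evaluation: update_state accumulates matches across batches,
# result() reads the running recall, and reset_state() clears the state.
coco_recall.reset_state()
for y_true_batch, y_pred_batch in [(y_true, y_pred)]:  # stand-in for a real dataset
    coco_recall.update_state(y_true_batch, y_pred_batch)
print(float(coco_recall.result()))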