easy_vision.python.evaluation

easy_vision.python.evaluation.action_detection_evaluation
class easy_vision.python.evaluation.action_detection_evaluation.ActionDetectionEvaluator(categories, tiou_thresholds=(0.1, 0.3, 0.5, 0.7, 0.9), top_k=3, metrics_set='action_detection')

    Bases: easy_vision.python.evaluation.evaluator.Evaluator

    Note: blocked videos are NOT checked.
    __init__(categories, tiou_thresholds=(0.1, 0.3, 0.5, 0.7, 0.9), top_k=3, metrics_set='action_detection')
        Construct eval ops from tensors.
        Parameters: metric_names – list of strings; the metric names this evaluator will return.
    add_batch_image_info(image_path_batched, groundtruth_classes_batched, groundtruth_boxes_batched, num_groundtruth_boxes_batched, detection_classes_batched, detection_boxes_batched, detection_scores_batched, detection_classes_scores_batched, num_detections_batched)
        Store predictions and labels in an internal list.
        Parameters: arg_list – a list of data containing prediction and label info.
    add_single_image_info(image_id, info_dict)
        Adds groundtruth and prediction info for a single image, to be used for evaluation.
        Parameters:
            image_id – A unique string/integer identifier for the image.
            info_dict – A dictionary of groundtruth and detection numpy arrays required for evaluation.
    evaluate()
        Python evaluation code, run after all test batches have been predicted.
        Returns: a dict in which each key is a metric_name and each value is the metric value.
    get_metric_ops(info_dict)
        Return self-defined metric_ops for TensorFlow evaluation.
        Parameters: info_dict – a dict of tensors for evaluation. Each key-value pair corresponds to a parameter of add_batch_image_info: the key is the argument name, and the tensor value is converted to a numpy array via py_func.

easy_vision.python.evaluation.classification_evaluation
class easy_vision.python.evaluation.classification_evaluation.ClassificationEvaluator(is_multi_class=True, metric_prefix=None, topk=5, include_metrics_per_category=False, categories=None)

    Bases: easy_vision.python.evaluation.evaluator.Evaluator

    Classification Evaluator.
    __init__(is_multi_class=True, metric_prefix=None, topk=5, include_metrics_per_category=False, categories=None)
        Constructor.
        Parameters:
            is_multi_class – whether to use multi-class evaluation or multi-label evaluation.
            metric_prefix – string, metric name prefix.
            include_metrics_per_category – if True, return precision, recall, and f1 for each class.
            categories – A list of dicts, each of which has the following keys: 'id' (required), an integer id uniquely identifying this category; 'name' (required), a string representing the category name, e.g. 'cat', 'dog'.
    add_batch_image_info(batch_predictions, batch_batch_probs, batch_labels)
        Store predictions and labels in an internal list.
        Parameters: arg_list – a list of data containing prediction and label info.
    add_single_image_info(image_id, info_dict)
        Adds groundtruth and prediction info for a single image, to be used for evaluation.
        Parameters:
            image_id – A unique string/integer identifier for the image.
            info_dict – A dictionary of groundtruth and prediction numpy arrays required for evaluation:
                predictions: prediction result, numpy boolean array with shape [num_classes] or numpy int scalar.
                probs: prediction probability, numpy float32 array with shape [num_classes].
                groundtruth_image_classes: numpy boolean array with shape [num_classes] or numpy int scalar.
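For illustration, a plausible info_dict for a 5-class multi-class problem might look as follows (a sketch only; the image id and values are made up, and the evaluator construction assumes default arguments):

    import numpy as np
    from easy_vision.python.evaluation.classification_evaluation import ClassificationEvaluator

    evaluator = ClassificationEvaluator(is_multi_class=True)
    info_dict = {
        # predicted class id as an int scalar (multi-class case)
        'predictions': np.int64(2),
        # per-class probabilities, float32, shape [num_classes]
        'probs': np.array([0.05, 0.1, 0.7, 0.1, 0.05], dtype=np.float32),
        # groundtruth class id (could also be a boolean indicator vector)
        'groundtruth_image_classes': np.int64(2),
    }
    evaluator.add_single_image_info('img_0001.jpg', info_dict)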
    calc_top_k_accuracy(label_ids, top_k_predictions)
        Calculate top-k accuracy.
        Parameters:
            label_ids – 1-d numpy array with shape [batch_size].
            top_k_predictions – 2-d numpy array with shape [batch_size, top_k_class].
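A minimal numpy sketch of this computation, assuming each row of top_k_predictions holds class ids ordered by descending score (an assumption made for illustration):

    import numpy as np

    def top_k_accuracy(label_ids, top_k_predictions):
        # A label counts as a hit if it appears anywhere in that row's top-k ids.
        hits = (top_k_predictions == label_ids[:, None]).any(axis=1)
        return hits.mean()

    labels = np.array([2, 0, 1])
    top3 = np.array([[2, 4, 1],
                     [3, 1, 0],
                     [4, 3, 2]])
    print(top_k_accuracy(labels, top3))  # 2 of 3 labels appear in their top-3 -> 0.666...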
    get_metric_ops(info_dict)
        Returns a dictionary of eval metric ops to use with tf.EstimatorSpec.
        Parameters:
            predictions – prediction result, numpy boolean/int array with shape [batch_size, num_classes], or int array with shape [batch_size] where each element is a class id.
            probs – prediction probability, numpy float32 array with shape [batch_size, num_classes].
            labels – label indicator, numpy boolean/int array with shape [batch_size, num_classes]; for example, label_id 2 in a 5-class problem is [0, 0, 1, 0, 0].

easy_vision.python.evaluation.coco_evaluation

Class for evaluating object detections with COCO metrics.
class easy_vision.python.evaluation.coco_evaluation.CocoDetectionEvaluator(categories, include_metrics_per_category=False, all_metrics_per_category=False, coco_analyze=False)

    Bases: easy_vision.python.evaluation.evaluator.DetectionEvaluator

    Class to evaluate COCO detection metrics.
    __init__(categories, include_metrics_per_category=False, all_metrics_per_category=False, coco_analyze=False)
        Constructor.
        Parameters:
            categories – A list of dicts, each of which has the following keys: 'id' (required), an integer id uniquely identifying this category; 'name' (required), a string representing the category name, e.g. 'cat', 'dog'.
            include_metrics_per_category – If True, include metrics for each category.
            all_metrics_per_category – Whether to include all the summary metrics for each category in per_category_ap. Be careful with setting this to True if you have more than a handful of categories, because it will pollute your mldash.
    add_batch_image_info(image_id_batched, groundtruth_boxes_batched, groundtruth_classes_batched, groundtruth_is_crowd_batched, num_gt_boxes_per_image, detection_boxes_batched, detection_scores_batched, detection_classes_batched, num_det_boxes_per_image)
        Update operation for adding a batch of images to the COCO evaluator.
    add_single_detected_image_info(image_id, detections_dict)
        Adds detections for a single image to be used for evaluation.
        If a detection has already been added for this image id, a warning is logged, and the detection is skipped.
        Parameters:
            image_id – A unique string/integer identifier for the image.
            detections_dict – A dictionary containing:
                DetectionResultFields.detection_boxes: float32 numpy array of shape [num_boxes, 4] containing num_boxes detection boxes of the format [ymin, xmin, ymax, xmax] in absolute image coordinates.
                DetectionResultFields.detection_scores: float32 numpy array of shape [num_boxes] containing detection scores for the boxes.
                DetectionResultFields.detection_classes: integer numpy array of shape [num_boxes] containing 1-indexed detection classes for the boxes.
        Raises: ValueError – If groundtruth for the image_id is not available.
    add_single_ground_truth_image_info(image_id, groundtruth_dict)
        Adds groundtruth for a single image to be used for evaluation.
        If the image has already been added, a warning is logged, and the groundtruth is ignored.
        Parameters:
            image_id – A unique string/integer identifier for the image.
            groundtruth_dict – A dictionary containing:
                InputDataFields.groundtruth_boxes: float32 numpy array of shape [num_boxes, 4] containing num_boxes groundtruth boxes of the format [ymin, xmin, ymax, xmax] in absolute image coordinates.
                InputDataFields.groundtruth_classes: integer numpy array of shape [num_boxes] containing 1-indexed groundtruth classes for the boxes.
                InputDataFields.groundtruth_is_crowd (optional): integer numpy array of shape [num_boxes] containing iscrowd flags for the groundtruth boxes.
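A minimal usage sketch of the single-image API (dict keys written here as plain strings matching the field names above; the box, class, and score values are illustrative):

    import numpy as np
    from easy_vision.python.evaluation.coco_evaluation import CocoDetectionEvaluator

    categories = [{'id': 1, 'name': 'cat'}, {'id': 2, 'name': 'dog'}]
    evaluator = CocoDetectionEvaluator(categories)

    evaluator.add_single_ground_truth_image_info(
        image_id='img_0001',
        groundtruth_dict={
            'groundtruth_boxes': np.array([[10., 10., 100., 120.]], dtype=np.float32),
            'groundtruth_classes': np.array([1], dtype=np.int32),
        })
    evaluator.add_single_detected_image_info(
        image_id='img_0001',
        detections_dict={
            'detection_boxes': np.array([[12., 11., 98., 118.]], dtype=np.float32),
            'detection_scores': np.array([0.9], dtype=np.float32),
            'detection_classes': np.array([1], dtype=np.int32),
        })
    metrics = evaluator.evaluate()  # e.g. metrics['DetectionBoxes_Precision/mAP']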
    add_single_image_info(image_id, info_dict)
        Adds groundtruth and prediction info for a single image, to be used for evaluation.
        Parameters:
            image_id – A unique string/integer identifier for the image.
            info_dict – A dictionary of groundtruth and detection numpy arrays required for evaluation.
    evaluate(analyze=False)
        Evaluates the detection boxes and returns a dictionary of COCO metrics.
        Parameters: analyze – if set True, calls coco analyze to analyze false positives and returns result images for each class; this process is very slow.
        Returns: Two dictionaries.
            The first dictionary holds:
            1. summary_metrics:
                'DetectionBoxes_Precision/mAP': mean average precision over classes, averaged over IOU thresholds ranging from .5 to .95 with .05 increments.
                'DetectionBoxes_Precision/mAP@.50IOU': mean average precision at 50% IOU.
                'DetectionBoxes_Precision/mAP@.75IOU': mean average precision at 75% IOU.
                'DetectionBoxes_Precision/mAP (small)': mean average precision for small objects (area < 32^2 pixels).
                'DetectionBoxes_Precision/mAP (medium)': mean average precision for medium sized objects (32^2 pixels < area < 96^2 pixels).
                'DetectionBoxes_Precision/mAP (large)': mean average precision for large objects (96^2 pixels < area < 10000^2 pixels).
                'DetectionBoxes_Recall/AR@1': average recall with 1 detection.
                'DetectionBoxes_Recall/AR@10': average recall with 10 detections.
                'DetectionBoxes_Recall/AR@100': average recall with 100 detections.
                'DetectionBoxes_Recall/AR@100 (small)': average recall for small objects with 100 detections.
                'DetectionBoxes_Recall/AR@100 (medium)': average recall for medium objects with 100 detections.
                'DetectionBoxes_Recall/AR@100 (large)': average recall for large objects with 100 detections.
            2. per_category_ap: if include_metrics_per_category is True, category-specific results with keys of the form 'Precision mAP ByCategory/category' (without the supercategory part if no supercategories exist). For backward compatibility, 'PerformanceByCategory' is included in the output regardless of all_metrics_per_category.
            The second dictionary holds a number of images which are the analyze results; if analyze is set False, the second dictionary is not returned.
    get_metric_ops(info_dict)
        Returns a dictionary of eval metric ops to use with tf.EstimatorSpec.
        Note that once value_op is called, the detections and groundtruth added via update_op are cleared.
        This function can take in groundtruth and detections for a batch of images, or for a single image. For the latter case, the batch dimension of the input tensors need not be present.
        Parameters: info_dict – a dict of tensors containing the following key-values:
            image_id: string/integer tensor of shape [batch] with unique identifiers for the images.
            groundtruth_boxes: float32 tensor of shape [batch, num_boxes, 4] containing num_boxes groundtruth boxes of the format [ymin, xmin, ymax, xmax] in absolute image coordinates.
            groundtruth_classes: int32 tensor of shape [batch, num_boxes] containing 1-indexed groundtruth classes for the boxes.
            detection_boxes: float32 tensor of shape [batch, num_boxes, 4] containing num_boxes detection boxes of the format [ymin, xmin, ymax, xmax] in absolute image coordinates.
            detection_scores: float32 tensor of shape [batch, num_boxes] containing detection scores for the boxes.
            detection_classes: int32 tensor of shape [batch, num_boxes] containing 1-indexed detection classes for the boxes.
            groundtruth_is_crowd: bool tensor of shape [batch, num_boxes] containing is_crowd annotations. This field is optional; if not passed, all boxes are treated as not is_crowd.
            num_gt_boxes_per_image: int32 tensor of shape [batch] containing the number of groundtruth boxes per image. If None, no padding is assumed in the groundtruth tensors.
            num_det_boxes_per_image: int32 tensor of shape [batch] containing the number of detection boxes per image. If None, no padding is assumed in the detection tensors.
        Returns: a dictionary of metric names to tuples of value_op and update_op that can be used as eval metric ops in tf.EstimatorSpec. Note that all update ops must be run together, and similarly all value ops must be run together, to guarantee correct behaviour.
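A sketch of how the returned ops would typically be wired into an EstimatorSpec (the model_fn, its tensors, and the pre-built evaluator are hypothetical; only the get_metric_ops call reflects this API):

    import tensorflow as tf

    def model_fn(features, labels, mode):
        predictions = ...  # hypothetical detection head outputs
        loss = ...
        if mode == tf.estimator.ModeKeys.EVAL:
            info_dict = {
                'image_id': features['image_id'],
                'groundtruth_boxes': labels['groundtruth_boxes'],
                'groundtruth_classes': labels['groundtruth_classes'],
                'detection_boxes': predictions['detection_boxes'],
                'detection_scores': predictions['detection_scores'],
                'detection_classes': predictions['detection_classes'],
            }
            # All update ops run each eval step; value ops run once at the end.
            eval_metric_ops = evaluator.get_metric_ops(info_dict)
            return tf.estimator.EstimatorSpec(
                mode, loss=loss, eval_metric_ops=eval_metric_ops)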
class easy_vision.python.evaluation.coco_evaluation.CocoMaskEvaluator(categories, include_metrics_per_category=False)

    Bases: easy_vision.python.evaluation.evaluator.DetectionEvaluator

    Class to evaluate COCO mask metrics.
    __init__(categories, include_metrics_per_category=False)
        Constructor.
        Parameters:
            categories – A list of dicts, each of which has the following keys: 'id' (required), an integer id uniquely identifying this category; 'name' (required), a string representing the category name, e.g. 'cat', 'dog'.
            include_metrics_per_category – If True, include metrics for each category.
    add_batch_image_info(image_id_batched, original_image_shape_batched, groundtruth_boxes_batched, groundtruth_classes_batched, groundtruth_instance_masks_batched, groundtruth_is_crowd_batched, num_gt_boxes_per_image, detection_scores_batched, detection_classes_batched, detection_masks_batched, num_det_boxes_per_image)
        Store predictions and labels in an internal list.
        Parameters: arg_list – a list of data containing prediction and label info.
    add_single_detected_image_info(image_id, detections_dict)
        Adds detections for a single image to be used for evaluation.
        If a detection has already been added for this image id, a warning is logged, and the detection is skipped.
        Parameters:
            image_id – A unique string/integer identifier for the image.
            detections_dict – A dictionary containing:
                DetectionResultFields.detection_scores: float32 numpy array of shape [num_boxes] containing detection scores for the boxes.
                DetectionResultFields.detection_classes: integer numpy array of shape [num_boxes] containing 1-indexed detection classes for the boxes.
                DetectionResultFields.detection_masks: optional uint8 numpy array of shape [num_boxes, image_height, image_width] containing instance masks corresponding to the boxes. The elements of the array must be in {0, 1}.
        Raises: ValueError – If groundtruth for the image_id is not available, or if the spatial shapes of groundtruth_instance_masks and detection_masks are incompatible.
    add_single_ground_truth_image_info(image_id, groundtruth_dict)
        Adds groundtruth for a single image to be used for evaluation.
        If the image has already been added, a warning is logged, and the groundtruth is ignored.
        Parameters:
            image_id – A unique string/integer identifier for the image.
            groundtruth_dict – A dictionary containing:
                InputDataFields.groundtruth_boxes: float32 numpy array of shape [num_boxes, 4] containing num_boxes groundtruth boxes of the format [ymin, xmin, ymax, xmax] in absolute image coordinates.
                InputDataFields.groundtruth_classes: integer numpy array of shape [num_boxes] containing 1-indexed groundtruth classes for the boxes.
                InputDataFields.groundtruth_instance_masks: uint8 numpy array of shape [num_boxes, image_height, image_width] containing groundtruth masks corresponding to the boxes. The elements of the array must be in {0, 1}.
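For illustration, a mask array of the required shape and value range can be constructed like this (a synthetic rectangle standing in for a real annotation mask):

    import numpy as np

    image_height, image_width = 200, 300
    masks = np.zeros((1, image_height, image_width), dtype=np.uint8)
    ymin, xmin, ymax, xmax = 10, 10, 100, 120
    masks[0, ymin:ymax, xmin:xmax] = 1  # elements stay in {0, 1}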
    add_single_image_info(image_id, info_dict)
        Adds groundtruth and prediction info for a single image, to be used for evaluation.
        Parameters:
            image_id – A unique string/integer identifier for the image.
            info_dict – A dictionary of groundtruth and detection numpy arrays required for evaluation.
    evaluate()
        Evaluates the detection masks and returns a dictionary of COCO metrics.
        Returns: A dictionary holding:
            1. summary_metrics:
                'DetectionMasks_Precision/mAP': mean average precision over classes, averaged over IOU thresholds ranging from .5 to .95 with .05 increments.
                'DetectionMasks_Precision/mAP@.50IOU': mean average precision at 50% IOU.
                'DetectionMasks_Precision/mAP@.75IOU': mean average precision at 75% IOU.
                'DetectionMasks_Precision/mAP (small)': mean average precision for small objects (area < 32^2 pixels).
                'DetectionMasks_Precision/mAP (medium)': mean average precision for medium sized objects (32^2 pixels < area < 96^2 pixels).
                'DetectionMasks_Precision/mAP (large)': mean average precision for large objects (96^2 pixels < area < 10000^2 pixels).
                'DetectionMasks_Recall/AR@1': average recall with 1 detection.
                'DetectionMasks_Recall/AR@10': average recall with 10 detections.
                'DetectionMasks_Recall/AR@100': average recall with 100 detections.
                'DetectionMasks_Recall/AR@100 (small)': average recall for small objects with 100 detections.
                'DetectionMasks_Recall/AR@100 (medium)': average recall for medium objects with 100 detections.
                'DetectionMasks_Recall/AR@100 (large)': average recall for large objects with 100 detections.
            2. per_category_ap: if include_metrics_per_category is True, category-specific results with keys of the form 'Precision mAP ByCategory/category' (without the supercategory part if no supercategories exist). For backward compatibility, 'PerformanceByCategory' is included in the output regardless of all_metrics_per_category.
    get_metric_ops(info_dict)
        Returns a dictionary of eval metric ops to use with tf.EstimatorSpec.
        Note that once value_op is called, the detections and groundtruth added via update_op are cleared.
        Parameters: info_dict – a dict of tensors containing the following key-values:
            image_id: unique string/integer identifier for the image.
            groundtruth_boxes: float32 tensor of shape [num_boxes, 4] containing num_boxes groundtruth boxes of the format [ymin, xmin, ymax, xmax] in absolute image coordinates.
            groundtruth_classes: int32 tensor of shape [num_boxes] containing 1-indexed groundtruth classes for the boxes.
            groundtruth_instance_masks: uint8 tensor of shape [num_boxes, image_height, image_width] containing groundtruth masks corresponding to the boxes. The elements of the array must be in {0, 1}.
            detection_scores: float32 tensor of shape [num_boxes] containing detection scores for the boxes.
            detection_classes: int32 tensor of shape [num_boxes] containing 1-indexed detection classes for the boxes.
            detection_masks: uint8 tensor of shape [num_boxes, image_height, image_width] containing instance masks corresponding to the boxes. The elements of the array must be in {0, 1}.
            groundtruth_is_crowd: bool tensor of shape [batch, num_boxes] containing is_crowd annotations. This field is optional; if not passed, all boxes are treated as not is_crowd.
            analyze: bool value; if set True, calls coco analyze() and summarizes images of the analysis results to TensorBoard.
        Returns: a dictionary of metric names to tuples of value_op and update_op that can be used as eval metric ops in tf.EstimatorSpec. Note that all update ops must be run together, and similarly all value ops must be run together, to guarantee correct behaviour.

easy_vision.python.evaluation.eval_util
easy_vision.python.evaluation.eval_util.get_evaluators(eval_config, categories=None, char_dict=None)
    Get the corresponding evaluators.
    Parameters:
        eval_config – protobuf object for eval.proto; details can be seen in python/proto/eval.proto.
        categories – A list of dicts, each of which has the following keys: 'id' (required), an integer id uniquely identifying this category; 'name' (required), a string representing the category name, e.g. 'cat', 'dog'.
        char_dict – an instance of CharDict.
    Returns: a list of instances of Evaluator.
easy_vision.python.evaluation.eval_util.get_metric_ops(eval_config, info_dict, categories=None, char_dict=None)
    Get the corresponding metric ops.
    Parameters:
        eval_config – protobuf object for eval.proto; details can be seen in python/proto/eval.proto.
        info_dict – A dict containing groundtruth info and detection info for building metric ops; please see each model for the different implementations.
        categories – A list of dicts, each of which has the following keys: 'id' (required), an integer id uniquely identifying this category; 'name' (required), a string representing the category name, e.g. 'cat', 'dog'.
        char_dict – an instance of CharDict.
    Returns: a dict of metric_ops; each key is the metric name and each value is a pair of update_op and value_op.

easy_vision.python.evaluation.evaluator
class easy_vision.python.evaluation.evaluator.DetectionEvaluator(categories)

    Bases: easy_vision.python.evaluation.evaluator.Evaluator

    Interface for object detection evaluation classes. Example usage:

        evaluator = DetectionEvaluator(categories)
        # Detections and groundtruth for image 1.
        evaluator.add_single_ground_truth_image_info(...)
        evaluator.add_single_detected_image_info(...)
        # Detections and groundtruth for image 2.
        evaluator.add_single_ground_truth_image_info(...)
        evaluator.add_single_detected_image_info(...)
        metrics_dict = evaluator.evaluate()
    __init__(categories)
        Constructor.
        Parameters: categories – A list of dicts, each of which has the following keys: 'id' (required), an integer id uniquely identifying this category; 'name' (required), a string representing the category name, e.g. 'cat', 'dog'.
    add_single_detected_image_info(image_id, detections_dict)
        Adds detections for a single image to be used for evaluation.
        Parameters:
            image_id – A unique string/integer identifier for the image.
            detections_dict – A dictionary of detection numpy arrays required for evaluation.
class easy_vision.python.evaluation.evaluator.Evaluator(metric_names=[])

    Bases: object

    Evaluator interface.
    __init__(metric_names=[])
        Construct eval ops from tensors.
        Parameters: metric_names – list of strings; the metric names this evaluator will return.
    add_batch_image_info(*arg_list)
        Store predictions and labels in an internal list.
        Parameters: arg_list – a list of data containing prediction and label info.
    evaluate()
        Python evaluation code, run after all test batches have been predicted.
        Returns: a dict in which each key is a metric_name and each value is the metric value.
    get_metric_ops(tensor_dict)
        Return self-defined metric_ops for TensorFlow evaluation.
        Parameters: tensor_dict – a dict of tensors for evaluation. Each key-value pair corresponds to a parameter of add_batch_image_info: the key is the argument name, and the tensor value is converted to a numpy array via py_func.
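A minimal sketch of a custom subclass implementing this interface, with get_metric_ops bridging tensors to numpy via tf.py_func (the MeanErrorEvaluator class and its tensor keys are illustrative, not part of the library):

    import numpy as np
    import tensorflow as tf
    from easy_vision.python.evaluation.evaluator import Evaluator

    class MeanErrorEvaluator(Evaluator):
        def __init__(self):
            super(MeanErrorEvaluator, self).__init__(metric_names=['mean_abs_error'])
            self._errors = []

        def add_batch_image_info(self, predictions, labels):
            # Receives numpy arrays (converted from tensors by py_func below).
            self._errors.extend(np.abs(predictions - labels).ravel().tolist())

        def evaluate(self):
            return {'mean_abs_error': float(np.mean(self._errors))}

        def get_metric_ops(self, tensor_dict):
            # update_op feeds each batch into the internal list; value_op runs
            # the final python evaluation after all batches have been seen.
            update_op = tf.py_func(
                self.add_batch_image_info,
                [tensor_dict['predictions'], tensor_dict['labels']], [])
            value_op = tf.py_func(
                lambda: np.float32(self.evaluate()['mean_abs_error']), [], tf.float32)
            return {'mean_abs_error': (value_op, update_op)}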

easy_vision.python.evaluation.icdar_detection_evaluation
class easy_vision.python.evaluation.icdar_detection_evaluation.TextDetectionEvaluator(categories, metrics_set='icdar_detection_metrics')

    Bases: easy_vision.python.evaluation.evaluator.Evaluator
    __init__(categories, metrics_set='icdar_detection_metrics')
        Construct eval ops from tensors.
        Parameters: metric_names – list of strings; the metric names this evaluator will return.
    add_batch_image_info(image_path_batched, groundtruth_classes_batched, groundtruth_keypoints_batched, groundtruth_difficult_batched, num_groundtruth_boxes_batched, detection_classes_batched, detection_keypoints_batched, num_detections_batched)
        Store predictions and labels in an internal list.
        Parameters: arg_list – a list of data containing prediction and label info.
    add_single_image_info(image_id, info_dict)
        Adds groundtruth and prediction info for a single image, to be used for evaluation.
        Parameters:
            image_id – A unique string/integer identifier for the image.
            info_dict – A dictionary of groundtruth and detection numpy arrays required for evaluation.
    get_metric_ops(info_dict)
        Return self-defined metric_ops for TensorFlow evaluation.
        Parameters: info_dict – a dict of tensors for evaluation. Each key-value pair corresponds to a parameter of add_batch_image_info: the key is the argument name, and the tensor value is converted to a numpy array via py_func.

easy_vision.python.evaluation.icdar_end2end_evaluation
class easy_vision.python.evaluation.icdar_end2end_evaluation.TextEnd2EndEvaluator(char_dict, categories, metrics_set='icdar_end2end_metrics')

    Bases: easy_vision.python.evaluation.evaluator.Evaluator

    TextSpottingEvaluator.
    __init__(char_dict, categories, metrics_set='icdar_end2end_metrics')
        Construct eval ops from tensors.
        Parameters: metric_names – list of strings; the metric names this evaluator will return.
    add_batch_image_info(image_path_batched, groundtruth_classes_batched, groundtruth_keypoints_batched, groundtruth_difficult_batched, num_groundtruth_boxes_batched, detection_classes_batched, detection_keypoints_batched, num_detections_batched, groundtruth_texts_batched, detection_texts_ids_batched, detection_texts_scores_batched)
        Store predictions and labels in an internal list.
        Parameters: arg_list – a list of data containing prediction and label info.
    add_single_image_info(image_id, info_dict)
        Adds groundtruth and prediction info for a single image, to be used for evaluation.
        Parameters:
            image_id – A unique string/integer identifier for the image.
            info_dict – A dictionary of groundtruth and detection numpy arrays required for evaluation.
    get_metric_ops(info_dict)
        Return self-defined metric_ops for TensorFlow evaluation.
        Parameters: info_dict – a dict of tensors for evaluation. Each key-value pair corresponds to a parameter of add_batch_image_info: the key is the argument name, and the tensor value is converted to a numpy array via py_func.

easy_vision.python.evaluation.pascal_evaluation

object_detection_evaluation module.

PascalEvaluationImpl is a class which manages the ground truth information of an object detection dataset and computes frequently used detection metrics such as Precision, Recall, and CorLoc of the provided detection results. It supports the following operations:
1) Add ground truth information of images sequentially.
2) Add detection results of images sequentially.
3) Evaluate detection metrics on already inserted detection results.
4) Write evaluation results into a pickle file for future processing or visualization.
Note: This module operates on numpy boxes and box lists.
class easy_vision.python.evaluation.pascal_evaluation.ObjectDetectionEvalMetrics(average_precisions, mean_ap, precisions, recalls, corlocs, mean_corloc)

    Bases: tuple
    average_precisions
        Alias for field number 0
    corlocs
        Alias for field number 4
    mean_ap
        Alias for field number 1
    mean_corloc
        Alias for field number 5
    precisions
        Alias for field number 2
    recalls
        Alias for field number 3
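"Alias for field number N" is the signature of a collections.namedtuple; an equivalent definition, for reference:

    import collections

    ObjectDetectionEvalMetrics = collections.namedtuple(
        'ObjectDetectionEvalMetrics',
        ['average_precisions', 'mean_ap', 'precisions',
         'recalls', 'corlocs', 'mean_corloc'])

    m = ObjectDetectionEvalMetrics(
        average_precisions={1: 0.8}, mean_ap=0.8, precisions=[],
        recalls=[], corlocs=[0.9], mean_corloc=0.9)
    assert m.mean_ap == m[1]  # attribute access mirrors field number 1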
class easy_vision.python.evaluation.pascal_evaluation.PascalDetectionEvaluator(categories, matching_iou_threshold=0.5, use_07_metric=False, use_weighted_mean_ap=False, metric_prefix='PascalBoxes')

    Bases: easy_vision.python.evaluation.pascal_evaluation.PascalObjectDetectionEvaluatorBase

    A class to evaluate detections using PASCAL metrics.
    If use_weighted_mean_ap is set True, weighted PASCAL metrics are returned. Weighted PASCAL metrics compute the mean average precision as the average precision given the scores and tp_fp_labels of all classes pooled together. In comparison, PASCAL metrics compute the mean average precision as the mean of the per-class average precisions.
    This definition is very similar to the mean of the per-class average precisions weighted by class frequency. However, they are typically not the same, as average precision is not a linear function of the scores and tp_fp_labels.
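A small numpy sketch of the distinction, using a simplified AP over ranked detections (the average_precision helper is a toy stand-in, not the library's implementation):

    import numpy as np

    def average_precision(scores, tp_labels, num_gt):
        # Simplified AP: average of the precision values at each true-positive rank.
        order = np.argsort(-scores)
        tp = tp_labels[order].astype(float)
        precision = np.cumsum(tp) / (np.arange(len(tp)) + 1)
        return (precision * tp).sum() / num_gt

    # Two classes with very different detection and groundtruth counts.
    scores = {1: np.array([0.9, 0.8, 0.7, 0.6]), 2: np.array([0.95])}
    tp = {1: np.array([1, 1, 0, 1]), 2: np.array([0])}
    num_gt = {1: 3, 2: 1}

    # PASCAL mAP: mean of the per-class APs.
    mean_ap = np.mean([average_precision(scores[c], tp[c], num_gt[c]) for c in (1, 2)])

    # Weighted mAP: one AP over all classes' scores/labels pooled together.
    weighted = average_precision(np.concatenate([scores[1], scores[2]]),
                                 np.concatenate([tp[1], tp[2]]),
                                 num_gt[1] + num_gt[2])
    print(mean_ap, weighted)  # ~0.458 vs ~0.442: generally not equal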
    __init__(categories, matching_iou_threshold=0.5, use_07_metric=False, use_weighted_mean_ap=False, metric_prefix='PascalBoxes')
        Constructor.
        Parameters:
            categories – A list of dicts, each of which has the following keys: 'id' (required), an integer id uniquely identifying this category; 'name' (required), a string representing the category name, e.g. 'cat', 'dog'.
            matching_iou_threshold – IOU threshold to use for matching groundtruth boxes to detection boxes.
            evaluate_corlocs – (optional) boolean which determines if CorLoc scores are to be returned or not.
            metric_prefix – (optional) string prefix for the metric name; if None, no prefix is used.
            use_weighted_mean_ap – (optional) boolean which determines if the mean average precision is computed directly from the scores and tp_fp_labels of all classes.
            evaluate_masks – If False, evaluation is performed on boxes. If True, mask evaluation is performed instead.
            group_of_weight – Weight of group-of boxes. If set to 0, detections of the correct class within a group-of box are ignored. If the weight is > 0, then if at least one detection falls within a group-of box with matching_iou_threshold, weight group_of_weight is added to true positives. Consequently, if no detection falls within a group-of box, weight group_of_weight is added to false negatives.
        Raises: ValueError – If the category ids are not 1-indexed.
    add_batch_image_info(image_id_batched, groundtruth_boxes_batched, groundtruth_classes_batched, groundtruth_is_difficult_batched, num_gt_boxes_per_image, detection_boxes_batched, detection_scores_batched, detection_classes_batched, num_det_boxes_per_image)
        Update operation for adding a batch of images to the Pascal evaluator.
    get_metric_ops(info_dict)
        Returns a dictionary of eval metric ops to use with tf.EstimatorSpec.
        Note that once value_op is called, the detections and groundtruth added via update_op are cleared.
        This function can take in groundtruth and detections for a batch of images, or for a single image. For the latter case, the batch dimension of the input tensors need not be present.
        Parameters: info_dict – a dict of tensors containing the following key-values:
            image_id: string/integer tensor of shape [batch] with unique identifiers for the images.
            groundtruth_boxes: float32 tensor of shape [batch, num_boxes, 4] containing num_boxes groundtruth boxes of the format [ymin, xmin, ymax, xmax] in absolute image coordinates.
            groundtruth_classes: int32 tensor of shape [batch, num_boxes] containing 1-indexed groundtruth classes for the boxes.
            detection_boxes: float32 tensor of shape [batch, num_boxes, 4] containing num_boxes detection boxes of the format [ymin, xmin, ymax, xmax] in absolute image coordinates.
            detection_scores: float32 tensor of shape [batch, num_boxes] containing detection scores for the boxes.
            detection_classes: int32 tensor of shape [batch, num_boxes] containing 1-indexed detection classes for the boxes.
            groundtruth_is_difficult: bool tensor of shape [batch, num_boxes] containing is_difficult annotations. This field is optional; if not passed, all boxes are treated as not is_difficult.
            num_groundtruth_boxes: int32 tensor of shape [batch] containing the number of groundtruth boxes per image. If None, no padding is assumed in the groundtruth tensors.
            num_detections: int32 tensor of shape [batch] containing the number of detection boxes per image. If None, no padding is assumed in the detection tensors.
        Returns: a dictionary of metric names to tuples of value_op and update_op that can be used as eval metric ops in tf.EstimatorSpec. Note that all update ops must be run together, and similarly all value ops must be run together, to guarantee correct behaviour.
class easy_vision.python.evaluation.pascal_evaluation.PascalEvaluationImpl(num_groundtruth_classes, matching_iou_threshold=0.5, nms_iou_threshold=1.0, nms_max_output_boxes=10000, use_07_metric=False, use_weighted_mean_ap=False, label_id_offset=0, group_of_weight=0.0)

    Bases: object

    Internal implementation of Pascal object detection metrics.
    __init__(num_groundtruth_classes, matching_iou_threshold=0.5, nms_iou_threshold=1.0, nms_max_output_boxes=10000, use_07_metric=False, use_weighted_mean_ap=False, label_id_offset=0, group_of_weight=0.0)
        x.__init__(...) initializes x; see help(type(x)) for signature.
    add_single_detected_image_info(image_key, detected_boxes, detected_scores, detected_class_labels, detected_masks=None)
        Adds detections for a single image to be used for evaluation.
        Parameters:
            image_key – A unique string/integer identifier for the image.
            detected_boxes – float32 numpy array of shape [num_boxes, 4] containing num_boxes detection boxes of the format [ymin, xmin, ymax, xmax] in absolute image coordinates.
            detected_scores – float32 numpy array of shape [num_boxes] containing detection scores for the boxes.
            detected_class_labels – integer numpy array of shape [num_boxes] containing 0-indexed detection classes for the boxes.
            detected_masks – np.uint8 numpy array of shape [num_boxes, height, width] containing num_boxes detection masks with values ranging between 0 and 1.
        Raises: ValueError – If the number of boxes, scores, and class labels differ in length.
    add_single_ground_truth_image_info(image_key, groundtruth_boxes, groundtruth_class_labels, groundtruth_is_difficult_list=None, groundtruth_is_group_of_list=None, groundtruth_masks=None)
        Adds groundtruth for a single image to be used for evaluation.
        Parameters:
            image_key – A unique string/integer identifier for the image.
            groundtruth_boxes – float32 numpy array of shape [num_boxes, 4] containing num_boxes groundtruth boxes of the format [ymin, xmin, ymax, xmax] in absolute image coordinates.
            groundtruth_class_labels – integer numpy array of shape [num_boxes] containing 0-indexed groundtruth classes for the boxes.
            groundtruth_is_difficult_list – A length-M numpy boolean array denoting whether a ground truth box is a difficult instance or not. To support the case that no boxes are difficult, it is set to None by default.
            groundtruth_is_group_of_list – A length-M numpy boolean array denoting whether a ground truth box is a group-of box or not. To support the case that no boxes are group-of, it is set to None by default.
            groundtruth_masks – uint8 numpy array of shape [num_boxes, height, width] containing num_boxes groundtruth masks. The mask values range from 0 to 1.
    evaluate()
        Compute the evaluation result.
        Returns: A named tuple with the following fields:
            average_precision: float numpy array of the average precision for each class.
            mean_ap: mean average precision of all classes, float scalar.
            precisions: list of precisions, each precision being a float numpy array.
            recalls: list of recalls, each recall being a float numpy array.
            corloc: numpy float array.
            mean_corloc: mean CorLoc score over all classes, float scalar.
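A minimal usage sketch (note the 0-indexed class labels here, unlike the 1-indexed categories of the wrapper evaluators; the box, score, and id values are illustrative):

    import numpy as np
    from easy_vision.python.evaluation.pascal_evaluation import PascalEvaluationImpl

    impl = PascalEvaluationImpl(num_groundtruth_classes=2)
    impl.add_single_ground_truth_image_info(
        image_key='img_0001',
        groundtruth_boxes=np.array([[10., 10., 100., 120.]], dtype=np.float32),
        groundtruth_class_labels=np.array([0], dtype=np.int32))
    impl.add_single_detected_image_info(
        image_key='img_0001',
        detected_boxes=np.array([[12., 11., 98., 118.]], dtype=np.float32),
        detected_scores=np.array([0.9], dtype=np.float32),
        detected_class_labels=np.array([0], dtype=np.int32))
    metrics = impl.evaluate()
    print(metrics.mean_ap)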
class easy_vision.python.evaluation.pascal_evaluation.PascalMaskEvaluator(categories, matching_iou_threshold=0.5, use_weighted_mean_ap=False, metric_prefix='PascalMasks')

    Bases: easy_vision.python.evaluation.pascal_evaluation.PascalObjectDetectionEvaluatorBase

    A class to evaluate instance masks using PASCAL metrics.
    __init__(categories, matching_iou_threshold=0.5, use_weighted_mean_ap=False, metric_prefix='PascalMasks')
        Constructor.
        Parameters:
            categories – A list of dicts, each of which has the following keys: 'id' (required), an integer id uniquely identifying this category; 'name' (required), a string representing the category name, e.g. 'cat', 'dog'.
            matching_iou_threshold – IOU threshold to use for matching groundtruth boxes to detection boxes.
            evaluate_corlocs – (optional) boolean which determines if CorLoc scores are to be returned or not.
            metric_prefix – (optional) string prefix for the metric name; if None, no prefix is used.
            use_weighted_mean_ap – (optional) boolean which determines if the mean average precision is computed directly from the scores and tp_fp_labels of all classes.
            evaluate_masks – If False, evaluation is performed on boxes. If True, mask evaluation is performed instead.
            group_of_weight – Weight of group-of boxes. If set to 0, detections of the correct class within a group-of box are ignored. If the weight is > 0, then if at least one detection falls within a group-of box with matching_iou_threshold, weight group_of_weight is added to true positives. Consequently, if no detection falls within a group-of box, weight group_of_weight is added to false negatives.
        Raises: ValueError – If the category ids are not 1-indexed.
    add_batch_image_info(image_id_batched, groundtruth_boxes_batched, groundtruth_classes_batched, groundtruth_instance_masks, groundtruth_is_difficult_batched, num_gt_boxes_per_image, detection_boxes_batched, detection_scores_batched, detection_classes_batched, detection_masks_batched, num_det_boxes_per_image)
        Update operation for adding a batch of images to the Pascal evaluator.
    get_metric_ops(info_dict)
        Returns a dictionary of eval metric ops to use with tf.EstimatorSpec.
        Note that once value_op is called, the detections and groundtruth added via update_op are cleared.
        This function can take in groundtruth and detections for a batch of images, or for a single image. For the latter case, the batch dimension of the input tensors need not be present.
        Parameters: info_dict – a dict of tensors containing the following key-values:
            image_id: string/integer identifier for the images.
            groundtruth_boxes: float32 tensor of shape [num_boxes, 4] containing num_boxes groundtruth boxes of the format [ymin, xmin, ymax, xmax] in absolute image coordinates.
            groundtruth_classes: int32 tensor of shape [num_boxes] containing 1-indexed groundtruth classes for the boxes.
            groundtruth_instance_masks: uint8 tensor of shape [num_boxes, image_height, image_width] containing groundtruth masks corresponding to the boxes. The elements of the array must be in {0, 1}.
            detection_boxes: float32 tensor of shape [num_boxes, 4] containing num_boxes detection boxes of the format [ymin, xmin, ymax, xmax] in absolute image coordinates.
            detection_scores: float32 tensor of shape [num_boxes] containing detection scores for the boxes.
            detection_classes: int32 tensor of shape [num_boxes] containing 1-indexed detection classes for the boxes.
            detection_masks: uint8 tensor of shape [num_boxes, image_height, image_width] containing instance masks corresponding to the boxes. The elements of the array must be in {0, 1}.
            groundtruth_is_difficult: bool tensor of shape [num_boxes] containing is_difficult annotations. This field is optional; if not passed, all boxes are treated as not is_difficult.
        Returns: a dictionary of metric names to tuples of value_op and update_op that can be used as eval metric ops in tf.EstimatorSpec. Note that all update ops must be run together, and similarly all value ops must be run together, to guarantee correct behaviour.
class easy_vision.python.evaluation.pascal_evaluation.PascalObjectDetectionEvaluatorBase(categories, matching_iou_threshold=0.5, evaluate_corlocs=False, metric_prefix=None, use_07_metric=False, use_weighted_mean_ap=False, evaluate_masks=False, group_of_weight=0.0)

    Bases: easy_vision.python.evaluation.evaluator.DetectionEvaluator

    A class to evaluate detections.
    __init__(categories, matching_iou_threshold=0.5, evaluate_corlocs=False, metric_prefix=None, use_07_metric=False, use_weighted_mean_ap=False, evaluate_masks=False, group_of_weight=0.0)
        Constructor.
        Parameters:
            categories – A list of dicts, each of which has the following keys: 'id' (required), an integer id uniquely identifying this category; 'name' (required), a string representing the category name, e.g. 'cat', 'dog'.
            matching_iou_threshold – IOU threshold to use for matching groundtruth boxes to detection boxes.
            evaluate_corlocs – (optional) boolean which determines if CorLoc scores are to be returned or not.
            metric_prefix – (optional) string prefix for the metric name; if None, no prefix is used.
            use_weighted_mean_ap – (optional) boolean which determines if the mean average precision is computed directly from the scores and tp_fp_labels of all classes.
            evaluate_masks – If False, evaluation is performed on boxes. If True, mask evaluation is performed instead.
            group_of_weight – Weight of group-of boxes. If set to 0, detections of the correct class within a group-of box are ignored. If the weight is > 0, then if at least one detection falls within a group-of box with matching_iou_threshold, weight group_of_weight is added to true positives. Consequently, if no detection falls within a group-of box, weight group_of_weight is added to false negatives.
        Raises: ValueError – If the category ids are not 1-indexed.
    add_single_detected_image_info(image_id, detections_dict)
        Adds detections for a single image to be used for evaluation.
        Parameters:
            image_id – A unique string/integer identifier for the image.
            detections_dict – A dictionary containing:
                standard_fields.DetectionResultFields.detection_boxes: float32 numpy array of shape [num_boxes, 4] containing num_boxes detection boxes of the format [ymin, xmin, ymax, xmax] in absolute image coordinates.
                standard_fields.DetectionResultFields.detection_scores: float32 numpy array of shape [num_boxes] containing detection scores for the boxes.
                standard_fields.DetectionResultFields.detection_classes: integer numpy array of shape [num_boxes] containing 1-indexed detection classes for the boxes.
                standard_fields.DetectionResultFields.detection_masks: uint8 numpy array of shape [num_boxes, height, width] containing num_boxes masks of values ranging between 0 and 1.
        Raises: ValueError – If detection masks are not in the detections dictionary.
    add_single_ground_truth_image_info(image_id, groundtruth_dict)
        Adds groundtruth for a single image to be used for evaluation.
        Parameters:
            image_id – A unique string/integer identifier for the image.
            groundtruth_dict – A dictionary containing:
                groundtruth_boxes: float32 numpy array of shape [num_boxes, 4] containing num_boxes groundtruth boxes of the format [ymin, xmin, ymax, xmax] in absolute image coordinates.
                groundtruth_classes: integer numpy array of shape [num_boxes] containing 1-indexed groundtruth classes for the boxes.
                groundtruth_difficult: optional length-M numpy boolean array denoting whether a ground truth box is a difficult instance or not. This field is optional to support the case that no boxes are difficult.
                groundtruth_instance_masks: optional numpy array of shape [num_boxes, height, width] with values in {0, 1}.
        Raises: ValueError – On adding groundtruth for an image more than once. Will also raise an error if instance masks are not in the groundtruth dictionary.
    add_single_image_info(image_id, info_dict)
        Adds groundtruth and prediction info for a single image, to be used for evaluation.
        Parameters:
            image_id – A unique string/integer identifier for the image.
            info_dict – A dictionary of groundtruth and detection numpy arrays required for evaluation.
    evaluate()
        Compute the evaluation result.
        Returns: A dictionary of metrics with the following fields:
            summary_metrics: 'Precision/mAP@<matching_iou_threshold>IOU': mean average precision at the specified IOU threshold.
            per_category_ap: category-specific results with keys of the form 'PerformanceByCategory/mAP@<matching_iou_threshold>IOU/category'.
    get_metric_ops(image_id, groundtruth_boxes, groundtruth_classes, detection_boxes, detection_scores, detection_classes, groundtruth_instance_masks=None, detection_masks=None, groundtruth_is_difficult=None, num_gt_boxes_per_image=None, num_det_boxes_per_image=None)
        Returns a dictionary of eval metric ops to use with tf.EstimatorSpec.
        Note that once value_op is called, the detections and groundtruth added via update_op are cleared.
        This function can take in groundtruth and detections for a batch of images, or for a single image. For the latter case, the batch dimension of the input tensors need not be present.
        Parameters:
            image_id – string/integer tensor of shape [batch] with unique identifiers for the images.
            groundtruth_boxes – float32 tensor of shape [batch, num_boxes, 4] containing num_boxes groundtruth boxes of the format [ymin, xmin, ymax, xmax] in absolute image coordinates.
            groundtruth_classes – int32 tensor of shape [batch, num_boxes] containing 1-indexed groundtruth classes for the boxes.
            detection_boxes – float32 tensor of shape [batch, num_boxes, 4] containing num_boxes detection boxes of the format [ymin, xmin, ymax, xmax] in absolute image coordinates.
            detection_scores – float32 tensor of shape [batch, num_boxes] containing detection scores for the boxes.
            detection_classes – int32 tensor of shape [batch, num_boxes] containing 1-indexed detection classes for the boxes.
            groundtruth_instance_masks – uint8 tensor of shape [num_boxes, image_height, image_width] containing groundtruth masks corresponding to the boxes. The elements of the array must be in {0, 1}.
            detection_masks – uint8 tensor of shape [num_boxes, image_height, image_width] containing instance masks corresponding to the boxes. The elements of the array must be in {0, 1}.
            groundtruth_is_difficult – bool tensor of shape [batch, num_boxes] containing is_difficult annotations. This field is optional; if not passed, all boxes are treated as not is_difficult.
            num_gt_boxes_per_image – int32 tensor of shape [batch] containing the number of groundtruth boxes per image. If None, no padding is assumed in the groundtruth tensors.
            num_det_boxes_per_image – int32 tensor of shape [batch] containing the number of detection boxes per image. If None, no padding is assumed in the detection tensors.
        Returns: a dictionary of metric names to tuples of value_op and update_op that can be used as eval metric ops in tf.EstimatorSpec. Note that all update ops must be run together, and similarly all value ops must be run together, to guarantee correct behaviour.

easy_vision.python.evaluation.text_recognition_evaluation
class easy_vision.python.evaluation.text_recognition_evaluation.TextRecognitionEvaluator(char_dict)

    Bases: easy_vision.python.evaluation.evaluator.Evaluator
    __init__(char_dict)
        Construct eval ops from tensors.
        Parameters: metric_names – list of strings; the metric names this evaluator will return.
    add_batch_image_info(image_path_batched, groundtruth_text_batched, sequence_predict_ids_batched, sequence_probability_batched)
        Store predictions and labels in an internal list.
        Parameters: arg_list – a list of data containing prediction and label info.
    add_single_image_info(image_id, info_dict)
        Adds groundtruth and prediction info for a single image, to be used for evaluation.
        Parameters:
            image_id – A unique string/integer identifier for the image.
            info_dict – A dictionary of groundtruth and detection numpy arrays required for evaluation.
    get_metric_ops(info_dict)
        Return self-defined metric_ops for TensorFlow evaluation.
        Parameters: info_dict – a dict of tensors for evaluation. Each key-value pair corresponds to a parameter of add_batch_image_info: the key is the argument name, and the tensor value is converted to a numpy array via py_func.
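The recognition metrics themselves are not documented above; for background, ICDAR-style recognition and end-to-end evaluations commonly report sequence accuracy and normalized edit distance, sketched here (illustrative only; this evaluator's exact definitions may differ):

    def edit_distance(a, b):
        # Classic one-row dynamic-programming Levenshtein distance.
        dp = list(range(len(b) + 1))
        for i, ca in enumerate(a, 1):
            prev, dp[0] = dp[0], i
            for j, cb in enumerate(b, 1):
                prev, dp[j] = dp[j], min(dp[j] + 1, dp[j - 1] + 1,
                                         prev + (ca != cb))
        return dp[-1]

    gt, pred = 'hello', 'helo'
    sequence_correct = float(gt == pred)                 # 0.0
    norm_ed = edit_distance(gt, pred) / max(len(gt), 1)  # 0.2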