Object Detector Analyzer

Interactively visualize and evaluate object detection results against ground truth

Since R2026a

Description

The Object Detector Analyzer app enables you to visualize and evaluate object detection results against ground truth data. Using the app, you can:

  • Evaluate object detection results, either without ground truth data or by comparing detections to ground truth data to generate performance metrics. To get started, see Get Started with Object Detector Analyzer App.

  • Run a pretrained object detector in the app, or import precomputed object detection results from the workspace. For a list of supported object detectors to run in the app, see Run Supported Object Detectors. For information about the format of precomputed detection results, see Import Precomputed Detection Results.

  • Visualize and compare detections and ground truth annotations in an interactive image browser. Inspect individual detections, with confidence scores, false positives, and false negatives overlaid on the images.

  • Compute and visualize detector performance metrics: precision-recall curves, confusion matrices, AP, mAP, and mLAMR across data sets, classes, and overlap thresholds, and detection performance by object area. For more information on performance metrics, see Evaluate Object Detector Performance.

  • Interactively adjust the detection threshold and overlap (IoU) threshold to analyze how stricter or more lenient thresholds impact detector performance, enabling you to tune your detector for the optimal trade-off between false negatives and false positives.

  • Export all detections or filtered results to the workspace for further analysis.

  • Export computed performance metrics as an objectDetectionMetrics object. You can use this object to create custom visualizations, compare different detector models, or perform further performance analysis.
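
After exporting, you can work with the metrics programmatically. As a hedged sketch, assuming the app exported a variable named metrics to the workspace, and assuming the summarize object function and ClassMetrics property of objectDetectionMetrics behave as shown (check the objectDetectionMetrics reference page for your release):

```matlab
% Sketch: further analysis with an exported objectDetectionMetrics object.
% "metrics" is assumed to be the variable exported from the app.

% Summarize returns data-set-level and class-level summary tables.
[datasetSummary,classSummary] = summarize(metrics);
disp(datasetSummary)   % overall metrics such as mAP across the data set
disp(classSummary)     % per-class metrics such as AP

% Per-class precision and recall values are available for custom plots.
disp(metrics.ClassMetrics)
```

This is one way to compare detector models: export metrics for each model, then compare their summary tables side by side.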

To learn more about this app, see Get Started with Object Detector Analyzer App.

Object Detector Analyzer App

Open the Object Detector Analyzer App

  • MATLAB® Toolstrip: On the Apps tab, under Image Processing and Computer Vision, click the app icon.

  • MATLAB command prompt: Enter objectDetectorAnalyzer.

Examples


This example shows how to visualize object detections on a video using the Object Detector Analyzer App.

Load a pretrained people detector.

detector = peopleDetector();

Download a pedestrian tracking video and load it using VideoReader.

videoURL = "https://ssd.mathworks.com/supportfiles/vision/data/PedestrianTrackingVideo.avi";
videoFilename = "PedestrianTrackingVideo.avi";
if ~exist(videoFilename,"file")
    disp("Downloading Pedestrian Tracking Video (35 MB)")
    websave(videoFilename,videoURL);
end

reader = VideoReader(videoFilename);

The Object Detector Analyzer app can import any datastore that produces image data. Use an arrayDatastore to produce video frame indices, and create a transformed datastore that calls the read object function of the VideoReader object with each frame index returned by the arrayDatastore. Reading from the resulting datastore then returns one frame of video data at a time.

frameIndexDS = arrayDatastore((1:reader.NumFrames)',IterationDimension=1,OutputType="same");
videoFrameDS = transform(frameIndexDS,@(frameIdx)read(reader,frameIdx));

Launch the Object Detector Analyzer app with the detector and the datastore as input. The app runs the detector on the video frames and displays the detections on each frame. By default, the app sorts the video frames by the number of detections in each frame. To view the frames in sequence from start to finish, sort them by Import order using the Sort By dropdown.

objectDetectorAnalyzer(detector,videoFrameDS)

Programmatic Use

objectDetectorAnalyzer opens a new session of the app. To start a new analysis session, select New Session, then Evaluate detections without ground truth or Evaluate detections against ground truth.

objectDetectorAnalyzer(results,imds) opens the app and visualizes the object detection results, results, on all of the test images from the ImageDatastore object imds. You must specify results as a table with N rows, where N is the number of images, and three columns in this order:

  • Bounding boxes for the corresponding image, specified as an M-by-4 matrix. Each row contains a bounding box of the form [x y width height], where x and y specify the upper-left corner of the box, and width and height specify its size in pixels.

  • Detection confidence scores for the corresponding image, specified as an M-by-1 vector.

  • Object labels for the corresponding image, specified as an M-by-1 categorical vector of class names, one for each bounding box.

M is the number of bounding boxes in each image.
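
As a hypothetical sketch, a results table in this format for two test images might look like this (the box, score, and label values are made up for illustration):

```matlab
% Hypothetical detection results for two images (values for illustration only).
% Column 1: M-by-4 boxes, column 2: M-by-1 scores, column 3: M-by-1 labels.
classes = ["person" "car"];

boxes1  = [10 20 50 100; 200 40 60 120];   % two detections in image 1
scores1 = [0.95; 0.80];
labels1 = categorical(["person"; "car"],classes);

boxes2  = [30 60 45 90];                   % one detection in image 2
scores2 = 0.70;
labels2 = categorical("person",classes);

results = table({boxes1; boxes2},{scores1; scores2},{labels1; labels2}, ...
    VariableNames=["Boxes" "Scores" "Labels"]);
```

With a matching two-image ImageDatastore imds, objectDetectorAnalyzer(results,imds) then visualizes these detections in the app.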

objectDetectorAnalyzer(results,groundTruthData) opens the app, evaluates the object detection results, results, and visualizes them on all of the ground truth images, groundTruthData. Specify the ground truth data as a datastore or a groundTruth object. You can create a groundTruth object by exporting labels you create using the Image Labeler app into the workspace. If groundTruthData is a datastore, you must format the datastore such that calling the read function on it returns a cell array with these columns.

  • The first column must contain the RGB, grayscale, or binary image data for each image.

  • The second column must contain the bounding boxes for the corresponding image, specified as an M-by-4 matrix. M is the number of ground truth objects.

  • The third column must contain the object class names for the corresponding image, specified as a categorical vector of size M-by-1. M is the number of ground truth objects.
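
One common way to build such a datastore, sketched here under the assumption that your labels are in a table with an image filename variable followed by one box variable per class (the format produced by exporting from the Image Labeler app via objectDetectorTrainingData or similar workflows), is to combine an imageDatastore with a boxLabelDatastore:

```matlab
% Sketch: build a ground truth datastore from a labeled data table.
% Assumes gTruthTable (a hypothetical name) has an image filename column
% followed by one column of M-by-4 box matrices per class.
imds = imageDatastore(gTruthTable{:,1});
blds = boxLabelDatastore(gTruthTable(:,2:end));

% Reading from the combined datastore returns a cell array of the form
% {image, boxes, labels}, which matches the format the app expects.
groundTruthData = combine(imds,blds);

objectDetectorAnalyzer(results,groundTruthData)
```

Combining the two datastores keeps images and labels paired by row, so the read order of one always matches the other.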


Version History

Introduced in R2026a