
acfObjectDetectorMonoCamera

Detect objects in monocular camera using aggregate channel features

Description

The acfObjectDetectorMonoCamera object contains information about an aggregate channel features (ACF) object detector that is configured for use with a monocular camera sensor. To detect objects in an image captured by the camera, pass the configured detector to the detect function.
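
For example, a minimal sketch of a detection call, assuming configuredDetector is an acfObjectDetectorMonoCamera object created as shown in the Creation steps below, and that testImage.png is a hypothetical image captured by the same camera:

% Minimal sketch: run the configured detector on one camera frame.
% configuredDetector and the image file name are assumptions for illustration.
I = imread('testImage.png');                     % hypothetical camera frame
[bboxes,scores] = detect(configuredDetector,I);  % bounding boxes and confidence scores
I = insertObjectAnnotation(I,'rectangle',bboxes,scores);
imshow(I)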

Creation

  1. Create an acfObjectDetector object by calling the trainACFObjectDetector function with training data.

    detector = trainACFObjectDetector(trainingData,...);

    Alternatively, create a pretrained detector using functions such as vehicleDetectorACF or peopleDetectorACF.

  2. Create a monoCamera object to model the monocular camera sensor.

    sensor = monoCamera(...);
  3. Create an acfObjectDetectorMonoCamera object by passing the detector and sensor as inputs to the configureDetectorMonoCamera function. The configured detector inherits property values from the original detector. A combined sketch of all three steps follows this list.

    configuredDetector = configureDetectorMonoCamera(detector,sensor,...);
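
Putting the three steps together, the following is a minimal sketch that uses a pretrained vehicle detector; the intrinsics, camera height, and pitch values are illustrative placeholders rather than parameters of a real sensor:

detector = vehicleDetectorACF;                        % pretrained ACF vehicle detector

focalLength    = [800 800];                           % [fx fy], in pixels (assumed)
principalPoint = [320 240];                           % [cx cy], in pixels (assumed)
imageSize      = [480 640];                           % [mrows ncols]
intrinsics = cameraIntrinsics(focalLength,principalPoint,imageSize);
sensor = monoCamera(intrinsics,1.5,'Pitch',1);        % camera 1.5 m above the ground

vehicleWidth = [1.5 2.5];                             % expected object widths, in meters
configuredDetector = configureDetectorMonoCamera(detector,sensor,vehicleWidth);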

Properties


ModelName - Name of classification model

Name of the classification model, specified as a character vector or string scalar. By default, the name is set to the heading of the second column of the trainingData table specified in the trainACFObjectDetector function. You can modify this name after creating the acfObjectDetectorMonoCamera object.

Example: 'stopSign'
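
For instance, a one-line sketch of renaming the model after configuration; configuredDetector is an assumed acfObjectDetectorMonoCamera variable:

configuredDetector.ModelName = 'vehicle';   % rename the classification model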

ObjectTrainingSize - Size of training images

This property is read-only.

Size of training images, specified as a [height width] vector.

Example: [100 100]

NumWeakLearners - Number of weak learners

This property is read-only.

Number of weak learners used in the detector, specified as an integer. NumWeakLearners is less than or equal to the maximum number of weak learners for the last training stage. To restrict this maximum, use the 'MaxWeakLearners' name-value pair argument of the trainACFObjectDetector function.
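
For example, a minimal sketch of capping the number of weak learners during training; trainingData is assumed to be a labeled table in the format that trainACFObjectDetector expects:

% Restrict the final training stage to at most 512 weak learners.
detector = trainACFObjectDetector(trainingData,'MaxWeakLearners',512);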

Camera - Camera configuration

This property is read-only.

Camera configuration, specified as a monoCamera object. The object contains the camera intrinsics; the camera location, pitch, yaw, and roll; and the world units for these parameters. Use the intrinsics to transform object points in the image to world coordinates, which you can then compare to the values in the WorldObjectSize property.
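
As a sketch of that image-to-world mapping, assuming sensor is the monoCamera object stored in this property and that the pixel coordinates below are hypothetical points on the road surface:

imagePoints   = [300 400; 360 400];                 % hypothetical [x y] pixel locations
vehiclePoints = imageToVehicle(sensor,imagePoints); % [x y] positions in vehicle coordinates
widthEstimate = abs(diff(vehiclePoints(:,2)));      % lateral separation, in world units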

WorldObjectSize - Range of object widths and lengths

This property is read-only.

Range of object widths and lengths in world units, specified as a [minWidth maxWidth] vector or a [minWidth maxWidth; minLength maxLength] matrix. Specifying the range of object lengths is optional.
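
For example, a minimal sketch of configuring a detector with both width and length ranges; the numeric values are illustrative vehicle dimensions in meters, and detector and sensor are assumed to exist as in the Creation section:

objectSize = [1.5 2.5; ...   % [minWidth maxWidth]
              3.0 6.0];      % [minLength maxLength]
configuredDetector = configureDetectorMonoCamera(detector,sensor,objectSize);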

Object Functions

detect - Detect objects using ACF object detector configured for monocular camera

Examples


Configure an ACF object detector for use with a monocular camera mounted on an ego vehicle. Use this detector to detect vehicles within video frames captured by the camera.

Load an acfObjectDetector object pretrained to detect vehicles.

detector = vehicleDetectorACF;

Model a monocular camera sensor by creating a monoCamera object. This object contains the camera intrinsics and the location of the camera on the ego vehicle.

focalLength = [309.4362 344.2161];    % [fx fy]
principalPoint = [318.9034 257.5352]; % [cx cy]
imageSize = [480 640];                % [mrows ncols]
height = 2.1798;                      % height of camera above ground, in meters
pitch = 14;                           % pitch of camera, in degrees
intrinsics = cameraIntrinsics(focalLength,principalPoint,imageSize);

monCam = monoCamera(intrinsics,height,'Pitch',pitch);

Configure the detector for use with the camera. Limit the width of detected objects to a typical range for vehicle widths: 1.5–2.5 meters. The configured detector is an acfObjectDetectorMonoCamera object.

vehicleWidth = [1.5 2.5];
detectorMonoCam = configureDetectorMonoCamera(detector,monCam,vehicleWidth);

Load a video captured from the camera, and create a video reader and player.

videoFile = fullfile(toolboxdir('driving'),'drivingdata','caltech_washington1.avi');
reader = VideoReader(videoFile);
videoPlayer = vision.VideoPlayer('Position',[29 597 643 386]);

Run the detector in a loop over the video. Annotate the video with the bounding boxes for the detections and the detection confidence scores.

cont = hasFrame(reader);
while cont
   I = readFrame(reader);

   % Run the detector.
   [bboxes,scores] = detect(detectorMonoCam,I);
   if ~isempty(bboxes)
       I = insertObjectAnnotation(I, ...
                           'rectangle',bboxes, ...
                           scores, ...
                           'AnnotationColor','g');
   end
   videoPlayer(I)
   % Exit the loop if the video player figure is closed.
   cont = hasFrame(reader) && isOpen(videoPlayer);
end

release(videoPlayer);

Extended Capabilities

Version History

Introduced in R2017a