
detect

Detect objects using R-CNN deep learning detector

Syntax

bboxes = detect(detector,I)
[bboxes,scores] = detect(detector,I)
[___,labels] = detect(detector,I)
[___] = detect(___,roi)
[___] = detect(___,Name,Value)

Description

bboxes = detect(detector,I) detects objects within image I using an R-CNN (regions with convolutional neural networks) object detector. The locations of objects detected are returned as a set of bounding boxes.

When using this function, use of a CUDA® enabled NVIDIA® GPU with a compute capability of 3.0 or higher is highly recommended. The GPU reduces computation time significantly. Usage of the GPU requires Parallel Computing Toolbox™.

[bboxes,scores] = detect(detector,I) also returns the detection scores for each bounding box.


[___,labels] = detect(detector,I) also returns a categorical array of labels assigned to the bounding boxes, using either of the preceding syntaxes. The labels used for object classes are defined during training using the trainRCNNObjectDetector function.

[___] = detect(___,roi) detects objects within the rectangular search region specified by roi.
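
For example, a minimal sketch of searching only the left half of an image, assuming a trained detector named detector and an image I are already in the workspace:

roi = [1 1 round(size(I,2)/2) size(I,1)];   % [x y width height], left half of the image
[bboxes,scores,labels] = detect(detector,I,roi);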

[___] = detect(___,Name,Value) specifies options using one or more Name,Value pair arguments. For example, detect(detector,I,'NumStrongestRegions',1000) limits the number of strongest region proposals to 1000.

Examples


Load training data and network layers.

load('rcnnStopSigns.mat', 'stopSigns', 'layers')

Add the image directory to the MATLAB path.

imDir = fullfile(matlabroot, 'toolbox', 'vision', 'visiondata',...
  'stopSignImages');
addpath(imDir);

Set the network training options to use a mini-batch size of 32 to reduce GPU memory usage. Lower the InitialLearnRate to reduce the rate at which network parameters change. This is beneficial when fine-tuning a pretrained network and prevents the network from changing too rapidly.

options = trainingOptions('sgdm', ...
  'MiniBatchSize', 32, ...
  'InitialLearnRate', 1e-6, ...
  'MaxEpochs', 10);

Train the R-CNN detector. Training can take a few minutes to complete.

rcnn = trainRCNNObjectDetector(stopSigns, layers, options, 'NegativeOverlapRange', [0 0.3]);
*******************************************************************
Training an R-CNN Object Detector for the following object classes:

* stopSign

Step 1 of 3: Extracting region proposals from 27 training images...done.

Step 2 of 3: Training a neural network to classify objects in training data...

|=========================================================================================|
|     Epoch    |   Iteration  | Time Elapsed |  Mini-batch  |  Mini-batch  | Base Learning|
|              |              |  (seconds)   |     Loss     |   Accuracy   |     Rate     |
|=========================================================================================|
|            3 |           50 |         9.27 |       0.2895 |       96.88% |     0.000001 |
|            5 |          100 |        14.77 |       0.2443 |       93.75% |     0.000001 |
|            8 |          150 |        20.29 |       0.0013 |      100.00% |     0.000001 |
|           10 |          200 |        25.94 |       0.1524 |       96.88% |     0.000001 |
|=========================================================================================|

Network training complete.

Step 3 of 3: Training bounding box regression models for each object class...100.00%...done.

R-CNN training complete.
*******************************************************************

Test the R-CNN detector on a test image.

img = imread('stopSignTest.jpg');

[bbox, score, label] = detect(rcnn, img, 'MiniBatchSize', 32);

Display the strongest detection result.

[score, idx] = max(score);

bbox = bbox(idx, :);
annotation = sprintf('%s: (Confidence = %f)', label(idx), score);

detectedImg = insertObjectAnnotation(img, 'rectangle', bbox, annotation);

figure
imshow(detectedImg)

Remove the image directory from the path.

rmpath(imDir);

Input Arguments


detector

R-CNN object detector, specified as an rcnnObjectDetector object. To create this object, call the trainRCNNObjectDetector function with training data as input.

I

Input image, specified as a real, nonsparse, grayscale or truecolor image.

The detector is sensitive to the range of the input image. Therefore, ensure that the input image range is similar to the range of the images used to train the detector. For example, if the detector was trained on uint8 images, rescale the input image to the range [0, 255] by using im2uint8 or rescale.
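
For example, a minimal sketch of converting an image before detection, assuming the detector was trained on uint8 images and that detector and img are already in the workspace:

img = im2uint8(img);                     % rescale the image to the range [0, 255]
[bboxes,scores] = detect(detector,img);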

Data Types: uint8 | uint16 | int16 | double | single | logical

roi

Search region of interest, specified as an [x y width height] vector. The vector specifies the upper left corner and size of a region in pixels.

Name-Value Pair Arguments

Specify optional comma-separated pairs of Name,Value arguments. Name is the argument name and Value is the corresponding value. Name must appear inside quotes. You can specify several name and value pair arguments in any order as Name1,Value1,...,NameN,ValueN.

Example: 'NumStrongestRegions',1000

Maximum number of strongest region proposals, specified as the comma-separated pair consisting of 'NumStrongestRegions' and an integer. Reduce this value to speed up processing time at the cost of detection accuracy. To use all region proposals, specify this value as Inf.
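
For example, a minimal sketch that trades some detection accuracy for speed, assuming detector and I are already in the workspace; the value 500 is only an illustration:

% Keep only the 500 strongest region proposals to reduce processing time.
[bboxes,scores] = detect(detector,I,'NumStrongestRegions',500);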

Select strongest bounding box for each detected object, specified as the comma-separated pair consisting of 'SelectStrongest' and either true or false.

  • true — Return the strongest bounding box per object. To select these boxes, detect calls the selectStrongestBboxMulticlass function, which uses nonmaximal suppression to eliminate overlapping bounding boxes based on their scores.

    For example:

     selectStrongestBboxMulticlass(bbox,scores, ...
                'RatioType','Min', ...
                'OverlapThreshold',0.5);

  • false — Return all detected bounding boxes. You can then use a custom operation to eliminate overlapping bounding boxes, as shown in the sketch after this list.
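
The following is a minimal sketch of that workflow, assuming detector and I are already in the workspace; the 0.4 overlap threshold is an arbitrary illustration value:

% Return every detected box, then prune overlaps with a custom threshold.
[bboxes,scores,labels] = detect(detector,I,'SelectStrongest',false);
[bboxes,scores,labels] = selectStrongestBboxMulticlass(bboxes,scores,labels, ...
    'RatioType','Min','OverlapThreshold',0.4);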

Size of smaller batches for R-CNN data processing, specified as the comma-separated pair consisting of 'MiniBatchSize' and an integer. Larger batch sizes lead to faster processing but take up more memory.

Hardware resource on which to run the detector, specified as the comma-separated pair consisting of 'ExecutionEnvironment' and 'auto', 'gpu', or 'cpu'. The table shows the valid hardware resource values.

Resource    Action
'auto'      Use a GPU if it is available. Otherwise, use the CPU.
'gpu'       Use the GPU. To use a GPU, you must have Parallel Computing Toolbox and a CUDA enabled NVIDIA GPU with a compute capability of 3.0 or higher. If a suitable GPU is not available, the function returns an error.
'cpu'       Use the CPU.
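
For example, a minimal sketch that forces CPU execution, assuming detector and I are already in the workspace:

% Run the detector on the CPU, for example when no supported GPU is present.
[bboxes,scores] = detect(detector,I,'ExecutionEnvironment','cpu');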

Output Arguments


bboxes

Location of objects detected within the image, returned as an M-by-4 matrix defining M bounding boxes. Each row of bboxes contains a four-element vector of the form [x y width height]. This vector specifies the upper left corner and size of a bounding box in pixels.

scores

Detection scores, returned as an M-by-1 vector. A higher score indicates higher confidence in the detection.

labels

Labels for bounding boxes, returned as an M-by-1 categorical array of M labels. You define the class names used to label the objects when you train the input detector.

Introduced in R2016b