Why do I get this error?

Santiago on 1 Oct 2013
Answered: Dima Lisin on 23 Dec 2013
Hi everyone, I'm working on visionfacetrackingKLT. I'm trying to run this code:
faceDetector = vision.CascadeObjectDetector();
videoFileReader = vision.VideoFileReader('tilted_face.avi');
videoFrame = step(videoFileReader);
bbox = step(faceDetector, videoFrame);
x = bbox(1); y = bbox(2); w = bbox(3); h = bbox(4);
bboxPolygon = [x, y, x+w, y, x+w, y+h, x, y+h];
shapeInserter = vision.ShapeInserter('Shape', 'Polygons', 'BorderColor', 'Custom', ...
    'CustomBorderColor', [255 255 0]);
videoFrame = step(shapeInserter, videoFrame, bboxPolygon);
figure; imshow(videoFrame); title('Detected face');
points = double(points);
points(:, 1) = points(:, 1) + double(bbox(1));
points(:, 2) = points(:, 2) + double(bbox(2));
markerInserter = vision.MarkerInserter('Shape', 'Plus', ...
    'BorderColor', 'White');
videoFrame = step(markerInserter, videoFrame, points);
figure, imshow(videoFrame), title('Detected features');
pointTracker = vision.PointTracker('MaxBidirectionalError', 2);
initialize(pointTracker, double(points), rgb2gray(videoFrame));
videoInfo = info(videoFileReader);
videoPlayer = vision.VideoPlayer('Position', ...
    [100 100 videoInfo.VideoSize(1:2)+30]);
geometricTransformEstimator = vision.GeometricTransformEstimator( ...
    'PixelDistanceThreshold', 4, 'Transform', 'Nonreflective similarity');
oldPoints = double(points);
while ~isDone(videoFileReader)
    videoFrame = step(videoFileReader);
    [points, isFound] = step(pointTracker, rgb2gray(videoFrame));
    visiblePoints = points(isFound, :);
    oldInliers = oldPoints(isFound, :);
    if ~isempty(visiblePoints)
        [xform, geometricInlierIdx] = step(geometricTransformEstimator, ...
            double(oldInliers), double(visiblePoints));
        visiblePoints = visiblePoints(geometricInlierIdx, :);
        oldInliers = oldInliers(geometricInlierIdx, :);
        boxPoints = [reshape(bboxPolygon, 2, 4)', ones(4, 1)];
        boxPoints = boxPoints * xform;
        bboxPolygon = reshape(boxPoints', 1, numel(boxPoints));
        videoFrame = step(shapeInserter, videoFrame, bboxPolygon);
        videoFrame = step(markerInserter, videoFrame, visiblePoints);
        oldPoints = visiblePoints;
        setPoints(pointTracker, oldPoints);
    end
    step(videoPlayer, videoFrame);
end
release(videoFileReader);
release(videoPlayer);
release(geometricTransformEstimator);
release(pointTracker);
close all
But it doesn't run; I get the following error:
Attempted to access bbox(1); index out of bounds because numel(bbox)=0.
Error in visionfacetrackingKLT (line 9)
x = bbox(1); y = bbox(2); w = bbox(3); h = bbox(4);
If someone can help me fix this error, I would appreciate it a lot.
1 Comment
Jan on 1 Oct 2013
Do you see how bad the omitted formatting looks? Follow the "? Help" link to learn more about formatting in this forum.


Answers (2)

dpb on 1 Oct 2013
faceDetector = vision.CascadeObjectDetector();
videoFileReader = vision.VideoFileReader('tilted_face.avi');
videoFrame = step(videoFileReader);
bbox = step(faceDetector, videoFrame);
x = bbox(1);
y = bbox(2);
w = bbox(3);
h = bbox(4);
...
Attempted to access bbox(1); index out of bounds because numel(bbox)=0.
Error in visionfacetrackingKLT (line 9) x = bbox(1); y = bbox(2); w = bbox(3); h = bbox(4);
Well, there's no indication of what step is supposed to return, but it ends up returning [] for bbox.
Hence, since there are no elements in the array, referencing any of them is an error.
Your mission, should you choose to accept it, is to determine why that is so, or at least to provide enough relevant information that someone here can have a clue.
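A quick way to confirm this is to inspect the detector output on the first frame before indexing into it. This is only a minimal diagnostic sketch, assuming the same detector and video file as in the question:
faceDetector = vision.CascadeObjectDetector();
videoFileReader = vision.VideoFileReader('tilted_face.avi');
videoFrame = step(videoFileReader);
% bbox is an M-by-4 matrix of [x y width height] rows, one row per detected
% face; it comes back empty when no face is found in the frame.
bbox = step(faceDetector, videoFrame);
if isempty(bbox)
    disp('No face detected in the first frame, so bbox(1) would fail.')
else
    disp(bbox)
end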

Dima Lisin on 23 Dec 2013
It looks like the face detector is not finding any faces in the first frame. You need to account for that in your code: check whether bbox is empty, and if it is, move on to the next frame.
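A minimal sketch of that guard, assuming the same objects and video file name as in the question; keeping only the first detection is an extra assumption for the case where several faces are found:
faceDetector = vision.CascadeObjectDetector();
videoFileReader = vision.VideoFileReader('tilted_face.avi');

% Keep reading frames until the detector returns a non-empty bounding box.
bbox = [];
while isempty(bbox) && ~isDone(videoFileReader)
    videoFrame = step(videoFileReader);
    bbox = step(faceDetector, videoFrame);
end

if isempty(bbox)
    error('No face was detected in any frame of the video.');
end

% If several faces were found, keep only the first detection.
bbox = bbox(1, :);
x = bbox(1); y = bbox(2); w = bbox(3); h = bbox(4);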
