Computing the centroid from real-time video
Previously, I computed the centroid and related quantities from a still image, following the site below.
This time, I would like to do the same thing from a real-time video feed.
After incorporating the content of that site, I can now set up the eye regions as shown in the image.
What I would like to ask is how to compute the centroid of each region enclosed by the rectangles drawn as the eyes.
I have tried various approaches, but none of them worked.
Any guidance would be appreciated.

% Create the detector objects.
eyeDetector = vision.CascadeObjectDetector('EyePairBig');
mouthDetector = vision.CascadeObjectDetector('Mouth');
% Create one point tracker per detected region.
pointTracker1 = vision.PointTracker('MaxBidirectionalError', 2);
pointTracker2 = vision.PointTracker('MaxBidirectionalError', 2);
% Create the webcam object.
cam = webcam();
% Capture one frame to get its size.
videoFrame = snapshot(cam);
frameSize = size(videoFrame);
% Create the video player object.
videoPlayer = vision.VideoPlayer('Position', [100 100 [frameSize(2), frameSize(1)]+30]);
runLoop = true;
numPts1 = 0;
numPts2 = 0;
while runLoop
    % Get the next frame.
    videoFrame = snapshot(cam);
    videoFrameGray = rgb2gray(videoFrame);
    if numPts1 < 10
        % Detection mode.
        bbox1 = eyeDetector.step(videoFrameGray);
        bbox2 = mouthDetector.step(videoFrameGray);
        if ~isempty(bbox1) && ~isempty(bbox2)
            % Find corner points inside the detected regions.
            points1 = detectMinEigenFeatures(videoFrameGray, 'ROI', bbox1(1, :));
            points2 = detectMinEigenFeatures(videoFrameGray, 'ROI', bbox2(1, :));
            % Re-initialize the point trackers.
            xyPoints1 = points1.Location;
            numPts1 = size(xyPoints1, 1);
            release(pointTracker1);
            initialize(pointTracker1, xyPoints1, videoFrameGray);
            xyPoints2 = points2.Location;
            numPts2 = size(xyPoints2, 1);
            release(pointTracker2);
            initialize(pointTracker2, xyPoints2, videoFrameGray);
            % Save a copy of the points.
            oldPoints1 = xyPoints1;
            oldPoints2 = xyPoints2;
            % Convert each rectangle [x, y, w, h] into a 4-by-2 matrix of
            % [x, y] corner coordinates. This is needed to be able to
            % transform the bounding box to display the orientation of
            % the face.
            bboxPoints1 = bbox2points(bbox1(1, :));
            bboxPoints2 = bbox2points(bbox2(1, :));
            % Convert the box corners into the [x1 y1 x2 y2 x3 y3 x4 y4]
            % format required by insertShape.
            bboxPolygon1 = reshape(bboxPoints1', 1, []);
            bboxPolygon2 = reshape(bboxPoints2', 1, []);
            % Display bounding boxes around the detected regions.
            videoFrame = insertShape(videoFrame, 'Polygon', bboxPolygon1, 'LineWidth', 3);
            videoFrame = insertShape(videoFrame, 'Polygon', bboxPolygon2, 'LineWidth', 3);
            % The bounding box is [x, y, width, height]: shift x by 1/6 and
            % 4/6 of the width and shrink the width to 1/6 to frame each eye.
            lefteye  = [bbox1(1)+bbox1(3)*1/6 bbox1(2) bbox1(3)/6 bbox1(4)];
            righteye = [bbox1(1)+bbox1(3)*4/6 bbox1(2) bbox1(3)/6 bbox1(4)];
            videoFrame = insertShape(videoFrame, 'rectangle', lefteye,  'Color', 'red', 'LineWidth', 3);
            videoFrame = insertShape(videoFrame, 'rectangle', righteye, 'Color', 'red', 'LineWidth', 3);
        end
    else
        % Tracking mode.
        [xyPoints1, isFound1] = step(pointTracker1, videoFrameGray);
        visiblePoints1 = xyPoints1(isFound1, :);
        oldInliers1 = oldPoints1(isFound1, :);
        [xyPoints2, isFound2] = step(pointTracker2, videoFrameGray);
        visiblePoints2 = xyPoints2(isFound2, :);
        oldInliers2 = oldPoints2(isFound2, :);
        numPts1 = size(visiblePoints1, 1);
        numPts2 = size(visiblePoints2, 1);
        if numPts1 >= 10 && numPts2 >= 10
            % Estimate the geometric transformation between the old points
            % and the new points.
            [xform1, inlierIdx] = estimateGeometricTransform2D(...
                oldInliers1, visiblePoints1, 'similarity', 'MaxDistance', 4);
            oldInliers1 = oldInliers1(inlierIdx, :);
            visiblePoints1 = visiblePoints1(inlierIdx, :);
            [xform2, inlierIdx] = estimateGeometricTransform2D(...
                oldInliers2, visiblePoints2, 'similarity', 'MaxDistance', 4);
            oldInliers2 = oldInliers2(inlierIdx, :);
            visiblePoints2 = visiblePoints2(inlierIdx, :);
            % Apply the transformations to the bounding boxes.
            bboxPoints1 = transformPointsForward(xform1, bboxPoints1);
            bboxPoints2 = transformPointsForward(xform2, bboxPoints2);
            % Convert the box corners into the [x1 y1 x2 y2 x3 y3 x4 y4]
            % format required by insertShape.
            bboxPolygon1 = reshape(bboxPoints1', 1, []);
            bboxPolygon2 = reshape(bboxPoints2', 1, []);
            % Display bounding boxes around the regions being tracked.
            videoFrame = insertShape(videoFrame, 'Polygon', bboxPolygon1, 'LineWidth', 3);
            videoFrame = insertShape(videoFrame, 'Polygon', bboxPolygon2, 'LineWidth', 3);
            % The bounding box is [x, y, width, height]: shift x by 1/6 and
            % 4/6 of the width and shrink the width to 1/6 to frame each eye.
            % (Note: bbox1 is the box from the last detection and is not
            % updated while tracking.)
            lefteye  = [bbox1(1)+bbox1(3)*1/6 bbox1(2) bbox1(3)/6 bbox1(4)];
            righteye = [bbox1(1)+bbox1(3)*4/6 bbox1(2) bbox1(3)/6 bbox1(4)];
            videoFrame = insertShape(videoFrame, 'rectangle', lefteye,  'Color', 'red', 'LineWidth', 3);
            videoFrame = insertShape(videoFrame, 'rectangle', righteye, 'Color', 'red', 'LineWidth', 3);
            % Reset the points.
            oldPoints1 = visiblePoints1;
            setPoints(pointTracker1, oldPoints1);
            oldPoints2 = visiblePoints2;
            setPoints(pointTracker2, oldPoints2);
        end
    end
    % Display the annotated video frame using the video player object.
    step(videoPlayer, videoFrame);
    % Check whether the video player window has been closed.
    runLoop = isOpen(videoPlayer);
end
% Clean up.
clear cam;
release(videoPlayer);
release(pointTracker1);
release(pointTracker2);
release(eyeDetector);
release(mouthDetector);
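On the centroid the question asks about: since lefteye and righteye are axis-aligned [x, y, w, h] rectangles, the centroid of each is simply its center, [x + w/2, y + h/2]. A minimal sketch with made-up example values (inside the loop the rectangles come from bbox1 as above; the commented insertMarker call is one possible way to draw the points):

```matlab
% Example [x, y, w, h] rectangles; in the loop these are lefteye/righteye.
lefteye  = [100 120 30 40];
righteye = [190 120 30 40];

% Centroid of an axis-aligned rectangle = its center point.
leftCenter  = [lefteye(1)  + lefteye(3)/2,  lefteye(2)  + lefteye(4)/2];
rightCenter = [righteye(1) + righteye(3)/2, righteye(2) + righteye(4)/2];

% Inside the while loop, after the insertShape calls, the centroids could
% be drawn on the frame with:
% videoFrame = insertMarker(videoFrame, [leftCenter; rightCenter], '+', ...
%     'Color', 'green', 'Size', 10);
```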
2 Comments
Atsushi Ueno
8 Nov 2021
Observations from reading the program
This program alternates between a "detection mode" and a "tracking mode":
- In detection mode, the target is always detected inside an upright (axis-aligned) bounding box
- In tracking mode, the motion of the tracked points between frames is estimated, so the bounding box can also rotate
- Once detection mode yields enough tracking points, the program switches to tracking mode
- When the number of tracked points falls below the threshold, it returns to detection mode
- Why is the bounding box converted from [x, y, w, h] to [x1 y1 x2 y2 x3 y3 x4 y4]?
- ⇒ So that insertShape can draw it as a polygon when the box rotates in tracking mode

bboxPoints1 = bbox2points(bbox1(1, :)); % bbox1 is the [x, y, w, h] box; bboxPoints1 is a matrix of [x, y] corners
bboxPolygon1 = reshape(bboxPoints1', 1, []); % bboxPolygon1 is the [x1 y1 x2 y2 x3 y3 x4 y4] form
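As a concrete illustration of those two conversions, with a made-up box (not one from the program):

```matlab
bbox = [10 20 30 40];        % [x, y, w, h]
pts  = bbox2points(bbox);    % 4-by-2 matrix of the [x, y] corners
poly = reshape(pts', 1, []); % 1-by-8 [x1 y1 x2 y2 x3 y3 x4 y4] for insertShape
% The four corners are (10,20), (40,20), (40,60) and (10,60);
% poly is the same eight numbers flattened into one row.
```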
In tracking mode, the displacement of the tracked points between two frames is estimated, and the bounding box position is updated based on it.
% Apply the transformation to the bounding box.
bboxPoints1 = transformPointsForward(xform1, bboxPoints1);
bboxPoints2 = transformPointsForward(xform2, bboxPoints2);
This position update is applied to bboxPoints1 and bboxPoints2, the matrices of [x, y] corner coordinates used so that rotation can be represented. The [x, y, w, h] boxes obtained in detection mode (bbox1, bbox2), on the other hand, are only updated in detection mode. So I believe the EyePairBig rectangles you added to the tracking branch will not move.
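Building on that observation: if the centroid is needed in tracking mode as well, one option is to compute it from the corner matrix, which does get transformed. For a rectangle, rotated or not, the centroid is the mean of its four corners, and a similarity transform maps the centroid of the old corners to the centroid of the new ones. A hypothetical sketch (the 30-degree transform is just an example standing in for xform1):

```matlab
bboxPoints1 = bbox2points([10 20 30 40]);  % 4-by-2 corner matrix
tform = affine2d([ cosd(30) sind(30) 0; ...
                  -sind(30) cosd(30) 0; ...
                   5        5        1]); % example rotation plus shift
bboxPoints1 = transformPointsForward(tform, bboxPoints1);
center = mean(bboxPoints1, 1);            % centroid = mean of the corners
```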
Only after writing all this did I notice that the current task is computing the centroid.
tsuyoshi tsunoda
9 Nov 2021
Accepted Answer
More Answers (0)