What is the best technique for image similarity on a pair of images that consist only of line segments, producing a value for how similar they are?

I am trying to perform image similarity comparisons between a pair of images, where one image is a hand-drawn tactile map and the other is the tactile map template. The images contain single line segments with no contours or edges, which may be the challenging part. The similarity score should be a multidimensional output based on angles, length proportions, and scaling, and perhaps also on local and global features and structures (such as intersections and the shape of certain parts of the map).
One key thing is that the hand drawings are unlikely to be consistent: the hand-drawn line quality may differ from the template line, with slight curvatures, small angle changes, and retracements (the majority of the retracements are not completely on top of the initially drawn line segment).
I have tried a custom MATLAB script using affine transforms with lengths, angles, rotation, scale, and translation in X/Y, but the scores are only good for drawings that contain the majority of the template's features. Drawings that are incomplete or completely different simply align to any line segment that is detected; for example, a hand-drawn vertical segment on the left side of the map may be aligned with a vertical segment on the right of the map. Would breaking the images into quadrants help?
I will provide examples of some good and bad hand drawings and their templates. The last one is from a different task, where the drawing is the shortest-path direction through the full map, for example from the lower right to the upper right corner of the map:
Decent hand drawing and its template:
Unfinished hand drawing with angle, length, and structural errors and its template:
Poor drawing and its template:
Shortest path drawing with poor lengths and its template:
Completely incoherent hand drawing of a map and its template:
I would like to hear your thoughts and suggestions. If you need additional information, please let me know. Thanks in advance all!

Accepted Answer

Umar on 27 October 2024 at 9:52
Edited: Umar on 27 October 2024 at 9:54

Hi @Michael ,

After going through your comments: to tackle image similarity comparisons between a hand-drawn tactile map and its template, you need to consider several factors, including the nature of the images, the features you want to compare, and the potential inconsistencies in the hand-drawn lines. Below is a detailed approach to developing a MATLAB script that addresses these challenges. Before you can compare the images, you need to preprocess them to enhance the features you are interested in. This includes converting the images to grayscale, applying edge detection, and possibly thinning the lines so that you are working with a consistent representation of the line segments.

% Load images
template = imread('/MATLAB Drive/vecteezy_blank-paper-scroll-png-illustration_8501601.png');
hand_drawn = imread('/MATLAB Drive/vecteezy_mandala-for-design-hand-drawn_9874711.png');
% Convert to grayscale
template_gray = rgb2gray(template);
hand_drawn_gray = rgb2gray(hand_drawn);
% Apply edge detection
template_edges = edge(template_gray, 'Canny');
hand_drawn_edges = edge(hand_drawn_gray, 'Canny');
% Thinning the edges
template_thinned = bwmorph(template_edges, 'thin', inf);
hand_drawn_thinned = bwmorph(hand_drawn_edges, 'thin', inf);

Next, you need to extract features from both images. These can include the lengths of line segments, the angles between segments, and the overall structure of the drawings. The Hough transform functions (hough, houghpeaks, houghlines) detect the line segments; regionprops can additionally provide properties of the connected components if needed.

% Extract line segments using Hough Transform
[H, theta, rho] = hough(template_thinned);
peaks = houghpeaks(H, 5, 'threshold', ceil(0.3 * max(H(:))));
lines_template = houghlines(template_thinned, theta, rho, peaks);
[H_hd, theta_hd, rho_hd] = hough(hand_drawn_thinned);
peaks_hd = houghpeaks(H_hd, 5, 'threshold', ceil(0.3 * max(H_hd(:))));
lines_hand_drawn = houghlines(hand_drawn_thinned, theta_hd, rho_hd, peaks_hd);

To measure similarity, you can define a scoring function that takes into account the angles, lengths, and structural features of the line segments. You can compute the angle differences and length ratios between corresponding line segments.

function score = calculate_similarity(lines_template, lines_hand_drawn)
  score = 0;
  num_lines_template = length(lines_template);
  num_lines_hand_drawn = length(lines_hand_drawn);
    % Iterate through each line in the template
    for i = 1:num_lines_template
        % Extract properties of the template line
        angle_template = atan2d(lines_template(i).point2(2) - lines_template(i).point1(2), ...
                                lines_template(i).point2(1) - lines_template(i).point1(1));
        length_template = norm(lines_template(i).point2 - lines_template(i).point1);
        % Compare with each line in the hand-drawn image
        for j = 1:num_lines_hand_drawn
            angle_hand_drawn = atan2d(lines_hand_drawn(j).point2(2) - lines_hand_drawn(j).point1(2), ...
                                      lines_hand_drawn(j).point2(1) - lines_hand_drawn(j).point1(1));
            length_hand_drawn = norm(lines_hand_drawn(j).point2 - lines_hand_drawn(j).point1);
            % Calculate angle difference (wrapped to [0, 180]) and length ratio
            angle_diff = abs(angle_template - angle_hand_drawn);
            if angle_diff > 180
                angle_diff = 360 - angle_diff;
            end
            length_ratio = length_hand_drawn / length_template;
            % Update score; the angle difference is divided by a tolerance
            % (degrees) so the Gaussian does not collapse to zero
            score = score + exp(-(angle_diff/15)^2) * exp(-abs(length_ratio - 1)^2);
        end
    end
    % Normalize by the number of pairwise comparisons so the score stays bounded
    score = score / (num_lines_template * num_lines_hand_drawn);
end
% Calculate similarity score
similarity_score = calculate_similarity(lines_template, lines_hand_drawn);
disp(['Similarity Score: ', num2str(similarity_score)]);

To improve the robustness of the similarity measurement, consider breaking the images into quadrants. This can help in localizing features and reducing the impact of misalignments.

% Divide images into quadrants (floor so odd image sizes do not error)
[rows, cols] = size(template_thinned);
mr = floor(rows/2);  mc = floor(cols/2);
template_quadrants = {template_thinned(1:mr, 1:mc), template_thinned(1:mr, mc+1:end), ...
                      template_thinned(mr+1:end, 1:mc), template_thinned(mr+1:end, mc+1:end)};
hand_drawn_quadrants = {hand_drawn_thinned(1:mr, 1:mc), hand_drawn_thinned(1:mr, mc+1:end), ...
                        hand_drawn_thinned(mr+1:end, 1:mc), hand_drawn_thinned(mr+1:end, mc+1:end)};
% Calculate similarity for each quadrant; lines must be re-detected per
% quadrant, otherwise every quadrant would receive the same score
for k = 1:length(template_quadrants)
  [H_t, theta_t, rho_t] = hough(template_quadrants{k});
  lines_t = houghlines(template_quadrants{k}, theta_t, rho_t, ...
                       houghpeaks(H_t, 5, 'threshold', ceil(0.3 * max(H_t(:)))));
  [H_h, theta_h, rho_h] = hough(hand_drawn_quadrants{k});
  lines_h = houghlines(hand_drawn_quadrants{k}, theta_h, rho_h, ...
                       houghpeaks(H_h, 5, 'threshold', ceil(0.3 * max(H_h(:)))));
  quadrant_score = calculate_similarity(lines_t, lines_h);
  disp(['Quadrant ', num2str(k), ' Similarity Score: ', num2str(quadrant_score)]);
end

Please see attached.

By preprocessing the images, extracting relevant features, and calculating a similarity score based on angles and lengths, you can effectively assess the similarity between the two images. Additionally, breaking the images into quadrants can enhance the accuracy of the comparison, especially in cases where the hand-drawn lines are inconsistent or incomplete.

Feel free to adjust the parameters and functions to better suit your specific requirements and the characteristics of your images.

If you have any further questions, please let me know.

5 Comments

Michael on 1 November 2024 at 0:39
Hi @Umar ,
I have tested many different values in the houghpeaks() and houghlines() parameters for the # of peaks, 'threshold', 'NHoodSize', 'FillGap', 'MinLength' but I am unable to have it detect the line segments we are looking for.
Below image is what the attached code is detecting:
Image of what an ideal line detection should be in blue (zoomed in on the lower right quadrant of the above image):
Do you have any suggestions on how to get it to detect the above lines in blue? Below is the full code:
function main()
% Load and preprocess images
hand_drawn = imread('L:\NEI\Navigate\MRI\functional\NEINV004\Post1\drawings\NEINV004_220422_164717_DrawFullMap_02_resized.png');
template = imread('L:\NEI\Navigate\VEERSmaps\dpi300\Map01_solution0.png');
% Convert to grayscale (the hand-drawn image is already single-channel)
template_gray = rgb2gray(template);
hand_drawn_gray = hand_drawn;
% Apply edge detection
template_edges = edge(template_gray, 'Canny');
hand_drawn_edges = edge(hand_drawn_gray, 'Canny');
% Thinning the edges
template_thinned = bwmorph(template_edges, 'thin', inf);
hand_drawn_thinned = bwmorph(hand_drawn_edges, 'thin', inf);
% Display preprocessed images
displaySideBySide(template_thinned, hand_drawn_thinned);
% Extract line segments using Hough Transform with improved parameters
[H, theta, rho] = hough(template_thinned);
peaks = houghpeaks(H, 30, 'threshold', ceil(0.08 * max(H(:))), 'NHoodSize', [11 11]);
lines_template = houghlines(template_thinned, theta, rho, peaks, 'FillGap', 10, 'MinLength', 40);
[H_hd, theta_hd, rho_hd] = hough(hand_drawn_thinned);
peaks_hd = houghpeaks(H_hd, 50, 'threshold', ceil(0.2 * max(H_hd(:))), 'NHoodSize', [11 11]);
initial_lines = houghlines(hand_drawn_thinned, theta_hd, rho_hd, peaks_hd, 'FillGap', 50, 'MinLength', 50);
% Also modify the consolidation angle to be more lenient:
lines_hand_drawn = consolidateLines(initial_lines, 89); % Increased angle threshold
% Figure 2: Line detection visualization
figure('Name', 'Line Detection', 'Units', 'normalized', 'Position', [0.1 0.1 0.8 0.8]);
% Create blank black images for overlay
template_black = zeros(size(template_thinned));
hand_drawn_black = zeros(size(hand_drawn_thinned));
% Template image with lines
subplot(1, 2, 1);
imshow(template_black);
hold on;
% First plot the original thinned image in white
[row, col] = find(template_thinned);
plot(col, row, '.', 'Color', [1 1 1], 'MarkerSize', 1);
% Then plot the detected lines
for k = 1:length(lines_template)
xy = [lines_template(k).point1; lines_template(k).point2];
plot(xy(:,1), xy(:,2), 'LineWidth', 2, 'Color', [1 0 0 0.7]);
% Plot beginnings and ends of lines
plot(xy(1,1), xy(1,2), 'x', 'LineWidth', 2, 'Color', [1 1 0]); % Yellow
plot(xy(2,1), xy(2,2), 'x', 'LineWidth', 2, 'Color', [1 0 0]); % Red
end
title('Template with Detected Lines');
legend('Original Points', 'Detected Lines', 'Start Points', 'End Points', 'Location', 'southoutside');
% Hand-drawn image with lines
subplot(1, 2, 2);
imshow(hand_drawn_black);
hold on;
% First plot the original thinned image in white
[row, col] = find(hand_drawn_thinned);
plot(col, row, '.', 'Color', [1 1 1], 'MarkerSize', 1);
% Then plot the detected lines
for k = 1:length(lines_hand_drawn)
xy = [lines_hand_drawn(k).point1; lines_hand_drawn(k).point2];
plot(xy(:,1), xy(:,2), 'LineWidth', 2, 'Color', [0 1 0 0.7]);
% Plot beginnings and ends of lines
plot(xy(1,1), xy(1,2), 'x', 'LineWidth', 2, 'Color', [1 1 0]); % Yellow
plot(xy(2,1), xy(2,2), 'x', 'LineWidth', 2, 'Color', [1 0 0]); % Red
end
title('Hand-drawn with Detected Lines');
legend('Original Points', 'Detected Lines', 'Start Points', 'End Points', 'Location', 'southoutside');
sgtitle('Line Detection Results');
% Calculate similarity scores
similarity_score = calculate_similarity(lines_template, lines_hand_drawn);
% Divide images into quadrants and analyze
[rows, cols] = size(template_thinned);
mid_row = floor(rows/2);
mid_col = floor(cols/2);
% Create quadrants
template_quadrants = {
template_thinned(1:mid_row, 1:mid_col),
template_thinned(1:mid_row, (mid_col+1):end),
template_thinned((mid_row+1):end, 1:mid_col),
template_thinned((mid_row+1):end, (mid_col+1):end)
};
hand_drawn_quadrants = {
hand_drawn_thinned(1:mid_row, 1:mid_col),
hand_drawn_thinned(1:mid_row, (mid_col+1):end),
hand_drawn_thinned((mid_row+1):end, 1:mid_col),
hand_drawn_thinned((mid_row+1):end, (mid_col+1):end)
};
% Calculate quadrant similarities
quadrant_scores = zeros(1, 4);
for k = 1:4
[H_t, theta_t, rho_t] = hough(template_quadrants{k});
peaks_t = houghpeaks(H_t, 20, 'threshold', ceil(0.08 * max(H_t(:))), 'NHoodSize', [11 11]);
lines_t = houghlines(template_quadrants{k}, theta_t, rho_t, peaks_t, 'FillGap', 10, 'MinLength', 40);
[H_h, theta_h, rho_h] = hough(hand_drawn_quadrants{k});
peaks_h = houghpeaks(H_h, 20, 'threshold', ceil(0.08 * max(H_h(:))), 'NHoodSize', [11 11]);
initial_lines_h = houghlines(hand_drawn_quadrants{k}, theta_h, rho_h, peaks_h, 'FillGap', 100, 'MinLength', 50);
lines_h = consolidateLines(initial_lines_h, 89); % Apply consolidation to quadrants too
quadrant_scores(k) = calculate_similarity(lines_t, lines_h);
end
% Figure 3: Scores visualization
figure('Name', 'Similarity Scores', 'Units', 'normalized', 'Position', [0.15 0.15 0.7 0.7]);
% Create heatmap of quadrant scores
subplot(1, 2, 1);
quadrant_matrix = reshape(quadrant_scores, [2, 2]);
imagesc(quadrant_matrix);
colormap(gca, 'hot');
colorbar;
title('Quadrant Similarity Scores');
axis equal tight;
% Add text annotations for quadrant scores with outline effect
for i = 1:2
for j = 1:2
% Create outline effect by plotting the text multiple times with small offsets
offsets = [-1 -1; -1 0; -1 1; 0 -1; 0 1; 1 -1; 1 0; 1 1] * 0.3;
% Plot black outline
for k = 1:size(offsets, 1)
text(j + offsets(k,1)*0.015, i + offsets(k,2)*0.015, sprintf('%.4f', quadrant_matrix(i,j)), ...
'HorizontalAlignment', 'center', ...
'Color', 'black', ...
'FontWeight', 'bold', ...
'FontSize', 10);
end
% Plot white text on top
text(j, i, sprintf('%.4f', quadrant_matrix(i,j)), ...
'HorizontalAlignment', 'center', ...
'Color', 'white', ...
'FontWeight', 'bold', ...
'FontSize', 10);
end
end
xlabel('Column');
ylabel('Row');
% Create bar plot of scores
subplot(1, 2, 2);
scores = [similarity_score, quadrant_scores, mean(quadrant_scores)];
bar(scores);
title('Similarity Scores Comparison');
xticklabels({'Overall', 'Q1', 'Q2', 'Q3', 'Q4', 'Avg Quad'});
ylabel('Similarity Score');
ylim([0 1]);
grid on;
% Add score values on top of bars
for i = 1:length(scores)
text(i, scores(i), sprintf('%.4f', scores(i)), ...
'HorizontalAlignment', 'center', ...
'VerticalAlignment', 'bottom', ...
'FontWeight', 'bold');
end
% Display numerical results
fprintf('\nAnalysis Results:\n');
fprintf('Overall Similarity Score: %.4f\n', similarity_score);
fprintf('Quadrant Scores:\n');
for k = 1:4
fprintf('Quadrant %d: %.4f\n', k, quadrant_scores(k));
end
fprintf('Average Quadrant Score: %.4f\n', mean(quadrant_scores));
end
function consolidated_lines = consolidateLines(lines, angle_threshold)
if isempty(lines)
consolidated_lines = lines;
return;
end
% Create binary image of all points - with padding
point1_x = zeros(length(lines), 1);
point1_y = zeros(length(lines), 1);
point2_x = zeros(length(lines), 1);
point2_y = zeros(length(lines), 1);
for i = 1:length(lines)
point1_x(i) = lines(i).point1(1);
point1_y(i) = lines(i).point1(2);
point2_x(i) = lines(i).point2(1);
point2_y(i) = lines(i).point2(2);
end
% Add padding to prevent edge issues
padding = 10;
max_x = max([max(point1_x), max(point2_x)]) + padding;
max_y = max([max(point1_y), max(point2_y)]) + padding;
min_x = min([min(point1_x), min(point2_x)]);
min_y = min([min(point1_y), min(point2_y)]);
% Adjust coordinates to start from 1
offset_x = 1 - min_x + padding;
offset_y = 1 - min_y + padding;
width = ceil(max_x - min_x + 2*padding);
height = ceil(max_y - min_y + 2*padding);
line_image = zeros(height, width);
% Draw all lines into binary image with adjusted coordinates
for i = 1:length(lines)
p1 = [lines(i).point1(1) + offset_x, lines(i).point1(2) + offset_y];
p2 = [lines(i).point2(1) + offset_x, lines(i).point2(2) + offset_y];
line_points = getLinePoints(p1, p2);
for j = 1:size(line_points, 1)
if all(line_points(j,:) > 0) && ...
line_points(j,2) <= size(line_image,1) && ...
line_points(j,1) <= size(line_image,2)
line_image(line_points(j,2), line_points(j,1)) = 1;
end
end
end
% Find intersection points
intersections = findIntersectionPoints(line_image);
% Initialize output
consolidated_lines = struct('point1', {}, 'point2', {});
visited = false(size(line_image));
% Process each unvisited point
[y, x] = find(line_image & ~visited);
for i = 1:length(x)
if visited(y(i), x(i))
continue;
end
% Start new path
[path_points, visited] = followPath([x(i) y(i)], line_image, visited, intersections);
if length(path_points) > 1
% Convert sequence of points to line segments
segments = pathToLineSegments(path_points, angle_threshold);
% Add segments to output, adjusting coordinates back to original space
for s = 1:size(segments, 1)
new_line = struct('point1', [segments(s,1) - offset_x, segments(s,2) - offset_y], ...
'point2', [segments(s,3) - offset_x, segments(s,4) - offset_y]);
consolidated_lines(end+1) = new_line;
end
end
end
end
function intersections = findIntersectionPoints(img)
% Count 8-connected neighbors for each pixel (kernel excludes the pixel itself)
se = [1 1 1; 1 0 1; 1 1 1];
neighbor_count = conv2(double(img), se, 'same');
% Skeleton points with more than 2 neighbors are intersections
% (with the previous ones(2,2) kernel the "> 4" test could never fire)
intersections = (neighbor_count .* img) > 2;
% Add a small dilation to avoid over-segmentation at junctions
intersections = imdilate(intersections, strel('disk', 1));
end
function [path_points, visited] = followPath(start_point, img, visited, intersections)
path_points = start_point;
current_point = start_point;
done = false;
min_path_length = 50; % Minimum number of pixels to consider as valid path
while ~done
% Ensure current point is within bounds
if current_point(2) > size(img,1) || current_point(1) > size(img,2) || ...
current_point(2) < 1 || current_point(1) < 1
done = true;
continue;
end
visited(current_point(2), current_point(1)) = true;
% Get 8-connected neighbors
[neighbors_y, neighbors_x] = get8Neighbors(current_point, size(img));
valid_neighbors = false(length(neighbors_x), 1);
% Check which neighbors are valid (part of line and not visited)
for i = 1:length(neighbors_x)
if img(neighbors_y(i), neighbors_x(i)) && ~visited(neighbors_y(i), neighbors_x(i))
valid_neighbors(i) = true;
end
end
% If at intersection point, stop path if we have enough points
if intersections(current_point(2), current_point(1))
if size(path_points, 1) >= min_path_length
done = true;
continue;
end
end
% If no valid neighbors, end path
if ~any(valid_neighbors)
done = true;
continue;
end
% Choose next point (prefer continuing in same direction)
valid_x = neighbors_x(valid_neighbors);
valid_y = neighbors_y(valid_neighbors);
if isempty(valid_x)
done = true;
continue;
end
next_idx = chooseBestNeighbor(current_point, [valid_x valid_y], path_points);
next_point = [valid_x(next_idx) valid_y(next_idx)];
% Add point to path
path_points = [path_points; next_point];
current_point = next_point;
end
% If path is too short, mark it as invalid
if size(path_points, 1) < min_path_length
path_points = [];
end
end
% And in the pathToLineSegments function, modify the segment creation criteria:
function segments = pathToLineSegments(points, angle_threshold)
segments = [];
if size(points, 1) < 2
return;
end
% If we only have 2 points, make a single segment
if size(points, 1) == 2
segments = [points(1,:) points(2,:)];
return;
end
% % Parameters for line merging - ORIGINAL
% min_segment_length = 100; % Minimum length for a segment
% smoothing_window = 15; % Larger window for smoother direction changes
% cumulative_angle_threshold = 60; % Maximum cumulative angle change
% Parameters for line merging
min_segment_length = 50; % Minimum length for a segment
smoothing_window = 20; % Larger window for smoother direction changes
cumulative_angle_threshold = 60; % Maximum cumulative angle change
% Initialize first segment
current_segment_start = 1;
base_direction = points(end,:) - points(1,:); % Use overall direction as reference
base_direction = base_direction / norm(base_direction);
cumulative_angle = 0;
last_point = points(1,:);
for i = smoothing_window:size(points, 1)
% Calculate smoothed direction using larger window
window_points = points(max(1,i-smoothing_window):i,:);
current_direction = window_points(end,:) - window_points(1,:);
if norm(current_direction) > 0
current_direction = current_direction / norm(current_direction);
% Calculate angle with respect to base direction
angle = acosd(max(min(dot(current_direction, base_direction), 1), -1));
% Update cumulative angle
if i > smoothing_window
angle_change = abs(angle - prev_angle);
cumulative_angle = cumulative_angle + angle_change;
end
prev_angle = angle;
% Create new segment if cumulative angle change is too large
if cumulative_angle > cumulative_angle_threshold
segment_length = norm(points(i-1,:) - points(current_segment_start,:));
if segment_length > min_segment_length
segments = [segments; points(current_segment_start,:) points(i-1,:)];
current_segment_start = i-1;
base_direction = current_direction;
cumulative_angle = 0;
end
end
end
end
% Add final segment if long enough
final_length = norm(points(end,:) - points(current_segment_start,:));
if final_length > min_segment_length
segments = [segments; points(current_segment_start,:) points(end,:)];
end
end
function [y, x] = get8Neighbors(point, img_size)
offsets = [-1 -1; -1 0; -1 1; 0 -1; 0 1; 1 -1; 1 0; 1 1];
neighbors = repmat(point, 8, 1) + offsets;
% Filter out points outside image bounds
valid = neighbors(:,1) > 0 & neighbors(:,1) <= img_size(2) & ...
neighbors(:,2) > 0 & neighbors(:,2) <= img_size(1);
x = neighbors(valid,1);
y = neighbors(valid,2);
end
function next_idx = chooseBestNeighbor(current, neighbors, path_points)
if size(path_points, 1) < 2
next_idx = 1; % Just take first neighbor if at start
return;
end
% Get current direction
current_dir = current - path_points(end-1,:);
if norm(current_dir) > 0
current_dir = current_dir / norm(current_dir);
else
next_idx = 1;
return;
end
% Calculate direction to each neighbor
directions = neighbors - repmat(current, size(neighbors,1), 1);
norms = vecnorm(directions, 2, 2);
valid = norms > 0;
if ~any(valid)
next_idx = 1;
return;
end
directions(valid,:) = directions(valid,:) ./ norms(valid);
% Calculate dot product with current direction
dots = directions * current_dir';
% Choose neighbor that best maintains current direction
[~, next_idx] = max(dots);
end
function points = getLinePoints(p1, p2)
% Bresenham's line algorithm
x1 = round(p1(1)); y1 = round(p1(2));
x2 = round(p2(1)); y2 = round(p2(2));
dx = abs(x2 - x1);
dy = abs(y2 - y1);
steep = dy > dx;
if steep
[x1, y1] = deal(y1, x1);
[x2, y2] = deal(y2, x2);
end
if x1 > x2
[x1, x2] = deal(x2, x1);
[y1, y2] = deal(y2, y1);
end
dx = x2 - x1;
dy = abs(y2 - y1);
err = dx / 2; % running error term (renamed to avoid shadowing built-in error())
ystep = (y1 < y2) * 2 - 1;
y = y1;
points = zeros(dx + 1, 2);
idx = 1;
for x = x1:x2
if steep
points(idx,:) = [y x];
else
points(idx,:) = [x y];
end
idx = idx + 1;
err = err - dy;
if err < 0
y = y + ystep;
err = err + dx;
end
end
end
function score = calculate_similarity(lines_template, lines_hand_drawn)
score = 0;
% Handle empty line cases
if isempty(lines_template) || isempty(lines_hand_drawn)
return;
end
num_lines_template = length(lines_template);
num_lines_hand_drawn = length(lines_hand_drawn);
% Parameters for tolerance
angle_tolerance = 15; % degrees
length_tolerance = 0.2; % 20% difference allowed
position_tolerance = 30; % pixels
% Iterate through each line in the template
for i = 1:num_lines_template
% Extract properties of the template line
angle_template = atan2d(lines_template(i).point2(2) - lines_template(i).point1(2), ...
lines_template(i).point2(1) - lines_template(i).point1(1));
length_template = norm(lines_template(i).point2 - lines_template(i).point1);
midpoint_template = (lines_template(i).point1 + lines_template(i).point2) / 2;
% Store best match score for this template line
best_line_score = 0;
% Compare with each line in the hand-drawn image
for j = 1:num_lines_hand_drawn
angle_hand_drawn = atan2d(lines_hand_drawn(j).point2(2) - lines_hand_drawn(j).point1(2), ...
lines_hand_drawn(j).point2(1) - lines_hand_drawn(j).point1(1));
length_hand_drawn = norm(lines_hand_drawn(j).point2 - lines_hand_drawn(j).point1);
midpoint_hand_drawn = (lines_hand_drawn(j).point1 + lines_hand_drawn(j).point2) / 2;
% Calculate differences with tolerance
angle_diff = abs(angle_template - angle_hand_drawn);
if angle_diff > 180
angle_diff = 360 - angle_diff;
end
angle_score = exp(-(angle_diff/angle_tolerance)^2);
% Length comparison with tolerance
length_ratio = length_hand_drawn / length_template;
length_score = exp(-(abs(length_ratio - 1)/length_tolerance)^2);
% Position comparison
position_diff = norm(midpoint_template - midpoint_hand_drawn);
position_score = exp(-(position_diff/position_tolerance)^2);
% Combine scores with weights
line_score = (0.4 * angle_score + 0.3 * length_score + 0.3 * position_score);
% Update best match score
best_line_score = max(best_line_score, line_score);
end
% Add best match score for this template line
score = score + best_line_score;
end
% Normalize score
score = score / num_lines_template;
end
function displaySideBySide(image1, image2)
% Input validation
if nargin ~= 2
error('Function requires exactly 2 input arguments');
end
% Fix broken lines using morphological operations
se = strel('disk', 1); % Create a small disk-shaped structural element
image1_fixed = imdilate(image1, se); % Dilate to connect broken lines
image2_fixed = imdilate(image2, se); % Dilate to connect broken lines
% Create a new figure with normal size
fig = figure('Units', 'pixels');
% Get screen size
screenSize = get(0, 'ScreenSize');
% Calculate desired figure size (60% of screen size)
figWidth = round(screenSize(3) * 0.6);
figHeight = round(screenSize(4) * 0.6);
% Center the figure on screen
left = round((screenSize(3) - figWidth) / 2);
bottom = round((screenSize(4) - figHeight) / 2);
% Set figure position
set(fig, 'Position', [left bottom figWidth figHeight]);
% First image
subplot(1,2,1);
imshow(image1_fixed, 'InitialMagnification', 'fit');
axis image; % Preserve aspect ratio
title('Template Image', 'Interpreter', 'none');
% Minimize margins
p = get(gca, 'Position');
p(1) = 0.05; % Left margin
p(3) = 0.43; % Width
set(gca, 'Position', p);
% Second image
subplot(1,2,2);
imshow(image2_fixed, 'InitialMagnification', 'fit');
axis image; % Preserve aspect ratio
title('Hand-drawn Image', 'Interpreter', 'none');
% Minimize margins
p = get(gca, 'Position');
p(1) = 0.52; % Left position
p(3) = 0.43; % Width
set(gca, 'Position', p);
% Add super title
sgtitle('Preprocessed Images');
end
Umar on 1 November 2024 at 1:21
Hi @Michael,
It took a long time to analyze this code. Having gone through your comments, I suggest the following strategies to enhance your line detection with the Hough transform. Adjusting the threshold parameter can significantly impact peak detection. Since you have experimented with ceil(0.2 * max(H_hd(:))), try lowering this value to increase sensitivity; a lower threshold may help detect weaker lines that are currently being overlooked. In your code, I also noted that the neighborhood size affects how local maxima are detected. Experiment with larger values, such as [15 15] or even [21 21], which can help identify more peaks by considering a broader context. For FillGap, consider reducing it if your lines are relatively short or fragmented; lower values (e.g., 5 or 10) might connect closer segments effectively. Similarly, for MinLength, try reducing it to 20 or 30 pixels if your lines are not consistently long. For preprocessing, adjust the Canny edge-detection parameters (the lower and upper thresholds) to capture more edges, and consider Gaussian smoothing before edge detection to reduce noise. For morphological operations, use dilation or closing (imclose()) to connect fragmented lines in the binary edge image, which can lead to better continuity and recognition of line segments.
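As a concrete starting point, the suggestions above might be combined roughly as follows. This is a sketch only: the filename, thresholds, and structuring-element sizes are illustrative assumptions to be tuned against your images.

```matlab
% Hedged sketch of the suggested preprocessing pipeline; all numeric
% parameters here are illustrative, not recommended final values.
img = imread('hand_drawn.png');              % placeholder filename
if size(img, 3) == 3, img = rgb2gray(img); end
img = imgaussfilt(img, 1.5);                 % Gaussian smoothing before Canny
edges = edge(img, 'Canny', [0.05 0.15]);     % lower thresholds -> more edges
edges = imclose(edges, strel('disk', 2));    % bridge small gaps between strokes
edges = bwmorph(edges, 'thin', inf);         % re-thin after closing

[H, theta, rho] = hough(edges);
% Lower peak threshold and larger neighborhood, as discussed above
peaks = houghpeaks(H, 50, 'threshold', ceil(0.1 * max(H(:))), ...
                   'NHoodSize', [21 21]);
lines = houghlines(edges, theta, rho, peaks, 'FillGap', 10, 'MinLength', 20);
```

Visualizing the intermediate `edges` image after each step will show which stage loses the strokes you care about.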
Your existing consolidateLines() function is a good start. You may want to analyze how lines are merged based on their proximity and orientation, and ensure that your angle threshold is appropriate for your expected line orientations. Also, the findIntersectionPoints() function should accurately identify junctions where lines meet; make sure these points are correctly used when consolidating lines. Visualize intermediate results after each processing step (e.g., after edge detection, after the Hough transform). This can show where the process is failing, and overlaying detected lines on the original image lets you confirm what is being detected versus what should be detected. If issues persist, consider a probabilistic Hough transform variant (MATLAB has no built-in probabilisticHoughLines function; OpenCV's HoughLinesP is a common implementation, and MATLAB's houghlines already returns finite segments from Hough peaks), which may be more robust for noisy images or complex line structures.
By methodically adjusting parameters, enhancing preprocessing steps, and employing effective post-processing techniques, you should improve your line detection capabilities significantly.
Hope this helps.


More Answers (2)

埃博拉酱 on 27 October 2024 at 1:06
As far as I know, convolutional neural networks are good at this kind of image recognition problem, where a specific algorithm is difficult to design, especially when the criterion is mainly based on the human eye.
However, designing a training database may require some technique. Neural networks train well on "scoring" type data, while humans are better at producing "sorting" (ranking) type data. You may need to design an algorithm that converts human ranking labels into scores.
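One simple way to realize that conversion, sketched under the assumption that the human labels are pairwise "drawing A looks better than drawing B" judgments, is an Elo-style rating update. The comparison list, K factor, and final rescaling below are illustrative assumptions.

```matlab
% Hedged sketch: turn pairwise human rankings into scalar scores via an
% Elo-style update, then rescale to [0, 1] for use as training targets.
comparisons = [1 2; 1 3; 2 3];   % each row: [preferred drawing, other drawing]
n = 3;                           % number of drawings
scores = zeros(n, 1);            % all drawings start at the same rating
K = 32;                          % step size per comparison
for t = 1:size(comparisons, 1)
    w = comparisons(t, 1);  l = comparisons(t, 2);
    % Expected probability that w is preferred over l under current scores
    p = 1 / (1 + 10^((scores(l) - scores(w)) / 400));
    scores(w) = scores(w) + K * (1 - p);   % winner moves up
    scores(l) = scores(l) - K * (1 - p);   % loser moves down
end
targets = (scores - min(scores)) / (max(scores) - min(scores));
```

With enough comparisons per drawing, the resulting targets approximate a consistent quality scale that a network (or a fine-tuned model) can regress against.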
1 Comment

Michael on 28 October 2024 at 17:32
Thanks for your input. I have actually tried using Gemini's models, prompting them to provide a similarity score. They did well when I input hand drawings and templates of objects and faces; with the tactile maps, however, they performed very poorly.
I decided to fine-tune a Gemini model (1.5-Flash-001) on human-labeled scores of the comparisons (963 examples of a drawing, its template, and a human rating), and it just finished tuning after 30 hours. I will run it on validation/evaluation images and see how it does.
Do you think this process would work as well?
Since Gemini is an LLM, I was thinking I could create scoring criteria based on the verbal descriptors it provides and ultimately derive a similarity score from those.
Ex. of prompt:



Walter Roberson on 27 October 2024 at 1:00
The best way is due to be invented in 1,719,483 years, 2 months, and 11 days, by some small furry creatures from Alpha Centauri.

Release

R2019a