Speed up calculation of normal vectors metric
Hey there!
Given a point cloud measured with a 3D scanning device, I would like to calculate, for each point, the 2-norm deviation of its normal vector from the mean normal vector within a given sphere, to estimate the local uniformity of the point cloud. I think my code works, but it is very slow (I am facing ~1 million points). I calculate all normals with pcnormals, then I grow a kd-tree and find the 10 nearest neighbours of each point. I then discard all neighbours beyond a radial threshold and calculate the mean normal vector. Right now I don't see any implementation without a for-loop.
Note that I'm using Jan's fast DNorm2 MEX implementation, but you could also use vecnorm or pdist2. I attached a small part of the point cloud for test purposes.
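For example, the metric line in the code below could equivalently use vecnorm (R2017b or newer) instead of DNorm2:
metric(i) = vecnorm(normals(i,:) - mean(normals(idx,:),1), 2, 2); %2-norm along the second dimension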
Maybe one of the pros can help me out.
EDIT: It is not super important to use a spherical ROI. Some sort of voxel filter would also be fine (see the rough sketch after the code below).
Lennart
K=10; %10-NN search
dist_thres=0.015; %maximum distance ROI => sphere with radius dist_thres
normals = pcnormals(pointCloud(points),6);
metric = zeros(size(points,1),1); %preallocate the output
for i=1:length(points)
idx=knnsearch(points([1:i-1 i+1:end],:),points(i,:),'K',K); %find K nearest neighbors by growing a kd-tree
idx(idx>=i)=idx(idx>=i)+1; %map indices of the reduced set back to the full point list (row i was removed)
distances=DNorm2(repmat(points(i,:),K,1)'-points(idx,:)'); %Euclidean distance to all NNs
idx(distances>dist_thres)=[]; %remove neighbor outliers
metric(i)=DNorm2((normals(i,:)-mean(normals(idx,:),1))'); %2-norm deviation of this normal from the mean normal inside the sphere
end
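EDIT 2: For anyone wanting to try the voxel route, something like this untested sketch is what I mean (each normal is compared against the mean normal of its voxel instead of a sphere; splitapply needs R2015b+):
voxSize = 0.015; %voxel edge length, same scale as dist_thres
vox = floor(points./voxSize); %integer voxel coordinates for every point
[~,~,voxId] = unique(vox,'rows'); %one group id per occupied voxel
meanN = splitapply(@(n) mean(n,1), normals, voxId); %mean normal per voxel
metric = sqrt(sum((normals - meanN(voxId,:)).^2, 2)); %2-norm deviation per point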
Answers (1)
Guillaume
30 Aug 2019
Edited: 5 Sep 2019
Why do you remove the current point from the search set? Doing that means copying the whole point matrix (bar one point) at each step of the loop. A million copy operations of a million-minus-one points each is always going to take a while. Can't you just search for the nearest K+1 points and remove the nearest one (the one with distance 0) from the returned set?
Also, since you're calling knnsearch on the raw points for each of your points, you're rebuilding the kd-tree each time. Doing that a million times is not very efficient. Building the tree just once with KDTreeSearcher should be a lot faster.
So, I suspect this would be faster. I don't have the stats toolbox, so I'm just going from the doc:
%...
searcher = KDTreeSearcher(points); %build the kd-tree only once
for ptidx = 1:size(points, 1) %don't use length on a matrix!
[idx, d] = knnsearch(searcher, points(ptidx, :), 'K', K+1); %Note that it's the KDTreeSearcher knnsearch method being called here, not the generic knnsearch function
idx(d == 0) = []; %remove current point from those returned
%... rest of the calculations
end
edit: was missing the points input in knnsearch
Comments (2)
Guillaume
5 Sep 2019
Glad you've got something that works.
As I don't have the toolbox, I can't do any testing, so I'm not sure I can help much further. However:
index=linspace(1,size(points,1),size(points,1));
is the strangest way I've seen of simply doing:
index = 1:size(points, 1);
In some places you use length(points). I'd recommend you always use size(points, 1): length would return the number of columns if for some reason you only had one or two points. That's probably unlikely, but since you mean to query the number of rows, ask for the number of rows.
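A quick illustration of the pitfall:
p = rand(2,3); %just two 3D points
length(p) %returns 3 (the longest dimension, here the columns)
size(p,1) %returns 2, the number of points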
This:
mask=ones(size(points,1),K);
%...
mask(distances(index,:)>thres_rad)=0;
is also a strange way of doing:
mask = distances <= thres_rad;
It looks like all the findSphereNeighbors snippets you've posted are meant to run in a loop where the query point is a single point (one row). As a result, idx and distances are just one row, and the idx(:, 1) and distances(:, 1) are misleading: the : expands to just 1, so idx(1) and distances(1) would be clearer. On the other hand, you then have max(distances(index, :)), so I'm a bit confused.
Note that if you do have a 2D distance matrix, make sure it is preallocated before the loop if you're filling it one row at a time.
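For example, a sketch reusing the searcher built before the loop in my answer above:
distances = zeros(size(points,1), K); %preallocate once, outside the loop
for ptidx = 1:size(points,1)
[~, d] = knnsearch(searcher, points(ptidx,:), 'K', K+1);
distances(ptidx,:) = d(2:end); %drop the zero distance to the query point itself
end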
Note that my suggestion was to build the kd-tree only once, before the loop. All the methods you show appear to rebuild it each time inside the loop; then yes, searching for more points would be less efficient.
But actually, rereading the knnsearch doc, I see you can pass all query points at once, so you don't even need a loop. This should be more efficient, at least for the search. You might still need a loop for the distance filtering afterwards:
indices = knnsearch(points, points, 'K', K+1); %search the K+1 nearest neighbours to all the points at once
indices(:, 1) = []; %from your code I assume the 1st index of each row is the nearest neighbour, i.e. the current point itself. Note that indices is a matrix here
And actually, looking at the distance calculation, I don't think you need a loop for that either:
nearestpoints = permute(reshape(points(indices, :), size(points, 1), K, []), [1, 3 ,2]); %3D matrix of nearest points (K points across 3rd dimension)
distance = squeeze(sqrt(sum((points - nearestpoints) .^ 2, 2)));
mask = distance <= thres_rad;
%...
The only time-consuming part above is the permute, and even that is still very fast on a 1e6x10x3 matrix.
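And if you want to finish the vectorisation, the masked mean of the neighbour normals and the final metric can be computed in the same style (again untested since I don't have the toolbox; points with no neighbour inside the radius come out as NaN):
nn = reshape(normals(indices,:), size(points,1), K, []); %N x K x 3 array of neighbour normals
nn = nn .* mask; %zero out the normals of neighbours beyond the radius
meanN = squeeze(sum(nn,2)) ./ sum(mask,2); %N x 3 masked mean normal per point
metric = sqrt(sum((normals - meanN).^2, 2)); %2-norm deviation of each normal from its local mean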