
Qu Cao

MathWorks

Last seen: 28 days ago | Active since 2016

Followers: 2   Following: 0

I'm an Automated Driving and Mapping Engineer at MathWorks and a Mechanical Engineer by education. DISCLAIMER: Any advice or opinions posted here are my own, and in no way reflect those of MathWorks.

Statistics

  • Knowledgeable Level 4
  • Knowledgeable Level 3
  • 3 Month Streak
  • Revival Level 3
  • First Answer


Feeds


Answered
In stereocalibration, is the relationship between the 'R and T output as PoseCamera2' and the actual camera position the same, or does the sign of x in T reverse?
Sorry for the confusion. We will update our documentation to be more specific about the meaning of PoseCamera2. PoseCamera2 is t...

6 months ago | 0

| Accepted
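
A minimal sketch of reading this pose (added for illustration, assuming stereoParams comes from the Stereo Camera Calibrator or estimateCameraParameters); PoseCamera2 is the pose of camera 2 in camera 1's coordinate system, stored as a rigidtform3d:

poseCam2 = stereoParams.PoseCamera2;
R = poseCam2.R;                  % orientation of camera 2 expressed in camera 1's frame
t = poseCam2.Translation;        % location of camera 2 in camera 1's frame (world units)
tform1to2 = invert(poseCam2);    % maps points from camera 1's frame into camera 2's frame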

Answered
detectSIFTFeatures only working for uint8
Use im2double: I = imread('cameraman.tif'); points = detectSIFTFeatures(im2double(I))

8 months ago | 1

Answered
estworldpose giving different answers on each run.
estworldpose is a RANSAC-based method. You may want to set the random seed before running the function to get consistent results...

about 1 year ago | 0

| Accepted
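
A minimal sketch of that suggestion, assuming imagePoints, worldPoints, and intrinsics (a cameraIntrinsics object) are already in the workspace:

rng(0);   % fix the random seed so the RANSAC-based estimation is repeatable
worldPose = estworldpose(imagePoints, worldPoints, intrinsics);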

Answered
Replacing vision.GeometericTransformEstimator call
https://www.mathworks.com/matlabcentral/answers/521519-what-function-replaced-vision-geometrictransformestimator

more than 1 year ago | 0

Answered
I have two camera parameters from stereoParams. Which one should I choose for Stereo Visual SLAM application? Or do I just get their mean values?
Usually, the focal lengths of the two cameras are the same. You can use either one.

more than 1 year ago | 0

Answered
Defining Feature detection area.
You can specify the ROI that you want to extract features from.

more than 1 year ago | 0

| Accepted
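
A minimal sketch, assuming I is a grayscale image and the region of interest is given in pixels as [x y width height]; most detectors (detectORBFeatures, detectSURFFeatures, detectHarrisFeatures, and others) accept an ROI name-value argument:

roi = [100 100 200 150];                    % hypothetical [x y width height]
points = detectORBFeatures(I, 'ROI', roi);  % detect features only inside the ROI
[features, validPoints] = extractFeatures(I, points);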

Answered
unit of translation result from estrelpose function
As the documentation of estrelpose says, the function calculates the camera location up to an unknown scale. This is because you...

almost 2 years ago | 0

| Accepted
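
A short illustration of the unknown-scale point, assuming E is an essential matrix from estimateEssentialMatrix and the matched inlier points and intrinsics are available:

relPose = estrelpose(E, intrinsics1, intrinsics2, inlierPoints1, inlierPoints2);
norm(relPose.Translation)   % always ~1: only the direction of the baseline is recovered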

Answered
Difficulties in obtaining good results with the ORB-SLAM2 algorithm in MATLAB.
Thank you for posting the question. In general, tuning the hyperparameters for a visual SLAM system can be hard and requires a...

almost 2 years ago | 1

| Accepted

Answered
vSLAM: vSLAM algorithm is very sensitive to hyperparameters Issue?
You have identified the nature of the SLAM problem. Yes, the visual SLAM system is sensitive to hyperparameters, which usually need to be tun...

almost 2 years ago | 0

| Accepted

Answered
How to construct stereoParameters with intrinsic and extrinsic matrix?
poseCamera2 essentially transforms camera 2 to camera 1. If you have “the translation and rotation from camera1 to camera 2”, (le...

about 2 years ago | 1
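
A minimal sketch for recent releases that use the premultiply convention, assuming R (3-by-3) and t (1-by-3) map points from camera 1's frame to camera 2's frame, and cameraParams1/cameraParams2 are cameraParameters objects:

tform1to2   = rigidtform3d(R, t);   % camera 1 frame -> camera 2 frame
poseCamera2 = invert(tform1to2);    % pose of camera 2 expressed in camera 1's frame
stereoParams = stereoParameters(cameraParams1, cameraParams2, poseCamera2);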

Answered
Object 3D world coordinates from multiple images
You will need a stereo camera to give you the actual dimensions of 3-D objects. Alternatively, if you know the size of an object...

about 2 years ago | 0

Answered
creating a bag of features for new image set for monocular SLAM
The bag-of-features data may not work for the KITTI dataset because it was trained using a small amount of image data. You may w...

about 2 years ago | 1

| Accepted
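
A minimal sketch of training a custom vocabulary, assuming the KITTI grayscale images sit in a local folder; the folder path and the name-value settings below are illustrative, not the ones used by the shipping example:

imds = imageDatastore(fullfile('kitti', 'sequence00', 'image_0'));   % hypothetical path
bag = bagOfFeatures(imds, 'TreeProperties', [3 10], 'StrongestFeatures', 0.8);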

Answered
The Premultiply Convention in Geometric Transformations does not support C/C++ code generation?
Thank you for reporting this. There is a bug in the documentation. All the geometric transformation objects with the premultiply...

about 2 years ago | 0

| Accepted

Answered
How to use reconstructScene with a disparity map from file, without calling rectifyStereoImages ?
You can use the reprojectionMatrix output from rectifyStereoImages to do the reconstruction. Otherwise, you need to save the ste...

more than 2 years ago | 0

| Accepted
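
A minimal sketch of that workflow, assuming I1/I2 are the raw stereo pair and stereoParams is the calibration result; if the disparity map is computed in a separate session, save reprojectionMatrix along with it:

[J1, J2, reprojectionMatrix] = rectifyStereoImages(I1, I2, stereoParams);
disparityMap = disparitySGM(im2gray(J1), im2gray(J2));
xyzPoints = reconstructScene(disparityMap, reprojectionMatrix);   % points in the rectified camera-1 frame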

Answered
Match the coordinate systems of "triangulate" and "reconstructScene" with "disparitySGM"
The point cloud generated from reconstructScene is in the rectified camera 1 coordinate system. Starting in R2022a, you can use the ad...

more than 2 years ago | 0

| Accepted

Answered
MATLAB Simulate 3D Camera: why is there no focal length (world units) attribute in the sensor model?
Please take a look at this page: https://www.mathworks.com/help/vision/ug/camera-calibration.html#bu0ni74 If you know the size...

more than 2 years ago | 0

Answered
How to port SLAM algorithm to embedded platform?
Unfortunately, as of R2022a the visual SLAM pipeline doesn't support code generation yet. We're actively working on this support...

more than 2 years ago | 1

| Accepted

Answered
how to get the relative camera pose to another camera pose?
Note that the geometric transformation convention used in the Computer Vision Toolbox (CVT) is different from the one used in th...

more than 2 years ago | 2

| Accepted
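
A minimal sketch in the Computer Vision Toolbox convention, assuming pose1 and pose2 are camera-to-world poses stored as rigidtform3d objects; the pose of camera 2 expressed in camera 1's frame is then:

worldToCam1 = invert(pose1);                       % world -> camera 1
relPose = rigidtform3d(worldToCam1.A * pose2.A);   % camera 2 relative to camera 1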

Answered
How to get 3D world coordinates from 2D image coordinates?
You should use the rectified stereo images. The disparityMap computed from disparitySGM should have the same size as your stereo...

more than 2 years ago | 0

Answered
Creating a depth map from the disparity map function
You can use reconstructScene for your workflow.

almost 3 years ago | 0
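
A minimal sketch, assuming disparityMap was computed from the rectified pair and reprojectionMatrix is the third output of rectifyStereoImages; the depth map is simply the Z channel of the reconstructed points:

xyzPoints = reconstructScene(disparityMap, reprojectionMatrix);
depthMap = xyzPoints(:, :, 3);   % per-pixel depth in the calibration's world units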

Answered
Unable to use functions from the Computer Vision Toolbox in Simulink MATLAB function block
A workaround is to declare the function as an extrinsic function so that it will be essentially executed in MATLAB: https://www...

almost 3 years ago | 0

| Accepted
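
A minimal sketch of the workaround; countSurfPoints is a hypothetical helper saved on the MATLAB path, and extrinsic calls only run during simulation, not in generated code:

% Inside the MATLAB Function block:
function n = fcn(I)
coder.extrinsic('countSurfPoints');   % evaluate the helper in MATLAB instead of compiling it
n = 0;                                % pre-assign so the output's type and size are known
n = countSurfPoints(I);
end

% countSurfPoints.m on the MATLAB path, free to call any toolbox function:
function n = countSurfPoints(I)
points = detectSURFFeatures(I);
n = double(points.Count);
end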

Answered
how to get texture extraction using LBP features in MATLAB?
You can use the extractLBPFeatures function.

about 3 years ago | 0
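
A minimal sketch using a grayscale image that ships with MATLAB:

I = imread('cameraman.tif');           % grayscale test image
lbpFeatures = extractLBPFeatures(I);   % 1-by-N LBP histogram describing the image texture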

Answered
About error of helperVisualizeMotionAndStructureStereo
In helperVisualizeMotionAndStructureStereo.m, please note the following code in retrievePlottedData which discards xyzPoints out...

about 3 years ago | 2

Answered
About SLAM initial Pose data
The initial pose data is provided by the dataset. It's used to convert the 3-D reconstruction into the world coordinate system. ...

about 3 years ago | 0

Answered
About "slam" on my camera device
The example shows how to run stereo visual SLAM using recorded data. It doesn't support "online" visual SLAM yet, meaning that y...

about 3 years ago | 0

Answered
Is Unreal Engine of the Automated Driving Toolbox available on Ubuntu?
As of R2021a, only Windows is supported. See Unreal Engine Simulation Environment Requirements and Limitations.

about 3 years ago | 1

Answered
why we use Unreal engine when there is a 3D visualization available in Automated driving toolbox?
It's not just used for visualization. With Unreal, you can configure prebuilt scenes, place and move vehicles within the scene, ...

more than 3 years ago | 0

| Accepted

Answered
About running a stereo camera calibrator
In general, you can use any type of stereo camera and calibrate its intrinsic parameters using the Stereo Camera Calibrator. You...

more than 3 years ago | 0

Answered
How to obtain optimal path between start and goal pose using pathPlannerRRT() and plan()?
Please set the random seed at the beginning to get consistent results across different runs: https://www.mathworks.com/help/mat...

more than 3 years ago | 0

| Accepted
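
A minimal sketch of that suggestion, assuming costmap is a vehicleCostmap and startPose/goalPose are [x y theta] vectors in world coordinates:

rng(0);   % fix the seed so the sampling-based planner gives repeatable paths
planner = pathPlannerRRT(costmap);
refPath = plan(planner, startPose, goalPose);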

Answered
Does vehicleCostmap this type of map only support pathPlannerRRT object to plan a path? Can I use another algorithm to plan a path?
You can create an occupancyMap object from a vehicleCostmap object using the following syntax: map = occupancyMap(p,resolution)...

more than 3 years ago | 0
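
A minimal sketch, assuming costmap is an existing vehicleCostmap; its cost matrix serves as p, and occupancyMap expects a resolution in cells per meter:

p = getCosts(costmap);                       % cost values in [0, 1]
map = occupancyMap(p, 1/costmap.CellSize);   % resolution in cells per meter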
