Community Profile


Qu Cao

Last seen: 6 days ago | Active since 2016

Statistics

  • Knowledgeable Level 3
  • 3 Month Streak
  • Revival Level 3
  • Knowledgeable Level 2
  • First Answer


Content Feed


Answered
How to use reconstructScene with a disparity map from file, without calling rectifyStereoImages ?
You can use the reprojectionMatrix output from rectifyStereoImages to do the reconstruction. Otherwise, you need to save the ste...

about 1 month ago | 0

| Accepted
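
A minimal sketch of the save-and-reload workflow described above, assuming R2022a or later; the image, parameter, and file names are hypothetical:

  % Rectify once and keep the reprojection matrix for later use.
  [J1, J2, reprojectionMatrix] = rectifyStereoImages(I1, I2, stereoParams);
  save('reprojectionMatrix.mat', 'reprojectionMatrix');

  % In a later session, reload the saved matrix and the disparity map from file,
  % then reconstruct without calling rectifyStereoImages again.
  S = load('reprojectionMatrix.mat');
  D = load('disparityMap.mat');               % hypothetical file holding the disparity map
  xyzPoints = reconstructScene(D.disparityMap, S.reprojectionMatrix);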

Answered
Match the coordinate systems of "triangulate" and "reconstructScene" with "disparitySGM"
The point cloud generated from reconstructScene is in the rectified camera 1 coordinate. Starting in R2022a, you can use the ad...

about 2 months ago | 0

| Accepted

Answered
MATLAB Simulate 3D Camera: why is there no focal length (world units) attribute in the sensor model?
Please take a look at this page: https://www.mathworks.com/help/vision/ug/camera-calibration.html#bu0ni74 If you know the size...

4 months ago | 0
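
The linked page covers the standard pinhole relation between focal length in world units and focal length in pixels; a minimal illustration (the numbers are made up):

  focalLength_mm = 4.0;       % physical focal length in world units (illustrative)
  pixelSize_mm   = 0.0022;    % pixel pitch of the sensor, in the same units (illustrative)
  focalLength_px = focalLength_mm / pixelSize_mm    % focal length in pixels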

Answered
How to port SLAM algorithm to embedded platform?
Unfortunately, as of R2022a the visual SLAM pipeline doesn't support code generation yet. We're actively working on this support...

5 months ago | 1

| Accepted

Answered
how to get the relative camera pose to another camera pose?
Note that the geometric transformation convention used in the Computer Vision Toolbox (CVT) is different from the one used in th...

5 months ago | 1

| Accepted
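
A minimal sketch of one way to compute a relative pose, assuming R2022b+ premultiply rigidtform3d objects whose transform maps camera coordinates to world coordinates (the poses below are made up):

  pose1 = rigidtform3d(eye(3), [0 0 0]);       % absolute pose of camera 1 (illustrative)
  pose2 = rigidtform3d(eye(3), [0.5 0 0]);     % absolute pose of camera 2 (illustrative)
  pose1Inv = invert(pose1);
  relPose = rigidtform3d(pose1Inv.A * pose2.A);   % pose of camera 2 in camera 1's frame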

Answered
How to get 3D world coordinates from 2D image coordinates?
You should use the rectified stereo images. The disparityMap computed from disparitySGM should have the same size as your stereo...

7 months ago | 0
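
A rough sketch of that lookup (pixel coordinates arbitrary, R2022a+ reconstructScene syntax assumed):

  xyzPoints = reconstructScene(disparityMap, reprojectionMatrix);   % M-by-N-by-3, same size as the rectified images
  r = 120;  c = 340;                          % pixel of interest in the rectified left image
  point3D = squeeze(xyzPoints(r, c, :))'      % [X Y Z] in world units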

Answered
Creating a depth map from the disparity map function
You can use reconstructScene for your workflow.

8 months ago | 0
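
A minimal follow-on sketch: the depth map is the Z channel of the reconstructScene output (R2022a+ syntax assumed):

  xyzPoints = reconstructScene(disparityMap, reprojectionMatrix);
  depthMap  = xyzPoints(:, :, 3);    % per-pixel depth (Z) in world units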

Answered
Unable to use functions from the Computer Vision Toolbox in Simulink MATLAB function block
A workaround is to declare the function as an extrinsic function so that it will be essentially executed in MATLAB: https://www...

8 months ago | 0

| Accepted
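
A minimal sketch of that workaround inside a MATLAB Function block; myDetectCorners is a hypothetical plain MATLAB function on the path that wraps the unsupported toolbox call and returns a fixed-size 100-by-2 double:

  function locs = fcn(I)
  % Declare the helper as extrinsic so it executes in MATLAB rather than in generated code.
  coder.extrinsic('myDetectCorners');
  locs = zeros(100, 2);         % preassign so the output has a known class and size
  locs = myDetectCorners(I);    % runs in MATLAB during simulation
  end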

Answered
how to get texture extraction using LBP features in MATLAB?
You can use the extractLBPFeatures function.

11 months ago | 0
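
For example, using a sample image that ships with MATLAB (the CellSize value is arbitrary):

  I = rgb2gray(imread('peppers.png'));                       % LBP features expect a grayscale image
  lbpFeatures = extractLBPFeatures(I, 'CellSize', [32 32]);  % 1-by-N texture descriptor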

Answered
About error of helperVisualizeMotionAndStructureStereo
In helperVisualizeMotionAndStructureStereo.m, please note the following code in retrievePlottedData which discards xyzPoints out...

11 months ago | 0

Answered
About SLAM initial Pose data
The initial pose data is provided by the dataset. It's used to convert the 3-D reconstruction into the world coordinate system. ...

11 months ago | 0

Answered
About "slam" on my camera device
The example shows how to run stereo visual SLAM using recorded data. It doesn't support "online" visual SLAM yet, meaning that y...

12 months ago | 0

Answered
Is Unreal Engine of the Automated Driving Toolbox available on Ubuntu?
As of R2021a, only Windows is supported. See Unreal Engine Simulation Environment Requirements and Limitations.

about 1 year ago | 0

Answered
why we use Unreal engine when there is a 3D visualization available in Automated driving toolbox?
It's not just used for visualization. With Unreal, you can configure prebuilt scenes, place and move vehicles within the scene, ...

about 1 year ago | 0

| Accepted

Answered
About running a stereo camera calibrator
In general, you can use any type of stereo camera and calibrate its intrinsic parameters using the Stereo Camera Calibrator. You...

about 1 year ago | 0

Answered
How to obtain optimal path between start and goal pose using pathPlannerRRT() and plan()?
Please set the random seed at the beginning to get consistent results across different runs: https://www.mathworks.com/help/mat...

more than 1 year ago | 0

| Accepted
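
A minimal sketch of the suggestion, assuming an existing vehicleCostmap named costmap and start/goal poses given as [x y theta] vectors:

  rng(0);                                        % fix the random seed so planning is repeatable across runs
  planner = pathPlannerRRT(costmap);
  refPath = plan(planner, startPose, goalPose);  % startPose and goalPose assumed to be defined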

Answered
Does vehicleCostmap this type of map only support pathPlannerRRT object to plan a path? Can I use another algorithm to plan a path?
You can create an occupancyMap object from a vehicleCostmap object using the following syntax: map = occupancyMap(p,resolution)...

more than 1 year ago | 0
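
A sketch of that conversion, assuming an existing vehicleCostmap named costmap with world units in meters:

  p = getCosts(costmap);                % matrix of cell costs in [0, 1]
  resolution = 1 / costmap.CellSize;    % occupancyMap resolution is cells per meter
  map = occupancyMap(p, resolution);    % usable with Navigation Toolbox planners such as plannerRRT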

Answered
Defining a ROI for feature extraction rather than rectangle
Unfortunately, rectangle is the only type of ROI supported. As a workaround, you can define multiple ROIs in your image to cover...

more than 1 year ago | 1
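
A minimal sketch of the multi-ROI workaround; the ROI values are arbitrary, and detectSURFFeatures stands in for whichever detector is actually used:

  rois = [ 50  50 100  80;      % each row is an [x y width height] rectangle
          150  60  80 120];
  points = [];
  for k = 1:size(rois, 1)
      points = [points; detectSURFFeatures(I, 'ROI', rois(k, :))];  %#ok<AGROW>
  end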

Answered
Monocular Visual Simultaneous Localization and Mapping Error: Dot indexing is not supported for variables of this type.
The example has been updated over the past few releases. For 20b version, please check the following documentation: https://www...

more than 1 year ago | 0

| Accepted

Answered
How can I store the feature descriptors for all 3D points found in Structure from Motion?
You can use imageviewset to store the feature points associated with each view and the connections between the views. You can al...

more than 1 year ago | 0
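
A minimal sketch of storing per-view features and the connections between views (variable names hypothetical, R2022b+ pose objects assumed):

  vSet = imageviewset;
  vSet = addView(vSet, 1, rigidtform3d, 'Features', features1, 'Points', points1);
  vSet = addView(vSet, 2, relPose,      'Features', features2, 'Points', points2);
  vSet = addConnection(vSet, 1, 2, 'Matches', indexPairs);   % rows index into the two feature sets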

Answered
How to use "triangulateMultiview" to reconstruct the same world coordinate point under multiple different views?
triangulateMultiview requires both camera poses and intrinsic parameters inputs to compute the 3-D world positions corresponding...

more than 1 year ago | 0

| Accepted
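
A rough sketch of the required inputs (variable names hypothetical):

  tracks = pointTrack(viewIds, imagePoints2D);    % 2-D projections of one world point across several views
  camPoses = poses(vSet);                         % table of absolute camera poses from an imageviewset
  xyzPoints = triangulateMultiview(tracks, camPoses, intrinsics);   % intrinsics: a cameraIntrinsics object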

Answered
imageviewset() not returning an imageviewset object
imageviewset was introduced in R2020a. If you are not able to upgrade to 20a, you can use viewSet as a workaround.

almost 2 years ago | 0

Answered
How to ensure that the number of matches between 2 images is equal to the number given?
You can set 'MatchThreshold' to 100 and 'MaxRatio' to 1.

almost 2 years ago | 0

| Accepted
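
For example, with the loosest settings so that nearly every feature keeps its best match:

  indexPairs = matchFeatures(features1, features2, ...
      'MatchThreshold', 100, ...   % accept even weak matches
      'MaxRatio', 1);              % disable the ratio test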

Answered
Undefined function 'estimateGeomerticTransform' for input arguments of type 'SURFPoints'.
There is a typo in your code: estimateGeomerticTransform should be estimateGeometricTransform.

almost 2 years ago | 0

Answered
How do I find 3D coordinates from stereo picture pair?
You can use the reconstructScene function to compute the 3-D world points from a disparity map. Then, you can query the 3-D coordina...

about 2 years ago | 0

Answered
RRT navigation toolbox and automated driving toolbox also costmap from driving scenario
1) plannerRRT in Navigation Toolbox is a generic motion planner where you can define the state space. pathPlannerRRT in Automate...

about 2 years ago | 0

Answered
helperTrackLocalMap error with Monocular SLAM
Answer pasted from comments: Please try tuning the parameters to see if it helps improve the robustness: In helperIsKeyFrame,...

about 2 years ago | 0

Answered
Why is the pointsToWorld back-projection inverted?
You may need to convert the camera world pose to extrinsics using cameraPoseToExtrinsics: [worldOri,worldLoc] = estimateWorld...

about 2 years ago | 3

| Accepted
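
A sketch of the conversion described above, using the pre-R2022b functions named in the answer (the input image points, world points, and camera parameters are assumed to exist):

  [worldOrientation, worldLocation] = estimateWorldCameraPose(imagePoints, worldPoints, cameraParams);
  % pointsToWorld expects extrinsics (world-to-camera), not the camera pose, so convert first:
  [rotationMatrix, translationVector] = cameraPoseToExtrinsics(worldOrientation, worldLocation);
  newWorldPoints = pointsToWorld(cameraParams, rotationMatrix, translationVector, imagePoints);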

Answered
rectangle instead of square for camera calibration
Unfortunately, this function does not support rectangle patterns.

about 2 years ago | 0

Answered
How to calculate fisheye intrinsics?
You can use undistortFisheyeImage function to produce a "virtual perspective" camera intrinsics, which is the format you need. S...

more than 2 years ago | 1

| Accepted
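
A minimal sketch, assuming fisheye calibration has already produced a fisheyeParameters object named fisheyeParams:

  [J, perspectiveIntrinsics] = undistortFisheyeImage(I, fisheyeParams.Intrinsics);
  % perspectiveIntrinsics is a cameraIntrinsics object describing the "virtual
  % perspective" camera, usable wherever pinhole intrinsics are expected.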
