Survey Urban Environment Using UAV
This example shows how to create a custom trajectory for a UAV and capture its flight data while surveying an urban environment. To map its environment, the UAV flies through a city and captures aerial images using a camera and other sensor data.
Specify Trajectory for UAV
Create a flight trajectory for the UAV by providing it with these waypoints in order:
A takeoff position.
Two flight waypoints to follow after takeoff.
A landing position.
Because the map used in this example is from the Unreal Engine® US City Block scene, you must specify the position of each waypoint with respect to the world frame of that scene. This image shows a top view of the scene with the desired flight path, which contains four waypoints.
The flight path must:
Take off at point T at an xyz-position of [-185 105 0].
Fly through points 2 and 3 at xy-positions [-115 105] and [-115 -5], respectively.
Land at point L at an xyz-position of [-15 -5 0].
The takeoff and landing points have elevations of 0 meters, but you must specify heights for waypoints 2 and 3.
Specify an elevation of 150 meters. The UAV must maintain this elevation for the entire flight.
uavElevation = 150;
Set waypoints to the xy-positions of the takeoff, flight, and landing waypoints, in order.
waypoints = [-185 105; -115 105; -115 -5; -15 -5];
Compute and visualize the flight trajectory of the UAV using the exampleHelperComputeAndShowUAVTrajectory helper function.
[positionTbl,rotationTbl,traj] = exampleHelperComputeAndShowUAVTrajectory(waypoints,uavElevation);
Run Simulation and Obtain Filtered Flight Data
This section uses the Unreal Engine® simulation to capture data, and stores the acquired Simulink® signal data in the out MAT file, which contains these outputs:
SimulationMetadata — Metadata from the Simulink simulation.
logsout — Logged data acquired by the UAV during its flight. This includes image, label, and depth information.
The simulation finds the time steps at which the UAV is within specific tolerance values for the target elevation and pitch. It then logs data for only those time steps. This process maintains a central perspective projection at the correct elevation and removes noisy data.
First, specify the target pitch, in degrees, for the UAV. This pitch value is in the Unreal Engine world coordinate system.
uavPitch = 90;
Then, set the tolerance values for UAV elevation and UAV pitch.
elevationTolerance = 15e-2; pitchTolerance = 2;
Keep only every nthFrame-th frame. To create a map, you do not need every frame that meets the tolerance requirements. Increasing nthFrame results in faster, but less accurate, map creation, while decreasing it results in slower, but more accurate, map creation. You must ensure that successively captured image frames have overlapping regions between them. Use an nthFrame value of 16 so the captured data meets these requirements.
nthFrame = 16;
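The frame-selection logic described above runs inside the Simulink model. As a rough illustration only, the same filter could be sketched in MATLAB as follows; the field names Position and Pitch on the trajectory tables are assumptions for this sketch, not the model's actual signal names.

```matlab
% Sketch (not part of the shipped model): select the time steps at which
% the UAV is within tolerance of the target elevation and pitch, then
% keep only every nthFrame-th of those steps.
isAtElevation = abs(positionTbl.Position(:,3) - uavElevation) <= elevationTolerance;
isAtPitch     = abs(rotationTbl.Pitch - uavPitch) <= pitchTolerance;
validSteps    = find(isAtElevation & isAtPitch);
keptSteps     = validSteps(1:nthFrame:end);  % indices of logged frames
```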
Open the mSurveyUrbanEnvironment Simulink model.
pathToModel = "mSurveyUrbanEnvironment"; open_system(pathToModel)
Run the model to start the Unreal Engine simulation and capture the camera and depth sensor data in the out MAT file. Note that you need at least 240 MB of disk space to save the captured simulation data.
Load Captured Data
Load the out MAT file generated by the Simulink model. The out MAT file contains this data captured from the model:
Image — Image frames acquired by the UAV camera for each time step, returned as an H-by-W-by-3-by-F array.
Depth — Depth map for each image frame acquired by the UAV camera, returned as an H-by-W-by-F array.
H and W are the height and width, respectively, of the acquired images, in pixels. F is the total number of time steps logged, and is directly proportional to the flight time.
Save the image and depth data into workspace variables.
image = logsout.get("Image").Values.Data; depth = logsout.get("Depth").Values.Data;
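As an optional sanity check, not part of the original example, you can confirm that the loaded arrays have the dimensions described above, with matching frame counts:

```matlab
% Verify the H-by-W-by-3-by-F image stack and H-by-W-by-F depth stack agree.
[H,W,C,F] = size(image);
assert(C == 3,"Expected RGB image frames")
assert(isequal(size(depth),[H W F]),"Depth stack must match image dimensions")
```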
Save Filtered Data
You must calculate the values of these additional parameters for orthophoto computation:
focalLength — The focal length of the UAV camera, in meters. Note that the UAV camera in this example faces vertically downward.
meterToPixel — The number of pixels on your screen that constitute 1 meter in the real world. This is also the ratio between your screen and the Simulink space.
reductionFactor — This value represents the reduction factor for each axis. If the saved orthophoto has a size of [H W] in pixels, then the actual size of the orthophoto is [H W]*reductionFactor in pixels, and the real-world size of the orthophoto, in meters, is [H W]*reductionFactor/meterToPixel. This ensures that 1 meter in this final map is equivalent to 1 meter in real life. Because processing matrices with millions of rows and columns is computationally expensive, you use the reduction factor to scale down such matrices.
Load the model and save the path to the camera block. Then, obtain the focal length of the camera. Assume a symmetrical lens with the same focal length along the x- and y-axes.
load_system(pathToModel) pathToCameraBlock = pathToModel + "/Downward Facing Camera"; focalLengths = str2num(get_param(pathToCameraBlock,"FocalLength")); focalLength = focalLengths(1);
Calculate the meter-to-pixel ratio for your screen. This initial value is for a screen on which 752 pixels corresponds to 15.6 cm in the real world. If this value does not match that of your screen, determine the ratio for your monitor and set meterToPixel accordingly.
meterToPixel = 752*100/15.6;
Next, set the reduction factor for the orthophoto to 800. A reduction factor of 1 results in a 1:1 scale map of the ground, which would be too large to process efficiently, while a reduction factor of 800 results in a 1:800 scale map. Note that, depending on your use case, you may need to tune the reduction factor, as too high or too low a value can cause ripple effects in the orthophoto.
reductionFactor = 800;
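To make the scale relationship concrete, this short sketch computes the pre-reduction pixel size and real-world extent of an orthophoto; the [H W] value here is an assumed example size, not a value from this example.

```matlab
% Assumed saved orthophoto size, in pixels (for illustration only).
H = 1000; W = 1500;
actualSizePixels = [H W]*reductionFactor;         % size before reduction, pixels
realWorldMeters  = actualSizePixels/meterToPixel; % real-world extent, meters
```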
Save the image and depth data, along with the additional orthophoto parameters, into a MAT file.
save("flightData.mat", ... "image","depth", ... "uavElevation","meterToPixel","focalLength","reductionFactor", ... "-v7.3");
This example showed how to create a flight trajectory for a UAV, simulate the UAV flight, and capture sensor data during the flight. It also showed how to filter the captured data so that only the frames of interest remain, eliminating noise.
In the next step of the Map and Classify Urban Environment Using UAV Camera and Deep Learning workflow, Obtain Orthophotos from Central Perspective Images, you process the captured perspective images to convert them into orthophotos.