Video length is 33:53

Highway Lane Change

Learn how to develop automated lane change maneuver (LCM) systems for highway driving scenarios using Automated Driving Toolbox™ and Navigation Toolbox™. Automated LCM systems enable self-driving vehicles to automatically move from one lane to another. Discover how to design and integrate the following subsystems used in automated LCM systems:

  • Vehicle dynamics
  • Sensors and simulation environments
  • Path planning algorithm designs
  • Lane change controller designs
  • Metric assessments and visualizations

Published: 21 Jun 2021

Hello, everyone. I am Mihir Acharya, product manager for Navigation Toolbox at MathWorks. And I work with robotics and autonomous systems applications. I'll be presenting this webinar today with Pitambar.

Hey, everyone. My name is Pitambar Dayal. I'm the product manager for Automated Driving Toolbox at MathWorks. And I'll be joining Mihir here today to present this webinar.

All right, let's get started. In this session, you will learn how to design and simulate a lane change maneuver system for an autonomous vehicle driving on a highway. We will see how you can develop virtual worlds for simulation, design a motion planner and controller, run simulation tests, and finally deploy the code on a hardware platform.

Lane changing is one of the many complex processes for a self-driving vehicle. And in this session, we will show how you can use model-based design to build a lane change maneuver system. But before we jump into the specifics, I would like to share the bigger picture with you.

At MathWorks, we pay special attention to industry trends and to how we can support our customers by providing algorithms and tools for these trends. The first trend we see now is the need for virtual worlds. The industry continues to do more with simulation in order to reduce vehicle testing, as well as to explore scenarios that may be difficult or unsafe to reproduce in the real world. MathWorks provides tools that allow you to design and simulate scenes, scenarios, sensors, and dynamics.

The second trend driving MathWorks investments is the need for multidisciplinary skills. Developing an autonomous system application requires a variety of skills, including perception algorithms for object detection and tracking, localization and pose estimation, and algorithms for motion planning and controls. Now, you could be an expert in one or several of these areas, but still want to use an out-of-the-box solution for the other pieces of the puzzle.

Now MathWorks provides dedicated toolboxes to help you learn and apply these disciplines. These toolboxes include metrics and analysis capabilities that you can use throughout your development process. The third trend driving investments is the need for software. Our investments here are focused on strengthening our platform tools to enable you to design, deploy, and test embedded software. Today, we will illustrate the planning and control capabilities along with the other pieces of this workflow. Now let's take a look at the lane change maneuver example.

We use this simulation test bench model for the highway lane change maneuver system. Let's see a quick glimpse of the simulation output. Level 2 or level 3 highway driving automation requires a motion planner and a robust controller for lane change maneuvers. In this simulation, the blue car is called the Ego vehicle or Ego actor. And the vehicle in front of the Ego vehicle is called the lead or Target vehicle. When the lead vehicle slows down, the planner generates an optimal trajectory for the Ego vehicle to change lanes. This example also shows how to perform collision checking using dynamic capsule-based objects.

This test bench includes six major subsystems-- scenario and environment, lane change planner and planner configuration, lane change controller, vehicle dynamics, metrics assessment, and visualization. From left to right, we will walk you through each subsystem of this model. I will now hand it over to Pitambar to walk us through the scenario and planner subsystems of this model.

Here, we have our test bench model. And the first subsystem that we'll look at is the scenario and environment block. This subsystem reads map data from the base workspace. And it outputs information about lanes and the reference path. So let's click in and take a look.

Now, there are two main blocks here-- the Scenario Reader block and the Vehicle To World block. Let's start with the first one. So the Scenario Reader block reads in a driving scenario from the workspace. You can see the parameters of this block that you can change over here. We're going to be using a pre-loaded scenario, which you saw in the demo earlier.

Now let me close out of this. This block takes in Ego vehicle information to perform a closed-loop simulation. And it outputs ground truth about actors and lane boundaries in Ego vehicle coordinates.
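To give you a feel for what such a scenario object looks like in code, here is a minimal MATLAB sketch of how you might build a highway driving scenario in the base workspace with Automated Driving Toolbox. The road length, lane count, and speeds are illustrative values, not the ones used in the shipping example.

    % Minimal sketch: a workspace scenario a Scenario Reader block could read.
    % Road geometry, lane count, and speeds are illustrative assumptions.
    scenario = drivingScenario('SampleTime', 0.1);
    roadCenters = [0 0 0; 300 0 0];
    road(scenario, roadCenters, 'Lanes', lanespec(4));      % straight 4-lane highway

    egoVehicle = vehicle(scenario, 'ClassID', 1);            % blue Ego car
    trajectory(egoVehicle, [10 -1.8 0; 290 -1.8 0], 25);     % cruise at ~25 m/s

    leadVehicle = vehicle(scenario, 'ClassID', 1);           % slow lead (Target) car
    trajectory(leadVehicle, [60 -1.8 0; 290 -1.8 0], 12);    % slower speed ahead of the Ego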

So the other block we have is the Vehicle to World block. This converts Target vehicle positions from vehicle coordinates to world coordinates. Now, we see the outputs of this subsystem on the right. Let's go back to our test bench and see what these outputs map to.

We can see that system time, map info, and Target Actors World, all map to our next subsystem, which is the Highway Lane Change Planner. Our Lane Boundaries and our Target Actors Ego map to our metrics assessment. And Target Actors World also maps to the visualization.

So that brings us to the next subsystem, which is the Highway Lane Change Planner. Now I'll click in here, and you'll see that this is a pretty complex subsystem. So before I dive into the different blocks we see here, I'll use a schematic to show what's going on.

OK, now let's go through how the motion planner works at a high level. And then we'll map it back to the subsystem that we just saw. So the main operations of the motion planner are driven by three functions. There's the dynamicCapsuleList, which is used for collision checking. There's the referencePathFrenet, which handles coordinate transforms-- we have global2frenet and frenet2global. And the third one is trajectoryGeneratorFrenet, which we'll get to in a minute.
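As a rough MATLAB sketch of how these three pieces are set up with Navigation Toolbox (the waypoints and capsule dimensions here are illustrative assumptions, not the values from the shipping example):

    % Reference path and trajectory generator over mission waypoints
    waypoints = [0 0; 100 0; 200 20; 300 20];        % assumed mission-planner output
    refPath   = referencePathFrenet(waypoints);      % provides global2frenet / frenet2global
    connector = trajectoryGeneratorFrenet(refPath);  % samples candidate trajectories

    % Capsule list for collision checking, with an approximate Ego capsule
    capList = dynamicCapsuleList;
    [egoID, egoGeom] = egoGeometry(capList, 1);      % default geometry for Ego ID 1
    egoGeom.Geometry.Length = 4.7;                   % car length in meters (assumed)
    egoGeom.Geometry.Radius = 1.0;                   % capsule radius in meters (assumed)
    updateEgoGeometry(capList, egoID, egoGeom);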

But first, here we have a map. And the waypoints needed to reach the destination are defined by a mission planner. Now, the mission planner is out of the scope of today's topic, so we just assume that the waypoints are given for a driving scenario.

The first thing we do is execute a coordinate transform from global to Frenet space for the Ego vehicle and the surrounding most important objects, or MIOs. Now, with the surrounding traffic conditions and map information, the terminal state sampler defines terminal states for cruise control, lead car following, and lane change.
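Continuing that sketch, the transform itself might look like this, assuming a 1-by-6 global state of the form [x y theta kappa speed accel] for the Ego vehicle (the numbers are illustrative):

    egoGlobalState = [20 -1.8 0 0 25 0];                      % assumed Ego state
    egoFrenetState = global2frenet(refPath, egoGlobalState);  % [s ds dds l dl ddl]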

The trajectoryGeneratorFrenet generates multiple candidate trajectories the Ego vehicle can take to meet the terminal states. We perform a cost evaluation and feasibility checks for each candidate trajectory to find feasible candidates that have a minimum cost value. Now in the meantime, we also predict target trajectories for each surrounding target. So each candidate Ego trajectory is checked for collisions against these target trajectories.
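Here is a minimal sketch of that step, continuing the earlier code. The terminal states, 3-second horizon, and lane offsets are illustrative assumptions, and the collision check against predicted target poses is only outlined in the comments:

    termStates = [NaN 25 0  0   0 0;     % keep the current lane at 25 m/s
                  NaN 25 0  3.6 0 0;     % left lane change (3.6 m lateral offset)
                  NaN 25 0 -3.6 0 0];    % right lane change
    [~, globalTraj] = connect(connector, egoFrenetState, termStates, 3);

    % For each candidate k, the predicted target poses would be pushed into the
    % capsule list and the candidate checked against them, for example:
    %   updateEgoPose(capList, egoID, struct('States', globalTraj(k).Trajectory(:,1:3)));
    %   isColliding = checkCollision(capList);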

And finally, we can determine a collision-free optimal trajectory. So we conduct the trajectory generation and motion prediction in the Frenet space. Now let's go to the next slide and see how some of these items map to our subsystem.

All right, here we see how that schematic maps to our subsystem. We see that these first two blocks transform the Ego and MIO to Frenet coordinates. Then we see the terminal state sampler, motion prediction, and motion planner as well. So let's go into some more detail on the terminal state sampler and the motion planner.

We'll start with the terminal state sampler. The terminal state sampler defines terminal states for trajectory generation based on surrounding traffic conditions and map information. So the terminal state sampler reads the Ego and surrounding MIOs' Frenet states, along with map information, and finds a preferred lane. It generates terminal states for three different driving maneuver modes-- cruise control, lead car following, and lane change.

For example, here we have three different traffic situations. First, we identify which target is safe and which target is unsafe based on a time-to-collision analysis. If the current Ego lane is safe and unsafe targets are detected in both the left and right lanes, then the preferred lane would be the current Ego lane. In this case, the desired maneuver would be either cruise control or lead car following mode.

We would determine the terminal Frenet states for each mode. If the current lane is unsafe and either the left or right lane is safe, the desired maneuver should be the lane change mode. So again, we define the terminal Frenet states according to the direction of the lane change.
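As a rough sketch of the time-to-collision test behind those safe and unsafe labels (the signal names and the 3-second threshold are illustrative, not the shipping model's logic):

    function isSafe = exampleIsTargetSafe(relDistance, relVelocity, ttcThreshold)
    % relDistance: gap to the target [m]; relVelocity: closing speed [m/s]
        if relVelocity <= 0
            isSafe = true;                       % target is not closing in
        else
            ttc = relDistance / relVelocity;     % time to collision [s]
            isSafe = ttc > ttcThreshold;         % e.g. ttcThreshold = 3
        end
    end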

Now let's take a look at the Motion Planner block. In this block, we evaluate cost values for all terminal states. The terminal states are sorted by cost and fed into the trajectoryGeneratorFrenet. This function generates multiple candidate Ego trajectories. We remove some of the candidates through a feasibility check-- for example, an excessive curvature or yaw rate would give us reason to remove some trajectories.
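A rough sketch of that kind of feasibility filter, continuing the earlier code (each candidate from connect carries [x y theta kappa speed accel] samples; the thresholds are illustrative assumptions):

    maxCurvature = 0.1;     % 1/m, assumed limit
    maxYawRate   = 0.5;     % rad/s, assumed limit
    isFeasible = true(numel(globalTraj), 1);
    for k = 1:numel(globalTraj)
        kappa   = globalTraj(k).Trajectory(:, 4);
        speed   = globalTraj(k).Trajectory(:, 5);
        yawRate = kappa .* speed;                    % yaw rate = curvature * speed
        isFeasible(k) = all(abs(kappa) <= maxCurvature) && ...
                        all(abs(yawRate) <= maxYawRate);
    end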

Now, we check each feasible candidate for collisions against predicted target trajectories. And finally, we can determine a collision-free optimal trajectory. So here's an example for finding an optimal trajectory.

In this driving situation, first we identify which target is safe and which target is unsafe. We predict the target trajectories and check for collisions against them. The red lines show colliding trajectories. The white lines mean that the candidate trajectories have a high cost. And the light blue dashed lines indicate infeasible trajectories due to an excessive curvature or yaw rate.

So the green line is what we're looking for. It represents a collision-free optimal trajectory. If the current lane becomes unsafe, the preferred lane is changed to the next lane. So the right lane change becomes the optimal trajectory with minimum cost in this case.

Now let's go back to the Simulink model and show how the outputs of the motion planner feed into the next subsystem. All right, here we have the planner subsystem. I'll go back, and we see that we have two outputs. The first output, RefPointOnPath, feeds into our lane change controller.

The second output is related to visualization. And we'll see this in the visualization subsystem later. So now, I'll pass it off to Mihir, who will talk about the lane change controller.

Thanks, Pitambar. As Pitambar showed, we have the collision-free optimal trajectory coming as an input from the Lane Change Planner subsystem. Now we need a lane change controller to follow this reference trajectory.

The input to the controller is a reference point on the path. Another input, coming from the vehicle dynamics subsystem, is the longitudinal velocity. The output from the controller feeds into the vehicle dynamics, which makes this a closed-loop system.

Now let's look inside the lane change controller subsystem. The Path Following Controller block keeps the vehicle traveling within a marked lane of a highway while maintaining a user-set velocity. Now let's see how it works.

The input from the planner consists of several parameters. Out of these parameters, we need the reference velocity, reference curvature, relative yaw angle, and lateral offset deviation. We consider the trajectories provided by the planner as virtual lanes. The virtual lane center subsystem creates a virtual lane from the path points.

The controller needs to know the lateral deviation and relative yaw angle with reference to the virtual lane. The Preview Curvature subsystem converts the trajectory to the curvature input. This is required because the Ego vehicle needs to track the curvature while its longitudinal velocity is also changing. So the Preview Curvature subsystem merges the curvature, the curvature derivative, and the longitudinal velocity to convert the trajectory into the curvature input required by the Path Following Controller block.
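As a rough idea of where a curvature signal comes from, here is a finite-difference sketch of curvature along path points x and y (the shipping Preview Curvature subsystem does more than this, so treat it only as an illustration):

    dx  = gradient(x);   dy  = gradient(y);
    ddx = gradient(dx);  ddy = gradient(dy);
    kappa = (dx.*ddy - dy.*ddx) ./ (dx.^2 + dy.^2).^1.5;   % signed curvature along the path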

Now let's learn more about the Path Following Controller block. Path following for lane change first needs lateral control that keeps the Ego vehicle traveling along the center line of its lane by adjusting the steering of the Ego vehicle. Individually, this task is also called Lane Keeping Assist.

Secondly, longitudinal control maintains a user-set velocity of the Ego vehicle. You may also call this cruise control. The Path Following Controller combines the lateral and longitudinal control using Adaptive Model Predictive Control. You can generate the longitudinal acceleration and steering angle outputs using this out-of-the-box Simulink block.

Now, you may ask, why can't I use other control methods, such as a PID controller? The reason is that tuning a PID controller for larger systems like these becomes challenging. MPC, or model predictive control, has many advantages over PID. Some other reasons I would highlight here: it can handle multiple inputs and multiple outputs, allowing the controller to respond to data from various sensors. And it can take into account constraints from these sensors, such as slowing down the vehicle if a corner or a stop sign is detected in advance. This is also called a preview capability, which is really important in these kinds of situations.

Now let's see how the Path Following Controller block follows the center line of a lane. When the Ego vehicle moves away from the lane center, the lateral deviation and relative yaw angle change. The lateral control keeps the Ego car traveling along the center line of its lane by adjusting the front steering angle of the Ego car. The goal for lane keeping control is to drive both the lateral deviation and the relative yaw angle close to 0.

We saw how we get the previewed lane curvature earlier. These three inputs complete the lateral control part of the Path Following Controller block. The longitudinal control tracks the velocity set by the user and maintains a safe distance from the lead vehicle by adjusting the longitudinal acceleration of the Ego vehicle. It takes the longitudinal velocity as an input from the vehicle dynamics block, which we will learn about later. The goal here is to compute optimal control actions while satisfying safe distance, velocity, and acceleration constraints.

Now, I would like to highlight that if we only needed one of the two, either lateral or longitudinal control, we could just use a standard model predictive controller. But we want to autonomously steer a car whose lateral vehicle dynamics change with time due to the varying longitudinal velocity. A traditional model predictive controller is not effective at handling the varying dynamics, as it uses a constant internal plant model. And that's why we use Adaptive Model Predictive Control.

Now let's take a look inside the Path Following Controller block. Going back to our Simulink model, we can click inside the Path Following Controller block. And here, we see there are three tabs. The Parameters tab provides the bicycle model parameters. You can enable or disable distance keeping using the spacing control checkbox.

The Controller tab includes actuator limits, such as the minimum and maximum steering angles and longitudinal acceleration. It also includes the MPC settings that allow you to tune the controller's performance. The last tab includes an interesting piece. Here you can create your own subsystem version of the controller and customize it according to your application. This type of customization and flexibility is what I love about these tools.

All right, so we talked about the lane change controller subsystem that included the Path Following Controller block. I mentioned that we would talk about the Vehicle Dynamics block later. And now is the time. So I'll hand it over to Pitambar to discuss the Vehicle Dynamics block.

Thanks, Mihir. Now let's talk about our next subsystem, which is vehicle dynamics. And the main block that we're going to focus on here is this one. This is the Vehicle Body 3 Degrees of Freedom block. It calculates the longitudinal, lateral, and yaw motion for a rigid, two-axle vehicle model.
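To give a feel for the states involved, here is a much-simplified kinematic bicycle sketch in MATLAB. It is not the force-input Vehicle Body 3DOF block used in this model; the wheelbase and inputs are illustrative assumptions.

    function state = exampleBicycleStep(state, steerAngle, accel, dt)
    % state = [x y yaw speed]; one Euler step of a kinematic bicycle model
        L = 2.8;                                   % wheelbase in meters (assumed)
        x = state(1); y = state(2); yaw = state(3); v = state(4);
        x   = x + v*cos(yaw)*dt;                   % longitudinal and lateral motion
        y   = y + v*sin(yaw)*dt;
        yaw = yaw + (v/L)*tan(steerAngle)*dt;      % yaw motion from front steering
        v   = v + accel*dt;
        state = [x y yaw v];
    end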

So in this example, we're using a simple bicycle model that takes force inputs. The input values come from the steering angle and the acceleration, which undergo transforms to feed into this model. Now, before I get to the outputs, I want to talk briefly about Vehicle Dynamics Blockset.

The 3 Degrees of Freedom bicycle model that we used is a simple model that works well for scenarios where roll, pitch, and bounce aren't that important. But when you do care about those effects, we have other models in Vehicle Dynamics Blockset that can help you capture them with higher fidelity. So now let's jump into what else Vehicle Dynamics Blockset has to offer.

Vehicle Dynamics Blockset has an open and documented library of component and subsystem models. It has pre-built vehicle models that you can parameterize and customize, fast-running models that are ready for hardware-in-the-loop deployment, and an interface to Unreal Engine. So if you're interested in exploring more about vehicle dynamics and you have this toolbox, this is something to look into.

So now let's go back to our subsystem. I want to talk a little bit about the outputs. We have three of them-- the Ego actor, the longitudinal velocity, and the lateral velocity. The Ego actor contains information about many aspects of the Ego vehicle. We see this Pack Ego block here, which contains information about position, velocity, yaw, yaw rate, and the Ego actor ID. All of this gets packaged into a struct called Ego actor.

Now let's go back to our test bench and see how these outputs feed into the other subsystems. We can see that the Ego actor feeds into many of the subsystems in our test bench. The lateral velocity feeds into our metrics assessment. And the longitudinal velocity feeds into our metrics assessment, as well as our lane change controller. So speaking of metrics, I'm now going to pass it back to Mihir, who's going to talk about metrics and visualization.

Thanks, Pitambar. Now we have all the pieces in our test bench in place. Our Ego vehicle is able to follow and change lanes. But what happens when we don't monitor our speed while driving on a highway? I don't need to answer that, right?

With an autonomous vehicle, we need to monitor a lot of metrics that we do very naturally in manual driving. The metrics assessment block helps us monitor various metrics that we'll see in a bit. This subsystem takes the longitudinal velocity, lateral velocity, and Ego actor dynamics as inputs from the Vehicle Dynamics subsystem. And it takes the lane boundaries and target actors as inputs from the scenario and environment subsystem.

It then generates a dashboard that shows all these metrics while the simulation is running. But first, let's take a look inside the subsystem and see what different metrics we are collecting. So we have the Detect Collision and Detect Lead Vehicle blocks here. The Detect Collision block detects collisions of the Ego vehicle with other vehicles and halts the simulation if a collision is detected. The Detect Lead Vehicle block computes the headway between the Ego and lead vehicles, which is used for computing the time gap as well.

The time gap is calculated using the distance to the lead vehicle, or headway, and the longitudinal velocity of the Ego vehicle. The longitudinal jerk is calculated using the longitudinal velocity. The lateral jerk is calculated using the lateral velocity.
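For logged signals sampled at times t, those metric computations might look roughly like this (the variable names and the numeric differentiation are illustrative, not the shipping blocks):

    timeGap  = headway ./ max(egoLongVelocity, eps);   % time gap [s]
    longAcc  = gradient(egoLongVelocity, t);           % longitudinal acceleration
    longJerk = gradient(longAcc, t);                   % longitudinal jerk
    latAcc   = gradient(egoLatVelocity, t);            % lateral acceleration
    latJerk  = gradient(latAcc, t);                    % lateral jerk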

When we run the simulation, we can see the lane change visualization and the metrics dashboard side by side. Notice how the Ego acceleration readings on the dashboard vary when the lane change maneuver is executed. Now you must be wondering, how do we generate the kind of visualization that we have been seeing in this presentation? And that brings me to the last part of this test bench, which is visualization.

Now, unlike most other subsystems in this test bench, the visualization block here is a MATLAB function. This block creates a MATLAB plot using the inputs from the scenario and environment, as well as the planner subsystems. The MATLAB source code gives you the flexibility of changing any visualization parameters. The high-level block of this MATLAB function gives you control to quickly change the view settings.

You can enable or disable the capsule objects or the trajectories. And then you can see just the lane change operation without any other highlighted trajectories or objects. Next, we will see how we can use this test bench to analyze and test the lane change maneuver system.

For analysis and test purposes, we created 15 different test scenarios that you can edit programmatically. The example scenarios include straight roads, curved roads, and scenarios imported from an HD map. We can select the desired test scenario using the setup function. Let me show you an example simulation with scenario number 14 from that list.

We added a couple of dashboard blocks to visualize the planner behavior during the simulation. We can monitor the current maneuver mode and preferred lane. The test bench model automatically determines the preferred lane depending on the surrounding traffic conditions. The driver can override the preferred lane decision at any time.

The current driving maneuver mode is cruise control in this video. When a slow-moving lead car is detected, the preferred lane is switched from the second to the third lane, as we can see here. And the maneuver mode is changed from cruise control to lane change mode. Now the driver overrides the current preferred lane and changes it to the fourth lane. Then the vehicle moves to the fourth lane accordingly.

Another slow-moving lead car is detected, and the vehicle changes back to the third lane. Now, a disabled car is detected in the current Ego lane. And the vehicle changes lanes to avoid a collision. However, the distance to the orange car is too close, and it changes lanes again, this time to the first lane. So we can see it's quite a complicated situation. However, the planner and controller were able to handle these situations.

The following example shows a closed-loop system simulation from another scenario. During the simulation, you can see the curvature for the reference trajectory. The lateral deviation and relative yaw angle show the trajectory following performance. The steering angle is the control action to follow the reference trajectory. Finally, you will see the lateral and longitudinal jerk as well. The assessment block continuously monitors these jerk values to check if they exceed the limits.

You'll notice that during the lane change, the reference curvature changes and the lateral jerk increases. Now we have seen various simulation scenarios. But what if we would like to deploy this code on target hardware? Once we feel confident after testing these algorithms in simulation, we can deploy them on target hardware or test them in software-in-the-loop simulation using the generated C++ code.

This shipping example that comes with our product walks you through the code generation process for the lane change planner. You can configure the lane change planner model to generate C++ code for real-time implementation of the model. This snippet shows a list of model configuration parameters. You can apply common optimizations to the generated code, and also create a report to explore the generated code.
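At a minimum, configuring and building such a model for C++ code generation might look like this (the model name is assumed here; the shipping example uses its own helper functions and configuration set):

    mdl = 'HighwayLaneChangePlanner';                  % assumed model name
    load_system(mdl);
    set_param(mdl, 'SystemTargetFile', 'ert.tlc', ...  % Embedded Coder target
                   'TargetLang', 'C++', ...
                   'GenerateReport', 'on');
    slbuild(mdl);                                      % generate and build the code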

After generating C++ code for the highway lane change planner, you can assess the code functionality using software-in-the-loop (SIL) simulation. SIL simulation provides early insight into the behavior of the deployed application. You can compare the outputs from Normal simulation mode and Software-in-the-Loop simulation mode, and then plot and verify whether the differences between these runs are within the tolerance limits.
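A rough sketch of that comparison, assuming the signals from the Normal and SIL runs have been logged into vectors over time t (the variable names and tolerance are illustrative; the shipping example provides its own comparison helpers):

    tol = 1e-3;                                        % assumed tolerance
    diffSignal = abs(normalRunOutput - silRunOutput);  % per-sample difference
    withinTolerance = all(diffSignal(:) <= tol);
    plot(t, diffSignal); xlabel('Time (s)'); ylabel('|Normal - SIL|');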

During the Software-in-the-Loop simulation, you can log execution time metrics for the generated code. This is a code execution profiling report we are seeing here, which provides metrics based on data collected from a Software-in-the-Loop execution. You can see a summary with profiled sections of the code and CPU utilization. You can plot the execution time taken by the step function of the highway lane change planner model. And we do this by using a helper function provided in the example.

This plot can be an early indicator of the performance of the generated code. For accurate execution time measurements, you can profile the generated code when it is integrated into the external environment. Now, you can also generate C++ code for the lane change controller. And similar to the planner example, there is a dedicated shipping example that shows how you can generate the code and a report, and assess the functionality and execution time for the controller. Now I'll hand it over to Pitambar to summarize what we have learned today.

So let's go ahead and do a quick recap of the entire test bench. We talked about the scenario and environment subsystem. We talked about the lane change planner, the lane change controller. We went over the vehicle dynamics, the metric assessment, and finally the visualization. Now I want to bring this full circle to some of the slides you saw in the beginning.

So we talked about how there are several industry trends that are driving MathWorks investment in automated driving systems. There's the need for virtual worlds. The industry continues to do more with simulation in order to reduce vehicle testing, as well as to explore scenarios that might be difficult or unsafe to reproduce in the real world. We saw this in our model with the scenario and environment subsystem, as well as the vehicle dynamics subsystem.

The second thing is the need for multidisciplinary skills. Developing an automated driving application requires a variety of skills, from planning and controls to perception disciplines like detection, localization, tracking, and fusion. In our particular example, we focused on the planning decisions and controls. And the third trend driving investments is the need for software. So here we saw aspects of designing, testing, and deploying our highway lane change system.

So to wrap up, let's go over some of the key takeaways. First of all, we closely followed the example from the documentation called Highway Lane Change. So if this is something you want to try out, you'll need Automated Driving Toolbox, Navigation Toolbox, and Model Predictive Control Toolbox.

And in this example, we covered a couple of things. First, we talked about developing virtual worlds for simulation. We did this with the scenario and environment subsystem as well as the vehicle dynamics subsystem. Then we talked about designing a motion planner and a controller. And Mihir and I went into quite a bit of detail on how this was done for this example.

And finally, we ran simulation tests and deployed code. So before we end this session, I'll leave you with a couple of additional resources. On the left here, we have the automated driving solutions page. This talks about how you can use MathWorks tools for various automated driving applications.

In the middle, we have a motion planning ebook. And finally, on the right-hand side, we have a model predictive control video series. So if these are resources that you're interested in, you can scan these QR codes, and they'll take you to those pages. And that's how we'll conclude this video. So I'll leave this page up for a couple of seconds. But thanks for tuning in.