Fusing a GPS and IMU to Estimate Pose | Understanding Sensor Fusion and Tracking, Part 3
From the series: Understanding Sensor Fusion and Tracking
Brian Douglas
This video continues our discussion on using sensor fusion for positioning and localization by showing how we can use a GPS and an IMU to estimate an object’s orientation and position. We’ll go over the structure of the algorithm and show you how the GPS and IMU both contribute to the final solution so you have a more intuitive understanding of the problem.
Published: 1 Oct 2019
Let’s continue our discussion on using sensor fusion for positioning and localization. In the last video, we combined the sensors in an IMU to estimate an object’s orientation and showed how the absolute measurements of the accelerometer and magnetometer were used to correct the drift from the gyro. Now in this video, we’re going to do sort of a similar thing, but we’re going to add a GPS sensor. GPS can measure position and velocity, so this way we can extend the fusion algorithm to estimate them as well. Just like the last video, the goal is not to fully describe the fusion algorithm; it’s again too much for one video. Instead, I mostly want to go over the structure of the algorithm and show you visually how the GPS and IMU both contribute to the final solution so you have a more intuitive understanding of the problem. So I hope you stick around for it. I’m Brian, and welcome to a MATLAB Tech Talk.
Now it might seem obvious to use a GPS if you want to know the position of something relative to the surface of the Earth. Just strap a GPS sensor onto your system and you’ve got latitude, longitude, and altitude, and you’re done. This is perfectly fine in some situations. Like when the system is accelerating and changing directions relatively slowly and you only need position accuracy to a few meters. This might be the case for a system that is determining directions in your car. As long as the GPS locates you to within a few meters of your actual spot, the map application could figure out which road you’re on and therefore where to go.
On the other hand, imagine if the system requires position information to a few feet or less and it needs position updates at hundreds of times per second to keep up with the fast motion of your system. Like, for example, trying to follow a fast trajectory through obstacles with a drone. In this case, GPS might have to be paired with additional sensors, like the sensors in an IMU, to get the accuracy that you need.
To give you a more visual sense of what I’m talking about here, let’s run an example from the MATLAB Sensor Fusion and Tracking Toolbox, called Pose Estimation from Asynchronous Sensors. This example uses a GPS, accel, gyro, and magnetometer to estimate pose, which is both orientation and position, as well as a few other states. The script generates a true path and orientation profile that the system follows. The true orientation is the red cube, and the true position is the red diamond. It’s a little hard to see right now, but it’ll be more clear when it starts. Now, the pose algorithm is using the available sensors to estimate orientation and position, and it shows the results of that as the blue cube and blue diamond, respectively. So that’s what we want to watch. How closely do the blue objects follow the red objects? The graphs on the right plot the error if you want to see a more quantitative result.
The cool thing about this is that while the script is running, the interface allows us to change the sample rates of each of the sensors or remove them from the solution altogether so that we can see how that impacts the estimation. Let’s start by removing all of the sensors except for the GPS and we’ll read the GPS 5 times a second.
The default trajectory is to follow a circle with a radius of about 15 meters. And you can see that it’s moving around this circle pretty slowly. The orientation estimate is way off, as you’d expect since we don’t have any orientation sensors active. But the position estimate isn’t too bad. After the algorithm settles and removes the initial bias, we see position errors of around plus/minus 2 meters in each axis.
So now let me add back in the IMU sensors and see if our result is improved. It’s taking several seconds for the orientation to converge, but you can see that it is slowly correcting itself back to the true orientation. Also, the position estimate is, well, about the same. Plus or minus 2 meters, maybe a little less than that. Because this is a relatively slow movement and it’s such a large trajectory, the IMU sensors that are modeled here are only contributing a minor improvement over the GPS alone. The GPS velocity measurement is enough to predict how the object moves over the 0.2 seconds between measurements since the object isn’t accelerating too quickly. This setup is kind of analogous to using GPS to get directions from a map on your phone while you’re driving. Adding those additional sensors isn’t really going to help much.
So now, let’s go in the opposite direction and create a trajectory that is much faster. In the trajectory generation script, I’ll just speed up the velocity of the object going around the circle from 2.5 to 12.5 meters per second. Since the object is now turning through the same circle much faster, this creates much more acceleration in a shorter amount of time. And to really emphasize the point I’m trying to make here, I’m going to slow the GPS sample rate to once per second. Let’s give this a shot.
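To put a rough number on “more acceleration”: the centripetal acceleration on a circle is v^2/r, so a 5x increase in speed on the same circle means roughly a 25x increase in acceleration. Here’s that quick check (my own arithmetic, not part of the example script):

```matlab
% Back-of-the-envelope check (not part of the example script): centripetal
% acceleration on a circle of radius r at speed v is v^2/r.
r = 15;                      % circle radius, meters
a_slow = 2.5^2  / r          % ~0.42 m/s^2 at the original 2.5 m/s
a_fast = 12.5^2 / r          % ~10.4 m/s^2 at the faster 12.5 m/s
```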
Okay, what’s happening here is that when we get a GPS measurement, we get both position and velocity. So once a second, we get a new position update that puts the estimate within a few meters of the truth, but we also get the current velocity. For one second, the algorithm propagates that velocity forward to predict what the object is doing between measurements. This works well if the velocity is near constant for that one second, but extremely poorly, as you can see, when the velocity is rapidly changing.
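To see how quickly that constant-velocity assumption breaks down on the fast circle, here’s a minimal sketch in plain MATLAB (my own numbers and variable names, not the example script’s): it propagates the last GPS position and velocity forward for one second and compares that straight-line prediction to the true circular motion.

```matlab
% Propagate the last GPS fix with a constant-velocity model for one second
% and compare it to the true motion of an object circling at 12.5 m/s on a
% 15 m radius circle. (Illustrative sketch, not the example script.)
r = 15; v = 12.5; w = v/r;            % radius, speed, angular rate
t = 0:0.01:1;                         % one second between GPS fixes
truePos = r*[cos(w*t); sin(w*t)];     % true circular motion
gpsPos  = truePos(:,1);               % position from the GPS fix at t = 0
gpsVel  = v*[0; 1];                   % velocity from the same fix (tangential)
predPos = gpsPos + gpsVel.*t;         % constant-velocity (straight-line) prediction
plot(t, vecnorm(truePos - predPos))   % error grows to several meters by t = 1 s
xlabel('Time since GPS fix (s)'); ylabel('Prediction error (m)')
```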
This is the type of situation that is similar to a drone that has to make rapid turns and avoid obstacles, and it is here where the addition of the IMU will help. We won’t have to rely on propagating a static velocity for a full second; we can estimate velocity and rotation using the IMU sensors.
Now, to see the improvement, I’ve placed two different runs next to each other. The left is the GPS only that we just saw, and the right is with the addition of the IMU. You can see, at least visually, how the GPS with the IMU is different from the GPS alone. It’s able to follow the position of the object more closely and creates a circular result rather than a saw blade. So adding an IMU seems to help estimate position. Why is this the case, and how is the algorithm combining these sensors?
Well, again, intuitively we can imagine that the IMU is allowing us to dead reckon the state of the system between GPS updates, similar to how we use the gyro to dead reckon between the mag and accel updates in the last video. And this is true, except it’s not as cut and dried as that; it’s a lot more intertwined than you might think. To understand why this is the case, we need to explore the code a bit.
The fusion algorithm is a continuous-discrete extended Kalman filter. This particular one is set up to accept the sensor measurements asynchronously, which means that each of the sensors can be read at their own rate. This is beneficial if you want to run, say, your gyro at 100 Hz, your mag and accelerometer at 50 Hz, and your GPS at 1 Hz. You’ll see below how this is handled.
The thing I want to point out here, though, is that this is a massive Kalman filter. The state vector has 28 elements in it that are being estimated simultaneously. There are the obvious states like orientation, angular velocity, linear position, velocity, and acceleration. But the filter is also estimating the sensor biases and the magnetic field vector. Estimating sensor bias is extremely important because bias drifts over time. That means that even if you calculate sensor bias before you operate your system and hard code that calibration value into the software, it will not be accurate for long. And any bias that we don’t remove will be integrated and will cause the estimate to walk away from the truth when we’re relying on that sensor.
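To get a feel for why that matters, here’s a small illustration (my numbers, not the example’s) of how even a tiny residual gyro bias, if it isn’t estimated and removed, integrates into a steadily growing heading error:

```matlab
% A small uncorrected gyro bias integrates into a growing heading error,
% which is why the filter estimates bias as a state rather than trusting a
% one-time calibration. (Illustrative numbers only.)
biasDegPerSec = 0.05;            % assumed residual gyro bias, deg/s
t = 0:600;                       % ten minutes of relying on the gyro
headingErr = biasDegPerSec * t;  % 3 deg after one minute, 30 deg after ten
plot(t, headingErr); xlabel('Time (s)'); ylabel('Heading error (deg)')
```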
Now if you don’t have a good initial estimate of sensor bias when you start your system, then you can’t just turn on your filter and trust it right away. You have to give it some time to not just estimate the main states you care about like position and velocity, but also to estimate some of the secondary states like bias. Usually you let the Kalman filter converge on the correct solution when the system is stationary and not controlled, or maybe while you’re controlling it using a different estimation algorithm, or maybe you just let it run and you don’t care that the system is performing poorly while the filter converges. And so when we talk about giving a Kalman filter enough time to converge, this is part of it.
Another thing we need to think about is how to initialize the filter. This is an EKF and it can estimate state for nonlinear systems. It does this by linearizing the models around its current estimate and then using that linear model to predict the state into the future. So, if the filter is not initialized close enough to the true state, the linearization process could be so far off that it causes the filter to never actually converge. Now this isn’t really a problem for this example because the ground truth is known in the simulation, so the filter is simply initialized to a state close to truth. But in a real system, you need to think about how to initialize the filter when you don’t know truth.
Often, this can be done by just using the measurements from the sensors directly. Like using the last GPS reading to initialize position and velocity and using the gyro to initialize your angular rate, and so on.
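A minimal sketch of that kind of initialization might look like the following. The readings here are placeholders and the struct fields are just illustrative names, not the example script’s variables; in a real system the values would come from the first sensor samples.

```matlab
% Initialize the filter states from raw sensor readings when truth is
% unknown. (Placeholder values and illustrative field names.)
lastGpsPosition = [0 0 0];       % m, from the most recent GPS fix
lastGpsVelocity = [2.5 0 0];     % m/s, from the same GPS fix
lastGyroReading = [0 0 0.17];    % rad/s, from the most recent gyro sample

initState.Position        = lastGpsPosition;
initState.Velocity        = lastGpsVelocity;
initState.AngularVelocity = lastGyroReading;
% Orientation can be initialized from the accelerometer (gravity direction)
% and the magnetometer (magnetic north), and the bias states can start at zero.
```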
With the filter initialized, we can start running it. And every Kalman filter consists of the same two-step process: predict and correct.
To understand why, we can think about it like this: if we wanted to estimate the state of something, like where it is or how fast it’s going, there are two general ways to do it. We could measure it directly, or we could use our knowledge of dynamics and kinematics to predict it.
For example, imagine a car driving down the road and we want to know its location. We could use GPS to measure its position directly. That’s one way. But if we knew where it started and its average speed, we could also predict where it will be after a certain amount of time with some accuracy. And using those predictions alongside a measurement can produce a better estimate. So the question might be, why wouldn’t we just trust our measurement completely here? It’s probably better than our estimate. Well, as sort of an extreme example, what if you checked your watch and it said it was 3 p.m., and then you waited a few seconds, checked it again, and it said 4 p.m.? You wouldn’t automatically assume an hour had passed just because your measurement said so. You have a basic understanding of time, right? That is, you have an internal model that you can use to predict how much time has passed, and that would cause you to be skeptical of your watch if you thought seconds had passed and it said an hour. On the other hand, if you thought roughly an hour had passed and the watch said 65 minutes, you would probably be more inclined to believe the watch over your own estimate, since you’d be less confident in your prediction. Sensors have errors and uncertainty associated with them, and you can improve the state estimate by including a prediction even if your sensor is pretty good.
And this is precisely what a Kalman filter is doing. It’s predicting how the states will change over time based on a model that it has, and along with the states, it’s also keeping track of how trustworthy the prediction is based on the process noise that you’ve given it. The longer the filter has to predict the state, the less confidence it has in the result. Then, whenever a new measurement comes in, which has its own measurement noise associated with it, the filter compares the prediction with the measurement and then corrects its estimate based on the relative confidence in both.
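To make that predict/correct pattern concrete, here’s a deliberately tiny one-dimensional Kalman filter in plain MATLAB. It is not the 28-state EKF from the example, and the noise values and measurements are made up; the point is just to show the prediction’s confidence shrinking over time and the correction blending prediction and measurement by their relative confidence.

```matlab
x = 0;  P = 1;                   % state estimate (position) and its variance
Q = 0.01;                        % process noise: how fast confidence decays
R = 4;                           % measurement noise variance
v = 2.5;  dt = 0.2;              % assumed constant velocity and time step
z = [0.6 1.1 1.4 2.2 2.4];       % made-up position measurements

for k = 1:numel(z)
    % Predict: propagate the state with the model; uncertainty grows
    x = x + v*dt;
    P = P + Q;
    % Correct: blend prediction and measurement by relative confidence
    K = P / (P + R);             % gain near 0 trusts the model, near 1 trusts the sensor
    x = x + K*(z(k) - x);
    P = (1 - K)*P;
end
```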
And this is what the script is doing. The simulation runs at 100 Hz, and at every time step, it propagates the state estimate forward. Then, if there is a new measurement from any of the sensors, it runs the update portion of the Kalman filter, adjusting the states based on the relative confidence in the prediction and the specific measurement. So it’s in this way that the filter can run with asynchronous measurements.
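In code, that loop looks roughly like the sketch below. It uses the toolbox’s insfilterAsync filter, which is what provides the per-sensor fuse calls, but the sample rates, readings, and noise values here are placeholders I picked for illustration, not the example script’s.

```matlab
% Rough sketch of an asynchronous fusion loop: predict every step, and fuse
% whichever sensors have a new sample. (Placeholder readings and noise values.)
filt = insfilterAsync;               % 28-state asynchronous EKF from the toolbox
fs = 100;  dt = 1/fs;                % simulation and IMU rate
for k = 1:10*fs                      % run for 10 seconds
    predict(filt, dt);               % prediction step, every time step

    fuseaccel(filt, [0 0 9.8], 1e-2);   % accel and gyro at 100 Hz
    fusegyro(filt,  [0 0 0.1], 1e-4);

    if mod(k, 2) == 0                % magnetometer at 50 Hz
        fusemag(filt, [19 0 -48], 0.5);
    end
    if mod(k, fs) == 0               % GPS at 1 Hz: position (LLA) and velocity
        fusegps(filt, [0 0 0], 1, [0 2.5 0], 0.1);
    end

    [pos, orient] = pose(filt);      % current position and orientation estimate
end
```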
Now, with the GPS-only solution that we started with, the prediction step could only assume that the velocity wasn’t changing over the 1 second, and since there were no updates to correct that assumption, the estimate would drastically run away from truth. However, with the IMU, the filter is updating 100 times a second, looking at the accelerometer, and seeing that the velocity is in fact changing. So in this way, the filter can react to a changing state faster with the quick updates of the IMU than it can with the slower updates of the GPS. And once the filter converges and has a good estimate of the sensor biases, that will give us a better prediction, and therefore, a better overall state estimate. This is the power of sensor fusion.
Now, this explanation might not have been perfectly clear and probably a bit fast, but I think it’s hard to really get the topic to sink in by watching a video. So I would encourage you to play around with this example, turn sensors on and off, change their rates, noise characteristics, and the trajectory and see how the estimation is affected yourself. You can even dive further into the code and see how the EKF is implemented. I found it was helpful to place breakpoints and pause the execution of the script so that I could see how the different functions update the state vector.
Okay, this is where I’m going to leave this. In the next video, we’ll start to look at estimating the state of other objects when we talk about tracking algorithms.
So if you don’t want to miss that and future Tech Talk videos, don’t forget to subscribe to this channel. And, if you want, you can check out my channel, Control System Lectures, where I cover more control theory topics as well. Thanks for watching.