What Is Robust Control? | Robust Control, Part 1
From the series: Robust Control
Brian Douglas
Published: 11 Feb 2020
This video series covers a high-level introduction to robust control. The goal is to get you up to speed with some of the terminology and to give you a better understanding of what robust control is and how it fits into the larger control field. In this first video, we’ll go over what robust means and why it’s important. So, I hope you stick around for it. I’m Brian, and welcome to a MATLAB Tech Talk.
Let’s start with some definitions, and then we’ll circle back around and provide some intuition behind these words. A system is robust if it’s capable of meeting requirements (usually stability or performance measures) even in the presence of model or disturbance uncertainty. Basically, it’s being able to say, “I don’t know my system perfectly or the environment it’s going to operate within, but I’m confident it’s going to work!”
And robust control theory is how we get that confidence. It’s a method or an approach that we can use to design a system in a way that can handle uncertainty. Now, of course, there’s a lot more to it than this: things like how do I quantify uncertainty, is all uncertainty treated equally, and so on. We’re going to get to some of that in this series. But to start off here, I want to touch on why we need to think about things like robustness in the first place.
Let’s look at a simplified workflow for designing a controller. You have some process that you want to change the behavior of, some physical and real system. Let’s say you want to develop a hover controller for a drone. You develop a mathematical model of that system which maps the inputs, motor speeds, to the outputs, drone position, velocity, and orientation. You know, basically some function that mimics the behavior of the real drone. Then you use this model to design a controller, which is just another mapping that takes a reference signal and the system state as inputs and outputs the control variables. These are the things that we can use to affect the behavior of the process. And this controller mapping can take many different forms: it could be PID, or full-state feedback, or a neural network, or some nonlinear mapping, or whatever you choose to use to solve your problem.
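As a rough sketch of what that workflow can look like in MATLAB (the second-order model, its mass and drag numbers, and the 2 rad/s tuning target below are all made-up placeholders, not the actual drone from this example):

% Placeholder model: a very simplified map from thrust command to altitude.
s = tf('s');
m = 0.5;                          % assumed vehicle mass, kg
b = 0.2;                          % assumed drag coefficient, N*s/m
P = 1/(m*s^2 + b*s);              % thrust command -> altitude

% Placeholder controller: a PID tuned against the model.
C = pidtune(P, 'PID', 2);         % 2 rad/s target crossover frequency

% Closed loop from altitude reference to altitude, and its step response.
T = feedback(C*P, 1);
step(T), grid on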
You probably have some performance or stability requirements that you’re designing to, and once you have a controller that works with the model, you can implement it on an embedded processor that drives the actual drone. And if your model was close enough to the real system, then you have some expectation that the controller you designed will work on the real system as well.
But here’s the sad reality: your model, the model you used to design your control system, is wrong. Which means your control system isn’t going to work exactly the same on the real hardware as it did against your model. Now the main question is, is it good enough? I mean, our controller could probably handle some differences in the real system, right? Well, before we address that, let’s look at why the model is wrong in the first place. And there are a lot of reasons.
Real systems are extremely complex, and the input-to-output mapping may be poorly understood. For example, we may not understand or be able to measure certain things like high-frequency dynamics; therefore, our model won’t include them, or at the very least it won’t be accurate at those frequencies.
Or we may not understand the inputs into the system very well, things like disturbances or even the behavior of our own actuators that are driving the system. So, there is uncertainty in both what the driving forces are that go into the system as well as uncertainty in how the system responds to those driving forces.
Additionally, we may also intentionally build a model that deviates from the real physics in order to get something simple to work with. This is the case for every linear model: we’ve sacrificed realism for simplicity. Another thing is that the system itself may naturally vary over time, with the change driven by unknowable stochastic events. Short-term unknowable events we might label noise, whereas longer-term events might be things like how the system degrades over time. The drone might not behave the same brand new as it does a year later.
And finally, you may be designing to a single physical system, but manufacturing tolerances and defects mean that you don’t exactly know the parameters of duplicate systems. Say, for example, the controller is for a toy drone that you’re selling, but each drone is manufactured slightly differently and so it flies slightly differently. A single system model wouldn’t capture the uniqueness of each of the drone variations and, therefore, would have some amount of error when we use it to represent the set of all of the drones.
All of these things make models imperfect representations of real life. They’re just approximations, but they are extremely helpful for solving many problems, so we’re willing to put up with the fact that they aren’t perfect. But because of their imperfections, there is some uncertainty, some unknown difference between the output of the model and the output of the real system.
Therefore, when you use a model to design a controller, the consequence is that it’s being tailored to an imperfect representation. And so, a controller that is perfectly tuned to the model still runs the risk of reduced performance or reduced stability on the real system.
So how do we get around this problem? A straightforward way is to simply add margin into the design. For example, don’t just meet stability, but exceed it by a certain amount, some margin, so that we’re confident that any deviations from what we think the system behavior will be won’t result in the system exceeding a requirement. We’ll just eat into some of that margin.
If you’re familiar with the classical gain and phase margins, this is exactly what they are for. You may have a requirement that says gain margin must be more than 6 dB and phase margin must be greater than 45 degrees at some critical frequency or frequencies. What this is saying is that you have to design your controller in a way that the real system will still be stable even if the gain is up to 6 dB higher than your model claims, or if the phase lags up to 45 degrees more than your model claims at the frequencies it’s specified at. Choosing the amount of margin you need depends on how uncertain you are that your model matches the real system.
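As a quick sketch of checking a design against those kinds of requirements in MATLAB (the plant and the proportional controller below are placeholders I made up; only the 6 dB and 45 degree thresholds come from the example above):

% Placeholder open-loop transfer function L = C*P.
s = tf('s');
P = 1/(s*(s + 1)*(0.5*s + 1));    % placeholder plant
C = 0.5;                          % placeholder proportional controller
L = C*P;

[Gm, Pm, Wcg, Wcp] = margin(L);   % Gm is a ratio, Pm is in degrees
GmdB = 20*log10(Gm);

% Example requirements: gain margin > 6 dB and phase margin > 45 degrees.
meetsGain  = GmdB > 6;
meetsPhase = Pm > 45;
fprintf('Gain margin %.1f dB (ok: %d), phase margin %.1f deg (ok: %d)\n', ...
    GmdB, meetsGain, Pm, meetsPhase)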
If we choose too little margin, then the differences between the model and the real system may cause greater deviations than what we’ve provisioned for. And if we choose too much margin, then we’re being too conservative and we force ourselves to build a system that has to meet more stringent requirements. Typically this results in a more expensive control system, either with direct hardware costs for sensors or actuators that have less noise or better performance, or with more engineering time to design and test a higher-performing system. So the trick is to choose the perfect amount of margin and to apply it to the right places.
And gain and phase margins are just one way to assess the robustness of your system. They are useful for sure, but they don’t necessarily give a complete view of how robust your system is. For example, take this open-loop transfer function G(s). I can solve for the classical gain and phase margins with the function margin in MATLAB. And here, you can see that this system has infinite gain margin, meaning no amount of additional gain will make the closed-loop system unstable, and it has almost 70 degrees of phase margin at 0.4 rad/s, which is equivalent to about 3 seconds of delay margin. So, this feels like a pretty robust system since it has such high margins.
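The exact G(s) from the video isn’t reproduced in this transcript, so the transfer function below is only a stand-in, chosen so that its classical margins come out roughly like the ones quoted (infinite gain margin, a phase margin near 70 degrees at about 0.4 rad/s):

% Stand-in open-loop transfer function (not the actual G(s) from the video).
s = tf('s');
G = 0.43/(s*(s + 1));

% Classical margins: plot and numeric values.
margin(G)                          % Bode plot annotated with Gm and Pm
[Gm, Pm, Wcg, Wcp] = margin(G)     % Gm = Inf here; Pm is about 68 degrees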
And to show this, let’s assume that the gain in our model was way off and the real system has an open-loop gain that is 15 dB higher. We can get the closed-loop system with the feedback command and plot its step response, and you can see that even with this large gain error, it’s still a stable system.
We can also add phase by itself by adding a delay to the open-loop system. Here I’ve chosen a delay of 2 seconds. This eats up about 2/3 of the total phase margin, and as you can see with the step response, this closed-loop system is also still stable. So a gain increase of 15 dB by itself, or an added delay of 2 seconds by itself, are both stable variations.
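Continuing with the stand-in G(s), here is what those two checks might look like. For this stand-in, both perturbed loops really do stay stable, though the exact responses won’t match the ones in the video:

% Stand-in open-loop transfer function (same placeholder as before).
s = tf('s');
G = 0.43/(s*(s + 1));

% 15 dB of extra open-loop gain, by itself.
k  = 10^(15/20);
T1 = feedback(k*G, 1);

% A 2-second delay, by itself.
Gd = G;
Gd.InputDelay = 2;
T2 = feedback(Gd, 1);

% Both closed-loop step responses settle, so both variations are stable here.
step(T1, T2), grid on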
However, if we increase both the gain and the phase, the story is different. Here, I’m only adding 3 dB of gain and a measly 0.2 seconds of delay, and this causes the closed-loop system to be unstable. So, it’s not as robust as we were expecting it to be. Therefore, the combination of gain and phase uncertainty also needs to be considered, and not just each individually. This is why we may look at disk margin rather than classical margins, because it accounts for simultaneous gain and phase perturbations. Disk margins have another benefit of being applicable to multi-input, multi-output systems, which classical gain and phase margins don’t handle very well. There is a lot to unpack here with all of these numbers, and we’ll talk more about disk margins in the next video, but one thing to note quickly is that this is showing that the disk gain and phase margins are only about 1.3 and 17 degrees, respectively, which aligns with our experimental results. So much less than were reported with the classical gain and phase margins.
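Computing disk margins takes a single command if you have Robust Control Toolbox. Note that the stand-in G(s) below won’t reproduce the roughly 1.3 and 17 degree values quoted here, since those come from the G(s) used in the video:

% Disk-based margins for simultaneous gain and phase variations
% (requires Robust Control Toolbox). Values differ from the video's,
% because this G(s) is only a stand-in.
s = tf('s');
G = 0.43/(s*(s + 1));

DM = diskmargin(G);
DM.GainMargin      % allowable gain variation range, as a factor [min max]
DM.PhaseMargin     % allowable phase variation range, in degrees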
Now, there are more methods to analyze robustness than just these two, but the general idea is the same for every method: figure out how much uncertainty and variation your system can handle before it no longer meets the intent of its requirements.
So with that in mind, we need a way to represent uncertainty. What I mean by that is how to describe the errors in the model, or the differences between it and the real system. This might seem a little odd at first, because if you knew where the model differed from the real system, why capture uncertainty, why not just update the model? And the reasons are the ones that I covered earlier. Often, we can bound the uncertainty without fully knowing the dynamics that are causing it. For example, we may not be able to predict sensor noise, but we can figure out that the noise won’t exceed some threshold or has some statistical probability distribution. For classical gain and phase margins, we’re representing our uncertainty as purely gain error or purely phase error, and when we say that we require 6 dB and 45 degrees of margin, that means we’ve assessed our system and our model and believe the errors to be bounded by those values.

But we can represent uncertainty in other ways. For example, the amount of uncertainty could be based on frequency. We might say that we have a lot more confidence in our model at low frequencies than we do at higher frequencies. So, we may require less margin in the design for low-frequency behaviors, while requiring more margin for high-frequency behaviors.
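One common way to write that kind of frequency-dependent uncertainty down is a multiplicative uncertainty whose size grows with frequency. Here’s a sketch using Robust Control Toolbox; the nominal model and all of the weight numbers are made up for illustration:

% Frequency-dependent model uncertainty (requires Robust Control Toolbox).
% The weight says roughly "about 5% model error at low frequency, growing
% to 100% error around 20 rad/s and larger beyond that."
s     = tf('s');
Gnom  = 0.43/(s*(s + 1));            % stand-in nominal model
W     = makeweight(0.05, 20, 2);     % uncertainty size versus frequency
Delta = ultidyn('Delta', [1 1]);     % unknown dynamics with gain <= 1
Gunc  = Gnom*(1 + W*Delta);          % family of possible "real" plants

% Sample a few members of the family to see the spread around the nominal.
bode(usample(Gunc, 20), Gnom, 'r'), grid on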
Ok, so far, we’ve only introduced the analysis portion of robust control theory; that’s figuring out how robust the system we have is. The other half of robust control is synthesis: that is, creating a system specifically with robustness in mind, or designing a controller that produces the needed amount of margin in the system. Now, the thing to note here is that robust control is not a specific type of controller like PID or full-state feedback. Instead, it’s a design method. It’s a set of tools that allow us to choose PID gains, or full-state feedback gains, or to tune some other controller so that it is robust. In this way, we can design, for example, a single PID controller, using robust methods, that will hover all of our toy drones, as long as the drone parameters or dynamics vary within the bounds of our uncertainty.
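As a sketch of that idea (again using Robust Control Toolbox, with a made-up uncertain plant and made-up parameter ranges): give the model uncertain parameters, tune a single PID against the nominal model, and then check that same controller against the whole uncertain family.

% One PID for a whole family of slightly different drones (requires
% Robust Control Toolbox). Plant form and parameter ranges are made up.
s = tf('s');
m = ureal('m', 0.5, 'Percentage', 20);   % mass varies +/- 20% unit to unit
b = ureal('b', 0.2, 'Percentage', 30);   % drag varies +/- 30%
P = tf(1, [m b 0]);                      % uncertain plant: 1/(m*s^2 + b*s)

C = pidtune(P.NominalValue, 'PID', 2);   % tune against the nominal model
T = feedback(C*P, 1);                    % uncertain closed loop

% Is the closed loop stable for every plant in the uncertainty range?
stabmarg = robstab(T);
stabmarg.LowerBound                      % > 1 means robustly stable
step(usample(T, 10)), grid on            % responses of 10 sampled drones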
In the classical sense, loop shaping is a robust control method. We can set a specified gain or phase margin and then shape the loop by adding gains and poles and zeros until the design meets those requirements. Classical loop shaping becomes difficult, however, for multi-input, multi-output systems, or for systems that have uncertainty that can’t be bounded with a simple gain or phase margin, or for systems that are highly nonlinear. Therefore, there are other robust control approaches to handle different situations: things like H-infinity loop shaping, mu synthesis, and so on; there are a bunch of methods.
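To give just a flavor of one of those, here is a minimal H-infinity loop-shaping sketch with loopsyn from Robust Control Toolbox; the plant and the target loop shape are placeholders, not anything from the video:

% Minimal H-infinity loop-shaping sketch (requires Robust Control Toolbox).
s  = tf('s');
G  = 1/(0.5*s^2 + 0.2*s);        % placeholder plant model
Gd = 2/s;                        % desired loop shape: integrator, 2 rad/s crossover

[K, CL, gam] = loopsyn(G, Gd);   % K makes G*K approximate Gd with robustness
margin(G*K)                      % check the margins the shaped loop achieves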
The point of all of this, though, of all of robust control theory, is to address uncertainty. And the major steps involved in robust control are to 1) understand uncertainty in your system and represent it in your model, 2) analyze your system to see how robust it is to these uncertainties, and 3) if the system is not sufficiently robust, then make changes to the system so that it is. And loosely speaking, that’s what we’re going to cover over the next three videos, so I hope you are looking forward to it.
If you don’t want to miss those or any other future Tech Talk videos, don’t forget to subscribe to this channel. And if you want to check out my channel, Control System Lectures, I cover more control theory topics there as well. Thanks for watching, and I’ll see you next time.