E. Scott Meadows, Ph.D., University of Texas at Austin, 1994
Thesis Abstract |
This dissertation contains a record of work in four related areas: stability of model predictive control, continuity of model predictive control feedback laws and objective functions, resolution of some implementation issues for model predictive control using linear models, and the use of model predictive control for stochastic systems. It makes connections between some well-known results for linear-quadratic optimal control, which may be viewed as an MPC method, and dynamic programming.
Sufficient conditions for stability are provided for general nonlinear systems. They include non-negativity of the objective and continuity at the origin. Stability is obtained through a Lyapunov stability argument using the MPC objective as a Lyapunov function. A matrix rank condition on the constraint set that ensures this continuity is also provided.
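The Lyapunov argument can be illustrated in the unconstrained linear-quadratic case, where the finite-horizon MPC objective is quadratic and can be computed by a Riccati recursion. The following is a minimal sketch only; the system matrices, weights, and horizon are illustrative choices, not taken from the dissertation:

```python
import numpy as np

# Illustrative discrete-time double integrator x+ = A x + B u (not from the dissertation).
A = np.array([[1.0, 0.1], [0.0, 1.0]])
B = np.array([[0.0], [0.1]])
Q = np.eye(2)          # state weight
R = np.array([[1.0]])  # input weight
N = 50                 # prediction horizon

def riccati(A, B, Q, R, N):
    """Backward recursion for the finite-horizon LQ problem; returns the
    optimal-cost matrix P_N and the first-stage feedback gain."""
    P = Q.copy()
    K = None
    for _ in range(N):
        K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
        P = Q + A.T @ P @ A - A.T @ P @ B @ K
    return P, K

P, K = riccati(A, B, Q, R, N)

def V(x):
    """MPC objective (optimal finite-horizon cost) used as Lyapunov candidate."""
    return float(x.T @ P @ x)

# Simulate the closed loop under the MPC law u = -K x and record the
# objective along the trajectory; it should decrease at every step.
x = np.array([[1.0], [1.0]])
values = []
for _ in range(30):
    values.append(V(x))
    x = A @ x - B @ (K @ x)
```

Here `values` is monotonically decreasing along the closed-loop trajectory, which is the numerical counterpart of using the MPC objective as a Lyapunov function.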
Continuity of the feedback control law derived using model predictive control, and of the corresponding objective function, can have important consequences for the stability and performance of the closed-loop system. Through an unusual example, this dissertation investigates continuity and provides a sufficient condition ensuring that the objective function and feedback control law are continuous with respect to the state.
Some of the results reported here concerning implementation of linear model predictive control are based on previous work by Rawlings and Muske at the University of Texas. The issues discussed herein are technical issues important for applications, including the replacement of a state stability constraint in the original proposal by one that is better suited for numerical implementation, and the replacement of an infinite series of state constraints with an equivalent finite set.
This work also demonstrates that analysis methods from dynamic programming can be used to analyze the model predictive control algorithm and subsume many standard results into a more general and comprehensive theory. This connection has not been explicitly stated in the literature to date and remains a rich topic available for future research.
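One concrete instance of this connection is the linear-quadratic case, where the Riccati recursion is exactly the dynamic-programming (value-iteration) operator, and the finite-horizon MPC value functions converge to the infinite-horizon optimal value function. A minimal numerical sketch, with illustrative matrices not taken from the dissertation:

```python
import numpy as np

# Illustrative system (not from the dissertation).
A = np.array([[1.0, 0.1], [0.0, 1.0]])
B = np.array([[0.0], [0.1]])
Q = np.eye(2)
R = np.array([[1.0]])

def bellman_step(P):
    """One application of the dynamic-programming operator for the LQ problem:
    P' = Q + A'PA - A'PB (R + B'PB)^{-1} B'PA (the Riccati recursion)."""
    K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
    return Q + A.T @ P @ A - A.T @ P @ B @ K

# Value iteration: the N-horizon MPC value functions x'P_N x converge
# to the infinite-horizon value function as N grows.
P = Q.copy()
for _ in range(500):
    P = bellman_step(P)

# At the fixed point, P satisfies the Bellman (discrete Riccati) equation,
# so the limiting MPC value function inherits the DP optimality properties.
residual = float(np.max(np.abs(bellman_step(P) - P)))
```

The vanishing `residual` shows that the iterated MPC value matrix is a fixed point of the Bellman operator, which is the sense in which dynamic programming subsumes the finite-horizon MPC analysis in this special case.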
Two results concerning stochastic or perturbed systems are presented. The first provides conditions under which an asymptotically stable control method can retain its stabilizing ability in the presence of perturbations arising from an exponentially stable state observer. The second examines the performance and demonstrates the suboptimality of model predictive control when applied to certain stochastic systems.
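The first kind of result can be illustrated in the linear case, where a stabilizing state-feedback law driven by an exponentially stable observer still stabilizes the true state (the separation-principle situation). The matrices and gains below are illustrative assumptions, not the conditions developed in the dissertation:

```python
import numpy as np

# Illustrative system with measured position only (not from the dissertation).
A = np.array([[1.0, 0.1], [0.0, 1.0]])
B = np.array([[0.0], [0.1]])
C = np.array([[1.0, 0.0]])
Q, R = np.eye(2), np.array([[1.0]])

def lqr_gain(A, B, Q, R, iters=500):
    """Converged Riccati recursion; returns a stabilizing feedback gain."""
    P = Q.copy()
    for _ in range(iters):
        K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
        P = Q + A.T @ P @ A - A.T @ P @ B @ K
    return np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)

K = lqr_gain(A, B, Q, R)          # stabilizing state-feedback gain
L = lqr_gain(A.T, C.T, Q, R).T    # exponentially stable observer gain (by duality)

# Both the control loop and the estimation-error dynamics are stable.
rho_ctrl = max(abs(np.linalg.eigvals(A - B @ K)))
rho_obs = max(abs(np.linalg.eigvals(A - L @ C)))

x = np.array([[1.0], [-1.0]])     # true state
xh = np.zeros((2, 1))             # observer estimate, deliberately wrong at start
for _ in range(300):
    u = -K @ xh                   # control uses the estimate, not the true state
    y = C @ x
    xh = A @ xh + B @ u + L @ (y - C @ xh)
    x = A @ x + B @ u
```

Both the true state and the estimation error converge to zero even though the controller never sees the true state, illustrating stability retained under observer-induced perturbations.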
University of Wisconsin
Department of Chemical Engineering
Madison, WI 53706