Introduction to the Mathematical Theory of Control
This book is devoted to the mathematical theory of control. It provides a self-contained introduction to the topic and also includes some topics of current research. The main aim of the book is the study of the control system x'(t) = f(t, x(t), u(t)), u(t) ∈ U, on a time interval (0, T), where x(t) ∈ R^n is the controlled state and U ⊂ R^m is the control set; the admissible controls are functions u taking values in U. Basic properties of the control system are described in Chapter 3. Considering, in addition to the equation above, a cost functional depending on the terminal value x(T) and/or on the whole trajectory x, the existence of an optimal control is studied in Chapter 5. Chapters 6-8 are devoted to a discussion of necessary and sufficient conditions for optimality. In Chapter 4, asymptotic stabilization of the system by a stabilizing feedback is studied. This topic is developed further in Chapter 9, where the new tool of patchy feedbacks (which may be discontinuous) is introduced. Chapter 10 deals with impulsive control systems, whose dynamics also depend on the derivative of the control u.
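The control system x'(t) = f(t, x(t), u(t)) with a cost depending on the terminal value x(T) can be illustrated by a short numerical sketch. The dynamics f (a double integrator), the control set U = [-1, 1], the constant admissible control, and the Mayer-type terminal cost below are illustrative choices made for this sketch, not examples taken from the book; the integration scheme is plain forward Euler.

```python
import numpy as np

def f(t, x, u):
    # Illustrative dynamics (double integrator): x1' = x2, x2' = u.
    A = np.array([[0.0, 1.0], [0.0, 0.0]])
    B = np.array([0.0, 1.0])
    return A @ x + B * u

def simulate(u_of_t, x0, T=1.0, n_steps=1000):
    """Forward-Euler integration of x'(t) = f(t, x(t), u(t)) on [0, T]."""
    dt = T / n_steps
    x = np.array(x0, dtype=float)
    for k in range(n_steps):
        t = k * dt
        x = x + dt * f(t, x, u_of_t(t))
    return x  # approximation of the terminal state x(T)

def terminal_cost(xT):
    # Mayer-type cost functional depending only on the terminal value x(T).
    return float(np.dot(xT, xT))

# An admissible control: the constant function u(t) = -1, with values in U = [-1, 1].
xT = simulate(lambda t: -1.0, x0=[1.0, 1.0])
J = terminal_cost(xT)
```

For this control and initial state the exact solution is x2(t) = 1 - t and x1(t) = 1 + t - t^2/2, so x(T) is close to (1.5, 0); an optimal-control problem would minimize J over all admissible controls u.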
The necessary background can be found in the appendix and in Chapter 2, where the basics of the theory of ordinary differential equations are explained, with special focus on issues arising in the theory of optimal control. The book is well organised. Where possible, simplified problems are presented first in order to explain the main ideas. Proofs of theorems are nicely structured and clearly divided into steps. The book also contains many figures and examples that help the reader understand the subject. Each chapter contains several homework problems. The book can serve students of science and engineering as an introduction to the theory of nonlinear control and as an overview of basic techniques and results in the field. It can be used as the basis for a course on control theory at the beginning graduate level.