Introduction à l'optimisation différentiable
Optimization theory on finite-dimensional linear spaces involving smooth (i.e., differentiable) cost functions, possibly together with functional constraints, is a fundamental discipline for solving applied optimization problems with a finite number of design variables and a finite number of constraints. After introducing the general area and basic concepts in part 1, the textbook lucidly presents all major aspects of this theory. Part 2 deals with necessary and sufficient optimality conditions for problems with and without constraints, the Lagrange multiplier method, the Karush-Kuhn-Tucker conditions, and sensitivity analysis; special attention is paid to linear and quadratic programming. Part 3 presents the Newton and quasi-Newton methods for solving systems of nonlinear equations. Part 4 then treats unconstrained optimization: first the quadratic case, solved by a direct method or the conjugate-gradient method, and then the general case, via the Newton method applied to the optimality conditions, descent methods with line-search strategies, trust-region methods, and quasi-Newton methods, notably the Broyden-Fletcher-Goldfarb-Shanno (BFGS) method.
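To give a flavor of the root-finding schemes surveyed in part 3, the following is a minimal sketch, not taken from the book, of Newton's method for a single nonlinear equation f(x) = 0; the function name, tolerance, and iteration cap are illustrative choices.

```python
# Illustrative sketch (not from the book): Newton's method for one
# nonlinear equation f(x) = 0, iterating x <- x - f(x)/f'(x).

def newton(f, df, x0, tol=1e-12, max_iter=50):
    """Return an approximate root of f, starting from x0."""
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:      # residual small enough: done
            break
        x -= fx / df(x)        # Newton update
    return x

# Example: solve x^2 - 2 = 0, i.e. approximate sqrt(2).
root = newton(lambda x: x * x - 2.0, lambda x: 2.0 * x, x0=1.0)
print(root)  # ≈ 1.4142135623730951
```

The quadratic convergence visible here (a few iterations suffice from a reasonable starting point) is what the book's later quasi-Newton material seeks to retain while avoiding exact derivative computations.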
Finally, part 5 addresses constrained problems: the simplex method for linear programming, the Newton method with projected gradients, interior-point methods, the augmented-Lagrangian method, and sequential quadratic programming. The textbook includes many numerical examples illustrating the efficiency of the expounded methods and algorithms, and it is enlivened by attractive biographical sketches and photographs of mathematicians who have contributed substantially to the field. It closes with appendices listing definitions, theorems, examples, and algorithms, and with an index. All this makes it an excellent textbook for French-reading students, and also a useful reference for researchers interested in optimization theory.
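In the spirit of the projection ideas treated in part 5, here is a minimal sketch, not drawn from the book, of a projected-gradient iteration for minimizing a smooth function over simple box constraints; the step size, iteration count, and test problem are illustrative assumptions.

```python
# Illustrative sketch (not from the book): projected gradient for
# min f(x) subject to box constraints lo <= x <= hi. Each step
# moves along the negative gradient, then projects back onto the box.

def projected_gradient(grad, x0, lo, hi, step=0.1, iters=200):
    x = list(x0)
    for _ in range(iters):
        g = grad(x)
        # gradient step followed by componentwise projection onto [lo, hi]
        x = [min(max(xi - step * gi, li), ui)
             for xi, gi, li, ui in zip(x, g, lo, hi)]
    return x

# Example: min (x0 - 2)^2 + (x1 + 1)^2 on the box [0, 1]^2.
# The unconstrained minimiser (2, -1) lies outside the box; the
# constrained minimiser is the projected point (1, 0).
grad = lambda x: [2 * (x[0] - 2.0), 2 * (x[1] + 1.0)]
sol = projected_gradient(grad, [0.5, 0.5], [0.0, 0.0], [1.0, 1.0])
print(sol)  # converges to the box-constrained minimiser [1.0, 0.0]
```

The projection here is trivial (clipping to a box); the book's treatment covers the more general settings, where more sophisticated machinery such as interior-point or augmented-Lagrangian methods becomes necessary.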