This book presents a collection of recent concepts and results that lead to algorithms for solving optimal control problems in the framework of dynamic programming. In concrete examples, the dynamic programming approach consists of three main steps: finding candidate optimal trajectories, computing the corresponding value function, and applying an appropriate verification procedure to decide whether the trajectories are optimal. Following this programme, the author presents several verification theorems in Chapter 3. The preceding Chapter 2 contains auxiliary results, often of independent interest (e.g., on the monotonicity of real functions). Chapter 4 introduces tools for computing (describing, characterizing) fields of extremals to which a verification theorem can be applied. The theoretical results of Chapters 3 and 4 are summarized in Chapter 5 and then applied in the last chapter to famous examples: the brachistochrone, minimal surfaces of revolution, the soft landing problem, the rotating radar antenna, and the minimal-time problem for linear systems. Many of the results presented in the book are based on the author's own work published over the last thirty years. The presentation assumes that the reader is familiar with the principles of the classical calculus of variations and of non-smooth analysis. The book will be of interest to graduate students and researchers in theoretical and applied control theory.
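
The three-step programme described above can be sketched on a toy problem. The following is a minimal illustration (not taken from the book, and with all names hypothetical): a discrete minimal-time problem in which the state x_{k+1} = x_k + u, with u in {-1, 0, 1}, must be driven to the target x = 0 in as few steps as possible. Value iteration computes the value function, and a simple verification test compares a candidate trajectory's cost against it.

```python
def value_iteration(states, controls, target, max_iter=100):
    """Step 2: compute the value function V by dynamic programming
    (value iteration) for the toy minimal-time problem."""
    INF = float("inf")
    V = {x: (0 if x == target else INF) for x in states}
    for _ in range(max_iter):
        changed = False
        for x in V:
            if x == target:
                continue
            # Bellman update: one step of cost 1, then continue optimally.
            best = min(1 + V[x + u] for u in controls if (x + u) in V)
            if best < V[x]:
                V[x] = best
                changed = True
        if not changed:  # fixed point reached
            break
    return V

def verify(trajectory, V):
    """Step 3: a toy verification test -- the candidate trajectory is
    optimal iff its step count equals the value function at its start."""
    return trajectory[-1] == 0 and len(trajectory) - 1 == V[trajectory[0]]

V = value_iteration(states=range(-5, 6), controls=(-1, 0, 1), target=0)

# Step 1: a candidate trajectory from x = 3, steering with u = -1.
candidate = [3, 2, 1, 0]
print(V[3], verify(candidate, V))  # prints "3 True": the candidate is optimal
```

In this caricature the "field of extremals" collapses to the family of candidate trajectories, and verification reduces to an equality check against V; the book's verification theorems play this role in the genuinely infinite-dimensional, non-smooth setting.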