Numerical methods in matrix computations

The expansion of computational linear algebra has been enormous since linear algebra was first combined with computational methods in the early days of computers, in the mid-twentieth century, only a few decades ago. It has grown into an important research area with software packages used worldwide. Linear algebra is at the core of many computational problems in practically all applied engineering domains: differential equations, optimization, systems theory, physics, chemistry, mechanics, etc. You name it, and eventually there will be some linear algebra problem underneath.

But problems grow in size and hardware evolves. This means that both theory and software need regular updates, and both are in constant evolution. Even in its current state of maturity, the field remains an active research domain. Many people have worked to bring it to its current level, and Åke Björck is one of those who have contributed considerably. His book Numerical Methods, co-authored with Germund Dahlquist in 1974, covers a much broader range of numerical problems. The first volume of its thorough revision was published by SIAM in 2008 as Numerical Methods in Scientific Computing. As he states in the introduction of the present book, when he started revising his original text around 2000, it turned out that the linear algebra alone needed many more pages, and now, fifteen years later, this has resulted in a comprehensive survey of over 700 pages.

The author keeps a delicate balance between theory, practice, and software. There are of course many results, often stated as theorems. Many of them are proved in this book, but for those requiring a more extensive or complex proof, the reader is referred to the literature. To that end, each chapter has an extensive list of historical and recent references for further reading. There are also some algorithms, sometimes formulated in pseudo-code, sometimes as ready-made MATLAB source (if it is not too long), clearly stripped down to their bare minimum. Of course MATLAB (from `matrix laboratory') was designed as, and still is, a basic tool for researchers in the field. No wonder the notation of the book is influenced by MATLAB syntax. No extensive numerical results are included, apart from some graphs that show convergence behavior, the distribution of eigenvalues, or the structure of a matrix without further details. Neither are the abundantly available practical (engineering or more general numerical) applications described; that is where his other books on scientific computing should be consulted. These restrictions are also reflected in the kind of exercises that are provided (no solutions given): they are of a more theoretical nature (prove that ...) or only moderately numerical (no complicated programming required).

It is hard to detail everything that is and is not covered, but let me try to give a survey. There are four chapters: (1) direct methods for linear systems, (2) linear least squares problems, (3) eigenvalue problems, and (4) iterative methods. The first chapter also introduces the basic definitions, error analysis and floating-point computation, and the BLAS (Basic Linear Algebra Subprograms) package. The fundamental algorithms are Gaussian elimination (LU factorization) and its Hermitian variant (Cholesky factorization). Banded, sparse, and structured matrices form special cases. Sparse matrices are of course more naturally the subject of Chapter 4 on iterative methods, and for the many structured matrices only a brief survey is given.
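To give a flavor of the kind of algorithm treated in the first chapter, here is a minimal NumPy sketch of Gaussian elimination as an LU factorization. It is my own illustration, not code from the book, and it omits the pivoting that any practical implementation (and the book's treatment) requires.

```python
import numpy as np

def lu_no_pivot(A):
    """Plain Gaussian elimination without pivoting: returns L, U with A = L @ U.
    For illustration only; library routines use partial pivoting for stability."""
    U = A.astype(float).copy()
    n = U.shape[0]
    L = np.eye(n)
    for k in range(n - 1):
        for i in range(k + 1, n):
            L[i, k] = U[i, k] / U[k, k]   # elimination multiplier
            U[i, k:] -= L[i, k] * U[k, k:]  # zero out entry below the pivot
    return L, np.triu(U)

A = np.array([[4.0, 3.0],
              [6.0, 3.0]])
L, U = lu_no_pivot(A)  # L @ U reconstructs A
```

Once L and U are available, a linear system A x = b is solved by two cheap triangular solves, which is the point of the factorization view the book adopts.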

The second chapter, on least squares methods, is somewhat more extensive. The author has published another book on this topic: Numerical Methods for Least Squares Problems (SIAM, 1996). This is the chapter that introduces the singular value decomposition (SVD), Gram-Schmidt orthogonalization (QR factorization), orthogonal Givens and Householder transformations, and all the variations of the least squares principle (total least squares, etc.). Here too we find structured problems and special cases. There is even a section on regularization and nonlinear least squares problems.
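The QR route to least squares mentioned above can be sketched in a few lines of NumPy (whose `qr` applies Householder reflections internally); again this is my own illustration under the assumption of a full-rank overdetermined system, not the book's code.

```python
import numpy as np

# Solve min ||A x - b||_2 via a QR factorization: with A = Q R and Q having
# orthonormal columns, the residual is minimized by solving R x = Q^T b.
A = np.array([[1.0, 1.0],
              [1.0, 2.0],
              [1.0, 3.0]])          # fit a line c0 + c1*t at t = 1, 2, 3
b = np.array([1.0, 2.0, 2.0])

Q, R = np.linalg.qr(A)              # reduced factorization, R is 2x2 upper triangular
x = np.linalg.solve(R, Q.T @ b)     # back-substitution step
```

Solving via QR avoids forming the normal equations A^T A x = A^T b, which squares the condition number; this numerical advantage is a recurring theme in least squares texts.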

The chapter on eigenvalue problems again needs to introduce some further definitions and basic material such as perturbation theory. The basic algorithms are described: the power method, the LR and QR methods, etc. Generalized eigenvalue problems and functions of matrices get their own subsections. The trailing section deals with nonnegative matrices, with Perron-Frobenius theory and finite Markov chains as applications.
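The power method, the simplest of the eigenvalue algorithms listed above, can be sketched as follows; this is my own minimal example, assuming a symmetric matrix whose largest eigenvalue is simple and well separated.

```python
import numpy as np

def power_method(A, iters=200):
    """Power iteration: repeatedly apply A and normalize; the iterate
    converges to the dominant eigenvector under the separation assumption."""
    v = np.ones(A.shape[0])
    for _ in range(iters):
        w = A @ v
        v = w / np.linalg.norm(w)
    return v @ A @ v, v             # Rayleigh quotient and eigenvector estimate

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])          # eigenvalues 3 and 1
lam, v = power_method(A)
```

The QR method treated in the same chapter can be viewed as a far more sophisticated descendant of this iteration, applied to a whole basis at once.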

Iterative methods are clearly the most appropriate for the large and often sparse problems arising in modern applications. Again we find here the most important basic algorithms, such as Krylov subspace methods, the conjugate gradient method (CG) and BiCG, and preconditioning. These are applied to linear systems of equations, least squares problems, and eigenvalue problems.
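As a taste of the Krylov methods of this chapter, here is a textbook conjugate gradient iteration in NumPy. It is a bare sketch of my own, assuming a symmetric positive definite matrix and omitting the preconditioning that makes the method practical on hard problems.

```python
import numpy as np

def cg(A, b, tol=1e-10, maxit=100):
    """Unpreconditioned conjugate gradient for symmetric positive definite A.
    Each step minimizes the A-norm of the error over a growing Krylov subspace."""
    x = np.zeros_like(b)
    r = b - A @ x                   # residual
    p = r.copy()                    # search direction
    rs = r @ r
    for _ in range(maxit):
        Ap = A @ p
        alpha = rs / (p @ Ap)       # optimal step length along p
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p   # new direction, A-conjugate to the old ones
        rs = rs_new
    return x

A = np.array([[4.0, 1.0],
              [1.0, 3.0]])          # symmetric positive definite
b = np.array([1.0, 2.0])
x = cg(A, b)
```

Note that CG only touches A through matrix-vector products, which is exactly why Krylov methods suit the large sparse systems the review mentions.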

This is a fundamental work presenting the basic ideas and methods of modern computational linear algebra. Of course, in such a vast domain it is impossible to be complete in all aspects, but many ideas are mentioned and many references are given. It will therefore be an excellent reference work; parts can be used to teach a course on the subject, or an independent reader can use it to get a grasp of the topic, i.e., to learn something of the background, not just as a practical hands-on programming course. More advanced researchers will probably find it too basic, but it can be very useful as a survey.

Adhemar Bultheel
Book details

This is an up-to-date inventory of basic computational linear algebra. It treats linear systems, eigenvalue and least squares problems solved with direct as well as iterative methods for different types of matrices.



978-3-319-05088-1 (hbk)
84.79 € (hbk)
