Reverse mathematics is a specialized research field in the foundations of mathematics: its central problem is to find out which axioms are needed to prove specific theorems. In this book Stillwell gives an introduction to the field that should be accessible to a general mathematician.

The Euclidean system was the first axiomatic system that was systematically developed. Within this system, the parallel postulate was long considered a theorem whose proof was yet to be found. When it was finally shown that geometries can also be developed without that postulate, it became clear that it was just an extra axiom, one that has to be accepted without proof for Euclidean geometry to be worked out consistently, and that other axioms could be accepted instead, leading to different geometries. Of course everything in mathematics starts with numbers, and the step from natural numbers to integers, rationals, reals, complex numbers, etc. has an even longer history. Hilbert (1899) formulated 17 axioms intended to support both geometry and the number systems. All but two of them were geometric.

A generally accepted base system these days is denoted RCA${}_0$, which stands for *Recursive Comprehension Axiom*. It is weak in the sense that it does not allow one to prove the Bolzano-Weierstrass theorem. Adding this theorem (or an equivalent one) as an extra axiom makes the system stronger. What is needed is an axiom asserting the existence of a set. Such a system also requires arithmetic, which is characterized by Peano arithmetic (PA). The purpose of this book is to explain what this RCA${}_0$ system really is, why it is 'weak', and why stronger systems are needed to prove some basic theorems of analysis. It is also made clear that the term 'Recursive', which may seem to refer to the induction principle that is used, does not really capture the proper concept. Since that induction principle allows one to define (enumerable) computations, in a more modern interpretation 'Recursive' should be considered a synonym of 'Computable'.

So it all starts with the natural numbers and the induction principle. The natural numbers are used to define the rational numbers, which are again enumerable. For the real numbers, however, Dedekind needed the concept of a least upper bound, which lies beyond enumerability. Once real numbers are available and an ordering is defined, one can introduce limits of sequences and continuity of functions. Thus looking for the foundations of the natural numbers and their arithmetic is an obvious starting point. Their definition involves an initial number 0 and a successor operator, which is the core of the idea of induction. Addition and multiplication can then be introduced, and we have arithmetic as well. Note however that the natural numbers are considered as an open-ended set, meaning that infinity is only a potential infinity, not an actual one. To define the real numbers one needs to accept the existence of an infinite set as a mathematical object.
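As a small illustration of this inductive build-up, here is a Python sketch (the nested-tuple encoding of numerals is my own ad hoc choice, not the book's): addition and multiplication are defined by recursion on the successor structure, exactly as in Peano arithmetic.

```python
# Peano-style naturals: an initial number 0 and a successor operation,
# with addition and multiplication defined by induction (primitive recursion).

ZERO = ()                 # the initial number 0

def succ(n):
    return (n,)           # S(n): wrap n one level deeper

def add(m, n):
    # m + 0 = m ;  m + S(k) = S(m + k)
    return m if n == ZERO else succ(add(m, n[0]))

def mul(m, n):
    # m * 0 = 0 ;  m * S(k) = (m * k) + m
    return ZERO if n == ZERO else add(mul(m, n[0]), m)

def to_int(n):
    # decode a numeral back to a Python int, for inspection only
    return 0 if n == ZERO else 1 + to_int(n[0])

two = succ(succ(ZERO))
three = succ(two)
```

With these definitions, `to_int(add(two, three))` evaluates to 5 and `to_int(mul(two, three))` to 6.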

This arithmetic of the natural numbers started an arithmetization of mathematics and logic. A language was developed which defined all allowable meaningful strings. The *Principia Mathematica* was one of the summits of this idea of formalizing logic, and therefore also mathematics. By $\Sigma_1^0$ one denotes the class of logical formulas expressing that some element $u$ has property $\alpha$ (i.e. belongs to some set) if and only if there is some $z$ such that $\psi(u,z)$ holds, with $\psi$ a quantifier-free formula. The 1 refers to the single $\exists$ quantifier at the beginning and the 0 to the quantifier-free expression $\psi$. (If the formula starts with a $\forall$ instead of an $\exists$, it is called a $\Pi_1^0$ formula.) Arithmetization has its limitations though. Real numbers cannot be expressed in terms of integers, or as Cantor formulated it: there is no one-to-one correspondence between a set and the set of its subsets, which he proved using his well-known diagonal argument. Sets are only definable if they are computable in the arithmetic based on natural numbers or, what is basically equivalent, if they can be encoded in the allowed language, which is restricted by enumerability.
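Cantor's diagonal argument can be illustrated on a finite table (a toy Python sketch; the names are mine): flipping the diagonal yields a 0/1 row that differs from every listed row, so no enumeration of such sequences can be exhaustive.

```python
# Diagonal argument in miniature: given any list of 0/1 rows, build a row
# that differs from row i at position i, hence from every row in the list.

def diagonal_complement(table):
    # flip the diagonal entry table[i][i] for each i
    return [1 - table[i][i] for i in range(len(table))]

table = [
    [0, 1, 0, 1],
    [1, 1, 1, 1],
    [0, 0, 0, 0],
    [1, 0, 1, 0],
]
new_row = diagonal_complement(table)  # differs from each row of the table
```

The same construction, applied to any alleged enumeration of all infinite 0/1 sequences, produces a sequence missing from the enumeration.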

Classical analysis, however, needs sets to define limits and continuity, and to prove all the basic theorems such as the intermediate value theorem, the fundamental theorem of algebra, the Bolzano-Weierstrass theorem, the Heine-Borel theorem, and the extreme value theorem, and to study uniform continuity and Riemann integrals. Thus we should look at computability and allowable strings. This is a matter of language: encoding objects, and encoding the algorithms that manipulate these objects, needs a syntax. To analyse and interpret such encodings, graphs, and in particular trees, also become important tools. The Cantor set is a typical example of an object that is most easily described with a tree. In this context König's infinity lemma about graphs is another important result, not only for graph theory but also for analysis. It says that if a finitely branching tree has infinitely many vertices, then it has an infinite path. Its proof resembles the proofs of the Bolzano-Weierstrass and Heine-Borel theorems, which rely on the construction of an infinite sequence of nested intervals. All these proofs incorporate an enumerable construction whose outcome is a limit.
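The nested-interval construction behind these proofs can be sketched in Python (an illustrative fragment, not taken from the book; for a finite sample of a bounded sequence, the "half with infinitely many terms" of the real proof is replaced by the half containing the most terms):

```python
# Bisection sketch of the Bolzano-Weierstrass construction: repeatedly halve
# [lo, hi], keeping a half that contains the majority of the sampled terms.
# The intervals shrink toward an accumulation point of the sequence.

def nested_intervals(seq, lo, hi, steps):
    points = [x for x in seq if lo <= x <= hi]
    for _ in range(steps):
        mid = (lo + hi) / 2
        left = [x for x in points if x <= mid]
        right = [x for x in points if x > mid]
        if len(left) >= len(right):   # keep the fuller half (ties go left)
            hi, points = mid, left
        else:
            lo, points = mid, right
    return lo, hi

# a bounded sequence accumulating at -1 and 1
seq = [(-1) ** n + 1 / (n + 1) for n in range(1000)]
lo, hi = nested_intervals(seq, -2.0, 2.0, 30)
```

After 30 bisections the interval has width $4 \cdot 2^{-30}$ and hugs one of the accumulation points.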

Thus we are forced to look at computability. That field originated in the early twentieth century and was further developed with the study of algorithms and digital computers. One may distinguish two types of algorithms: one that transforms numbers into numbers (hence defining a computable function), and one that takes a set as input and answers yes or no to any question that can be asked about what is in that set (hence defining a computable set). Church and Turing formalized algorithms so that they can be encoded and numbered, which means that we can construct sequences of algorithms and see whether they lead anywhere. The problem of a set not being in one-to-one correspondence with the set of its subsets is reflected in the impossibility of answering self-referential questions: is the set of all "sets that are not members of themselves" an element of itself? Similarly, in the halting problem an algorithm has to answer a question about itself. An algorithm that can identify in finite time any algorithm that will not stop in finite time does not exist because, when applied to itself, it leads to a contradiction. Computably enumerable sets can be defined, though, via a computably enumerable sequence. However, there exist computably enumerable sets that are not computable, as Post has proved using self-reference of sets. This self-reference is also the basic idea underlying the incompleteness theorems of Gödel.
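The self-referential argument for the halting problem can be mimicked in a few lines of Python (a sketch; `candidate_halts` is a hypothetical predictor of my own, standing in for the alleged halting decider):

```python
# Halting-problem diagonalization, sketched. Suppose candidate_halts(f, x)
# were a total, always-correct predictor of whether f(x) halts. The program
# diag does the opposite of the prediction about itself, so the prediction
# for diag(diag) is necessarily wrong, whichever answer it gives.

def candidate_halts(f, x):
    # a hypothetical total predictor (any fixed total predictor fails somewhere)
    return True

def diag(f):
    if candidate_halts(f, f):
        while True:        # predicted to halt -> loop forever instead
            pass
    return "halted"        # predicted to loop -> halt immediately

# If the prediction for diag(diag) is True, diag(diag) loops forever;
# if it is False, diag(diag) halts. Either way the predictor is refuted.
prediction = candidate_halts(diag, diag)
```

Here `prediction` is `True`, so actually running `diag(diag)` would loop forever, contradicting the prediction; replacing `True` by any other total rule just moves the failure elsewhere.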

In this light, enumerable computability is the basic idea of mathematical constructivism. One could say that $\Sigma_1^0$ is the formal system that incorporates this idea in the language of PA. Stillwell however prefers to switch to a simpler formulation, proposed by Smullyan, which he calls an *Elementary Formal System* (EFS). In this system computably enumerable sets are available. He gives the formal definition of a Turing machine and shows that EFS can represent all the $\Sigma_1^0$ relations of PA. The resulting system is thus RCA${}_0$, but since there are still many noncomputable sets, theorems involving such sets remain beyond the scope of RCA${}_0$.
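To make the notion of a Turing machine concrete, here is a minimal simulator in Python (an illustrative sketch; the transition-table format and the bit-flipping example machine are my own assumptions, not Smullyan's or Stillwell's formalism):

```python
# A minimal Turing-machine simulator. The machine is given by a transition
# table delta mapping (state, symbol) to (symbol_to_write, move, next_state),
# with "_" as the blank symbol.

def run_tm(delta, tape, state="q0", head=0, max_steps=1000):
    cells = dict(enumerate(tape))            # sparse tape
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = cells.get(head, "_")
        write, move, state = delta[(state, symbol)]
        cells[head] = write
        head += 1 if move == "R" else -1
    return "".join(cells[i] for i in sorted(cells)).strip("_")

# example machine: flip every bit while moving right, halt at the first blank
delta = {
    ("q0", "0"): ("1", "R", "q0"),
    ("q0", "1"): ("0", "R", "q0"),
    ("q0", "_"): ("_", "R", "halt"),
}
out = run_tm(delta, "0110")   # -> "1001"
```

Encoding such tables and tapes as numbers is exactly the arithmetization that lets PA, or an EFS, talk about computations.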

A stronger system is ACA${}_0$ (referring to *Arithmetical Comprehension*), in which it is assumed that sets of elements (encodable by natural numbers) satisfying a PA-expressible property do exist. Now basic theorems of analysis become provable. In fact the additional axiom turns out to be equivalent to some of them: the Bolzano-Weierstrass theorem, the least upper bound principle, the Cauchy convergence criterion, the monotone convergence theorem, and also the König infinity lemma for branching trees. Remarkably, it is not the theorems themselves but these equivalences that are provable in the weaker RCA${}_0$ system. These equivalences are proved by Stillwell in the book, and the link with Ramsey theory is explained. In fact the infinite Ramsey theorem is beyond the scope of ACA${}_0$. In a separate chapter further equivalences are proved between the weak König lemma and the Heine-Borel, extreme value, and uniform continuity theorems. Some topological theorems like Brouwer's fixed point theorem and the Jordan curve theorem are also equivalent to it. The latter theorems sit somewhere between RCA${}_0$ and ACA${}_0$. Thus it is possible to define an intermediate system WKL${}_0$, which is RCA${}_0$ plus the weak König lemma. In fact Stillwell briefly discusses the "Big Five", referring to RCA${}_0$, WKL${}_0$, ACA${}_0$, and two others: ATR${}_0$ (*Arithmetical Transfinite Recursion*) and $\Pi_1^1$-CA${}_0$ (where $\Pi_1^1$ refers to properties $\varphi(n)$ defining sets whose formulas may contain one universal set quantifier). And yet there are still theorems that are not provable in ATR${}_0$, and not even in $\Pi_1^1$-CA${}_0$.

Of course the topics discussed above are just a brief survey, restricted to elementary analysis. There is a much bigger picture to be given at a higher level. Predicate logic is essential in the previous development, but it is kept to a minimum in this book. There are different levels of incompleteness and unsolvability that can be investigated; there is set theory, the basis for measure theory, which has its own axiom systems (Zermelo-Fraenkel with the axiom of choice); and there are many other topics that are not considered here.

This book is intended for mathematicians who are not directly involved in this subject. They should be able to follow the arguments given by Stillwell. This introduction for the non-specialist is administered step by step, in small spoonfuls. It requires the reader to clear the mind of all "obvious" assumptions. The general ideas and the kind of research developed in this domain are nicely clarified by Stillwell, but this is not a textbook on the subject. It is therefore less structured: the reader is basically discovering the ideas and problems by himself while being taken on this guided tour organized by Stillwell. Moreover, the average mathematician is used to thinking in the opposite direction, deriving new results from accepted preliminaries. Thinking in the other direction is not always easy. For those not familiar with this relatively new research into the foundations and the minimal assumptions needed to develop the massive mathematical structure, this book provides a good informal guideline. In this information age, logic, complexity, and computability are not ethereal *spielerei* of the mind but problems with concrete implications for artificial intelligence and computer science as well.