From a historical perspective, the author recalls how *paradoxes* were the impetus for leaps forward in the evolution of mathematics. The number $\sqrt{2}$ introduced the irrationals beyond the rationals, 0 was the missing link between the positive and negative numbers, $\sqrt{-1}$ introduced the complex numbers beyond the reals, $\infty$ allowed the study of divergent series, and the quaternions $\mathbb{H}$ (for Hamilton) relate to the space-time computations of relativity theory. The latter can be extended to the octonions $\mathbb{G}$ (for Graves), and this is the start of a sequence of Dickson algebras $\mathbb{R}\subset\mathbb{C}\subset\mathbb{H}\subset\mathbb{G}\subset\cdots$, or more generally a sequence $A_k=A_{k-1}\times A_{k-1}$ of algebras equipped with a recursively defined multiplication. This definition doubles the dimension in each step, so that $A_k$ has dimension $2^k$ (Dickson 1919). As $k$ increases, more and more classical properties of the multiplication are lost: a square may be negative in $A_1=\mathbb{C}$, commutativity is lost in $A_2=\mathbb{H}$, associativity is lost in $A_3=\mathbb{G}$, zero divisors occur in $A_4$ and beyond, etc. The sequence of Dickson algebras is what Chatelin calls *Numberland*, where hypercomputation takes place. Working in $\mathbb{R}$ is what Chatelin calls *thought* or one-dimensional thinking; moving to $\mathbb{C}$, this becomes *intuition* or two-dimensional thinking. Together with $\infty$ they form *Reason* $= \{\mathbb{R},\mathbb{C},\infty\}$.
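To make the doubling concrete, here is a reviewer's sketch (not taken from the book): an element of $A_k$ is a pair of $A_{k-1}$ elements, with reals at the bottom, and one standard choice of pair-product (conventions vary) already exhibits the loss of commutativity in $\mathbb{H}$ and of associativity in $\mathbb{G}$.

```python
# Cayley-Dickson doubling: an element of A_k is a pair of A_{k-1} elements.
# Product convention used: (a, b)(c, d) = (a*c - conj(d)*b, d*a + b*conj(c)).
def conj(x):
    if isinstance(x, (int, float)):
        return x
    a, b = x
    return (conj(a), neg(b))

def neg(x):
    if isinstance(x, (int, float)):
        return -x
    a, b = x
    return (neg(a), neg(b))

def add(x, y):
    if isinstance(x, (int, float)):
        return x + y
    return (add(x[0], y[0]), add(x[1], y[1]))

def mul(x, y):
    if isinstance(x, (int, float)):
        return x * y
    a, b = x
    c, d = y
    return (add(mul(a, c), neg(mul(conj(d), b))),
            add(mul(d, a), mul(b, conj(c))))

# A_1 = C: the square of i = (0, 1) is (-1, 0), i.e. a negative square.
# A_2 = H: i*j = k but j*i = -k, so commutativity is lost.
i_q, j_q, k_q = ((0, 1), (0, 0)), ((0, 0), (1, 0)), ((0, 0), (0, 1))

# A_3 = G: octonions as pairs of quaternions; associativity is lost.
zero_q, one_q = ((0, 0), (0, 0)), ((1, 0), (0, 0))
p, q, r = (i_q, zero_q), (j_q, zero_q), (zero_q, one_q)
```

With these definitions `mul(mul(p, q), r)` and `mul(p, mul(q, r))` differ, which is exactly the failure of associativity in $A_3$.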

The first chapters explore the calculus, i.e., all the computational rules in $A_k$. Leaving the strict computational conventions of familiar grounds gives the freedom to choose how to define or compute things. Thus losing properties for increasing $k$ means opening up many more possibilities. For example, classical causality is based on the ordering of $\mathbb{R}$. A new linear concept of causality or derivability is given via a particular linear map (a *derivation*) in a Dickson algebra, and the nonlinear core of a Dickson algebra is the part that is out of reach of all possible derivations.
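As an aside for the reader (the classical textbook example of a derivation, not necessarily the particular map the book uses): in an associative Dickson algebra such as $\mathbb{H}$, the commutator with a fixed element $a$, $D_a(x) = ax - xa$, is a derivation, i.e., it obeys the Leibniz rule $D(xy) = D(x)y + xD(y)$. A quick numerical check:

```python
def qmul(p, q):
    # Hamilton product on H; components are (w, x, y, z) for w + xi + yj + zk.
    w1, x1, y1, z1 = p
    w2, x2, y2, z2 = q
    return (w1*w2 - x1*x2 - y1*y2 - z1*z2,
            w1*x2 + x1*w2 + y1*z2 - z1*y2,
            w1*y2 - x1*z2 + y1*w2 + z1*x2,
            w1*z2 + x1*y2 - y1*x2 + z1*w2)

def qadd(p, q):
    return tuple(u + v for u, v in zip(p, q))

def qsub(p, q):
    return tuple(u - v for u, v in zip(p, q))

def D(a, x):
    # Inner derivation D_a(x) = a*x - x*a.
    return qsub(qmul(a, x), qmul(x, a))

a = (0, 1, 2, 3)
x = (1, 0, -1, 2)
y = (2, 1, 1, 0)
lhs = D(a, qmul(x, y))                          # D_a(x*y)
rhs = qadd(qmul(D(a, x), y), qmul(x, D(a, y)))  # D_a(x)*y + x*D_a(y)
```

Here `lhs` equals `rhs`, and $D_a$ annihilates the identity, which any derivation must do.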

The next chapter explores the norm and the singular value decomposition of the multiplication maps $L_x$ and $R_x$, i.e., left or right multiplication by an element $x\in A_k$. The different possible definitions of a norm on the Dickson algebras give rise to different geometries. Complexification of a Dickson algebra generalises $\mathbb{C}=\mathbb{R}+i\mathbb{R}$, i.e., $A_k=A_{k-1}\times 1 \oplus A_{k-1}\times\tilde{1}_k$, where $\tilde{1}_k$ is the hypercomplex unit of $A_k$. It is illustrated and related to the dynamics of Verhulst's logistic equation.
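For readers who want a concrete instance (an illustration by the reviewer, assuming the familiar Euclidean norm, not the book's general treatment): in $A_2=\mathbb{H}$ the matrix of $L_x$ in the basis $(1, i, j, k)$ is a scaled orthogonal matrix, so all four singular values equal $|x|$, a degeneracy reflecting that $\mathbb{H}$ is a normed division algebra.

```python
import numpy as np

def left_mult_matrix(q):
    """Matrix of L_q : p -> q*p on H, in the basis (1, i, j, k)."""
    w, x, y, z = q
    return np.array([
        [w, -x, -y, -z],
        [x,  w, -z,  y],
        [y,  z,  w, -x],
        [z, -y,  x,  w],
    ])

q = np.array([1.0, 2.0, -1.0, 0.5])
M = left_mult_matrix(q)
s = np.linalg.svd(M, compute_uv=False)
# All four singular values equal |q| = 2.5.
```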

The Dickson algebras have a dimension that is a power of 2. For algebras whose dimension is not a power of 2, one has to resort to addition instead of multiplication. As an application, the spectrum of the perturbed matrix $A(t)=A+tE$ is investigated for varying $t\in\mathbb{C}$.
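The kind of phenomenon at stake can be previewed with a minimal numpy experiment (the reviewer's example, not the book's): for a defective matrix, a perturbation of size $t$ can move eigenvalues by order $\sqrt{t}$ rather than $t$, a classical nonlinearity of the spectrum.

```python
import numpy as np

# A is defective: a Jordan block with double eigenvalue 2.
A = np.array([[2.0, 1.0],
              [0.0, 2.0]])
E = np.array([[0.0, 0.0],
              [1.0, 0.0]])

# Under the perturbation t*E the double eigenvalue splits as 2 +/- sqrt(t).
t = 1e-4
lam = np.sort(np.linalg.eigvals(A + t * E).real)
# lam is approximately [1.99, 2.01]: a shift of 1e-2 from a perturbation of 1e-4.
```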

When Dickson algebras are defined over the integers or over $\mathbb{Z}_r$ (in particular $r = 2$), several possible applications open up, such as number-theoretic problems, floating-point representation (the probability distribution of the first digit in the representation, known as the *Borel-Newcomb paradox*), *Sharkovskii's theorem* and the ordering of the natural numbers, etc.
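The Borel-Newcomb first-digit phenomenon (Benford's law) is easy to observe empirically; as a reviewer's sketch, the leading digits of the powers of 2, which are known to follow the logarithmic law $P(d)=\log_{10}(1+1/d)$:

```python
import math
from collections import Counter

# Leading digits of 2^n for n = 0..9999.
counts = Counter(int(str(2**n)[0]) for n in range(10000))
freq_1 = counts[1] / 10000          # empirical frequency of leading digit 1
benford_1 = math.log10(2)           # predicted probability, about 0.301
```

The empirical frequency agrees with the prediction to within one percent, far from the naive guess of $1/9$ for each digit.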

More number-theoretic applications are possible in the first four Dickson algebras mentioned above ($A_0=\mathbb{R},\ldots,A_3=\mathbb{G}$), because they are division algebras: they have no zero divisors. When these algebras are considered as rings (with addition and multiplication), number-theoretic theorems about sums of 2, 4, and 8 squares can be analysed, i.e., which natural numbers can be written as a sum of 2, 4, or 8 squares. This results in a quest for the possibilities of 8-dimensional arithmetic.
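The 2-square case can be sketched in a few lines (the reviewer's illustration): the Brahmagupta-Fibonacci identity $(a^2+b^2)(c^2+d^2)=(ac-bd)^2+(ad+bc)^2$ is nothing but the multiplicativity of the complex norm, and the quaternion and octonion norms yield the 4- and 8-square identities in the same way.

```python
def two_square_product(p, q):
    # Brahmagupta-Fibonacci identity, i.e. |z*w| = |z|*|w| in C:
    # (a^2 + b^2)(c^2 + d^2) = (a*c - b*d)^2 + (a*d + b*c)^2
    a, b = p
    c, d = q
    return (a*c - b*d, a*d + b*c)

# 5 = 1^2 + 2^2 and 13 = 2^2 + 3^2, hence 65 = 5*13 is a sum of two squares:
e, f = two_square_product((1, 2), (2, 3))   # e = -4, f = 7, and 16 + 49 = 65
```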

Besides the discrete/continuous dichotomy, there is also the real/complex dichotomy. How these different dichotomies interact in computation is illustrated in the next chapter which analyses two applications. The first is about the relativity of the concept of inclusion. Think of fuzzy sets, but also about the dynamics of chaotic systems. The second one is about Fourier analysis and complex signals.

The computation of, e.g., an SVD, which we know as a concept from linear algebra, leads to paradoxes when it is applied in the nonlinear environment of the nonassociative Dickson algebras ($k \ge 3$). Classical logic is deductive and tries to avoid any paradox (Russell, Turing). Chatelin, however, sees these paradoxes as an opportunity to leave classical deductive logic and escape to a more organic logic, that is, a logic that allows one to reason about hypercomputing. An alternative (organic) representation of complex numbers and higher-dimensional complex vectors is given, and it is illustrated how these are used in computation.

The concluding chapter is about Euler's $\eta$ function. This is explored as a tool to give weight, meaning, or depth to hypercomplex numbers, or as Chatelin calls it, *organic intelligence*.

This is a book unlike anything I have read before. The potential reader who is looking for philosophical aspects should be warned that there is hard mathematics involved, but the mathematician should be warned as well that he or she must be willing to abandon familiar grounds and follow the ideas and philosophy behind the mathematical exposition. The book is almost a paradox in itself. The reader is guided around some of the phenomena at the boundaries of Numberland, much like the experience Alice must have had when she explored Wonderland. I do not think the book will become the computational bible of the future, but as an exercise in out-of-the-box thinking it has succeeded overwhelmingly. It is far from giving a solution to all the problems posed by nonlinear computation; it does not even give a definitive solution to the most elementary partial problems. As Chatelin writes herself, the right choice among the many possible choices that can be made in higher-dimensional Dickson algebras can only be validated by experience.