In 2016 Oxford University Press started a *Landmark Science* series with inexpensive reprints of classic books, the 'must-reads', on topics that have shaped current science. Here I review two of the more mathematical/physical ones from the first release of seven books in April 2016: this book and *Hyperspace* (M. Kaku).

In the early days of artificial intelligence it was hoped that one day a computer could be built that would perfectly simulate a human mind. More than half a century later, AI scientists are still trying to write software that can understand human language in rather restricted written situations, and in even more restricted ones for voice control. The loose application of syntax, and certainly the often merely implied semantics that humans use, are still a serious obstacle for machines. Successes came only at the end of the previous century, when very powerful machines and big data enabled data mining and the interaction of computer science with other scientific domains, often with a strong mathematical component.

With this book (the original is from 1989, but this reprint includes the preface of the paperback edition of a decade later) Roger Penrose refutes the original ambitions of AI: conscious thinking cannot be modelled and programmed on a machine. To formulate precisely what he is claiming, it must first be explained what is understood by "a machine" (i.e., the software operating it) and how to decide who is right and who is wrong. In other words, what is truth, what is real, what is our world made of that we (our minds) experience? And this points to another thesis that Penrose wants to defend in this book: we still do not fully understand the link between microscopic quantum physics, the macroscopic physics that we experience, and the physics of the cosmos. Understanding the underlying physical truth and its mathematics is our only hope of understanding the conscious mind. Here too there has recently been progress, with the detection of the Higgs boson (2012) and the observation of gravitational waves (2016). So there are still good reasons to read the book 25 years after its first publication. The original publication raised some controversy, and in the 1999 preface of this reprint Penrose defends his viewpoints and refers to his follow-up book *Shadows of the Mind* (1994). Given the skeptical reactions, Penrose's viewpoint has never been very popular or generally accepted. If you read this book, you might also be interested in a more recent vision by another theoretical physicist, M. Kaku, who wrote *The Future of the Mind* (2014).

To give meaning to all the components mentioned above, Penrose had to start this book with a prequel similar to the well-known encyclopedic survey *A Short History of Nearly Everything* by B. Bryson (2003), although Penrose is somewhat more focused, certainly more technical, and far more detailed. Since he has to address scientists from several different fields, as well as the sophisticated layman, Penrose has chosen to explain most of the topics verbally, although there are still formulas (for which he almost apologizes). However, reducing the number of formulas to a minimum does not mean that the reading is easy. A mathematical readership would perhaps have preferred a few more formulas, because once properly defined, a formula can exactly and compactly represent what is meant and replace many words.

To begin with, Penrose introduces AI and discusses the ideas of Searle, a well-known critic of "strong AI". Next he introduces Turing machines and universal Turing machines (the latter accept as input not only data but also the instructions to process the data). He does so down to a painfully detailed level of bitstrings and even gives examples of programs. I do not think it is necessary to go that deep to explain an algorithm and the Halting Problem, which Turing proved undecidable. Church's lambda calculus is also described as an alternative, though in less detail.
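The key idea behind the universal machine, that a program (a transition table) is itself just data fed to an interpreter, can be sketched in a few lines of Python. The three-symbol toy machine below is purely illustrative and is not one of Penrose's bitstring encodings; the names `run` and `increment` are my own:

```python
# Minimal Turing machine interpreter: the transition table is data,
# which is the essence of Turing's universal machine.
def run(table, tape, state="start", pos=0, max_steps=1000):
    tape = dict(enumerate(tape))          # sparse tape; blank cell = "_"
    for _ in range(max_steps):
        if state == "halt":
            # Read the written tape back in positional order.
            return "".join(tape[i] for i in sorted(tape))
        symbol = tape.get(pos, "_")
        write, move, state = table[(state, symbol)]
        tape[pos] = write
        pos += 1 if move == "R" else -1
    raise RuntimeError("no halt within step limit")  # cf. the Halting Problem

# A toy machine that appends a '1' to a unary number, i.e. computes n + 1.
increment = {
    ("start", "1"): ("1", "R", "start"),  # skip over the existing 1s
    ("start", "_"): ("1", "R", "halt"),   # write one more 1, then halt
}
```

For example, `run(increment, "111")` returns `"1111"`. The crude `max_steps` cutoff hints at the deeper point: deciding in general whether an arbitrary table will ever halt is exactly the problem Turing proved undecidable.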

The next topics are the Mandelbrot set, recursive sets, the real and complex number systems, Cantor's infinite numbers, tilings of the plane, Platonism vs. intuitionism, complexity theory, etc. Of course the Gödel theorems are also discussed, and the incompleteness of formal systems, touching on the problem of defining what is provable and what is true. If something cannot be proved, this does not mean that it is not true; and if the proof of something would take an infinite time, we can never settle whether it is true or not.
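The Mandelbrot set gives a concrete taste of this tension: a point c belongs to the set if the iteration z ← z² + c stays bounded forever, which a program can only probe with a finite iteration budget. A minimal sketch in Python (the 255-iteration cutoff is an arbitrary choice of mine, not part of the definition):

```python
def mandelbrot_escape(c, max_iter=255):
    """Return the iteration at which z = z**2 + c escapes |z| > 2,
    or None if it stays bounded for max_iter steps (a finite proxy
    for 'c is in the Mandelbrot set')."""
    z = 0j
    for n in range(max_iter):
        z = z * z + c
        if abs(z) > 2:
            return n
    return None

# c = 0 never escapes (orbit 0, 0, 0, ...); c = 1 escapes quickly
# (orbit 0, 1, 2, 5, 26, ...).
```

A program can confirm escape in finite time, but "stays bounded forever" can only be falsified, never verified, by iterating longer, an echo of the Halting Problem mentioned above.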

In a second part Penrose gives an introduction to physics. He classifies theories as superb, useful, or tentative. To the superb category (basically the only ones he discusses) belong the classical views: Newtonian mechanics, Maxwell's theory, and Einstein's relativity theory, alongside the more recent quantum electrodynamics. The big-bang theory is useful, while supersymmetry, string theory, and GUT (Grand Unified Theory) are considered only tentative. Here he differs clearly from the views Kaku defends in his *Hyperspace* book. So he introduces the billiard-ball Newtonian and Hamiltonian visions of the world, Maxwell's electromagnetics, and special and general relativity. For the latter he explains the Riemann curvature tensor as the sum of a Weyl tensor (describing shape distortion) and a Ricci tensor (describing volume change), but leaves out the technical details. Because of the time shifts experienced by an observer in motion, causality and determinism, and hence computability, need revision, and the fuzzy boundary between matter and energy revives the question of what is real. The latter is even more problematic when quantum theory is introduced, blurring the boundary between particle behaviour and wave behaviour. Here he goes into some detail with Hilbert spaces of orthogonal state vectors of a probabilistic nature, Heisenberg's uncertainty principle, Schrödinger's equation, and spin. Of course we meet Schrödinger's cat and other well-known phenomena.
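The probabilistic state vectors can be illustrated with the simplest quantum system, a single spin-1/2 particle: its state is a pair of complex amplitudes, and the Born rule turns their squared moduli into measurement probabilities. A small sketch in plain Python (the function name and the equal-superposition example are my own illustrative choices, not taken from the book):

```python
import math

def born_probabilities(alpha, beta):
    """Given complex amplitudes for |up> and |down>, normalize the
    state and return the probabilities of measuring up or down."""
    norm = math.sqrt(abs(alpha) ** 2 + abs(beta) ** 2)
    alpha, beta = alpha / norm, beta / norm
    return abs(alpha) ** 2, abs(beta) ** 2

# Equal superposition (|up> + |down>)/sqrt(2): a 50/50 measurement outcome.
p_up, p_down = born_probabilities(1 + 0j, 1 + 0j)
```

However the amplitudes are chosen, the two probabilities always sum to 1, which is exactly why the state vector lives on the unit sphere of the Hilbert space.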

To explain the 'arrow of time' (although the equations are symmetric in time, we experience it in only one direction), he refers to the second law of thermodynamics: entropy does not decrease. This means that in the beginning entropy must have been extremely low. Where does that come from? An incentive to embark upon a chapter on cosmology. Penrose had already conjectured the *Weyl curvature hypothesis* in 1979. It is an alternative to cosmic inflation in the early stages of the cosmos: a Weyl tensor that vanishes at the time of the big bang and increases ever since should explain the homogeneity of matter and the increase of entropy. An explanation is hidden in gravitational radiation, which is supposedly time-asymmetric. A somewhat speculative idea.
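The statistical reading of the second law that drives this chapter can be made concrete with a toy model: particles crammed into one cell of a box hop randomly, and the Shannon entropy of their occupancy distribution (a stand-in for Boltzmann's entropy) goes up. A small, seeded Python sketch; the box size, particle count, and step counts are arbitrary choices of mine:

```python
import math
import random

def occupancy_entropy(positions, n_cells):
    """Shannon entropy (in bits) of the particle distribution over cells."""
    counts = [0] * n_cells
    for p in positions:
        counts[p] += 1
    total = len(positions)
    return -sum((c / total) * math.log2(c / total) for c in counts if c)

random.seed(0)                            # reproducible run
n_cells, n_particles = 20, 1000
positions = [0] * n_particles             # low-entropy start: all in cell 0
start_entropy = occupancy_entropy(positions, n_cells)   # exactly 0.0

for _ in range(200):                      # random walk with reflecting walls
    for i in range(n_particles):
        positions[i] = min(n_cells - 1, max(0, positions[i] + random.choice((-1, 1))))

end_entropy = occupancy_entropy(positions, n_cells)     # well above zero
```

Note that nothing in the hopping rule prefers a direction in time; the entropy increase comes purely from the special (improbable) initial state, which is precisely the puzzle Penrose pushes back to the big bang.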

So after this long excursion, in his last two chapters Penrose returns to the original problem of modelling the human mind. He first gives a biophysical description of the brain, what is known about its centres, and how it works. He does not give a precise definition of consciousness, because that seems impossible. For example, a brain seems able to register things even when the person is 'unconscious', as during an operation. An indirect characterization is that consciousness is linked to, for example, common sense, judgement of truth, understanding, and artistic appraisal, while the opposite is automatic, algorithmic behaviour. What is it in evolution that has made our brain the way it is, and what advantage does it bring to creatures capable of conscious thinking? He then applies what has been explained before and draws the conclusion that "...neither classical nor quantum mechanics [...] can ever explain the way in which we think" but "A plausible case can be made that there is a non-algorithmic ingredient to (conscious) thought processes" (p. 521). This idea is partly inspired by his experience as a mathematician and rests on Gödel's theorem. Mathematicians can know the truth of a proposition by 'insight', while Gödel's theorem says there are true propositions that cannot be proved. For example, his nonperiodic tilings and quasi-crystals do exist, and yet they are not algorithmic. He gives several other examples of scientists who, by a spark of inspiration, came up with a superb result while not 'working' on the subject following algorithmic rules. A machine will never be able to achieve this. Thus classic computers, in the sense of Turing machines, will never be able to simulate this kind of consciousness. QED. One can only hope that some massively parallel quantum computer could ever simulate such complex interaction of atoms. His hope lies in grasping and understanding quantum gravity, which would make such a monstrously complex objective possible. His statement has raised objections and has been criticized by several of his colleagues.

After the publication of the book, anesthesiologist Stuart Hameroff suggested that there is a biological analogue of quantum computing in the brain, involving microtubules within the neurons. This was taken up by Penrose, and that ingredient formed the basis of the follow-up book *Shadows of the Mind* (1994), in which the idea is further developed into the so-called Orchestrated objective reduction (Orch-OR) theory. Still today, arguments for and against the Penrose-Hameroff conjecture are being published, and the last word has not been said or written about this challenging and controversial hypothesis. All the more reason to read or re-read this book. But controversy aside, the larger part of the book is simply an introduction to a Cathedral of Science, brought to the educated layman in an extraordinary masterpiece by a brilliant mind. Even though the book has many pages, you still get the feeling that Penrose is deliberately confining himself to the essence, which makes you hungry for more. This may explain why it won the Science Book Prize in 1990.