The Outer Limits of Reason

This is a paperback edition of a book that was previously published in 2013. It is generally believed that reason distinguishes humans from other life forms. The Outer Limits of Reason explores the boundaries of what can be known by proper logical reasoning without running into a contradiction. It is only after 345 pages of exploring what is reasonably possible and what is not that Yanofsky arrives at the following definition of reason: the set of processes or methodologies that do not lead to contradiction or falsehood. This means that reason delimits what we will ever know and understand.

Exploring the boundaries of our knowledge immediately reminds one of a more recent book by Marcus Du Sautoy, What We Cannot Know (2016). The subjects discussed in both books are largely the same, and both explore the outer limits, or edges as Du Sautoy calls them, of what we can ever know and understand, but the books do reflect the backgrounds of their authors: Du Sautoy is a mathematician and Yanofsky a professor of Computer and Information Science. Both discuss logic, the foundations of mathematics, quantum physics, relativity, and cosmology. Du Sautoy pays some special attention to whether all this may point to a Creator or a God, while Yanofsky's discussions are more philosophical, and his approach is indeed more AI and computer science-like. The recurrent motto of his argument is that if an assumption implies a contradiction, then we have crossed a boundary.

Paradoxes are the tools par excellence that lead to contradictions, so that is where this book starts. You find all the classics, like the liar who says he is lying and Russell's barber paradox, where a barber claims to shave everyone in the village who does not shave himself, and all the other paradoxes that result from self-reference, such as the first uninteresting number, which by definition negates its own existence. Further contradictions arise in connection with the ship of Theseus, Zeno's paradoxes, and time travel, while the Monty Hall problem is just counter-intuitive (a small simulation below makes this tangible). Yanofsky analyses why we arrive at these contradictions and how they can be resolved. Most of them are thought experiments that cannot happen or exist in the real world.
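
To see why switching doors pays off in the Monty Hall problem, a short simulation is more convincing than any argument. This is my own illustration, not code from the book, and it assumes the standard rules: one car behind three doors, and the host always opens a losing door that the contestant did not pick.

    import random

    def monty_hall(trials=100_000):
        """Estimate the winning chances of 'stay' versus 'switch'."""
        stay_wins = switch_wins = 0
        for _ in range(trials):
            car = random.randrange(3)     # door hiding the car
            pick = random.randrange(3)    # contestant's first choice
            # The host opens a door that hides a goat and was not picked.
            opened = next(d for d in range(3) if d != pick and d != car)
            switched = next(d for d in range(3) if d != pick and d != opened)
            stay_wins += (pick == car)
            switch_wins += (switched == car)
        return stay_wins / trials, switch_wins / trials

    stay, switch = monty_hall()
    print(f"stay wins ~{stay:.3f}, switch wins ~{switch:.3f}")   # about 0.333 vs 0.667

Running it confirms the counter-intuitive result: staying wins about one third of the time, switching about two thirds.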

When infinity is involved, reasoning can again easily lead to a contradiction, just as division by zero is the trick that allows one to prove anything. To understand infinity, Yanofsky includes a discussion of the countability of the natural and rational numbers and the uncountable infinity of the reals. This leads to the development of set theory and the Zermelo-Fraenkel system. Answering the question of whether this says something about what exists outside our brain, or only inside it, requires a discussion of the Platonist view of reality.
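
To make the countability claim concrete, here is a minimal sketch (my own illustration, not Yanofsky's) that lists every positive rational exactly once by walking Cantor's zig-zag over numerator plus denominator; Cantor's diagonal argument then shows that no analogous list can exhaust the real numbers.

    from fractions import Fraction

    def rationals():
        """Yield every positive rational exactly once (Cantor zig-zag)."""
        seen = set()
        s = 2                              # s = numerator + denominator
        while True:
            for p in range(1, s):
                r = Fraction(p, s - p)     # reduces automatically, e.g. 2/4 -> 1/2
                if r not in seen:          # skip values that appeared earlier
                    seen.add(r)
                    yield r
            s += 1

    gen = rationals()
    print([str(next(gen)) for _ in range(10)])
    # ['1', '1/2', '2', '1/3', '3', '1/4', '2/3', '3/2', '4', '1/5']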

Computational complexity is a subject somewhat closer to Yanofsky's heart. Searching and sorting, graphs, and hard problems like the travelling salesman problem and the Hamiltonian cycle problem are examples to be classified in the P versus NP landscape. In the higher regions of the complexity scale we find the non-computable problems, where computer algorithms analyse other algorithms. For example, using a Cantor-like diagonal argument, Yanofsky proves that the halting problem is undecidable, unless an oracle can be consulted (the idea is sketched below). But then we may wonder whether an algorithm will stop after a finite time when it is verifying whether another algorithm will halt. So the construction can be iterated: the halting problem relative to such an oracle is again undecidable, leading to a hierarchy of uncomputable problems just as there is a hierarchy of infinities. The philosophical question can then be raised whether the human brain can outperform a machine, or whether the brain is just a very complex machine. For the moment our brain can beat computers at certain tasks, but the gap is closing.
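
Here is that sketch in a few lines of Python, my own paraphrase rather than code from the book. The names are hypothetical; the whole point is that a real halts function cannot exist, because the program below contradicts whatever answer it gives.

    def halts(program, argument):
        """Hypothetical oracle: True iff program(argument) eventually stops.
        The diagonal argument shows it cannot be implemented in general."""
        raise NotImplementedError

    def diagonal(program):
        """Do the opposite of what the oracle predicts about program(program)."""
        if halts(program, program):
            while True:                    # loop forever if the oracle says 'halts'
                pass
        return                             # halt if the oracle says 'loops forever'

    # Does diagonal(diagonal) halt?  If halts(diagonal, diagonal) returned True,
    # diagonal(diagonal) would loop forever; if it returned False, it would halt.
    # Either way the oracle is wrong, so no such oracle can exist.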

Next follows a discussion of the traditional boundaries of what we do not know (yet): chaotic systems and, of course, quantum mechanics with Heisenberg's uncertainty principle, Schrödinger's cat, Bell's theorem, and the quantum eraser experiment. These quantum theoretical aspects again raise philosophical questions about free will and about the different interpretations of quantum mechanics. More familiar and a bit less weird is relativity theory, which mixes space and time as well as mass and energy, but still there are strange consequences, such as time travel leading to the twin paradox and the grandfather paradox. We also currently hit a theoretical boundary: relativistic quantum mechanics can join quantum mechanics and special relativity, but when we want to weave general relativity into this theoretical fabric to also include gravity, we run into contradictions. The efforts of scientists are focused on a more general theory of everything that includes all four fundamental interactions (electromagnetism, the weak and strong forces, and gravity). Current theoretical work has resulted in several kinds of string theory.
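
To put one concrete number on the twin paradox (my own illustration using the standard special-relativity time-dilation formula, not a calculation taken from the book): a twin travelling at 80% of the speed of light ages only 60% as fast as the twin who stays home.

    from math import sqrt

    def traveller_years(earth_years, v_over_c):
        """Proper time of the travelling twin (standard time-dilation formula)."""
        return earth_years * sqrt(1 - v_over_c**2)

    print(round(traveller_years(10, 0.8), 3))   # 6.0: ten Earth years, six for the traveller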

On a further philosophical level we may question the foundations of how science increases knowledge. Take, for example, the problem of induction: can we be absolutely sure that something holds just because we have never observed the contrary? Instead of proving that a certain scientific method is correct, we can turn the argument around, as Popper proposed, and accept a theory as long as it is not empirically falsified. Thomas Kuhn's vision of how science progresses was controversial in the 1960s. He claimed that science does not develop linearly: competing, incommensurable paradigms coexist for a while until progress happens through discontinuous paradigm shifts. Since mathematics seems to be a driving factor for progress, this leads to an investigation of Wigner's "Unreasonable Effectiveness of Mathematics". Yanofsky seems to support a Platonic view in which this effectiveness comes from the fact that mathematics is derived from physics. Einstein's relativity and Emmy Noether's conservation laws are based on symmetry, and symmetry is a driving force in today's fundamental physics. Thus the structure is there for us to detect. But if that is true, it only shifts the question to the next level: why is there all this structure in the universe, and why is intelligent life possible that can discover the mathematics in that universe? This starts an extensive discussion of the (weak) anthropic principle, but (of course) without a definitive answer.

Then Yanofsky comes to analyse mathematics itself. There are definite limitations to what is possible in mathematics. There are the classical Greek problems of squaring the circle, trisecting an angle, and doubling a cube using only a straightedge and compass. Galois theory put an end to the search for general formulas for the zeros of polynomials of degree five and higher, Robert Berger proved that the tiling problem is undecidable, and so are the solvability of Diophantine equations and the word problem for groups. On a more fundamental level, Peano arithmetic was proved to be consistent within the Zermelo-Fraenkel system with the axiom of choice (but is ZFC itself consistent?). Furthermore, Tarski's theorem and Gödel's incompleteness theorems are discussed in some detail. This also relates to Parikh's theorem from computer science and Löb's paradox from logic.

This brief sketch of the contents shows that many topics are discussed in this book on a broad scale. The approach is rather fundamental, but the most technical aspects are skipped, so the arguments remain accessible to a broad audience. Still, it is not leisure reading: you need to stay focussed to follow all the arguments. Yanofsky does not just skim the surface but penetrates the epidermis of the subject and actually proves things, especially those that are related to computer science. The mathematics part is not easy for an outsider either. Fortunately, he takes his time and is rather verbose, gently taking the reader along at a slow pace and not avoiding repetition where appropriate. Each chapter has an introduction announcing what will be discussed in the following sections and ends with notes and suggestions for further reading, and within the chapters there are ample pointers to specific notes and references gathered at the end of the book. And if you need to recall or reread something from previous chapters, there is an extensive index. Yanofsky keeps asking why at every stage and places the many, often opposing, philosophical opinions next to each other. In most cases he leaves it to the reader to decide which side to choose.

The year 2018 has been announced as the year of artificial intelligence because of the progress made in recent years. The prospect of a singularity, where human intelligence will be surpassed by computers, has raised serious concern. Several leading figures in science and the computer industry have started thinking more concretely about how to deal with this evolution. Since the hardcover version of this book was out of print, this paperback edition is most welcome these days. Although the contents date back to 2013, it is still an important read for its deep yet accessible approach to human and computer intelligence and its thorough discussion of the philosophical aspects.

Reviewer: 
Adhemar Bultheel
Book details

The book discusses the limitations of mathematics and logic and where science in general meets its current and future boundaries. This is the fuel for many classical philosophical controversies and for our vision of what science can and cannot explain. Indeed, it raises the very question of the reason for the existence of life and of human intelligence in this universe that we observe and try to understand.

Author: 
Noson S. Yanofsky
Publisher: 
The MIT Press
Published: 
2016
ISBN: 
978-0-262-01935-4 (hbk); 978-0-262-52984-6 (pbk); 978-0-262-31676-7 (ebk)
Price: 
out of print (hbk); £14.95 (pbk); $13.95 (ebk) (net)
Pages: 
428