This is a translation of Les Métamorphoses du calcul (2007), which was awarded the Grand Prix de Philosophie de l'Académie Française. The main idea is that computation is superseding proof by reasoning as a fundamental element in the building of mathematics. The two approaches are placed in their historical relationship to each other: the Greeks developed proof by reasoning where computation failed, and nowadays we seem to be evolving in the opposite direction.
Ever since the Greeks laid the foundations of our "Western" mathematics, based on axioms and on proving theorems by logical inference, little changed fundamentally until those foundations were scrutinized and revolutionized in the twentieth century. Computation was long considered boring and of lesser importance, happily left to machines. However, once computers are used to prove theorems, as was the case with the Four Color Problem, computation starts to compete with the traditional proof methods. Where the traditional methods set a bound on the length of a feasible proof (Wiles' proof of Fermat's last theorem runs to over 150 pages), computing may extend considerably what can be proved. It is the thesis of the author that computing will revolutionize mathematics in the coming decades.
In a first, historical part, it is shown how the Greeks had their proofs by reasoning, but that computation also has old roots: in Mesopotamia, in the Euclidean algorithm, and in Thales computing the height of the pyramids. Much later, calculus entered prominently as a computational tool and, after Descartes, even invaded geometry.
But it is the second part, sketching the mathematical (r)evolution of the twentieth century, that is at the core of the book. Dowek first discusses predicate logic, started off by Frege, who wanted to define the integers via set theory. As we know, this led to contradictions, so it had to be revised by Russell and Whitehead, who introduced types. These developments shaped the role of computing in the first half of the twentieth century. Early in that century, two related theories developed independently: computability and constructivity. To understand the role of computability, consider the Euclidean algorithm. If it computes z as the greatest common divisor of x and y, then the algorithm enables us to decide that the proposition "z is the gcd of x and y" is true. So the algorithm becomes a way of proving propositions. The next step is to wonder whether every mathematical proposition can be decided, i.e. "proved", by an algorithm under reasonable restrictions. In other words: can every function f(x) be computed in finite time and with a finite amount of information? Algorithms, and no longer numbers, now became the subject of computation. So we arrive at Hilbert's decision problem. It was answered independently by Church and Turing, Church using lambda calculus and Turing his Turing machines. They both found that there is no algorithm to decide whether a proposition is provable in predicate logic. Alternative routes to this result were formulated by Gödel and by Kleene, who used recursive functions; all of them were later shown to be equivalent. So it became clear that reasoning was not just a method to uncover results already implicit in the axioms. Constructivism is inherent in an algorithm, since it gives its results explicitly, but non-constructive proofs can be accepted too if it can be shown that some result exists even if we do not know it explicitly.
For example, when the assumption of non-existence leads to a contradiction. This, however, relies on the principle of the excluded middle: either A or not A. It is this principle that Brouwer, as a radical intuitionist, wanted to get rid of, but rejecting it of course endangered many results in mathematics. The conflict between constructive and non-constructive lies in the interpretation of "there exists": for the former it means "it exists and we know it", for the latter "it exists even if we do not know it". Dowek elaborates on Church's thesis and lambda calculus in separate chapters, and on the constructivism of Brouwer and how it influenced the design of algorithms.
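The idea that an algorithm doubles as a way of proving propositions can be sketched in a few lines of Python. This is my own illustration, not the book's; the function names are invented for the example:

```python
def gcd(x: int, y: int) -> int:
    """Euclid's algorithm: repeatedly replace (x, y) by (y, x mod y)."""
    while y != 0:
        x, y = y, x % y
    return x

def is_gcd(x: int, y: int, z: int) -> bool:
    """Decide the proposition 'z is the gcd of x and y' by computation."""
    return gcd(x, y) == z

# The computation itself settles the proposition, no reasoning required:
print(is_gcd(90, 21, 3))  # prints True
print(is_gcd(90, 21, 7))  # prints False
```

The point of the sketch is that `is_gcd` decides every proposition in this family in finite time; Hilbert's decision problem asks whether such a procedure could exist for all of predicate logic, and Church and Turing showed it cannot.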
The third part culminates in Dowek's own thesis: the crisis of the axiomatic method and the dawning supremacy of computation. In the 1970s, automatic theorem-proving programs were conceived, stimulating the fantasy of science fiction authors about machines dominating humanity. But disappointing results moved developers to less ambitious proof-checking algorithms and to correctness proofs of programs and circuits. Symbolic computation and computer algebra systems also entered mathematical research, and there was the proof by Appel and Haken of the four color problem in 1976. They had reduced the problem to an analysis of maps with ten regions only. This still required so many cases that checking all of them had to be done by a computer program. There was a big controversy over whether or not to accept this as a proof. Although the program has been verified for correctness by two automatic systems, one can never be sure that no minor flaw is left undetected, or that the verifying algorithms themselves were flawless. Time to rethink our concept of proof, as more computer-assisted proofs have appeared since the 1990s. In any case, as proofs become more complex, in fact too complex and too long to be written out in full, requiring hundreds or thousands of papers that no one can ever read or check, one might be forced to accept computer proofs, at least if no shorter proofs exist. This triggers the question whether it is provable that no short proof exists for some problem. Just as instruments long ago changed the development of the physical sciences, it is time to acknowledge that computers can and should be used as instruments for experiments within mathematics, experiments which, if they do not prove things, can at least show the way or help formulate hypotheses, a practice well accepted in the natural sciences. Although of course experiments alone will never replace proofs.
Dowek ends his book with a program for further research. To name just a few of the open questions: can we show that a problem does not have a short axiomatic proof? Can we replace all axioms by computation rules, and if not, in which cases? How much of the efficiency of mathematics, when applied to the physical sciences, can be explained by the Church-Turing thesis? Which branches of mathematics will profit most from the use of computers? And what will be the impact of computers on mathematical writing?
This is not easy reading, but Dowek has done a tremendous job of explaining all the theories and the logical and philosophical approaches in an accessible way. Technical details are left out, but simple examples such as the Euclidean algorithm make clear where the, sometimes subtle, differences lie, differences which may nevertheless have far-reaching consequences. Whether or not computation will eventually eliminate axioms is still undecided, but it is clear from observation that computation takes up an ever growing part of the job of a mathematician. If you, as a mathematician, are concerned about the foundations of your daily occupation, this is a book you should read, even if it might not influence your own research topic. If your subject is logic, theoretical computer science, or the philosophy of mathematics, this book is a must-read, although you may well be familiar with it already, since it has been around in French for about eight years now.