Foolproof, and Other Mathematical Meditations
Brian Hayes started his career as a member of the editorial staff of Scientific American in 1973. From 1983 on, his Computer Recreations column continued the tradition of the columns previously written by Martin Gardner and Douglas Hofstadter in Scientific American. After he left Scientific American he was mainly active as a contributor to (and for two years also the editor of) American Scientist, a bimonthly magazine devoted to science and technology. Two earlier books collected selections of his articles. The present book is a collection of updated and extended versions of 13 of his contributions that appeared previously in American Scientist. Since the original texts were exposed to a broad readership, many of the readers' reactions and additions could be incorporated into the current versions.
The topics covered by these 13 essays, as the author calls them, are diverse. Several of them are familiar subjects in popular science writing, but what I appreciate most is how Hayes transfers his interest in the subject to the reader. He does not just relay the knowledge of other authors, collecting ideas from the literature; he takes the reader along, showing how he explored the topic himself in his quest to understand the underlying truth or the mathematical law.
Anyone with an interest in puzzles and/or science and mathematics will love this book. No specific mathematical knowledge is required. To give a flavour of the contents, here is a quick survey.
The first text investigates the well-known story about Gauss, who as a schoolboy had to add the numbers from 1 to 100, and how he surprised the schoolteacher by putting the right answer on his desk with a "ligget se". He had invented the sum formula for an arithmetic sequence by adding symmetric pairs, arriving at 50 pairs each summing to 101. Just telling this story (with a personal pinch of drama added) is where most authors stop, but it is where Hayes starts. Where does the story come from? How much is authentic? Did Gauss indeed count symmetric pairs? How much time does it take to add the 100 numbers sequentially? And many more similar questions. After dredging the historical literature, Hayes concludes that the origin seems to be the biography of Gauss by Sartorius, written in 1856 as an obituary for Gauss, who had died the year before. However, the story has been moulded and modified a lot since then. Hayes has compiled and investigated the most complete collection of the different versions in which the story has been told since.
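The pairing argument is easy to verify in a few lines of Python (a quick sketch of the arithmetic, not code from the book):

```python
# Gauss's trick: pair 1 with 100, 2 with 99, ..., 50 with 51.
n = 100
pairs = [(k, n + 1 - k) for k in range(1, n // 2 + 1)]
assert all(a + b == n + 1 for a, b in pairs)   # every pair sums to 101
total = len(pairs) * (n + 1)                   # 50 pairs of 101
assert total == sum(range(1, n + 1))           # matches the sequential sum
print(total)                                   # 5050
```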
Another piece of historical detective work is the chapter where Hayes investigates what went wrong when in 1873 William Shanks published 707 digits of pi that he had calculated by hand. Only the first 527 places were correct, as D.F. Ferguson detected in 1944. Like a forensic investigator, Hayes analyses the computations of this cold case to find out exactly which errors Shanks made, and he succeeds in identifying several omissions in transcribing the numbers.
Statistics is used in several of the chapters. We all know that when computing an average, we can improve the quality of the estimate by increasing the sample size, provided the process converges. A counterexample investigated here is the factorial-like function n?. To find its value, one keeps multiplying randomly selected integers between 1 and n (possibly selecting the same number repeatedly) until one draws the number 1; the function value is the product obtained so far. Playing with this function, Hayes shows that the average outcome keeps increasing with the sample size.
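A minimal simulation of this experiment could look as follows (the function names are mine; this is a sketch of the idea, not Hayes's own code):

```python
import random

def random_factorial(n, rng=random):
    """One sample of n?: multiply uniform draws from 1..n until a 1 is drawn."""
    product = 1
    while True:
        k = rng.randint(1, n)
        product *= k
        if k == 1:
            return product

def sample_mean(n, samples, seed=None):
    """Average of `samples` independent draws of n?."""
    rng = random.Random(seed)
    return sum(random_factorial(n, rng) for _ in range(samples)) / samples
```

Running `sample_mean(10, N)` for growing N shows the averages drifting upward rather than settling down, which is the phenomenon the chapter explores.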
More statistics is used in the chapter on Zeno's game. You and another player start with the same amount of coins. In step n you bet 1/n on the outcome of a fair coin toss. If you win, you get 1/n of the other player's budget; if not, you lose 1/n of yours. Suppose you lose the first few tosses: can you ever recover from your loss and win in the end? This is of course related to the divergence of the harmonic series, but it can also be connected with Cantor sets and with random walks on binary trees.
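A simulation of the game as described might look like this (a sketch under my own reading of the rules; I start the wagers at n = 2 to avoid the degenerate all-or-nothing first bet):

```python
import random

def zeno_game(tosses, seed=None):
    """Play the game described above: at toss n the winner takes 1/n of the
    loser's fortune. Both players start with a fortune of 1."""
    rng = random.Random(seed)
    mine, theirs = 1.0, 1.0
    for n in range(2, tosses + 2):
        if rng.random() < 0.5:        # you win this toss
            stake = theirs / n
            mine, theirs = mine + stake, theirs - stake
        else:                         # you lose this toss
            stake = mine / n
            mine, theirs = mine - stake, theirs + stake
    return mine, theirs
```

Losing the first k tosses multiplies your fortune by (1 - 1/2)(1 - 1/3)...(1 - 1/(k+1)), and whether the later gains of 1/n can pull you back is exactly where the harmonic series enters.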
Random walks, space-filling curves, and self-avoiding walks have practical applications, but they are also fun to play with. For example, when walking on a rectangular grid, how many n-step self-avoiding walks exist? No exact formula is known, but a heuristic one has been proposed. The asymptotic behaviour as n approaches infinity gives rise to a so-called connective constant in this formula. Experimental values have been obtained, but so far no exact expression has been found. It is a tantalizing challenge to find out whether it is a (simple) expression in terms of known transcendental numbers.
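For small n the count can be found by brute-force enumeration; a quick Python sketch (my own, not taken from the book):

```python
def count_saws(n):
    """Count the n-step self-avoiding walks on the square lattice."""
    moves = [(1, 0), (-1, 0), (0, 1), (0, -1)]

    def extend(x, y, visited, remaining):
        if remaining == 0:
            return 1
        total = 0
        for dx, dy in moves:
            nxt = (x + dx, y + dy)
            if nxt not in visited:        # self-avoidance: never revisit a site
                visited.add(nxt)
                total += extend(nxt[0], nxt[1], visited, remaining - 1)
                visited.remove(nxt)
        return total

    return extend(0, 0, {(0, 0)}, n)

print([count_saws(n) for n in range(1, 6)])  # [4, 12, 36, 100, 284]
```

The n-th root of these counts approaches the connective constant of the square lattice, which is known only numerically (about 2.64); the enumeration itself grows exponentially, so this brute force is feasible only for modest n.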
In another chapter, space-filling curves are defined recursively, leading to fractals, Cantor's calculus of the infinite, and approximate solutions of the travelling salesman problem.
Another counting problem is to figure out the number of different (uniquely determined) solutions for a sudoku of order n. The history of sudokus is briefly discussed and some heuristics are given for solving them. The number of givens does not seem to be a good criterion for classifying a sudoku as easy or hard. More mathematical approaches to the problem are found in the references. It is strange that the book by J. Rosenhouse and L. Taalman, Taking Sudoku Seriously (Oxford University Press, 2012), is missing from the list. In any case, the sudoku problem is hard and assumed to be NP-complete. That 17 is the minimum number of givens for the usual order-3 sudoku was only proved in 2012, by exhaustively checking all of the approximately 5 billion solutions.
Markov chains are another statistical subject. The probability of a letter following a group of letters, or of a word following a group of words, can be characteristic of the text produced by a particular author. With these probabilities, one may write a computer program that generates (gibberish) text in the style of that author.
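Such a generator fits in a few lines; the following Python sketch (function names mine, not from the book) collects word-level statistics and babbles from them:

```python
import random
from collections import defaultdict

def build_chain(text, order=2):
    """Map each group of `order` consecutive words to the words that follow it."""
    words = text.split()
    chain = defaultdict(list)
    for i in range(len(words) - order):
        chain[tuple(words[i:i + order])].append(words[i + order])
    return chain

def babble(chain, length=20, seed=None):
    """Generate text by repeatedly sampling a successor of the current group."""
    rng = random.Random(seed)
    key = rng.choice(list(chain))
    out = list(key)
    for _ in range(length - len(key)):
        successors = chain.get(tuple(out[-len(key):]))
        if not successors:            # dead end: no word ever followed this group
            break
        out.append(rng.choice(successors))
    return " ".join(out)
```

Trained on a large enough corpus, the output is locally plausible but globally nonsense, which is exactly the effect described in the chapter.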
The distributions of discrete random variables are captured in a spectrum, like the eigenvalues of a random matrix, the zeros of the Riemann zeta function, or seismic activity. Hilbert and Pólya conjectured that there is some universal operator whose spectrum corresponds to the distribution of the Riemann zeta zeros. This can be interpreted as corresponding to the jumps in the energy levels of the nucleus of some imaginary chemical element that could be named Riemannium.
The distinction between pure random numbers and the more advantageous quasi- and pseudo-random numbers in computer applications is clearly explained in a chapter that also shows how Monte Carlo-type methods can solve high-dimensional integration problems.
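As an illustration of the idea (a toy example of my own, not one from the book), here is a pseudo-random and a quasi-random estimate of pi as four times the area of a quarter disc:

```python
import random

def mc_pi(samples, seed=0):
    """Pseudo-random Monte Carlo: fraction of random points in the quarter disc."""
    rng = random.Random(seed)
    inside = sum(rng.random() ** 2 + rng.random() ** 2 <= 1.0
                 for _ in range(samples))
    return 4.0 * inside / samples

def van_der_corput(i, base=2):
    """i-th point of the base-b van der Corput sequence (1-D quasi-random)."""
    q, denom = 0.0, 1.0
    while i:
        i, r = divmod(i, base)
        denom *= base
        q += r / denom
    return q

def qmc_pi(samples):
    """Quasi-Monte Carlo: the same count over a 2-D Halton sequence (bases 2, 3)."""
    inside = sum(van_der_corput(i, 2) ** 2 + van_der_corput(i, 3) ** 2 <= 1.0
                 for i in range(1, samples + 1))
    return 4.0 * inside / samples
```

The quasi-random points cover the square more evenly, so the error shrinks faster than the 1/sqrt(N) of plain Monte Carlo; for genuinely high-dimensional integrals this advantage is the point of the chapter.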
It is a known paradox that the volume of the largest ball that can be inscribed in the n-dimensional unit cube is surprisingly small when n is large. Hayes gives arguments that convince the reader that this is not so surprising, and that it should actually be an obvious fact.
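The numbers are easy to check from the standard volume formula for an n-ball (a quick computation of my own, not taken from the book):

```python
import math

def inscribed_ball_volume(n):
    """Volume of the radius-1/2 n-ball inscribed in the unit n-cube:
    pi^(n/2) / Gamma(n/2 + 1) * (1/2)^n."""
    return math.pi ** (n / 2) / math.gamma(n / 2 + 1) * 0.5 ** n

for n in (1, 2, 3, 10, 20):
    print(n, inscribed_ball_volume(n))
```

Already in 10 dimensions the inscribed ball fills less than 0.3 percent of the cube, and the fraction keeps shrinking as n grows.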
Sometimes Hayes includes short computer code snippets that he used in his experiments. One chapter is devoted to the representation of floating point numbers on computers. Assigning a fixed finite number of bits to represent a floating point number results in finite precision and rounding errors that accumulate during the computations. This may have catastrophic consequences if not kept under control. Besides the standard IEEE representation, alternatives exist that are more flexible in how the available bits are distributed between the significand and the exponent.
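The fixed bit budget of the standard format can be made visible in a few lines of Python (my own illustration of the IEEE 754 double layout, not code from the book):

```python
import struct

def double_fields(x):
    """Split a Python float into its IEEE 754 double fields:
    1 sign bit, 11 exponent bits, 52 significand bits."""
    (raw,) = struct.unpack(">Q", struct.pack(">d", x))
    sign = raw >> 63
    exponent = (raw >> 52) & 0x7FF
    significand = raw & ((1 << 52) - 1)
    return sign, exponent, significand

print(double_fields(1.0))    # (0, 1023, 0): the exponent is stored with a bias of 1023
print(0.1 + 0.2 == 0.3)      # False: rounding in the last significand bits
```

The alternative formats mentioned in the chapter essentially redraw the boundary between the 11 exponent bits and the 52 significand bits, trading range against precision.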
The final chapter is called Foolproof, which is also the title of the book. It sketches how the concept of a mathematical proof has evolved since Socrates. Nowadays computer proofs are more common, but the first computer-assisted proof (of the four colour problem in 1976) had a hard time getting accepted. Proofs provided by humans can also be very long or very cryptic, so that it takes years to check them. The incentive for this chapter is the proof by Wantzel in 1837 that the trisection of an angle using only ruler and compass is impossible. The problem had been posed in ancient Greece, but no proof was given until then, and yet Wantzel's proof remained in obscurity for quite a while.
I have tried to give an idea of what the different subjects are without including too many spoilers. There is plenty left to discover and savour for the connoisseur. Hayes has a pleasant style, and he takes you along on his personal exploration of each subject. Ample references are provided for those who are hungry for more details. Hayes also refers on several occasions to his website at bit-player.org for additional material. These web pages describe themselves as "an amateur's outlook on computation and mathematics". There you can find many animations and texts that will also be of interest to readers of this book.