there must be more to human thinking than can ever be achieved by a computer. ... Consciousness, in its particular manifestation in the human quality of understanding, is doing something that mere computation cannot. I make clear that the term `computation' includes both `top-down' systems, which act according to specific well-understood algorithmic procedures, and `bottom-up' systems, which are more loosely programmed in ways that allow them to learn by experience.
The question at issue is not a dualistic distinction between the conscious mind and the physical brain. Penrose accepts that the conscious mind arises as a functioning of the physical brain, but he does not believe this functioning could be simulated by a large number of silicon chips operating as they do in a modern computer.
Gödel's theorem seems to me to prove that Mechanism is false, that is, that minds cannot be explained as machines.

It is interesting, however, that Lucas doesn't rule out the construction of intelligent computers! At the end of his article, Lucas says:
When we increase the complexity of our machines there may, perhaps, be surprises in store for us. [Turing] draws a parallel with a fission pile. Below a certain ``critical'' size, nothing much happens: but above the critical size, the sparks begin to fly. So too, perhaps, with brains and machines.... This may be so. Complexity often does introduce qualitative differences. Although it sounds implausible, it might turn out that above a certain level of complexity, a machine ceased to be predictable, even in principle, and started doing things on its own account, or, to use a very revealing phrase, it might begin to have a mind of its own. It would begin to have a mind of its own when it was no longer entirely predictable and entirely docile, but was capable of doing things which we recognized as intelligent.... But then it would cease to be a machine, within the meaning of the act.

So for Lucas, a machine cannot think by definition of what a machine is. Penrose certainly does not agree with Lucas's view here. On the contrary, Penrose claims that no matter how complex they become, computers will not have the capacity of human understanding.
In 1961 John Lucas, an Oxford philosopher well known for espousing controversial views, published a paper in which he purported to show that a famous theorem of mathematical logic known as Gödel's Second Incompleteness Theorem implies that human intelligence cannot be simulated by a computer. Roger Penrose is perhaps the only well-known present-day thinker to be convinced by Lucas's argument.... The right moral to draw from Lucas's argument, according to Penrose, is that noncomputational processes do somehow go on in the brain, and we need a new kind of physics to account for them.

Shadows of the mind will be hailed as a ``controversial'' book, and it will no doubt sell very well even though it includes explanations of difficult concepts from quantum mechanics and computational science. And yet this reviewer regards its appearance as a sad episode in our current intellectual life. Roger Penrose ... is convinced by, and has produced this book as well as the earlier The emperor's new mind to defend, an argument that all experts in mathematical logic have long rejected as fallacious. The fact that the experts all reject Lucas's infamous argument counts for nothing in Penrose's eyes. He mistakenly believes that he has a philosophical disagreement with the logical community, when in fact this is a straightforward case of a mathematical fallacy.
© 1994 by The New York Times Co. Reprinted by Permission.
I believe that in the next ten to twenty years some machines will become more intelligent than humans.

We need to go to an appropriate level of modelling the functioning of a human brain in order to obtain a very good approximation.... Is it down, much further than the neuron level, to the atomic level? Should we look at quantum physics... as Penrose thinks, generating an atomic model of the brain's functioning that we do not yet have?... In reality I feel that we already do have sufficient basic modelling blocks.... What are those blocks? They are the neurons, the fundamental cells in the brain which cause it to operate as it does.
One major problem with consciousness is that it is a state which we ourselves feel that we are in, and through this we are aware of ourselves. It can, therefore, be extremely difficult, when thinking about ourselves, to realise that this is simply a result of a specific state of the neurons in our brain. These `private' thoughts and feelings we have, how can they possibly be simply a state of mind, realised by a state of the neurons? Well, that is exactly what they are.
Figure 2: Alan Turing, 1951. By courtesy of the National Portrait Gallery, London.
Since part of Penrose's argument is based on a theorem of Turing concerning a limitation on what can be achieved by computers, it is interesting to know Turing's thoughts on these issues. Turing wrote an article [24]^{4} in which he gives his opinion on whether or not a computer will ever be able to think. In the article Turing states his belief that his own fundamental theorem concerning the limitations of computing machines is not an obstacle to the creation of intelligent computers.
Turing begins his article with the sentence:
I propose to consider the question, `Can machines think?'

The question in this form is, he believes, too vague and he restricts the machine to being a computer. He then replaces the question whether a computer can think like a human with the question whether a computer can behave like a human, and more specifically whether a computer can answer questions posed by an interrogator so that the interrogator cannot distinguish the replies of the computer from those of a human. (He calls this the imitation game. It is now known as the Turing test^{5}.)
Turing states clearly that he believes the computer will do well at imitating human behavior and gives 50 years as the time scale when computers will have become sufficiently powerful to begin doing this. He was writing in 1950 and so we are just coming to his predicted time period. (Remember the new Sandia-Intel supercomputer!) About his beliefs he says:
I believe further that no useful purpose is served by concealing these beliefs. The popular view that scientists proceed inexorably from well-established fact to well-established fact, never being influenced by any unproved conjecture, is quite mistaken. Provided it is made clear which are proved facts and which are conjectures, no harm can result. Conjectures are of great importance since they suggest useful lines of research.

To construct intelligent computers, Turing proposes to use the technique of computer learning:
Instead of trying to produce a programme to simulate the adult mind, why not rather try to produce one which simulates the child's? If this were then subjected to an appropriate course of education one would obtain the adult brain.... Our hope is that there is so little mechanism in the child-brain that something like it can be easily programmed. The amount of work in the education we can assume, as a first approximation, to be much the same as for the human child.

Turing considers, and rejects, contrary views to his own. He considers in particular the mathematical objection based on Turing's theorem, the issue we will be considering in this chapter. We give his discussion of this point in its entirety:
There are a number of results of mathematical logic which can be used to show that there are limitations to the powers of discrete-state machines. The best known of these results is known as Gödel's theorem, and shows that in any sufficiently powerful logical system statements can be formulated which can neither be proved nor disproved within the system, unless possibly the system itself is inconsistent. There are other, in some respects similar, results due to Church, Kleene, Rosser, and Turing. The latter result is the most convenient to consider, since it refers directly to machines, whereas the others can only be used in a comparatively indirect argument: for instance if Gödel's theorem is to be used we need in addition to have some means of describing logical systems in terms of machines, and machines in terms of logical systems. The result in question refers to a type of machine which is essentially a digital computer with an infinite capacity. It states that there are certain things that such a machine cannot do. If it is rigged up to give answers to questions as in the imitation game, there will be some questions to which it will either give a wrong answer, or fail to give an answer at all however much time is allowed for a reply. There may, of course, be many such questions, and questions which cannot be answered by one machine may be satisfactorily answered by another. We are of course supposing for the present that the questions are of the kind to which an answer `Yes' or `No' is appropriate, rather than questions such as `What do you think of Picasso?' The questions that we know the machines must fail on are of this type, ``Consider the machine specified as follows... Will this machine ever answer `Yes' to any question?'' The dots are to be replaced by a description of some machine in a standard form, which could be something like that used in §5.
When the machine described bears a certain comparatively simple relation to the machine which is under interrogation, it can be shown that the answer is either wrong or not forthcoming. This is the mathematical result: it is argued that it proves a disability of machines to which the human intellect is not subject.

The short answer to this argument is that although it is established that there are limitations to the powers of any particular machine, it has only been stated, without any sort of proof, that no such limitations apply to the human intellect. But I do not think this view can be dismissed quite so lightly. Whenever one of these machines is asked the appropriate critical question, and gives a definite answer, we know that this answer must be wrong, and this gives us a certain feeling of superiority. Is this feeling illusory? It is no doubt quite genuine, but I do not think too much importance should be attached to it. We too often give wrong answers to questions ourselves to be justified in being very pleased at such evidence of the fallibility on the part of the machines. Further, our superiority can only be felt on such an occasion in relation to the one machine over which we have scored our petty triumph. There would be no question of triumphing simultaneously over all machines. In short, then, there might be men cleverer than any given machine, but then again there might be other machines cleverer again, and so on.

Turing ends his article looking towards the future:
We may hope that machines will eventually compete with men in all purely intellectual fields. But which are the best ones to start with? Even this is a difficult decision. Many people think that a very abstract activity, like the playing of chess^{6}, would be best. It can also be maintained that it is best to provide the machine with the best sense organs that money can buy, and then teach it to understand and speak English. This process could follow the normal teaching of a child. Things would be pointed out and named, etc. Again I do not know what the right answer is, but I think both approaches should be tried.

We can only see a short distance ahead, but we can see plenty there that needs to be done.
One answer to this question might be that I am unable, because of my intellectual limitations, interests in other matters, general laziness, or some combination of all three, to follow Penrose's arguments through their many details. Perhaps he is right, and I just don't get it.

To understand Penrose's mathematical argument against a computational mind, it will be necessary to disentangle all the various ingredients: Turing's theorem on limitations to computations, Gödel's theorem on limitations to formal systems, the formal system used to analyze a computation, computational models, mathematical truths, and mathematical beliefs. We shall uncover in section 6 a category-mistake in Penrose's reasoning, a confusion of the deduction of `H believes X' with the deduction of X itself. By supposing that beliefs such as X are theorems, Penrose has made the consistency and correctness of his deductive system dependent on the consistency and correctness of H's beliefs, and this cannot be right. The beliefs and other thoughts of a mind cannot be theorems of the deductive system composing our theory of mind. The basic structure of the theory must be consistent whether or not the beliefs of a particular mind are consistent. Indeed, even in our everyday reasoning about other people's beliefs, those beliefs do not become part of our reasoning. And if those beliefs are contradictory, it does not follow that our reasoning about those beliefs is contradictory. We should be able, for example, to consistently and correctly deduce that someone will perform a particular action as a result of the incorrect beliefs he holds.
But let us begin by discussing the basic mathematical results, unencumbered by possible applications to a theory of mind.
27 + 96 = ?

You might say, `That's easy; 123.' Or you might say `What do you want me to do with this piece of paper?', or `Que voulez-vous que je fasse de ce papier?' if you're French. The point is that before you can respond to something I do you have to understand what I'm doing. We would have had to establish a means of communication so you will understand that I'm asking a question and understand what the question is before you can answer it. All sorts of conventions have been established as you have grown up and studied at school so that the initial communication hurdle is not an obstacle. Suppose you understand that I'm asking a question but you're not sure what some of the symbols on the paper represent. You might ask me `What is that symbol 27?' Perhaps I might show you a bowl with 27 marbles in it. But you might not know what aspect of what I showed you corresponded to the 27. So I might then show you a bowl with 27 oranges. In case you still do not understand I might show you a bowl with 27 mice. You might then get the idea that 27 is the common characteristic of all the things I showed you: It must be a bowl! No, I'm sure you would understand that 27 represents the quantity of marbles, or oranges. In fact to keep everything as concrete as possible and avoid unnecessary abstractions, no harm will result if we agree that we are always going to talk about marbles. So 27 means 27 marbles. Going back to what was written on that piece of paper, we will now suppose that you understand that I'm asking you a question about quantities of marbles. There are two quantities mentioned: 27 and 96. You might guess that the symbol + represents some operation you are to perform on the two quantities of marbles, but which operation? I'm sure you would say `That's easy, it means to add the two quantities of marbles together.' It's only easy because you were taught how to do addition and the symbol which represents it in school.
We must start somewhere, with some things understood^{7} (for example the meaning of 27 marbles and certain basic operations which could be carried out on quantities of marbles). Then the meaning of more and more complicated operations on marbles could be defined. For example, I might hand you a piece of paper with this written on it:

In order to solve problems in arithmetic (which for concreteness we will take to mean questions about quantities of marbles) we must first agree on a certain set of basic instructions, or operations on the marbles, from which a rule may be given to perform more complicated operations. The rule is a program to be executed by performing the basic instructions one after the other. We might ask questions about the result of executing a program starting from some initial collection of bowls with marbles in them. If the program represents addition then we are asking a question about addition. If the program represents multiplication then we are asking a multiplication question. And so on.
Why do we choose to formulate arithmetic in terms of bowls of marbles? Because in this way arithmetic corresponds to concrete actions on concrete objects (marbles, or coins, etc.). This is how arithmetic originally developed, as a concrete activity rather than an abstract mental construct. At this stage there is no logic, no deduction or proof, just concrete actions carried out according to instructions which define addition, multiplication, and other more complicated arithmetical procedures. At a later stage we may reason about these arithmetical procedures, using rules of deduction which seem to us to be sound.
The setup we have been considering, consisting of registers (bowls) R_{k} containing numbers (of marbles) r_{k} and programs of instructions of the previously discussed type, is called an Unlimited Register Machine or URM for short.^{10} Using this approach you can compute any function which can be computed in any other way. In particular, you can compute the same functions as Turing machines compute.
A program P has a finite specification by the instructions I_{0},…,I_{b}. You can however execute the program with infinitely many possible initializations of the registers. The program thus represents a uniform way for you to deal with the infinitely many cases you may be presented with.
 (1) 
R_{0}  R_{1}  R_{2}  R_{3}  next instruction 
3  2  0  0  0 
3  2  0  0  1 
4  2  0  0  2 
4  2  1  0  3 
4  2  1  0  0 
4  2  1  0  1 
5  2  1  0  2 
5  2  2  0  3 
5  2  2  0  0 
5  2  2  0  4 
5  0  2  0  5 
5  0  0  0  6 
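The execution trace above can be reproduced by a short interpreter. The sketch below is an illustration, not the chapter's own program (1): it assumes the standard URM instruction set in which Z(n) sets register n to 0, S(n) adds 1 to register n, T(m,n) copies register m into n, and J(m,n,q) jumps to instruction q when r_{m} = r_{n}, and the addition program shown is merely one program consistent with the trace.

```python
# Minimal URM interpreter (a sketch; the instruction set Z/S/T/J is
# the standard one and is an assumption, not the chapter's notation).
def run(program, registers, max_steps=10_000):
    regs = dict(registers)          # register index -> contents
    pc = 0                          # index of the next instruction
    for _ in range(max_steps):
        if pc >= len(program):      # moved past the last instruction: halt
            return regs
        op, *args = program[pc]
        if op == "Z":               # Z(n): set register n to 0
            regs[args[0]] = 0
        elif op == "S":             # S(n): add 1 to register n
            regs[args[0]] = regs.get(args[0], 0) + 1
        elif op == "T":             # T(m, n): copy register m into n
            regs[args[1]] = regs.get(args[0], 0)
        elif op == "J":             # J(m, n, q): jump to q when r_m == r_n
            m, n, q = args
            if regs.get(m, 0) == regs.get(n, 0):
                pc = q
                continue
        pc += 1
    raise RuntimeError("step limit exceeded (program may not halt)")

# A program consistent with the trace: add R1 to R0, counting in R2.
ADD = [("J", 1, 2, 4),  # I0: if r1 = r2, the addition is finished
       ("S", 0),        # I1: increment R0
       ("S", 2),        # I2: increment the counter R2
       ("J", 3, 3, 0),  # I3: unconditional jump back to I0
       ("Z", 1),        # I4: zero R1
       ("Z", 2)]        # I5: zero R2, then halt

result = run(ADD, {0: 3, 1: 2})
print(result[0])  # prints 5
```

Running this with r_{0} = 3 and r_{1} = 2 passes through exactly the register contents listed in the trace, halting with 5 in R_{0}.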


Then the jump instruction J(j,k,l) is associated with the number m = g(j,k,l).
In the final step we assign numbers to the programs P = I_{0},I_{1},…,I_{b}. Since we have assigned numbers to the individual instructions, we can associate with P the sequence of numbers m_{0},m_{1},…,m_{b}, where m_{j} is the number assigned to the instruction I_{j}. Thus we must encode all such sequences of numbers into a single number. One way to do this is to associate to the numbers m_{0},m_{1},…,m_{b} the number:

We may explain formula (4) this way. Given the program P and the corresponding Gödel numbers m_{0},m_{1},…,m_{b} we compute h using the following prescription:
 (5) 
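Whatever specific coding is chosen, the essential requirements are that distinct sequences get distinct numbers and that the sequence can be mechanically recovered from its number. The sketch below illustrates this with one standard scheme, exponents of successive primes; this is an assumed scheme for illustration and not necessarily the chapter's formula (4).

```python
# One standard way to pack a finite sequence of numbers into a single
# Gödel number: exponents of successive primes. This is an assumed
# scheme, not the chapter's own formula (4).
def primes():
    n, found = 2, []
    while True:
        if all(n % p for p in found):
            found.append(n)
            yield n
        n += 1

def encode(seq):
    g, ps = 1, primes()
    for m in seq:
        g *= next(ps) ** (m + 1)   # +1 so that m = 0 still leaves a factor
    return g

def decode(g):
    seq, ps = [], primes()
    while g > 1:
        p, e = next(ps), 0
        while g % p == 0:
            g //= p
            e += 1
        seq.append(e - 1)
    return seq

print(decode(encode([3, 0, 7])))  # prints [3, 0, 7]
```

Because decoding is mechanical, a program can recover the instruction list I_{0},…,I_{b} from the single number, which is all the later diagonal arguments require.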
Let's find the programs corresponding to some Gödel numbers:



8  1  6 
3  5  7 
4  9  2 
This is a magic square, where the numbers add to 15 along any row, column, or diagonal.
The first player begins the game by putting an X in one of the squares on the board. This is the same as choosing a number from 1 to 9. The second player now puts an O in one of the remaining squares. This is the same as choosing one of the remaining numbers. Play proceeds like this until one player has marked all the squares in a row, column, or diagonal. This is the same as one player having three numbers which add to 15.
We have thus reformulated the game of tic-tac-toe as an equivalent game involving a purely numerical procedure of choosing numbers from 1 to 9, the game being won when a player gets three numbers which add to 15. It is described in Karl Fulves' Self-Working Number Magic [9] where it is called the marrakech game.
Of course, as numbers do not carry any meaning beyond their place in the succession of numbers, the marrakech game no longer has the same `meaning' as the original game. (The marrakech game is harder to play than tic-tac-toe because the geometrical symmetries of the tic-tac-toe board and the geometry of the game play are not present in the marrakech game.)
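The equivalence of the two games can be checked exhaustively: the triples of distinct numbers from 1 to 9 that sum to 15 are exactly the eight lines of the magic square, so `three in a row' and `three numbers adding to 15' are the same winning condition.

```python
from itertools import combinations

SQUARE = [[8, 1, 6],
          [3, 5, 7],
          [4, 9, 2]]

# The eight winning lines: rows, columns, and both diagonals.
lines = ([set(row) for row in SQUARE] +
         [set(col) for col in zip(*SQUARE)] +
         [{SQUARE[i][i] for i in range(3)},
          {SQUARE[i][2 - i] for i in range(3)}])

# All triples of distinct numbers 1..9 that sum to 15.
triples = [set(t) for t in combinations(range(1, 10), 3) if sum(t) == 15]

print(len(triples))                            # prints 8
print(all(line in triples for line in lines))  # prints True
```

Since there are exactly eight such triples and each of the eight lines is one of them, the correspondence between lines and 15-sums is one-to-one.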
We have seen that all programs can be numbered: P_{0},P_{1},P_{2},…. Let's say that a number m is produced by a program P_{n} if there is an initial value r_{0} of register R_{0} such that when all other registers are initialized to 0 (r_{k} = 0 for k ≠ 0) and the program executed, the program will be completed with m in register R_{0} (and any numbers in the other registers). We will be interested in the family of statements S_{n} of the form `n is not produced by P_{n}'. Which statements S_{n} can be shown to be true? For some values of n, by reasoning about the sequence of instructions in P_{n} you might find a way of demonstrating that S_{n} is true. For other values of n you might be able to demonstrate that S_{n} is false. For still other values of n you might have no idea whether or not S_{n} is true (or even whether or not it is meaningful to ask if it is true). For example:
To see that no such P_{l} exists, suppose that P_{l} produces the number l. Given that P_{l} is sound, it would follow that S_{l} is true, i.e. that P_{l} does not produce l. This contradiction shows that a sound program cannot produce its own Gödel number: given that P_{l} is sound, P_{l} does not produce l, and therefore S_{l} is true. Hence the sound program P_{l} cannot produce the number l of the true statement S_{l}. We thus have our first main result:
If P_{l} is sound, then S_{l} is true but P_{l} does not produce l.  (6)
 (7) 
The registers are initialized to 0 except R_{0} which is set to r_{0} = m. First test if m = 0 by comparing r_{0} and r_{1} (since r_{1} = 0). If m = 0, add^{15} l to r_{0} and halt. If m ≠ 0, subtract^{16} 1 from r_{0} and then execute the program P_{l}.
Consequently the program P_{l′} is also sound and furthermore produces the number l, a number not produced by P_{l}. But the previous argument applied to P_{l′} shows that S_{l′} is true but not produced by P_{l′} (and hence also not by P_{l}). This is our third main result:
If P_{l} is sound, then S_{l′} is true but is produced neither by P_{l′} nor by P_{l}.  (8)
 (9) 
 (10) 
In section 2 we defined the basic instructions and the programs composed of these instructions. I can explain to you very simply what you are to do when following each instruction and you may then execute the program. It is all very concrete and involves moving marbles in a specific way. A chimpanzee could be trained to execute some programs. A mechanical contraption could be built which would execute the programs, depending on how various levers were set. A present-day computer could be set up to execute any program if you type the instructions on the keyboard. There is no `logic' or reasoning which forms part of the process of executing a program. On the other hand, in section 3 we have been reasoning about programs. This involves something different from the programs themselves. What constitutes valid reasoning? Can we be absolutely precise as to what shall constitute an acceptable proof? In what language shall the reasoning take place?^{17}

It was with the objective in mind of setting down a precise, consistent framework in which to conduct mathematical reasoning that formal mathematical systems were developed. A formal mathematical system is formulated with a specified language, which consists of an alphabet of symbols, together with rules for writing down formulas. In addition, there are rules^{18} which determine when a finite sequence of formulas constitutes a proof, the final formula of the proof being the theorem which is proved. The most characteristic property of such systems is the mechanical nature of proof-checking, a property emphasized by David Hilbert (1862-1943). Let's decide on a method^{19} to encode formulas and finite sequences of formulas into (Gödel) numbers. Then the mechanical nature of proof-checking means that there is a program P_{l} which will produce precisely the Gödel numbers of proofs, and a program P_{m} which will produce the Gödel numbers of the last formula in each proof.
Every proof will eventually be produced by P_{l} and hence every theorem will eventually be produced by P_{m}.
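The mechanical nature of proof-checking is what makes such programs possible. The toy formal system below is an assumption for illustration only (a single axiom `I' and a single rule: append `0' to any theorem): a checker decides proofhood mechanically, and systematically generating candidate proofs and keeping those the checker accepts therefore eventually yields every theorem.

```python
from itertools import count, product

AXIOM = "I"

def is_proof(seq):
    """Mechanical proof check for a toy system: the first formula must
    be the axiom, and each later formula must be the previous plus '0'."""
    if not seq or seq[0] != AXIOM:
        return False
    return all(b == a + "0" for a, b in zip(seq, seq[1:]))

def theorems():
    """Enumerate theorems by generating candidate proofs of increasing
    length and keeping those the checker accepts (the role of P_m).
    One accepted proof per length suffices in this toy system."""
    for n in count(1):
        pool = ["I" + "0" * k for k in range(n)]  # candidate formulas
        for seq in product(pool, repeat=n):
            if is_proof(seq):
                yield seq[-1]   # the last formula is the theorem proved
                break

gen = theorems()
print([next(gen) for _ in range(4)])  # prints ['I', 'I0', 'I00', 'I000']
```

The point of the sketch is only the shape of the procedure: because `is_proof` is a mechanical test, blind enumeration plus checking produces, in the limit, exactly the theorems.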
When we use a formal mathematical system we have a particular interpretation in mind for the symbols and formulas. However, someone else may use the same formal system with a different interpretation.^{20} The formal system gives rules for writing symbols on paper. It has no intrinsic meaning and does not come with an interpretation. For this reason, Bertrand Russell once said (quoted in chapter XI of [12]):
Mathematics may be defined as the subject in which we never know what we are talking about nor whether what we are saying is true.

and Henri Poincaré said (quoted in chapter XII of [12]):
Mathematics is the art of giving the same name to different things.

Once an interpretation is settled upon we can inquire as to whether or not a particular formula is true. You may use any interpretation you wish provided all the axioms of the formal system are true in your interpretation. The formal system is said to be sound if its theorems are true no matter which interpretation you are using.^{21}
So suppose we have settled on a particular formal system to make deductions about the programs we have been considering. We interpret^{22} the formulas as statements about programs. The statement S_{n}, discussed in the preceding section, will be expressed by a formula f_{n}. Then if S_{n} is true we will say that the formula f_{n} is true, and if S_{n} is false we will say that the formula f_{n} is false.
From the program P_{m} which produces the Gödel numbers of all the theorems of the formal system, we can write down a program P_{j} which produces the number n if and only if f_{n} is a theorem. Here's a prescription to do this. For given input, execute P_{m}. If P_{m} halts and produces the number k, then we know k is the Gödel number of a theorem. To find out if k corresponds to one of the formulas f_{n}, we need to use the program P_{M} which, given input n, produces the Gödel number of f_{n}. We execute P_{M} repeatedly with input 0, then 1, then 2, etc., comparing the number produced with k. If k is not the Gödel number of some f_{n} the program will not halt. If k is the Gödel number of f_{n} then the program produces n as output. This prescription in words must be translated into a program P_{j}. This is easy to do once we have the programs P_{m} and P_{M}. A flow diagram for P_{j} is shown^{23} in Figure 7. Summarizing, from the program P_{m} which produces the Gödel numbers of theorems of our formal system, and the program P_{M} which from input n produces the Gödel number of f_{n}, we can write down a program P_{j} which produces the numbers n such that f_{n} is a theorem.
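The search step inside P_{j} can be sketched directly. The concrete coding below stands in for P_{M} and is an assumption for illustration only; what matters is the search pattern, which returns n exactly when the theorem's number k is the Gödel number of some f_{n}, and runs forever otherwise, just as the text describes.

```python
from itertools import count

# Stand-in for P_M (an illustrative assumption): pretend the Gödel
# number of the formula f_n is simply 2*n + 1.
def godel_of_f(n):
    return 2 * n + 1

def find_n(k):
    """Given a theorem's Gödel number k (an output of P_m), search for
    the n with godel_of_f(n) == k by trying n = 0, 1, 2, ... in turn.
    Like the text's P_j, this diverges if there is no such n."""
    for n in count():
        if godel_of_f(n) == k:
            return n

print(find_n(7))  # prints 3: under the toy coding, f_3 has Gödel number 7
```

Chaining this search after each output of P_{m} gives the program P_{j}: it emits n precisely when f_{n} is a theorem.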
Let us now suppose that our formal system is sound, i.e. the theorems are true. Thus P_{m} will produce only Gödel numbers of true formulas, and hence P_{j} will produce only numbers n such that S_{n} is true. Thus P_{j} is sound. It follows by (5) that the number j is not produced by P_{j} and S_{j} is true. Thus f_{j} is true and not produced by P_{m}, i.e. f_{j} is true but is not a theorem of the formal system. This conclusion holds for any sound formal mathematical system with a language sufficiently broad to express the statements S_{n}. We thus have the following version of Gödel's theorem:
For any sound formal mathematical system whose language can express the statements S_{n}, there is a true formula f_{j} which is not a theorem of the system.  (11)
An interpretation gives meaning to the formal system and this meaning leads us to truths not derivable from the formal system itself. This must be expected if a formal system can be given different interpretations, and so different meanings and truths.
If one has ten pounds of axioms and a twenty pound theorem, then that theorem cannot be derived from those axioms.
An oracle in the form of a `black box', the interior of which we do not know, produces statements. The oracle is sound if all the statements are true. Now define^{25} a device D which does the following: if the oracle produces the statement `D does not produce anything', then D produces the number 0 in response. If the oracle produces any other statement, D does not produce anything in response.

We may easily deduce that if the oracle is sound, then it will not produce the statement `D does not produce anything', and furthermore it is true that D does not produce anything. Thus if we know the oracle is sound, then we know a truth not produced by the oracle.
I am reminded of a story concerning the great American physicist Richard Feynman. Apparently Feynman was explaining some idea to a student, but misstated it. When the student expressed puzzlement, Feynman replied: `Don't listen to what I say; listen to what I mean!'

In Chapter 2, The Gödelian case, and Chapter 3, The case for noncomputability in mathematical thought, Penrose argues that the human mind cannot be simulated by a computer:
I shall shortly be giving (in Chapters 2 and 3) some very strong reasons for believing that effects of (certain kinds of) understanding cannot be properly simulated in any kind of computational terms... Thus, the human faculty of being able to `understand' is something that must be achieved by some noncomputational activity of the brain or mind... The term `noncomputational' here refers to something beyond any kind of effective simulation by means of any computer based on the logical principles that underlie all the electronic or mechanical calculating devices of today.

Penrose puts forward his main argument in section 2.5 of Shadows of the Mind, and then deals with various counterarguments in Chapters 2 and 3:
The argument I shall present in the next chapter (section 2.5) provides what I believe to be a very clearcut argument for a noncomputational ingredient in our conscious thinking... In due course (in Chapters 2 and 3), I shall be addressing, in detail, all the different counterarguments that have come to my attention.
 (12) 
What Penrose has shown is quite compatible with the claim that a computer program could in principle successfully simulate our mathematical capacities. The possibility exists that each of the rules that a human mathematician explicitly relies on, or can be rationally persuaded to rely on, can be known to be sound and that the program generates all and only these rules but that the program itself cannot be rendered sufficiently ``perspicuous'' for us to know that that is what it does. Actual programs sometimes consist of thousands of lines of code, and it can happen that by the time a program has been tinkered with and debugged no one is really able to explain exactly how it works. A program which simulated the brain of an idealized mathematician might well consist of hundreds of thousands (or millions or billions) of lines of code. Imagine it given in the form of a volume the size of the New York telephone book. Then we might not be able to appreciate it in a perfectly conscious way, in the sense of understanding it or of being able to say whether it is plausible or implausible that it should output correct mathematical proofs and only correct mathematical proofs.

In short, there may be a program P_{l} which is sound and which produces precisely the same numbers n of true statements S_{n} that human mathematicians can produce. But for this it is necessary that human mathematicians are unable to ascertain that that is what P_{l} does.
© 1994 by The New York Times Co. Reprinted by permission.
I cannot really see that it is plausible that mathematicians are really using an unsound formal system F as the basis of their mathematical understandings and beliefs. I hope the reader will indeed agree with me that whether or not such a consideration is possible, it is certainly not at all plausible.

But in fact there has been a loss of certainty in the soundness and completeness of mathematics, as Penrose acknowledges and discusses in sections 2.10 and 3.4 of Shadows of the Mind. Here are the comments of several influential mathematicians:
I have told the story of this controversy in such detail, because I think that it constitutes the best caution against taking the immovable rigour of mathematics too much for granted. This happened in our own lifetime, and I know myself how humiliatingly easily my own views regarding the absolute mathematical truth changed during this episode, and how they changed three times in succession!... It is hardly possible to believe in the existence of an absolute, immutable concept of mathematical rigor, dissociated from all human experience.

John von Neumann (from [26])
Mathematics may be likened to a Promethean labor, full of life, energy and great wonder, yet containing the seed of an overwhelming selfdoubt. It is good that only rarely do we pause to review the situation and to set down our thoughts on these deepest questions. During the rest of our mathematical lives we watch and perhaps partake in the glorious procession.... This is our fate, to live with doubts, to pursue a subject whose absoluteness we are not certain of, in short to realize that the only ``true'' science is itself of the same mortal, perhaps empirical, nature as all other human undertakings.

Paul J. Cohen (from [6])
I wanted certainty in the kind of way in which people want religious faith. I thought that certainty is more likely to be found in mathematics than elsewhere. But I discovered that many mathematical demonstrations, which my teachers expected me to accept, were full of fallacies, and that, if certainty were indeed discoverable in mathematics, it would be in a new field of mathematics, with more solid foundations than those that had hitherto been thought secure.... After some twenty years of very arduous toil, I came to the conclusion that there was nothing more that I could do in the way of making mathematical knowledge indubitable. [19]... The splendid certainty which I had always hoped to find in mathematics was lost in a bewildering maze. [20]

Bertrand Russell
Only he who recognizes that he has nothing, that he cannot possess anything, that absolute certainty is unattainable, who completely resigns himself and sacrifices all, who gives everything, who does not know anything, does not want anything and does not want to know anything, who abandons and neglects everything, he will receive all; to him the world of freedom opens, the world of painless contemplation and of nothing.

L. E. J. Brouwer (from [3])
In this way, by executing the program P_{l}, we may compute the values f_{0}(t), …, f_{N}(t) for all (discrete) times t, and consequently the sequence of states of the neural network.
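The neuronal update that the program P_{l} carries out can be sketched in ordinary code. This is only an illustration of the threshold rule described above; the particular weights, thresholds, and three-neuron example are invented for the sketch and are not taken from the text.

```python
# Sketch of the neural-network update that the program P_l computes.
# The weights w[j][i] and thresholds theta[j] below are hypothetical values.

def step_network(f, w, theta):
    """One discrete time step: neuron j fires at t+1 iff its
    weighted input at time t reaches its threshold theta[j]."""
    N = len(f)
    return [1 if sum(w[j][i] * f[i] for i in range(N)) >= theta[j] else 0
            for j in range(N)]

def run_network(f0, w, theta, t):
    """Compute the state (f_0(t), ..., f_N(t)) from the initial state f0."""
    f = list(f0)
    for _ in range(t):
        f = step_network(f, w, theta)
    return f

# A 3-neuron example: neuron 2 fires exactly when neurons 0 and 1 both fired.
w = [[0, 0, 0],
     [0, 0, 0],
     [1, 1, 0]]
theta = [1, 1, 2]
print(run_network([1, 1, 0], w, theta, 1))  # [0, 0, 1]
```

Iterating `step_network` is exactly the "sequence of states" computation: the whole history of the network is determined by the initial state, the weights, and the thresholds.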
What are we to make of the popular notion that our brains are somehow libraries of our thoughts and beliefs? Is it in principle possible that brain scientists might one day know enough about the workings of our brains to be able to ``crack the cerebral code'' and read our minds?

Let's imagine that there is a `dictionary' which can translate from our description (f_{0}(t), …, f_{N}(t)) of the neuronal state of a brain to the beliefs which the mind holds at time t. Denote^{30} these beliefs by B_{0}(t), …, B_{M(t)}(t). These beliefs could be expressed in English (together with mathematical symbols), or in French, or in any formal language. Define some Gödel numbering so that each belief B_{j}(t) is assigned a number and the collection of all beliefs at time t is assigned the number b(t). So, we will be able to decode the number b(t) and obtain all the beliefs B_{0}(t), …, B_{M(t)}(t). Suppose the dictionary is computational in the sense that there is a program P_{d} such that if we initialize the registers to r_{k} = f_{k}(t) then, when we finish executing the program P_{d}, the register r_{0} will contain the Gödel belief number b(t).
Hence, to compute the beliefs held by a brain at time t, initialize the registers to r_{k} = f_{k}(0), k = 0, …, N. Then execute the program P_{l}. When you complete the program you will have r_{k}′ = f_{k}(1) in the registers. Continue these computations until you have obtained f_{0}(t), …, f_{N}(t). Now execute the dictionary program P_{d} with the registers initialized to r_{k} = f_{k}(t). When you complete the program P_{d} you will obtain a number b(t) which encodes all the beliefs held by the brain at time t. The number b(t) could be decoded to yield all the beliefs B_{0}(t), …, B_{M(t)}(t) at time t stated in English. Combining the above steps, we can write a program P so that, initializing the registers to f_{0}(0), …, f_{N}(0), t, then upon completing the program P the Gödel belief number b(t) will be in the register R_{0}.
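The dictionary step can also be made concrete with a toy encoding. The scheme below, which packs a list of belief strings into a single integer and unpacks it again, is an illustrative stand-in for the Gödel belief number b(t); nothing about the actual dictionary P_{d} is assumed.

```python
# Toy Gödel belief number: pack a list of belief strings into one integer
# b(t), and decode it again.  The byte-level scheme is purely illustrative.

def encode_beliefs(beliefs):
    """Pack beliefs B_0(t), ..., B_M(t) into one integer b(t)
    (their UTF-8 bytes, separated by a reserved 0 byte)."""
    data = b"\x00".join(s.encode() for s in beliefs)
    return int.from_bytes(b"\x01" + data, "big")  # 0x01 guards leading zeros

def decode_beliefs(b):
    """Recover the beliefs from the belief number b(t)."""
    data = b.to_bytes((b.bit_length() + 7) // 8, "big")[1:]
    return [chunk.decode() for chunk in data.split(b"\x00")]

beliefs = ["7 is prime", "It will rain tomorrow"]
b_t = encode_beliefs(beliefs)
assert decode_beliefs(b_t) == beliefs
```

The point of the exercise is only that a finite list of sentences and a single (large) number carry exactly the same information, so a program that outputs b(t) outputs, in coded form, every belief held at time t.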
Experience has taught most mathematicians that much that looks solid and satisfactory to one mathematical generation stands a fair chance of dissolving into cobwebs under the steadier scrutiny of the next... Knowledge in any sense of a reasonably common agreement on the fundamentals of mathematics seems to be nonexistent... The bald exhibition of the facts should suffice to establish the one point of human significance, namely, that equally competent experts have disagreed and do now disagree on the simplest aspects of any reasoning which makes the slightest claim, implicit or explicit, to universality, generality, or cogency.

The consistency, correctness, or otherwise of a person's beliefs has nothing whatever to do with the consistency or correctness of the formal system used to deduce that person's beliefs. And what about thoughts in general? Think of something crazy. Something you believe is not, and could not possibly be, true. Should that crazy thing be a part of the deductive system forming a theory of mind? Or is it only the things you believe to be true that should be theorems? Surely all thoughts and other aspects of mind should be treated in the same way. You can think about anything you like; it won't affect the consistency or soundness of the theory of mind.
Suppose a human, whom we call H, has a computational mind (associated with the program P), and that H is made aware of the program P which we are using to compute the properties of his brain. We may use a formal mathematical system F to logically deduce the behavior of P, and hence the behavior, beliefs, and other thoughts of H. We may say that the system F encapsulates all the knowledge and beliefs of H. Penrose argues that if H believes statement X, then since F encapsulates all of H's beliefs, it must be possible to prove X in the system F. Penrose then reasons that H will surely believe that the system F is sound. Consequently, H will believe (by Gödel's theorem) that the formula f is true. But since, again by Gödel's theorem, f is not a theorem of F, it follows that F cannot after all encapsulate all of H's beliefs.^{32} By this contradiction Penrose concludes that H cannot in fact have a computational mind.
Penrose's confusion in this line of reasoning is italicized above. For if H believes X (on day 1) then it is necessary that F deduces that H believes X on day 1. It is not necessary that F deduces X. If on day 2 H changes his mind and believes the negation of X, ¬X, then it is necessary that F deduces that H believes ¬X on day 2. It is not necessary that F deduces ¬X. If on day 3 H goes crazy then it is necessary that F deduces that H is crazy on day 3. It is not necessary that F deduces all sorts of crazy formulas.
In short, although F cannot deduce f, there is no reason why F cannot deduce that H believes f. Gödel's theorem doesn't say anything about what can be proved concerning the state of H's mind!
The essential point is that H's beliefs do not form part of the deductive system F. If H believes two contradictory statements X and Y, these statements are not theorems of F and so it does not follow that F is inconsistent, as it would be if X and Y were theorems.^{33}
Let's explain in detail the distinction between X and H believes X, using our `toy' computational model. The statement X is one of H's beliefs, say B_{k}(t), and as such is encoded along with H's other beliefs in the Gödel belief number b(t). Using the formal system F we can deduce a theorem which states that when the program P is run with registers initialized to r_{k} = f_{k}(0) then when the program is completed r_{0} = b(t). There is a world of difference between the theorem r_{0} = b(t) and the statements B_{0}(t), …, B_{M(t)}(t) encoded in the number b(t). The confusion undoubtedly arose because the mathematical beliefs of H could be expressed in the same language used in the formal system F. No one would think of incorporating a nonmathematical belief, such as `It will rain tomorrow', into the formal system F!
Consider a present-day computer. What sort of expressions might it print out? Consider the formal system F associated with the program it is running. Suppose the formula f is not provable in F. Is there any reason why the formula f cannot be printed out by the computer? No! Again, what comes out of the printer is not a theorem of F.
All this becomes clear when it is realized that what is printed out, or drawn, etc., need not satisfy any grammatical rules, which are required of theorems of F.
When we talk mathematics, we may be discussing a secondary language, built on the primary language truly used by the central nervous system. Thus the outward forms of our mathematics are not absolutely relevant from the point of view of evaluating what the mathematical or logical language truly used by the central nervous system is. However, the above remarks about reliability and logical and arithmetical depth prove that whatever the system is, it cannot fail to differ considerably from what we consciously and explicitly consider as mathematics.

We may interpret this quotation as stating the view that the mathematical procedures and beliefs of a mind are of a different category from the theorems of the formal system F associated with a computational model of the mind at the neuronal level. In short, mathematical beliefs are not theorems of F, which is the point of our criticism of Penrose's argument.
If there could be adequate mechanistic accounts which represented a person by means of a formal system, yet did not correlate beliefs, thoughts or statements with that system's theorems or `outputs', then the argument from Gödel fails. In fact, there is an approach which appears to satisfy this requirement. It deals in terms of the organism's physical states, inputs, and outputs, all at some rather low level of specification (perhaps the neurophysiological) and in possible transitions from state to state through time.... Certainly mechanists will maintain that those tokens of mathematical sentences which Alf produces are produced `mechanically'. However, in maintaining this they need not (and should not) say that the token sentences themselves correspond to theorems of a formal system which adequately represents Alf and his part of the world.
Will computers of the future be intelligent?

Certainly present-day computers aren't intelligent.^{34} We have quoted in section 1 several views that intelligent computers are a likely development in the not too distant future. Here is the vision of the mathematical physicist David Ruelle, which he paints at the beginning of his book [18]:
Supercomputers will some day soon start competing with mathematicians and may well put them forever out of work.... Of course, present-day machines are useful mostly for rather repetitious and somewhat stupid tasks. But there is no reason why they could not become more flexible and versatile, mimicking the intellectual processes of man, with tremendously greater speed and accuracy. In this way, within fifty or a hundred years (or maybe two hundred), not only will computers help mathematicians with their work, but we shall see them take the initiative, introduce new and fruitful definitions, make conjectures, and then obtain proofs of theorems far beyond human intellectual capabilities.
© 1991 by Princeton University Press. Reprinted with permission.
It is never to the point in computer simulation that one's model be indistinguishable from the modelled. Consider, for instance, a good computer simulation of a hurricane, as might be devised by meteorologists. One would not expect to get wet or windblown in its presence. That ludicrous expectation would be akin to a use-mention error, like cowering before the word ``lion''.

And Searle [22] puts it this way:
No one would suppose that we could produce milk and sugar by running a computer simulation of the formal sequences in lactation and photosynthesis, but where the mind is concerned many people are willing to believe in such a miracle because of a deep and abiding dualism: the mind they suppose is a matter of formal processes and is independent of quite specific material causes in the way that milk and sugar are not.

How might we build a computer which does think?
To make the brain, we'll use the URM. Take all the bowls and marbles, and your program P_{l} which defines a good computational model of some particular brain, and sit down inside the robot. Now suppose that when certain bowls have a marble in them (modelling an excited neuron) an action is performed by the robot (a small arm movement, for example). Suppose also that input from the video cameras (or microphones, etc.) leads to certain other bowls having a marble dropped in them (corresponding to the excitation of sensory neurons). You now execute the program P_{l}. Then, during each time step t → t+1, the marbles in the bowls change (modelling the varying excitation states of the neurons in the brain) causing the robot to react to external stimuli, to speak through the speaker, and so on. Of course, you must imagine yourself working faster and faster, and the bowls of marbles very small. In this way, the collection of bowls of marbles with you executing the program P_{l} becomes the brain of the robot.
Among the bowls are `input bowls' whose contents reflect properties of the outside world, and `output bowls' whose contents determine actions by the robot.
The sequence of bowl contents r_{j}(t) at times t = 0, 1, 2, …, for j = 0, …, N corresponds to the sequence of neuron firings of the j^{th} neuron in the brain which is modelled by the program P_{l}. The program P_{l} corresponds to the connectivity of the neural network (through the weights w_{ij}). The demon, executing the program P_{l}, corresponds to the electrochemical forces which induce the temporal development of the neural network (its varying neuronal firings).
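The correspondence just described can be sketched as a sense/think/act control loop. Everything named below (the bowl index ranges, the sensor and actuator callbacks, the 8-bowl example) is invented for this illustration; the update rule is the same threshold rule used for the network model.

```python
# Sketch of the robot's control cycle.  The bowl ranges and the
# read_sensors / drive_actuators callbacks are hypothetical names.

def robot_cycle(f, w, theta, read_sensors, drive_actuators, steps):
    """At each time step: copy sensor readings into the input bowls,
    advance the network one step (the demon executing P_l), and let
    the output bowls drive the robot's actuators."""
    INPUT_BOWLS = range(0, 4)    # bowls filled by cameras, microphones, ...
    OUTPUT_BOWLS = range(4, 8)   # bowls read off by arms, speaker, ...
    for _ in range(steps):
        for j, value in zip(INPUT_BOWLS, read_sensors()):
            f[j] = value
        N = len(f)
        f = [1 if sum(w[j][i] * f[i] for i in range(N)) >= theta[j] else 0
             for j in range(N)]
        drive_actuators([f[j] for j in OUTPUT_BOWLS])
    return f
```

Nothing in the loop knows what the marbles mean: the demon only moves marbles according to P_{l}, yet the robot as a whole reacts to its surroundings.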
Question. You know that you think^{36}; how close does this robot come to doing the same?
Whatever purely formal principles you put into the computer, they will not be sufficient for understanding, since a human will be able to follow the formal principles without understanding anything.... I will argue that in the literal sense the programmed computer understands what the car and the adding machine understand, namely, exactly nothing. The computer understanding is not just (like my understanding of German) partial or incomplete; it is zero.

For `human' in this quote, substitute our `demon', who, we have agreed, does not understand anything except the basic instructions required in the execution of the program P_{l}. Remembering that we said the demon corresponds^{37} to the electrochemical forces responsible for the temporal development of the neural network, we certainly would not ascribe `understanding' to those forces, nor to the demon. But although the demon does not understand anything of the outside world, does it follow that the robot has no understanding?
Imagine the robot in China, and the program P_{l} giving a good computational model of the brain of some Chinese person. The robot will function in China in the same way that the Chinese person does. It will make replies, in Chinese, to questions asked of it by Chinese people. Searle imagines himself taking the part of the demon. He then compares his excellent understanding of English with his total lack of understanding of Chinese, and concludes that the robot likewise could not `understand' Chinese. But again, the demon, or Searle-in-the-robot, is not where understanding is expected to reside. Searle's reference to the understanding of English which Searle-in-the-robot possesses involves a category-mistake, as pointed out by Margaret Boden [1]:
Searle's description of the robot's pseudo-brain (that is, of Searle-in-the-robot) as understanding English involves a category-mistake comparable to treating the brain as the bearer (as opposed to the causal basis) of intelligence.
The problem with the brain simulator is that it is simulating the wrong things about the brain. As long as it simulates only the formal structure of the sequence of neurone firings at the synapses, it won't have simulated what matters about the brain, namely its causal properties, its ability to produce intentional states.

But recall Turing's test based on behavior. The physicist Niels Bohr echoes Turing's ideas this way [10]:
I am quite prepared to talk of the spiritual life of an electronic computer; to say that it is considering or that it is in a bad mood. What really matters is the unambiguous description of its behaviour, which is what we observe. The question as to whether the machine really feels, or whether it merely looks as though it did, is absolutely as meaningless as to ask whether light is ``in reality'' waves or particles. We must never forget that ``reality'' too is a human word just like ``wave'' or ``consciousness.'' Our task is to learn to use these words correctly, that is, unambiguously and consistently.

Be that as it may, the `feel' of the functioning brain could very well depend on the detailed structure of the individual neurons, much as the tone of a violin depends on the quality of wood used and the manufacturing techniques employed. So a computer's silicon brain might `feel' different from our biological brain, but nevertheless, it would be a thinking computer.
^{1} For more details see Intel's website
http://www.intel.com/pressroom/archive/releases/cn121796.htm
and Sandia's website
http://www.sandia.gov/LabNews/LN062097/teraflops_story.html
^{2} In addition to Putnam's review, discussed here, an online version of Solomon Feferman's review of Shadows of the Mind can be found at
http://psyche.cs.monash.edu.au/v2/psyche207feferman.html
You can find various other criticisms of Penrose's approach, together with replies by Penrose, at
http://psyche.cs.monash.edu.au/v2/psyche223penrose.html
A large bibliography on the subject of `mind and metamathematics' can be found at
http://ling.ucsc.edu/~chalmers/biblio4.html
which is Part 4: Philosophy of Artificial Intelligence of the Contemporary Philosophy of Mind: An Annotated Bibliography, at
http://ling.ucsc.edu/~chalmers/biblio.html
^{3} Putnam's own view on the nature of human mentality is indicated by the following quote from [17]:
Why should all truths, even all empirical truths, be discoverable by probabilistic automata (which is what I suspect we are) using a finite amount of observational material?
^{4} All quotations from Turing's article in this chapter are reprinted by permission of Oxford University Press.
^{5} In 1990 Dr. Hugh Loebner pledged a Grand Prize of $100,000 for the first computer whose responses were indistinguishable from a human's. Each year a prize of $2000 is awarded to the most human-like computer; this year it was won on May 16, 1997 by the program Converse, entered by the University of Sheffield. The homepage of the Loebner prize is
http://acm.org/~loebner/loebnerprize.html
^{6} On May 11, 1997 the IBM computer Deep Blue won a six game match against the world chess champion Garry Kasparov. For more details see
http://www.chess.ibm.com/home/html/b.html
^{7} The difficulty in defining each and every concept one uses is nicely put [2] by Ludwig Boltzmann (1844-1906); reprinted by kind permission of Kluwer Academic Publishers:
Let me begin by defining my position by way of a true story. While I was still at high school my brother (long since dead) tried often and in vain to convince me how absurd was my ideal of a philosophy that clearly defined each concept at the time of introducing it. At last he succeeded as follows: during a lesson a certain philosophic work (I think by Hume) had been highly recommended to us for its outstanding consistency. At once I asked for it at the library, where my brother accompanied me. The book was available only in the original English. I was taken aback, since I knew not a word of English, but my brother objected at once: ``if the work fulfils your expectations, the language surely cannot matter, for then each word must in any case be clearly defined before it is used.''
^{8} To determine if the number of marbles in two bowls is the same, remove one marble from each bowl, then another marble from each bowl, and so on. If both bowls become empty at the same step then they contained the same number of marbles. If only one bowl is empty then they contained different numbers. (Then replace the marbles in their respective bowls.)
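The footnote's pairwise-removal comparison can be written out directly. The code below works on marble counts rather than physical bowls, and (as the footnote requires) leaves the original counts untouched by working on copies.

```python
# The marble comparison of the footnote: remove one marble from each bowl
# at a time until at least one bowl is empty, without ever counting.

def same_count(bowl_a, bowl_b):
    """True iff the two bowls hold the same number of marbles.
    Operates on local copies, so the original bowls keep their marbles."""
    a, b = bowl_a, bowl_b
    while a > 0 and b > 0:   # remove one marble from each bowl
        a -= 1
        b -= 1
    return a == 0 and b == 0  # both emptied at the same step?

assert same_count(3, 3)
assert not same_count(2, 5)
```

Note that the procedure never needs the concept of number: it is pure one-to-one matching, which is why it suits the URM's marbles-in-bowls picture.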
^{9} We do not allow instructions of the sort, for example, S(r_{1}), which adds a marble to the bowl with number equal to the content of register R_{1}. In fact, although we are using numbers as names for the bowls, any symbols could be used for these names. They do not have the same quantitative significance as the content of a bowl. We also do not allow instructions of the sort J(1,3,r_{0}) which may cause a jump to the instruction with number equal to the content of register R_{0}. Again, an instruction number and a number of marbles are two different types of things. One is an ordinal number, representing an ordering (first, second, etc.), the other a cardinal number, representing an amount.
^{10} The URM was invented by J. C. Shepherdson and H. E. Sturgis [23]. A nice discussion of the URM is given by N. J. Cutland [7].
^{11} The working registers are R_{0},R_{1},R_{2}; the others remain unchanged and do not influence the computation.
^{12} Used explicitly as part of the jump instructions and implicitly as part of the transfer instructions.
^{13} The case n = m is dealt with by the first application of J(1,2,7).
^{14} See Cutland [7] for further discussion of this and other assignments.
^{15} Here is a program to add l to any number (which is placed in register R_{0}, all other registers initialized to 0): S(0), S(0), …, S(0) [l copies].
^{16} Here is a program to subtract 1 from any number m (which is placed in register R_{0}, all other registers initialized to 0): S(1), J(0,1,5), S(1), S(2), J(0,0,1), T(2,0), Z(1), Z(2). If m ≥ 1, when you complete the execution of this program there will be m−1 in register R_{0}, all other registers containing 0. If m = 0, this program will not halt; i.e. you will never complete the program.
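A tiny interpreter makes this footnote's program executable. The tuple encoding of instructions, the step budget, and the convention that jump targets count instructions from 0 are choices made for this sketch; with 0-based targets the listed program behaves exactly as the footnote claims.

```python
# An interpreter for the URM instructions Z, S, T, J of the text, with
# jump targets numbering instructions from 0, running the subtract-1
# program of this footnote.

def run_urm(program, registers, max_steps=10_000):
    """Execute a URM program given as (opcode, args...) tuples; returns
    the registers on halting, or None if the step budget runs out."""
    r = dict(registers)
    pc = 0
    for _ in range(max_steps):
        if pc >= len(program):
            return r                       # halted: ran off the end
        op, *args = program[pc]
        if op == "Z":
            r[args[0]] = 0                 # Z(n): empty bowl n
        elif op == "S":
            r[args[0]] = r.get(args[0], 0) + 1  # S(n): add one marble
        elif op == "T":
            r[args[1]] = r.get(args[0], 0)      # T(m, n): copy R_m to R_n
        elif op == "J":
            m, n, q = args                 # J(m, n, q): jump if R_m = R_n
            if r.get(m, 0) == r.get(n, 0):
                pc = q
                continue
        pc += 1
    return None                           # did not halt within max_steps

SUB1 = [("S", 1), ("J", 0, 1, 5), ("S", 1), ("S", 2),
        ("J", 0, 0, 1), ("T", 2, 0), ("Z", 1), ("Z", 2)]

print(run_urm(SUB1, {0: 5})[0])   # 4
print(run_urm(SUB1, {0: 0}))      # None: the program never halts for m = 0
```

The step budget stands in for the footnote's "you will never complete the program": the interpreter cannot detect non-halting, it can only give up.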
^{17} Even if you could carry out your reasoning without language, you could not communicate to others how you arrived at your conclusions unless a common language is agreed.
^{18} What rules of deduction should be included in the formal system? Is there only one correct logical system? Consider other areas of mathematics. What is the correct geometrical theory? The classical geometry of Euclid was considered the only correct geometry for thousands of years, but now we consider the correct geometry to be an empirical question. The curved spacetime of Einstein's general relativity theory is considered a better mathematical model than the flat Euclidean geometry. What is the correct algebraic theory? The algebraic structure of a theory of observable quantities had been supposed to satisfy xy = yx (commutativity) and x(y+z) = xy+xz (distributivity). But now it is considered that quantum theory, involving noncommutative observables, is a better mathematical model of atomic phenomena. Quantum theory still keeps distributivity, but if the empirical data require it, then a further modification of the algebraic structure of the theory could be made. What is the correct logic? Perhaps there is no one correct logic. It may also depend on the properties of the system being modelled. We have empirical geometry, empirical algebra, why not empirical logic?
^{19} We could use a method similar to the one used to encode programs, since programs are finite sequences of instructions and formulas are finite sequences of symbols, and proofs are finite sequences of formulas. So first give numbers to each symbol in the alphabet of L. If f is the formula a_{0} … a_{b}, where m_{j} is the Gödel number of the symbol a_{j}, then f is assigned the number h(m_{0}, …, m_{b}), where the function h is given in (4). Similarly, if f_{0}, …, f_{b} is a finite sequence of formulas where m_{j} is the Gödel number of the formula f_{j}, then the sequence of formulas is assigned the number h(m_{0}, …, m_{b}).
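This coding can be made concrete. The function h of equation (4) is not reproduced here, so the sketch below substitutes the standard prime-power coding, which has the one property the footnote needs: distinct finite sequences receive distinct numbers. The toy alphabet and formulas are invented for the illustration.

```python
# A stand-in for the sequence-coding function h of equation (4):
# h(m_0, ..., m_b) = 2^(m_0+1) * 3^(m_1+1) * 5^(m_2+1) * ...
# By unique factorization, distinct sequences get distinct code numbers.

def h(ms):
    """Encode a finite sequence of numbers as a single number."""
    primes = []
    candidate = 2
    while len(primes) < len(ms):    # find the first len(ms) primes
        if all(candidate % p != 0 for p in primes):
            primes.append(candidate)
        candidate += 1
    code = 1
    for p, m in zip(primes, ms):
        code *= p ** (m + 1)        # exponent m+1, so 0 is representable
    return code

# Gödel-number a formula by first numbering the symbols of the alphabet.
SYMBOLS = {"0": 1, "S": 2, "=": 3, "(": 4, ")": 5}   # a toy alphabet
formula_number = h([SYMBOLS[c] for c in "S(0)=S(0)"])

# A sequence of formulas is coded the same way, from the formulas' numbers.
proof_number = h([h([SYMBOLS[c] for c in f]) for f in ["0=0", "S(0)=S(0)"]])
```

Because the exponents are m+1 rather than m, the length of the sequence and every entry can be read back off from the factorization of the code number.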
^{20} Different interpretations are often given to the same English sentence by different people. They may think they have a disagreement as to which statements are true, but actually they may simply have assigned a different meaning to the same words! This is called a problem of semantics.
^{21} Consequently, if the formal system is sound and if there is a formula f which is true in one interpretation and false in another, then neither f nor its negation can be a theorem of the formal system.
^{22} Someone else may interpret the formulas in a completely different way, as statements about some other activity or physical process.
^{23} We suppose that when P_{m} halts, all its working registers other than R_{0} contain 0. Similarly for P_{M}. This can always be arranged. In Figure we take the working registers of P_{M} to be R_{k} for k ≤ N_{M}.
^{24} In our interpretation f_{j} expresses the statement S_{j}, that the j^{th} program does not produce the number j. However, in another interpretation f_{j} no longer has this meaning. The relation between f_{j} and S_{j} is lost.
^{25} The element of self-reference in the definition is inevitable. The oracle must be able to refer to itself or to something dependent on the oracle. As we have seen in the case of formal systems, a formula can be written down which may be interpreted as stating that the selfsame formula cannot be proved by the formal system.
^{26} Recall the Feynman story above.
^{27} On the other hand, if P_{n} does not produce n but a mathematician claims that it does (without any indication as to what input would produce n), it may not be possible to prove him wrong. There may be no way to determine the truth or falsity of S_{n} and the law of the excluded middle (every statement is either true or false) may not be applicable.
^{28} Positive values of w_{ji} correspond to excitation and negative values to inhibition. Given rational values for the weights, we can multiply through by a sufficiently large integer to rewrite the equations with integer-valued weights.
^{29} Negative integers need not enter directly into the computation. Use may be made, for example, of the difference function Dif(n, m) = n − m if n ≥ m, and Dif(n, m) = 0 if n < m.
^{30} We shall naturally suppose that a (finite) brain can have only finitely many beliefs at any one time.
^{31} What could be more computational than that?
^{32} It may be that the formal system F does not satisfy the requirements to deduce Gödel's theorem for F. But the point of the argument is that there are statements which H believes to be true but which are not theorems of the formal system F. This will undoubtedly be the case for mathematical statements of a sort which cannot even be expressed in the particular formal system F, and even more so for nonmathematical beliefs.
^{33} If H's beliefs are not theorems of F, what are they? The beliefs are encoded into the Gödel belief number b(t), which is a term of the formal language.
^{34} They only think they are.
^{35} To the reader with such a brain: `Don't lose your marbles!'
^{36} Well, maybe.
^{37} According to the construction of present-day computers, we could also say that the demon corresponds to the CPU (central processing unit) of the computer, which, while executing a program, manipulates the numbers (on/off transistor patterns) stored in its registers and RAM memory locations.