By: Richard Oxenberg

I. Two Positions

The strong AI advocate who wants to defend the position that the human mind is like a computer often wavers between two points of view that are rarely distinguished clearly, though they are actually quite distinct. Let us call them ‘position A’ and ‘position B.’ Position A argues that human minds are like computers because both are ‘intelligent.’ Position B argues that human minds are like computers because both lack intelligence. These positions are often confused because of an ambiguity, or vagueness, in our understanding of what intelligence itself is. But if we are to consider this question in a way that makes it relevant to an investigation into the nature of the human mind, we must define intelligence so as to capture what we mean when we say that a human being is intelligent. I propose the following definition: a being is intelligent to the extent that it is able to knowingly make decisions for the fulfillment of a purpose. This definition is framed, again, to capture what we mean when we say that a human being has ‘intelligence.’

Let us, then, consider the two AI positions with this in mind.

II. Position A

Position A is based on the notion that computers are (or can in theory become) intelligent. My claim is that this notion is based on a failure to understand, or really consider, how computers work. Computers, it is true, can be made to mimic intelligence, and very sophisticated computer programs can mimic intelligence in very sophisticated ways, but this does not make them intelligent.

Let’s consider a sophisticated computer application: Imagine a program X that is written in such a manner as to generate another program Y, which will then execute immediately upon the termination of program X. Suppose, further, that program Y itself produces a new iteration of program X, with some slight modification, and that this new program X executes immediately upon the termination of program Y, producing, in its turn, yet another iteration of program Y, with a slight modification, and so on. We would now have produced a self-modifying application, X-Y. Let us add still more complexity. Let us suppose that program X is able to produce multiple versions of program Y based upon input that it receives while running. Let us suppose the same thing about program Y. Now we have an application X-Y that not only modifies itself, but ‘learns’; that is, it modifies itself in accordance with its ‘experiences’ (i.e., its input). Of course, if we design this with enough forethought and intelligence, we might even produce an application that learns ‘something,’ that is, becomes progressively better at doing something based upon its ‘experiences.’ Now that seems very much like ‘intelligence.’
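
To make the structure concrete, here is a minimal sketch, in Python, of what such an X-Y application might look like. The function names, the alternation scheme, and the single ‘weight’ nudged by each input are illustrative assumptions only, standing in for whatever internal state a real application would revise:

    import random

    # Illustrative sketch only: program X generates a slightly modified program Y,
    # program Y generates a slightly modified program X, and each modification is
    # driven by the input the running program receives. The single 'weight' stands
    # in for whatever state a real application would modify.

    def make_x(weight):
        def x(observation):
            # X nudges its state toward the observation, then generates Y.
            return make_y(weight + 0.1 * (observation - weight))
        x.weight = weight
        return x

    def make_y(weight):
        def y(observation):
            # Y does the same, generating the next iteration of X.
            return make_x(weight + 0.1 * (observation - weight))
        y.weight = weight
        return y

    # Run the X-Y application on a stream of 'experiences' (random input).
    program = make_x(0.0)
    for _ in range(50):
        program = program(random.gauss(5.0, 1.0))

    # The state has drifted toward the mean of its input: the application has
    # 'learned something', yet every step was fixed in advance by this code.
    print(round(program.weight, 2))

The application ‘improves with experience,’ but only in the sense that its designer arranged, ahead of time, for its state to be revised in a particular way by each input.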

But appearances can be deceiving. Let us first of all notice that application X-Y is itself the product of human intelligence. However much it may be able to ‘teach itself’ and ‘modify itself,’ it is in fact not the product of itself but the product of considerable human thought. Indeed, the more intelligent application X-Y seems, the more human intelligence we can be sure went into its design. Such an application would almost certainly not be produced all at once. First a programmer would create a rather simple program X, which would generate a rather simple program Y, and then, over time, she would add layers of complexity until application X-Y became a robust and complex application. One can easily imagine that at some point application X-Y might become so complex that its creator would no longer be able to predict what it will do next. This, however, would not be an indication that application X-Y had suddenly become intelligent, but simply an indication that its creator had reached the limits of her own intelligence’s ability to foresee all the implications of what she had created.

Indeed, we know that application X-Y would not have achieved intelligence because we know how it works. What produces the illusion of intelligence in computers can be expressed in a single word: branching. It is the ability of computers to branch, i.e., to execute different operations depending upon variations in input, that creates for the observer the impression that the computer is making a ‘decision’ based on its ‘experience.’ In fact it is doing nothing of the kind. The branching operation is coded by the programmer, who provides for various operations on the basis of variations in input that the programmer has (more or less abstractly) anticipated. The computer never makes a ‘decision’; it simply (dumbly) does the next thing it is programmed to do on the basis of the electronic situation it is in at any given moment. That situation can be made to vary over different iterations of the program (or routine) through variations in input; nevertheless, the computer does not make a ‘decision.’ It simply executes the next instruction it is caused to execute based upon its design and programming.
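
A trivial example makes plain how unremarkable branching is. The thermostat-style thresholds below are, of course, mere placeholders for whatever conditions a programmer might anticipate:

    # Illustrative sketch only: the 'decision' is nothing but a conditional the
    # programmer wrote in advance; the machine executes whichever branch its
    # input happens to trigger.

    def respond(temperature_reading):
        if temperature_reading > 30:
            return "turn on cooling"
        elif temperature_reading < 10:
            return "turn on heating"
        else:
            return "do nothing"

    print(respond(35))  # 'turn on cooling' -- looks like a decision, but is
                        # just the branch the input happens to trigger

However many such branches we pile up, and however indirectly the conditions are specified, the principle remains the same: the ‘choice’ was made by the programmer, not the machine.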

To realize this is to realize that AI position A is unsound. The computer has no real intelligence in the sense in which we have defined the term: it does not ‘decide’ for a ‘purpose.’ Nor is there any reason to suppose that it can acquire such intelligence by becoming more complex. Its complexity is strictly additive. It becomes more complicated as we program it to do more things under more conditions. Since we know why it does what it does under these conditions, there is no reason for us to be deluded into thinking that at some point it acquires something new called ‘intelligence.’

But this leads us to AI position B.

III. Position B

Position B states that the computer is like the human mind because both actually lack intelligence. The argument for position B generally goes as follows: The human mind is reducible to the human brain. The human brain, in turn, may be thought of as a very sophisticated computer executing a super-complex version of self-modifying application X-Y. Who has written application X-Y for the human brain? There is no ‘who.’ Application X-Y is generated from human genes, which are the products of billions of years of genetic evolution, which blindly follows the Darwinian laws of natural selection.

In order to discuss position B effectively we need to consider something we have not yet considered: epistemology. The plausibility of position B rests upon a scientific-empiricist (positivist) epistemology. Basically, the scientific empiricist considers only what can be known objectively, so she looks at the behavior of other human beings and asks herself whether their behavior could perhaps be explained as the product of a very sophisticated biological version of self-modifying application X-Y; and she answers ‘yes.’

Philosophers, however (at least some of them), have long been aware of the limitations of this epistemology. Although it is extremely useful in allowing us to do empirical science, it has a major gap when used to generate theories about the fundamental nature of reality. The gap is this: it is unable to take into account subjective experience. It is simply ‘blind’ to such experience. But we know, we know because we are ourselves ‘subjects,’ that such experience is real; hence it must be taken into account in any full treatment of reality, and certainly in any full treatment of the human mind.

So, taking this into account, let us ask whether the decision making process of a human being is really anything like the branching process of application X-Y. When I make a decision as to what I am going to do I generally do it in some such manner as this: I envision alternate possible futures, think about which actions will lead to which futures, and then choose my action on the basis of the future I want to actualize. Does application X-Y do anything of the kind? Well, we know for a fact that it doesn’t. Application X-Y simply does, at any given moment, the next thing it is programmed to do based upon the electronic conditions prevalent at that moment. It does not envision possible futures, it does not consider alternative actions, and it does not choose on the basis of its desire. So, in fact, there is a profound disanalogy between the operations of the human mind and the operations of application X-Y. Since position B is itself based upon an argument from analogy, this profound disanalogy defeats position B.

Now at this point in the argument the ‘strong AI’ advocate often jumps in to say something like: “Well, maybe application X-Y, if we could get into its ‘head,’ would be seen to be doing something like what the human mind does.” Note that this is, in effect, a return to refuted position A. It is also an ad hoc argument from ignorance. It is an argument from ignorance insofar as it states that, since we don’t know what the subjective state of application X-Y is, we might as well suppose it to be the same as the subjective experience of the human mind. But this, of course, is a silly argument. It offers no grounds for this supposition, and we have good reasons to reject it: everything that application X-Y does can be accounted for on the basis of how we know it operates, which is quite different from how we know, on the basis of our own subjective experience, that the human mind operates. It is also an ad hoc argument: it is proffered simply to save position B (which, ironically, it tries to do through a return to position A). There is no good reason, then, to take this objection seriously.

So, to summarize: position A fails because we know that computers in fact do not operate with ‘intelligence.’ We know this because we understand the principles of their operation, which are strictly causal. Position B fails because we know that the human mind does act with intelligence. We know this because we have subjective access to its operation, which we recognize to follow the definition of intelligence we first posited (and which, of course, we derived from our subjective experience of the mind’s operation to begin with).

IV. Position C

This leads, then, to the following conclusion: Computers are not like the human mind, because the former are not intelligent, and the latter is.


Richard Oxenberg received his Ph.D. in Philosophy from Emory University in 2002, with a concentration in Ethics and Philosophy of Religion. He currently teaches at Endicott College in Beverly, MA.

