By: Richard Oxenberg
I. Two Positions
The strong AI advocate who wants to defend the position that the human mind is like a computer often waffles between two points of view that are never clearly distinguished, though actually quite distinct. Let us call them ‘position A’ and ‘position B.’ Position A argues that human minds are like computers because they are both ‘intelligent.’ Position B argues that human minds are like computers because they both lack intelligence. These positions are often confused because of an ambiguity, or vagueness, in the understanding of what intelligence itself is. But if we are to consider this question in a way that makes it relevant to an investigation into the nature of the human mind, then we must define intelligence in a way that captures what we mean when we say that a human being is intelligent. I propose the following definition: a being is intelligent to the extent that it is able to knowingly make decisions for the fulfillment of a purpose. Again, this definition of intelligence is based on the effort to capture what we mean when we say that a human being has ‘intelligence.’
Let us, then, consider the two AI positions with this in mind.
II. Position A
Position A is based on the notion that computers are (or can in theory become) intelligent. My claim is that this notion is based on a failure to understand, or really consider, how computers work. Computers, it is true, can be made to mimic intelligence, and very sophisticated computer programs can mimic intelligence in very sophisticated ways, but this does not make them intelligent.
Let’s consider a sophisticated computer application: Imagine a program X that is written in such a manner as to generate another program Y, which will then execute immediately upon the termination of program X. Suppose, further, that program Y itself produces a new iteration of program X, with some slight modification, and that this new program X executes immediately upon the termination of program Y, producing, in its turn, yet another iteration of program Y, with a slight modification, and so on. We would now have produced a self-modifying application, X-Y. Let us add to the complexity even more. Let us suppose that program X is able to produce multiple versions of program Y based upon input that it receives while running. Let us suppose the same thing about program Y. Now we have an application X-Y that not only modifies itself, but ‘learns’; that is, it modifies itself in accordance with its ‘experiences’ (i.e., its input). Of course, if we design this with enough forethought and intelligence, we might even produce an application that learns ‘something,’ that is, becomes progressively better at doing something based upon its ‘experiences.’ Now that seems very much like ‘intelligence.’
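For readers who want to see the structure in code, here is a minimal sketch in Python of such an X-Y application. The parameterized ‘programs,’ the update rules, and the numbers are illustrative assumptions rather than an actual design; the point is only the shape of the thing: each program reads some input and then generates a slightly modified successor.

```python
# Minimal sketch of the hypothetical self-modifying application X-Y.
# "Program X" and "program Y" are simplified here to parameterized rules
# rather than literal generated source code; all details are illustrative.

def make_x(bias):
    """Program X: reads input, then generates the next program Y."""
    def x(value):
        adjusted = value + bias          # X's behavior depends on its parameter
        return make_y(adjusted)          # X's last act: produce the next Y
    return x

def make_y(bias):
    """Program Y: reads input, then generates a slightly modified program X."""
    def y(value):
        adjusted = (value + bias) * 0.5  # 'learning': parameters shift with input
        return make_x(adjusted)
    return y

# Run a few X -> Y -> X -> ... iterations on a stream of 'experiences' (inputs).
program = make_x(bias=0.0)
for observation in [1.0, 2.0, 3.0, 4.0]:
    program = program(observation)       # each run yields the next program
```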
But appearances can be deceiving. Let us first of all notice that application X-Y is itself the product of human intelligence. However much it may be able to ‘teach itself’ and ‘modify itself,’ it is in fact not the product of itself but the product of considerable human thought. Indeed, the more intelligent application X-Y seems, the more human intelligence we can be sure went into its design. Such an application would almost certainly not be produced all at once. First a programmer would create a rather simple program X, which would generate a rather simple program Y, and then, over time, she would add layers of complexity until application X-Y became a robust and complex application. One can easily imagine that at some point application X-Y might become so complex that its creator would no longer be able to predict what it would do next. This, however, would not be an indication that application X-Y had suddenly become intelligent, but simply an indication that its creator had reached the limits of her own intelligence’s ability to foresee all the implications of what she has created.
Indeed, we know that application X-Y would not have achieved intelligence because we know how it works. What produces the illusion of intelligence in computers can be expressed in a single word: branching. It is the ability of computers to branch, i.e., to execute different operations depending upon variations in input, that creates for the observer the impression that the computer is making a ‘decision’ based on its ‘experience.’ In fact it is doing nothing of the kind. The branching operation is coded by the programmer, who provides for various operations on the basis of variations in input anticipated by the programmer (either more or less abstractly). The computer never makes a ‘decision,’ it simply (dumbly) does the next thing it is programmed to do on the basis of the electronic situation it is in at any given moment. That situation can be made to vary over different iterations of the program (or routine) through variations in input; nevertheless, the computer does not make a ‘decision.’ It simply executes the next instruction it is caused to execute based upon its design and programming.
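To make the point concrete, here is a trivial branching routine in Python; the scenario and thresholds are invented for illustration. Whatever ‘decision’ the routine appears to make was written out in advance by the programmer as a conditional.

```python
# A minimal illustration of branching: the apparent 'decision' is a
# conditional written in advance by the programmer.
def respond(temperature):
    if temperature > 30:          # branch anticipated by the programmer
        return "turn on the fan"
    elif temperature < 10:
        return "turn on the heater"
    else:
        return "do nothing"

print(respond(35))   # "turn on the fan" -- no deliberation, just the coded branch
```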
To realize this is to realize that AI position A is unsound. The computer has no real intelligence in the sense in which we have defined the term: It does not ‘decide’ for a ‘purpose.’ Nor is there any reason to suppose that it can acquire such intelligence by becoming more complex. Its complexity is strictly additive. It becomes more complicated as we program it to do more things under more conditions. Since we know why it does what it does under these conditions, there is no reason for us to be deluded into thinking that at some point it acquires something new called ‘intelligence.’
But this leads us to AI position B.
III. Position B
Position B states that the computer is like the human mind because both actually lack intelligence. The argument for position B generally goes as follows: The human mind is reducible to the human brain. The human brain, in turn, may be thought of as a very sophisticated computer executing a super-complex version of self-modifying application X-Y. Who has written application X-Y for the human brain? There is no ‘who.’ Application X-Y is generated from human genes, which are the products of billions of years of genetic evolution, which blindly follows the Darwinian laws of natural selection.
In order to discuss position B effectively we need to consider something we have not yet considered: epistemology. The plausibility of position B is based upon a scientific-empiricist (positivist) epistemology. Basically, the scientific empiricist considers only what can be known objectively, so she looks at the behavior of other human beings and asks herself whether it is perhaps possible that their behavior can be explained as the product of a very sophisticated, biological, version of self-modifying application X-Y; and she answers ‘yes.’
Philosophers, however, (at least some of them) have long been aware of the limitations of this epistemology. Although it is extremely useful in allowing us to do empirical science, it has a major gap when used to generate theories as to the fundamental nature of reality. The gap is this: it is unable to take into account subjective experience. It is simply ‘blind’ to such experience. But we know, we know because we are ourselves ‘subjects,’ that such experience is real; hence it must be taken into account in any full treatment of reality; and certainly in any full treatment of the human mind.
So, taking this into account, let us ask whether the decision making process of a human being is really anything like the branching process of application X-Y. When I make a decision as to what I am going to do I generally do it in some such manner as this: I envision alternate possible futures, think about which actions will lead to which futures, and then choose my action on the basis of the future I want to actualize. Does application X-Y do anything of the kind? Well, we know for a fact that it doesn’t. Application X-Y simply does, at any given moment, the next thing it is programmed to do based upon the electronic conditions prevalent at that moment. It does not envision possible futures, it does not consider alternative actions, and it does not choose on the basis of its desire. So, in fact, there is a profound disanalogy between the operations of the human mind and the operations of application X-Y. Since position B is itself based upon an argument from analogy, this profound disanalogy defeats position B.
Now at this point in the argument the ‘strong AI’ advocate often jumps in to say something like: “well maybe application X-Y, if we could get into its ‘head,’ would be seen to be doing something like the human mind.” Note that this is, in effect, a return to refuted position A. It is also an ad hoc argument from ignorance. It is an argument from ignorance insofar as it states that, since we don’t know what the subjective state of application X-Y is, we might as well suppose it to be the same as the subjective experience of the human mind. But this, of course, is a silly argument. It offers no grounds for this supposition, and, of course, we have good reasons to reject it. Everything that application X-Y does can be accounted for on the basis of how we know it operates, which is other than how we know the human mind operates on the basis of our own subjective experience. It is also an ad hoc argument. It is proffered simply to save position B (which, ironically, it tries to do through a return to position A). There is no good reason, then, to take this objection seriously.
So, to summarize: position A fails because we know that computers in fact do not operate with ‘intelligence.’ We know this because we understand the principles of their operation, which are strictly causal. Position B fails because we know that the human mind does act with intelligence. We know this because we have subjective access to its operation, which we recognize to follow the definition of intelligence we first posited (and which, of course, we derived from our subjective experience of the mind’s operation to begin with).
IV. Position C
This leads, then, to the following conclusion: Computers are not like the human mind, because the former are not intelligent, and the latter is.
Richard Oxenberg received his Ph.D. in Philosophy from Emory University in 2002, with a concentration in Ethics and Philosophy of Religion. He currently teaches at Endicott College, in Beverly MA.
The author demonstrated a complete lack of understanding of what machine learning is and of current findings in neuroscience, and also managed to contradict his own logic while trying to attack his own strawman. Bravo!
Can you please elaborate? I’m interested in getting a better understanding and am wondering if you could (a) provide a source that clearly and simply explains what machine learning is, (b) expand on what these “current findings in neuroscience” might be (source or study?), and (c) explicitly identify where the author’s argument contradicts itself.
Thanks
I agree, and it is not only current findings in neuroscience: the basic structure of how our brain works is not understood by him, leading him to make erroneous statements and implications.
“When I make a decision as to what I am going to do I generally do it in some such manner as this: I envision alternate possible futures, think about which actions will lead to which futures, and then choose my action on the basis of the future I want to actualize. Does application X-Y do anything of the kind? Well, we know for a fact that it doesn’t. Application X-Y simply does, at any given moment, the next thing it is programmed to do based upon the electronic conditions prevalent at that moment. It does not envision possible futures, it does not consider alternative actions, and it does not choose on the basis of its desire. ”
In a certain sense people are just following the cultural and biological programming they are made of. All decisions have a reason for why they were made. This reason was by definition why that choice was made instead of any other. We determine value from a societal or biological perspective and act on what will lead us to that value. We may confuse ourselves, because we are very complex, and believe we are going after one goal when really it is another we are after; but nonetheless, all actions are done because of a pull from some goal.
It is not hard to imagine a process where a computer looks at an array of decisions and, based on which one gets the process to its valued goal, executes that decision. Chess AI does something of the sort (sketched below). It can change direction based on changing environments.
Maybe your X-Y computer can’t change course at the level of complexity you imagine it at, but don’t believe your model is exhaustive.
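Roughly, the kind of process meant here (the toy goal, actions, and scoring function below are invented for illustration): the program scores each candidate action against its valued goal, executes the best one, and re-evaluates as the state changes.

```python
# Sketch of goal-directed action selection: score candidate actions
# against a valued goal and execute the best one. The 'game', the goal,
# and the scoring are made up purely for illustration.
def evaluate(state, action, goal):
    """Higher score = this action moves the state closer to the goal."""
    return -abs((state + action) - goal)

def choose_action(state, actions, goal):
    return max(actions, key=lambda a: evaluate(state, a, goal))

state, goal = 0, 10
actions = [-1, 1, 2, 5]
while state != goal:
    best = choose_action(state, actions, goal)   # re-evaluates as the state changes
    state += best
```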
“Computers are not like the human mind, because the former are not intelligent, and the latter is.”
Yet.
Your definition of intelligence is conflated with consciousness. Your subjective knowledge of your decision making process is not itself a sign of intelligence. You can be consciously aware of things that have no connection to your brain’s decision making process (see Gazzaniga’s split-brain experiments), and in neuroscience it’s reasonably well established that your conscious experiences are for the most part generated *after* unconscious processes reach decisions. “Intelligence”, having knowledge of the world and being able to act on that knowledge toward a goal, probably has nothing at all to do with consciousness. Once you correctly define intelligence as being potentially unconscious, you can do away with your concerns with argument B; and once you recognize that, you will also see that machine learning *algorithms* (go read about q-learning) can act in an intelligent manner without any human intervention. Even the q-learning algorithm is discoverable by random mutation over time (genetic algorithm machine learning), which takes the human out of the system entirely. Your anthropocentric argument for intelligence is, in short, outdated.
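For reference, here is a minimal tabular Q-learning sketch on a made-up five-state chain (the environment, rewards, and hyperparameters are illustrative assumptions, not anything from the article): the agent starts with no knowledge of which moves are good and, purely from its own trial and error, converges on moving right toward the rewarded state.

```python
import random

# Minimal tabular Q-learning on a toy 5-state chain: the agent starts at
# state 0 and gets reward 1.0 only for reaching state 4. All details here
# (environment, rewards, hyperparameters) are illustrative assumptions.
N_STATES, ACTIONS = 5, (-1, +1)               # step left or right
alpha, gamma, epsilon = 0.5, 0.9, 0.1
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
rng = random.Random(0)

def greedy(s):
    best = max(Q[(s, a)] for a in ACTIONS)
    return rng.choice([a for a in ACTIONS if Q[(s, a)] == best])

for episode in range(200):
    s = 0
    while s != N_STATES - 1:
        a = rng.choice(ACTIONS) if rng.random() < epsilon else greedy(s)
        s_next = min(max(s + a, 0), N_STATES - 1)
        reward = 1.0 if s_next == N_STATES - 1 else 0.0
        # Q-learning update: no human specifies which moves are good
        best_next = max(Q[(s_next, a2)] for a2 in ACTIONS)
        Q[(s, a)] += alpha * (reward + gamma * best_next - Q[(s, a)])
        s = s_next

# After training, the greedy policy is to move right (+1) toward the reward.
print({s: greedy(s) for s in range(N_STATES)})
```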
Well, you seem to have forgotten what your professors told you when you took your first programming course.
“You cannot program a computer to solve a problem that you do not know how to solve by yourself”.
In the end, machine learning algorithms are inherently deterministic. The computer switches its state based on the inputs received, a cost function that is intended to reflect how far it is from its goal, and, ironically, a random component, which, if you truly understand CS, you know is not so random after all. All this combined gives an impression of intelligence; however, anything the program does is exactly what the programmer intended it to do. In the end, all its changes of state can be anticipated by any human with enough spare time.
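A small sketch of that determinism (the toy cost function, update rule, and seed are invented for illustration): with the same seed and the same inputs, even the ‘random’ component replays identically, so two runs land in exactly the same final state.

```python
import random

# Even the 'random' component is a seeded pseudo-random sequence, so the
# whole training run is reproducible. The cost function and update rule
# below are toy examples, not any particular algorithm.
def train(seed, inputs):
    rng = random.Random(seed)
    weight = 0.0
    for x in inputs:
        gradient = 2 * (weight * x - 1.0) * x        # slope of the cost (w*x - 1)^2
        step = 0.1 * gradient + rng.uniform(-0.01, 0.01)   # 'random' exploration
        weight -= step
    return weight

inputs = [1.0, 2.0, 0.5, 1.5]
print(train(seed=42, inputs=inputs) == train(seed=42, inputs=inputs))  # True
```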
The Turing test intends to identify true AI (artificial intelligence) by virtue of its apparent similarity to BI (biological intelligence). But might it unintentionally reveal that BI isn’t sufficiently conscious (aware enough) to differentiate between non-sentient AI robots and pseudo-sentient homo sapiens? Put another way: Might the device doing the determination be of such a standard that any device being evaluated by it doesn’t have to achieve that much?
The author’s first argument is much like Paley’s argument for a cosmic designer in Natural Theology or Evidences of the Existence and Attributes of the Deity: that any intelligence is proof of a greater intelligence which must have created it (possibly unsurprising given their education). He fails to understand that computer intelligences may not function like ours; even for current neural nets, humans can understand the processes only at low levels, but cannot necessarily specify how the net “made its decision”.
This is also much like how we understand how the neurons in our brains work, and understand how the synaptic feedbacks are changed by various hormones, but have no real clue how consciousness works, currently chalking it up to an emergent effect of a sufficiently complex network.
I suspect this article is the result of someone having no expertise in neuro-biology, nor machine learning, having instead a PhD in comparative religion. I mean, for a real world example, just look at the recent achievements of OpenAI, not only perfectly learning human techniques for games and applying them at such a high level that champions stand no chance, but also developing entirely novel techniques for said games that the top human players are struggling to learn.
Another point to be made is: does consciousness really matter? Most recent studies show that humans make a decision, begin to act upon it, become aware of this, and then rationalise it, in that order. Machines are currently at stage 2 of that process; they may never need stages 3 and 4, as in reality the action is what is important.
TL;DR The author is mistaking consciousness for intelligence.
It seems recent studies involving electrodes implanted in the human brain show that humans have the ability to control and regulate their neural activity volitionally. Quantum science says the mind can influence the outcome of a process. Unlike computers, we are self-aware of our thoughts and actions. Self-awareness creates choices. Recent scientific studies show that physical and social environments can make a difference in neural flexibility.
I concur with the majority of comments here, especially the ones stating that the author is conflating intelligence with consciousness.
In Cognitive Psychology, we find something called The Computational Paradox, which asserts: “The more we discover that the brain is like a computer, the more we discover that the brain is not like a computer.” I think this paradox leans toward affirming Oxenberg’s argument.
Let me thank everyone for your comments. I might note that the article is not intended to be a discussion of recent developments in artificial intelligence or recent discoveries in neuroscience.
The argument of the article comes down to something very simple: human intelligence, as we experience it from “within,” is self-aware and teleological, whereas computer operations, as we have ourselves designed them, are deterministic and without self-knowledge. Hence computers are not ‘intelligent,’ where human intelligence serves as the standard for what is meant by “intelligence.”
Of course this is not to say that computers cannot mimic and exceed human calculative abilities. They certainly can and will no doubt continue to progress in their ability to do so. But this is not “intelligence” as it has been traditionally understood.
For instance, when we ask if we will one day discover intelligent life on another planet we don’t mean to ask whether we will one day discover very sophisticated calculating machines. We are asking whether we will one day encounter self-aware, rational, teleologically ordered, decision-making beings.
I think we dehumanize ourselves when we suppose that we are nothing more than computers and/or that computers are nothing less than we. In order not to do so we should remain mindful that *artificial* intelligence is not actual intelligence.
Bravo, Richard. A simple, astute, succinct response; richly textured. A fusion of Ockham’s razor and Promethean torch; recollecting Kierkegaard’s observation: “You’ll never find consciousness at the other end of a microscope.”
Thanks, Stefan. That Kierkegaardian quote about sums it up.