The development of artificial intelligence (AI) is only a small part of the computer revolution, yet through AI we are able to improve our quality of life. For example, AI can be used to monitor power plants or to make machines of all kinds more understandable to, and more controllable by, the humans who use them. Even so, for all its abilities, it is unlikely that an artificial intelligence system will ever replace the human mind. A standard definition of artificial intelligence is that it is simply the effort to produce on computers forms of behavior that, if they were done by human beings, we would regard as intelligent. But within this definition there is still a variety of claims and of ways of interpreting the results of AI programs. The most common and natural approach to AI research is to ask of any program: what can it do? What are the actual results in terms of output? On this view, what matters about a chess-playing program, for example, is simply how good it is. Can it, for example, beat chess grandmasters? But there is also a more theoretically oriented approach in artificial intelligence, which was the basis of the AI contribution to the new discipline of cognitive science.
According to this theoretical approach, what matters are not just the input-output relations of the computer but also what the program can tell us about actual human cognition (Pack, 1994). Viewed in this light, AI aims to provide not just a commercial application but a theoretical understanding of human cognition. To make this distinction clear, think of your pocket calculator. It can outperform any living mathematician at multiplication and division, and so it qualifies as intelligent on the definition of artificial intelligence I just gave. But this fact is of no psychological interest, because such devices make no attempt to mimic the actual thought processes of people doing arithmetic (Crawford, 1994). On the other hand, AI programs that simulate human vision are typically theoretical attempts to understand the actual processes by which human beings perceive the external world.
Just to have labels, let us distinguish between "AI as practical application" (AIPA) and "AI as cognitive science" (AICS). A great deal of the debate about AI confuses the two views, so that success in AI's practical application is sometimes supposed to provide theoretical insights in cognitive science. Chess-playing programs are a good example. Early chess-playing programs tried to mimic the thought processes of actual chess players, but they were not very successful. More recent successes have been achieved by ignoring the thoughts of chess masters and simply using the much greater computational power of contemporary hardware. This approach, called "brute force," exploits the fact that specially designed computers can evaluate hundreds of thousands or even millions of moves, something no human chess player can do (Matthys, 1995).
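The brute-force idea can be sketched with a toy example. The following Python code (purely illustrative; real chess engines search vastly larger game trees with pruning and special-purpose hardware) exhaustively searches every line of play in a simple take-away game: a pile of stones, each player removes one to three, and whoever takes the last stone wins. The game and all names here are the author's assumptions, chosen only to make the search idea concrete.

```python
# Brute-force game-tree search (minimax) over a tiny take-away game.
# A pile of `stones`; each turn a player removes 1, 2, or 3 stones;
# the player who takes the last stone wins. The search examines every
# possible continuation rather than mimicking human reasoning.

def best_move(stones: int) -> tuple[int, bool]:
    """Return (move, winning) for the player to move: the best number
    of stones to take, and whether it leads to a forced win."""
    assert stones > 0
    for take in (1, 2, 3):
        if take == stones:
            return take, True            # taking the rest wins outright
        if take < stones:
            _, opponent_wins = best_move(stones - take)
            if not opponent_wins:        # leave the opponent a losing position
                return take, True
    return 1, False                      # every move loses; take 1 anyway

# From a pile of 10, taking 2 (leaving a multiple of 4) forces a win.
print(best_move(10))  # → (2, True)
```

Nothing in this search resembles how a human plays: it simply enumerates positions, which is exactly why its success tells us little about human cognition.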
The best current programs can thus beat all but the very best chess players, but it would be a mistake to think of them as contributions to AICS (Ptacek, 1994). They tell us nothing about human cognition, except that an electronic machine working on quite different principles can defeat human beings at playing chess, just as it can defeat human beings at doing arithmetic. For the sake of the discussion, let us assume that AIPA is completely successful and that we will soon have programs whose performance can equal or beat that of any human at any cognitive task at all. Assume we had machines that could not only play better chess but also display equal or better comprehension of natural languages, write equal or better novels and poems, and prove equal or better mathematical theorems. In short, let us fantasize any success of AIPA that we care to imagine. What should we make of these results? What would be the implications for AICS of such successes in AIPA? Well, even within the cognitive science approach, there are some further distinctions to be made.
The strongest claim of all is that if we programmed a digital computer with the right programs, and if it had the right inputs and outputs, then it would have thoughts and feelings in exactly the same literal sense in which you and I have thoughts and feelings. According to this view, the computer implementing an AICS program is not just simulating intelligent thought processes; it actually has these thought processes. Again, on this view, the computer is not a metaphor for the mind; rather, the appropriately programmed computer literally has a mind, so if we had a program that appropriately matched human cognition, we would have artificially created an actual mind. This view, which I will call Strong AI, holds that the mind is to the brain as the program is to the hardware. The mind, in short, is just a program running in the hardware (or wetware) of the human brain, and these very same minds could equally well be programmed into commercial digital computers manufactured by Compaq or IBM.
One should always distinguish Strong AI from other forms of AICS. At the opposite end of the scale is the weakest claim of artificial intelligence: simply, that the appropriately programmed digital computer is a tool that can be used in the study of human cognition. By attempting to simulate the formal structure of cognitive processes on a computer, we can come to understand cognition better. On this weaker view, the computer plays the same role in the study of human beings that it plays in any other discipline (Tubes, 1995; Crawford, 1994). We use computers to simulate the behavior of weather patterns, airline flight schedules, and the flow of money in economies such as Brazil's.
But no one engaged in programming any of these computer simulations thinks that the program literally makes rainstorms, so that when we turn the machine on we are likely to be drenched; nor do they suppose that the computer will literally take off and fly to San Diego when we run a simulation of airline flights. Nor does anyone suppose that a computer simulation of the flow of money in the Brazilian economy will increase our supply of cruzeiros. Similarly, on the weaker conception of AI, we should not think that a computer simulation of cognitive processes is actually doing any real thinking. According to this weaker, or more cautious, version of AICS, we can use the computer to build models or simulations of mental processes, just as we can use it to simulate any other process that we can describe precisely enough to program. This other extreme of artificial intelligence I call Weak AI. No one, I believe, could argue with Weak AI.