1. Background
Researchers in AI often suggest that human mental activity can be understood as a computer program. On this view, the brain is essentially
- an information processor, like a CPU
- equipped with working and long-term memory, like RAM and ROM
This is the claim of Strong AI. This paper is intended to show that Strong AI is false.
The “Turing Test”: anything capable of conducting a conversation well enough to fool a typical human questioner (or “interlocutor”) into thinking that it is conversing with a genuine human ought to be deemed “intelligent.”
The author of this paper, John Searle (UC Berkeley), gave the Chinese Room thought experiment, arguing that a program could pass the Turing Test and still have no understanding at all, because the program merely follows instructions and only appears human from the outside.
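A minimal, purely illustrative sketch (not from Searle's paper) of the kind of system he has in mind: a "rule book" that maps incoming symbol strings to outgoing symbol strings. The table contents below are made up for the example; the point is that whoever or whatever applies the rules manipulates uninterpreted symbols and needs no understanding of what they mean.

```python
# Toy "Chinese Room": the rule book is a lookup table from input symbols
# to output symbols. Nothing here attaches any meaning to the symbols.

RULE_BOOK = {
    "你好吗？": "我很好，谢谢。",      # "How are you?" -> "I am fine, thanks."
    "你会说中文吗？": "会，当然会。",  # "Do you speak Chinese?" -> "Yes, of course."
}

def chinese_room(symbols: str) -> str:
    """Return whatever output symbols the rule book prescribes for the input.

    This is pure formal symbol manipulation: the function could pass a
    (very limited) conversational test without understanding a word.
    """
    return RULE_BOOK.get(symbols, "对不起，我不明白。")  # fallback: "Sorry, I don't understand."

if __name__ == "__main__":
    print(chinese_room("你好吗？"))
```

A real program that passed the Turing Test would of course be far more elaborate, but Searle's claim is that no amount of added rule-following changes the picture: it is still syntax without semantics.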
2. Abstract
This article is an attempt to explore the consequences of two propositions:
- Intentionality in human beings (and animals) is a product of causal features of the brain.
I assume this is an empirical fact about the actual causal relations between mental processes and brains.
It says simply that certain brain processes are sufficient for intentionality.
- Instantiating a computer program is never by itself a sufficient condition of intentionality.
The main argument of this paper is directed at establishing this claim.
The form of the argument is to show how a human agent could instantiate the program and still not have the relevant intentionality.
So Strong AI is not about machines but about programs, and no program by itself is sufficient for thinking.