0924
hours. I like to think of September as the beginning of a new year.
This is all you have to say? – Amorella
Nothing
else is really on my mind. I am basically unfocused this morning. Doug and I
were talking about watching the Northern Lights in 1954 – 1955. We both would
be up in the dark to see them – very pretty and eerie too, particularly shades
of green; some are downright spooky.
[Image from Bing Images]
To you this shade of green evokes a
spiritual manifestation in the physical world. – Amorella
1027 hours. Indeed, it does. Yet the only recollection I have of this was a similar shade near the base of the wall and floor by the coat closet in our Majken Place hallway, where I noted a seemingly spiritual presence, a remnant of a person who had died in a submarine in the Pacific during World War II – the brother of the next-door neighbor, who had had a priest come out to exorcise the spirit of his brother dwelling in their house. I don't believe the story about the brother and the exorcism, but whatever it was, it appeared a reality to me at the time. I remember it having features similar to those of a sparking fire, such as a welder might create, though it had no heat. In fact the hallway felt cold, as if the heat were being drained from it. I really have no idea what it was. Kim and Carol were watching television on the first floor. I went to bed and had the impression that my soul was exchanged for an older one. Very odd. That was a long time ago. Lots of imagination, but I could write about it as if it were real even though I had my doubts. I always have my doubts. I haven't thought about that experience for a while, probably since the last time I mentioned it in this blog. I am getting too old to wonder about such things. The easiest and best explanation is poor wiring in the brain. I learned long ago to live with it. I think I need a nap. (1046)
1050 hours. Actually, the picture below better conveys the 'feeling' of being spooked that I used to have.
[Image from Bing Images]
You napped until lunch at
Panera/Chipotle. Carol is at Kroger’s on Tylersville before you head home. –
Amorella
1427 hours. I can't believe I slept almost two hours on the living room carpet, but I did. I am wondering about Dead 4 and what Merlyn is up to. I mean, what do the Dead do? He sees the spirits of people, but I have never had him move from point A to point B except when he left Avalon for Elysium. – We dropped off the groceries and are now at Pine Hill Lakes Park, and Carol is taking her walk. You would never know it had been raining until about an hour ago.
You are thinking of having Merlyn take a trip into Richard's head, but this is gathered from watching "Under the Dome" last night and a favorite film, "Being John Malkovich". However, this is not going to happen in here. – Amorella
1523 hours. Amorella, I don’t know
where to begin.
Why don't we begin someplace other than
Merlyn’s sanctuary? – Amorella
1525 hours. I have no idea where or with whom.
You and Carol watched last night’s
“Manhattan” and “Unforgettable”. You are both quite impressed with “Manhattan”
– actors, settings, and script. You wish it were more historically accurate
rather than a period drama. – Amorella
1812 hours. I thought I would be ready to work on Dead 4, but I cannot think of a setting or, worse, a focus for Merlyn. We know how the Dead operate. Why does Merlyn spend his time with the Dead when he can be with the Living? It doesn't make sense to me. – "They also serve who only stand and wait." – John Milton. How do the Dead serve standing and waiting? And how do the Living serve standing and waiting?
In here, both serve themselves while
waiting. – Amorella
1824 hours. I can’t argue with that,
at least for the Living.
1843 hours. How about Merlyn sitting in the restaurant talking to Socrates about what the consequences will be of the Living reading about what it is to be Dead? What real difference would it make even if it were true and not fiction?
To talk about
angels does not mean that angels have to exist.
Chairs exist,
and to talk about them is intentional existence.
Intentionality
is noted below in Wikipedia.
** **
Brentano coined the expression
"intentional inexistence" to indicate the peculiar ontological status
of the contents of mental phenomena. According to some interpreters the 'in-' of 'in-existence' is to be read as locative, i.e. as indicating that "an intended object [...] exists in or has 'in-existence,' existing not externally but in the psychological state" (Jacquette 2004, p. 102), while others are more cautious, affirming that: "It is not clear whether in 1874 this [...] was intended to carry any ontological commitment" (Chrudzimski and Smith 2004, p. 205).
A major problem within intentionality discourse is that participants often fail to make explicit whether or not they use the term to imply concepts such as agency or desire, i.e. whether it involves teleology. Dennett explicitly invokes teleological concepts in the 'intentional stance'. However, most philosophers use intentionality to mean something with no teleological import. Thus, a thought of a chair can be about a chair without any implication of an intention or even a belief relating to the chair. For philosophers of language, intentionality is largely an issue of how symbols can have meaning. This lack of clarity may underpin some of the differences of view among the philosophers discussed here.
To bear out further the diversity of
sentiment evoked from the notion of intentionality, Husserl followed on
Brentano, and gave intentionality more widespread attention, both in
continental and analytic philosophy. In contrast to Brentano's view, French
philosopher Jean-Paul Sartre (Being and Nothingness) identified intentionality
with consciousness, stating that the two were indistinguishable. German
philosopher Martin Heidegger (Being and Time), defined intentionality as
"care" (Sorge), a sentient condition where an individual's
existentiality, facticity, and forfeiture to the world identifies their
ontological significance, in contrast to that which is the mere ontic
(thinghood).
Other twentieth century philosophers such as Gilbert Ryle and A. J. Ayer were critical of Husserl's concept of intentionality
and his many layers of consciousness, Ryle insisting that perceiving is not a
process and Ayer that describing one's knowledge is not to describe mental
processes. The effect of these positions is that consciousness is so fully
intentional that the mental act has been emptied of all content and the idea of
pure consciousness is that it is nothing (Sartre also referred to
"consciousness" as “nothing”).
Platonist Roderick Chisholm has revived
the Brentano thesis through linguistic analysis, distinguishing two parts to
Brentano's concept, the ontological aspect and the psychological aspect.
Chisholm's writings have attempted to summarize the suitable and unsuitable
criteria of the concept since the Scholastics, arriving at a criterion of
intentionality identified by the two aspects of Brentano's thesis and defined
by the logical properties that distinguish language describing psychological
phenomena from language describing non-psychological phenomena. Chisholm's
criteria for the intentional use of sentences are: existence independence,
truth-value indifference, and referential opacity.
In current artificial intelligence and
philosophy of mind intentionality is a controversial subject and sometimes
claimed to be something that a machine will never achieve. John Searle argued
for this position with the Chinese room thought experiment, according to which
no syntactic operations that occurred in a computer would provide it with
semantic content. As Searle noted in the article, his view was a minority position in artificial intelligence and philosophy of mind.
Selected and edited from Wikipedia – Intentionality
** **
This
leads to the “Chinese Room” below,
also from Wikipedia.
** **
Chinese
room
If you can carry on an
intelligent conversation with an unknown partner, does this imply that the
unknown partner understands the conversation, has a mind, and experiences
consciousness?
The Chinese room is
a thought experiment presented by John Searle to challenge the claim that it is
possible for a digital computer running a program to have a "mind"
and "consciousness" in the same sense that people do; simply by
virtue of running the right program. The experiment is intended to help refute
a philosophical position that Searle named "strong AI":
"The appropriately
programmed computer with the right inputs and outputs would thereby have a mind
in exactly the same sense human beings have minds."
To contest this view,
Searle writes in his first description of the argument: "Suppose that I'm
locked in a room and ... that I know no Chinese, either written or
spoken". He further supposes that he has a set of rules in English that
"enable me to correlate one set of formal symbols with another set of
formal symbols", that is, the Chinese characters. These rules allow him to
respond, in written Chinese, to questions, also written in Chinese, in such a
way that the posers of the questions – who do understand Chinese – are
convinced that Searle can actually understand the Chinese conversation too,
even though he cannot. Similarly, he argues that if there is a computer program
that allows a computer to carry on an intelligent conversation in written
Chinese, the computer executing the program would not understand the
conversation either.
The experiment is the centerpiece
of Searle's Chinese room argument, which holds that a program cannot
give a computer a “mind”, “understanding” or “consciousness”, regardless of how
intelligently it may make it behave. The argument is directed against the
philosophical positions of functionalism and computationalism, which hold that
the mind may be viewed as an information processing system operating on formal
symbols. Although it was originally presented in reaction to the statements of
artificial intelligence researchers, it is not an argument against the goals of
AI research, because it does not limit the amount of intelligence a machine can
display. The argument applies only to digital computers and does not apply to
machines in general. This kind of argument against AI was described by John
Haugeland as the "hollow shell" argument.
Searle's argument first appeared
in his paper "Minds, Brains, and Programs", published in Behavioral and Brain Sciences in 1980.
It has been widely discussed in the years since.
Chinese
room thought experiment
Searle's thought experiment
begins with this hypothetical premise: suppose that artificial intelligence
research has succeeded in constructing a computer that behaves as if it
understands Chinese. It takes Chinese characters as input and, by following the
instructions of a computer program, produces other Chinese characters, which it
presents as output. Suppose, says Searle, that this computer performs its task
so convincingly that it comfortably passes the Turing test: it convinces a
human Chinese speaker that the program is itself a live Chinese speaker. To all
of the questions that the person asks, it makes appropriate responses, such
that any Chinese speaker would be convinced that he is talking to another
Chinese-speaking human being.
The question Searle wants
to answer is this: does the machine literally "understand"
Chinese? Or is it merely simulating the ability to understand Chinese?
Searle calls the first position “strong AI” and the latter "weak AI".
Searle then supposes that
he is in a closed room and has a book with an English version of the computer
program, along with sufficient paper, pencils, erasers, and filing cabinets.
Searle could receive Chinese characters through a slot in the door, process
them according to the program's instructions, and produce Chinese characters as
output. If the computer had passed the Turing test this way, it follows, says
Searle, that he would do so as well, simply by running the program manually.
Searle asserts that there
is no essential difference between the roles of the computer and himself in the
experiment. Each simply follows a program, step-by-step, producing a behavior,
which is then interpreted as demonstrating intelligent conversation. However,
Searle would not be able to understand the conversation. ("I don't speak a
word of Chinese," he points out.) Therefore, he argues, it follows that
the computer would not be able to understand the conversation either.
Searle argues that without
"understanding" (or “intentionality”), we cannot describe what the
machine is doing as "thinking" and since it does not think, it does
not have a "mind" in anything like the normal sense of the word.
Therefore he concludes that "strong AI" is false.
Philosophy
Although the Chinese Room
argument was originally presented in reaction to the statements of AI researchers,
philosophers have come to view it as an important part of the philosophy of
mind. It is a challenge to functionalism and the computational theory of mind
and is related to such questions as the mind-body problem, the problem of other minds, the symbol-grounding problem, and the hard problem of consciousness.
Strong AI
Searle identified a
philosophical position he calls “strong AI”:
The appropriately
programmed computer with the right inputs and outputs would thereby have a mind
in exactly the same sense human beings have minds.
The definition hinges on
the distinction between simulating a mind and actually having a
mind. Searle writes that "according to Strong AI, the correct simulation
really is a mind. According to Weak AI, the correct simulation is a model of
the mind."
The position is implicit in
some of the statements of early AI researchers and analysts. For example, in
1955, AI founder Herbert A. Simon declared that "there are now in the world machines that think, that learn and create" and claimed that they had "solved the venerable mind-body problem, explaining how a system composed of matter can have the properties of mind." John Haugeland wrote that
"AI wants only the genuine article: machines with minds, in the
full and literal sense. This is not science fiction, but real science, based on
a theoretical conception as deep as it is daring: namely, we are, at root, computers
ourselves."
Searle also ascribes the
following positions to advocates of strong AI:
• AI systems can be used to explain the mind;
• The study of the brain is irrelevant to the study of the mind; and
• The Turing test is adequate for establishing the existence of mental states.
Strong AI as computationalism or functionalism
In more recent presentations
of the Chinese room argument, Searle has identified "strong AI" as
"computer functionalism” (a term he attributes to Daniel Dennett).
Functionalism is a position in modern philosophy of mind that holds that we can
define mental phenomena (such as beliefs, desires, and perceptions) by
describing their functions in relation to each other and to the outside world.
Because a computer program can accurately represent functional relationships as
relationships between symbols, a computer can have mental phenomena if it runs
the right program, according to functionalism.
Stevan Harnad argues that Searle's depictions of strong AI can be reformulated as "recognizable tenets of computationalism, a position (unlike 'strong AI') that is actually held by many thinkers, and hence one worth refuting." Computationalism is the position in the philosophy of mind which argues that the mind can be accurately described as an information-processing system.
Each of the following,
according to Harnad, is a "tenet" of computationalism:
• Mental states are computational states (which is why computers can have mental states and help to explain the mind);
• Computational states are implementation-independent – in other words, it is the software that determines the computational state, not the hardware (which is why the brain, being hardware, is irrelevant); and
• Since implementation is unimportant, the only empirical data that matters is how the system functions; hence the Turing test is definitive.
Strong AI vs. biological naturalism
Searle holds a
philosophical position he calls “biological naturalism”: that consciousness and
understanding require specific biological machinery that is found in brains. He
writes "brains cause minds" and that "actual human mental
phenomena [are] dependent on actual physical–chemical properties of actual
human brains". Searle argues that this machinery (known to neuroscience as
the "neural correlates of consciousness") must have some (unspecified)
"causal powers" that permit the human experience of consciousness.
Searle's faith in the existence of these powers has been criticized.
Searle does not disagree
that machines can have consciousness and understanding, because, as he writes,
"we are precisely such machines". Searle holds that the brain is, in
fact, a machine, but the brain gives rise to consciousness and understanding
using machinery that is non-computational. If neuroscience is able to isolate
the mechanical process that gives rise to consciousness, then Searle grants
that it may be possible to create machines that have consciousness and
understanding. However, without the specific machinery required, Searle does
not believe that consciousness can occur.
Biological naturalism
implies that one cannot determine if the experience of consciousness is
occurring merely by examining how a system functions, because the specific
machinery of the brain is essential. Thus, biological naturalism is directly
opposed to both behaviorism and functionalism (including "computer
functionalism" or "strong AI"). Biological naturalism is similar
to identity theory (the position that mental states are "identical
to" or "composed of" neurological events), however, Searle has
specific technical objections to identity theory. Searle's biological
naturalism and strong AI are both opposed to Cartesian dualism, the classical
idea that the brain and mind are made of different "substances".
Indeed, Searle accuses strong AI of dualism, writing that "strong AI only
makes sense given the dualistic assumption that, where the mind is concerned,
the brain doesn't matter."
Consciousness
Searle's original
presentation emphasized "understanding" – that is, mental states with what philosophers call "intentionality" – and did not directly address other
closely related ideas such as "consciousness". However, in more
recent presentations Searle has included consciousness as the real target of
the argument.
Computational models of
consciousness are not sufficient by themselves for consciousness. The
computational model for consciousness stands to consciousness in the same way
the computational model of anything stands to the domain being modeled. Nobody
supposes that the computational model of rainstorms in London will leave us all
wet. But they make the mistake of supposing that the computational model of
consciousness is somehow conscious. It is the same mistake in both cases.
—John R.
Searle, Consciousness and Language, p. 16
David Chalmers writes
"it is fairly clear that consciousness is at the root of the matter"
of the Chinese room.
Colin McGinn argues that the Chinese room provides strong evidence that the hard problem of
consciousness is fundamentally insoluble. The argument, to be clear, is not
about whether a machine can be conscious, but about whether it (or anything
else for that matter) can be shown to be conscious. It is plain that any other
method of probing the occupant of a Chinese room has the same difficulties in
principle as exchanging questions and answers in Chinese. It is simply not
possible to divine whether a conscious agency inhabits the room or some clever
simulation.
Searle argues that this is only true for an observer outside of the room. The whole point of the
thought experiment is to put someone inside the room, where they can
directly observe the operations of consciousness. Searle claims that from his
vantage point within the room there is nothing he can see that could imaginably
give rise to consciousness, other than himself, and clearly he does not have a
mind that can speak Chinese.
Computer
science
The Chinese room argument
is primarily an argument in the philosophy of mind, and both major computer
scientists and artificial intelligence researchers consider it irrelevant to
their fields. However, several concepts developed by computer scientists are
essential to understanding the argument, including symbol processing, Turing
machines, Turing completeness, and the Turing test.
Strong AI vs. AI
research
Searle's arguments are not
usually considered an issue for AI research. Stuart Russell and Peter Norvig
observe that most AI researchers "don't care about the strong AI
hypothesis—as long as the program works, they don't care whether you call it a
simulation of intelligence or real intelligence." The primary mission of
artificial intelligence research is only to create useful systems that act
intelligently, and it does not matter if the intelligence is "merely"
a simulation.
Searle does not disagree
that AI research can create machines that are capable of highly intelligent
behavior. The Chinese room argument leaves open the possibility that a digital
machine could be built that acts more intelligently than a person, but
does not have a mind or intentionality in the same way that brains do. Indeed,
Searle writes that "the Chinese room argument ... assumes complete success
on the part of artificial intelligence in simulating human cognition."
Searle's "strong
AI" should not be confused with “strong AI” as defined by Ray Kurzweil and other futurists, who use the
term to describe machine intelligence that rivals or exceeds human
intelligence. Kurzweil is concerned primarily with the amount of
intelligence displayed by the machine, whereas Searle's argument sets no limit
on this. Searle argues that even a super-intelligent machine would not
necessarily have a mind and consciousness.
Symbol processing
The Chinese room (and all modern computers) manipulates physical objects in order to carry out calculations and do simulations. AI researchers Allen Newell and Herbert A. Simon called this kind of machine a physical symbol system. It is also equivalent to the formal systems used in the field of mathematical logic. Searle emphasizes the fact that this kind of symbol manipulation is syntactic (borrowing a term from the study of grammar). The computer manipulates the symbols using a form of syntax rules, without any knowledge of the symbols' semantics (that is, their meaning).
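To make "syntax without semantics" concrete, here is a minimal sketch in Python – my own illustration, not anything from Searle or the article; the rule table and symbols are invented. The program answers by matching character strings against a rulebook, and nothing in it represents what any symbol means.

```python
# A toy "rulebook" room: purely syntactic symbol manipulation.
# The table pairs input strings with output strings; the program
# matches shapes, and nothing in it knows what any symbol means.
# (Rules and symbols are invented for illustration.)
RULEBOOK = {
    "你好吗？": "我很好，谢谢。",
    "你会说中文吗？": "当然会。",
}

def room(message: str) -> str:
    """Look the input up in the rulebook; no semantics involved."""
    return RULEBOOK.get(message, "请再说一遍。")  # default: "please say it again"

print(room("你好吗？"))  # the operator needs no idea what was asked or answered
```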
Chinese room as a
Turing machine
The Chinese room has a
design analogous to that of a modern computer. It has a von Neumann
architecture, which consists of a program (the book of instructions), some
memory (the papers and file cabinets), a CPU, which follows the instructions
(the man), and a means to write symbols in memory (the pencil and eraser). A
machine with this design is known in theoretical computer science as “Turing
complete”, because it has the necessary machinery to carry out any computation
that a Turing machine can do, and therefore it is capable of doing a
step-by-step simulation of any other digital machine, given enough memory and
time. Alan Turing writes, "all digital computers are in a sense
equivalent." The widely accepted Church-Turing thesis holds that any
function computable by an effective procedure is computable by a Turing
machine. In other words, the Chinese room can do whatever any other digital
computer can do (albeit much, much more slowly).
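To see why the room counts as Turing complete in principle, a minimal Turing-machine simulator helps: a transition table (the instruction book), a tape (the papers), and a head (the man with his pencil and eraser). This sketch is mine, not the article's; the particular machine simply flips every bit on its tape and halts.

```python
# Minimal Turing machine: (state, symbol) -> (symbol to write, head move, next state).
# This particular table flips every bit on the tape, then halts at the first blank.
RULES = {
    ("scan", "0"): ("1", 1, "scan"),
    ("scan", "1"): ("0", 1, "scan"),
    ("scan", "_"): ("_", 0, "halt"),  # "_" marks a blank cell
}

def run(tape: str) -> str:
    cells = list(tape) + ["_"]        # tape with one blank appended
    head, state = 0, "scan"
    while state != "halt":
        write, move, state = RULES[(state, cells[head])]
        cells[head] = write           # write first, then move the head
        head += move
    return "".join(cells).rstrip("_")

print(run("10110"))  # -> "01001"
```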
There are some critics,
such as Hanoch Ben-Yami, who argue that the Chinese room cannot simulate all
the abilities of a digital computer, such as being able to determine the
current time.
Turing test
The Turing test is a test
of a machine's ability to exhibit intelligent behaviour. In Alan Turing’s
original illustrative example, a human judge engages in a natural language
conversation with a human and a machine designed to generate performance
indistinguishable from that of a human being. All participants are separated
from one another. If the judge cannot reliably tell the machine from the human,
the machine is said to have passed the test.
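The test is purely behavioral, which a sketch makes plain: the judge sees only two anonymous text channels, and passing is defined entirely by the judge's failure to tell them apart. This is my own schematic, not Turing's protocol verbatim; the respondent and judge functions are stand-ins.

```python
import random

def turing_test(judge, human, machine, questions) -> bool:
    """The machine 'passes' a single round if the judge misidentifies it."""
    # Hide which respondent is behind which channel label.
    a, b = (human, machine) if random.random() < 0.5 else (machine, human)
    transcript = {
        "A": [a(q) for q in questions],
        "B": [b(q) for q in questions],
    }
    guess = judge(transcript)          # judge names "A" or "B" as the machine
    return (a if guess == "A" else b) is not machine

# Toy demo: canned respondents and a judge reduced to guessing.
questions = ["What is 2 + 2?", "Describe rain."]
human = lambda q: "Four." if "2 + 2" in q else "Soft, grey, endless."
machine = lambda q: "Four." if "2 + 2" in q else "Soft, grey, endless."
judge = lambda t: "A"
print(turing_test(judge, human, machine, questions))
```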
Complete
argument
Searle has produced a more formal version of the argument of which the Chinese Room forms a part. He presented the first version in 1984. The version given below is from 1990. The only part of the argument that should be controversial is A3, and it is this point which the Chinese room thought experiment is intended to prove.
He begins with three
axioms:
(A1)
"Programs are formal (syntactic)."
A
program uses syntax to manipulate symbols and pays no attention to the
semantics of the symbols. It knows where to put the symbols and how to move them
around, but it doesn't know what they stand for or what they mean. For the
program, the symbols are just physical objects like any others.
(A2)
"Minds have mental contents (semantics)."
Unlike
the symbols used by a program, our thoughts have meaning: they represent things
and we know what it is they represent.
(A3)
"Syntax by itself is neither constitutive of nor sufficient for
semantics."
This is
what the Chinese room thought experiment is intended to prove: the Chinese room
has syntax (because there is a man in there moving symbols around). The Chinese
room has no semantics (because, according to Searle, there is no one and nothing in the room that understands what the symbols mean). Therefore, having syntax
is not enough to generate semantics.
Searle posits that these
lead directly to this conclusion:
(C1)
Programs are neither constitutive of nor sufficient for minds.
This
should follow without controversy from the first three: Programs don't have
semantics. Programs have only syntax, and syntax is insufficient for semantics.
Every mind has semantics. Therefore programs are not minds.
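The step from A1–A3 to C1 can be written out in one schematic derivation. The notation below is my own paraphrase (reading A3 in its strong form, "what is purely syntactic lacks semantics"), not Searle's own formalism:

```latex
% P(x): x is a program   Y(x): x is purely syntactic
% S(x): x has semantics  M(x): x is a mind
\begin{align*}
&\text{(A1)} \quad \forall x\,\bigl(P(x) \to Y(x)\bigr)\\
&\text{(A2)} \quad \forall x\,\bigl(M(x) \to S(x)\bigr)\\
&\text{(A3)} \quad \forall x\,\bigl(Y(x) \to \lnot S(x)\bigr)\\
&\text{(C1)} \quad \forall x\,\bigl(P(x) \to \lnot M(x)\bigr)
  \quad\text{[A1, A3 give } P(x) \to \lnot S(x)\text{; contrapose A2]}
\end{align*}
```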
This much of the argument
is intended to show that artificial intelligence can never produce a machine
with a mind by writing programs that manipulate symbols. The remainder of the
argument addresses a different issue. Is the human brain running a program? In
other words, is the computational theory of mind correct? He begins with an
axiom that is intended to express the basic modern scientific consensus about
brains and minds:
(A4)
Brains cause minds.
Searle claims that we can
derive "immediately" and "trivially" that:
(C2) Any
other system capable of causing minds would have to have causal powers (at
least) equivalent to those of brains.
Brains
must have something that causes a mind to exist. Science has yet to determine
exactly what it is, but it must exist, because minds exist. Searle calls it
"causal powers". "Causal powers" is whatever the brain uses
to create a mind. If anything else can cause a mind to exist, it must have
"equivalent causal powers". "Equivalent causal powers" is
whatever else that could be used to make a mind.
And from this he derives
the further conclusions:
(C3) Any
artifact that produced mental phenomena, any artificial brain, would have to be
able to duplicate the specific causal powers of brains, and it could not do
that just by running a formal program.
This
follows from C1 and C2: Since no program can produce a mind, and
"equivalent causal powers" produce minds, it follows that programs do
not have "equivalent causal powers."
(C4) The
way that human brains actually produce mental phenomena cannot be solely by
virtue of running a computer program.
Since programs do not have
"equivalent causal powers", "equivalent causal powers"
produce minds, and brains produce minds, it follows that brains do not use
programs to produce minds.
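The second half runs the same way once "causal powers" is treated as a single predicate. Again, this is my own schematic paraphrase of the text above, not Searle's formalism:

```latex
% B(x): x is a brain            K(x): x causes a mind
% C(x): x has causal powers at least equivalent to a brain's
% R(x): x produces a mind solely by running a program
\begin{align*}
&\text{(A4)} \quad \forall x\,\bigl(B(x) \to K(x)\bigr)\\
&\text{(C2)} \quad \forall x\,\bigl(K(x) \to C(x)\bigr)
  \quad\text{[whatever causes a mind has brain-equivalent powers]}\\
&\text{(C3)} \quad \forall x\,\bigl(P(x) \to \lnot C(x)\bigr)
  \quad\text{[C1 and C2: programs produce no minds, so they lack the powers]}\\
&\text{(C4)} \quad \forall x\,\bigl(B(x) \to \lnot R(x)\bigr)
  \quad\text{[else the program alone would carry the powers, contradicting C3]}
\end{align*}
```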
Selected and edited from Wikipedia – Chinese Room
** **
2141 hours. Now, what I think is that the above material has a part in the conversation between Merlyn and Socrates in Dead 4.
Yes, it does. This was brought up in the
blog in the last two weeks as a prep. Enough for tonight. – Amorella
2143 hours. I do not understand the above because it is not exactly in my training. I did give an exploratory talk about artificial intelligence in the Ohio Writing Project at Miami University in the 1990s, though, and I have some very early background on the subject, having been a part of the World Future Society in the 1970s and '80s – but the key business for me here is that I can follow the logic, at least enough for a greater sense of understanding, if not all the particulars of the subject. This is very interesting, and I am curious how Socrates and Merlyn are going to handle it in their own terms. This stuff pumps me up. It is the cool beans of writing these Merlyn stories.
Post, boy. – Amorella
2149 hours. And, to think that until
1843 hours today I had no idea what this segment was going to be about; not
a clue.