Afternoon.
Earlier Gary contacted you about lunch. He picked you up and drove to ?? on
Mason-Montgomery Road cattycornered from Potbelly's. You each had hamburgers
(quite good) and chips with a barbeque dip; he had two ales and you two Coke
Zeroes. Later, once home you and Carol had a pleasant chat about Gary and
Teresa, then off to Graeter's, and now a short stop at Kroger's at Cox and
Tylersville Roads for milk and bananas. Late last night you dropped in this
'consciousness' commentary from Quora and want to focus more on this
subject and its definition. - Amorella
1617 hours. I intuitively feel that if we
knew more scientifically about consciousness, we would have a better idea of what
heartansoulanmind is. We can see how heart and soul and mind are used within
the parameters of consciousness, but perhaps it is the other way around, that
heartansoulanmind is the natural development, the peak of our 'golden touch' of
humanity and that consciousness springs from this. One is a prerequisite to the
other. This commentary is based on older science books but I presently agree
with the shape and tone of the content.
** **
Steve Van Damme, Professional monk with a zest for the
highest knowledge and an artistic side.
Q: What are the most interesting facts science can’t explain?
Modern Science has yet to expound even a rudimentary understanding,
much less a consensus, of what consciousness is.
Let me explain that.
Modern science has a great deal of objective experimental and
observational knowledge about what consciousness produces in terms of utilizing
the senses, and expressions of the mind, intellect and ego. There also is a
good deal of knowledge of how the human brain works, although major discoveries
are still being made across thousands of studies.
Societies have very deep libraries of the amazing products of
the mind through science, arts, exploration, industry, invention, commerce,
philosophy, and so forth. We see an awesome array of the multitude of marvelous
expressions of consciousness.
But all that is nothing but what consciousness can DO. Not
what consciousness is. Science has yet to understand even fundamentally
what consciousness itself is, beyond the expressions of mind, intellect, ego
and senses. And by ‘beyond’ I mean when those expressions of consciousness are
not expressing, what’s left? Anything? They can’t sense anything, can’t measure
anything, can’t even label anything beyond what consciousness expresses. The
rise of philosophies such as Descartes' “I think, therefore I am” foreshadows
the inability of even many present-day philosophers to define the immense
power that produces the products of consciousness. They are always looking from
the outside.
This curious situation arises because many modern scientists are
convinced that the only true knowledge is that which can be verified
objectively. If they can’t perceive it through the senses, and measure it with
instruments, they just disregard any theories about it as ‘unknown’ or
‘mysterious’ or ‘metaphysics’ or other such ‘non-scientific’ things.
Consciousness itself, beyond the mind, intellect, and ego, is
nothing but pure SUBJECTIVITY. There’s a mysterious ‘nothingness’ to it that
objective science simply can’t access (yet everyone who can think has
consciousness). Therefore, “objective” science is by definition excluded from
ever knowing consciousness itself, despite consciousness being the source of
science itself. Consciousness is inherently subjective.
That does not preclude a knowledge of consciousness itself,
fortunately. If one is open to a scientific approach to subjective knowledge,
one can explore one’s own consciousness and come to know it. Scientific
exploration does not require an objective community for its validation.
There have been many scientists who have intuitively known things far beyond
the understanding of other scientists, and they tended to work alone. Nikola
Tesla is a good example. (A reading of Tesla’s experiences, particularly in his
youth, is worth the time.)
Part of the reason that the study of consciousness itself has
been so inaccessible to modern science is that its study has been left to the
fields of psychiatry, physiology, neuroscience, biology, and related. They are
all using the wrong ‘instrument.’ One’s nervous system itself is the instrument!
And it has to be fine-tuned to clearly ‘see’ what consciousness really is.
Astronomer William Campbell couldn’t prove Einstein’s General
Theory of Relativity at his second attempt because his instrument wasn’t quite
up to the task, but that did not disprove the theory, even though Campbell
concluded it did. Campbell’s upgraded equipment later confirmed the theory.
Similarly, the instrument of the nervous system needs ‘upgrading’ for
consciousness to ‘see itself’ clearly to be its own conclusive evidence.
The fact is, consciousness itself can only be studied within the
realm of consciousness itself. The attention must be directed inside, not
outside as objective science is in the habit and business of doing.
When consciousness knows itself, then real knowing happens. “I
am, therefore I am” gains meaning and is its own validation of itself.
There actually has been a lot of experimentation into and
discovery of the nature of consciousness over millennia, and there is a large
body of records of what it is, but unfortunately it has been shrouded in
‘mystery’ and cultural perspective and labeled ‘esoteric,’ even though it has been
just as scientific in its exploration as objective science is about objective
research, which is limited by how readily the senses can fool the mind and
intellect. Direct observation of consciousness bypasses the deluding
and obscuring nature of the senses. This is accomplished by techniques that
bring the mind to stillness, eventually revealing to the patient researcher IN consciousness
that consciousness has the qualities of infinite expansion, eternal sameness,
and infinite potential energy. And that’s just the beginning of discoveries to
be had.
Many of these subjective scientists of consciousness in the past
and the present have experienced pure consciousness directly, and found the
contact to be blissful, and a far more real state of consciousness than just
waking, dreaming, and deep sleep states of consciousness which are just
temporary phases—expressions—of the same underlying foundation of consciousness
itself. Beyond the three states . . ..
The truth is, one can know consciousness itself only by BEING
that consciousness, being restfully awake in consciousness itself alone,
without the division of consciousness into thoughts, intellect, and ego. There
are amazing things to know there that only those who ‘go there’ can understand.
There is one area where logical minds can get an inkling of the
extent of consciousness. Mathematicians have been working on a theory that consciousness
is all that there is by using mathematical proofs. This actually builds on the
intuitions of a number of notable scientists before them. The theory that the
universe is fundamentally a field of consciousness has been seriously proposed
by such luminaries of the modern scientific community as Sir James Jeans(1),
Max Planck (quoted in Klein)(2), Erwin Schrodinger(3), Niels Bohr, Eugene
Wigner(4), Sir Arthur Eddington and Albert Einstein(5). This view has been
similarly expressed by other eminent physical scientists, for example, Sir
Arthur Eddington’s “mind stuff” and Wolfgang Pauli’s “unity of all being” (6, p. 124).
In an article on quantum mechanics appearing in Scientific American,
French physicist Bernard d'Espagnat summarized the field by stating, “The
doctrine that the world is made up of objects whose existence is independent of
human consciousness turns out to be in conflict with quantum mechanics and with
the facts established by experiment” (7, p. 158).
The theory that consciousness is all there is remains something most of the
modern scientific community has yet to regard as valid or to decisively disprove;
it is simply discarded without exploration, much less proof otherwise. This is because the doubters
simply don’t ‘see’ pure consciousness themselves. Empiricism’s limitations
preclude their understanding how consciousness could give rise to life, from
stimulating the fluctuations of quantum mechanics to the formation of
differentiated organs to the wholeness of consciousness on the move in a living
human nervous system.
Modern scientists first have to break their worldview that
there is only matter and isolated force fields, and that matter somehow
assembles consciousness out of itself and these isolated force fields.
Otherwise, they will be laughed at by future scientists who will look back at
them just like we can laugh at 19th century scientists who held onto the idea
that machines heavier than air could never fly, and the human voice could not
be sent around the world through waves.
It’s said that ‘space is the final frontier’ but right now
modern science knows more about outer space than inner space.
1. Jeans J. The Mysterious Universe. New York: Macmillan; 1932.
2. Klein DB. The Concept of Consciousness: A Survey. Lincoln,
NE: University of Nebraska Press; 1984.
3. Schrodinger E. What Is Life? and Mind and Matter. Cambridge:
Cambridge University Press; 1944, 1985.
4. Wigner E. Symmetries and Reflections. Woodbridge, CT: Ox Bow
Press; 1967.
5. Einstein A. Albert Einstein Quotes on Spirituality. Judaism Online; [cited 2014].
6. Dossey L. Recovering the Soul. New York: Bantam Books; 1989.
7. D’Espagnat B. The quantum theory and reality. Scientific
American. 1979;241(5):158–81.
Selected
and edited from -- https://www.quora.com/What-are-the-most-interesting-facts-science-cant-explain-yet/answer/Steve-Van-Damme
[online 30 April 2017]
** **
You are home. Go online and
look for criteria for consciousness in terms of self-awareness. - Amorella
1652 hours. Out of the blue suggestion, I'll check Wikipedia.
** **
Self-awareness is the capacity for introspection and
the ability to recognize oneself as an individual separate from the environment
and other individuals. It is not to be confused with consciousness in
the sense of qualia. While consciousness is a term given to being aware of
one's environment and body and lifestyle, self-awareness is the recognition of
that awareness.
Neurobiological basis
Introduction
There
are questions regarding what part of the brain allows us to be self-aware and
how we are biologically programmed to be self-aware. V.S. Ramachandran has
speculated that mirror neurons may provide the neurological basis of human
self-awareness. In an essay written for the Edge Foundation in 2009
Ramachandran gave the following explanation of his theory: "... I also
speculated that these neurons can not only help simulate other people's
behavior but can be turned 'inward'—as it were—to create second-order
representations or meta-representations of your own earlier brain
processes. This could be the neural basis of introspection, and of the
reciprocity of self-awareness and other awareness. There is obviously a
chicken-or-egg question here as to which evolved first, but... The main point
is that the two co-evolved, mutually enriching each other to create the mature
representation of self that characterizes modern humans."
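[Illustrative aside, not part of the quoted article: the "turned inward" idea above, a system building second-order representations of its own earlier processing, can be caricatured in a few lines of Python. The names and data structures are invented for the example; this is a sketch of the notion, not of any actual neural model.]

```python
# Toy first-order vs. second-order (meta) representation (illustrative sketch).
first_order = []    # representations of the world
meta = []           # representations of the system's own earlier processing

def perceive(stimulus):
    rep = {"about": stimulus, "kind": "world"}
    first_order.append(rep)
    return rep

def introspect():
    # "Turn inward": represent the system's own recent representational activity.
    recent = first_order[-3:]
    meta_rep = {"about": [r["about"] for r in recent], "kind": "own processing"}
    meta.append(meta_rep)
    return meta_rep

for s in ("face", "voice", "gesture"):
    perceive(s)
print(introspect())
```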
Psychology
Self-awareness
has been called "arguably the most fundamental issue in psychology, from
both a developmental and an evolutionary perspective."
Self-awareness
theory, developed by Duval and Wicklund in their 1972 landmark book A theory of objective self-awareness,
states that when we focus our attention on ourselves, we evaluate and compare
our current behavior to our internal standards and values. This elicits a state
of objective self-awareness.
We become
self-conscious as objective evaluators of ourselves. However self-awareness is
not to be confused with self-consciousness. Various
emotional states are intensified by self-awareness. However, some people may
seek to increase their self-awareness through these outlets. People are more
likely to align their behavior with their standards when made self-aware.
People will be negatively affected if they don't live up to their personal
standards. Various environmental cues and situations induce awareness of the
self, such as mirrors, an audience, or being videotaped or recorded. These cues
also increase accuracy of personal memory. In
one of Demetriou's neo-Piagetian theories of cognitive development,
self-awareness develops systematically from birth through the life span and it
is a major factor for the development of general inferential processes.
Moreover, a series of recent studies showed that self-awareness about cognitive processes participates in general intelligence on a par with processing efficiency functions, such as working memory, processing speed, and reasoning. Albert Bandura's theory of self-efficacy builds on our varying degrees of self-awareness. It is "the belief in one's capabilities to organize and execute the courses of action required to manage prospective situations." A person's belief in their ability to succeed sets the stage to how they think, behave and feel. Someone with a strong self-efficacy, for example, views challenges as mere tasks that must be overcome, and are not easily discouraged by setbacks. They are aware of their flaws and abilities and choose to utilize these qualities to the best of their ability. Someone with a weak sense of self-efficacy evades challenges and quickly feels discouraged by setbacks. They may not be aware of these negative reactions, and therefore do not always change their attitude. This concept is central to Bandura's social cognitive theory, "which emphasizes the role of observational learning, social experience, and reciprocal determinism in the development of personality."
Philosophy
Locke
An early philosophical discussion of self-awareness is that of
John Locke. Locke was apparently influenced by Rene Descartes' statement
normally translated 'I think, therefore I am' (Cogito ergo sum). In chapter XXVII "On Identity and
Diversity" of Locke's An Essay
Concerning Human Understanding (1689) he conceptualized consciousness as
the repeated self-identification of oneself through which moral responsibility could
be attributed to the subject—and therefore punishment and guiltiness justified,
as critics such as Nietzsche would point out, affirming "...the psychology
of conscience is not 'the voice of God in man'; it is the instinct of cruelty
... expressed, for the first time, as one of the oldest and most indispensable
elements in the foundation of culture." John Locke does not use the terms self-awareness
or self-consciousness though.
According to Locke, personal identity (the self) "depends
on consciousness, not on substance." We are the same person to the extent that
we are conscious of our past and future thoughts and actions in the same way as
we are conscious of our present thoughts and actions. If consciousness is this
"thought" which doubles all thoughts, then personal identity is only
founded on the repeated act of consciousness: "This may show us wherein
personal identity consists: not in the identity of substance, but ... in the
identity of consciousness." For example, one may claim to be a
reincarnation of Plato, therefore having the same soul. However, one would be
the same person as Plato only if one had the same consciousness of Plato's
thoughts and actions that he himself did. Therefore, self-identity is not based
on the soul. One soul may have various personalities.
Locke
argues that self-identity is not founded either on the body or the substance,
as the substance may change while the person remains the same. "Animal
identity is preserved in identity of life, and not of substance", as the
body of the animal grows and changes during its life. Locke describes a case of a
prince and a cobbler in which the soul of the prince is transferred to the body
of the cobbler and vice versa. The prince still views himself as a prince,
though he no longer looks like one. This border-case leads to the problematic
thought that since personal identity is based on consciousness, and that only
oneself can be aware of his consciousness, exterior human judges may never know
if they really are judging—and punishing—the same person, or simply the same
body.
Locke argues that one may be judged for the actions of one's body rather than one's soul, and only God knows how to correctly judge a man's actions. Men also are only responsible for the acts of which they are conscious. This forms the basis of the insanity defense, which argues that one cannot be held accountable for acts in which they were unconsciously irrational or mentally ill. In reference to man's personality, Locke claims that "whatever past actions it cannot reconcile or appropriate to that present self by consciousness, it can be no more concerned in it than if they had never been done: and to receive pleasure or pain, i.e. reward or punishment, on the account of any such action, is all one as to be made happy or miserable in its first being, without any demerit at all."
Theater
Main article: Theater
Theater
also concerns itself with other awareness besides self-awareness. There is a
possible correlation between the experience of the theater audience and
individual self-awareness. As actors and audiences must not "break"
the fourth wall in order to maintain context, so individuals
must not be aware of the artificial, or the constructed perception of his or
her reality. This suggests that both self-awareness and the social constructs
applied to others are artificial continuums just as theater is. Theatrical
efforts such as Six Characters in Search
of an Author, or The Wonderful Wizard of Oz, construct
yet another layer of the fourth wall, but they do not destroy the primary
illusion. Refer to Erving Goffman's Frame
Analysis: An Essay on the Organization of Experience.
Science fiction
In science
fiction, self-awareness describes an essential human property that often (depending
on the circumstances of the story) bestows personhood onto a non-human. If a
computer, alien or other object
is described as "self-aware", the reader may assume that it will be
treated as a completely human character, with similar rights, capabilities and
desires to a normal human being. The
words "sentience", "sapience", and
"consciousness" are used in
similar ways in science fiction.
Selected
and edited from Wikipedia - self-awareness
[several sub-sections
deleted from above article - rho]
** **
On to another connection:
** **
Sentience
From
Wikipedia, the free encyclopedia
Sentience is the capacity to feel, perceive or experience subjectively. Eighteenth-century
philosophers used the concept to distinguish the ability to think (reason) from the ability to feel (sentience).
In modern Western philosophy, sentience is the ability to experience sensations (known
in philosophy of mind as "qualia"). In Eastern philosophy, sentience
is a metaphysical quality of all things that requires respect and care. The
concept is central to the philosophy of animal rights and also the anti-abortion movement
because sentience is necessary for the ability to suffer, and thus is held to
confer certain rights.
Philosophy and sentience
In the
philosophy of consciousness, sentience can refer to the ability of any entity
to have subjective perceptual experiences, or as some philosophers refer to
them, "qualia". " This
is distinct from other aspects of the mind and
consciousness, such as creativity, intelligence, sapience, self-awareness and
intentionality (the ability
to have thoughts about something). Sentience is a minimalistic way of defining consciousness, which otherwise
commonly collectively describes sentience plus other characteristics of the
mind.
Some
philosophers, notably Colin McGinn, believe that sentience will never be
understood, a position known as "new mysterianism". They do not deny
that most other aspects of consciousness are subject to scientific
investigation but they argue that subjective experiences will never be explained; i.e., sentience
is the only aspect of consciousness that can't be
explained. Other philosophers (such as Daniel Dennett, who also argues that
non-human animals are not sentient) disagree, arguing that all aspects of
consciousness will eventually be explained by science.
Ideasthesia
According
to the theory of ideasthesia, a sentient system has to have the capability to
categorize and to create concepts. Empirical evidence suggests that sentience
about stimuli is closely related to the process of extracting the meaning of
the stimuli. How one understands the stimuli determines how one experiences
them.
Indian religions
See also: Sentient
beings (Buddhism)
Eastern
religions including Hinduism, Buddhism, Sikhism and Jainism recognize non-humans as sentient beings. In Jainism and
Hinduism, this is closely related to the concept of ahimsa, nonviolence toward
other beings. In Jainism, all matter is endowed with sentience; there are five
degrees of sentience, corresponding to the number of senses a being possesses, from one to five.
Water, for
example, is a sentient being of the first order, as it is considered to possess
only one sense, that of touch. Man is considered a sentient being of the fifth
order. According to Buddhism, sentient beings made of pure consciousness are
possible. In Mahayana Buddhism, which includes Zen and Tibetan Buddhism, the concept is related to the
Bodhisattva, an enlightened being devoted to the liberation of others. The
first vow of a Bodhisattva states: "Sentient beings are numberless; I vow
to free them."
Sentience in Buddhism is the state of having
senses (sat + ta in Pali,
or sat + tva in Sanskrit). In Buddhism, there are
six senses, the sixth being the subjective experience of the mind. Sentience is
simply awareness prior to the arising of Skandha. Thus, an animal qualifies as
a sentient being.
Science fiction
See also: Artificial intelligence in fiction and Sentient AI
In science fiction, an alien, android, robot, hologram or
computer described as "sentient" is usually treated as a fully human
character, with similar rights, qualities, and capabilities as any other
character. Foremost among these properties is human level intelligence (i.e., sapience), but sentient characters also
typically display desire, will, consciousness, ethics, personality, insight,
humor, ambition and many other human qualities. Sentience, in this context,
describes an essential human property that brings all these other qualities
with it. Science fiction uses the words sapience,
self-awareness, and consciousness in similar ways.
This supports usage that is incorrect outside science fiction.
For example, a character describes his cat as "not sentient" in one
episode of Star Trek: The Next
Generation, whereas the term was originally used (by philosopher Jeremy
Bentham and others) to emphasize the sentience of animals (certainly including
cats).
Science
fiction has explored several forms of consciousness beside that of the
individual human mind, and how such forms might perceive and function. These
include Group Sentience, where a single mind is composed of multiple
non-sentient members (sometimes capable of reintegration, where members can be
gained or lost, resulting in gradually shifting mentalities); Hive Sentience,
which is the extreme form of insect hives, with a single sentience extended
over huge numbers of non-sentient bodies; and Transient Sentience, where a
lifeform is sentient only transiently and is aware of that transience.
Selected
and edited from Wikipedia
** **
And, on to another connection:
** **
Artificial consciousness
Artificial consciousness (AC),
also known as machine
consciousness (MC) or synthetic consciousness (Gamez 2008; Reggia 2013), is a field
related to artificial intelligence and cognitive robotics. The aim of the
theory of artificial
consciousness is to "define that which would have to be synthesized were
consciousness to be found in an engineered artifact" (Aleksander 1995).
Neuroscience hypothesizes that consciousness is
generated by the interoperations of various parts of the brain, called the neural correlates of
consciousness or NCC,
though there are challenges to that perspective. Proponents of AC believe it is
possible to construct systems (e.g.,
computer systems) that can
emulate this NCC interoperation.
Artificial consciousness concepts are also pondered in
the philosophy of artificial intelligence through questions about mind,
consciousness and mental states.
Philosophical views
As there are many hypothesized types of consciousness,
there are many potential implementations of artificial consciousness. In the
philosophical literature, perhaps the most common taxonomy of consciousness is
into "access" and "phenomenal" variants. Access
consciousness concerns those aspects of experience that can be apprehended, while phenomenal
consciousness concerns those aspects of experience that seemingly cannot be
apprehended, instead being characterized qualitatively in terms of “raw feels”,
“what it is like” or qualia (Block 1997).
Plausibility debate
Type-identity theorists and other skeptics hold the view that consciousness can only be realized in
particular physical systems because consciousness has properties that
necessarily depend on physical constitution (Block 1978; Bickle 2003).
In his article "Artificial Consciousness: Utopia or
Real Possibility" Giorgio Buttazzo says that despite our current
technology's ability to simulate autonomy, "Working in a fully automated
mode, they [the computers] cannot exhibit creativity, emotions, or free will. A
computer, like a washing machine, is a slave operated by its components."
For other theorists (e.g., functionalists), who define
mental states in terms of causal roles, any system that can instantiate the
same pattern of causal roles, regardless of physical constitution, will
instantiate the same mental states, including consciousness (Putnam 1967).
Computational Foundation
argument
One of the most explicit arguments for the plausibility
of AC comes from David Chalmers. His proposal, found within his article
Chalmers 2011, is roughly that the right kinds of computations are sufficient
for the possession of a conscious mind. In the outline, he defends his claim
thus: Computers perform computations. Computations can capture other systems’
abstract causal organization. Mental properties are organizationally invariant, so a
computer that captures a mind's causal organization would have that mind's mental properties.
The most controversial part of Chalmers' proposal is that
mental properties are "organizationally invariant". Mental properties
are of two kinds, psychological and phenomenological. Psychological properties,
such as belief and perception, are those that are "characterized by their
causal role". He adverts to the work of Armstrong 1968 and Lewis 1972 in claiming that "[s]ystems with
the same causal topology…will share their psychological properties."
Phenomenological properties are not prima facie definable
in terms of their causal roles. Establishing that phenomenological properties
are amenable to individuation by causal role therefore requires argument.
Chalmers provides his Dancing Qualia Argument for this purpose.
Chalmers begins by assuming that agents with identical
causal organizations could have different experiences. He then asks us to
conceive of changing one agent into the other by the replacement of parts
(neural parts replaced by silicon, say) while preserving its causal
organization. Ex hypothesi, the experience of the agent under transformation
would change (as the parts were replaced), but there would be no change in
causal topology and therefore no means whereby the agent could
"notice" the shift in experience.
Critics of AC object that Chalmers begs the question in
assuming that all mental properties and external connections are sufficiently
captured by abstract causal organization.
Ethics
Main
articles: Ethics of artificial intelligence, Machine ethics and Roboethics
If it were certain that a particular machine was
conscious, its rights would be an ethical issue that would need to be assessed
(e.g. what rights it would have under law). For example, a conscious computer
that was owned and used as a tool or central computer of a building or large
machine is a particular ambiguity. Should laws be made for such a case, consciousness
would also require a legal definition (for example a machine's ability to
experience pleasure or pain, known as sentience). Because artificial
consciousness is still largely a theoretical subject, such ethics have not been
discussed or developed to a great extent, though it has often been a theme in
fiction (see below).
The rules for the 2003 Loebner Prize competition
explicitly addressed the question of robot rights:
61. If, in any given year, a publicly available open
source Entry entered by the University of Surrey or the Cambridge Center wins
the Silver Medal or the Gold Medal, then the Medal and the Cash Award will be
awarded to the body responsible for the development of that Entry. If no such
body can be identified, or if there is disagreement among two or more
claimants, the Medal and the Cash Award will be held in trust until such time as the Entry may
legally possess, either in the United States of America or in the venue of the
contest, the Cash Award and Gold Medal in its own right.
Research and implementation proposals
Aspects of consciousness
There are various aspects of consciousness generally
deemed necessary for a machine to be artificially conscious. A variety of
functions in which consciousness plays a role were suggested by Bernard Baars
(Baars 1988) and others. The functions of consciousness suggested by Bernard
Baars are Definition and Context Setting, Adaptation and Learning, Editing,
Flagging and Debugging, Recruiting and Control, Prioritizing and
Access-Control, Decision-making or Executive Function, Analogy-forming
Function, Metacognitive and Self-monitoring Function, and Autoprogramming and
Self-maintenance Function.
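[Illustrative aside, not part of the quoted article: the "global workspace" idea behind these functions can be pictured as specialist processes competing for access to a workspace whose winning content is then broadcast back to all of them. The Python sketch below is only a cartoon of that broadcast cycle; the names (Specialist, workspace_cycle) and the random activation values are invented here and do not represent Baars's theory in detail or anyone's actual implementation.]

```python
# Minimal sketch of a Baars-style global workspace (illustrative only).
# Specialist processes compete for access; the winning content is broadcast to all.
import random

class Specialist:
    """A small unconscious process that proposes content with an activation level."""
    def __init__(self, name):
        self.name = name

    def propose(self, stimulus):
        # In a real system this would be perception, memory recall, etc.
        return {"source": self.name,
                "content": f"{self.name} interpretation of {stimulus!r}",
                "activation": random.random()}

    def receive_broadcast(self, message):
        # Every specialist hears the winning coalition ("conscious" content).
        print(f"{self.name} received: {message['content']}")

def workspace_cycle(specialists, stimulus):
    proposals = [s.propose(stimulus) for s in specialists]
    winner = max(proposals, key=lambda p: p["activation"])  # competition for access
    for s in specialists:                                   # global broadcast
        s.receive_broadcast(winner)
    return winner

if __name__ == "__main__":
    team = [Specialist(n) for n in ("vision", "memory", "language")]
    workspace_cycle(team, "red ball")
```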
Igor Aleksander suggested 12 principles for artificial consciousness (Aleksander 1995): The Brain is a State Machine, Inner Neuron Partitioning, Conscious and Unconscious States, Perceptual Learning and Memory, Prediction, The Awareness of Self, Representation of Meaning, Learning Utterances, Learning Language, Will, Instinct, and Emotion. The aim of AC is to define whether and how these and other aspects of consciousness can be synthesized in an engineered artifact such as a digital computer. This list is not exhaustive; there are many others not covered.
Awareness
Awareness could be one required aspect, but there are many problems with the exact definition
of awareness. The results of neuroscanning experiments on monkeys suggest that a
process, not only a state or object, activates neurons. Awareness includes creating and testing
alternative models of each process based on the information received through
the senses or imagined, and is also useful for making predictions. Such
modeling needs a lot of flexibility. Creating such a model includes modeling of
the physical world, modeling of one's own internal states and processes, and
modeling of other conscious entities.
There are at least three types of awareness: agency awareness, goal awareness, and
sensorimotor awareness, which may also be conscious or not. For example, in
agency awareness you may be aware that you performed a certain action
yesterday, but are not now conscious of it. In goal awareness you may be aware
that you must search for a lost object, but are not now conscious of it. In
sensorimotor awareness, you may be aware that your hand is resting on an
object, but are not now conscious of it.
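[Illustrative aside, not part of the quoted article: the distinction above, between what an agent is aware of and what it is currently conscious of, can be caricatured with a few lines of Python. The class and field names are invented for the example.]

```python
# Toy illustration of the agency / goal / sensorimotor awareness distinction.
# "Aware" facts persist in the agent's model; only some are currently conscious.
from dataclasses import dataclass, field

@dataclass
class AwarenessItem:
    kind: str          # "agency", "goal", or "sensorimotor"
    description: str
    conscious: bool = False   # is it in the current focus of attention?

@dataclass
class Agent:
    items: list = field(default_factory=list)

    def aware_of(self, kind):
        return [i for i in self.items if i.kind == kind]

    def currently_conscious(self):
        return [i for i in self.items if i.conscious]

agent = Agent(items=[
    AwarenessItem("agency", "performed an action yesterday"),
    AwarenessItem("goal", "must search for the lost object"),
    AwarenessItem("sensorimotor", "hand is resting on an object", conscious=True),
])
print([i.description for i in agent.aware_of("goal")])
print([i.description for i in agent.currently_conscious()])
```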
Because objects of awareness are often conscious, the distinction
between awareness and consciousness is frequently blurred or they are used as
synonyms.
Memory
Conscious events interact with memory systems in learning, rehearsal, and
retrieval. The IDA model elucidates the role of consciousness
in the updating of perceptual memory, transient
episodic memory, and procedural memory. Transient episodic and declarative
memories have distributed representations in IDA; there is evidence that this
is also the case in the nervous system. In IDA, these two memories are implemented
computationally using a modified version of Kanerva's sparse distributed memory architecture.
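[Illustrative aside, not part of the quoted article: Kanerva's sparse distributed memory writes a data vector into every randomly chosen "hard location" whose address lies within a Hamming radius of the write address, and reads by summing and thresholding the counters of the activated locations. The sketch below shows only that textbook mechanism with toy sizes; it is not IDA's modified version.]

```python
# Bare-bones Kanerva-style sparse distributed memory (illustrative sketch).
import random

DIM, LOCATIONS, RADIUS = 64, 200, 28   # small toy sizes

hard_addresses = [[random.randint(0, 1) for _ in range(DIM)] for _ in range(LOCATIONS)]
counters = [[0] * DIM for _ in range(LOCATIONS)]

def hamming(a, b):
    return sum(x != y for x, y in zip(a, b))

def write(address, data):
    # Every hard location within RADIUS of the address absorbs the data vector.
    for addr, ctr in zip(hard_addresses, counters):
        if hamming(addr, address) <= RADIUS:
            for i, bit in enumerate(data):
                ctr[i] += 1 if bit else -1

def read(address):
    # Sum the counters of the activated locations and threshold back to bits.
    sums = [0] * DIM
    for addr, ctr in zip(hard_addresses, counters):
        if hamming(addr, address) <= RADIUS:
            for i, c in enumerate(ctr):
                sums[i] += c
    return [1 if s > 0 else 0 for s in sums]

pattern = [random.randint(0, 1) for _ in range(DIM)]
write(pattern, pattern)          # autoassociative store
recalled = read(pattern)
print("bits recalled correctly:", DIM - hamming(pattern, recalled))
```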
Learning
Learning is also considered necessary for AC. According to Bernard
Baars, conscious experience is needed to represent and adapt to novel and
significant events (Baars 1988). Axel Cleeremans and Luis Jiménez define learning
as "a set of philogenetically [sic]
advanced adaptation processes that critically depend on an evolved sensitivity
to subjective experience so as to enable agents to afford flexible control over
their actions in complex, unpredictable environments" (Cleeremans 2001).
Anticipation
The ability to predict (or anticipate) foreseeable events
is considered important for AC by Igor Aleksander. The emergentist multiple
drafts principle proposed by
Daniel Dennett in Consciousness Explained may be useful for prediction: it
involves the evaluation and selection of the most appropriate "draft"
to fit the current environment. Anticipation includes prediction of
consequences of one's own proposed actions and prediction of consequences of
probable actions by other entities.
Relationships between real world states are mirrored in
the state structure of a conscious organism enabling the organism to predict
events. An artificially conscious machine should be able to anticipate events
correctly in order to be ready to respond to them when they occur or to take preemptive
action to avert anticipated events. The implication here is that the machine
needs flexible, real-time components that build spatial, dynamic, statistical,
functional, and cause-effect models of the real world and predicted worlds,
making it possible to demonstrate that it possesses artificial consciousness in
the present and future and not only in the past. In order to do this, a
conscious machine should make coherent predictions and contingency plans, not
only in worlds with fixed rules like a chess board, but also for novel
environments that may change, to be executed only when appropriate to simulate
and control the real world.
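[Illustrative aside, not part of the quoted article: the "evaluate and select the most appropriate draft" idea can be caricatured as simulating each candidate action against a world model and keeping the best-scoring prediction. The world model, actions, and scoring function below are placeholders invented for the example.]

```python
# Toy "multiple drafts" anticipation loop (illustrative sketch).
# Each candidate action is simulated against a simple world model and scored.

def predict(state, action):
    # Placeholder world model: the agent tracks a single number toward a goal.
    return state + {"wait": 0, "step": 1, "leap": 3}[action]

def score(predicted_state, goal):
    return -abs(goal - predicted_state)   # closer to the goal is better

def choose_action(state, goal, actions=("wait", "step", "leap")):
    drafts = {a: predict(state, a) for a in actions}          # generate drafts
    best = max(drafts, key=lambda a: score(drafts[a], goal))  # select best fit
    return best, drafts[best]

action, expected = choose_action(state=0, goal=2)
print(f"chosen action: {action}, expected outcome: {expected}")
```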
Subjective experience
Subjective experiences or qualia are widely considered to
be the hard problem of consciousness.
Indeed, it is held to pose a challenge to physicalism, let alone
computationalism. On the other hand, there are limits on what we can observe in
other fields of science, such as the uncertainty principle in physics, yet these
have not made research in those fields impossible.
Role of cognitive architectures
Main
article: Cognitive architecture
The term "cognitive architecture" may refer to
a theory about the structure of the human mind, or any portion or function
thereof, including consciousness. In another context, a cognitive architecture
implements the theory on computers. An example is QuBIC: Quantum and Bio-inspired
Cognitive Architecture for Machine Consciousness. One of the main goals of
a cognitive architecture is to summarize the various results of cognitive
psychology in a comprehensive computer model. However, the results need to be
in a formalized form so they can be the basis of a computer program.
Symbolic or hybrid proposals
Franklin's Intelligent
Distribution Agent
Stan Franklin (1995, 2003) defines an autonomous agent as
possessing functional consciousness when
it is capable of several of the functions of consciousness as identified by
Bernard Baars' Global Workspace Theory (Baars 1988, 1997). His brainchild IDA (Intelligent Distribution
Agent) is a software implementation of GWT, which makes it functionally
conscious by definition. IDA's task is to negotiate new assignments for sailors
in the US Navy after they end a tour of duty, by matching each individual's
skills and preferences with the Navy's needs. IDA interacts with Navy databases
and communicates with the sailors via natural language e-mail dialog while
obeying a large set of Navy policies. The IDA computational model was developed
during 1996–2001 at Stan Franklin's "Conscious" Software Research
Group at the University of Memphis.
It "consists of approximately a quarter-million lines of Java code, and almost completely consumes the resources of a 2001 high-end workstation." It relies heavily on codelets, which are "special purpose, relatively independent, mini-agent[s] typically implemented as a small piece of code running as a separate thread." In IDA's top-down architecture, high-level cognitive functions are explicitly modeled (see Franklin 1995 and 2003 for details ). While IDA is functionally conscious by definition, Franklin does "not attribute phenomenal consciousness to his own 'conscious' software agent, IDA, in spite of her many human-like behaviours. This in spite of watching several US Navy detailers repeatedly nodding their heads saying 'Yes, that's how I do it' while watching IDA's internal and external actions as she performs her task."
It "consists of approximately a quarter-million lines of Java code, and almost completely consumes the resources of a 2001 high-end workstation." It relies heavily on codelets, which are "special purpose, relatively independent, mini-agent[s] typically implemented as a small piece of code running as a separate thread." In IDA's top-down architecture, high-level cognitive functions are explicitly modeled (see Franklin 1995 and 2003 for details ). While IDA is functionally conscious by definition, Franklin does "not attribute phenomenal consciousness to his own 'conscious' software agent, IDA, in spite of her many human-like behaviours. This in spite of watching several US Navy detailers repeatedly nodding their heads saying 'Yes, that's how I do it' while watching IDA's internal and external actions as she performs her task."
Ron Sun's cognitive
architecture CLARION
CLARION posits a two-level representation that explains the distinction between
conscious and unconscious mental processes.
CLARION has been successful in accounting for a variety
of psychological data. A number of well-known skill learning tasks have been
simulated using CLARION that span the spectrum ranging from simple reactive
skills to complex cognitive skills. The tasks include serial reaction time
(SRT) tasks, artificial grammar learning (AGL) tasks, process control (PC)
tasks, the categorical inference (CI) task, the alphabetical arithmetic (AA)
task, and the Tower of Hanoi (TOH) task (Sun 2002). Among them, SRT, AGL, and
PC are typical implicit learning tasks, very much relevant to the issue of
consciousness as they operationalized the notion of consciousness in the
context of psychological experiments.
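[Illustrative aside, not part of the quoted article: CLARION's two-level distinction can be caricatured as an explicit rule level sitting above an implicit level that learns stimulus-response strengths from reinforcement. The cartoon below, including the rules and rewards, is invented for the example and is not Ron Sun's model.]

```python
# Cartoon of a two-level (explicit / implicit) architecture, loosely CLARION-like.
import random

rules = {"red light": "stop"}          # explicit, verbalizable knowledge
implicit = {}                          # implicit stimulus -> {response: strength}

def reinforce(stimulus, response, reward):
    implicit.setdefault(stimulus, {}).setdefault(response, 0.0)
    implicit[stimulus][response] += reward      # bottom-up, gradual learning

def act(stimulus):
    if stimulus in rules:                       # explicit level takes precedence
        return rules[stimulus]
    choices = implicit.get(stimulus, {})
    if not choices:
        return random.choice(["stop", "go"])    # no knowledge yet: explore
    return max(choices, key=choices.get)        # otherwise use implicit strengths

# Implicit practice on a situation no explicit rule covers.
for _ in range(30):
    response = random.choice(["stop", "go"])
    reinforce("green light", response, 1.0 if response == "go" else -1.0)

print(act("red light"), act("green light"))     # explicit rule vs. learned habit
```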
Ben Goertzel's OpenCog
Ben Goertzel is pursuing an embodied AGI through the open-source OpenCog project.
Current code includes embodied virtual pets capable of learning simple
English-language commands, as well as integration with real-world robotics,
being done at the Hong Kong Polytechnic University.
Connectionist proposals
Haikonen's cognitive
architecture
Pentti Haikonen (2003) considers
classical rule-based computing inadequate for achieving AC: "the brain is
definitely not a computer. Thinking is not an execution of programmed strings
of commands. The brain is not a numerical calculator either. We do not think by
numbers." Rather than trying to achieve mind and consciousness by identifying and implementing
their underlying computational rules, Haikonen proposes "a special
cognitive architecture to
reproduce the processes of perception, inner imagery, inner speech,
pain, pleasure, emotions and the cognitive functions
behind these. This bottom-up architecture would produce higher-level functions
by the power of the elementary processing units, the artificial neurons,
without algorithms or programs."
Haikonen believes that, when implemented with sufficient
complexity, this architecture will develop consciousness, which he considers to
be "a style and way of operation, characterized by distributed signal
representation, perception process, cross-modality reporting and availability
for retrospection."
Haikonen is not alone in this process view of consciousness,
or the view that AC will spontaneously emerge in autonomous agents that have a suitable neuro-inspired
architecture of complexity; these are shared by many, e.g. Freeman (1999) and Cotterill (2003). A low-complexity
implementation of the architecture proposed by Haikonen (2003) was reportedly not capable of AC, but
did exhibit emotions as expected. See Doan (2009) for a comprehensive introduction to
Haikonen's cognitive architecture. An updated account of Haikonen's
architecture, along with a summary of his philosophical views, is given in
Haikonen (2012).
Shanahan's cognitive
architecture
Murray Shanahan describes a cognitive architecture that combines Baars's idea of a global
workspace with a mechanism for internal simulation ("imagination")
(Shanahan 2006). For discussions of Shanahan's architecture, see (Gamez 2008)
and (Reggia 2013) and Chapter 20 of (Haikonen 2012).
Takeno's self-awareness
research
Self-awareness in robots is being investigated by Junichi
Takeno at Meiji University in Japan. Takeno asserts that he
has developed a robot capable of discriminating between a self-image in a
mirror and any other having an identical image to it, and this claim has already been
reviewed (Takeno, Inaba and Suzuki 2005). Takeno asserts that he first
contrived the computational module called a MoNAD, which has a self-aware
function, and he then constructed the artificial consciousness system by
formulating the relationships between emotions, feelings and reason by connecting
the modules in a hierarchy (Igarashi, Takeno 2007).
Takeno completed a mirror image cognition experiment
using a robot equipped with the MoNAD system. Takeno proposed the Self-Body
Theory stating that "humans feel that their own mirror image is closer to
themselves than an actual part of themselves." The most important point in
developing artificial consciousness or clarifying human consciousness is the development
of a function of self-awareness, and he claims that he has demonstrated
physical and mathematical evidence for this in his thesis.
He also demonstrated that robots can study episodes in
memory where the emotions were stimulated and use this experience to take
predictive actions to prevent the recurrence of unpleasant emotions (Torigoe,
Takeno 2009).
Aleksander's impossible
mind
Igor Aleksander, emeritus professor of Neural Systems
Engineering at Imperial College, has extensively researched artificial neural
networks and claims in his book Impossible Minds: My Neurons, My
Consciousness that the
principles for creating a conscious machine already exist but that it would
take forty years to train such a machine to understand language. Whether this
is true remains to be demonstrated and the basic principle stated in Impossible Minds—that the brain
is a neural state machine—is open to doubt.
Thaler's Creativity Machine
Paradigm
Stephen Thaler proposed a possible connection between
consciousness and creativity in his 1994 patent, called "Device for the
Autonomous Generation of Useful Information" (DAGUI), or the so-called
"Creativity Machine," in which computational critics govern the
injection of synaptic noise and degradation into neural nets so as to induce
false memories or confabulations that may qualify as potential ideas or
strategies. He recruits this neural architecture and methodology to account for
the subjective feel of consciousness, claiming that similar noise-driven neural
assemblies within the brain invent dubious significance to overall cortical
activity. Thaler's theory and the
resulting patents in machine consciousness were inspired by experiments in
which he internally disrupted trained neural nets so as to drive a succession
of neural activation patterns that he likened to stream of consciousness.
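[Illustrative aside, not part of the quoted article: the generator-plus-critic pattern described here can be caricatured as replaying stored patterns with injected noise and keeping whatever a simple critic judges novel. Everything below, including the toy "memories" and the trivial critic, is invented for the example and is not Thaler's patented architecture.]

```python
# Cartoon of a noise-driven generator with a critic (illustrative sketch only).
# The "generator" replays stored patterns with injected noise, producing
# confabulations; the "critic" keeps only those that look potentially useful.
import random

memories = ["cup with handle", "bowl with lid", "plate with rim"]
parts = ["handle", "lid", "rim", "spout", "legs"]

def generate(noise_level=0.5):
    base = random.choice(memories)                   # replay a trained pattern
    if random.random() < noise_level:                # "synaptic noise": swap a part
        obj, _, _ = base.partition(" with ")
        base = f"{obj} with {random.choice(parts)}"
    return base

def critic(idea):
    # Trivial novelty filter: keep ideas that are not literal memories.
    return idea not in memories

candidates = [generate() for _ in range(10)]
novel_ideas = [c for c in candidates if critic(c)]
print(novel_ideas)
```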
Michael Graziano's
attention schema
Main
article: Michael Graziano and The brain basis of consciousness
In 2011, Michael
Graziano and Sabine Kastner published a paper titled "Human
consciousness and its relationship to social neuroscience: A novel
hypothesis" proposing a theory of consciousness as an attention schema. Graziano
went on to publish an expanded discussion of this theory in his book
"Consciousness and the Social Brain" This Attention Schema Theory of
Consciousness, as he named it, proposes that the brain tracks attention to
various sensory inputs by way of an attention schema, analogous to the well
study body schema that tracks the spatial place of a person’s body.
This relates to artificial consciousness by proposing a
specific mechanism of information handling, that produces what we allegedly
experience and describe as consciousness, and which should be able to be
duplicated by a machine using current technology. When the brain finds that
person X is aware of thing Y, it is in effect modeling the state in which
person X is applying an attentional enhancement to Y. In the attention schema
theory, the same process can be applied to oneself. The brain tracks attention
to various sensory inputs, and one's own awareness is a schematized model of
one's attention. Graziano proposes specific locations in the brain for this
process, and suggests that such awareness is a computed feature constructed by
an expert system in the brain.
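[Illustrative aside, not part of the quoted article: the core claim, that awareness is the brain's simplified model of its own attention, can be caricatured in code as computing attention weights over inputs and then keeping only a coarse summary of them. The signals, threshold, and rounding below are placeholders invented for the example, not Graziano's model.]

```python
# Cartoon of an attention schema (illustrative sketch).
# The system computes attention weights over inputs, then keeps only a coarse,
# schematized summary of that attention: its "awareness" of what it attends to.

def normalize(weights):
    total = sum(weights.values())
    return {k: v / total for k, v in weights.items()}

def attend(signals):
    # Attention: enhancement proportional to raw signal strength.
    return normalize(signals)

def attention_schema(attention, threshold=0.3):
    # Awareness as a simplified model of attention: only the dominant targets,
    # with rough (rounded) strengths rather than the exact values.
    return {k: round(v, 1) for k, v in attention.items() if v >= threshold}

signals = {"red ball": 5.0, "humming fan": 1.0, "own hand on desk": 4.0}
attention = attend(signals)
awareness = attention_schema(attention)
print("full attention state:", attention)
print("schematized awareness:", awareness)
```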
Selected and edited from - Wikipedia - artificial
consciousness
** **
You have to clean up the above for the blog,
but the point is that you understand the 'basics' to determine your own sense
of criteria to be used in Soki's Choice. Post after clean up. - Amorella
1730 hours.
Stop for Carol's supper of ground chuck, tomatoes and rice stuffed in green
pepper -- very good. We watched NBC News and an episode of PBS's "Home
Fires". Otherwise, I have mostly been reading intently (the above) and
editing. Today's Wikipedia articles are the best yet, especially this last one
on artificial consciousness. This gives me the inner criteria for consciousness
that I need for understanding before Soki or Amorella do something with it in
context of Soki's Choice. I read this for an overall sense of what I
mean by consciousness. It is in the last few days of posting. Fortunately, this
is fiction but I want it to show probability and not fantasy.
You want to learn what you can for your own
personal philosophical use. To what end you are not sure. Post. - Amorella