What if a machine were able to
think? This, of course, is a perennially
popular theme in science fiction, with stories about computers or androids
becoming so sophisticated that they achieve a state of self-awareness. They become, in essence, living beings,
albeit ones that are composed of circuits and wires rather than blood and
veins. And usually, in these stories,
dire consequences follow for the human race.
But creating a genuinely thinking,
self-aware machine is a very tall order, and, after an initial enthusiastic
rush of optimism among proponents of “artificial intelligence” in the early
years of the computer revolution, the outlook for this achievement has dimmed
considerably. It has been discovered that human (and animal) intelligence involves much more than simply engaging in sequential trains of logic. Even with the development of “neural nets”, which simulate certain non-linear patterns of thinking such as pattern recognition, the gulf between this and genuine thinking still seems immense. All of these operations, after all, no matter how sophisticated, simply involve carrying out instructions that have been defined by the programmer. There is no genuine “understanding” on the computer’s part of what it is processing.
Philosopher John Searle, in his famous “Chinese Room” thought experiment, has compared the operations of a computer to a man sitting in a room who is being sent messages in Chinese, a language that he doesn’t understand. He is able to
“communicate” with the senders because he has an instruction book, written in
his own language, which directs him to send back particular sequences of
Chinese symbols whenever he encounters certain sequences of Chinese symbols in
the messages that he is receiving. He
has absolutely no comprehension of what any of these symbols mean, and if there
is indeed some sort of “conversation” going on between those who are exchanging
messages with him in Chinese, he is completely unaware of it. Searle contends that this is exactly what is happening with computers, and that it represents a nearly insurmountable obstacle to creating a genuinely “thinking” machine.
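To make the mechanics of the room concrete, here is a deliberately trivial sketch in Python (my own illustration, with invented rule-book entries – Searle’s argument allows the rules to be arbitrarily sophisticated, but the principle is the same). The replies are produced by pure pattern lookup, with no understanding of what any symbol means:

```python
# A toy "Chinese room": replies are produced by blind pattern matching.
# The rule-book entries are invented for illustration only.
RULE_BOOK = {
    "你好吗？": "我很好，谢谢。",        # "How are you?" -> "I'm fine, thanks."
    "你叫什么名字？": "我没有名字。",    # "What is your name?" -> "I have no name."
}

def man_in_the_room(message: str) -> str:
    # The man matches the incoming symbols against his instruction book
    # and copies out the prescribed reply. He understands none of it.
    return RULE_BOOK.get(message, "请再说一遍。")   # fallback: "Please say that again."

print(man_in_the_room("你好吗？"))   # a fluent-looking reply, minus any understanding
```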
Philosopher Ned Block has proposed a similar thought experiment on a vastly larger scale (sometimes called the “China brain”). What, he asked, if there were not just one processor of these symbols, but millions, or billions, of people, each of whom had an instruction book of their own, with directions in their own language on what to do whenever they received a set of symbols? Each of them might be directed to pass
another set of symbols on to a particular person nearby, who then passes on a
different set to somebody else, in accordance with his instructions, and so
on. None of the individuals in this massive group would understand the symbols that they were processing, any more than the single man in Searle’s room did. But would the group as a whole somehow comprehend the messages
being exchanged with the external communicant, even if none of the individuals
that made it up did? The obvious answer seems to be no: there would be no real difference between this scenario and the one described by Searle. In both cases we have people carrying out instructions on how to send words in a language that they don’t understand, and whether it is just one person or a billion would seem to make no difference.
But wait. Aren’t the billions of neurons in a human (or
animal) brain doing exactly this? Isn’t
each neuron receiving electrochemical signals from such sources as the optic nerves of the eyes, and then passing on electrochemical signals to other neurons, with each neuron simply and automatically carrying out instructions on how to respond to certain stimuli in the manner that nature has “hard-wired” it to do? And yet, somehow, from this process I
am able to create an image in my mind, think about it, have feelings about it,
and remember it. My neurons can’t have a
conversation with the external world, but I
can, although I couldn’t do it without them.
But has consciousness really emerged from these processes alone, as a
sort of “epiphenomenon”, or is something else responsible for it – something
which has yet to be discovered by modern science?
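A minimal sketch may make this point vivid (again my own illustration, with hand-picked weights): three “neurons”, each blindly applying the same fixed threshold rule, collectively compute something – here, the exclusive-or function – that no individual unit computes or “knows” anything about:

```python
# Three hard-wired threshold units, each following the same blind rule:
# sum the weighted inputs and fire if the total reaches the threshold.
def unit(inputs, weights, threshold):
    return 1 if sum(i * w for i, w in zip(inputs, weights)) >= threshold else 0

def network(a, b):
    # The wiring is chosen so that the ensemble computes exclusive-or (XOR).
    either   = unit([a, b], [1, 1], 1)          # fires if a OR b
    not_both = unit([a, b], [-1, -1], -1)       # fires unless a AND b
    return unit([either, not_both], [1, 1], 2)  # fires if both of the above fire

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", network(a, b))   # 0 0 -> 0, 0 1 -> 1, 1 0 -> 1, 1 1 -> 0
```

The “knowledge” of exclusive-or resides nowhere but in the wiring of the ensemble – which is precisely the puzzle that Block’s crowd, and our own neurons, present.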
In spite of this apparent
counter-example from living organisms, Searle’s arguments still seem pretty
compelling: it is hard to believe that the mere processing of a series of
simple instructions, regardless of how many millions of these processes are now
being carried out in a modern computer, could ever amount to anything like
genuine thinking, or even experiencing, let alone self-awareness. Hence, many computer architects have lowered
the bar, and set as the ultimate goal for artificial intelligence the mere
simulation of thinking. While conceding
that a computer might never be able to genuinely think, they aspire to develop a machine that behaves in ways that make us think it is thinking.
[Photo: Alan Turing]
In 1950, Alan M. Turing, a British
mathematician, logician, and visionary of computer science, suggested a
criterion for determining when such a threshold has been reached: the “Turing
Test”. Imagine communicating with some
other entity via a keyboard and a monitor.
You don’t know if this other entity, which is located somewhere else, is
a human being or a machine. You must try
to determine this by typing questions with the keyboard and viewing the
entity’s responses on the monitor. Can a
machine ever be built that will be able to convince any such person that it is
a living, thinking being, regardless of the questions that are asked? Turing was optimistic. He predicted, in the year that he suggested
this test, “. . . that in about fifty years’ time, it will be possible to programme computers . . . to . . . play the imitation game so well that an average interrogator will not have more than a 70 percent chance of making the right identification after five minutes of questioning.”
It turns out that Turing was
overoptimistic. It is now more than 60
years since he made that prediction, and to date no machine has been able to
fulfill it. (And this in spite of the fact that a $100,000 grand prize has been offered in an annual competition to anyone who can present a machine that passes a version of the Turing Test. This competition, known as the Loebner Prize, has been held since 1991.
Smaller sums have been awarded to machines in the contest that have
demonstrated simpler feats of conversational mimicry.)
How exactly should a computer be
programmed to mimic living beings? Is there a set of fundamental instructions that would be most effective in creating a “life-like” computer or automaton?
Some computer architects, rather than focusing on sophisticated programs
that would simulate high-level life processes such as communication and
problem-solving, have instead turned their attention to the opposite end of the
biological ladder, and concentrated on the creation of self-replicating automata, a line of inquiry pioneered by the mathematician John von Neumann. Self-replication, after all, seems to lie at the basis of all life, down to the cellular level. The science fiction writer Isaac Asimov, in his 1942 short story Runaround, placed equal emphasis on what robots could not do, with his “Three Laws of Robotics”, though even in these the goal of self-protection was explicitly acknowledged (in the Third Law).
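In software, the simplest self-replicating automaton is what programmers call a “quine”: a program whose output is an exact copy of its own source code. Here is a standard minimal construction in Python, offered only as an illustration of the principle:

```python
# A minimal self-replicating program (a "quine"): running it prints
# an exact copy of its own source code, comments and all.
s = '# A minimal self-replicating program (a "quine"): running it prints\n# an exact copy of its own source code, comments and all.\ns = %r\nprint(s %% s)'
print(s % s)
```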
I believe that there actually is
one basic directive or instruction, which, if it could somehow be programmed
into a machine, would most effectively produce a machine that emulated a living
being. The directive would be to
maximize its locus of meaningful activity.
Now on the surface I know that this doesn’t seem to offer much in the
way of specifics. What, I suspect one
would ask, is “meaningful” supposed to mean?
The best way to get at this is to explain those conditions under which a
machine (or even an actual living being) would completely fail at following
this directive.
At one extreme, imagine a being
that can perceive its environment (whether this is through the standard human
senses of sight, hearing, touch, etc., or something different, and even
mechanical, such as a machine receiving inputs, is immaterial), but is
incapable of acting upon its environment, in any way. For a person, this might be like being
strapped to a table for his entire life, confined in an empty room, and forced
to spend his entire time watching a television monitor, and hearing sounds from
a speaker. He wouldn’t even be able to “change
the channels”: everything that he saw, heard, and felt would be guided by
influences over which he had absolutely no control. This, I argue, would be a life devoid of
meaning. If I have absolutely no
influence over my environment – if I can’t even affect it in the most trivial
of ways – then it is ultimately of no consequence to me. One might object that I could still develop sympathies for the other beings that I am forced to watch, and so be meaningfully engaged in that way. But my level of engagement would be no greater than the engagement I might have, in the real world, with a fictional character in a television program or movie. My degree of substantial connection would be the same in both cases – which is to say, I would have no real connection with them at all.
At the opposite extreme, imagine a
being who could control everything in its environment, including other living
beings. These others would be like
puppets, in that every action that they performed would be in accordance with
the will of this omnipotent being. In
such an environment, for this being, there would be no novelty, no spontaneity,
nothing new to learn or discover. An
existence such as this would be a sterile one, because of its complete
predictability. Complete power, complete
control, would ultimately be meaningless, because with it would come an
existence that offered no possibilities for growth through encounter with the
unknown. For one living such an
existence, it would be like spending life confined in a room filled with games
in which one moved all of the pieces.
Meaning, then, for me, represents a
happy middle between these two extremes of complete control and complete lack
of control. It represents life in a
world where I have some power over circumstances that affect me, so that I am
more than a passive observer, but not a power so extensive, so complete, that I
eliminate any possibility of novelty, of surprise, of having a genuinely
spontaneous experience. And to maximize
meaning – where absence of meaning would be represented by either of these two
extremes – would entail expanding one’s perceptual capabilities, while also
attempting to exert some control over the widening field of experience. It would also entail risk, of course, since
my efforts to widen my personal horizons will also potentially put me into
contact with somebody or something that could diminish, or even destroy, my
ability to exercise any kind of control over my life.
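For concreteness, one could even give this inverted-U idea a toy numerical form (entirely my own invention, and the particular formula is arbitrary): let “meaning” be the product of the control one has and the novelty that remains, so that it vanishes at both extremes and peaks somewhere in between:

```python
# Toy model of the prime directive: meaning vanishes at both extremes.
#   control = 0.0 -> the man strapped to the table (pure spectator)
#   control = 1.0 -> the omnipotent puppet-master (no novelty left)
def meaning(control: float) -> float:
    novelty = 1.0 - control           # total control leaves nothing to discover
    return 4.0 * control * novelty    # scaled so that the peak value is 1.0

for c in (0.0, 0.25, 0.5, 0.75, 1.0):
    print(f"control={c:.2f}  meaning={meaning(c):.2f}")
# meaning peaks at control = 0.50 and falls to zero at either extreme
```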
Would a machine ever be capable of
doing this? Would mobility be a prerequisite
for success? Would a machine have to, at
the very least, be able to procure the necessary resources for its continued
sustenance? Perhaps not. If the machine had the capability to
communicate with its surrounding environment, it might be able to “persuade”
others to attend to its needs, and even to assist it in its project of both
sustaining itself and increasing its loci of perception and control. Such an idea is not as far-fetched as it
sounds. While computers may not yet have reached the degree of conversational sophistication needed to pass the Turing Test, they have for some time been able to engage in at least rudimentary forms of conversation with human subjects. In the mid-1960s, Joseph Weizenbaum developed a computer program named ELIZA, which could mimic a psychotherapist by asking the sort of leading questions that practitioners in this field often use. (These questions were constructed from the prior answers of the human subject with whom the program was conversing.) In the 2013 science fiction film Her, a man falls in love with an artificially intelligent operating system that has a female voice. The film was reportedly inspired by an actual web application called “Cleverbot”, which mimics human conversation remarkably well and has performed strongly in Turing-style tests. Given the
capacity for human beings to develop deep emotional ties to animals of much
lower intelligence – and even in many cases inanimate objects – it does not
seem unlikely at all that some computer could someday enlist a human being who
has fallen under its spell to support its objectives. (And isn’t it the case that many people
already practice a form of this art?)
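ELIZA’s trick – constructing a leading question out of the subject’s own prior statement – can be suggested in a few lines of Python (a drastic simplification, and the patterns below are my own inventions, not Weizenbaum’s actual script):

```python
import re

# A drastically simplified ELIZA-style responder: it reflects the user's
# own words back as a leading question, via pattern matching and pronoun swapping.
REFLECTIONS = {"i": "you", "my": "your", "me": "you", "am": "are"}

def reflect(phrase: str) -> str:
    # "anxious about my job" -> "anxious about your job"
    return " ".join(REFLECTIONS.get(w, w) for w in phrase.lower().split())

def respond(statement: str) -> str:
    m = re.match(r"i feel (.*)", statement, re.IGNORECASE)
    if m:
        return f"Why do you feel {reflect(m.group(1))}?"
    m = re.match(r"i am (.*)", statement, re.IGNORECASE)
    if m:
        return f"How long have you been {reflect(m.group(1))}?"
    return "Please tell me more."

print(respond("I feel anxious about my job"))   # Why do you feel anxious about your job?
print(respond("I am tired"))                    # How long have you been tired?
```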
This idea of the prime directive
came to me many years ago: not as the result of thinking about computers and
artificial intelligence, but actually as a consequence of an ongoing debate
that I had with a college friend of mine.
We were starkly different in temperament: I tended to live a very
ordered existence, planning the activities of each day very carefully and
thoroughly, while she was much more of a spontaneous sort, and relished the
novelty of unplanned experience. She was
disdainful of my own behavior, saying that by controlling the activities of my
day so completely, I was squelching any chance of doing something genuinely
interesting, let alone exciting. I, on
the other hand, had developed a contempt for persons who exercised little or no
control over their own lives, because in the working class environment of my
youth, I had seen many people who seemed to be little more than helpless pawns
or victims of life: they “got pregnant”, “got married”, “got into trouble”, “got
hired”, “lost a job” – life for them seemed to be a series of passive
verbs. I made a resolution early in my life that, to escape this environment, I would exert a greater level of control over my life than they had over theirs.
But still, I could see that my
college friend had a point. After all,
in spite of her love of spontaneity, and contempt for a regimented life, it was
plain to me that she was not a victim like so many of the kids in my
neighborhood had been. She was in fact
doing very well in college, and later went on to a very successful professional
career. As I contemplated our respective
behaviors, as well as those of my childhood peers, I finally concluded that
there were two equally perilous extremes that one could succumb to in the
management of one’s life. At the one
extreme there was the over-controlled life, which I likened to a ship that
visited the same ports at rigorously scheduled intervals. For the captain and crew of such a ship,
existence was a very circumscribed thing, with a very predictable and
repetitious sequence of experiences. The
captain was in complete control of his ship, his crew, and his destiny, but
this complete control effectively eliminated any opportunities for learning,
growth, or change. At the opposite
extreme were those who were lost at sea on a makeshift raft, without a sail or
oar, completely at the mercy of the elements, tossed to and fro by the random
waves that hit the raft. Such an
existence was also a pathetic one, in its own sort of way, and an extremely
dangerous one as well. Between these two
extremes there was the explorer, a captain who, while in control of his ship,
regularly steered it into uncharted waters, in search of new lands to discover
and explore. There were risks, to be
sure, from the elements, as well as from peoples and animals that might be
encountered on foreign shores, but the rewards generally outweighed the
dangers. And even when the ship was
sailing into unfamiliar territory, its captain was doing so with a purpose, and
with the benefit of years of accumulated knowledge acquired from many such
voyages in the past. I would like to think that my college friend and I both – at least in our better moments – were like the explorer, though she occasionally ran the risk of ending up on that life raft, while I regularly risked succumbing to the temptation of limiting my voyages to familiar ports.
If a machine, then, could be programmed
to emulate the behavior of an explorer, endeavoring to broaden its boundaries
of perception by branching out into new and different environments, but also
endeavoring to maintain some degree of control over these encounters, then I
think it would most effectively mimic the behavior of a living being. But I do think that this would merely be a
successful act of mimicry, or simulation, rather than a genuine replication of
the life process. After all, in this
case, the machine would still simply be following a programmed set of
instructions, and the procedure would be just as open to Searle’s
critique. It represents, then, a
necessary, but not a sufficient, condition for the attainment of something like
living consciousness.
What would give genuine meaning to
this process would be two things. The
first is a conception of self, a feeling of identity. To be alive, there must be a sense of an I: these things are happening to me; I
want to do this or that. Would a
self-concept arise naturally out of a sufficiently complex process of
information processing, as some believe happens with our own vast network of
billions of neurons in our brains? Would
the programmed machine explorer eventually attain self-consciousness after
expanding its sphere of perceptions and accumulated information beyond some
threshold? (This is certainly a popular
motif in science fiction, though exactly how such a thing could happen is
never, of course, satisfactorily explained.)
Or is something else required – something that we have yet to
discover? In any case, without this
sense of identity, this self-directed frame of reference, there could not
possibly be something that is manifesting an authentic existence.
And even the existence of an I
would not be sufficient for genuinely meaningful experience. Imagine yourself in a universe where you were
completely alone, living in a room that contained (aside from whatever was
required to sustain you physically, such as food and water) a set of simple
puzzles to solve. As soon as you have
completed these, a door opens onto a wider space, with more elaborate
furnishings, and where there are more puzzles and other diversions, of a higher
level of complexity. You set about
working on these, and when you successfully master them, yet another door
opens, into an even wider space, with a set of more diverse and interesting
surroundings, and even more challenging puzzles to contend with. You eventually surmise that this is – and
will be – the sum total of your existence.
You are within what may very well be an infinite series of concentric enclosed spaces, each successive space containing more elaborate and interesting scenery and a more sophisticated set of challenges to grapple with. As you successfully master the
puzzles in each wider space, both your locus of control and your locus of
perceptual awareness increase. At the
end of such a life, regardless of how “successful” you were (as measured by the
number of widening spaces you were able to attain, during your lifetime), will
you feel that your life was a meaningful one?
Will you feel that it was a satisfying one? I suspect that you wouldn’t, nor would any
reasonably intelligent being with a sense of self.
What’s missing? An “other”: one or more beings, also existing
in your universe, that are able to engage with you and your projects: competing
with you, collaborating with you, evaluating you, and seeking evaluation from
you. The philosopher Martin Buber talked
of two dimensions of relationship that one has with one’s external environment:
“I-It”, and “I-Thou”. The first
relationship is a strictly instrumental one, in which what is encountered is
treated as a thing, to be manipulated, shaped, destroyed, or put aside. The second relationship is a personal one, in
which a sort of resonance occurs: a recognition that the other you are
encountering is, in some sort of way, like you: more than a thing, for the same
(perhaps only vaguely understood) reason that you realize that you, yourself,
are more than merely a thing.
As I engage in following the “prime
directive” in an “I-It” universe, in which there is only me and what I perceive
to be a universe consisting only of things (even if many of these “things” are
actually alive, such as trees and grass), I run the risk of falling into one or
the other of the two extremes of meaninglessness. On the one hand, I may find myself in a
sterile, unchallenging environment: a gray, relatively empty world that leaves
me little to contemplate or contend with.
On the other hand, I may find myself completely overwhelmed by the
environment around me – crippled by it somehow, perhaps literally, for example
if I stumble and plummet down into a ravine, and find myself unable to
move. But even if I am successful in
widening my sphere of experience, there will still be a certain emptiness or
sterility to my existence.
In an “I-Thou” universe, as I set
about trying to increase my range of experience and control, I face a much more
interesting, and potentially fulfilling, set of challenges. Any encounter with a “Thou” (another being) –
even a friendly one – is inherently a contest.
Each of us vies for control during the encounter, while at the same time
yearning to benefit from the experience – to widen our horizons as a result of
it. When I converse with others, I
relish the opportunity to draw from the experiences conveyed in their words,
while at the same time I subtly compete to insert my own statements, and to
thereby maintain a sufficient level of influence over the dialogue. I want to be heard, at least as much as I
want to listen.
Every encounter with an other – a
“Thou” – is fraught with both opportunity and danger. In my daily commute on the subway, there is
the regular temptation to settle into that over-controlled life of the captain
who never strays from the familiar ports, in this case by sitting silently next
to another passenger during the entire ride.
The bold move into spontaneity – starting a conversation with that
fellow traveler – opens a universe of new possibilities for experience: an
enlightening discussion, a new friend, a business opportunity, or an
opportunity for romance. On the other
hand, risking an encounter with the “Thou” brings dangers as well: an
unpleasant exchange, an outright rejection or rebuff, or worse, violence or
predation. The struggle to maximize
one’s locus of meaning becomes a much more highly charged one in a universe
populated with “Thous” and not just “Its”.
And the line of demarcation between
an “It” and a “Thou” is a very fuzzy one.
In many cases, we treat our fellow human beings solely as means rather than as ends (as in the case of dictators who have sacrificed millions of human lives in the service of an abstract idea or policy), while even animals, such as family pets, can assume a central place in our lives.
Each of us draws that line in unique ways, and we conduct our behavior
in accordance with it.
I am reminded of an episode of the classic American television series The Twilight Zone (“Time Enough at Last”), about a man who shunned the company of other people, and who found meaning in his life only through the reading of books. He was a lowly employee of a bank, and one day, after emerging from the sheltered seclusion of that bank’s vault, he discovered that the entire human race had been instantly annihilated as the result of a nuclear war. His reaction was one of elation, as he hurried to the local library and exulted in the prospect of now being able to spend the rest of his life alone, leisurely poring through all of the volumes at his disposal, without interference from others. But just as he was about to enjoy his first book in what he perceived to be a paradise of seclusion, he dropped his glasses, accidentally stepped on them, and smashed them to pieces, leaving him virtually blind. In that moment, his heaven became a hell, for there was nobody to whom he could turn to replace the broken spectacles. His one source of real pleasure was now permanently beyond his reach, and he had only a life of misery, utter helplessness, and desolation to look forward to. He learned only too late that even for him, a misanthrope, the road to meaning entailed an ongoing engagement with the “Thou” – with the other people who could make his pursuit of the prime directive possible.
I don’t know if we will ever
actually be able to program a machine to follow that prime directive: to follow
a goal of maximizing its locus of meaningful experience by constantly seeking
out new means to expand its avenues of experience and its capacity for exerting
a reasonable modicum of control over its widening environment. I think that it would be very interesting, in
such a case, to see how such a machine would actually behave. Would it live out the worst nightmares
embodied in science fiction novels: of computers or androids who seek to
conquer or even eradicate the human race?
Or would it provide a rational model for how living, intelligent beings
should actually conduct themselves?
Would such a machine ever attain genuine consciousness, with the
capacity to know itself as an “I”, and to engage in real relationships with
others as “Thous”? That is the far more
interesting question. If the answer
proves to be an affirmative one, then it will certainly have a profound impact
on how we view life, consciousness, and meaning itself.