MATRIX PHILOSOPHY: ARTIFICIAL ETHICS by Julia Driver
The significance of The Matrix
as a movie with deep philosophical overtones is well recognized. Whenever
the movie is discussed in philosophy classes, comparisons are made
with Descartes' Meditations, particularly the dream argument
and the evil genius scenario, both of which are intended to generate
skeptical doubt. How do we know, for example, that we are awake now,
rather than merely dreaming? How do we know that our thoughts are
not being manipulated, and that our perceptions of reality
are accurate? The Matrix makes these doubts stand out vividly.
However, The Matrix raises many other interesting philosophical
issues, and ones that are worthy of further discussion. This essay
explores some of the moral issues raised in The Matrix. The
first is the issue of the moral status of the created beings, the
artificial intelligences, which figure into the universe
of The Matrix. The second is the issue of whether or not one
can do anything wrong in circumstances where one's experiences
are non-veridical; that is, where one's experiences fail to reflect
reality.
I. The Moral Status of Programs
There is a reality to the Matrix.
The substance of that reality may differ dramatically from the substance
we label "real" — the real world is the
desert reality that Morpheus reveals to Neo. But it is clear that,
out of the grip of the Matrix, though still having certain dream-like
experiences, Neo and his enlightened friends are dealing with actual
sentient programs, and making decisions that have actual effects for
themselves as well as the machines and the programs. What is the moral
status of the sentient programs that populate the Matrix, or, for
that matter, the moral status of the machines themselves? The universe
of The Matrix is populated with beings that have been created
— created by programmers or created by the machine universe
itself. The agents, such as Smith, Neo's pursuer, are prime examples.
These beings come into and go out of existence without comment on
the part of whoever controls the switches — and without any
moral debate on the part of the humans who also would like to see
the agents destroyed. There seems to be an implicit view that their
existence is less significant, their lives of less moral import, than
the lives of naturally existing creatures such as ourselves.
An obvious explanation for this attitude is that humans are long accustomed
to thinking of themselves as being at the center of the universe.
Our view of our geographic place changed with Copernicus. However,
our view of our dominant place in the moral universe has stayed fixed.
But, once again, science — and particularly, now, cognitive science — holds
the potential for challenging this certainty. And science fiction
such as The Matrix, which explores differing directions for
these potentialities, also brings challenges to this worldview. What
The Matrix offers is a vivid thought experiment. It is a thought
experiment which makes us ask the sort of "what if?" question
that leads to a change in self-conception. It forces us to see where
our well-accepted moral principles would take us within one possible
world.
We know that killing human beings is wrong. It is wrong because human
beings have moral standing. Human beings are widely believed to have
this standing in virtue of consciousness and sentience. For example,
a rock has no moral standing whatsoever. Kicking a rock does not harm
it, and no moral rights are violated. It is an inanimate, non-conscious
object incapable of either thought or sensation. Animals, however,
are generally taken to have some moral standing in virtue of their
sentience. Kicking an animal for no compelling reason is generally
taken to be immoral. Human beings have greater standing in virtue
of their higher rational capacities. They can experience more varied
and complex harms, and a wider range of emotional responses —
such as resentment — in virtue of their rationality. How
one came into existence is not taken to be morally significant. Some
people are the products of natural conception, and some are the result
of conception in the laboratory. This makes no difference to the possession
of those qualities we take to be morally significant — consciousness
and rationality. And, surely, the substance from which someone
is created is completely irrelevant to the issue of moral status.
If a person's consciousness could somehow be transferred to a
metallic or plastic robotic body, the end result would still be a
person.
It would seem, then, that the fact that one is created, or artificial,
is in no way relevant to one's moral standing. And, if this is
the case, then the world of The Matrix presents underappreciated
moral complexities. Agents such as Smith, while not very pleasant,
would arguably have moral standing, moral rights. Of course, Neo has
the right to defend himself — Smith is not, after all, an innocent.
Indeed, if the religious theme is pursued, he is an agent of darkness.
But any innocent creations of the machines — beings brought
into existence to populate the Matrix — also would have moral
rights. Just as it would be wrong to flip a switch and kill an innocent
human being, no matter how that human being came into existence, it
would be wrong to flip a switch and kill a sentient program. As long,
of course, as that program possessed the qualities we regard as morally
relevant. And this is where one of the primary issues raised by
the possibility of artificial intelligence becomes important to the
question at hand. Do these programs possess consciousness? Since we
are considering the world of The Matrix, let's look at
what evidence seems to exist in the movie. While we don't have
much information about the machines themselves, their agents are on
ample display.1
Smith, of course, and his colleagues seem remarkably without affect.
Yet, at critical points they do display emotions: anger, fear, and
surprise. They seem able to plan and to carry through on a plan. Smith
also displays a capacity for sadistic pleasure — at one point
he displays this, when he forces Neos mouth shut. Smith also
displays extreme fear near the end of the movie, when Neo leaps through
him. The agents display many, if not all, of the responses we associate
with consciousness and sentience. But this brings us to another skeptical
challenge posed in The Matrix. How can we be sure they do possess
minds, and are not mere automata, albeit highly complex ones? Though
the movie invites this reflection, it is important to see where this
challenge can take us. The "how can I be sure?" question
can extend beyond the agents to our fellow human beings. Since a person's
conscious experiences are essentially private, one cannot be directly
aware of another's experiences. We might try, as St. Augustine
suggested, to solve this problem by appeal to analogy: I do directly
experience my own mental states — I know that I am a conscious,
aware, being. I also know on the basis of observation that I am structurally
similar to other human beings. Thus, I reason by analogy, that they
must experience mental states as well.2
And, indeed, The Matrix invites such a comparison when the agents
display behavior consistent with the experience of certain psychological
states.3
Given, then, that we believe what we are invited to believe, it would
follow that the sentient programs, the cyber persons, do possess those
qualities we associate with moral standing. They have moral rights
on the basis of consciousness and sentience and rationality. Thus,
their moral standing is the same as that of human beings.
It is possible that human beings have some additional value —
a kind of antiquarian value. We are, so to speak, "the originals."
The original Mona Lisa, for example, has value in excess of its copies.
But this kind of value is not moral value and does not reflect on
the moral standing of the object, or the moral significance of the
lives themselves. The Mona Lisa does have value, but no moral standing
since it is a mere painting; it lacks consciousness. It may be damaged,
but not harmed in the way that humans and sentient creatures can be
harmed.
Perhaps the machines view humans this way. To the machines, the value
of humans is mainly instrumental. They are valued as a source of energy,
but they may also have some antiquarian value. Humans are merely relics
of a past they themselves helped to destroy. If that's the case,
the machines have turned the tables. They are making the same moral
mistake humans apparently made in the context of The Matrix, in viewing
other rational life forms as simple instruments, to use and destroy
as one wishes. Indeed, both sides of the conflict seem to have displayed
some moral blindness. The humans, in using and destroying, and the
machines, certainly, in their subjection of the humans. But both sides
view themselves as fighting for survival, and I imagine that Smith
and Smiths creators, as well as Neo and his friends, would argue
that moral qualms like these are a luxury.
II. Manipulation and Immorality
The world that the pre-enlightened
Neo inhabits is one made up by machines. The machines have created
a humdrum existence for humans, to keep them happy and pacified and
free of the knowledge that they are being used as a source of energy
for the machines. Most humans believe that this world is real, but
they are mistaken. Within this world they build lives for themselves,
have relationships, eat lovely dinners, and at least seem to both
create and destroy. To some extent this existence is dream like. It
isn't real. When the unenlightened person thinks he's eating
a steak, he isn't. Instead, the machines generate mental experiences
which correspond to the experience of eating a steak, but which are
non-veridical — that is, the person is not actually eating
a steak. There is no real or actual steak. The human being's
actions, in that respect, have no real or actual consequences in a
world that exists independently of his or her mind. However, even
in this unenlightened state, the humans do have some control,
since what they do in the Matrix has consequences which
are realized in the real world. Getting smashed by a truck in the
Matrix kills the person in reality. The Matrix offers a brain-in-a-vat
experience, but one where the experiencer does have some control.4
The enlightened can, in principle, understand the rules of the Matrix
and learn to exert that control with full understanding.5
But, as the steak example illustrates, there are many other actions
they perform that seem to have no effects in the real world. The pre-enlightened
Neo and most of the humans living in the Matrix seem to be deluded.
One issue raised by this is the extent to which they can be held responsible
for their actions in the Matrix. Suppose, just for the sake of argument,
that something like wearing fur is immoral. Is simply making a choice
to wear fur, along with the belief that one is wearing fur, enough
to make one guilty of wrongdoing? Is it really only the thought
that counts, morally? A competing view is that the choices people
make must result in actual bad consequences in order for them to be
guilty of wrongdoing; or, actual good consequences in order for them
to be considered to have acted rightly. So, the issue is that of whether
or not the moral quality of a person's action — its rightness
or wrongness — is determined solely by his or her subjective
states, or whether, instead, actual consequences figure into this
determination.
In the Matrix if fur is worn it is virtual fur, and not real —
though the wearer does not realize this. Again, this is because he
or she is being mentally manipulated. But is this a genuine delusion?
Certainly, an insane person who fails to have a grip on reality, and
is deluded in this sense, is thought to have diminished moral
responsibility for what he or she does while deluded. Such a person
is generally held to not be morally responsible in those circumstances.
He is not punished, though he may be confined to a mental hospital
and treated for his insanity. The explanation is that the actions
performed while insane are not truly voluntary. If the persons who
live in the Matrix are similarly deluded, then it would seem that
they are not responsible for what they do in the Matrix.
Some writers have argued that one cannot be held responsible for what
happens in a dream, since dreams themselves are not voluntary,
nor are the actions one seems to perform in a dream.6
Other writers, such as Henry David Thoreau, had the view that what
we seemed to do in a dream reflected on our character; and the contents
of dreams could reveal true virtue or vice.7
Even if the actions one performs in a dream have no actual good or
bad consequences, they reveal truths about ones emotional make-up,
and one's inner desires, and these, in turn, are revealing of
character. But, as we've discussed, the Matrix isn't a dream.
The unenlightened exist, rather, in a state of psychological manipulation.
The actions they seem to perform don't always have the effects
(in reality) that they have reason to expect, based on their manipulated
experiences. But even in the Matrix we can argue that they make voluntary
choices. They are not irrational. They are not like the insane. Neo
believes what any rational, reasonable person would believe under
the circumstances. The pre-enlightened are analogous to persons who
make decisions based on lies that others have told them. They act,
but without relevant information. It's that condition that Neo
would like to rectify at the end of The Matrix.
The view I favor is that, without actual bad effects, the actions
of those in the Matrix are not immoral. But, again, this claim is
controversial. Some would argue that it's simply "the thought
that counts"; that it is the person's intentions which determine
the moral quality of what he or she does. Immanuel Kant, for example,
is famous for having claimed that all that matters, intrinsically,
is a good will — actual consequences are irrelevant to
moral worth.8
If that's true, then the intention to do something immoral, along
with the belief that one has so acted, is enough to make one guilty
of moral wrongdoing. But it would then also be the case that forming
bad intentions in one's dreams is sufficient for immorality, and this
seems highly counterintuitive. Instead, it seems
more plausible that it must also be the case that there is some actual
bad brought about, or at least the realistic prospect of some actual
bad consequences, and thus non-veridical wrongdoing in
the Matrix is not actual wrongdoing.
This seems to be clearly the case in a dream. In a dream, when the
dreamer decides to do something bad, that decision doesn't affect
the real world. But the Matrix is not really a dream. If we assume
that the virtual world of the Matrix is complete — that
is, completely like the real world before the machines took over —
then the virtual harms are still "real" in that they are
realized in terms of actual unpleasant mental states. The
virtual fur coat, then, is the result of a virtual animal's being killed,
but a virtual animal with all the right sorts of mental states —
in this case, pain and suffering. If this is the case, then the killer,
though mistaken in thinking the dead animal real — has still
produced bad effects in the form of genuine pain and suffering. And
thus, the action is immoral even though non-veridical. However, if
the world of the Matrix is incomplete, the issue becomes more complicated.
If Cypher's virtual steak comes from a virtual meat locker, and
the meat locker is the end of the line — and the acquisition
of the steak does not involve the killing of a virtual animal with
all the same psychology of pain and suffering that a real animal
feels — then no moral harm has been done.
But note that Thoreau's point still holds even though the Matrix
is not exactly like a dream. That is — even if a person hasn't
actually done anything bad, or caused any real harm to another sentient
life form, we may still make a negative evaluation of the person's
character.
But my guess is that the Matrix is a complete alternate reality created
in the image of the pre-machine reality. And the Matrix, if it does
offer such a complete replication of the pre-machine reality, is truly
a self-contained world. It has its own objects, its own people, animals,
and ethics. The systematic deception of the humans doesn't
change this.
Julia Driver
Footnotes
1. The issue
of the moral status of the machines themselves should be kept distinct
from the issue of the moral status of the sentient programs. I will
focus on the latter issue here in discussion, simply because the movie
provides more information about the behavior of these constructs.
But the same points would hold for the machines themselves —
if they have those qualities that are morally significant, consciousness
and rationality, then they also possess moral standing.
2. St. Augustine, The
Trinity (8.6.9). Again, this line of reasoning is controversial
since it relies on a single-case analogy.
3. A lot hinges on what
we take to be "structurally similar." Some would argue that
while the sentient programs are not themselves structures, the machines
are, and thus the machines may possess consciousness, though the programs
cannot. However, I believe the sentient programs can be structurally
similar if that's understood functionally — their code has
structure which provides functional equivalence to the physical states
that underlie our mental states. But, this issue would be extremely
controversial, and there isn't enough time to delve into it more
fully here.
4. See Christopher Grau's
introductory essays on this site for more on dream skepticism and
brain-in-a-vat skepticism.
5. The unenlightened, on
the other hand, are constantly being "Gettiered". A woman
may have justified true belief that her husband is dead, because she
has just seen him smashed by a truck. But, being in the
Matrix, she lacks knowledge because she is deceived about the true
manner of his death.
6. See, for example, William
Mann's "Dreams of Immorality," Philosophy (1983),
pp. 378-85.
7. Thoreau writes about
this in A Week on the Concord and Merrimack Rivers (1849).
8. This
also is controversial, but see Kant's Foundations of the Metaphysics
of Morals, trans. Lewis White Beck, with critical essays ed. by
Robert Paul Wolff (NY: Macmillan, 1969):
Nothing in the world — indeed, nothing even beyond the
world — can possibly be conceived which could be called good
without qualification except a good will. … The good will is not
good because of what it effects or accomplishes or because of its
adequacy to achieve some proposed end; it is good only because of
its willing, i.e., it is good of itself. (pp. 11-12)