Can a Computer be Conscious? Part 3 (Ethics)

First, an overdue credit to Aeryk, the reason this blog even exists.  Once upon a time, Aeryk and I raided a pizza buffet place, and with a gut full of cheap pizza, we waxed philosophical.  We spoke of the purpose of life, and I opined that for me, learning was very enjoyable, but ultimately meaningless if what I learned died with me.  Aeryk then suggested I serialize what I learned by blogging, and the rest is history.

Introduction

I recently met Aeryk for coffee and he said that my recent articles on Computer Consciousness (here and here) seemed to be building up to Ethics.  He had a point.  Since fellow bloggers ausomeawestin and selfawarepatterns have taken some interesting angles on computers, robots and ethics, I’d like to go in a different direction with this.

Assuming a computer achieved consciousness, what — if any — moral impact follows?  For instance, would the computer have moral obligations? Would the computer have moral rights?

The Computer’s Moral Obligation

What are the moral implications of the following?

  1. A rock falls on a person.
  2. Cindy accidentally trips James.
  3. A machine is programmed to kill.
  4. John intentionally insults Mark.

I think only #4 exhibits any moral import, as I judge the morality of an act by the intention behind it. The rock and machine are acted upon by outside forces, and Cindy’s act had no intention behind it.

By the way, I’m not claiming free will exists, but moral thinking seems to require it, so I’ll just assume it does for the sake of this argument.

What is intention? What if the rock was conscious and interpreted the forces acting upon it as its own desire to fall on a person?  What if the machine interpreted its programming as a desire, a free choice to kill?  Would that change anything?  Not for me, because intention requires free will — the ability to act free of constraints. The illusion of will doesn’t count.

So what does consciousness have to do with free will? Nothing.  There is nothing in consciousness that would imply a freedom of will; for that matter, there is nothing in consciousness that would imply anything, as consciousness is 100% unnecessary for describing any of our acts.  What’s more, there’s nothing incoherent about being conscious and determined (think of a dream).

The converse is also plausible (insofar as free will is plausible): an agent can be an ultimate originator of an act without being conscious.

So why ask about free will with respect to consciousness if the two are independent of each other?  Maybe it’s because I see one property (consciousness) that I take to be an essence of “me”, and assume that another property I also take to be essential to “me” (free will) must come along with it. This was covered in a little more detail in Part 1.

How would you feel about being wronged by a determined, yet conscious entity?  What about being wronged by a non-conscious, yet free-willed entity?  Does the belief that the entity is conscious affect your reaction to the act?

The Machine as Moral Recipient

Would consciousness create moral obligations in us towards the machine?  The case here is stronger, but the obligation still does not follow automatically.

Take an example: is kicking a rock immoral?  What about kicking a person?  I believe the first isn’t immoral, but the second is.  Why?  Because a rock presumably feels no pain, while a person does.  So right off the bat, right and wrong are (at least partly) defined in terms of causing pain, and to feel pain, one must be conscious.

So there’s reason to regard consciousness seriously when thinking of ethical obligations towards an entity that may be conscious, but this isn’t enough.  Notice that the person must feel pain.  Just because an entity is conscious does not mean the entity can feel happiness or pain; those are particular states that may hold in some types of consciousness, but not in others.

Consciousness can exist without emotions.  There are numerous cases in my life in which I am conscious, but without emotion.  For instance, I am emotionless when confronted with the letter k vs. the letter l, the color red vs. the color blue, and so on.  Consciousness need not imply emotion, so why not a machine that is this way all the time?

If an entity were conscious but without emotion, then it could not be hurt.  Would one then be obligated to act “morally” towards it at all?  Could a person happily “torment” this machine without moral qualms?  Even if the machine were designed to yell in “pain” when damaged, would this imply what we think of as pain, or would it also be an emotionless state?

Troubling Questions with a Comical Interlude

What if a machine can feel pain, but is reprogrammed to be indifferent to its torments, or even to enjoy them?  What if someone does this just so that person can “torment” it with a clear conscience?  Technically, the machine isn’t being hurt, yet does that sit well with you?  If not, why not?  What if it’s happier now than it would have been had it been left alone?  If we argue it’s unethical to deprive it of freedom, what if it was never free to begin with?  Or maybe we should ask what’s so hot about freedom that it trumps happiness.  Is it really better to be a miserable king than a happy slave?

What if this were done to people, via brain-washing or genetic engineering?

This question was given a comical treatment in The Restaurant at the End of the Universe.  In that scene, our protagonists were in a restaurant and were confronted with a cow that wanted to be eaten.  It approached them, told them of the choicest parts of its body, and happily waited for their decision.  It even mocked the protagonist’s discomfort at eating a talking cow.  This same protagonist had no problem eating non-talking (and non-consenting) cows.

As an aside, that book is the second installment of The Hitchhiker’s Guide to the Galaxy series, which is a rich source of philosophical topics.  In fact, there’s a book about it.

Conclusion

Many questions, few answers.   Hey, I needed a “conclusion” section for balance!


16 thoughts on “Can a Computer be Conscious? Part 3 (Ethics)”

  1. Thanks for the link. Great questions!

    If a robot has been programmed to avoid damage to itself, to the extent that it prioritizes its actions to avoid that damage, and focuses resources to stop or minimize the damage when it is happening, is it feeling pain?

    I agree that the Restaurant at the End of the Universe scene is priceless!

    1. No problem :).

      IMO, pain is only pain if there’s an inner state of pain, and just acting in a fashion consistent with pain doesn’t necessarily mean pain is present. We’re back to the problem of trying to figure out if something is conscious.
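      To make that concrete, here is a toy Python sketch (entirely my own illustration; the Robot class, its thresholds and its messages are invented for this comment) of an agent whose outward behavior is consistent with pain, in that it interrupts its task and diverts effort when damaged, while the code says nothing whatsoever about an inner state.

      ```python
      # Toy sketch only: behavior consistent with "pain", with no claim
      # about any inner state.
      import random

      class Robot:
          def __init__(self):
              self.damage = 0.0
              self.mode = "working"

          def sense_new_damage(self):
              # Stand-in for real sensors: occasionally report fresh damage.
              return random.random() < 0.2

          def step(self):
              if self.sense_new_damage():
                  self.damage += 1.0
                  self.mode = "mitigating"   # drop the task, focus on the damage
                  print("ALERT: damage detected, diverting resources to repair")
              elif self.mode == "mitigating" and self.damage > 0:
                  self.damage = max(0.0, self.damage - 0.5)   # "repair"
              else:
                  self.mode = "working"
                  print("working on task")

      robot = Robot()
      for _ in range(10):
          robot.step()
      ```

      Whether behavior like that could ever amount to pain is, of course, exactly the open question.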

      What I love about The Hitchhiker’s Guide series is how it manages to be utterly absurd, yet still raise completely valid points. I might just classify it as a philosophical novel. In fact, in some ways, the structure is similar to Candide.

      1. You raise an interesting point. What does it mean to say that an entity has an “inner state” of something? My hand gets burned and my brain receives strong electrical impulses from the nerves in my hand. I’m programmed by evolution to want to do something about it, to focus my mental and physical resources to minimize the damage to my hand, or repair it afterward. I can override this programming, but only with extreme difficulty, and usually in the service of some good that ultimately satisfies some other programming on my part. Other than complexity, what separates my experience from the robot’s?

        Good point on Hitchhiker’s. Philosophical fiction with a sense of humor. Brilliant on Adams’s part.

      2. That’s a great question; inner states, unfortunately, are those things that seem so fundamental that it’s hard if not impossible to explain them any further. The only explanation that seemed workable was “what it is like to be something”. Again, the inability to observe an inner state from the outside stymies all attempts to get further.

        Presumably the robot has no “inner state”, no “awareness”; it’s not like anything to BE the robot. Thus the external reactions may be identical, but the inwardness is not.

        Yeah, Adams was awesome.

  2. Thanks for the link; I’m honored! Also, this is a fantastic piece with lots of fascinating ideas, so I will tend to only a few things that stood out to me.

    Early on you note that we are working with the moral data that we “judge the morality of an act by the intention behind it”. The moral difference between throwing a rock at a person with the intention of hurting them, and doing something that causes a rock to fall on someone’s head when you could not have reasonably foreseen this happening, is intuitively vast because intentions are relevant to the moral status of an action.

    Now, defining ‘intention’ in regards to morality has proven difficult; theorists concerned with the doctrine of double effect have been toiling over the necessary and sufficient conditions for ‘intention’ since St. Thomas Aquinas. One of the more popular conceptualizations is by Michael Bratman. He posits that the necessary and sufficient conditions for an ‘intention’ are that the agent reasons out a plan for how to make the effect obtain, other intentions are suppressed in order to ensure the obtaining of the effect, and the agent tracks their success at making the effect obtain.

    I think this conceptualization is satisfactory, but I want to note that it remains silent on the question of free will. Either hard determinism or agent-causal libertarianism could be true while this conceptualization of ‘intention’ is true. The hard determinist says that it feels like we have free will but in reality we do not because of infinite causal chains; thus, this view of intention accommodates the feeling of planning outcomes that are predetermined by causal chains. But these necessary and sufficient conditions entail a certain first-person phenomenological experience that, to me, necessitates consciousness. So, I want to register a disagreement: I submit that consciousness is required for intentions, but free will is not required for intentions.

    The consequence of this view is that it seems that it could be morally wrong to kick the rock IF I intended to hurt the rock. However, while there is no doubt something moronic about intending to hurt the rock (as to intend to hurt the rock one must truly believe that they CAN hurt the rock), I do think that if the person really truly wanted to hurt the rock, then it would be immoral. I think that we normally don’t consider kicking rocks morally wrong because we presuppose that no one actually intends to hurt rocks. From this it seems to follow that we might have certain duties to computers (they have negative rights — freedom from intentionally being caused pain), even if they are as devoid of consciousness or intelligence as a rock.

    1. BEAUTIFUL!

      I love your reasoning, especially how consciousness could be required for intention from a certain viewpoint. I think my post is compatible with the view you put forth, but unfortunately I was vague on key terms, going by what I assumed were the colloquial views.

      I was trying to explain the role of intention as the typical person (IMO) would see it. Such a person would see intention as the ultimate cause of any act, and therefore, incompatible with anything other than unconstrained free will.

      Your point on the rock is spot on. Yes, if the person kicked a rock thinking s/he would hurt it, that person would be immoral. Again, the unspoken assumption in my article was that the person kicking the rock and kicking the person would have the typical views of both (ie: that one doesn’t have feelings, while the other does).

      I think intention (like much else) is one of those concepts that breaks down under close scrutiny. For instance, if we take the deterministic view, then intention would be an inner state as there would be no will as most people think of it. If so, then intention has no causal power, which would violate what most people think of as intention. In such a view, a rock that fell off a ledge because it was precariously balanced would have intended to fall if it had a mental state and was good at justifying pre-existing conditions (which is what we often seem to do when we claim we cause an act).

  3. I’m not sure quite what you’re getting at in this post, but one thought occurred ….

    If a machine is conscious then it might as well be a human being or other animal, so I’m not sure that conscious machines would raise any new ethical problems. If conscious machines are possible then we are one, so it seems to me the discussion wouldn’t be changed if it was all about human beings.

    After all, from memory Asimov’s three laws could apply equally well to humans.

    1. Some questions arise with the prospect of machine consciousness, and one of them is ethical, so the post was an attempt to explore the ramifications of machine consciousness on ethics, and why (or even if) consciousness should have such an impact.

      Another way of looking at it is this: is consciousness central to certain ethical theories? I wasn’t so much getting at anything, as I was trying to explore the question.

      I agree that a conscious machine might as well be a person or animal. But to draw this distinction out, it helps to look at an alien consciousness, especially since we may be able to “accidentally” put some of our anthropocentric prejudices aside.

      I think Asimov’s Laws are too severe. Law #1 can force people to put themselves at risk, Law #2 makes us slaves to others, and Law #3 constrains the right to self defense. Remove Law #2 and relax some of the requirements of the remaining two Laws and you have a foundation for a workable ethics.
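      Just to make that structure explicit, here is a rough sketch (my own toy encoding, not anything from Asimov) of the Laws as a strict priority ordering, next to the relaxed variant described above; the Action fields are obviously stand-ins for real judgments.

      ```python
      from dataclasses import dataclass

      @dataclass
      class Action:
          harms_human: bool = False           # relevant to Law 1
          inaction_allows_harm: bool = False  # relevant to Law 1
          disobeys_order: bool = False        # relevant to Law 2
          endangers_self: bool = False        # relevant to Law 3
          is_self_defense: bool = False

      def three_laws_permit(a: Action) -> bool:
          """Strict reading: each Law acts as a veto, in priority order."""
          if a.harms_human or a.inaction_allows_harm:   # Law 1
              return False
          if a.disobeys_order:                          # Law 2
              return False
          if a.endangers_self:                          # Law 3
              return False
          return True

      def relaxed_laws_permit(a: Action) -> bool:
          """Law 2 removed, Laws 1 and 3 softened: self-defense is allowed."""
          if a.harms_human and not a.is_self_defense:
              return False
          return True

      strike_back = Action(harms_human=True, is_self_defense=True)
      print(three_laws_permit(strike_back))    # False: Law 1 forbids it outright
      print(relaxed_laws_permit(strike_back))  # True under the relaxed reading
      ```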

      Interestingly enough though, the three Laws are similar to hardcore ethics of unconditional love.

  4. I can’t quite remember the three laws at the moment and will need to look them up.

    It seems to me that the laws for machine behaviour, biological or mechanical, would not need to be any different from the laws of human ethical behaviour. Or not unless we wanted them to be our servants. Maybe this is what led Asimov to make their laws different from ours.

    1. You hit the nail on the head. The 3 Laws were intended for Robots as servants, and some of Asimov’s stories revolved around the injustice and dilemmas raised by those laws.

      But you still have a solid (and really interesting) point, because really stringent ethical systems work out to those laws. For instance, what of religions or philosophies that advocate we work for the service of others? Or those that advocate non-violence, even if it means not being able to defend ourselves? What about those that tell us not to resist others, or even to walk 2 miles with someone when we are forced to walk 1? Sounds a lot like the 3 laws to me.

      So actually, equating Asimov’s 3 Laws with ethics holds up, provided we stress that these are VERY high ethical ideals (that nonetheless exist in some circles), and are not the ethics most people advocate.

  5. Yes. Perhaps something like Jainism. If we share their belief in our common identity then something close to Asimov’s laws would seem to follow naturally, but without the slavery clauses.

  6. So I read your article and decided I’d comment some other day since I was exhausted and it’s been two weeks or so. Yay me.
    I agree with your analysis that consciousness does not imply either free will or pain/pleasure. I’d say that, if we were to give a computer moral obligations or be morally obligated towards it, this computer would have to be a simulated human, for instance. We could do this (theoretically) by running the human as a huge neural network.
    One thought experiment: imagine your friend Matt (you probably don’t have a friend called Matt but whatever) is scanned and his neurons translated into a software neural network. You program a way to communicate, so that the computer interprets his signals and synthesizes a voice, for example, and does the same from outside to inside, creating the appropriate neural response to the different stimuli.
    Now the computer starts to speak and says: “man it’s me! What’s happening? Everything feels kinda weird! Help me out!” Then you start talking to him and he recalls events from your childhood, jokes around just like Matt would and behaves consistently like someone who just awoke and found himself in a not-very-well-done interface with the world.
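    Just to picture the plumbing (this is purely a toy sketch of the interface I described, not a claim about how an emulation would really work, and every name in it is made up): stimuli get encoded into inputs for the emulated network, and its outputs get decoded into a synthesized voice.

    ```python
    # Toy illustration: a random recurrent network stands in for the scanned
    # neurons; encode/decode are placeholders for the in/out interface.
    import numpy as np

    rng = np.random.default_rng(0)
    N = 1000  # number of simulated "neurons"

    class EmulatedBrain:
        def __init__(self):
            self.weights = rng.standard_normal((N, N)) * 0.01
            self.state = np.zeros(N)

        def step(self, sensory_input):
            self.state = np.tanh(self.weights @ self.state + sensory_input)
            return self.state

    def encode_stimulus(text):
        # Map outside events (here, text) onto "neural" input activity.
        vec = np.zeros(N)
        for i, byte in enumerate(text.encode()[:N]):
            vec[i] = byte / 255.0
        return vec

    def decode_to_speech(state):
        # Map output activity back into something a listener can hear.
        return f"[synthesized voice, mean activity {np.abs(state).mean():.3f}]"

    matt = EmulatedBrain()
    print(decode_to_speech(matt.step(encode_stimulus("Matt, can you hear me?"))))
    ```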
    Now you still haven’t solved the hard problem of consciousness. You don’t know if that’s Matt or a philosophical zombie. You don’t know if that being has qualia.
    So the question is: would you care at all about these philosophical considerations? Or just assume that’s Matt in the computer? What does that tell us about ourselves? And about others?

    1. Great question.

      Emotionally, I’d think the system was Matt, while logically claiming it wasn’t. I’d do this because it acted like Matt. Maybe the behavior would stoke my own emotions, memories, etc… (my identity-complex) and would thus trump everything else? Maybe people are treated as people-for-me (they are who they are because of how they make me feel)?

      In fact, let’s take the opposite view. Assume Matt is a close friend that I never met personally, but I communicate via chat and email. Now imagine that Matt suffered a severe personality change and memory loss (due to brain trauma). I’d argue the person was still Matt (it would be Matt’s consciousness), but I’d act as if it wasn’t Matt.

      I made it an email example since the visual image could be something I’d also attach to, but maybe even that would not be enough for me to act as if it were still Matt… What if I were in touch with Matt? What if Matt were replaced by a duplicate with a different personality, and I knew this?

      Again, I really like your thought experiment.

      1. I got it by watching “Almost Human,” JJ Abrams’s new show. There were these androids that talked like humans and acted like humans and I just kept thinking that, even if they were not conscious, I would still treat them as if they were. I mean, if I stick a dagger in his belly and he starts screaming and crying, I’ll probably stop, regardless of how much I doubt he’s really suffering.
        So I guess what I was getting at was that we seem to be programmed to assume consciousness in any entity that behaves similarly to us, and that whether they are actually conscious is irrelevant to the way we treat them.
        In your example, I’d tend to agree with you. If he suffers a personality change (and a memory loss) then I’ll just stop treating him as Matt or just stop treating him at all.
