First, an overdue credit to Aeryk, the reason this blog even exists. Once upon a time, Aeryk and I raided a pizza buffet place, and with a gut full of cheap pizza, we waxed philosophical. We spoke of the purpose of life, and I opined that for me, learning was very enjoyable, but ultimately meaningless if what I learned died with me. Aeryk then suggested I serialize what I learned by blogging, and the rest is history.
I recently met Aeryk for coffee and he said that my recent articles on Computer Consciousness (here and here) seemed to be building up to Ethics. He had a point. Since fellow bloggers ausomeawestin and selfawarepatterns have explored some interesting angles on computers, robots, and ethics, I’d like to go in a different direction with this.
Assuming a computer achieved consciousness, what moral implications, if any, would follow? For instance, would the computer have moral obligations? Would the computer have moral rights?
The Computer’s Moral Obligation
What are the moral implications of the following?
- A rock falls on a person.
- Cindy accidentally trips James.
- A machine is programmed to kill.
- John intentionally insults Mark.
I think only #4 exhibits any moral import, as I judge the morality of an act by the intention behind it. The rock and machine are acted upon by outside forces, and Cindy’s act had no intention behind it.
By the way, I’m not claiming Free Will exists, but moral thinking seems to require it, so I’ll just assume it exists for the sake of this argument.
What is intention? What if the rock was conscious and interpreted the forces acting upon it as its own desire to fall on a person? What if the machine interpreted its programming as a desire, a free choice to kill? Would that change anything? Not for me, because intention requires free will — the ability to act free of constraints. The illusion of will doesn’t count.
So what does consciousness have to do with free will? Nothing. There is nothing in consciousness that would imply a freedom of will; for that matter, there is nothing in consciousness that would imply anything, as consciousness is 100% unnecessary for describing any of our acts. What’s more, there’s nothing incoherent about being conscious and determined (think of a dream).
The converse is also plausible (insofar as free will is plausible at all): an agent can be the ultimate originator of an act without being conscious.
So why ask about free will with respect to consciousness if the two are independent of each other? Maybe it’s because I see one property (consciousness) that I take to be an essence of “me”, and assume that another property I also take to be an essence of “me” (free will) must come along with it. This was covered in a little more detail in Part 1.
How would you feel about being wronged by a determined, yet conscious entity? What about being wronged by a non-conscious, yet free-willed entity? Does the belief that the entity is conscious affect your reaction to the act?
The Machine as Moral Recipient
Would consciousness create moral obligations in us towards the machine? The case for this is stronger, but it still does not follow automatically.
Take an example: is kicking a rock immoral? What about kicking a person? I believe the first isn’t immoral, but the second is. Why? Because a rock presumably feels no pain, while a person does. So right off the bat, right and wrong are (at least partly) defined in terms of causing pain, and to feel pain, one must be conscious.
So there’s reason to take consciousness seriously when thinking about ethical obligations towards an entity that may be conscious, but this isn’t enough. Notice that the person must be able to feel pain. Just because an entity is conscious does not mean the entity can feel happiness or pain; those are particular states that may hold in some types of consciousness, but not in others.
Consciousness can exist without emotions. There are numerous cases in my life in which I am conscious, but without emotion. For instance, I am emotionless when confronted with the letter k vs. the letter l, the color red vs. the color blue, and so on. Consciousness need not imply emotion, so why not a machine that is this way all the time?
If an entity were conscious but without emotion, then it could not be hurt. Is one then obligated to act “morally” towards it? Could a person happily “torment” this machine without moral qualms? Even if the machine was designed to yell in “pain” when damaged, would this imply what we think of as pain, or would it also be an emotionless state?
Troubling Questions with a Comical Interlude
What if a machine can feel pain, but is reprogrammed to be indifferent to its torments, or even to enjoy them? What if someone does this just so they can “torment” it with a clear conscience? Technically, the machine isn’t being hurt, yet does that sit well with you? If not, why not? What if it’s happier now than it would have been had it been left alone? If we argue it’s unethical to deprive it of its freedom, what if it was never free to begin with? Or maybe we should ask what’s so hot about freedom that it trumps happiness. Is it really better to be a miserable king than a happy slave?
What if this were done to people, via brain-washing or genetic engineering?
This question was given a comical treatment in The Restaurant at the End of the Universe. In that scene, our protagonists were confronted with a cow that wanted to be eaten: it approached their table, told them of the choicest parts of its body, and happily awaited their decision. It even mocked the protagonist’s discomfort at eating a talking cow, though this same protagonist had no problem eating non-talking (and non-consenting) cows.
Many questions, few answers. Hey, I needed a “conclusion” section for balance!