Category Archives: moral ideas

Morality is the way humans solve Prisoner's Dilemma problems.

In a Prisoner's Dilemma problem, each agent has to choose between an action A that benefits itself by some amount X, and an action B that benefits every individual in the group by less than X. However, if most agents choose their individually best option, A, nobody benefits much, and everyone may even end up worse off.
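A minimal numeric sketch of this payoff structure (the specific numbers are my own illustrative assumptions, not from the text):

```python
# Toy N-player Prisoner's Dilemma (illustrative payoff values).
# Action "A" (selfish) pays X = 3 to the chooser alone.
# Action "B" (cooperative) pays 1 (< X) to every member of the group.

def payoffs(choices):
    """choices: one 'A' or 'B' per agent; returns each agent's total payoff."""
    shared = choices.count("B") * 1  # each cooperator adds 1 for everyone
    return [shared + (3 if c == "A" else 0) for c in choices]

print(payoffs(["B", "B", "B", "B"]))  # all cooperate -> [4, 4, 4, 4]
print(payoffs(["A", "B", "B", "B"]))  # one defector does best -> [6, 3, 3, 3]
print(payoffs(["A", "A", "A", "A"]))  # all defect -> [3, 3, 3, 3]
```

Switching from B to A always gains you 3 and costs you only the 1 unit of shared benefit you would have generated, so defecting is individually rational no matter what the others do; yet a group of defectors ends up worse off than a group of cooperators.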

There are several ways to make this less abstract, but I will use an uncommon one. Suppose you have to choose between advancing your career in a selfish and dodgy way, hurting several people on your way to the top, or painstakingly treating everyone well, never stepping on anyone's toes, and building the best career you can within those limits. Suppose also that you can be pretty sure that the first option will get you to the top sooner. What would you do? What should you do? What do you think anyone should do?

Intuitively, we know that one of the options is morally wrong and the other is morally right, or at least neutral. We know which is which because we feel it; we don't have to think about it.1

It also happens that the immoral option is the most rational one in terms of evolutionary fitness. It makes no sense to take the longer route, suffer through it, and perhaps not even reach the same result, when I could reach a better position faster without hurting my chances of finding a suitable mate for my offspring (maybe not within the pool of people I have hurt, but I will have access to another pool, more powerful and thus more convenient, evolutionarily speaking).2

This same reasoning holds for everybody, but if everybody acted that way, we would live in a horrible world where everyone hurts everyone else for their own benefit. This, like many moral problems, is a Prisoner's Dilemma problem (from now on, PDp)3.

Across history, there must have been groups that consistently tackled the PDp by choosing the most individually convenient action. We are not one of those groups. Those groups are probably extinct, or evolved into something different, because their behaviour would in the long run damage the group itself and make any civilization impossible.

We, as a human species, have mostly solved these types of problems through coordinated signalling. We have developed ways to signal to each other that someone is solving a PDp in an individualistic, group-hurting way. The signals lead to punishment: ostracism, imprisonment, etc. These signals are mostly aimed at other agents in the group. At a certain point, however, it becomes convenient to aim them at ourselves: we don't want to be the target of retaliation, we want to prevent punishment, and so we need to automatically tell ourselves the best way to solve a PDp. But watch out! The best thing to do in this case is the opposite of what a rational, individualistic agent would do. This feeling therefore has to be innate and irrational (it has to come from your gut and not from your head), because it goes completely against our evolutionary drive to do what is best for ourselves.

So we send signals to ourselves to avoid punishment. You also know that a signal saying "you are doing something PDp-wrong" (that is, something that would hurt your group and benefit yourself) is very likely to be followed by a punishment. When a signal is consistently associated with a certain punishment, the signal becomes a punishment in itself. In this way, signalling that someone is doing something PDp-wrong is a way of punishing them, and people have developed ways of signalling each other efficiently. Internally, people can signal-and-punish themselves with a sense of guilt for a PDp-wrong action. Externally, they can use a variety of techniques, such as social shaming. If this seems absolutely horrible to you, imagine a society where it doesn't happen. Take the signalling out of the equation, and the people who solve PDps in an individualistic way will take over, which would be horrible for everyone.
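The claim that defectors take over once signalling-backed punishment disappears can be sketched with a toy replicator dynamic (all payoff and punishment values below are my own illustrative assumptions):

```python
# Toy replicator dynamics for a population of cooperators and defectors.
# Cooperators generate a public benefit everyone shares; defectors grab a
# selfish bonus on top, minus any punishment. Numbers are illustrative.

def defector_share(rounds, punishment, start=0.1):
    x = start  # fraction of defectors in the population
    for _ in range(rounds):
        public = (1 - x) * 4.0             # benefit created by cooperators
        pay_coop = public
        pay_def = public + 3.0 - punishment
        mean = x * pay_def + (1 - x) * pay_coop
        x = x * pay_def / mean             # shares grow with relative payoff
    return x

print(defector_share(50, punishment=0.0))  # no punishment: defectors dominate
print(defector_share(50, punishment=5.0))  # punishment: defectors stay rare
```

With no punishment, defectors always out-earn cooperators and their share climbs toward 1; a punishment larger than the selfish bonus reverses the inequality and drives them toward extinction.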

So we developed signals for actions that are good for the individual but bad for the group. These signals are associated with punishment, and so become punishments in themselves. They can be targeted at others, but it soon becomes convenient to target them at yourself as well, to pre-empt the group's retaliation. Put all of this together, and you get a morality system.

Morality is the way humans solve Prisoner's Dilemma problems. A moral problem is a Prisoner's Dilemma problem.

Why is the most rational action in moral problems to behave immorally? Because morality developed precisely to prevent people from behaving rationally in moral problems.

Drink, drive, and kill: are you morally responsible?

This is controversial.
I am about 60% confident about the following argument.

I designed two thought experiments to better understand the moral implications of drinking, driving, and killing, and of all similar situations where you voluntarily put yourself in a state of altered consciousness and then do something terrible.

Thought experiment number 1. There is a red button in front of you. If you press it, you will perform an action: you will initiate it, and you won't be able to stop until it is completed. Whether you are conscious of the action is irrelevant; what matters is that you cannot control your body while it performs it. The action is chosen at random from all the actions drunk people perform. Most of these actions are harmless: most of the time you just have a nice night out with your friends, sometimes you do something stupid like getting a tattoo, but in a few cases you do something terrible and kill someone. You don't have to press the button.
If you do, and you end up killing someone, did you make an immoral choice?
My intuition says yes, and I bet yours does too.

Thought experiment number 2. There is a blue button in front of you. If you press it, one of two things happens: 1) with 99% probability, 10,000 people with incurable cancer are cured immediately, or 2) with 1% probability, one random person in the world dies. You don't have to press the button.
If you do, and you end up killing the one person, did you make an immoral choice?

My moral intuition says no. My moral intuition says that the immoral action is not pressing the button. And saying no to this thought experiment changed my point of view about the first one.

The second thought experiment suggests that we shouldn't morally judge people on the outcome of their action, since the outcome may come down to luck. We should judge them on the expectation over all possible outcomes of their action.
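Applied to the blue button, that expected-outcome reasoning looks like this (valuing each cure at +1 and each death at −1 is my own crude illustrative scale, not something the thought experiment fixes):

```python
# Expected value of pressing vs. not pressing the blue button,
# valuing each cure at +1 and each death at -1 (illustrative scale).
p_cure, n_cured = 0.99, 10_000
p_kill, n_killed = 0.01, 1

ev_press = round(p_cure * n_cured - p_kill * n_killed, 2)
ev_abstain = 0.0  # not pressing cures no one and kills no one

print(ev_press)               # 9899.99
print(ev_press > ev_abstain)  # True
```

Whatever the button actually does on the day, the expected value of pressing is overwhelmingly positive, which is why the unlucky 1% outcome shouldn't change the moral verdict on the presser.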

How is this connected with drinking and driving? Assume that when you get drunk, you enter an altered state of consciousness. Within this altered state, someone is obviously performing actions, and this someone inhabits your body, but it is not you, since their mental state is substantially different from your average mental state. If this is true4, you shouldn't be judged on the action itself (since it is not you doing it); you should be judged on the decision to enter this new mental state (handing the reins of your actions over to drunk-you). I argue that the moral judgement of this decision depends on all the possible outcomes, estimated at the moment you are getting drunk.

To answer the question in the title:

Your moral responsibility in this case depends on the state of the world at the moment you decide to get drunk2. If you are about to drive home and you get drunk, you are more morally responsible than if you had gotten drunk at home, alone, whatever the outcome of your action. Let me spell it out more clearly: if you drink, drive, and don't kill anyone, you are as morally responsible as if you drink, drive, and kill3. Even more: if you are about to drive, but not yet driving, and you get drunk, you are in the same moral landscape as someone who decided to drive and ended up killing someone, even if you end up deciding not to drive4.