Category Archives: Decision-Making

Morality is the way humans solve Prisoner's Dilemma problems.

In a Prisoner's Dilemma problem, each agent has to choose between an action A that benefits itself by a certain amount X, and an action B that benefits each individual in the group by less than X. However, if most of the agents choose their individually best option, A, no one benefits, and everyone may even be harmed.
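To make the payoff structure concrete, here is a minimal sketch of a two-player version with made-up numbers (the values 5, 3, 0, and -1 are my own illustration, not from any particular formulation):

```python
# Hypothetical payoffs for a two-player version of the dilemma.
# payoffs[(mine, other)] = my payoff; A = selfish option, B = group option.
payoffs = {
    ("A", "B"): 5,   # I choose A while the other chooses B: I gain X = 5
    ("B", "B"): 3,   # both choose B: each gains less than X
    ("A", "A"): 0,   # both choose A: no one benefits
    ("B", "A"): -1,  # I choose B while the other chooses A: I even get damaged
}

# A is the better choice no matter what the other player does...
assert payoffs[("A", "B")] > payoffs[("B", "B")]
assert payoffs[("A", "A")] > payoffs[("B", "A")]
# ...and yet everyone choosing A is worse than everyone choosing B.
assert payoffs[("A", "A")] < payoffs[("B", "B")]
```

The dilemma is exactly this tension: A dominates individually, while mutual A is worse for everyone than mutual B.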

There are several examples of how to make this less abstract, but I will use an uncommon one. Let's say that you have to choose between advancing your career in a selfish and dodgy way, hurting several people in the process of getting to the top, and painstakingly treating everyone nicely, never stepping on anyone's toes, and building as good a career as you can within these limits. Let's say that you can be pretty sure that, with the first option, you will get to the top earlier. What would you do? What should you do? What do you think anyone should do?

Intuitively, we know that one of the options is morally wrong and the other one is morally right, or at least neutral. We know which one is which because we feel it; we don't have to think about it.1

It also happens that the immoral option is the most rational one in terms of evolutionary fitness. It doesn't make sense to take the longer route, suffer through it, and maybe not even reach the same result, when I can be in a better position faster, without hurting my chances of finding a suitable mate for my offspring (maybe not within the pool of people I have hurt, but I will have access to another pool of people, more powerful and thus more convenient, evolutionarily speaking).2

This same reasoning holds for everybody, but if everybody acted this way, we would live in a horrible world where everybody hurts each other for their own benefit. This, like many moral problems, is a Prisoner's Dilemma problem (from now on, PDp)3.

Across history, there must have been groups that consistently tackled the PDp by choosing the most individually convenient action. We are not one of those groups. Those groups are probably extinct or evolved into something different, as their actions would in the long run damage the group itself and make any civilization impossible.

We, as a human species, have mostly solved these types of problems through coordinated signalling. We have developed ways to signal to each other that someone is solving a PDp in an individualistic, group-hurting way. The signals lead to a punishment: ostracism, imprisonment, etc. These signals are mostly aimed at other agents in the group. At a certain point, however, it becomes convenient to aim them at ourselves: we don't want to be the target of retaliation, we want to prevent punishment, and thus we need to automatically tell ourselves what the best thing to do is when solving a PDp. But watch out! The best thing to do in this case is the opposite of what you would do as a rational, individualistic agent. This feeling therefore has to be innate and irrational (it has to come from your gut and not from your head), because it goes completely against our evolutionary drive of doing the best thing for ourselves.

So we send signals to ourselves to avoid punishment. You also know that a signal of "you are doing something PDp-wrong" (as in, something that would hurt your group and benefit yourself) is most likely going to be followed by a punishment. When a signal is consistently associated with a certain punishment, the signal becomes a punishment itself. In this way, signalling that someone is doing something PDp-wrong is a way to punish them, and people have developed ways of efficiently signalling each other. Internally, people can signal-and-punish themselves with a sense of guilt for taking a PDp-wrong action. Externally, they can use a variety of techniques, such as social shaming. If this seems absolutely horrible to you, imagine a society where it doesn't happen: take the signalling out of the equation, and the set of people who solve the PDp in an individualistic way will take over, which would be horrible for everyone.

So we developed signals for indicating actions that are good individually but bad for the group. These signals are associated with punishment, and they eventually become punishments themselves. They can be targeted at each other, but it soon becomes convenient to target them at yourself as well, to prevent group retaliation. Put all of this together, and you get a morality system.

Morality is the way humans solve Prisoner's Dilemma problems. A moral problem is a Prisoner's Dilemma problem.

Why is the most rational action in moral problems to behave immorally? Because morality has developed precisely to prevent people from behaving rationally in moral problems.

Quantile Probability Plot

In this post I will present some code I've written to generate Quantile Probability Plots. You can download the code from the MATLAB File Exchange website.

What are Quantile Probability Plots? They are plots widely used in psychology, especially in reaction-time (RT) experiments, when we want to analyse several distributions from several subjects in different conditions. First of all, let's see an example of a quantile probability plot from the literature:

[Figure: example quantile probability plot]

(from Ratcliff and Smith, 2011)

As explained in Simen et al., 2009:

"Quantile Probability Plots (QPP) provide a compact form of representation for RT and accuracy data across multiple conditions. In a QPP, quantiles of a distribution of RTs of a particular type (e.g., correct responses) are plotted as a function of the proportion of responses of that type: thus, a vertical column of N markers would be centered above the position 0.8 if N quantiles were computed from the correct RTs in a task condition in which accuracy was 80%. The ith quantile of each distribution is then connected by a line to the ith quantiles of the other distributions."

For example, in the graph presented we have four conditions. From the graph we can read off the proportion of correct responses, around 0.7, 0.85, 0.9 and 0.95 (look at the right side). For each of these distributions we compute 5 quantiles, plotted against the y axis (in ms). On the left side we have the distributions from the same conditions, but for the error responses!
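To make the construction explicit, here is a rough sketch of how the coordinates of one condition's column of markers could be computed. My code is MATLAB, but the sketch below uses Python/NumPy so it is easy to try; the distributions and variable names are made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Fake RTs (in ms) for one condition: 400 correct and 100 error trials.
correct_rt = rng.gamma(shape=5.0, scale=80.0, size=400)
error_rt = rng.gamma(shape=5.0, scale=100.0, size=100)

# x position: proportion of responses of each type.
accuracy = len(correct_rt) / (len(correct_rt) + len(error_rt))  # 0.8 here

# y positions: the 5 quantiles typically used in QPPs.
probs = [0.1, 0.3, 0.5, 0.7, 0.9]
correct_column = (accuracy, np.quantile(correct_rt, probs))      # right side
error_column = (1 - accuracy, np.quantile(error_rt, probs))      # left side
```

Repeating this for every condition and connecting the ith quantiles across columns gives exactly the structure shown in the figure above.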

With my code you can easily generate this kind of graph and even something more. Let's see how to use it.

First of all, you need to organize the data in this way: the first column has to be the dependent variable (reaction times, in our case); the second column, the correct/incorrect label (1 for correct, 0 for incorrect); the third column, the condition (any float/integer number, which does not need to be in order). This could be enough, but most of the time you want to calculate the average across more than one subject. If that is the case, add a fourth column with the subject number.
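As a sketch, here is how such a matrix could be assembled (in Python/NumPy rather than MATLAB, and with made-up values, just to show the column layout):

```python
import numpy as np

rng = np.random.default_rng(1)
n_trials = 12

rt = rng.uniform(300, 900, size=n_trials)            # column 1: reaction times (ms)
correct = rng.integers(0, 2, size=n_trials)          # column 2: 1 = correct, 0 = incorrect
condition = rng.choice([1, 2, 3, 4], size=n_trials)  # column 3: condition label
subject = rng.choice([1, 2], size=n_trials)          # column 4 (optional): subject number

# One row per trial, columns in the order the function expects.
data = np.column_stack([rt, correct, condition, subject])
```

The equivalent MATLAB matrix would simply be `data = [rt, correct, condition, subject]` with one row per trial.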

The classic way to use it is to just call it with the data:

quantProbPlot(data) will generate:

[Figure: default quantile probability plot]

We may want to add some labels to indicate the conditions. In this case we can use the optional argument 'condLabel': quantProbPlot(data,'condLabel',1) will generate:

[Figure: QPP with condition labels]

(For this and the following plots, the distributions will always look different simply because I generated a different sample distribution each time!)

This is only the beginning. Some papers plot the classic QPP with a superimposed scatter plot of individual RTs in each condition. Random noise is added on the x axis to improve readability. Example: quantProbPlot(data,'scatterPlot',1):

[Figure: QPP with superimposed scatter plot]

Nice, isn't it? I also elaborated some strategies to better compare error responses with correct responses, through two optional parameters. One is "separate" and the other is "reverse". separate can take 0, 1 or 2, whereas reverse can only take 0 or 1 and only works if separate is >0.

Generally, separate=1 separates the correct and the incorrect responses into two subplots, whereas separate=2 plots the correct and incorrect responses with two different lines. On its own, the latter is usually not very useful, for example:

[Figure: separate=2]

However, you can make it much more interesting when you combine it with reverse. In fact, if you call quantProbPlot(quantData,'separate',2,'reverse',1) you will obtain this nice graph:

[Figure: separate=2 with reverse=1]

which allows you to easily compare correct and incorrect responses!

With all these options, you can play around with the scatter plot, separation, labels, etc. in order to easily analyse your data.

The zip file also includes the Drift Diffusion Model file that I proposed last time. I used it to generate the dataset for testing the Quantile Probability Plot code. For some examples, open "testQuantProbPlot", also included in the zip file.

DOWNLOAD HERE!

Drift Diffusion Model

You can find some information about the Drift Diffusion Model here. There are probably several versions of the Drift Diffusion Model coded in MATLAB. I coded my own for two purposes:

1) If I code something, I understand it better.

2) I wanted to have a flexible version that I can easily modify.

I attach to this post my MATLAB code for the DDM. It is optimized to the best of my skill. It can be used to simulate a model with two choices (as usual) or one choice, with or without variability across trials (so it can actually be used to simulate a Pure DDM or an Extended DDM). The code is heavily commented.

The file can be downloaded here or, if you have a MATLAB account, here.

If you take a look at the code, you might be confused by the simultaneous presence of a for loop and the cumsum function. cumsum is a really fast way in MATLAB to compute the cumulative sum of a vector. However, in this case I have to sum an undefined number of points (since the process stops when it hits the threshold, and it is not possible to know when beforehand). I could just use an incredibly high number of points and hope the process hits the threshold at some point, or I could use a for loop to keep summing points until it hits the threshold. Both of these methods are computationally expensive. So I used a compromise: the software runs cumsum for the first maxWalk points and, if the threshold has not been hit, runs cumsum again (starting from the end of the previous run) within a loop (it repeats the loop up to 100 times, each time for maxWalk points). After some testing, this version is generally much faster than a version with only cumsum or only a for loop.
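The chunked-cumsum strategy can be sketched as follows. The original implementation is MATLAB; this is a Python/NumPy illustration of the same idea, and the function and parameter names are my own, not those of the attached file.

```python
import numpy as np

def ddm_first_passage(drift=0.3, noise=1.0, dt=0.001, threshold=1.0,
                      max_walk=2000, max_chunks=100, rng=None):
    """Simulate a two-boundary drift diffusion process in chunks:
    cumsum over max_walk steps at a time, stopping at the first
    threshold crossing. Returns (decision_time, choice) or (None, None)."""
    rng = rng or np.random.default_rng()
    x0 = 0.0
    for chunk in range(max_chunks):
        # One vectorised chunk of the random walk.
        steps = drift * dt + noise * np.sqrt(dt) * rng.standard_normal(max_walk)
        path = x0 + np.cumsum(steps)
        hits = np.nonzero(np.abs(path) >= threshold)[0]
        if hits.size:  # threshold crossed inside this chunk
            i = hits[0]
            decision_time = (chunk * max_walk + i + 1) * dt
            return decision_time, float(np.sign(path[i]))
        x0 = path[-1]  # resume the walk from where this chunk ended
    return None, None  # never crossed within max_chunks * max_walk steps
```

With a reasonable drift, most trials finish after one or two vectorised cumsum calls instead of thousands of scalar loop iterations, which is where the speedup comes from.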

This is an example of the resulting RT distribution with a high drift rate (e.g., the correct stimulus is easily identifiable):

[Figure: RT distribution with high drift rate]

I will soon post an alternative version of this file. It is less efficient, but it allows you to plot the process as it accumulates, which is quite cool.