Morality is the way humans solve Prisoner's Dilemma problems.

In a Prisoner's Dilemma problem, each agent has to choose between an action A that benefits itself by a certain amount X, and an action B that benefits each individual in the group by less than X. However, if most of the agents choose their individually best option, A, no one benefits, and everyone may even end up harmed.
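To make the payoff structure concrete, here is a minimal sketch in Python. All the numbers are illustrative assumptions of mine, not part of any formal model: action A gives the chooser 10 but costs every other group member 3, while action B gives every member of a 5-person group 3.

# Illustrative payoffs (assumed): A = selfish action, B = cooperative action.
n, X, cost, b = 5, 10, 3, 3

# Your own payoff from your choice, holding everyone else's choices fixed:
my_gain_from_A = X   # 10 -> individually, A always wins
my_gain_from_B = b   # 3

# What each agent ends up with when everyone makes the same choice:
each_if_all_B = n * b               # 5 * 3 = 15 each
each_if_all_A = X - cost * (n - 1)  # 10 - 3 * 4 = -2 each: everyone is damaged

print(my_gain_from_A, my_gain_from_B)  # 10 3
print(each_if_all_B, each_if_all_A)    # 15 -2

Whatever the others do, A is the better individual choice; yet if everyone follows that logic, everyone ends up worse off than if everyone had chosen B.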

There are several examples of how to make this less abstract, but I will use an uncommon one. Let's say that you have to choose between advancing your career in a selfish and dodgy way, hurting several people in the process of getting to the top. The other option is to painstakingly treat everyone nicely, never step on anyone's toes, and build as good a career as you can within these limits. Let's say that you can be pretty sure that, with the first option, you will get to the top earlier. What would you do? What should you do? What do you think anyone should do?

Intuitively, we know that one of the options is morally wrong, and the other one is morally right, or neutral. We know which one is which because we feel it; we don't have to think about it.1

It also happens that the immoral option is the most rational one, in terms of evolutionary fitness. It doesn't make sense to take the longer route, suffer through it, and maybe not even reach the same results, when I can be in a better position faster, without hurting my chances of finding a suitable mate for my offspring (maybe not within the pool of people I have hurt - but I will have access to another pool of people, more powerful and thus more convenient, evolutionarily speaking).2

This same reasoning holds for everybody, but if everybody did that, we would live in a horrible world where everybody hurts each other for their own benefit. This, like many moral problems, is a Prisoner's Dilemma problem (from now on, PDp)3.

Across history, there must have been groups that consistently tackled the PDp by choosing the most individually convenient action. We are not one of those groups. Those groups are probably extinct or evolved into something different, as their actions would in the long run damage the group itself, and would make any civilization impossible.

We, as a human species, have mostly solved these types of problems through coordinated signalling. We have developed ways to signal to each other that someone is solving a PDp in an individualist, group-hurting way. The signals lead to a punishment: ostracism, imprisonment, etc. These signals are mostly aimed at other agents in the group. At a certain point, however, it just becomes convenient to aim them at ourselves: we don't want to be the target of retaliation, we want to prevent punishment, and thus we need to automatically tell ourselves what is the best thing to do to solve a PDp. But, watch out! The best thing to do in this case is the opposite of what you would do as a rational individualistic agent. Thus, this feeling has to be innate and irrational (it has to come from your gut and not from your head), because it goes completely against our evolutionary drive of doing the best thing for ourselves.

So we send signals to ourselves to avoid punishment. You also know that a signal of "you are doing something PDp-wrong" (as in, something that would hurt your group and benefit yourself) is most likely going to be followed by a punishment. When you consistently associate a signal with a certain punishment, the signal becomes a punishment in itself. In this way, signalling that someone is doing something PDp-wrong is a way to punish them, and people have developed ways of efficiently signalling each other. Internally, people can signal-and-punish themselves with a sense of guilt for taking a PDp-wrong action. Externally, they can use a variety of techniques, such as social shaming. If this seems absolutely horrible to you, imagine a society where it doesn't happen. If you take the signalling out of the equation, the set of people who solve PDp in an individualistic way will take over, and this would be horrible for everyone.

So we developed signals for indicating actions that are good individually but bad for the group. These signals are associated with punishment, and they become punishments in themselves. They can be targeted at each other, but it soon becomes convenient to target them at yourself as well, to prevent group retaliation. Put all of this together, and you get a morality system.

Morality is the way humans solve Prisoner's Dilemma problems. A moral problem is a Prisoner's Dilemma problem.

Why is the most rational action in moral problems to behave immorally? Because morality has been developed precisely to prevent people from behaving rationally in moral problems.

Drink, drive, and kill: are you morally responsible?

This is controversial.
I am about 60% confident about the following argument.

I designed two thought experiments to better understand the moral implications of drinking, driving, and killing, and of all those similar situations where you voluntarily put yourself in a state of altered consciousness and then do something terrible.

Thought experiment number 1. There is a red button in front of you. If you press it, you will perform an action: you will initiate an action, and you won't be able to stop it until the selected action is completed. Whether you are conscious of your action or not is irrelevant. What is important is that you cannot stop your body from doing the action. The actions are randomly chosen across all the possible actions that a drunk person does. Most of these actions are harmless: most of the time you just have a nice night out with your friends, sometimes you'll do something stupid like getting a tattoo, but in a few cases you'll do something terrible and kill someone. You don't have to press the button.
If you do, and you end up killing someone, did you make an immoral choice?
My intuition says yes, and I bet yours does too.

Thought experiment number 2. There is a blue button in front of you. If you press it, two things can happen: 1) with 99% probability, 10,000 people with incurable cancer will be cured immediately, or 2) with 1% probability, 1 random person in the world will die. You don't have to press the button.
If you do, and you end up killing the one person, did you make an immoral choice?

My moral intuition says no. My moral intuition says that the immoral action is not pressing the button. But saying no to this thought experiment changed my point of view about the first one.

The second thought experiment suggests that we shouldn't morally judge people based on the outcome of their action. The outcome may be down to luck. We should morally judge them based on the integral over all possible outcomes of their action.
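As a toy illustration of judging by the integral over outcomes rather than by the realized outcome, here is the blue-button arithmetic in Python. The utility numbers are pure assumptions of mine, chosen only to make the computation visible:

# Blue button: 99% chance of curing 10000 people, 1% chance of killing 1 person.
# Assumed (hypothetical) utilities: +1 per person cured, -100 per person killed.
p_cure, p_kill = 0.99, 0.01
ev_press = p_cure * (10000 * 1) + p_kill * (1 * -100)  # 9899.0
ev_no_press = 0.0                                      # nobody cured, nobody killed
print(ev_press, ev_no_press)

Even weighting a death 100 times more than a cure, pressing dominates in expectation, whichever single outcome luck actually delivers.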

How is this connected with drinking and driving? Assume that when you get drunk, you enter an altered state of consciousness. Within this altered state, someone is obviously doing an action, and this someone inhabits your body, but is not you, as their mental state is substantially different from your average mental state. If this is true4, you shouldn't be judged based on the action itself (since it is not you doing it), but on the decision of entering this new mental state (handing the reins of your actions to drunk you). I argue that the way to judge this decision morally depends on all the possible outcomes estimated at the moment you are getting drunk.

To answer the question in the title:

Your moral responsibility in this case depends on the state of the world when you take the decision to get drunk2. If you are about to drive home and you get drunk now, you are more morally responsible than if you got drunk at home, alone, whatever the outcome of your action. Let me spell it out more clearly: if you drink, drive, and do not kill, you are as morally responsible as if you drink, drive, and kill3. Even more: if you are about to drive, but not driving yet, and you get drunk now, you are in the same moral landscape as someone who decided to drive and ended up killing someone, even if you end up deciding not to drive4.

 

Governments in a Multidimensional Space

I live in the UK, where the government is not great. However, when I talk about politics with friends and colleagues, I often draw parallels with the Italian government.

The Italian government is a clear example of a horrible system. There is almost no one in Italy who would question that, and I can bet some money that most Italian politicians believe that the current system is pretty bad. However, I do think that the Italian government is more advanced than others, though not in the classic sense: I mean that its current, horrible state will eventually be reached by other governments.

Think about a prototypical bad, despicable, corrupt, horrible government, and think of it as a state towards which other governments are headed. Current governments around the world are at a certain distance from this horrible one. Italy happens to be closer to it than the UK government, but they are all slowly moving in that direction, at different speeds. The horrible government can then be seen as an attractor.

This is a very pessimistic view, and I do not believe in it completely. But it is a good introduction to the idea of governments as attractors, and not only that. The idea of a government developing in time, given the set of rules it's been built with, is very powerful, and can unlock some useful insights. So let's explore it a bit further, and let's see where it leads us!

The Multidimensional Space of Possible Governments

A government is a type of social structure responding to some rules, influenced by the environment (the population it governs, but also other governments), and shaping the environment itself. Let's think of a government as an instance of all the possible governments within a multidimensional space. It is multidimensional in the sense that it has a vast number of variables, and one of the important ones describes the goodness of the government. Let's not spend several thousand words defining what I mean by good; feel free to take whatever definition works for you, as my reasoning should make sense anyway.

A government is always moving within this space, and is almost always moving along the axis of goodness: sometimes a new law is passed, sometimes a new party enters the game, sometimes the population interacts in such a way as to push the government in a certain direction, etc. Generally these movements are slow and strongly correlated with previous positions: if we know the position of the government in the multidimensional space some time before one of these slow events happens, we can predict with high confidence where its position is going to be some time after it.

Now, think about what happens when a new governmental system is born after a period of strong social instability (a revolution, a riot, a strong minority taking power, and so on). This corresponds to the moment when the preceding governmental structure is most disconnected from the new one: the movement in the space is fast and weakly correlated with the previous positions. The new people in power now have the chance to re-organize the governmental structure. When they draft the Constitution, they are effectively placing their government in this multidimensional space of possible governments - as new - mostly disregarding the previous position. Every time this happens, we can consider this point a new starting point for the government.

Most importantly, they are placing the government at a certain distance from two things that must exist in the space: the best and the worst possible governments, according to your preferred metric. We will call them Utopia and Hell-on-Earth.

The set of rules written in the Constitution5 not only places the government in space - it also defines a trajectory. To be more precise, given that there is variability in the process, it defines a stochastic trajectory in this multidimensional space of governments. This trajectory answers the important question: how is this government going to evolve in time?

With a very good simulation, if we had *enough* information about the environment, we may actually predict which direction this social structure will likely take, given the initial set of rules it was responding to. So, now, the important question is: where are we going? Sorry, I meant: where are we most likely going? First, let me say something more about the idea of attractors.

Utopia, Hell, and Attractors

An attractor is a state of the governmental structure which other governmental structures tend to evolve towards. If a government is close enough to an attractor, it will most likely get even closer to it, even if perturbed. Let's imagine that the space occupied by Hell-on-Earth (the worst possible government) is actually an attractor, and a government is close enough to it. The population can try to oppose the government becoming Hell-on-Earth. Heck, even the politicians themselves can try to oppose it. But if the rules are set up in a certain way, after a certain point in space it just becomes very likely that the government will be drawn towards the Hell attractor. And from there, who knows what's gonna happen (almost by definition: chaos, murder, hell on earth, etc.)
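For intuition, here is a minimal Python sketch of what "attractor" means here, on a single made-up goodness axis. The dynamics, the basin size, and all the numbers are illustrative assumptions of mine, not a model of any real government:

import random

def step(g, hell=-1.0, basin=0.8, pull=0.2, noise=0.1):
    # One noisy "year" of politics on a one-dimensional goodness axis.
    if abs(g - hell) < basin:          # close enough to the attractor...
        g += pull * (hell - g)         # ...and you get drawn further in
    return g + random.gauss(0, noise)  # perturbations: strikes, elections, luck

g = 0.5                                # an arbitrary starting Constitution
for year in range(100):
    g = step(g)
print(g)  # trajectories that wander inside the basin rarely climb back out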

Let's be less pessimistic. Of course, somewhere in the space there is a point with the best possible government, our Utopia type of government. Like Hell-on-Earth, this may or may not be an attractor. If it is an attractor, a government close enough to it will just be sucked in, and utopia forever ensues.

We are in a pretty symmetric situation. We don't know how distant our government is from the best and worst governments, and we don't know whether these two prototypical governments are really attractors or not.

Is there anything else to say about attractors? Yes: we don't have to be naive about it. Hell-on-Earth and Utopia are obvious candidates to be attractors, but they are not the only ones, and they may not even be the most likely attractors. A very average, grey, bland government, with some good and some bad characteristics, may be an attractor. There are some reasons to believe that these types of attractors are more likely than Utopia or Hell-on-Earth. The important thing is this: once we are in there, it is impossible to get out; and, if you are close enough, you'll get in there whatever you do.

Why is this framework important?

Because it allows us to express some concepts faster and more clearly. It is one thing to say "our government is bad", and another to say "our government is very close to a bad attractor". We can express new concepts, such as "the starting point has been badly designed", or "the current perturbation (for example, a truck strike) may increase the variability of our movement". Try to express one of these concepts without using this new framework, and see for yourself how difficult it is.

Another important point, maybe a bit more psychological: with this framework, we get away from the human considerations that pollute politics at all times, and start thinking about the system itself, how it evolves given its rules, and how we can design it. We stop thinking about whom to blame for this or that, we stop thinking about individual problems of individual governments, and we start thinking about generally good rules for any government, within a given environment. Solving individual problems is almost surely irrelevant if they don't modify the trajectory. And if we don't think in terms of trajectories and attractors, we will only focus on individual problems.

Finally, let's go back to a really pessimistic mood, considering this point: some things are not going to work whatever we do. Utopia may exist, but it may be unreachable: the conditions for getting to that point in space may be too narrow, or its attraction may be too dim, so that we are just not going to get there. We may ask ourselves what the distribution of starting points in this multidimensional space is, and find that in most cases we are going to be close enough to the Hell-on-Earth attractor, and almost never to the other one. We have to be willing to face one possible conclusion of our investigation: there is no good solution to our problem. In this case, we may want to change the problem altogether2.

This is obviously unreasonably pessimistic, because I have not motivated my pessimism, but I will certainly try to do that in the next posts. Armed with this concept of a multidimensional space of governments, we are going to see where we are, where we are going, and how we can get to Utopia (if we can).

Programming Projects

This is a list of side-projects regarding programming.

Getting started with Conditional Random Fields

This is the first of a series of posts that I am going to write about Conditional Random Fields. I have recently been following the excellent Coursera Specialization on Probabilistic Graphical Models (the videos for each course are freely accessible), and I found the topic really interesting. However, I felt that the time dedicated to Conditional Random Fields (CRF from now on) was decidedly short, considering that this model (and its evolutions) is being used in several applications nowadays: apart from the classic part-of-speech tagging, it is used in phone recognition and gesture recognition (note that this last work uses Hidden Conditional Random Fields, a model which we will talk about in a future post).

Of course, this is not the only introduction to CRF that you can find on the internet: Sutton and McCallum wrote an excellent article (here), and another introduction can be found on Edwin Chen's blog (here). However, Sutton and McCallum's article appeared to me to be too hard for a novice who has just started in the field; Edwin Chen's blog assumes some knowledge of graphical models, and does not provide any code. Conversely, this gentle introduction will not assume any knowledge of probabilistic graphical models (but it does assume some basics of probability theory), and will provide some code for solving a simple problem that we all face: how can we tell if our cat is happy? By using Conditional Random Fields, of course!

I will use Python as my language of choice, with the nice pgmpy library. Installing pgmpy was completely painless (just follow the instructions!). This guide is not going to be a formal introduction (for that you have many other sources you can dig into), but an informal one, aimed more at conveying the insight behind the model than at being mathematically precise.

So, what are Conditional Random Fields? 

Let's start with some classification. Conditional Random Fields are a type of Markov Network. Markov Networks are models in which the connections between variables are defined by a graphical structure, as shown in the next figure.

Each node represents a random variable, and the edges between nodes represent dependencies. With Markov Networks it is very convenient to describe these dependencies using factors (\phi). Factors are positive values describing the strength of the associations between variables. I know what you are thinking: are these probabilities? Or conditional probabilities? Nope, nope, they are not*, and in fact they are not bounded between 0 and 1, but can take any positive value (in some cases they could be equivalent to conditional probabilities, but this is in general not true).
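A quick way to see this in code, using the same pgmpy library we will use below (the variable names here are arbitrary):

from pgmpy.factors.discrete import DiscreteFactor

# A factor over two binary variables: any positive numbers are allowed,
# and they do not have to sum to 1 the way probabilities would.
phi = DiscreteFactor(['A', 'B'], cardinality=[2, 2], values=[30, 5, 1, 10])
print(phi)               # prints the 2x2 table of association strengths
print(phi.values.sum())  # 46 -- clearly not a probability distribution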

CRF are defined as discriminative models. These are models used when we have a set of unobserved random variables Y and a set of observations X, and we want to know the probability of Y given X. Quite often we want to ask a different question: what is the assignment to the Y variables that most likely generated the observations X? In particular, CRF are very useful when we want to do classification while taking into account neighbouring samples. For example, let's say that we want to classify sections of a musical piece. You know that different sections are related, as there is a very strong chance that the exposition will come after the introduction. Well, CRF are perfect for that, as you can incorporate this knowledge into them. Not only that! You can also incorporate arbitrary features describing information that you know exists in the data. For example, you know that after a certain chord progression, you are quite likely to go into the development section. Well, you can introduce that info as well in your model.

But for now, let's drop this music nonsense, and let's talk about something unquestionably more pressing:

Is your cat happy?

Everybody loves cats. They are cute, fluffy, and adorable. Not everybody knows how to tell if a cat is sad or happy. Well, you will soon be able to, with a little help from your friendly Conditional Random Fields!

Let's say that you have a very simple cat, Felix. He won't experience complex emotions such as anger, joy, or disgust. He will just be "happy" or "sad". You read in your favourite cat magazine that cats express contentedness with purrs. However, sometimes purrs may be an expression of tension. Felix, being a simple cat, doesn't know what those words mean, and he uses purrs to express both happiness and sadness.

We are going to observe Felix several times a day, and every time we are going to note whether he is purring or not. This will be our observation vector O. We know that each observation can tell us something about Felix's internal state at each time t (let's call this set F). So, if we observe that Felix is purring at this moment, we can more or less confidently say that Felix is happy. Pretty simple, right? Now is where it gets interesting.

The important concept about Conditional Random Fields is that we can also specify dependencies between unobserved states. For example, we may know that our Felix is a pretty relaxed cat: his emotional states are quite stable, and we know that if at some point in time he was happy (or sad), he will most likely be happy (or sad) at the next point in time. On the other hand, we may know that our Felix is a histrionic diva, changing his mind in the blink of an eye. We can model this relationship as well.

With this information, we can build our graph:

[Figure: the graph of the network, with each observation O_i connected to the corresponding internal state F_i, and consecutive internal states connected to each other]

As explained, O are the observations, and each O_i is connected with Felix's internal state (F_i) at time t_i. The internal states are also connected in pairs. In this and the following examples we are going to observe Felix 3 times (t_1, t_2, t_3). Notice that each state is only affected by the previous one (our Felix doesn't seem to care about his internal state a long time ago; he is only affected by what he was feeling recently). This simple network is called a linear CRF.

Now we also have to come up with some idea about the relationship between Felix's purring and his level of happiness. Let's say that we come up with this table:

            F_i = happy   F_i = sad
purr             80           20
no purr          15           25

The values are on an arbitrary scale, and they indicate the "strength" of the relationship. Note that these are not probabilities. To emphasise that, I didn't scale them from 0 to 1 (but you are free to do so). From the table we can see that Felix expresses his emotions more strongly when he is purring. If he is purring, we can say quite surely that he is happy. If he is not purring, we can say that he is sad, but the strength of this belief is not so strong: we are convinced '25' [whatever scale that is] that he is sad, and '15' that he is happy. We will call this factor \phi_{purr}(O_i,F_i)
Now we have to specify the transition factor: if Felix is happy now, what's the chance that he is going to be happy later on? For the time being, we want to ignore this dependency and just get everything to work, so we will use this uninformative transition table:

            F_{i+1} = happy   F_{i+1} = sad
F_i = happy       10                10
F_i = sad         10                10

(notice that the value can be anything, as long as it's the same for each entry in the table). The row variable F_i indicates the emotional state at time t_i; the column variable F_{i+1} the emotional state at time t_{i+1}. We will call this factor \phi_{transition}(F_i,F_{i+1}). This is how the network looks with the factors attached.

Let's see how we can implement this in Python. We are going to take 3 measurements for now, so our network will only have 3 unobserved states and 3 observed states:

[to run this code you need Python2.7, pgmpy, and numpy]

from pgmpy.models import MarkovModel
from pgmpy.factors.discrete import DiscreteFactor
from pgmpy.inference import BeliefPropagation
import numpy as np


def toVal(string):
    # map each observation label to the corresponding factor entry
    if string == 'purr':
        return 0
    else:
        return 1

catState = ['happy', 'sad']
MM = MarkovModel()
# non-existing nodes will be automatically created
MM.add_edges_from([('f1', 'f2'), ('f2', 'f3'),
                   ('o1', 'f1'), ('o2', 'f2'), ('o3', 'f3')])

# NO DEPENDENCY
transition = np.array([10, 10, 10, 10])
# RELAXED CAT
# transition = np.array([90, 10, 10, 90])
# DIVA CAT
# transition = np.array([10, 90, 90, 10])

purr_happy = 80
purr_sad = 20
noPurr_happy = 15
noPurr_sad = 25

factorObs1 = DiscreteFactor(['o1', 'f1'], cardinality=[2, 2],
                            values=np.array([purr_happy, purr_sad, noPurr_happy, noPurr_sad]))
factorObs2 = DiscreteFactor(['o2', 'f2'], cardinality=[2, 2],
                            values=np.array([purr_happy, purr_sad, noPurr_happy, noPurr_sad]))
factorObs3 = DiscreteFactor(['o3', 'f3'], cardinality=[2, 2],
                            values=np.array([purr_happy, purr_sad, noPurr_happy, noPurr_sad]))

factorH1 = DiscreteFactor(['f1', 'f2'], cardinality=[2, 2], values=transition)
factorH2 = DiscreteFactor(['f2', 'f3'], cardinality=[2, 2], values=transition)

MM.add_factors(factorH1)
MM.add_factors(factorH2)
MM.add_factors(factorObs1)
MM.add_factors(factorObs2)
MM.add_factors(factorObs3)

Note the peculiar way of inserting the entries for each factor: we take the factor table, starting from the first row, and proceed from left to right before going to the next row. We can get a friendlier representation by inspecting factorObs1.values. If you are wondering what cardinality means, it's just the number of values that each variable in the factor can take. In our case, we are working with binary variables, so cardinality will always be 2.
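For instance, continuing from the snippet above, we can check how the purring factor was laid out (the expected output assumes the values defined earlier):

print(factorObs1.values)
# [[80 20]
#  [15 25]]
# row 0 = o1:purr, row 1 = o1:no_purr; column 0 = f1:happy, column 1 = f1:sad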

Now we are going to do some magic. Using an inference algorithm called Belief Propagation, we can estimate the probability of each internal state. However, we are not exactly interested in that: we want to know the most likely internal states given our purring observations. The procedure for this is called MAP (maximum a posteriori) estimation, and there are different ways of implementing it. We are going to use the MAP query included in the Belief Propagation algorithm. Notice how in the code we specify the values of our observed variables (evidence=...). The function toVal maps "purr" to the entry 0 and "no_purr" to the entry 1.

belief_propagation = BeliefPropagation(MM)
ymax = belief_propagation.map_query(variables=['f1', 'f2', 'f3'],
        evidence={'o1': toVal('purr'), 'o2': toVal('no_purr'), 'o3': toVal('purr')})
# ymax = belief_propagation.map_query(variables=['f1', 'f2', 'f3'],
#         evidence={'o1': toVal('purr'), 'o2': toVal('no_purr'), 'o3': toVal('no_purr')})
# ymax = belief_propagation.map_query(variables=['f1', 'f2', 'f3'],
#         evidence={'o1': toVal('purr'), 'o2': toVal('purr'), 'o3': toVal('purr')})

print('f1: ' + str(ymax['f1']) + ' = ' + catState[ymax['f1']])
print('f2: ' + str(ymax['f2']) + ' = ' + catState[ymax['f2']])
print('f3: ' + str(ymax['f3']) + ' = ' + catState[ymax['f3']])

 

In our case, we observed a sequence of purr, no_purr, and purr again. As we have not specified any dependency between states, the network is going to tell us that our cat was happy, sad, and happy again. Is this really the case, though? Let's take a more realistic approach to this fluffy problem.

Felix as a cool, relaxed cat

Let's consider the first case: Felix is a cool dude, and when he is in a state, he is probably going to stay there for quite a long time. We are going to use this transition factor table:

            F_{i+1} = happy   F_{i+1} = sad
F_i = happy       90                10
F_i = sad         10                90

This clearly tells us that there is a strong connection between being happy (or sad) at time t_i and being happy (or sad) at time t_{i+1}. To generate this network, uncomment the RELAXED CAT transition in the first snippet (and comment out the NO DEPENDENCY one), then run the script again. What is going to happen now? All three states are now classified as "happy"! The lack of purring in the middle state, being less strongly connected to his level of happiness, has been overcome by the neighbouring states, which more surely indicate that Felix was happy.

Try instead to set the sequence purr → no purr → no purr (uncomment the second map_query call in the second snippet, and comment out the first). What's happening now? Think about what the network is computing, and check if it makes sense.

Felix as a diva 

Let's consider the situation where Felix is a diva, and he is more likely to change his mood between observations. We can represent this by using the following transition table:

            F_{i+1} = happy   F_{i+1} = sad
F_i = happy       10                90
F_i = sad         90                10

This favours transitions from sad to happy or from happy to sad. Now let's say that we observe our Felix purring all the time. Being the crazy diva that he is, can we infer that he is indeed happy all the time? No way! In fact, if you uncomment the DIVA CAT transition in the first snippet and the third map_query call in the second snippet, you will see that even though the observed sequence is purr → purr → purr, the most likely state sequence appears to be happy → sad → happy. Oh Felix, you are driving me crazy!

This simple example shows only the basics of the potential of CRF. This is only the beginning. We may want to automatically learn the transition table from the observations. Or we may want to include more complicated observations, or generate a non-linear graph. We can do all of this and much more with CRF.

In the next post I hope to use more complicated models to show other cool features of this approach. Hope you enjoyed it!

How to use dlib with QtCreator on Windows/Linux

Dlib is a nice C++ library for machine learning that also includes some good implementations of computer vision algorithms, like real-time pose estimation, which OpenCV strangely does not have.

Using Dlib is not so difficult, as it is mostly a "header-only" library, so you don't need to build it separately. However, as it took me some time to figure out the correct steps to integrate it with OpenCV on Windows and Linux, I'm writing this short post as a quick and rough guide to get it started.

Download Dlib and extract the folder somewhere. I used C:\ on Windows and /home/username on Linux. Open Qt Creator and select "Qt Console Application". Once the files are generated, go to the .pro file of your project, and add:


win32{
INCLUDEPATH += C:\dlib-18.18
LIBS+= -lgdi32 -lcomctl32 -luser32 -lwinmm -lws2_32
}

linux{
INCLUDEPATH += /home/username/dlib-18.18
LIBS += -pthread
CONFIG += link_pkgconfig
PKGCONFIG += x11
}

[the extra libraries are required only for the gcc compiler; other compilers should add them automatically]

Then right-click on the "Sources" folder of your project, select "Add Existing File", and add dlib-18.18/dlib/all/source.cpp

That's it! Now let's have some random dlib code and see if it works:

In main.cpp, add #include <dlib/image_processing/frontal_face_detector.h> at the top. Comment out everything inside main and add, instead:

dlib::frontal_face_detector detector = dlib::get_frontal_face_detector();

This doesn't do much by itself, but it's just to try and see if it works. Now run qmake and build everything. It should build without any errors. You can then run it, and use all of Dlib's capabilities in your software.

Head Tilting Detection - Automatic Page Turner for Pianists

Head Tilting Detection (or Automatic Page Turner for Pianists, still undecided about the name 😛 ) is a simple software that emits a Page Down/Up keypress when the user tilts their head.

Since I started learning piano, more than 10 years ago, I have had the problem of turning pages. Turning pages is one of the most annoying things for a pianist: it forces you to waste seconds, interrupts the flow of the music, and affects the way we learn pieces (e.g. by making the connection between pages really poor, since we usually stop when going from one page to the next). Several alternatives exist for turning pages automatically, but they are clumsy and inefficient. Recently I thought about applying my programming knowledge to this purpose.

As more and more pianists are switching from paper to digital scores, it is possible to use a machine learning approach. I designed a simple piece of software that detects when you are tilting your head right/left, and "scrolls" the page accordingly by simulating a page down/up keypress, which in most software will scroll the window down/up - in Adobe PDF, if you select "fit one full page to window", you will turn a whole page (to the next/previous one).

Update 28/03/2016: New version for Linux released. You can now rotate/flip your webcam viewer. The sound now works on Windows machines.

WINDOWS 32/64bit (tested on Windows 7, Windows 8, Windows 8.1)

Download Head Tilting Detection (no setup required) - for Windows

LINUX (tested on Ubuntu 15.10)

Download Head Tilting Detection (no setup required) - for Linux

Open the folder in a terminal and type sudo ./HeadTiltingDetection.sh. This bash file will download and install xdotool (if not installed already), and the software will be executed.

Instructions

Open the software, wait for your face to be detected (a green rectangle around your face should appear), then wait for your eyes to be detected (red squares).
At this point, select the application you want to send the keypresses to (for example, a PDF reader).
When you tilt your head to one side, the corresponding arrow key signal will be emitted. A green circle should appear on the corresponding side of the camera view, and you should hear a specific sound for the direction you are tilting your head towards (you can disable the sound).

Adjust the threshold from the slider, disable the sound, or pause the detection from the user interface.

Key points

  • The software is designed for pianists, so it assumes a more or less constant level of light, takes into account slow/random movements, and considers that pages are usually not turned very often. Plus, instead of just "turning" your page, it will scroll down the page, which in most cases is what you want.
  • For now, the software does not work if you wear glasses. I will maybe work on this in the future.
  • The software uses Haar feature-based cascade classifiers, with template matching for tracking the eyes, and several heuristics to increase accuracy. A cubic function of the angle between the two eyes is used to emit the keypress, and the threshold for the keypress can be adjusted (a rough sketch of this idea follows this list).
  • The software has been tested on Windows 7, 8, 8.1. For now there is only a Windows version, but I may develop it for other platforms in the future. (28/03/16) Linux: Ubuntu version released.
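To give an idea of the approach, here is a rough Python/OpenCV sketch of the eye-tilt-to-keypress idea. This is not the actual implementation (which is in C++ with Qt): the cascade file, the cubic coefficient, the threshold, and the use of pyautogui for the keypress are all assumptions of mine, for illustration only.

import math
import cv2
import pyautogui  # hypothetical choice here for emitting the keypress

eye_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + 'haarcascade_eye.xml')
THRESHOLD = 20.0  # tunable, like the slider in the real software

cap = cv2.VideoCapture(0)
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # keep the two largest eye detections, ordered left to right
    eyes = sorted(eye_cascade.detectMultiScale(gray, 1.3, 5),
                  key=lambda e: -e[2] * e[3])[:2]
    if len(eyes) == 2:
        (x1, y1, w1, h1), (x2, y2, w2, h2) = sorted(eyes, key=lambda e: e[0])
        # tilt angle of the line joining the two eye centres, in degrees
        angle = math.degrees(math.atan2((y2 + h2 / 2) - (y1 + h1 / 2),
                                        (x2 + w2 / 2) - (x1 + w1 / 2)))
        score = 0.05 * angle ** 3  # a cubic function of the tilt angle
        if score > THRESHOLD:
            pyautogui.press('pagedown')
        elif score < -THRESHOLD:
            pyautogui.press('pageup')
cap.release()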

Future work

The software is not perfect. Unfortunately, I do not have many hours to dedicate to it. I plan to keep working on it, but not consistently. I'll probably add any minor updates to this post, and create new posts only if there is a major improvement. If you want to collaborate with me on this project, feel free to contact me!

The software uses OpenCV (for the computer vision part) and Qt. It was very nice working with Qt again after so long, and discovering OpenCV was also very interesting. I have a very good opinion of both in terms of usability and capability. Managing to merge the two systems was an excellent experience for me.

plotLineHist: when summary statistics are not enough.

Today I am going to show one of the functions that I use quite extensively in my work. It's called plotLineHist, and this is what it produces:

Well, what the heck is that? plotLineHist takes a cell array C, a matrix M, a function handle F, and several optional arguments. The cell array C is n×m: you have to think of each row as being an experimental condition for one factor, and each column as the experimental condition for a second (optional) factor. The matrix M specifies the values for the row factor (λ in the figure). In the figure, each column factor is represented by a different colour.

plotLineHist executes the function F (for example, @mean) for each cell of the cell array, plotting the resulting values and connecting them along each row (continuous lines in the figure). It also calculates the standard error for each cell (vertical line on each marker).

However, the novelty is that it also plots the frequency distribution for each condition, aligned vertically on each row factor value. The distribution's values correspond to the vertical axis of the figure.

Each row's distribution has a different colour. The first row's distribution is plotted on the left side, the others on the right (of course, having more than 3 row conditions will make the plot difficult to read, but it may still be useful in those cases).

If this sounds convoluted, I am sure you will get a better idea from a simple example.

Let's say that we are measuring the effect of a drug on cortisol levels in a sample of 8 participants. We have just 1 factor, the horizontal one, which is drug dose: let's say 100mg, 200mg and 300mg. The cortisol levels are found to be:

Patient N.   100mg   200mg   300mg
1             10      15      20
2             12      16      23
3             13      14      15
4             13      14      15
5             20      17      20
6             50      30      21
7             12      15      19
8             14      14      23

Since we have only one factor, we need to put all the data in the first row of the cell array fed into the plotLineHist function:

%example 1
p{1,1}=[10 12 13 20 50 12 14 12];
p{1,2}=[15 16 14 17 30 15 14 16];
p{1,3}=[20 23 15 20 21 19 23 23];
drugDose=[100 200 300];
fun=@nanmean;
plotLineHist(p, drugDose,fun);

 

[Figure: plotLineHist output for the one-factor example]

The blue line with the markers indicates the mean response for each condition (with standard error). But then we also have a nice plot of the distribution for each condition! This allows us to spot some clear outliers in the first two distributions.

Now, let's say that we have two factors. For example, sometimes the drug is taken together with another substance, and sometimes it is not. The drug is still given in 3 different dosages: 100, 200 and 300mg.

%example 2
%without substance A
p{1,1}=[10 12 13 20 50 12 14 12];
p{1,2}=[15 16 14 17 30 15 14 16];
p{1,3}=[20 23 15 20 21 19 23 23];
%with substance A
p{2,1}=[12 12 12 21 60 13 11 15];
p{2,2}=[16 16 14 18 31 19 18 17];
p{2,3}=[22 23 17 20 21 20 23 23];
drugDose=[100 200 300];
fun=@nanmean;
plotLineHist(p, drugDose,fun)

 

[Figure: plotLineHist output for the two-factor example]

You can see how the second row condition's distribution is positioned on the right side, so that the two distributions can be easily compared. Note how most of the nitty-gritty details, such as bin width and scaling factors, are automatically calculated by the function. However, you have the chance to change them manually using the optional arguments.

For me, this has been a very useful plot for summarizing different types of information at the same time, without needing a lot of figures.

Try it out and let me know!

DOWNLOAD PLOTLINEHIST HERE

 

LISP neural network library

This is a neural network library that I coded in Lisp as a toy project around 2008. I post it here just for historical purposes. I do not plan to maintain the library in any way.

Here

Diffusion Music

This is a little thing that I did some time ago and never really got the time to polish and publish the way I wanted, but it is still nice to put it out there.

This script generates X drift-diffusion processes with some parameters and transforms them into music. This answers the question that, for sure, every one of you is asking: what does a diffusion process sound like? And the answer is: horrible! As expected 😉

To code it, I used Ken Schutte's nice library to read and write MIDI in MATLAB.

In the code, you can change the number of "voices" used (the number of diffusion processes), or the distribution of drift rate, starting tone, starting time, and length for each voice. Play around with these parameters and try to see if you can come up with anything reasonable. I couldn't!
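If you want to play with the idea without MATLAB, here is a rough Python sketch of the core mechanism under my own assumptions: each voice is a drift-diffusion path quantized to MIDI note numbers (the real script instead uses Ken Schutte's MATLAB library to write actual MIDI files).

import random

def diffusion_voice(n_steps=64, start_tone=60, drift=0.0, sigma=1.5):
    # One voice: a drift-diffusion path rounded and clipped to MIDI notes 0-127.
    x, notes = float(start_tone), []
    for _ in range(n_steps):
        x += drift + random.gauss(0, sigma)
        notes.append(max(0, min(127, int(round(x)))))
    return notes

# Three voices with different starting tones and drift rates (all arbitrary).
voices = [diffusion_voice(start_tone=s, drift=d)
          for s, d in [(60, 0.0), (48, 0.2), (72, -0.2)]]
for v in voices:
    print(v[:16])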

The script also generates a figure representing the resulting voices (again thanks to Ken's library), which may look like this:

[Figure: the generated voices]

As usual, you can download the file from here.