zondag 23 november 2014

Everything is OK!! REALLY!! You must believe me! All is OK.

"Failing ICT projects at Government are unnecessary" eenvandaag.nl 4 August 2011
"Why do ICT projects at Governments fail so often and so badly?" kafkabrigade.nl 29 May 2012
"Again a ICT project fails at the Government" bnr.nl 25 June 2013
"One third of ICT projects fail within Government with system decommission as a result" - tweakers.net 25 April 2014
"ICT-projects at Government: nobody is hold accountable for failing" - ftm.nl 14 May 2014
"ICT and Government; most often the quality levels are pathetic" - fd.nl 13 October 2014

Just a few of the online headlines that pop up when you Google "Government ICT projects fail" (in Dutch). They do not paint a nice picture of the success rate of ICT projects within the Dutch Government. That this is not a typically Dutch problem is made clear by the Wikipedia page called "List of failed and overbudget custom software projects". Apparently government bodies and technology projects are not the best match.

Newspapers were not the only ones to notice the mismatch. In 2014 an official governmental task force was put in place to investigate all recently conducted ICT projects. Their findings were quite shocking. In total, 36% of the larger ICT projects within the Netherlands, those with a budget of more than 7.5 million euro, fail so badly that the system to be implemented is decommissioned before it is even fully operational. Another 57% of these major projects escape decommissioning, but turn out more expensive than budgeted or do not deliver the required results. That leaves a mere 7% of large projects delivering as planned. On a yearly basis this leads to a loss of 4 to 5 million euro (1).

In just two words: not good. Luckily for us, normal citizens, there is a website created by the Dutch Government where we can find the status of all ICT projects currently being undertaken. This dashboard shows two main KPIs: status on Budget and status on Delivery Time. The status can have three colors:

Green: Normal
Amber: Attention needed
Red: Action needed
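The color mapping above can be sketched as a small function. Note that the tolerance values below are my own hypothetical assumptions purely for illustration; the actual rules behind the dashboard are not published in this post.

```python
# Minimal sketch of a traffic-light budget KPI.
# The 5% / 15% tolerances are HYPOTHETICAL, not the dashboard's real rules.

def budget_status(budgeted: float, forecast: float,
                  amber_tol: float = 0.05, red_tol: float = 0.15) -> str:
    """Map a cost overrun to Green / Amber / Red.

    Overruns up to amber_tol are 'Normal' (Green), up to red_tol
    'Attention needed' (Amber), beyond that 'Action needed' (Red).
    """
    overrun = (forecast - budgeted) / budgeted
    if overrun <= amber_tol:
        return "Green"
    if overrun <= red_tol:
        return "Amber"
    return "Red"

print(budget_status(100, 103))  # 3% overrun  -> Green
print(budget_status(100, 112))  # 12% overrun -> Amber
print(budget_status(100, 140))  # 40% overrun -> Red
```

The whole KPI hinges on those two tolerance parameters, which is exactly where things go wrong later in this story.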

The displayed color indicates the state of affairs of the project on the reference date. Given the headlines mentioned, one would expect a lot of Amber and Red in this KPI dashboard. So let's have a closer look at the Defense Department, the biggest spender on ICT (6 projects with a total of 364 mln euro).

rijksictdashboard.nl - Defense Department

At the Defense Department everything is OK! All the misery must be hiding in the other departments, then. So I had a look at the others and, to my complete surprise, ALL departments report GREEN on both delivery time and costs!

HUH???!
How is it that the KPIs are green for all governmental departments while everybody is screaming that ICT projects are failing by the dozen? Why would a government go all the way to build a website showing its results and then present a performance far brighter than even its own task force found? And, more importantly for this blog: what can we learn from it?

Almost all companies have to present their performance in one way or another to external stakeholders. In the Netherlands the Government is obliged to report all progress, and it has chosen to do this via an easy-to-find and simple website. In itself this is noble and can only be cheered. However, one should present the truth in order to come across as honest. Of course it is easy to construct two simple KPIs in such a way that they almost certainly turn GREEN. The thing is that this will come back to bite you in the end. It is best to be clear about your performance. Don't make it look nicer than it really is. Not only for your credibility towards the outside world, but also towards your own employees. They see the "real" performance and might start questioning the integrity of senior management if the external KPIs present a different story.

Maybe even more important: use the same KPIs within your organization as the ones you present to your external stakeholders. Don't create specific KPIs for the outside world. In this case, it is almost impossible to believe that these two simple KPIs are the only ones the government is using itself. And if your stakeholders expect you to manage your performance with certain KPIs (like most regulators do), make them your internal KPIs. Furthermore, it is wise to explain how your KPIs are constructed. What do you measure your success against? What are your thresholds? What exactly do you measure (e.g. budget or actual money spent)? How do you incorporate setbacks? Which risks were already incorporated in your budget and will not affect your view on performance? What tolerance did you agree upon?
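To see how easily two simple KPIs can be made to "almost certainly turn GREEN", here is a minimal sketch: the same (invented) portfolio of budget overruns, scored once with a strict tolerance and once with a generous one. All numbers are illustrative assumptions.

```python
# The same projects scored with a strict vs a generous tolerance.
# Overrun data and both tolerance sets are invented for illustration.

def status(overrun: float, amber: float, red: float) -> str:
    """Traffic-light status for a fractional cost overrun."""
    if overrun <= amber:
        return "Green"
    return "Amber" if overrun <= red else "Red"

overruns = [0.02, 0.10, 0.25, 0.60]  # 2% .. 60% over budget

strict   = [status(o, amber=0.05, red=0.10) for o in overruns]
generous = [status(o, amber=0.50, red=1.00) for o in overruns]

print(strict)    # ['Green', 'Amber', 'Red', 'Red']
print(generous)  # ['Green', 'Green', 'Green', 'Amber']
```

Same portfolio, very different dashboard, which is why publishing the thresholds matters as much as publishing the colors.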

But above all be transparent. If things go wrong, be honest about it and tell everybody how you think you can manage the issues at hand. That is what good governance is about!

(1) Parlementair onderzoek naar ICT-projecten bij de overheid (see the final report here, in Dutch)


donderdag 13 november 2014

Crisis in the world of Science

"In science it often happens that scientists say, "You know that's a really good argument; my position is mistaken," and then they would actually change their minds and you never hear that old view from them again. They really do it. It doesn't happen as often as it should, because scientists are human and change is sometimes painful. But it happens every day."

Carl Sagan (1987)

Making mistakes and accepting them is one of the reasons why we have made so much progress in science over the past decades. It is painful when it happens, but every time we learn from these scientific mistakes. That is what "scientific innovation" is all about. One of the mechanisms to discover mistakes and keep scientists sharp is the concept of peer review. When you want to publish your results in a (well-known) scientific journal, you make sure it is read by knowledgeable peers first (if only to prevent being shamed when mistakes are discovered after publication). In other words: scientists purposely seek critique and want to be challenged. If you can withstand the pushback of your colleagues, your hypothesis is one step closer to being true.

However, over the last couple of years this self-imposed critique-seeking process has been crumbling, endangering progress in the scientific world. Before we try to find out why, let's go back to the summer of 2011. In that year the scientific world was shaken by the discovery of one of its largest fraud cases ever. Diederik Stapel, at that time still a professor of Social Psychology at Tilburg University, confessed to having falsified several data sets used in his studies. An extensive report investigated all of Stapel's 130 articles and 24 book chapters. According to the first findings, of the first batch of 20 publications, 12 were falsified, and three contributions to books were also fraudulent. How was it possible that over all these years, no one discovered or even suspected this? No co-authors, students, peers, or anyone else.

Many have argued that this was a unique case. But that remains to be seen. Of course the discovery of fraud on such a scale is seldom seen, but several investigations have shown that "photoshopping" results is not uncommon in the scientific world. A study published as early as 2004 in BMC Medical Research Methodology claimed that a high proportion of papers published in leading scientific journals contained statistical errors. Not all of these errors led to erroneous conclusions, but the authors found that some of them may have caused non-significant findings to be misrepresented as significant (1).
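The kind of consistency check behind such findings can be sketched in a few lines: recompute the p-value from a reported test statistic and compare it with the significance the paper claims. The statistic and the claim below are invented for illustration; this mirrors the spirit of the BMC study, not its exact method.

```python
# Recompute a p-value from a reported z statistic and flag a mismatch
# with the paper's significance claim. All numbers are fictional.
from math import erf, sqrt

def p_from_z(z: float) -> float:
    """Two-sided p-value for a z statistic (normal approximation)."""
    return 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))

reported_z = 1.90           # statistic as printed in the (fictional) paper
# the fictional paper claims "p < .05" ...
recomputed_p = p_from_z(reported_z)

print(round(recomputed_p, 3))   # ~0.057
print(recomputed_p < 0.05)      # False: the claimed significance fails
```

A mismatch like this turns a "significant" finding into a non-significant one, which is exactly the misrepresentation the authors warned about.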

The different ways to manipulate results are a topic for another blog; here I'm more interested in how it is possible that so many "mistakes" are not caught during peer review. The answer is actually very simple: because peers don't read the articles. Today the primary focus of a scientist is to produce papers, not to review those of others. Have a look at the infographic shown here.
If you were to print out just the first page of every item indexed in Web of Science, the stack of paper would reach almost to the top of Mount Kilimanjaro. This graph also shows that only the top meter and a half would have received 1,000 citations or more (2).

Research has become publication driven. Universities compete for research money and students, and to reach their goals they have set productivity targets. The KPIs encourage the production of many papers with high visibility (in order to reach the top of the pile and thus be noticed). Publication-driven KPIs promote calculating behavior: what topic brings me money or gets me students? Assessments are therefore not based on quality but on quantity (3).

Frits van Oostrom (President of the Royal Netherlands Academy of Arts and Sciences from 2005 till 2008) put it like this in 2007:

"Especially where the (in itself noble) principle to measure is to know entered into a covenant with the fear for substantive judgment, it has led to the glorification of the number, and preferably the large and growing one. And what is not countable, does not count. It leads to putting means (measurement) over goal (quality).
These are obviously very insidious mechanisms, with a high probability of perversion, as we all know. Because researchers must of course be productive, but none of us will propose that someone who produces thirty articles per per year is a better researcher or scholar than someone with three; or that the teacher who dutifully adheres to the study guide and passes 90% is a better teacher than the one who regularly improvises and rejects 30%. But monetizing, measuring and quantifying lead naturally to the dream of more so-called benefits and for less costs" (4). (see the full speech here - in Dutch)

I'm not claiming that the publication KPI is single-handedly responsible for a crisis in the science world. But many researchers are discouraged from reviewing their colleagues' work because they are rated on production, not on reviewing. Furthermore, it has always been very difficult to publish research that shows no significant effects (in itself an important finding - so researchers know what not to research in the future). The publication KPI is not helping in that respect either. Only noticeable papers can count on being published, so better to keep the NO SIGNIFICANT RESULT papers in the drawer and continue the search for "real" findings. And what happens if those results won't come quickly enough.....

Next time we'll have a look at KPIs in Government



(1) Emili Garcia-Berthou and Carles Alcaraz, Statistical Errors, BMC Medical Research Methodology 2004, 4:13
(2) see here for more details on the "paper mountain"
(3) This paragraph was based on a presentation by R. Abma (scholar of General Social Sciences at Utrecht University and author of De publicatiefabriek). The presentation was given during the Skepsis Congres 2014.
(4) The original Dutch: Vooral waar het op zichzelf nobele beginsel meten is weten een monsterverbond aanging met schrik voor het inhoudelijke oordeel heeft dit geleid tot de verheerlijking van het getal, en liefst het grote en het groeiende. En wat niet telbaar is, telt niet. Het leidt ten diepste tot het overschaduwen van doel (kwaliteit) door middel (meting).
Dit zijn natuurlijk zeer verraderlijke mechanismen, met een hoge kans op pervertering, zoals wij allen weten. Want onderzoekers moeten uiteraard wel produktief zijn, maar niemand onder ons zal ook maar een moment staande houden dat iemand die dertig artikelen per jaar produceert daarom een betere onderzoeker laat staan geleerde is dan iemand met drie; of dat de docent die braaf de studiewijzer aanhoudt en bij wie 90% slaagt een betere leraar is dan wie geregeld improviseert en 30% afwijst. Maar monetariseren, meten en becijferen leiden als vanzelf tot de wensdroom van meer zogenaamde baten voor minder zogenoemde kosten. 

maandag 10 november 2014

Best flow chart ever: management styles and KPIs

"When the top level guys look down they only see shitheads. When bottom level guys look up they only see assholes"

Good chance you have seen the cartoon before. The bird at the top looks satisfied, while the birds on the lower levels look more and more miserable. Does it feel familiar? Hopefully not, but chances are that you recognize the gist of it.

The cartoon triggered me to try to find the relation between certain types of managers and their usage of KPIs. This raised the question of which different types of managers are distinguished in the literature.
The problem with that question, however, is that one could probably fill a large library with all the books written about the topic. And to make things worse, I personally have a problem with management books that talk about personality traits. They suggest that personality or behavior can be described by just a few labels. Of course that is not the case. All research on human behavior combined (e.g. neurology, biology, psychology, sociology) has not yet found definite, unique, separate and unambiguous personality types. But being skeptical has its practical limits, so in the end I settled on this short presentation that summarizes the management styles most often encountered.
  1. Autocratic style
  2. Bureaucratic style
  3. Democratic style
  4. Laissez-faire style
Just to be clear: when I use these four styles, I'm not saying that these are the only four or that managers cannot apply a mix of them. The rest of this blog is (just) a personal view on the usage of KPIs and their link to management styles in a general sense (it is as unscientific as most other stuff available, but at least I'm honest about it ;-)

Autocratic
For this type of manager, KPIs give a sense of control over the situation and their people. They probably don't have many KPIs, but they make sure that the ones they have are known by all employees involved. Thresholds are set strictly and not too much flexibility is allowed. They will make sure that what got them here, will get them there (again). "Green" traffic lights are expected and an "Amber" status is already frowned upon (an explanation is expected, to say the least). Their adage is very simple: KPIs are met. Always.

Bureaucratic

Last week's blog was about the United Nations and their KPI usage for the Millennium Goals: too many KPIs, too long a horizon, vague goals and diffuse responsibility. KPIs are used because "everybody uses them". They are not the result of a creative process in which all stakeholders were involved; they are merely copies of the ones used in the past. People don't really believe in the KPI concept, as everybody can point at each other once things go wrong. In other words: KPIs are just another thing you are supposed to have as a manager.

Democratic
Everybody is involved in the creation of the KPI set in a democratic organization. And with everybody I mean really everybody. All input is gathered and taken into account when the KPIs are created. The creation process is more important than the end result. KPIs are not very specific (because everybody has to recognize their own input in the outcome). Thresholds are set, but so generously that it is difficult to ever reach the "Amber" or "Red" status. The KPIs are not fixed and changes are made frequently. Everybody is allowed to put the current KPIs up for debate.

Laissez-faire
Why measure anything if you can rely on the intrinsic motivation of your employees? Make sure that you emphasize the responsibility of the individual, and the sum will be greater than the individual parts. No KPIs are needed, and if they are there, it's more for the outside world than for internal use. Progress is measured simply by looking at the results, and employees are expected to give a signal when things go wrong.

Next time: Crisis in the world of Science

zaterdag 1 november 2014

What we can learn from the UN Millennium Goals

More than 14 years ago, the United Nations Millennium Declaration was signed by the leaders of 189 different countries. They committed themselves to 8 goals to be accomplished by 2015. For each of the eight Millennium Goals several KPIs were set to measure progress and success (see this link for a full list).


This is what US President Barack Obama had to say in 2010 about the progress at that time.

"Nor can anyone deny the progress that has been made toward achieving certain Millennium Development Goals. The doors of education have been opened to tens of millions of children, boys and girls. New cases of HIV/AIDS and malaria and tuberculosis are down. Access to clean drinking water is up. Around the world, hundreds of millions of people have been lifted from extreme poverty. That is all for the good, and it’s a testimony to the extraordinary work that’s been done both within countries and by the international community.

Yet we must also face the fact that progress towards other goals that were set has not come nearly fast enough. Not for the hundreds of thousands of women who lose their lives every year simply giving birth.  Not for the millions of children who die from agony of malnutrition.  Not for the nearly one billion people who endure the misery of chronic hunger.

This is the reality we must face -- that if the international community just keeps doing the same things the same way, we may make some modest progress here and there, but we will miss many development goals.  That is the truth.  With 10 years down and just five years before our development targets come due, we must do better."
(see here for the full transcript)

Now, with just one year to go, it doesn't look much better. Don't get me wrong: I really do think the work that has been done is extremely important. The United Nations might not be the perfect institution, but it is, I think, the only way to get so many countries committed to doing something. All eight goals address serious issues that should get our attention and deserve to be solved.

My focus here lies on the things we can learn not only from the goals set, but also from the KPIs chosen to measure their success. 

Too many Goals
It is difficult to choose which one of the eight goals is most important. Fighting poverty? Universal primary education? Reducing child mortality? Improving maternal health? You name it. Even if it is difficult or nearly impossible, it is wise to choose a maximum of three. More is not only too ambitious, it is also a recipe for disaster. One cannot focus on eight different goals, and it will take a tremendous amount of money to make all eight of them successful. Think of the task the UN faces in explaining why so many goals failed, and how difficult it will be to find almost 200 countries that want to participate again.

Was ROI taken into account?
When you set these kinds of goals and this many countries commit to them, you can be sure that money will be spent. But these goals look like they were chosen just because they sounded good. The question is whether a good cost-benefit analysis was done. Of course things will change for the better; the question is whether things will also help in the long run. You want goals that help prevent having to set the same goals again in the future.

Too many KPIs
In total there are 60 KPIs to measure the success of the eight goals. The good thing is that concrete KPIs were set with concrete thresholds. The downside is that there are so many that it is just impossible to meet them all. If you have 60 targets, you really have none. With all the negative consequences as a result, not the least of which is the erosion of credibility.
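A quick back-of-envelope calculation shows why 60 targets is really none. Assuming, purely for illustration, that the targets are independent and each has an optimistic 90% chance of being met, the chance of meeting all of them is negligible:

```python
# Back-of-envelope arithmetic: chance of meeting ALL 60 targets,
# assuming (illustratively) independence and 90% odds per target.

per_target = 0.90
targets = 60

all_met = per_target ** targets           # 0.9^60
expected_misses = targets * (1 - per_target)

print(f"P(all 60 met) = {all_met:.4f}")        # ~0.0018
print(f"Expected misses = {expected_misses:.0f}")  # about 6
```

So even under rosy assumptions you should expect a handful of visibly missed targets, and near-certain failure on the "meet them all" headline.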

The KPIs are SMART, but therefore not easy to reach
The world is a complex place. Many factors come into play when it comes to social issues. Politics, religious conflicts, war, corruption, failing economies, etc. complicate accomplishing the things you want to reach.

Long Timeframe
Fifteen years is a long time. It is difficult for people (let alone countries) to stay focused for that long. If you read the rest of Barack Obama's speech you will notice that most of it is directed at the richer countries, which seem to lose focus.

Diffuse responsibilities
Who is actually accountable for reaching the goals? 189 countries? A country is not someone you can hold accountable. And even if you did find a person in 2000 who committed him- or herself, he or she will probably not be around in 2015 to answer for it.

To summarize, I want to quote Bjorn Lomborg, who runs the Copenhagen Consensus Center. This center involves economists from all over the globe to think about the world's biggest issues. More importantly, they help select the issues with the highest ROI. This is what he had to say in an interview with Freakonomics Radio.

"There was actually no good cost and benefit analysis, it was just a number of targets that all sound really good. And generally I also think they really are very good. But now the U.N. is going to redo the targets from 2015 and fifteen years onwards. And this time, instead of having a very closed argument, it was basically a few guys around Kofi Annan who set out these targets back in 2000. And then everybody adopted them. This time they have said we want to hear everybody’s input. Of course that’s very laudable, but not surprisingly, it’s also meant that we probably have about 1,400 potential targets on the table. And so we need to make sure that we don’t just end up with a whole long list of Christmas trees as they call them in the U.N. jargon – you know, you just have everything and you wish for all good things, because they’re not likely to be as effective." (see here for full interview)

Hopefully next year the UN will choose and develop its KPIs more wisely. If done right, the promises could actually be fulfilled this time.