A few blogs back I wrote about when KPIs fail (see here for the blog), and the blog "How to lie with your KPI" was about deliberate manipulation of KPIs and their outcomes. But there are more reasons why KPIs ultimately fail.
Over and over again I have emphasized that KPIs are meant to set change in motion when necessary. If no change is initiated, the goals will not be met and the performance was measured for nothing. Here is a list of things that may prevent you from acting upon the outcome of a KPI.
1. Bad data quality
KPIs run on data. Without good information the KPI cannot be created: data drives your KPI. That is why correct data is of the utmost importance. Unfortunately, all sorts of things can go wrong with your data, especially when the data was originally not created for the purpose you are using it for in your KPI, or because default values can be entered (e.g. '999999'). This can be especially tricky with financial KPIs, but it can be bothersome with commercial KPIs as well. Can you really ensure that all the data you use is correct? A small error in your underlying data can have a huge impact. This topic deserves a separate blog.
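As a minimal sketch of what a first sanity check could look like (the sentinel values and the revenue figures here are hypothetical), a simple screening pass can surface entries that look like defaults rather than real measurements:

```python
import math

# Hypothetical sentinel values that often signal "a default was entered".
SENTINELS = {999999, -1}

def suspicious(values):
    """Return the entries that look like defaults or missing data."""
    return [v for v in values
            if v in SENTINELS or (isinstance(v, float) and math.isnan(v))]

monthly_revenue = [12500.0, 999999, 8300.0, float("nan"), 15200.0]
print(suspicious(monthly_revenue))  # [999999, nan]
```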
2. People don't see the importance
Without people knowing and understanding the importance of the specific KPI, it won't fly at all. Not only must people be aware of the KPI, they also have to see the consequences of not meeting the thresholds that were set.
3. People don't understand
In one of the first blogs I discussed the complexity of KPIs (keeping them Simple and Stupid). That was not without reason. People will not easily admit that they did not understand the complex and complicated KPI you showed them. They will say they did, but only because they don't want to look stupid. Inevitably this leads them to ignore the KPI as much as possible (to prevent looking stupid again), or their actions are less effective than they could have been, had they understood the KPI better.
4. People are not interested
As a result of items 2 and 3, or because of other factors, people may simply drop out. Some people just hate being measured, and KPIs are the manifestation of exactly that. Others don't want a product like KPIs because they don't see what's in it for them. And there is always a group of people that loses interest as soon as numbers are involved.
5. People were not involved
I think this is an underlying factor in people being skeptical about the usefulness of a KPI. It is the "Not Invented Here" principle. People like to have influence, especially when it concerns their future and how they will be assessed.
6. After creation, the process was stopped
Creation is just the first phase of using KPIs (see my first few blogs on the creation process). There are three more phases that are just as (or maybe even more) important: Communicate, Consult and Control (together with Create, this is what I would call the Four C model). Items 2-5 of the list above are a direct result of failing to communicate. But "Consult", advising on how to implement and use KPIs, matters too: it is not enough to just tell people that your KPI exists and how important it is. And last but not least, you have to check whether people adhere to the agreed actions. Are due dates met, and was the work done sufficient and correct?
Thursday, 11 December 2014
Why do (almost) all projects fail?
You might find the question in the title a little dishonest, as it suggests that most projects fail. In that sense you are right: it is a wrong question. But it is also wrong for a less obvious reason. The problem with the question is that it does not tell you what is meant by "fail". If I asked you to define project failure, I guess you would come up with something like "delivered above budget" or "not delivered on time". It is true that project performance is most often measured via these two basic KPIs (see for example my blog on IT projects within Government).

But let's not ignore the fact that many projects do indeed fail to deliver on budget and on time. When was the last time you were involved in a project that was either on time or on budget (let alone both)? So even though it is the essence of the Project Manager's job to keep their projects within the GREEN, it almost never happens. In my opinion this is because we are measuring the wrong things. My (maybe bold) statement is that these two KPIs are useless for measuring project performance. Of course they say something about the progress of the project, but not about its actual performance. Focusing on just these two KPIs is like a mouse staring into the headlights of a car: it blinds you to the real "danger". So what are these "real" dangers that we should focus on when executing projects?
For starters, it is safe to assume that your budget estimate was wrong in the first place. We are masters of short-term predictions; that is what our brains are doing all day long. When it comes to long-term predictions, however, we are just terrible. In general we are biased towards optimism (optimism bias), we ignore obvious warning signals (confirmation bias), we take previous events out of context (context effect) and we tend to remember things more positively than they were (egocentric bias).

Secondly, the actual risks that materialize during your project (endangering your delivery time) are not the ones that you summed up beforehand. The risks you can think of beforehand were probably copied from previous project initiation documents and are most often already taken into account when defining the time window of the project. The real issue was already addressed in the blog "When KPIs fail": it is the problem of Black Swans, which are always unexpected but impactful.
So using time and budget as your KPIs is a recipe for failure. Unfortunately, this is not without far-reaching consequences, especially when the GREEN status of these KPIs becomes the goal: requirements are de-scoped, or speed takes precedence over quality.
But apart from these hidden sides of budget and time, more dangers lurk in the dark (if not measured properly). When focusing on budget and time, we tend to forget that the real "performance" of the project is measured by the quality of the thing it is implementing. How often is a Business Case drafted at the beginning of a project and never checked during it? Or, when it is checked, it is altered to fit the new timelines and budget. Even the Business Case itself is most often as "light as a feather", presenting three "scenarios" to choose from: Doing Nothing, Doing Everything, or Doing the Halfway Solution. Furthermore, most Business Cases do not take into account things like IT debt, increased complexity, maintenance costs, embedding in Business As Usual, governance aspects, and so on.
Projects implement change, and change has an effect on people. All sorts of behavioural effects can take place (both inside and outside the project) that have an impact on the project results. Don't underestimate them: coping with change is one of the hardest things for anybody to do. People can (directly or indirectly) sabotage the project. Early adopters might lose interest (and the project loses a sponsor). Quality might go down when people feel the pressure to deliver. People within or outside the project may not believe in the change (even when you have a communication professional). People might mistrust the external people you hired. And so forth. But most importantly, people won't admit that they were wrong and keep on doing what they were doing, believing and assuming it is the right thing. This is especially true for the sponsor, the project manager and the project members, as they have invested the most. Of course, people will deny all of the above when you ask them.
So next time you run a project, please make sure your KPIs measure the Business Case on a frequent basis, and do not only listen to the people involved but observe what they actually do. And be brave: dare to stop projects.
Thursday, 4 December 2014
How to lie with your KPI
There are lies, damn lies and KPIs
Often KPIs are used to manage and prioritize activities within the organization, and employees are expected to act upon the KPI outcome. If in the end the KPI does not show any progress, it might affect the performance assessment of the people involved. Especially in an autocratically led company, the consequences of KPIs turning RED might be harsh (see also my blog on management styles and KPIs).
Too rigorous a usage of KPIs in relation to people management might lead to a culture of fear. Employees will do their best to avoid the RED status. Most of them will make sure that the KPI moves up (or down) by simply doing their best (hoping the KPI turns or stays GREEN). Others might go a little further to keep the KPI in GREEN status.
Manipulation techniques are not that hard. Knowing a few of them might even come in handy. Not to use them yourself, of course! No, just to recognize them when encountered. In the end, it takes a thief to catch one. I've listed six techniques here. They are subtle and are meant to create a smokescreen around the real results. In other words, they help present the results as better than they really are.
1. Play with thresholds
This trick has already been mentioned several times in my previous blogs. It is very easy to manipulate the thresholds above (or below) which your indicator status turns amber or red. As long as you stretch the threshold far enough, your status will stay green. See here an example I found, depicted in an article called "Why Red Amber and Green (RAG)?" at intrafocus.com.
It doesn't take much imagination to see that one could easily make the green area larger by increasing the lower threshold.
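To make the mechanics concrete, here is a minimal sketch (with made-up thresholds and a made-up KPI) of how the very same measured value flips from AMBER to GREEN once someone quietly stretches the amber threshold:

```python
def rag_status(value, amber_threshold, red_threshold):
    """Map a KPI value to a RAG status (here, higher is worse)."""
    if value >= red_threshold:
        return "RED"
    if value >= amber_threshold:
        return "AMBER"
    return "GREEN"

# Say the KPI is "days of schedule slip" and the real result is 12 days.
slip = 12
print(rag_status(slip, amber_threshold=10, red_threshold=20))  # AMBER
# Quietly stretch the amber threshold from 10 to 15 and...
print(rag_status(slip, amber_threshold=15, red_threshold=20))  # GREEN
```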
2. Lie with your graph
Playing with your thresholds is not the only technique. There are many tricks you can play with the way you present the results. Line graphs in particular are easy to manipulate by altering the Y-axis: consider the starting point, end point and scale of your axis. Use suggestive labels or add chart junk. Use two Y-axes to confuse your readers (if you can't convince them, confuse them). This short blog isn't the place to discuss all these techniques; there are some really nice books that I can recommend (1) (among which is my own book on the misuse of statistics ;-)
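A small illustration of the Y-axis trick, with invented numbers: the same near-flat series looks calm on an honest scale and alarming once the axis is truncated.

```python
import matplotlib.pyplot as plt

# Monthly complaint counts: essentially flat.
months = range(1, 13)
complaints = [100, 101, 99, 102, 100, 103, 101, 104, 102, 105, 103, 106]

fig, (honest, tricky) = plt.subplots(1, 2, figsize=(8, 3))
honest.plot(months, complaints)
honest.set_ylim(0, 120)          # axis starts at zero: a near-flat line
honest.set_title("Honest scale")

tricky.plot(months, complaints)
tricky.set_ylim(98, 107)         # truncated axis: "dramatic growth"
tricky.set_title("Truncated Y-axis")
plt.show()
```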
3. Work with percentages
The percentage is a very popular statistic, because almost everybody above the age of 10 has a basic understanding of what it stands for. The Dutch author J. Bakker once said: "percentages are like bikinis: they give you all sorts of ideas, but hide the essence". That is probably why percentages are used in commercials all the time (31% fewer wrinkles! 70% less fat!). Most often they are as hollow as the claims they support, because a percentage on its own does not say anything. It is the absolute figures behind it that really count. An increase of 200% sounds very impressive, but could mean everything or nothing (going from 1 to 3 is also an increase of 200%).
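A tiny sketch (with invented figures) makes the point: the headline number is identical, while the absolute change behind it differs by five orders of magnitude.

```python
def percent_increase(old, new):
    """Relative change, as quoted in headlines."""
    return (new - old) / old * 100

# Both claims read as "a 200% increase", but the absolute figures differ wildly.
print(percent_increase(1, 3))              # 200.0 (two extra units)
print(percent_increase(50_000, 150_000))   # 200.0 (a hundred thousand extra units)
```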
4. Choose your average wisely
Let's say you launched a new website and want to see how successful it is. Your KPI is wisely chosen: not the number of hits, but the average time people stay on your website is your performance indicator. The longer, the better. The picture below shows three possible distributions of how many minutes people stayed on your website.
Now have a look at the average measure most often used: the mean. Depending on the skewness of the results, your mean could be lower or higher. So let's say most people stay on your website only a short time (represented by the graph on the right). Using the mean as your measure, however, gives the impression that the typical stay is longer. This is because the few fans who stay on your site a long time push the mean upward.
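A quick sketch with hypothetical dwell times shows how a handful of fans drags the mean far away from what the typical visitor actually does:

```python
import statistics

# Hypothetical dwell times in minutes: most visitors leave quickly,
# a handful of fans stay for an hour.
dwell_times = [1, 1, 2, 2, 2, 3, 3, 4, 5, 60, 65, 70]

print(statistics.mean(dwell_times))    # ~18.2, pulled up by the fans
print(statistics.median(dwell_times))  # 3.0, the typical visitor
```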
5. Leave out certain data
KPIs don't like extremes or outliers. Such incidents might influence your indicator and result in a (temporary) RED or AMBER status. So one of the most-used tricks is to simply label these extremes "incidents" or "a coincidence" and remove them from your graph.
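In code the trick is a one-liner. A sketch with invented incident counts and an assumed threshold of 6:

```python
# Hypothetical monthly incident counts; month 7 saw a major outage.
incidents = [4, 5, 3, 6, 4, 5, 31, 4, 5, 3]

mean_all = sum(incidents) / len(incidents)   # 7.0: over the assumed threshold
cleaned = [x for x in incidents if x < 30]   # declare the outage "a coincidence"
mean_cleaned = sum(cleaned) / len(cleaned)   # ~4.3: comfortably GREEN again
print(mean_all, mean_cleaned)
```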
6. Aggregate your KPIs
Remember the cartoon from the "Manager styles and KPIs" blog? The one with the birds? What does the bird at the top see? Not much, really. If there is a KPI set for each of the departments below the bird at the top, the overall status will most likely turn GREEN every time, because aggregating three-valued (RAG) statuses will do that. Look at the picture on the left: even if there are many AMBER and RED departments throughout the organization, the top-level KPI is green. The chance of the top-level KPI turning RED is very small, because the RAG structure limits the ways you can aggregate statuses upwards.
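There is no standard way to aggregate RAG statuses, but a common naive roll-up is to score the colors, average the scores, and re-bucket. A sketch (the scoring and cut-offs are assumptions) shows how the reds get washed out on the way up:

```python
# Naive roll-up: score each unit's RAG, average the scores, re-bucket.
SCORES = {"GREEN": 0, "AMBER": 1, "RED": 2}

def roll_up(statuses):
    avg = sum(SCORES[s] for s in statuses) / len(statuses)
    if avg >= 1.5:
        return "RED"
    if avg >= 0.75:
        return "AMBER"
    return "GREEN"

departments = ["GREEN", "GREEN", "GREEN", "AMBER", "AMBER", "RED", "GREEN"]
print(roll_up(departments))  # GREEN: one RED and two AMBERs vanish at the top
```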
(1) Recommended reads on manipulation with graphs: "Van tofu krijg je geheugenverlies" by Coen de Bruijn and "How to Lie with Charts" by G.E. Jones.
Sunday, 23 November 2014
Everything is OK!! REALLY!! You must believe me! All is OK.
"Failing ICT projects at Government are unnecessary" eenvandaag.nl 4 August 2011
"Why do ICT projects at Governments fail so often and so badly?" kafkabrigade.nl 29 May 2012
"Again a ICT project fails at the Government" bnr.nl 25 June 2013
"One third of ICT projects fail within Government with system decommission as a result" - tweakers.net 25 April 2014
"ICT-projects at Government: nobody is hold accountable for failing" - ftm.nl 14 May 2014
"ICT and Government; most often the quality levels are pathetic" - fd.nl 13 October 2014
Just a few of the online headlines that pop up when you Google "Government ICT projects fail" (in Dutch). They do not paint a nice picture of the success rate of ICT projects within the Dutch Government. That this is not a typically Dutch problem is made clear by the Wikipedia list "List of failed and overbudget custom software projects". Apparently government bodies and technology projects are not the best match.
Not only newspapers noticed a mismatch. In 2014 an official governmental task force was put in place to investigate all recent ICT projects. Their findings were quite shocking: 36% of the larger ICT projects within the Netherlands (those with a budget of more than 7.5 million euro) fail so badly that the system to be implemented is decommissioned before it is even fully operational. Another 57% of these major projects are not decommissioned, but turn out more expensive than budgeted or do not deliver the required results. On a yearly basis this leads to a loss of 4 to 5 million euro (1).
In just two words: not good. Luckily for us ordinary citizens, the Dutch Government has created a website where we can find the status of all ICT projects currently being undertaken. This dashboard shows two main KPIs: status on Budget and status on Delivery Time. The status can have three colors:
Green: Normal
Amber: Attention needed
Red: Action needed
The displayed color indicates the state of affairs of the project on the reference date. Given the headlines mentioned above, one would expect a lot of Amber and Red in this KPI dashboard. So let's have a closer look at the Defense Department, which spends the most money on ICT (6 projects totalling 364 million euro).
[Image: rijksictdashboard.nl - Defense Department]
At the Defense Department everything is OK! All the misery is probably in some of the other departments. So I had a look at the others, and to my complete surprise ALL departments report GREEN on both delivery time and costs!
HUH???!
How is it that the KPIs are green for all governmental departments while everybody is screaming that ICT projects are failing by the dozen? Why would a government go all the way to build a website showing its results, and then clearly present its performance as brighter than even its own task force found? And, more importantly for this blog: what can we learn from it?
Almost all organizations have to present their performance in one way or another to external stakeholders. In the Netherlands the Government is obliged to report all progress, and it has chosen to do this via an easy-to-find and simple website. In itself this is noble and can only be cheered. However, one should present the truth in order to come across as honest. Of course it is easy to construct two simple KPIs in such a way that they almost certainly turn GREEN. The thing is that this will eventually backfire. It is best to be clear about your performance. Don't make it look nicer than it really is, not only for your credibility towards the outside world, but also towards your internal employees. They see the "real" performance and might start questioning the integrity of senior management if external KPIs present a different story.

And maybe even more important: use the same KPIs within your organization as the ones you present to your external stakeholders. Don't create specific KPIs for the outside world. In this case, it is almost impossible to believe that these two simple KPIs are the only ones the government itself is using. And if your stakeholders expect you to manage your performance with certain KPIs (as most regulators do), make them your internal KPIs. Furthermore, it is wise to explain how your KPIs are constructed. What do you measure your success against? What are your thresholds? What exactly do you measure (e.g. budget or actual money spent)? How do you incorporate setbacks? Which risks were already incorporated in your budget and will not affect your view on performance? What tolerances did you agree upon?
But above all be transparent. If things go wrong, be honest about it and tell everybody how you think you can manage the issues at hand. That is what good governance is about!
(1) Parlementair onderzoek naar ICT-projecten bij de overheid (see here for the final report, in Dutch)
"Why do ICT projects at Governments fail so often and so badly?" kafkabrigade.nl 29 May 2012
"Again a ICT project fails at the Government" bnr.nl 25 June 2013
"One third of ICT projects fail within Government with system decommission as a result" - tweakers.net 25 April 2014
"ICT-projects at Government: nobody is hold accountable for failing" - ftm.nl 14 May 2014
"ICT and Government; most often the quality levels are pathetic" - fd.nl 13 October 2014
Just a few online headlines that pop up when you Google "Government ICT projects fail" (in Dutch). It does not paint a nice picture about the success rate of ICT projects within the Dutch Government. That this is not a typical Dutch problem is made clear by the list on Wikipedia called "List of failed and overbudget custom software projects". Apparently government bodies and technology projects are not the best match.
Not only newspapers noticed a mismatch. In 2014 an official governmental task force was put in place to in investigate all recent ICT projects conducted. Their findings were quite shocking. In total 36% of the larger icy-projects within the Netherlands, with a budget more than 7.5 million euro fail so badly that the system to be implemented is decommissioned even before it is fully operational. For 57% of these major projects, no decommission takes place, but are more expensive than budgeted or do not deliver the results as required. On a yearly basis this leads to a loss of 4 to 5 million euro (1).
In just two words: not good. Luckily for us normal citizens, there is a website created by the Dutch Government where we can find the status of all ICT projects currently being undertaken. This Dashboard shows two main KPIs: status on Budget and on Delivery Time. The Status can have three colors:
Green: Normal
Amber: Attention needed
Red: Action needed
The displayed color indicates the state of affairs of the project on the reference date. Given the headlines mentioned, one would expect a lot of Amber and Green in this KPI dashboard. So let's have a closer look at the Defense Department, spending the most money on ICT (6 projects with a total of 364 mln euro).
rijksictdashboard.nl - Defense Department |
At the Defense Department everything is ok! All the misery is probably in some of the other departments. So I had a look at the others and to my complete surprise ALL departments report GREEN on both delivery time and costs!
HUH???! |
Almost all companies have to present their performance in one way or another to external stakeholders. In the Netherlands the Government is obliged to report all progress and they've chosen to do this via an easy to find and simple website. In itself this is nobel and can only be cheered. However one should present the truth in order to come across as being honest. Of course it easy to construct two simple KPIs in such a way that they almost certainly turn GREEN. The thing is that this will in the end bite itself in the tail. Best is to be clear about your performance. Don't make it nicer than it really is. Not only for your credibility towards the outside world, but also towards your internal employees. They see the "real" performance and might start questioning the integrity of senior management if external KPIs present a different story. And maybe even more important; use the same KPIs within your organization as the ones that you present to your external stakeholders. Don't create specific KPIs for the outside world. In this case, it is almost impossible to believe that these two simple KPIs are the only ones that the government is using themselves. And if your stakeholders expect you to manage your performance with certain KPIs (like most regulators do), make them your internal KPIs. Furthermore it is wise to explain how your KPIs are constructed. What do you measure your success against? What are your thresholds? What do you measure (e.g. budget or actual money spent)? How do you incorporate setbacks? What risks were already incorporated in your budget and will not effect your view on performance? What tolerance did you agreed upon?
But above all be transparent. If things go wrong, be honest about it and tell everybody how you think you can manage the issues at hand. That is what good governance is about!
(1) Parlementair onderzoek naar ICT-projecten bij de overheid (See here the Endreport in Dutch)
Thursday, 13 November 2014
Crisis in the world of Science
"In science it often happens that scientists say, "You know that's a really good argument; my position is mistaken," and then they would actually change their minds and you never hear that old view from them again. They really do it. It doesn't happen as often as it should, because scientists are human and change is sometimes painful. But it happens every day."
Carl Sagan (1987)
Making mistakes and accepting them is one of the reasons we have made so much progress in science over the past decades. It is painful when it happens, but every time we learn from these scientific mistakes. That is what "scientific innovation" is all about. One of the mechanisms to discover mistakes and keep scientists focused is peer review. When you want to publish your results in a (well-known) scientific journal, you make sure it is read by knowledgeable peers first (if only to prevent being shamed when mistakes are discovered after publication). In other words, scientists purposely seek critique and want to be challenged. If you can withstand the pushback of your colleagues, your hypothesis is one step closer to being true.
However, over the last couple of years this "self-imposed" critique-seeking process has been crumbling, endangering progress in the scientific world. Before we try to find out why, let's go back to the summer of 2011. In that year the scientific world was shaken by the discovery of one of its largest fraud cases. Diederik Stapel, at that time still a professor of Social Psychology at Tilburg University, confessed to having falsified several data sets used in his studies. An extensive report investigated all of Stapel's 130 articles and 24 book chapters. According to the first findings, of the first batch of 20 publications, 12 were falsified, and three contributions to books were also fraudulent. How was it possible that over all these years no one discovered or even suspected this? No co-authors, students, peers, or anyone else.
Many have argued that this was a unique case, but that remains to be seen. Of course, fraud at such a large scale is seldom discovered, but several investigations have shown that "photoshopping" results is not uncommon in the scientific world. A study published as early as 2004 in BMC Medical Research Methodology claimed that a high proportion of papers published in leading scientific journals contained statistical errors. Not all of these errors led to erroneous conclusions, but the authors found that some of them may have caused non-significant findings to be misrepresented as significant (1).
The different ways to manipulate results are a topic for another blog; here I am more interested in how it is possible that so many "mistakes" are not caught during peer review. The answer is actually very simple: peers simply don't read the articles. Today the primary focus of a scientist is to produce papers, not to review those of others. Have a look at the infographic shown here.
If you were to print out just the first page of every item indexed in Web of Science, the stack of paper would reach almost to the top of Mount Kilimanjaro. The graph also shows that only the top meter and a half would have received 1,000 citations or more (2).
Research has become publication-driven. Universities compete for research money and students, and in order to reach their goals they have set productivity targets. The KPIs chosen encourage the production of many papers with high visibility (in order to reach the top of the pile and thus be noticed). Publication-driven KPIs promote calculating behavior: what topic brings me money or gets me students? Assessments are therefore based not on quality but on quantity (3).
Frits van Oostrom (President of the Royal Netherlands Academy of Arts and Sciences from 2005 to 2008) put it like this in 2007:
"Especially where the (in itself noble) principle to measure is to know entered into a covenant with the fear for substantive judgment, it has led to the glorification of the number, and preferably the large and growing one. And what is not countable, does not count. It leads to putting means (measurement) over goal (quality).
These are obviously very insidious mechanisms, with a high probability of perversion, as we all know. Because researchers must of course be productive, but none of us will propose that someone who produces thirty articles per per year is a better researcher or scholar than someone with three; or that the teacher who dutifully adheres to the study guide and passes 90% is a better teacher than the one who regularly improvises and rejects 30%. But monetizing, measuring and quantifying lead naturally to the dream of more so-called benefits and for less costs" (4). (see the full speech here - in Dutch)
I am not claiming that the publication KPI is single-handedly responsible for a crisis in the science world. But many researchers have little incentive to review their colleagues' work, because they are rated on production, not on reading. Furthermore, it has always been very difficult to publish research that shows no significant effects (in itself an important finding: it tells researchers what not to research in the future). The publication KPI is not helping in that respect either. Only noticeable papers can count on being published, so better to keep the NO SIGNIFICANT RESULT papers in the drawer and continue the search for "real" findings. And what happens if those results don't come quickly enough...
Next time we'll have a look at KPIs in Government
(1) Emili Garcia-Berthou and Carles Alcaraz, Statistical Errors, BMC Medical Research Methodology 2004, 4:13
(2) see here for more details on the "paper mountain"
(3) This paragraph was based on a presentation by R. Abma (scholar of General Social Sciences at Utrecht University and author of De publicatiefabriek), given during the Skepsis Congres 2014.
(4) The original Dutch text: "Vooral waar het op zichzelf nobele beginsel 'meten is weten' een monsterverbond aanging met schrik voor het inhoudelijke oordeel heeft dit geleid tot de verheerlijking van het getal, en liefst het grote en het groeiende. En wat niet telbaar is, telt niet. Het leidt ten diepste tot het overschaduwen van doel (kwaliteit) door middel (meting). Dit zijn natuurlijk zeer verraderlijke mechanismen, met een hoge kans op pervertering, zoals wij allen weten. Want onderzoekers moeten uiteraard wel produktief zijn, maar niemand onder ons zal ook maar een moment staande houden dat iemand die dertig artikelen per jaar produceert daarom een betere onderzoeker laat staan geleerde is dan iemand met drie; of dat de docent die braaf de studiewijzer aanhoudt en bij wie 90% slaagt een betere leraar is dan wie geregeld improviseert en 30% afwijst. Maar monetariseren, meten en becijferen leiden als vanzelf tot de wensdroom van meer zogenaamde baten voor minder zogenoemde kosten."
Monday, 10 November 2014
Best flow chart ever: management styles and KPIs
"When the top level guys look down they only see shitheads. When bottom level guys look up they only see assholes"
Good chance you have seen the cartoon before: the bird at the top looking satisfied, while the birds on the lower levels look more and more miserable. Does it feel familiar? Hopefully not, but chances are you recognize the gist of it.
The cartoon triggered me to look for a relation between certain types of managers and their usage of KPIs. This raised the question of which different types of managers are distinguished in the literature.
The problem with that question, however, is that one could probably fill a large library with all the books written on the topic. And to make things worse, I personally have a problem with management books that talk about personality traits. They suggest that personality or behavior can be described by just a few labels. Of course that is not the case: all research on human behavior combined (e.g. neurology, biology, psychology, sociology) has not yet found definite, unique, separate and unambiguous personality types. But being skeptical has its practical limits, so in the end I settled on this short presentation, which summarizes the management styles most often encountered:
- Autocratic style
- Bureaucratic style
- Democratic style
- Laissez-faire style
Autocratic
For this type of manager, KPIs give a sense of control over the situation and their people. They probably don't have many KPIs, but they make sure the ones they have are known by all employees involved. Thresholds are set strictly and not much flexibility is allowed. They will make sure that what got them here will get them there (again). "Green" traffic lights are expected, and an "Amber" status is already frowned upon (an explanation is expected, to say the least). Their adage is very simple: KPIs are met. Always.
Bureaucratic
Last week's blog was about the United Nations and their usage of KPIs for the Millennium Goals: too many goals, too long a horizon, vague targets and diffuse responsibility. KPIs are used because "everybody uses them". They are not the result of a creative process in which all stakeholders were involved; they are merely copies of the ones used in the past. People don't really believe in the KPI concept, as everybody can point at each other once things go wrong. In other words, KPIs are just another thing you are supposed to have as a manager.
Democratic
Everybody is involved in the creation of the KPI set in a democratic organization. And by everybody I mean really everybody. All input is gathered and taken into account when the KPI is created; the creation process is more important than the end result. The KPIs are not very specific (because everybody has to recognize their own input in the outcome). Thresholds are set, but so generously that it is difficult to ever reach the "Amber" or "Red" status. The KPIs are not fixed, changes are made frequently, and everybody is allowed to put the current KPIs up for debate.
Laissez-faire
Why measure anything if you can rely on the intrinsic motivation of your employees? Make sure you emphasize the responsibility of the individual, and the sum will be greater than the individual parts. No KPIs are needed, and if they exist it is more for the outside world than for internal use. Progress is measured simply by looking at the results, and employees are expected to give a signal when things go wrong.
Next time: Crisis in the world of Science
Saturday, 1 November 2014
What we can learn from the UN Millennium Goals
More than 14 years ago, the United Nations Millennium Declaration was signed by the leaders of 189 countries. They committed themselves to 8 goals to be accomplished by 2015. For each of the eight Millennium Goals, several KPIs were set to measure progress and success (see this link for a full list).
This is what US President Barack Obama had to say in 2010 about the progress at that time.
"Nor can anyone deny the progress that has been made toward achieving certain Millennium Development Goals. The doors of education have been opened to tens of millions of children, boys and girls. New cases of HIV/AIDS and malaria and tuberculosis are down. Access to clean drinking water is up. Around the world, hundreds of millions of people have been lifted from extreme poverty. That is all for the good, and it’s a testimony to the extraordinary work that’s been done both within countries and by the international community.
Yet we must also face the fact that progress towards other goals that were set has not come nearly fast enough. Not for the hundreds of thousands of women who lose their lives every year simply giving birth. Not for the millions of children who die from agony of malnutrition. Not for the nearly one billion people who endure the misery of chronic hunger.
This is the reality we must face -- that if the international community just keeps doing the same things the same way, we may make some modest progress here and there, but we will miss many development goals. That is the truth. With 10 years down and just five years before our development targets come due, we must do better."
(see here for the full transcript)
Now, with just one year to go, it doesn't look much better. Don't get me wrong: I really do think the work that has been done is extremely important. The United Nations might not be a perfect institution, but it is, I think, the only way to get so many countries to commit to doing something. All eight goals address serious issues that deserve our attention and deserve to be solved.
My focus here is on what we can learn, not only from the goals set, but also from the KPIs chosen to measure their success.
Too many Goals
It is difficult to choose which of the eight goals is most important. Fighting poverty? Universal primary education? Reducing child mortality? Improving maternal health? You name it. Even if it is difficult, or close to impossible, it is wise to choose a maximum of three. More is not only too ambitious, it is a recipe for disaster: one cannot focus on eight different goals, and it takes a tremendous amount of money to make all eight of them successful. Think of the task the UN now faces explaining why so many goals failed, and how difficult it will be to find nearly 200 countries willing to participate again.
Was ROI taken into account?
When you set these kinds of goals and this many countries commit to them, you can be sure that money will be spent. But these goals look like they were chosen simply because they sounded good. The question is whether a good cost-benefit analysis was done. Of course things will change for the better; the question is whether they will also help in the long run. You want goals that help prevent having to set new goals in the future.
Too many KPIs
In total there are 60 KPIs to measure the success of the eight goals. The good thing is that concrete KPIs were set with concrete thresholds. The downside is that there are so many that it is simply impossible to meet them all. If you have 60 targets, you really have none, with all the negative consequences that entails, not the least of which is the erosion of credibility.
The KPIs are SMART, but therefore not easy to reach
The world is a complex place. Many factors come into play when it comes to social issues: politics, religious conflicts, war, corruption, failing economies, and so on all complicate accomplishing the things you want to achieve.
Long Timeframe
Fifteen years is a long time. It is difficult for people (let alone countries) to stay focused for that long. If you read the rest of Barack Obama's speech, you will notice that most of it is directed at the richer countries, which seem to be losing focus.
Diffuse responsibilities
Who is actually accountable for reaching the goals? 189 countries? A country is not someone you can hold accountable. And even if you did find a person in 2000 who committed him- or herself, he or she will probably not be around in 2015 to answer for it.
To summarize, I want to quote Bjorn Lomborg, who runs the Copenhagen Consensus Center. This center engages economists from all over the globe to think about the world's biggest issues; more importantly, it helps select those issues with the highest ROI. This is what he had to say in an interview with Freakonomics Radio:
"There was actually no good cost and benefit analysis, it was just a number of targets that all sound really good. And generally I also think they really are very good. But now the U.N. is going to redo the targets from 2015 and fifteen years onwards. And this time, instead of having a very closed argument, it was basically a few guys around Kofi Annan who set out these targets back in 2000. And then everybody adopted them. This time they have said we want to hear everybody’s input. Of course that’s very laudable, but not surprisingly, it’s also meant that we probably have about 1,400 potential targets on the table. And so we need to make sure that we don’t just end up with a whole long list of Christmas trees as they call them in the U.N. jargon – you know, you just have everything and you wish for all good things, because they’re not likely to be as effective." (see here for full interview)
Hopefully, next year the UN will choose and develop its KPIs more wisely. If done right, the promises could actually be fulfilled this time.