A few blogs back I wrote about when KPIs fail (see here for that blog), and the blog "How to lie with your KPI" covered deliberate manipulation of KPIs and their outcomes. But there are more reasons why KPIs fail in the end.
Over and over again I have emphasized that KPIs are meant to set change in motion when necessary. If no change is initiated, the goals will not be met and the performance was measured for nothing. Here is a list of things that can prevent you from acting upon the outcome of a KPI.
1. Bad data quality
KPIs run on data. Without good information the KPI cannot be created; data drives your KPI. That is why correct data is of the utmost importance. Unfortunately, all sorts of things can go wrong with your data, especially when the data was not originally created for the purpose your KPI uses it for, or because default values can be entered (e.g. '999999'). This can be especially tricky with financial KPIs, but it is bothersome with commercial KPIs too. Can you really ensure that all the data you use is correct? A small error in your underlying data can have a huge impact. This topic deserves a separate blog.
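To make this concrete, here is a minimal sketch (in Python with pandas) of a sanity check you could run on KPI input data before trusting it; the column names and the list of suspicious defaults are invented for the example.

```python
# A sketch: scan a KPI input table for values that look like typed-in
# defaults rather than real measurements. The sentinel list is hypothetical.
import pandas as pd

SENTINELS = {999999, -1}  # placeholder values people enter "just to save the record"

def suspect_rows(df: pd.DataFrame, column: str) -> pd.DataFrame:
    """Return rows whose value looks like a default or is missing entirely."""
    return df[df[column].isin(SENTINELS) | df[column].isna()]

invoices = pd.DataFrame({"invoice_id": [1, 2, 3, 4],
                         "amount_eur": [1200.50, 999999, None, 87.10]})
print(suspect_rows(invoices, "amount_eur"))  # flags the 999999 and the missing amount
```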
2. People don't see the importance
Without people knowing and understanding the importance of a specific KPI, it won't fly at all. Not only must people be aware of KPIs, they also have to see the consequences of not meeting the thresholds that have been set.
3. People don't understand
In one of the first blogs I discussed the complexity of KPIs (keeping them Simple and Stupid). This wasn't without reason. People will not easily admit that they didn't understand the complex and complicated KPI you showed them. They will say they did, but only because they don't want to look stupid. Inevitably this will lead them to ignore the KPI as much as possible (to prevent looking stupid again). Or their actions will be less effective than they could have been, had they understood the KPI better.
4. People are not interested
As a result of items 2 and 3, or because of other factors, people might just drop out. Some people simply hate being measured, and KPIs are the manifestation of that. Others don't want a product like KPIs because they don't see what's in it for them. And there is always a group of people that loses interest as soon as numbers are involved.
5. People were not involved
I think this is an underlying reason why people are skeptical about the usefulness of a KPI. It is the "Not Invented Here" principle. People do like to have influence, especially when it concerns their future and how they will be assessed.
6. After creation, the process was stopped
Creation is just the first phase of using KPIs (see my first few blogs on the creation process). There are three more phases that are just as important (or maybe even more so): Communicate, Consult and Control (together with Create, this is what I would call the Four C model). Items 2-5 of the list above are a direct result of not communicating. But "Consult" matters too: advice on how to implement and use KPIs is important, because it is not enough to just tell people that your KPI exists and how important it is. And last but not least, you have to check whether people adhere to the agreed actions. Are due dates met, and was the work done sufficient and correct?
Tuesday 23 December 2014
Thursday 11 December 2014
Why do (almost) all projects fail?
You might find the question in the title a little dishonest, as it suggests that most projects fail. And in that sense you are right: it is the wrong question. But it is also wrong for a less obvious reason. The problem with the question is that it does not tell you what is meant by "fail". If I asked you to define project failure, I guess you would come up with something like "delivered above budget" or "not delivered on time". It is true that project performance is most often measured via these two basic KPIs (see for example my blog on IT projects within Government).
But let's not ignore the fact that many projects do indeed fail to deliver on budget and on time. When was the last time you were involved in a project that was either on time or on budget (let alone both)? So even though it is the essence of the Project Manager's job to keep their projects within the GREEN, it almost never happens. In my opinion this is because we are measuring the wrong things. My (maybe bold) statement is that these two KPIs are useless for measuring project performance. Of course they say something about the progress of the project, but not about the actual performance.
Focusing on just these two KPIs is like the mouse staring into the two headlights of a car: it blinds you to the real "danger". So what are these "real" dangers that we should focus on when executing projects?
For starters, it is safe to assume that your budget estimate was wrong in the first place. We are masters at short-term predictions; that's what our brains do all day long. However, when it comes to long-term predictions we are just terrible. In general we are biased towards optimism (optimism bias), we ignore obvious warning signals (confirmation bias), we take previous events out of context (context effect), and we tend to remember things more positively than they were (egocentric bias).
Secondly, the actual risks that materialize during your project (endangering your delivery time) are not the ones that you summed up beforehand. The ones you can think of beforehand were probably copied from the previous project's starting documents and are most often already taken into account when defining the time window of the project. The real issue was already addressed in the blog "When KPIs fail": it is the problem of Black Swans, which are always unexpected but impactful.
So using time and budget as your KPIs is a recipe for failure. Unfortunately this is not without far-reaching consequences, especially when the GREEN status of these KPIs becomes the goal: requirements get de-scoped, or speed takes precedence over quality.
But apart from these hidden sides of budget and time, there are more dangers lurking in the dark (if not measured properly). When focusing on budget and time we tend to forget that the real "performance" of the project is measured by the quality of the thing it is implementing. How often is a Business Case drafted at the beginning of a project and never checked during the project? Or, when it is checked, it is altered to fit the new timelines and budget. Even the Business Case itself is most often as "light as a feather", presenting three "scenarios" to choose from: Doing Nothing, Doing Everything, or Doing the Halfway Solution. Furthermore, most Business Cases do not take into account things like IT debt, increased complexity, maintenance costs, embedding in Business As Usual, governance aspects, etc.
Projects implement change, and change has an effect on people. All sorts of behavioural effects can take place (both inside and outside the project) that have an impact on the project results. Don't underestimate them: coping with change is one of the hardest things for everybody. People can (directly or indirectly) sabotage the project. Early adopters might lose interest (and the project loses a sponsor). Quality might go down when people feel the pressure to deliver. People within or outside the project may not believe in the change (even when you have a communication professional). People might mistrust the external people you hired. And so forth. But most importantly, people won't admit that they were wrong and will keep on doing what they were doing, believing and assuming it is the right thing. This is especially true for the sponsor, project manager, and project members, as they have invested the most. Of course people will deny all of the above when you ask them.
So next time you run a project, please make sure your KPIs are set to measure the Business Case on a frequent basis, and don't just listen to the people involved: observe what they actually do. And be brave: dare to stop projects.
Thursday 4 December 2014
How to lie with your KPI
There are lies, damn lies and KPIs
Often KPIs are used to manage and prioritize activities within the organization. It is expected that employees act upon the KPI outcome. If in the end the KPI does not show any progress, it might have an effect on the performance assessment of the people involved. Especially in an autocratically led company, the consequences of KPIs turning RED might be harsh (see also my blog on management styles and KPIs).
Too rigorous a use of KPIs in people management might lead to a culture of fear. Employees will do their best to avoid the RED status. Most of them will make sure the KPI moves up (or down) simply by doing their best work (hoping the KPI turns or stays GREEN). Others might go a little further to keep the KPI GREEN.
Manipulation techniques are not that hard. Knowing a few of them might even come in handy. Not to use them yourself, of course! No, just to recognize them when you encounter them. In the end, it takes a thief to catch one. I've listed six techniques here. They are subtle and are meant to create a smokescreen around the real results. In other words, they help present the results as better than they really are.
1. Play with thresholds
This trick has already been mentioned several times in my previous blogs. It is very easy to manipulate the thresholds above (or below) which your indicator status turns amber or red. As long as you stretch the threshold far enough, your status will stay green. See here an example I found in an article called "Why Red Amber and Green (RAG)?" at intrafocus.com.
It doesn't take much imagination to see that one could easily make the green area larger by raising the lower threshold.
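To see how little the underlying reality has to change, here is a small sketch of a generic RAG rule with hypothetical thresholds; the measurement stays exactly the same, only the thresholds move.

```python
# A sketch of threshold-stretching: the same defect rate is AMBER or
# GREEN depending purely on where the thresholds are placed.
def rag_status(value: float, amber_from: float, red_from: float) -> str:
    """Higher is worse: GREEN below amber_from, RED at or above red_from."""
    if value >= red_from:
        return "RED"
    if value >= amber_from:
        return "AMBER"
    return "GREEN"

defect_rate = 7.5  # the measurement itself never changes
print(rag_status(defect_rate, amber_from=5, red_from=10))  # AMBER
print(rag_status(defect_rate, amber_from=8, red_from=12))  # GREEN, after "stretching"
```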
2. Lie with your graph
Playing with your thresholds is not the only technique. There are many tricks you can play with the way you present the results. Line graphs in particular are easy to manipulate by altering the Y-axis: consider the starting point, end point and scale of your axis. Use suggestive labels or add chart junk. Use two Y-axes to confuse your readers (if you can't convince them, confuse them). This short blog, however, isn't the place to discuss all these techniques. There are some really nice books that I can recommend (1) (among which is my own book on the misuse of statistics ;-)
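One of those tricks is worth a quick illustration here: the truncated Y-axis. The sketch below plots the same invented numbers twice; only the axis limits differ.

```python
# A sketch of the Y-axis trick: identical data, two very different stories.
import matplotlib.pyplot as plt

months = ["Jan", "Feb", "Mar", "Apr"]
sales = [102, 103, 104, 105]  # an almost flat series

fig, (honest, dramatic) = plt.subplots(1, 2, figsize=(8, 3))
honest.plot(months, sales)
honest.set_ylim(0, 120)          # axis starts at zero: modest growth
honest.set_title("Honest axis")
dramatic.plot(months, sales)
dramatic.set_ylim(101.5, 105.5)  # truncated axis: spectacular growth
dramatic.set_title("Truncated axis")
plt.tight_layout()
plt.show()
```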
3. Work with percentages
The percentage is a very popular statistic. That's because almost everybody above the age of 10 has a basic understanding of what it stands for. The Dutch author J. Bakker once said: "percentages are like bikinis. They bring you to all sorts of ideas, but hide the essence". That's probably why percentages are used in commercials all the time (31% less wrinkles! 70% less fat!). Most often they are as hollow as the claims they support, because a percentage on its own does not say anything. It's the absolute figures behind it that really count. An increase of 200% sounds very impressive, but could mean everything or nothing (going from 1 to 3 is also an increase of 200%).
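A two-line worked example (with invented figures) makes the point:

```python
# The same "200% growth" can describe wildly different realities.
def pct_increase(old: float, new: float) -> float:
    return (new - old) / old * 100

print(pct_increase(1, 3))            # 200.0 -> "200% growth!" on a base of 1
print(pct_increase(10_000, 30_000))  # also 200.0, but a very different story
```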
4. Choose your average wisely
Let's say you launched a new website and you want to see how successful it is. Your KPI is wisely chosen: not the number of hits, but the average time people stay on your website is your performance indicator. The longer the better. The picture below shows three possible outcomes of how many people stayed a certain number of minutes on your website.
Now have a look at the measure of average most often used: the mean. Depending on the skewness of the results, your mean could be lower or higher. So let's say that most people stay only a short time on your website (represented by the graph on the right). Using the mean as your measure, however, gives you the impression that the time spent is higher. This is because the few "fans" that stay on your site a long time push the mean upward.
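A quick sketch with an invented, skewed set of visit durations shows the gap between the mean and the median (the figure a manipulator would rather not show you):

```python
# A few long-staying "fans" drag the mean up; the median keeps
# reporting the typical visit. Numbers are made up for illustration.
from statistics import mean, median

visit_minutes = [1, 1, 2, 2, 2, 3, 3, 4, 45, 60]  # most visits are short
print(f"mean:   {mean(visit_minutes):.1f} min")    # mean:   12.3 min
print(f"median: {median(visit_minutes):.1f} min")  # median: 2.5 min
```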
5. Leave out certain data
KPIs don't like extremes or outliers. These incidents might influence your indicator and result in a (temporary) RED or AMBER status. So one of the most-used tricks is to simply label these extremes "incidents" or "a coincidence" and remove them from your graph.
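A small sketch with hypothetical figures: relabel the two bad weeks as "incidents" and the reported average quietly improves.

```python
# Trick 5 in action: drop the "incidents" and the KPI looks healthy again.
from statistics import mean

complaints_per_week = [12, 14, 11, 95, 13, 12, 88]       # two terrible weeks
honest = mean(complaints_per_week)                        # 35.0
cleaned = mean(x for x in complaints_per_week if x < 50)  # "incidents removed": 12.4
print(f"reported: {cleaned:.1f} (honest figure: {honest:.1f})")
```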
6. Aggregate your KPIs
Remember the cartoon from the "Manager styles and KPIs" blog? The one with the birds? What does the bird at the top see? Not much, really. If there is a KPI set for each of the departments below the bird at the top, most likely the overall status will turn GREEN every time. That is simply what aggregating three values (RAG) does. Look at the picture on the left: even if there are many AMBER and RED departments throughout the organization, the top-level KPI is green. The chance of the top-level KPI turning RED is very small, because the RAG structure limits the ways you can aggregate statuses upwards.
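To see why this happens, here is a sketch of one common aggregation rule (the scoring and cut-offs below are my assumptions; many variants exist, but most share the same flaw): score the statuses, average them, and map the average back to a color.

```python
# Averaging RAG scores lets ambers and reds vanish on the way up.
SCORE = {"GREEN": 0, "AMBER": 1, "RED": 2}

def aggregate(statuses: list[str]) -> str:
    avg = sum(SCORE[s] for s in statuses) / len(statuses)
    if avg >= 1.5:
        return "RED"
    if avg >= 0.75:
        return "AMBER"
    return "GREEN"

departments = ["GREEN", "GREEN", "AMBER", "RED", "GREEN", "AMBER", "GREEN"]
print(aggregate(departments))  # GREEN: two ambers and a red vanish at the top
```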
(1) Recommended reads on manipulation with graphs: "Van tofu krijg je geheugenverlies" by Coen de Bruijn and "How to Lie with Charts" by G.E. Jones.
Sunday 23 November 2014
Everything is OK!! REALLY!! You must believe me! All is OK.
"Failing ICT projects at Government are unnecessary" eenvandaag.nl 4 August 2011
"Why do ICT projects at Governments fail so often and so badly?" kafkabrigade.nl 29 May 2012
"Again a ICT project fails at the Government" bnr.nl 25 June 2013
"One third of ICT projects fail within Government with system decommission as a result" - tweakers.net 25 April 2014
"ICT-projects at Government: nobody is hold accountable for failing" - ftm.nl 14 May 2014
"ICT and Government; most often the quality levels are pathetic" - fd.nl 13 October 2014
Just a few online headlines that pop up when you Google "Government ICT projects fail" (in Dutch). It does not paint a pretty picture of the success rate of ICT projects within the Dutch Government. That this is not a typically Dutch problem is made clear by the Wikipedia list called "List of failed and overbudget custom software projects". Apparently government bodies and technology projects are not the best match.
Not only newspapers noticed a mismatch. In 2014 an official governmental task force was put in place to investigate all recent ICT projects. Its findings were quite shocking: 36% of the larger ICT projects within the Netherlands with a budget of more than 7.5 million euro fail so badly that the system being implemented is decommissioned before it is even fully operational. For another 57% of these major projects no decommissioning takes place, but they end up more expensive than budgeted or do not deliver the required results. On a yearly basis this leads to a loss of 4 to 5 million euro (1).
In just two words: not good. Luckily for us normal citizens, there is a website created by the Dutch Government where we can find the status of all ICT projects currently being undertaken. This Dashboard shows two main KPIs: status on Budget and on Delivery Time. The Status can have three colors:
Green: Normal
Amber: Attention needed
Red: Action needed
The displayed color indicates the state of affairs of the project on the reference date. Given the headlines mentioned, one would expect a lot of Amber and Red in this KPI dashboard. So let's have a closer look at the Defense Department, which spends the most money on ICT (6 projects with a total of 364 mln euro).
[Screenshot: rijksictdashboard.nl - Defense Department]
At the Defense Department everything is ok! All the misery is probably in some of the other departments. So I had a look at the others and to my complete surprise ALL departments report GREEN on both delivery time and costs!
How is it that the KPIs are green for all governmental departments while everybody is screaming that ICT projects are failing by the dozen? Why would a government go all the way to build a website showing its results and present performance as brighter than even its own task force found? And, more importantly for this blog, what can we learn from it?
Almost all companies have to present their performance in one way or another to external stakeholders. In the Netherlands the Government is obliged to report on all progress, and it has chosen to do this via an easy-to-find and simple website. In itself this is noble and can only be cheered. However, one should present the truth in order to come across as honest. Of course it is easy to construct two simple KPIs in such a way that they almost certainly turn GREEN, but this will come back to bite you in the end. It is best to be clear about your performance. Don't make it look nicer than it really is. Not only for your credibility towards the outside world, but also towards your internal employees: they see the "real" performance and might start questioning the integrity of senior management if external KPIs present a different story. And maybe even more important: use the same KPIs within your organization as the ones you present to your external stakeholders. Don't create specific KPIs for the outside world. In this case, it is almost impossible to believe that these two simple KPIs are the only ones the government itself is using. And if your stakeholders expect you to manage your performance with certain KPIs (like most regulators do), make them your internal KPIs.

Furthermore, it is wise to explain how your KPIs are constructed. What do you measure your success against? What are your thresholds? What exactly do you measure (e.g. budget or actual money spent)? How do you incorporate setbacks? What risks were already incorporated in your budget and will not affect your view on performance? What tolerances did you agree upon?
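As a sketch of what writing down a KPI's construction could look like, here is a hypothetical definition made explicit; every field name and value below is invented for illustration.

```python
# A minimal, hypothetical way to make a KPI's construction explicit.
from dataclasses import dataclass

@dataclass
class KpiDefinition:
    name: str
    measures: str           # what exactly is counted (budget vs. money actually spent)
    amber_from: float       # threshold at which the status turns AMBER
    red_from: float         # threshold at which the status turns RED
    tolerance: str          # agreed slack before a breach counts
    risks_in_baseline: str  # risks already priced into the budget

budget_kpi = KpiDefinition(
    name="Project budget status",
    measures="actual spend vs. approved baseline, monthly",
    amber_from=1.05,  # 5% over baseline
    red_from=1.15,    # 15% over baseline
    tolerance="+/- 5% of baseline",
    risks_in_baseline="known supplier rate increases",
)
print(budget_kpi)
```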
But above all be transparent. If things go wrong, be honest about it and tell everybody how you think you can manage the issues at hand. That is what good governance is about!
(1) Parlementair onderzoek naar ICT-projecten bij de overheid (see here for the final report, in Dutch)
"Why do ICT projects at Governments fail so often and so badly?" kafkabrigade.nl 29 May 2012
"Again a ICT project fails at the Government" bnr.nl 25 June 2013
"One third of ICT projects fail within Government with system decommission as a result" - tweakers.net 25 April 2014
"ICT-projects at Government: nobody is hold accountable for failing" - ftm.nl 14 May 2014
"ICT and Government; most often the quality levels are pathetic" - fd.nl 13 October 2014
Just a few online headlines that pop up when you Google "Government ICT projects fail" (in Dutch). It does not paint a nice picture about the success rate of ICT projects within the Dutch Government. That this is not a typical Dutch problem is made clear by the list on Wikipedia called "List of failed and overbudget custom software projects". Apparently government bodies and technology projects are not the best match.
Not only newspapers noticed a mismatch. In 2014 an official governmental task force was put in place to in investigate all recent ICT projects conducted. Their findings were quite shocking. In total 36% of the larger icy-projects within the Netherlands, with a budget more than 7.5 million euro fail so badly that the system to be implemented is decommissioned even before it is fully operational. For 57% of these major projects, no decommission takes place, but are more expensive than budgeted or do not deliver the results as required. On a yearly basis this leads to a loss of 4 to 5 million euro (1).
In just two words: not good. Luckily for us normal citizens, there is a website created by the Dutch Government where we can find the status of all ICT projects currently being undertaken. This Dashboard shows two main KPIs: status on Budget and on Delivery Time. The Status can have three colors:
Green: Normal
Amber: Attention needed
Red: Action needed
The displayed color indicates the state of affairs of the project on the reference date. Given the headlines mentioned, one would expect a lot of Amber and Green in this KPI dashboard. So let's have a closer look at the Defense Department, spending the most money on ICT (6 projects with a total of 364 mln euro).
rijksictdashboard.nl - Defense Department |
At the Defense Department everything is ok! All the misery is probably in some of the other departments. So I had a look at the others and to my complete surprise ALL departments report GREEN on both delivery time and costs!
HUH???! |
Almost all companies have to present their performance in one way or another to external stakeholders. In the Netherlands the Government is obliged to report all progress and they've chosen to do this via an easy to find and simple website. In itself this is nobel and can only be cheered. However one should present the truth in order to come across as being honest. Of course it easy to construct two simple KPIs in such a way that they almost certainly turn GREEN. The thing is that this will in the end bite itself in the tail. Best is to be clear about your performance. Don't make it nicer than it really is. Not only for your credibility towards the outside world, but also towards your internal employees. They see the "real" performance and might start questioning the integrity of senior management if external KPIs present a different story. And maybe even more important; use the same KPIs within your organization as the ones that you present to your external stakeholders. Don't create specific KPIs for the outside world. In this case, it is almost impossible to believe that these two simple KPIs are the only ones that the government is using themselves. And if your stakeholders expect you to manage your performance with certain KPIs (like most regulators do), make them your internal KPIs. Furthermore it is wise to explain how your KPIs are constructed. What do you measure your success against? What are your thresholds? What do you measure (e.g. budget or actual money spent)? How do you incorporate setbacks? What risks were already incorporated in your budget and will not effect your view on performance? What tolerance did you agreed upon?
But above all be transparent. If things go wrong, be honest about it and tell everybody how you think you can manage the issues at hand. That is what good governance is about!
(1) Parlementair onderzoek naar ICT-projecten bij de overheid (See here the Endreport in Dutch)
Thursday 13 November 2014
Crisis in the world of Science
"In science it often happens that scientists say, "You know that's a really good argument; my position is mistaken," and then they would actually change their minds and you never hear that old view from them again. They really do it. It doesn't happen as often as it should, because scientists are human and change is sometimes painful. But it happens every day."
Carl Sagan (1987)
Making mistakes and accepting them is one of the reasons why we have made so much progress in science in the past decades. It is painful when it happens, but every time we learn from these scientific mistakes. That is what "scientific innovation" is all about. One of the mechanisms to discover mistakes and make sure that scientists stay focused is the concept of peer review. When you want to publish your results in a (well-known) scientific journal, you make sure it is read by knowledgeable peers first (if only to prevent being shamed when mistakes are discovered after publication). In other words, scientists purposely seek critique and want to be challenged. If you can withstand the pushback of your colleagues, your hypothesis is one step closer to being true.
However, over the last couple of years this "self-imposed" critique-seeking process has been crumbling, endangering progress in the scientific world. Before we try to find out why, let's go back to the summer of 2011. In that year the scientific world was shaken by the discovery of one of its largest fraud cases. Diederik Stapel, at that time still a professor of Social Psychology at Tilburg University, confessed to having falsified several data sets used in his studies. An extensive report investigated all of Stapel's 130 articles and 24 book chapters. According to the first findings, on the first batch of 20 publications, 12 were falsified and three contributions to books were also fraudulent. How was it possible that over all these years no one discovered or even suspected this? No co-authors, students, peers, or anyone else.
Many have argued that this was a unique case, but that remains to be seen. Of course the discovery of fraud on such a large scale is seldom seen, but several investigations have shown that "photoshopping" results is not uncommon in the scientific world. A study published back in 2004 in BMC Medical Research Methodology claimed that a high proportion of papers published in leading scientific journals contained statistical errors. Not all of these errors led to erroneous conclusions, but the authors found that some of them may have caused non-significant findings to be misrepresented as significant (1).
The different ways to manipulate results are a topic for another blog; here I'm more interested in how it is possible that so many "mistakes" are not seen during peer review. The answer is actually very simple: because peers don't read the articles. Today the primary focus of a scientist is to produce papers, not to review those of others. Have a look at the infographic shown here.
If you were to print out just the first page of every item indexed in Web of Science, the stack of paper would reach almost to the top of Mount Kilimanjaro. This graph also shows that only the top meter and a half would have received 1,000 citations or more (2).
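As a back-of-the-envelope check (the item count and paper thickness below are my own rough guesses, not figures from the source):

```python
# Roughly 50 million indexed items at ~0.1 mm per sheet stacks to about
# 5 km, which is indeed in the neighbourhood of Kilimanjaro's 5,895 m.
items = 50_000_000  # assumed number of items indexed in Web of Science
sheet_mm = 0.1      # assumed thickness of one sheet of paper
stack_m = items * sheet_mm / 1000
print(f"{stack_m:,.0f} m of paper vs. Kilimanjaro at 5,895 m")
```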
Research has become publication-driven. Universities compete for research money and students, and to reach their goals they have set productivity targets. The KPIs chosen encourage the production of many papers with high visibility (in order to reach the top of the pile and get noticed). Publication-driven KPIs promote calculating behavior: which topic brings me money or gets me students? Assessments are therefore based not on quality but on quantity (3).
Frits van Oostrom (President of the Royal Netherlands Academy of Arts and Sciences from 2005 to 2008) put it like this in 2007:
"Especially where the (in itself noble) principle to measure is to know entered into a covenant with the fear for substantive judgment, it has led to the glorification of the number, and preferably the large and growing one. And what is not countable, does not count. It leads to putting means (measurement) over goal (quality).
These are obviously very insidious mechanisms, with a high probability of perversion, as we all know. Because researchers must of course be productive, but none of us will propose that someone who produces thirty articles per per year is a better researcher or scholar than someone with three; or that the teacher who dutifully adheres to the study guide and passes 90% is a better teacher than the one who regularly improvises and rejects 30%. But monetizing, measuring and quantifying lead naturally to the dream of more so-called benefits and for less costs" (4). (see the full speech here - in Dutch)
I'm not claiming that the publication KPI is single-handedly responsible for a crisis in the scientific world. But many researchers are discouraged from reviewing their colleagues' work because they are rated on production, not on reading. Furthermore, it has always been very difficult to publish research that shows no significant effects (in itself an important finding, so researchers know what not to research in the future). The publication KPI is not helping in that respect either. Only noticeable papers can count on being published, so better to keep the NO SIGNIFICANT RESULT papers in the drawer and continue the search for "real" findings. And what happens if those results won't come quickly enough.....
Next time we'll have a look at KPIs in Government
(1) Emili Garcia-Berthou and Carles Alcaraz, Statistical Errors, BMC Medical Research Methodology 2004, 4:13
(2) see here for more details on the "paper mountain"
(3) This paragraph was based on a presentation by R. Abma (scholar of General Social Sciences at Utrecht University and author of De publicatiefabriek), given during the Skepsis Congres 2014.
(4) Translated from the Dutch original of Van Oostrom's 2007 speech.
Monday 10 November 2014
Best flow chart ever: management styles and KPIs
"When the top level guys look down they only see shitheads. When bottom level guys look up they only see assholes"
Chances are you have seen the cartoon before. The bird at the top looking satisfied, while the birds on the lower levels look more and more miserable. Does it feel familiar? Hopefully not, but chances are you do recognize the gist of it.
The cartoon triggered me to try to find the relation between certain types of managers and their use of KPIs. This raised the question of which different types of managers are distinguished in the literature.
The problem with that question, however, is that one could probably fill a large library with all the books written on the topic. And to make things worse, I personally have a problem with management books that talk about personality traits. They suggest that personality or behavior can be described by just a few labels. Of course that is not the case. All research on human behavior combined (e.g. neurology, biology, psychology, sociology) has not yet found definite, unique, separate and unambiguous personality types. But being skeptical has its practical limits, so in the end I settled on this short presentation that summarizes the management styles most encountered:
- Autocratic style
- Bureaucratic style
- Democratic style
- Laissez-faire style
Autocratic
For this type of manager, KPIs give a sense of control over the situation and their people. They probably don't have many KPIs, but they make sure that the ones they have are known by all employees involved. Thresholds are set strictly and not much flexibility is allowed. They will make sure that what got them here will get them there (again). "Green" traffic lights are expected, and an "Amber" status is already frowned upon (an explanation is expected, to say the least). Their adage is very simple: KPIs are met. Always.
Bureaucratic
Last week's blog was about the United Nations and their use of KPIs for the Millennium Goals. Too many KPIs, too long a horizon, vague goals and diffuse responsibility. KPIs are used because "everybody uses them". KPIs are not the result of a creative process in which all stakeholders were involved; they are merely copies of the ones that were used in the past. People don't really believe in the KPI concept, as everybody can point at each other once things go wrong. In other words: KPIs are just another thing you are supposed to have as a manager.
Democratic
Everybody is involved in the creation of the KPI set in a democratic organization. And by everybody I mean really everybody. All input is gathered and taken into account when the KPI is created. The creation process is more important than the end result. KPIs are not very specific (because everybody has to recognize their own input in the outcome). Thresholds are set, but so high that it is difficult to reach the "Amber" or "Red" status. The KPIs are not fixed and changes are made frequently. Everybody is allowed to put the current KPIs up for debate.
Laissez-faire
Why measure stuff if you can rely on the intrinsic motivation of your employees? Make sure that you emphasize the responsibility of the individual, and the sum will be greater than the individual parts. No KPIs are needed, and if they are there it's more for the outside world than for internal use. Progress is measured just by looking at the results, and employees are expected to give a signal when things go wrong.
Next time: Crisis in the world of Science
Saturday 1 November 2014
What we can learn from the UN Millennium Goals
More than 14 years ago, the United Nations Millennium Declaration was signed by the leaders of 189 countries. They committed themselves to 8 goals to be accomplished by 2015. For each of the eight Millennium Goals, several KPIs were set to measure progress and success (see this link for a full list).
This is what US President Barack Obama had to say in 2010 about the progress at that time.
"Nor can anyone deny the progress that has been made toward achieving certain Millennium Development Goals. The doors of education have been opened to tens of millions of children, boys and girls. New cases of HIV/AIDS and malaria and tuberculosis are down. Access to clean drinking water is up. Around the world, hundreds of millions of people have been lifted from extreme poverty. That is all for the good, and it’s a testimony to the extraordinary work that’s been done both within countries and by the international community.
Yet we must also face the fact that progress towards other goals that were set has not come nearly fast enough. Not for the hundreds of thousands of women who lose their lives every year simply giving birth. Not for the millions of children who die from the agony of malnutrition. Not for the nearly one billion people who endure the misery of chronic hunger.
This is the reality we must face -- that if the international community just keeps doing the same things the same way, we may make some modest progress here and there, but we will miss many development goals. That is the truth. With 10 years down and just five years before our development targets come due, we must do better."
(see here for the full transcript)
Now, with just one year to go, it doesn't look much better. Don't get me wrong: I really do think the work that has been done is extremely important. The United Nations might not be the perfect institution, but it is, I think, the only way to get so many countries committed to doing something. All eight goals address serious issues that deserve our attention and deserve to be solved.
My focus here lies on the things we can learn not only from the goals set, but also from the KPIs chosen to measure their success.
Too many Goals
It is difficult to choose which of the eight goals is most important. Fighting poverty? Universal primary education? Reducing child mortality? Improving maternal health? You name it. Even if it is difficult, or almost impossible, it is wise to choose a maximum of three. More is not only too ambitious, it is also a recipe for disaster. One cannot focus on eight different goals, and it will take a tremendous amount of money to make all eight of them successful. Think of the task the UN faces in explaining why so many goals failed, and how difficult it will be to find some 200 countries willing to participate again.
Was ROI taken into account?
When you set these kinds of goals and this many countries commit to them, you can be sure that money will be spent. But these goals look like they were chosen just because they sounded good. The question is whether a proper cost-benefit analysis was done. Of course things will change for the better; the question is whether they will also help in the long run. You want goals that help prevent having to set new goals in the future.
Too many KPIs
In total there are 60 KPIs to measure the success of the eight goals. The good thing is that concrete KPIs were set with concrete thresholds. The downside is that there are so many that it is just impossible to meet them all. If you have 60 targets, you really have none. With all the negative consequences that follow, the least of which is the erosion of credibility.
The KPIs are SMART, but therefore not easy to reach
The world is a complex place. Many factors come into play when it comes to social issues. Politics, religious conflicts, war, corruption, failing economies, etc. complicate accomplishing the things you want to reach.
Long Timeframe
Fifteen years is a long time. It is difficult for people (let alone countries) to stay focused for that long. If you read the rest of Barack Obama's speech, you will notice that most of it is directed at the richer countries, which seem to lose focus.
Diffuse responsibilities
Who is actually accountable for reaching the goals? 189 countries? A country is not someone you can hold accountable. And even if you did find a person in 2000 who committed him- or herself, he or she will probably not be around in 2015 to answer for it.
To summarize, I want to quote Bjorn Lomborg, who runs the Copenhagen Consensus Center. This center involves economists from all over the globe in thinking about the world's biggest issues. More importantly, they help select the issues with the highest ROI. This is what he had to say in an interview with Freakonomics Radio.
"There was actually no good cost and benefit analysis, it was just a number of targets that all sound really good. And generally I also think they really are very good. But now the U.N. is going to redo the targets from 2015 and fifteen years onwards. And this time, instead of having a very closed argument, it was basically a few guys around Kofi Annan who set out these targets back in 2000. And then everybody adopted them. This time they have said we want to hear everybody’s input. Of course that’s very laudable, but not surprisingly, it’s also meant that we probably have about 1,400 potential targets on the table. And so we need to make sure that we don’t just end up with a whole long list of Christmas trees as they call them in the U.N. jargon – you know, you just have everything and you wish for all good things, because they’re not likely to be as effective." (see here for full interview)
Hopefully next year the UN chooses and develops its KPIs more wisely. If done right, the promises could actually be fulfilled this time.
Sunday 26 October 2014
This is when KPIs fail
The effect of unexpected events can be devastating to the usability of KPIs. We've talked about it before, but now we'll zoom in a little more. In the end, KPIs are used to make sure you take the right action at the right time (that is, hopefully before disaster strikes). In theory the threshold is chosen wisely and the indicator shows you which way things are going, so you can take appropriate action when needed.
"In theory", because in practice many things can happen. Let's say there is a very special turkey living in US that read "How to measure anything", a bestseller on KPIs by Douglas Hubbard. The turkey defines a KPI that measures his general well being from day to day. This is what his dashboard would look like.
"In theory", because in practice many things can happen. Let's say there is a very special turkey living in US that read "How to measure anything", a bestseller on KPIs by Douglas Hubbard. The turkey defines a KPI that measures his general well being from day to day. This is what his dashboard would look like.
In his book "The Black Swan" Nassim Taleb uses this example in order to explain the effect of unexpected events. For those unfamiliar with the idea of Black Swans, here's a small list of elements that make an event a Black Swan (based on the criteria as stated by Taleb himself).
- The event is a surprise (to the observer).
- The event has a major effect.
- After the first recorded instance of the event, it is rationalized by hindsight, as if it could have been expected; that is, the relevant data were available but unaccounted for in risk mitigation programs. The same is true for the personal perception by individuals.
The thing with these Black Swans is that they can really mess up your business strategy (whose progress was being so nicely measured by KPIs). In banking, for example, a regulator can unexpectedly ask you to comply with new regulation after it has itself reacted to an unexpected financial Black Swan. Or your customer satisfaction KPI drops because a negative story about your company exploded on Twitter. Black Swans most often have a negative effect on your results (destroying the predictive power of your KPIs).
Taleb mentions several reasons why we tend to miss these kind of events.
1. We are terrible in predicting the future
2. If you think the likelihood of something happening is very low, it probably isn't
3. Don't ask the expert, he or she doesn't know either
4. The world is complex, don't think it isn't because you have a predictive model
5. Your intuition is very bad in statistics; don't let your strategy depend on it
6. Just believing a fact is true doesn't make it so (see also the previous blog on confirmation bias)
How to cope with Black Swans? One really effective way is doing a so-called pre-mortem. In this exercise you ask several people to imagine themselves in the future (e.g. one year from now). Now draft the future situation in which everything went terribly wrong. Whatever you tried to accomplish did not happen. Even worse, your work is perceived by the whole company as a complete disaster. The atmosphere among the people involved is terrible. Everybody is blaming each other and nobody is talking to each other (out of disappointment or anger). You even consider quitting your job because you can't cope with the shame every time you meet senior management.
Now ask everybody in the room to write down what went wrong. No pausing; just put whatever comes to mind on paper. Think of the most unexpected events that made it a disaster (remember the turkey!). Try to think beyond the "normal" things (e.g. death, viruses, financial crises, people getting fired, fights, etc.).
Collect all the output and write it down on a large poster. Share the poster with everybody in the organization. Make sure that from now on you start looking for signs that some event on the poster is happening.
Next time: what can we learn from the Millennium Goals?
Wednesday 22 October 2014
7 Cognitive Biases that influence the usage of KPIs
Lists apparently do well in blog titles. My blogs have been read a total of 865 times (not counting my own clicks). The last blog, "The five most overrated KPIs", broke a record and was viewed 60 times (I say viewed because of course I can't tell whether they were actually read). I don't count the number of views as a KPI, but just to be sure I present a list again today.
Remember one of my first blogs, "Why we (really) use KPIs"? I talked about the workings of our brains and the two systems that make them operate. There was the "automatic" pilot governing our behavior most of the time (making the brain the efficient and effective organ it is). But when we have a more difficult task to fulfill, the non-automatic system jumps in (using it is tiresome though).
The problem is that we tend to think that we make our decisions deliberately and after solid reasoning. Unfortunately the brain system that we use most takes shortcuts, is lazy, loves stereotypes, and likes to go on as soon as possible. It is the price we pay for the enormous task we set the brain to do.
Researchers have known for years that we make mistakes all the time without even noticing. Cognitive biases are the tendencies of our brain to make all sorts of mistakes. Mostly we are not aware of these biases. And nobody is immune to them. We cannot turn off our automatic pilot and therefore we cannot be completely bias free.
Here is a list of 7 cognitive biases that might diminish the effectiveness of KPIs*:
1. Cognitive Dissonance
We tend to ignore, ridicule or downplay information that conflicts with our beliefs or convictions. The stronger the belief, the stronger the effect. It takes a lot of effort to look objectively at conflicting information. This can affect the usage of KPIs on several levels, especially when "red flags" are ignored because we expect them to turn green again soon.
2. Risk Aversion
In general our decisions tend to be risk averse, especially if we have to make them in a split second. This effect has been studied thoroughly and is found in many social situations. With regard to KPIs this could result in choosing thresholds that trigger too easily (playing safe), resulting in many false positives; a small sketch of this effect follows below. But risk aversion can also lead to thresholds that are set too leniently, avoiding the risk of getting the status red (and the risk of tough discussions with your management).
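A hedged sketch, assuming a perfectly healthy (invented) metric that fluctuates around a target of 100 purely by chance, of how a "safe" threshold placement floods a KPI with false red flags:

import random

# A healthy, invented process: the metric fluctuates around its
# target of 100 purely by chance, so every red flag it raises
# is a false positive.
random.seed(42)
readings = [random.gauss(100.0, 5.0) for _ in range(250)]

# Thresholds set closer to normal operating levels feel "safer"
# but alarm far more often.
for threshold in (97.0, 95.0, 90.0):
    red_flags = sum(1 for r in readings if r < threshold)
    print(f"threshold {threshold}: {red_flags} false red flags out of {len(readings)}")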
3. Confidence Bias
Our fast and automatic brain system is not prone to doubt. It hates doubt and will construct a story that makes what it sees true and coherent. So even when a KPI is indicating an obviously wrong number, our over-enthusiastic brain will at first try to make it true. Only with effort are we able to see the error for what it really is.
4. Causation Bias
Our hasty brain sees patterns all around us (even when there are none). One of them is the causation pattern. When we see events that correlate, we tend to apply some causal thinking. In the past this has led to many wrong assumptions and mistakes. When creating KPIs this can lead (among other things) to selecting useless indicators, as it is wrongly assumed that they measure the underlying causal mechanism for performance.
5. Availability bias
Make a list of three situations in which you showed assertiveness. Next, evaluate how assertive you are. Of course you are biased answering the second question. The three situations might come easily and therefore give you the impression that you are assertive indeed (I'm not saying you're not). But the easier you can come up with a long list of something, the more it will affect your judgements later on. For KPIs this could mean that you choose the indicators that come to mind easily and assume they must be good simply because they came to mind so easily.
6. Anchoring
How many calories are there in a McDonald's Big Mac (7.6 oz)**? Just take a guess and formulate your answer before you read on.
Is your answer around 865 or maybe 60? Then you were the victim of Anchoring. I deliberately added the numbers 865 and 60 in the intro of this blog. These numbers tend to stick for a while and they influence decisions later on.
7. Hindsight Bias
Our brain has to deal with tons of information every minute (even when we are asleep). In order not to go completely insane, our brain starts from the default position that the world outside generally makes sense and that the information we receive is coherent and unambiguous. Because we think we know the past, we assume that we know the future. And with a great deal of "I knew it all along" attitude we arrogantly think we understand it all. We are in general ignorant of our own ignorance. KPIs are generally based on our past experience, and by applying them we think that we can control or predict future events.
Next time we'll zoom in on this last illusion by talking about the effect of unexpected events.
* The list and some of the text are drawn from the book Thinking, Fast and Slow by Daniel Kahneman.
**The number of calories in a Big Mac is about 550 (calorieking.com).
woensdag 1 oktober 2014
The five most overrated KPIs
There are KPIs for Accounting, Sustainability, Corporate Services, Finance, Governance, Compliance, Risk, Human Resources, Information Technology, Knowledge & Innovation, Management, Marketing & Communications, eCommerce, Project Management, Portfolio Management, Commerce, Production Management, Quality Management, Sales & Customer Service, Supply Chain, Procurement and Distribution; and, by industry, for Agriculture, Arts & Culture, Construction & Capital Works, Education & Training, Financial Institutions, Government, Local Government, Healthcare, Hospitality & Tourism, Infrastructure Operations, Manufacturing, Media, Non-profit / Non-governmental, Postal & Courier Services, Professional Services, Publishing, Real Estate / Property, Resources, Retail, Sport Management, Sports, Telecommunications / Call Center, Transportation, and Utilities*.
Of course this is a non-exhaustive list. There are many different types of KPIs, but fortunately many KPI experts have already made some choices for you as to which ones are the BEST. Just Google KPI and you'll find gurus telling you the TOP 5 KPIs everybody should use. That triggered me to list the TOP 5 most overrated KPIs. Here they are.
5. School grades
Schoolchildren are constantly assessed throughout the year by their teachers, and report cards are issued to parents at varying intervals. Generally the scores for individual assignments and tests are recorded for each student in a grade book, along with the maximum number of points for each assignment. In the US these scores are most often translated to a letter grade. In other countries (for instance in Europe) a 1-to-10 scale is used.
At the end of the year an average is usually calculated to give an indication of the child's average performance. It is all too easy to assume that aggregate or average marks give a reliable assessment of overall performance, or that the process is as objective as counting. Fortunately many teachers will tell parents this. They know that a grade is only an indication or a "photo" and says nothing about future performance. It is however difficult for parents to see these grades and not think that their kids are either "doomed" or "future professors". Especially in high school much depends on these grades (status within the group, development of future plans, possibilities for universities). The system is hard on children that bloom at a later age. The small sketch below shows how little an average tells you.
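A hedged illustration, with invented pupils and marks on the 1-to-10 scale mentioned above, of how two very different school years can collapse into exactly the same average:

from statistics import mean

# Two invented pupils on a 1-to-10 grading scale: one fades over
# the year, the other is a late bloomer. Their averages are identical.
grades = {
    "early peaker": [9, 8, 8, 7, 6, 5],
    "late bloomer": [5, 6, 7, 8, 8, 9],
}

for pupil, marks in grades.items():
    trend = marks[-1] - marks[0]
    print(f"{pupil}: average {mean(marks):.1f}, trend over the year {trend:+d}")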
4. Net Promoter Score
Would you recommend our company to a friend or colleague? That is the question many companies ask their customers on a regular basis. Why? Because the answer apparently tells you all about your customers' feelings towards your company or products. Customers answer on a scale from 0 to 10; those scoring 9 or 10 count as promoters, those scoring 6 or below as detractors, and the Net Promoter Score is the percentage of promoters minus the percentage of detractors. The resulting score is supposed to indicate whether there is a huge risk of losing customers or whether they are loyal. The NPS has become one of the most important drivers in the area of customer intimacy strategy.
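For reference, a minimal sketch of that arithmetic; the ten survey responses below are invented:

# Standard NPS arithmetic: percentage of promoters (scores 9-10)
# minus percentage of detractors (scores 0-6).
def net_promoter_score(responses):
    promoters = sum(1 for r in responses if r >= 9)
    detractors = sum(1 for r in responses if r <= 6)
    return 100.0 * (promoters - detractors) / len(responses)

sample = [10, 9, 9, 8, 8, 7, 6, 5, 3, 10]         # ten hypothetical customers
print(f"NPS: {net_promoter_score(sample):+.0f}")  # 4 promoters, 3 detractors -> +10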
You might remember the blog on the APGAR score that suggested making your KPI as stupid and simple as possible. The NPS is indeed simple and easy to understand. However, the performance it is trying to measure is far too complex to capture via this simple score, especially when it is used for complex strategic choices which in turn might affect future results. Furthermore, the score is most often based on what a sample of customers is saying. I won't go into detail, but many issues arise when sampling your customer base.
In a white paper called "The 'Net-Net' on the Net Promoter Score" the authors surmise that "the NPS approach is incomplete at best, and potentially misleading at worst. It is unwise to rely solely on one survey item (likelihood to recommend) to establish customer loyalty strategies. While [the creators of NPS] provide sound advice on some aspects of customer loyalty measurement and management, he seriously overstates the case for relying on that 'one number' to grow a business."
3. Key Risk Indicators
When people are asked to give three examples of the most disruptive innovations of the last decades, they come up with the computer, the Internet, or the mobile phone. All these innovations had a huge impact, but all were unpredicted, unplanned, and their impact was underestimated at the time.
The same goes for most manifestations of risk. When we try to predict risks we use risk models to predict the likelihood of occurrence and the impact the particular risk will have if it actually manifests itself. Unfortunately, the truly impactful events of the past decades were most often not predicted, and if someone was lucky enough to have mentioned them, their impact was underestimated at the time. Even while we were in the midst of them, experts did not recognize the impact. Consider for example the latest project you were involved in (it could be any type of project). Of course things went wrong; they always do. Would you have been able to predict them upfront? In other words: the majority of (impactful) risks come from outside the predictive models.
2. Employee Satisfaction
People can be satisfied with their jobs for several reasons. Asking for these reasons is a valid and useful thing to do. However, using the results to measure performance is risky, especially when questionnaires are used. Even if the survey is anonymous, employees might not wish to reveal the information, or they might think that they will not benefit from responding (perhaps even fearing they will be penalised for giving their real opinion). Even if employees fill in the survey honestly, you are still measuring individual opinions and not really their behaviour.
It is already difficult enough to objectively understand and know your own intentions and motivations, let alone answering questions about them from someone who is paying your salary. Furthermore an anonymous survey is likely to reveal warts and all. Management should be prepared for discovering that the top down view can differ from the bottom up view.
1. Stock price
The overall idea is that the stock price of a certain company tells you something about how that company is doing. This is because it is assumed that investors digest all possible information about a company and that this will be reflected in the price. This is what is known as the Efficient Market Hypothesis (EMH). The EMH assumes that all investors perceive all available information in precisely the same manner. However, this is of course not the case. Furthermore, it is impossible to say what information is already incorporated into the price. Not all information is available, and the most impactful events are difficult to predict and therefore a surprise for everybody (see also the Key Risk Indicators paragraph).
Investopedia.com summarized it as follows: "Companies live and die by their stock price, yet for the most part they don't actively participate in trading their shares within the market. If performance of its stock is ignored, the life of the company and its management may be threatened with adverse consequences, such as the unhappiness of individual investors and future difficulties in raising capital."