Friday, 22 August 2014

Step 4: Raising the bar and avoiding pitfalls


Every large company has a department that deals with internal and external fraud cases. Depending on the business, the type of customers, and the size of the company, there can be many or just a few cases to investigate each month. Companies gather all sorts of data relating to fraud: the number of (reported, investigated, and solved) cases, the money (potentially) lost and recovered, and the number of customers involved (both as fraudsters and as victims). From a management perspective each and every one of these cases is special, but for an investigator a case can be one of many, even "routine". The fact that people can have different perspectives on the impact of a single case is important when building KPIs.

Let's look at an example. You work at the fraud department of a large insurance company. Your job is to report the number of fraud cases per month to senior management and to indicate whether the increase or decrease should result in "immediate action" (red), "monitor closely" (amber), or "business as usual" (green). At some point you report a small increase for three months in a row. There is no clear reason for the increase, and the cases are no different in modus operandi from those in the past. Because of this you cannot say whether it is a trend or a coincidence. What RAG status will you choose? From the investigation department's point of view it is "business as usual". But from a senior management perspective that is the least expected status while fraud cases are increasing. If you report green (as you probably should), you owe senior management a good explanation; from experience I know it is very hard to explain that even something like fraud can be "business as usual". If you report amber (playing it safe), you only postpone the discussion to next month (what will you do if the number increases again by a few cases?). And as soon as you report red (as management might expect), you know for certain that the next question will be: what actions will you take? The problem is that you will not be able to indicate any mitigating actions, because you have not identified a proper cause for the increase.
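The trend-or-coincidence doubt can be made concrete with a small simulation. The numbers below (roughly 20 cases per month, random noise, a 12-month window) are purely illustrative assumptions, not figures from the example:

```python
import random

def has_three_rising_months(series):
    """True if the series contains three consecutive month-on-month increases."""
    rises = [b > a for a, b in zip(series, series[1:])]
    return any(rises[i] and rises[i + 1] and rises[i + 2]
               for i in range(len(rises) - 2))

def chance_of_streak(months=12, trials=10_000, seed=42):
    """Fraction of purely random, trend-free series that still show a
    three-month rising streak somewhere in the window."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        # Hypothetical stable process: ~20 cases/month plus noise, no trend.
        series = [20 + rng.gauss(0, 3) for _ in range(months)]
        if has_three_rising_months(series):
            hits += 1
    return hits / trials
```

With these illustrative numbers, a noticeable fraction of purely random series still contains such a streak, which is exactly why three rising months alone cannot settle the trend-versus-coincidence question.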

KPIs are meant to keep you awake and alert. They should make sure that you take appropriate action whenever they indicate that performance is declining. This leads to a crucial question you have to answer: when is the appropriate time? You don't want to be in panic mode every day, but on the other hand you don't want to miss important signs either. If you set the bar too high, you will be lulled to sleep; if you set it too low, you will never reach a "business as usual" status. Red-Amber-Green colouring is the most common way of indicating the status of an indicator. In later blogs we will discuss what I would call "Green Field Management", where all (good and bad) measures are taken to stay at a green status (including altering the threshold or ignoring/including certain outliers). But for now we will focus on the pitfalls in choosing the thresholds in the first place.
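The three-colour logic itself is simple to express. A minimal sketch, assuming a KPI where higher values are worse; the two threshold values in the usage example are hypothetical:

```python
def rag_status(value, amber_threshold, red_threshold):
    """Map a KPI value to a RAG status using two thresholds.

    Assumes higher values are worse; the thresholds themselves must come
    from the KPI design, which is exactly where the pitfalls below lurk.
    """
    if value >= red_threshold:
        return "red"    # immediate action
    if value >= amber_threshold:
        return "amber"  # monitor closely
    return "green"      # business as usual
```

For example, with hypothetical thresholds of 15 and 25 cases, `rag_status(12, 15, 25)` returns `"green"`. The code is trivial; choosing 15 and 25 is the hard part.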

Pitfall 1: Predicting is hard, especially the future
It is good to realise that setting thresholds above or below which certain behaviour is expected is like predicting the future. When you start off with your fresh KPI and begin collecting data points, you don't know where the statistics will lead over time. You might have an idea where they ideally should lead, but that is not to say they will. That makes setting thresholds especially difficult.

Pitfall 2: No thresholds are set upfront
Because of pitfall 1, this happens more often than not. Because it is very hard to determine the RAG thresholds upfront, it is done as soon as new data comes along (as in the fraud example). This can lead to numerous problems, such as opportunistic and ad-hoc decisions. On the other hand, don't be too rigid. You should set your thresholds upfront, but make sure they are not carved in stone. If you do change them, it should be only after close consideration of all the data and the goals involved, and the change should be well documented.

Pitfall 3: What goes up, must go down
Every series of increases is inevitably followed (at some point) by a decrease. You have to consider upfront what "fluctuation" you will tolerate before setting a threshold.
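One way to make the tolerated fluctuation explicit is to flag a change only when it leaves a tolerance band around the previous value. A minimal sketch; the 10% band is an illustrative assumption, not a recommendation:

```python
def exceeds_tolerance(previous, current, tolerance=0.10):
    """Flag a month-on-month change only when it moves more than
    `tolerance` (here an assumed 10%) away from the previous value;
    smaller moves count as normal fluctuation."""
    if previous == 0:
        return current != 0
    return abs(current - previous) / abs(previous) > tolerance
```

With this rule, a move from 100 to 105 cases is treated as noise, while a move from 100 to 120 is flagged for a closer look.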

Pitfall 4: Regression to the mean
This phenomenon follows directly from pitfall 3. In statistics, regression toward (or to) the mean is the phenomenon that if a variable is extreme on its first measurement, it will tend to be closer to the average on its second measurement—and, paradoxically, if it is extreme on its second measurement, it will tend to have been closer to the average on its first. To avoid making incorrect inferences, regression toward the mean must be considered when interpreting data (Wikipedia). For a KPI it is therefore important to know what the baseline (or average) is, and whether movement towards this baseline is "good" or "bad".
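The effect is easy to reproduce with a small simulation. The levels and noise below are illustrative assumptions: each "month" has a true underlying level, and the two measurements each add independent noise on top of it:

```python
import random

def regression_to_mean_demo(trials=50_000, seed=1):
    """Simulate two noisy measurements of the same underlying level and
    compare the second measurement for cases whose first was extreme."""
    rng = random.Random(seed)
    firsts, seconds = [], []
    for _ in range(trials):
        true_level = rng.gauss(100, 10)      # assumed underlying case level
        m1 = true_level + rng.gauss(0, 10)   # first measurement (noisy)
        m2 = true_level + rng.gauss(0, 10)   # second measurement (noisy)
        firsts.append(m1)
        seconds.append(m2)
    # Look only at trials whose FIRST measurement was extreme (top 10%).
    cutoff = sorted(firsts)[int(0.9 * trials)]
    extreme = [(m1, m2) for m1, m2 in zip(firsts, seconds) if m1 >= cutoff]
    avg_first = sum(m1 for m1, _ in extreme) / len(extreme)
    avg_second = sum(m2 for _, m2 in extreme) / len(extreme)
    return avg_first, avg_second
```

For the trials with an extreme first measurement, the second measurement is on average closer to the overall mean of 100, even though nothing about the underlying process changed. A KPI that picks its baseline from an extreme month will "improve" for exactly this reason.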

Pitfall 5: (again) what got you here, won’t get you there
I could also have called this the copy-paste pitfall. Over and over again, companies copy old thresholds into new ones, reasoning that "we have been using them for years". Or KPIs (including their thresholds) are copied because everybody in the industry is using them.

Bernard Marr, a leading expert on KPIs, put it like this: "A lot of companies fall into the trap of thinking they can just use their existing metrics and “retro fit” their objectives around them. This is dangerous for many reasons and will often leave you without real insight into the things that matter the most. Success involves effort and you should be willing to spend time thinking carefully about what information you need, where to find it, how to gather it and why it will be of benefit".

Next time the final step: Implementing the KPI
