Friday, October 28, 2011

Is it a lag or a lead indicator?

Lag and lead indicators both have their place, but ultimately it's more important to build lead indicators. So what's the difference? Lag indicators tell you 'after the fact' whether your risk treatments have been effective.
Lag versus lead KPIs
By way of example, consider Lost Time Injury Frequency (LTIF) data. LTIF is important to track but ultimately only tells you what has happened in the past. That's good to know, but if your safety initiatives aren't working (or are making things worse), you'll only find out after you've injured several more people than you might have otherwise. By which time, of course, you're already in the hurt locker and no wiser about how to fix things. You can probably think of (and are probably using) any number of lag indicators, but here are a few very common ones: quarterly profit statements, house fire data, motor vehicle accident rates, cancer cases, return on equity... and the list goes on.

Far better to look at lead indicators, which tell you whether things are working and give you some information about the future. In the safety example, you might like to track the level of hazard reporting, or incidents of non-compliance with safety procedures. If hazard reporting is increasing, and/or non-compliance is dropping, then (other factors being equal) it's likely that you will see a reduction in lost time injuries. Some other examples of lead indicators might include speeding tickets, hours per year of driver training, forward sales, blood sugar levels, etc.
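To make the lag/lead distinction concrete, here is a minimal sketch in Python. It computes LTIF using the common per-million-hours convention (conventions vary by jurisdiction) alongside a simple lead indicator; all the figures are invented for illustration.

```python
# Minimal sketch: a lag indicator (LTIF) alongside a simple lead
# indicator (hazard reports per 100 employees). Figures are
# illustrative only; the per-million-hours convention for LTIF is
# common but varies by jurisdiction.

def ltif(lost_time_injuries: int, hours_worked: float) -> float:
    """Lag indicator: lost time injuries per million hours worked."""
    return lost_time_injuries / hours_worked * 1_000_000

def hazard_report_rate(reports: int, employees: int) -> float:
    """Lead indicator: hazard reports per 100 employees."""
    return reports / employees * 100

print(ltif(3, 450_000))             # ~6.7 injuries per million hours
print(hazard_report_rate(42, 180))  # ~23.3 reports per 100 employees
```

The lag number tells you what last quarter cost you; the lead number gives you something you can act on before the next injury.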

The difference between lag and lead indicators is crucial, yet sadly too often overlooked. You might have an organizational objective of making a profit (not unusual in modern business), but if you only track profit, by the time you discover you've had an unprofitable year it's way too late to do anything about it. Some lead indicators that might help would include:
  • customer satisfaction
  • employee turnover
  • gross margins on new contracts
  • number of contracts won
  • risk exposures
The trick of course is to define which ones are relevant to you - and then refine that number downward. As a general rule, the fewer KPIs you're tracking the better. You might be able to identify 100 KPIs that are relevant, but if one single KPI can give you the information you need, then stick with that one - but by now, I'm hoping you'll be a fan of making it a lead KPI.
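One cheap first filter for whittling 100 candidate KPIs down toward one is to check how strongly each candidate correlates with the lagging outcome you ultimately care about. Correlation is no proof of predictive power, but it quickly exposes candidates that carry no signal at all. The sketch below is hypothetical; every series in it is invented, and it assumes Python 3.10+ for statistics.correlation.

```python
# Hypothetical sketch: rank candidate lead KPIs by how strongly their
# quarterly values correlate with next quarter's profit (the lag
# outcome). All series are invented for illustration.
from statistics import correlation  # Python 3.10+

profit_next_quarter = [1.2, 0.8, 1.5, 0.9, 1.7, 1.1]  # $M, lagged one quarter

candidate_kpis = {
    "customer_satisfaction": [7.1, 6.5, 7.8, 6.8, 8.0, 7.0],
    "employee_turnover":     [12.0, 15.0, 9.0, 14.0, 8.0, 13.0],
    "contracts_won":         [4.0, 3.0, 6.0, 3.0, 7.0, 4.0],
}

# Rank by absolute correlation: if you can only track one KPI, the
# strongest single predictor is the natural candidate.
ranked = sorted(candidate_kpis.items(),
                key=lambda kv: abs(correlation(kv[1], profit_next_quarter)),
                reverse=True)
for name, series in ranked:
    print(f"{name}: r = {correlation(series, profit_next_quarter):+.2f}")
```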

Wednesday, October 26, 2011

Performance Benchmarking and Attribution Bias

One of the challenges in measuring the effectiveness of risk management (or any type of management system, for that matter) is a little glitch in human perception known as attribution bias. Attribution bias is simply our tendency to invent explanations and to attribute events to a particular cause (whether real or imagined). These attributions help us make sense of the world and give us reasons for a particular event.

For example, let's say that Bill gets sacked from his job. He will attribute his sacking to his boss being an asshole and tell himself that he is better off not working for that company anyway. He can thus make sense of his misfortune without having to accept any responsibility. The fact that Bill has been sacked from a series of jobs for poor performance can be easily overlooked. Bill goes even further and attributes his series of bad bosses to bad luck - despite the fact that he is the only common denominator each time. Without these attributional explanations, Bill would have to face the embarrassing and uncomfortable conclusion that his own performance at work has been the cause of his sacking(s).

Psychologists describe attributional biases as a class of cognitive errors which are triggered when people evaluate the dispositions or qualities of others based on incomplete evidence. The key element here is that biases are 'errors' that are hardwired into us. There are many good reasons for us to have a range of biases. Attribution bias for example, allows us to maintain self-esteem and a sense of purpose without having to face our own behavior. When it comes to risk management benchmarking however, attribution bias is simply another way to introduce errors. 

Attribution bias and risk management performance benchmarking
When XYZ Corporation reviews its annual safety statistics and finds that Lost Time Injuries (LTIs) have fallen, managers attribute the fall to 'greater management focus' on safety, or to the introduction of the 'new safety induction scheme'. Maybe they are right - or maybe they were just lucky last year and the drop in LTIs is due to sheer random chance. Or maybe the drop in LTIs is due to some completely different factor.
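Before crediting the induction scheme, it's worth asking how plausible the drop is as plain luck. One simple, hedged check: if injuries arrive more or less at random, annual counts follow a Poisson distribution, and you can ask how often a year this good would occur by chance alone. The numbers below are invented for illustration.

```python
# Hedged sketch: could the drop in LTIs be luck? If injuries occur
# roughly at random, annual counts are approximately Poisson
# distributed. All numbers are invented for illustration.
from math import exp, factorial

def poisson_prob_at_most(k: int, mean: float) -> float:
    """P(X <= k) for a Poisson random variable with the given mean."""
    return sum(mean**i * exp(-mean) / factorial(i) for i in range(k + 1))

historical_mean = 12.0  # average LTIs per year over prior years
this_year = 7           # count after the 'new safety induction scheme'

p = poisson_prob_at_most(this_year, historical_mean)
print(f"P(7 or fewer LTIs by chance alone) = {p:.3f}")  # ~0.090
# Roughly one year in eleven would look this good by luck alone, so the
# drop, on its own, is weak evidence that the scheme caused it.
```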

As an example of a completely different factor: when crime in the United States started to decline in 1992, in complete contrast to the projections of criminologists, many politicians were quick to attribute the drop to their 'zero tolerance' policies, extra police, and so on. Steven Levitt of the University of Chicago and John Donohue of Yale University, meanwhile, suggested that the drop was due to a completely different and unexpected cause.

They suggest that the cause was the landmark legal case of Roe v. Wade, in which the United States Supreme Court controversially legalized abortion. Levitt and Donohue argue persuasively, with detailed statistical analysis, that the resulting drop in the number of unwanted children born after legalization in 1973 led to a reduction in crime 18 years later. The years from 18 to 25 would have been the peak crime-committing years of those children, and their absence resulted in lower crime. For many years, criminologists and politicians alike took credit for various risk management treatments as the cause of this drop in crime. It probably led to a lot of wasted resources and unwarranted bonuses, or at least congratulatory back-slapping.

The inference however is clear - attribution bias can easily have you wasting your precious risk management resources. At the very least you'll end up with risk management benchmarking based on the wrong performance indicators, and at the very worst, you can make things worse.

Monday, October 24, 2011

One thing in common...

The Fukushima Daiichi nuclear disaster, the Greek currency crisis, AIG's bailout and the US debt crisis all have something in common. Each occurred in a highly regulated industry sector with a sophisticated risk management program. What they have in common, however, goes further than that - those risk management systems were manifestly inadequate.

The volatility of recent years has rightfully called into question the effectiveness of current risk management processes. Investors, regulators and the public are all seeking greater visibility into the positive and the negative impacts of risk management initiatives. It's no longer enough to build a robust risk management system - ISO31000 and any number of other risk frameworks offer us insight into how to do that - the challenge is to tell whether your risk management system is actually effective. Is it delivering the outcomes that it was designed to deliver, and are those outcomes supporting organizational objectives?
Figure 1: Linking risk management to organizational objectives

Every year we spend billions of dollars on risk management initiatives – most without any subsequent assessment of their effectiveness. How do you tell if the million dollars spent last year on risk mitigation actually delivered benefit and/or reduced risk? How do we tell how an organization is performing against its peers in terms of ROI on risk mitigations?
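There is no single agreed formula, but one common back-of-envelope approach (borrowed from return-on-security-investment calculations) compares annualized loss expectancy before and after the treatment. The sketch below is illustrative only; every figure in it is an assumption.

```python
# Hypothetical sketch: estimating ROI on a risk mitigation by comparing
# annualized loss expectancy (ALE = annual probability x impact) before
# and after the treatment. All figures are invented for illustration.

def ale(annual_probability: float, impact: float) -> float:
    """Annualized loss expectancy: expected loss per year from one risk."""
    return annual_probability * impact

before = ale(0.10, 20_000_000)  # 10%/yr chance of a $20M loss -> $2.0M/yr
after = ale(0.04, 20_000_000)   # mitigation cuts likelihood to 4% -> $0.8M/yr
cost = 1_000_000                # the million dollars spent on mitigation

roi = (before - after - cost) / cost
print(f"Expected annual risk reduction: ${before - after:,.0f}")  # $1,200,000
print(f"ROI on mitigation: {roi:.0%}")                            # 20%
```

The point is not the precision of the numbers (they are guesses) but that the calculation forces you to state, and later test, the assumptions behind the spend.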

According to Wikipedia:
"The late-2000s financial crisis (...Global Financial Crisis ...) is considered by many economists to be the worst financial crisis since the Great Depression of the 1930s. It resulted in the collapse of large financial institutions, the bailout of banks by national governments, and downturns in stock markets around the world. In many areas, the housing market had also suffered, resulting in numerous evictions, foreclosures and prolonged vacancies. It contributed to the failure of key businesses, declines in consumer wealth estimated in the trillions of U.S. dollars, and a significant decline in economic activity, leading to a severe global economic recession in 2008."
It's easy with hindsight to see the downstream impacts of poor risk management practices, but just how do we tell with foresight? That's for another article, but you'll find the first clue in Figure 1 above.

Sunday, October 23, 2011

About this blog... (aka. Why benchmark?)

The volatility of recent years has rightfully called into question the effectiveness of current risk management processes. Investors, regulators, managers and the general public are all seeking greater accountability for both the positive and the negative impacts of risk on future organizational and societal performance. The downstream impacts of the Japanese tsunami on the nuclear power industry, the Greek currency crisis, the volatility of global financial markets and the rise of China and India are all examples of issues that demand the very best risk management decisions we can possibly make. Every year, we spend billions of dollars on risk management initiatives - most of that without any subsequent assessment of the effectiveness of those risk treatments.
Figure 1: The reasons why we bother with Risk Performance Benchmarking
Managers and leaders at all levels are accordingly becoming increasingly accountable to their stakeholders for delivering key outcomes in the most effective and efficient manner, and risk management is no different in this respect. Achieving objectives requires thoughtful and consistent optimization of resources and... an ongoing process of improvement. Key Performance Indicators and performance scorecards are indispensable tools in this quest to establish, measure and achieve the best possible performance.

The recent economic downturn has provided numerous examples of poor risk management performance, and the speed with which some major corporations met their demise has rocked the confidence of investors. In many cases, the public had little, if any, warning prior to the collapse of these organizations. Only when the impact of management mistakes had crystallized on the balance sheet (and when it was far too late to do anything) did the full magnitude of the problems become clear.

The release of the ISO31000 Risk Management Standard in 2009 heralded a new era of potential standardization which, among other things, can help organizations measure the quality of their risk management and make 'apples for apples' comparisons from year to year or across industry sectors. The first step, however, is to build a risk management performance scorecard, and unfortunately the ISO31000 standard is relatively silent on exactly how to do that.
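Since ISO31000 doesn't prescribe a format, here is one minimal, hypothetical sketch of what a scorecard entry might look like; the fields, targets and thresholds are illustrative assumptions, not a standard.

```python
# Minimal, hypothetical sketch of a risk performance scorecard entry.
# ISO31000 does not prescribe a format; the fields and targets below
# are illustrative assumptions only.
from dataclasses import dataclass

@dataclass
class ScorecardKPI:
    name: str
    kind: str                      # "lead" or "lag"
    target: float
    actual: float
    higher_is_better: bool = True

    def on_track(self) -> bool:
        if self.higher_is_better:
            return self.actual >= self.target
        return self.actual <= self.target

scorecard = [
    ScorecardKPI("Hazard reports per 100 employees", "lead", 20.0, 23.3),
    ScorecardKPI("LTIF (per million hours)", "lag", 5.0, 6.7,
                 higher_is_better=False),
]
for kpi in scorecard:
    status = "on track" if kpi.on_track() else "off track"
    print(f"{kpi.kind:>4} | {kpi.name}: {kpi.actual} vs {kpi.target} ({status})")
```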

This blog (the precursor to a book on the subject) addresses a number of methods, examples and guiding principles for developing effective KPIs and scorecards to support and optimize the management of risk. It builds on our combined 50 years of experience in building risk management performance across a range of industries. Between us (Julian Talbot and Miles Jakeman), we have built risk performance tools for resources companies in Africa, the aviation sector in Asia, the $30 billion Australian Department of Defence and a host of other organizations. The tips, tools and lessons from these experiences have been condensed into a single practical book on risk performance benchmarking.