Thursday, December 15, 2011

Use Existing Reporting Frameworks


Wherever possible, KPIs should be aligned with organizational outcomes and the underlying management systems used to track and report them. Indeed, there are few cases where they would not be.

In the absence of an existing framework, however, the Balanced Scorecard (BSC) developed by Kaplan and Norton is a useful measurement framework. Robert S. Kaplan and David P. Norton introduced the BSC in 1992 to help determine whether an organization's activities were meeting its objectives. It focuses not just on financial measures but also on the human issues, and helps provide a comprehensive view of how an organization is working in order to ensure alignment of its activities with the achievement of stated goals.

Not only is it ideal for our purposes, but there is also a well-established body of literature to guide risk management professionals in adapting and applying the concept to their environment. The graphic in the blog entry "One thing in common..." illustrates, at a simple level, how the Balanced Scorecard can be aligned with other KPIs and KRAs.

You can take this much further, however, by looking at the full breadth of organizational management systems. An enterprise-level scorecard should draw on as many of these systems as possible to generate a detailed picture.
Figure 1: Example of linking risk management systems to a BSC Framework
At its simplest level, BSC provides a framework of Learning & Growth, Internal Processes, Customer Satisfaction and Financial metrics to identify, analyze, improve and report on organizational performance.  Part of the underlying principle is that appropriate learning, growth and staff development will support the correct application of internal processes, which in turn leads to happy customers and thus financial success.  It is of course a much richer framework than this simple overview provides, but readers are encouraged to seek out the books and articles on this topic for more detailed information, particularly those by Kaplan & Norton.
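
As a minimal sketch of how risk-related KPIs might be grouped under the four BSC perspectives, the following Python example shows one possible structure. All KPI names, targets and figures are invented for illustration; this is not a prescribed schema.

```python
# Illustrative sketch only: hypothetical risk-related KPIs grouped under the
# four Balanced Scorecard perspectives. All names and figures are invented.

bsc_scorecard = {
    "Learning & Growth": [
        {"kpi": "Staff completed risk training (%)", "target": 95, "actual": 88, "higher_is_better": True},
    ],
    "Internal Processes": [
        {"kpi": "Risk assessments reviewed on schedule (%)", "target": 100, "actual": 92, "higher_is_better": True},
    ],
    "Customer": [
        {"kpi": "Client satisfaction (1-5)", "target": 4.0, "actual": 4.2, "higher_is_better": True},
    ],
    "Financial": [
        {"kpi": "Insurance claims cost vs budget (%)", "target": 100, "actual": 110, "higher_is_better": False},
    ],
}

def report(scorecard):
    """Print each KPI with a simple on/off-target flag."""
    for perspective, kpis in scorecard.items():
        print(perspective)
        for k in kpis:
            met = k["actual"] >= k["target"] if k["higher_is_better"] else k["actual"] <= k["target"]
            status = "on target" if met else "off target"
            print(f"  {k['kpi']}: {k['actual']} vs target {k['target']} ({status})")

report(bsc_scorecard)
```

Even a structure this basic makes the learning-to-financial chain described above visible in one view, which is really all the BSC asks of us at the reporting level.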

Tuesday, November 15, 2011

ALARP as a risk benchmarking target?

ALARP (As Low As Reasonably Practicable) is a fundamental concept in risk management. In essence, it represents the idea that we should mitigate threats (negative risks) down to the level where the expenditure of further resources is balanced against the benefit gained.  As the saying goes, a picture paints a thousand words, so this concept is probably best illustrated in Figure 1 below.

Figure 1: ALARP - the point where risk and resources achieve optimal balance
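
To put some purely illustrative numbers on the balance shown in Figure 1, the sketch below assumes a hypothetical curve where residual risk cost falls as mitigation spend rises. The ALARP region sits roughly where total cost (risk plus treatment) stops falling in any meaningful way; the dollar figures and the decay relationship are assumptions, not real data.

```python
# Purely illustrative sketch of the ALARP balance. The cost figures and the
# exponential-decay risk model are assumptions, not real data.

mitigation_spend = [0, 100, 200, 300, 400, 500, 600]  # $k per year on treatments

def expected_risk_cost(spend):
    """Hypothetical residual risk cost: each $100k of treatment roughly
    halves the expected annual loss (an assumed relationship)."""
    return 800 * (0.5 ** (spend / 100))  # $k per year

for spend in mitigation_spend:
    risk = expected_risk_cost(spend)
    total = spend + risk
    print(f"spend {spend:>4} | residual risk {risk:7.1f} | total {total:7.1f}")

# In this made-up example, total cost bottoms out around $200-300k of spend;
# spending more buys very little further risk reduction - the ALARP region.
```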
Understanding the concept and measuring it in practice are, however, two entirely different things.  There are a number of challenges that can conspire against us in this mission, the main two being:

  • You need a consistent performance scorecard with at least 2 years of historical data. Unless there is some barometer that can give a quantitative or semi-quantitative level of risk, there is no way to objectively measure risk levels.
  • Much of what we spend on risk management is 'hidden' from view. It's relatively easy to quantify the cost of insurance, hedging, risk management training budgets etc., but that is just the tip of the iceberg. A simple example would be an organization that relocates its office for security or safety reasons. It's easy enough to say that the organization spent $x on relocating and an extra $y/month in rent, but few organizations actually track this difference, much less allocate it to risk management.

If we want to understand when we are in the ALARP region (and it's more accurate to think of it as a 'region' rather than a point value), then the first step is to do the analysis required to build a performance scorecard. More on that later, but here are some SIMPLE questions to consider (a sketch of one possible scorecard record follows the list):

  • Scope - What is the scope of our scorecard? The whole organization? Or do we want to separate out each division, each geographic location or each separate facility? Maybe just the safety and health program, or perhaps Health, Safety, Environment and Security?
  • Indicators - What are the specific indicators that we care about?  Are we more interested in lag or lead indicators? Do we have enough information already and if not, what would we need to track?
  • Measures - How do we measure success? At what point do we reach the optimal resource/risk balance? Is there a specific number? If not, can we use client (internal or external) surveys to provide a semi-quantitative measure as a proxy for risk?
  • Performance - What long term performance are we seeking? How will our ALARP strategies and measures contribute to organizational performance?
  • Longitudinal data - What period of time do we want to consider? Will we measure monthly, quarterly, annually? What is the lifespan of our performance benchmarking tool? If we align it to our risk management framework, it will need to be revised when the framework is revised. Will we need to modify historical data to keep our long term trend information meaningful?
  • Excellence - How will we achieve excellence? How will we know when our benchmarking is excellent? Is it when the CEO or Board signs it off? Will we consider it excellent when it's working to plan? Or when external independent auditors sign it off? Equally, what do we need to do to achieve true excellence? How will we track hidden risk management costs? Eg: Will we create cost codes to track the hidden risk treatment costs such as the extra cost of rent as in the example above?
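
One way to capture the answers to these SIMPLE questions is as a record per indicator, so that scope, measure, target and review period are explicit from the outset. The field names and values below are illustrative assumptions only, not a prescribed format.

```python
# Illustrative only: one possible way to record scorecard entries so that
# scope, indicator, measure, performance target and review period are explicit.
from dataclasses import dataclass

@dataclass
class ScorecardEntry:
    scope: str           # e.g. whole organization, a division, a facility
    indicator: str       # the specific thing being tracked
    indicator_type: str  # "lead" or "lag"
    measure: str         # how success is measured (number, survey score, etc.)
    target: float        # the performance level we are aiming for
    review_period: str   # monthly, quarterly, annually

example = ScorecardEntry(
    scope="Head office HSE program",           # hypothetical
    indicator="Hazard reports submitted",       # hypothetical
    indicator_type="lead",
    measure="Reports per 100 staff per month",
    target=5.0,
    review_period="monthly",
)
print(example)
```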

As you can probably see, SIMPLE doesn't mean simple, but let's start with the concept of ALARP as being important to our risk performance benchmarking. If we're really serious, we also need to think about how to measure the AHARP concept (As High As Reasonably Practicable) for positive risks (aka opportunity realization). More on AHARP at this link, but that's an idea for another day.

Friday, November 11, 2011

Three fundamentals of KPIs

Measuring performance seems complex (and it is) but let's not lose sight of the basics. The trick with key performance indicators (KPIs) comes down to three questions:
  • What is KEY? What are the long term objectives of the enterprise?
  • What is PERFORMANCE? What results do you aim for, what is success, what is failure, what are the acceptable ranges? 
  • What are INDICATORS? What metrics define desired performance, at reasonable cost? 
KEY
What do you really care about? You could apply this question to your personal preferences in life, but what we are talking about here is what is important to your organization.  Organizations don't have a single identity, so often you're trying to amalgamate a variety of disparate views, values and opinions. Certainly, you can be guided by policy statements, organizational objectives and strategy documents, and they are probably the first place to start looking for answers. In the end, however, someone needs to be accountable for defining what is actually 'important'.  Depending on the scope of your study, this could be the CEO, the budget holder, the Chairman of the Board or simply the line manager responsible for that specific area.

Here are a few questions that might guide your investigations when establishing what is 'key':

  • What is the scope of this KPI?  The whole organization? A small project? My workgroup?
  • What couldn't the organization survive without? You could take a 'red team' approach to this by getting a team together to ask "How would I attack or cause this organization to fail?"
  • What are we trying to achieve here? Ie. What are the objectives of this organization/group/project?


PERFORMANCE
As the adage goes, "what gets measured gets managed".  It's essential to have a simple and clearly defined view of what performance means.  That means you either need to ask binary questions (Eg: Is our gross profit greater or less than 10% of turnover?) or measure it quantitatively (Eg: What is our net profit?).

In some cases, you might also choose to use some semi-quantitative questions. These aren't quite as good as pure quantitative data but can at least provide some sort of ordinal measure (Eg: Is the organization improving its score each year?).  A simple example of a semi-quantitative score can be found in the figure below, which converts some simple 'word pictures' into an ordinal scale. One of the limitations of this scale is that it's primarily subjective. That doesn't mean that it's not useful, however. If you can think of 10 or more areas to measure, and are rigorous in developing the criteria for each of the four grades, you'll end up with a fairly good idea of where your weaknesses are and whether or not the organization's performance is improving over time.
Figure 1: Example of a Semi-Quantitative Scale
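
As a rough sketch of how word pictures like those in Figure 1 might be converted into an ordinal score and tracked across areas, consider the Python example below. The grade labels, assessed areas and scores are invented for illustration; the point is only that a consistent, repeatable conversion lets you compare year on year.

```python
# Illustrative sketch: converting word-picture grades into an ordinal scale
# and summing across assessed areas. Grade labels and areas are assumptions.

GRADES = {"Poor": 1, "Fair": 2, "Good": 3, "Excellent": 4}

assessments_2011 = {            # hypothetical assessed areas
    "Incident reporting": "Fair",
    "Contractor management": "Good",
    "Emergency preparedness": "Poor",
    "Training and induction": "Good",
}

def ordinal_score(assessments):
    """Total ordinal score; higher is better, maximum = 4 x number of areas."""
    return sum(GRADES[grade] for grade in assessments.values())

score = ordinal_score(assessments_2011)
max_score = 4 * len(assessments_2011)
print(f"2011 score: {score}/{max_score}")
# Repeating the same assessment each year gives a (subjective but consistent)
# view of whether performance is improving over time.
```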

Here are a few questions that might guide your investigations when establishing how to measure 'performance':

  • What is the timeframe that I'm interested in?
  • What would we consider as catastrophically bad?
  • What would we like to be able to report to the Board/boss/shareholders? 
  • What data would we need?
  • What information/data/statistics do we already have?
  • How much would it cost to get ALL the data that we'd need and how would we go about acquiring it?



INDICATORS

What are the metrics that can help us measure our performance? This question, above all others, is entirely dependent on context. An insurance company might have a wealth of data about its clients' risks, but if it is developing KPIs for its own financial performance, that data may be of limited use.

Here are a few questions that might help with developing 'Indicators':

  • What data do we already have?
  • What information would we like that we don't already have?
  • How much would it cost to get all the data that we'd need and how would we go about acquiring it?
  • Are there industry standards or legislated reporting requirements that we can piggyback on?
  • What compliance framework do we already have?
  • What systems (accounting, safety, incident reporting, sales, etc) do we have in place? How could I modify or integrate those systems to get a better result?
  • Which are lead and which are lag indicators?


LAST BUT NOT LEAST...
Keep at it. Ask lots of people for help and input, and go with the simple questions. What's important to you? How would you measure it? What information would help you understand your business? Once you can get to these answers (which is every bit as hard as it sounds), you have the exact set of KPIs your organization needs. More is waste, less is mismanagement.

Friday, October 28, 2011

Is it a lag or a lead indicator?

Lag and lead indicators both have their place, but ultimately it's more important to build lead indicators. So what's the difference?  Lag indicators tell you 'after the fact' whether your risk treatments have been effective.
Lag versus lead KPIs
By way of example, consider Lost Time Injury Frequency (LTIF) data. It is important to track but ultimately only tells you what has happened in the past. That's good to know, but if your safety initiatives aren't working (or are making things worse), you'll only find out after you've injured several more people than you might have otherwise. By which time, of course, you're already in the hurt locker and no wiser about how to fix things.  You can probably think of (and are probably using) any number of lag indicators, but here are a few very common ones: quarterly profit statements, house fire data, motor vehicle accident rates, cancer cases, return on equity... and the list goes on.

Far better to look at lead indicators that tell you if things are working and can give you some information about the future. In the safety example, you might like to track the level of hazard reporting, or incidents of non-compliance with safety procedures. If hazard reporting is increasing, and/or non-compliance is dropping, then (other factors being equal) it's likely that you will see a reduction in lost time injuries.  Some other examples of lead indicators might include speeding tickets, hours per year of driver training, forward sales, blood sugar levels, etc.

The difference between lag and lead indicators is crucial, yet sadly too often overlooked. You might have an organizational objective of making a profit (not unusual in modern business), but if you only track profit, it can be way too late to do anything about an unprofitable year by the time it shows up. Some lead indicators that might help would include:
  • customer satisfaction
  • employee turnover
  • gross margins on new contracts
  • number of contracts won
  • risk exposures
The trick of course is to define which ones are relevant to you - and then refine that number downward. As a general rule, the fewer KPIs you're tracking the better. You might be able to identify 100 KPIs that are relevant, but if one single KPI can give you the information you need, then stick with that one - but by now, I'm hoping you'll be a fan of making it a lead KPI.
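
As a rough illustration of how you might sanity-check whether a candidate lead indicator actually leads a lag indicator, the sketch below correlates hazard reporting in one quarter with lost time injuries in the following quarter. The figures are invented, and correlation is of course no proof of causation; it simply tells you whether the lead indicator is worth keeping on the scorecard.

```python
# Illustrative only: checking whether a candidate lead indicator (hazard
# reports) moves ahead of a lag indicator (lost time injuries). Data invented.

hazard_reports = [12, 18, 25, 31, 40, 44]   # per quarter (lead candidate)
lost_time_injuries = [9, 8, 8, 6, 5, 3]     # per quarter (lag indicator)

def pearson(x, y):
    """Simple Pearson correlation coefficient."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

# Compare this quarter's hazard reporting with NEXT quarter's injuries.
lagged = pearson(hazard_reports[:-1], lost_time_injuries[1:])
print(f"Correlation (hazard reports vs next-quarter LTIs): {lagged:.2f}")
# A strong negative value would support treating hazard reporting as a
# lead indicator for injuries in this (hypothetical) organization.
```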

Wednesday, October 26, 2011

Performance Benchmarking and Attribution Bias

One of the challenges for measuring the effectiveness of risk management (or any type of management system for that matter) is a little glitch in human perception known as attribution bias.   Attribution bias is simply our tendency to invent explanations and to attribute things to a particular cause (whether real or imagined).  These attributions serve to help us understand the world and give us reasons for a particular event. 

For example, let’s say that Bill gets sacked from his job. He will attribute his sacking to his boss being an asshole and decide that he is better off not working for that company anyway. He can thus make sense of his misfortune without having to accept any responsibility. The fact that Bill has been sacked from a series of jobs for poor performance can be easily overlooked.  Bill goes even further and attributes having a series of bad bosses to bad luck - despite the fact that he is the only common denominator each time. Without the attributional explanations, Bill would be very embarrassed and uncomfortable accepting that his performance at work has been the cause of his sacking(s).

Psychologists describe attributional biases as a class of cognitive errors which are triggered when people evaluate the dispositions or qualities of others based on incomplete evidence. The key element here is that biases are 'errors' that are hardwired into us. There are many good reasons for us to have a range of biases. Attribution bias for example, allows us to maintain self-esteem and a sense of purpose without having to face our own behavior. When it comes to risk management benchmarking however, attribution bias is simply another way to introduce errors. 

Attribution bias and risk management performance benchmarking
When XYZ corporation reviews the annual safety statistics and finds that Lost Time Injuries (LTIs) have fallen, they attribute this to 'greater management focus' on safety, or the introduction of the 'new safety induction scheme'. Maybe they are right - or maybe they were just lucky last year and the drop in LTIs is due to sheer random luck.  Or maybe the drop in LTIs is due to some completely different factor. 

For example, when crime in the United States started to decline in 1992, in complete contrast to the projections of criminologists, many politicians were quick to attribute this drop to their 'zero tolerance' policies, extra police, etc.  Steven Levitt of the University of Chicago and John Donohue of Yale University, meanwhile, suggest that the drop was due to a completely different and unexpected cause.

They suggest that the cause was the landmark legal case of Roe v. Wade, in which the United States Supreme Court controversially legalized abortion. Levitt and Donohue argue persuasively, with detailed statistical analysis, that the absence of unwanted children following legalization in 1973 led to a reduction in crime 18 years later.  The years from 18 to 25 would have been the peak crime-committing years of those children had they been born, hence the lower crime rate.  For many years criminologists and politicians alike took credit for various risk management treatments as being the cause of this drop in crime. It probably led to a lot of wasted resources and unwarranted bonuses, or at least congratulatory back-slapping.

The inference however is clear - attribution bias can easily have you wasting your precious risk management resources. At the very least you'll end up with risk management benchmarking based on the wrong performance indicators, and at the very worst, you can make things worse.






Monday, October 24, 2011

One thing in common...

The Fukushima Daiichi nuclear disaster, the Greek currency crisis, AIG’s bailout and the US debt crisis all have something in common.  They all occurred in highly regulated industry sectors with sophisticated risk management programs. What they have in common however goes further than that - those risk management systems were manifestly inadequate.

The volatility of recent years has rightfully called into question the effectiveness of current risk management processes. Investors, regulators and the public are all seeking greater visibility into the positive and the negative impacts of risk management initiatives. It’s no longer enough to build a robust risk management system - ISO31000 and any number of other risk frameworks offer us insight into how to do that - the challenge is to tell if your risk management system is actually effective. Is it delivering the outcomes that it was designed to deliver, and are those outcomes supporting organizational objectives?
Figure 1: Linking risk management to organizational objectives

Every year we spend billions of dollars on risk management initiatives – most without any subsequent assessment of their effectiveness. How do you tell if the million dollars spent last year on risk mitigation actually delivered benefit and/or reduced risk? How do we tell how an organization is performing against its peers in terms of ROI on risk mitigations?

According to Wikipedia:
"The late-2000s financial crisis (...Global Financial Crisis ...) is considered by many economists to be the worst financial crisis since the Great Depression of the 1930s. It resulted in the collapse of large financial institutions, the bailout of banks by national governments, and downturns in stock markets around the world. In many areas, the housing market had also suffered, resulting in numerous evictions, foreclosures and prolonged vacancies. It contributed to the failure of key businesses, declines in consumer wealth estimated in the trillions of U.S. dollars, and a significant decline in economic activity, leading to a severe global economic recession in 2008."
It's easy with hindsight to see the downstream impacts of poor risk management practices but just how do we tell with foresight?  That's for another article, but you'll find the first clue in Figure 1 above.




Sunday, October 23, 2011

About this blog... (aka. Why benchmark?)

The volatility of recent years has rightfully called into question the effectiveness of current risk management processes. Investors, regulators, managers and the general public are all seeking greater accountability for both the positive and the negative impacts of risk on future organizational and societal performance. The downstream impacts of the Japanese tsunami on the nuclear power industry, the Greek currency crisis, the volatility of global financial markets and the rise of China/India are all examples of issues that require nothing less than the very best risk management decisions we can possibly make.  Every year, we spend billions of dollars on risk management initiatives – most of that without any subsequent assessment of the effectiveness of those risk treatments.
Figure 1: The reasons why we bother with Risk Performance Benchmarking
Managers and leaders at all levels are accordingly becoming increasingly accountable to their stakeholders for the delivery of key outcomes in the most effective and efficient manner, and risk management is no different in this respect.  Being successful in achieving objectives requires thoughtful and consistent optimization of resources and... an ongoing process of improvement.  Key Performance Indicators and performance scorecards are indispensable tools in this quest to establish, measure and achieve the best possible performance.

The recent economic downturn has provided numerous examples of poor risk management performance, and the speed with which some major corporations met their demise has rocked the confidence of investors. In many cases, the public had little, if any, warning prior to the collapse of these organizations. Only when the impact of management mistakes had crystallized on the balance sheet (and when it was far too late to do anything) did the full magnitude of the problems become clear.

The release of the ISO31000 Risk Management Standard in 2009 heralded a new era of potential standardization which, among other things, can help organizations measure the quality of their risk management and do ‘apples for apples’ comparisons from year to year or across industry sectors.  The first step, however, is to build a risk management performance scorecard and unfortunately, the ISO31000 standard is relatively silent on exactly how to do that.

This blog (the precursor to a book on the subject) addresses a number of methods, examples and guiding principles for developing effective KPIs and scorecards to support and optimize the management of risk. It builds on our combined 50 years of experience in building risk management performance in a range of industries. Between us (Julian Talbot and Miles Jakeman), we have built risk performance tools for resources companies in Africa, the aviation sector in Asia, the $30 billion Australian Department of Defence and a host of other organizations. The tips, tools and lessons from these experiences have been condensed into a single practical book on risk performance benchmarking.