Saturday, October 20, 2012

Why benchmark? And how?


Success in business relies on taking full advantage of resources and constantly striving to improve. Consequently, managers and leaders at all levels are increasingly accountable to their stakeholders for delivering key outcomes in a competent and diligent manner. Frameworks such as ISO 31000:2009 provide generic guidelines for the design, implementation, and maintenance of risk management processes throughout an organization. The snag is, they don’t tell us how to generate performance scorecards or develop Key Performance Indicators (KPIs), which are indispensable tools in the quest to establish, measure, and achieve the best possible performance within the business.

At last, our book, How to Performance Benchmark Your Risk Management, is out. It is designed as a practical guide to help you analyze the effectiveness of your risk management program. Applicable to all areas of management, university-level study, and technical disciplines, it addresses the following:

  • What is benchmarking?
  • Why should we benchmark?
  • What are Key Performance Indicators (KPIs), Critical Success Factors (CSFs), Key Result Areas (KRAs), etc.?
  • What is the causal path from Risk → Risk Treatment → CSF → KRA → KPI → Organizational objectives?
  • The difference between lag and lead indicators
  • How to measure performance using a variety of tools (e.g., Kirkpatrick’s four levels of training evaluation)
  • Methods for developing custom KPIs
  • Linking risk benchmarking to existing frameworks, such as Balanced Scorecards, that an organization may already have in place
  • Using word pictures to define performance

Between us, we have built risk performance tools for dozens of organizations, including resources companies in Africa, the aviation sector in Asia, and the $30 billion Australian Department of Defence. We have worked together on a range of client projects at Jakeman Business Solutions and are also lead authors of the Security Risk Management Body of Knowledge, which details the security risk management process in a format that can easily be applied by executive managers and security risk management practitioners. We can be contacted via www.jakeman.com.au, where you will also find some useful templates and short guides that can be downloaded for free.

Friday, January 20, 2012

How to link risks, KPIs and objectives...


Before measuring performance, it is necessary to identify and develop KPIs that accurately reflect desired performance. Otherwise, monitoring and assessing KPIs can be more of a hindrance than a help in achieving goals. Indeed, the HB80 Benchmarking Handbook suggests a number of reasons why organizations might need to monitor and assess their performance, including to:

  • Set performance goals 
  • Develop measures of productivity
  • Improve competitive advantage
  • Improve products, processes, service, or all of these 
  • Confirm performance against strategic plans
  • Identify new business opportunities

Other, equally important objectives may apply as well.


Ideally, the senior management team should select organizational KPIs as an aid to shaping and encouraging behaviors that support achievement of organizational objectives. One of the key challenges with performance measures, however, is to show a causal link between initiatives and performance. One way to think of KPIs is as follows:

  • Organizational objective: What results do we want to achieve?
  • Risk: What could adversely or positively impact the achievement of this objective?
  • Risk treatment: What do we propose to do to manage this risk?
  • Critical success factors (CSF): What has to occur for the risk treatment to be successful?
  • Key result area (KRA): Which areas will have the most significant impact on our risk treatment turning into organizational outcomes that we desire?
  • Key performance indicator (KPI): What data, statistics, or indicators will tell us if we are achieving or about to achieve our objective?

In practice, this might look something like this (with a sketch of how to record it after the list):

  • Corporate Objective #2: Maintain shareholder returns
  • Risk: Failure to protect sales margins because of an increase in raw materials prices as a result of global financial markets adversely affecting currency exchange rates
  • Treatment: Provide financial-analysis training to the sales team on interpreting the effect of currency fluctuations on cost of sales
  • CSF #5: Gross sales margins sustained
  • KRA #5: Profitability and margins
  • KPI #2: New contracts maintain 25 percent or greater gross margin
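To make the traceability concrete, here is a minimal sketch in Python. The field names and values are our own illustrative choices, not a standard schema, but they show how each link in the chain can be recorded so that any KPI can be traced back to the risk and objective it serves.

```python
from dataclasses import dataclass

# Illustrative sketch only: field names and values are assumptions, not a standard schema.
@dataclass
class KpiLink:
    objective: str  # What results do we want to achieve?
    risk: str       # What could impact achievement of the objective?
    treatment: str  # What do we propose to do to manage this risk?
    csf: str        # What has to occur for the treatment to be successful?
    kra: str        # Which area most affects the desired outcome?
    kpi: str        # What indicator tells us whether we are on track?

link = KpiLink(
    objective="Corporate Objective #2: Maintain shareholder returns",
    risk="Failure to protect sales margins due to adverse currency movements",
    treatment="Financial-analysis training for the sales team",
    csf="CSF #5: Gross sales margins sustained",
    kra="KRA #5: Profitability and margins",
    kpi="KPI #2: New contracts maintain >= 25% gross margin",
)

# Tracing a KPI back to its objective becomes a lookup rather than guesswork.
print(f"{link.kpi} supports '{link.objective}' via {link.csf}")
```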

Different business units will inevitably have different KPIs that reflect their focus; however, KPIs at all levels should support organizational KPIs and objectives. Equally, we cannot overstate the interconnectedness of KPIs, KRAs, risks, and objectives, but applying the Pareto principle (the 80:20 rule) should allow us to track only the more significant indicators.

Figure 1: Example of linking Risks, KPIs and Objectives

As you can see from Figure 1, even the aforementioned examples are highly simplified. Real life is far more complex: although we can draw a causal link between a risk, treatment, CSFs, KRAs, KPIs, and organizational performance, the elements that affect an organizational objective can be infinitely complex and interactive.

=====================

If you'd like to find out more about measuring the performance of your risk management, you might find our book on this topic helpful.


'How to Performance Benchmark Your Risk Management: A practical guide to help you tell if your risk management is effective' by Julian Talbot and Miles Jakeman PhD.




Thursday, December 15, 2011

Use Existing Reporting Frameworks


Wherever possible, KPIs should be aligned with organizational outcomes and the underlying management systems used to track and report them. Indeed, there are few cases where they could not be.

In the absence of an existing framework, however, the Balanced Scorecard (BSC) by Kaplan and Norton is a useful measurement framework. Robert S. Kaplan and David P. Norton introduced the BSC in 1992 to help determine whether the activities of an organization were meeting its objectives. It focuses not just on financial measures but also on human issues, and helps provide a comprehensive view of an organization’s workings in order to ensure that its activities align with its stated goals.

Not only is it ideal for our purposes, but there is also a well-established body of literature to guide risk management professionals in adapting and applying the concept to their environment. The graphic in the blog entry "One thing in common..." illustrates, at a simple level, how the Balanced Scorecard can be aligned with other KPIs and KRAs.

You can go into much more depth, however, by looking at the full breadth of organizational management systems. An enterprise-level scorecard should draw on as many systems as practicable to generate a detailed picture.

Figure 1: Example of linking risk management systems to a BSC Framework
At its simplest level, the BSC provides a framework of Learning & Growth, Internal Processes, Customer Satisfaction, and Financial metrics to identify, analyze, improve, and report on organizational performance. Part of the underlying principle is that appropriate learning, growth, and staff development support the correct application of internal processes, which in turn leads to happy customers and thus to financial success. It is, of course, a much richer framework than this simple overview suggests, and readers are encouraged to seek out the books and articles on this topic for more detailed information, particularly those by Kaplan & Norton.
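As a rough sketch of what that alignment might look like, the four perspectives below are Kaplan & Norton's, while the risk-related KPIs grouped under them are invented purely for illustration.

```python
# Minimal sketch: risk-related KPIs grouped under the four BSC perspectives.
# The perspective names are Kaplan & Norton's; the example KPIs are assumptions.
balanced_scorecard = {
    "Learning & Growth": ["% of staff completing risk training", "hours of driver training per year"],
    "Internal Processes": ["hazard reports per month", "audit findings closed on time"],
    "Customer Satisfaction": ["client survey risk-confidence score"],
    "Financial": ["insurance premium trend", "gross margin on new contracts"],
}

for perspective, kpis in balanced_scorecard.items():
    print(f"{perspective}: {', '.join(kpis)}")
```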

Tuesday, November 15, 2011

ALARP as a risk benchmarking target?

ALARP (As Low As Reasonably Practicable) is a fundamental concept in risk management. In essence, it represents the idea that we should mitigate threats (negative risks) down to the level where the expenditure of resources is balanced against the benefit. As the saying goes, a picture paints a thousand words, so this concept is probably best illustrated in Figure 1 below.

Figure 1: ALARP - the point where risk and resources achieve optimal balance
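To see why ALARP behaves more like a region than a point, here is a toy numerical sketch. The cost curves are entirely assumed (a diminishing-returns curve for residual risk cost), but they show how total cost is U-shaped and how a band of spend levels sits close to the minimum.

```python
# Toy sketch only: the cost curves below are invented for illustration.
# Residual risk cost falls as control spend rises; total cost is U-shaped,
# and the ALARP region sits around its minimum.

def residual_risk_cost(spend: float) -> float:
    return 1_000_000 / (1 + spend / 50_000)  # assumed diminishing-returns curve

def total_cost(spend: float) -> float:
    return spend + residual_risk_cost(spend)

spends = range(0, 500_001, 10_000)
best = min(spends, key=total_cost)
# Treat ALARP as a region: all spend levels within 5% of the minimum total cost.
region = [s for s in spends if total_cost(s) <= 1.05 * total_cost(best)]
print(f"Optimal spend ~ ${best:,}; ALARP region ~ ${region[0]:,} to ${region[-1]:,}")
```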
Understanding the concept and measuring it in practice are, however, two entirely different things. There are a number of challenges that can conspire against us in this mission, the main two being:

  • You need a consistent performance scorecard with at least two years of historical data. Unless there is some barometer that can give a quantitative or semi-quantitative level of risk, there is no way to measure risk levels objectively.
  • Much of what we spend on risk management is 'hidden' from view. It's relatively easy to quantify the cost of insurance, hedging, risk management training budgets, etc., but that is just the tip of the iceberg. A simple example would be an organization that relocates its office for security or safety reasons. It's easy enough to say that the organization spent $x on relocating and an extra $y per month in rent, but few organizations actually track this difference, much less allocate it to risk management.

If we want to understand when we are in the ALARP region (and it's more accurate to think of it as a 'region' rather than a point value), then the first step is to do the analysis required to build a performance scorecard. More on that later, but here are some SIMPLE questions to consider (with a sketch of how the answers might be captured after the list):

  • Scope - What is the scope of our scorecard? The whole organization? Or do we want to separate out each division, each geographic location or each separate facility? Maybe just the safety and health program, or perhaps Health, Safety, Environment and Security?
  • Indicators - What are the specific indicators that we care about?  Are we more interested in lag or lead indicators? Do we have enough information already and if not, what would we need to track?
  • Measures - How do we measure success? At what point do we reach the optimal resource/risk balance? Is there a specific number? If not, can we use client (internal or external) surveys to provide a semi-quantitative measure as a proxy for risk?
  • Performance - What long term performance are we seeking? How will our ALARP strategies and measures contribute to organizational performance?
  • Longitudinal data - What period of time do we want to consider? Will we measure monthly, quarterly, annually? What is the lifespan of our performance benchmarking tool? If we align it to our risk management framework, it will need to be revised when the framework is revised. Will we need to modify historical data to keep our long term trend information meaningful?
  • Excellence - How will we achieve excellence? How will we know when our benchmarking is excellent? Is it when the CEO or Board signs it off? Will we consider it excellent when it's working to plan? Or when external independent auditors sign it off? Equally, what do we need to do to achieve true excellence? How will we track hidden risk management costs? For example, will we create cost codes to track hidden risk treatment costs, such as the extra rent in the example above?
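One way to pin the answers down is to capture them in a simple specification. This is only a sketch: the field names mirror the SIMPLE headings above, and the example values are invented for illustration.

```python
from dataclasses import dataclass

# Sketch only: field names mirror the SIMPLE questions above; values are examples.
@dataclass
class ScorecardSpec:
    scope: str                      # Scope: what does the scorecard cover?
    indicators: list[str]           # Indicators: what will we track?
    measures: dict[str, str]        # Measures: how is success defined per indicator?
    performance_goal: str           # Performance: the long-term outcome sought
    review_period: str              # Longitudinal data: how often do we measure?
    excellence_criteria: list[str]  # Excellence: who signs off, and against what?

spec = ScorecardSpec(
    scope="Health, Safety, Environment and Security program",
    indicators=["hazard reports per month", "hidden risk-treatment costs"],
    measures={"hazard reports per month": "trending upward quarter on quarter"},
    performance_goal="Demonstrate operation within the ALARP region",
    review_period="quarterly",
    excellence_criteria=["Board sign-off", "external audit clearance"],
)
print(f"{spec.scope}: reviewed {spec.review_period}")
```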

As you can probably see, SIMPLE doesn't mean simple, but let's start with the concept of ALARP as being important to our risk performance benchmarking. If we're really serious, we also need to think about how to measure the AHARP concept (As High As Reasonably Practicable) for positive risks (aka opportunity realization). More on AHARP at this link, but that's an idea for another day.

Friday, November 11, 2011

Three fundamentals of KPIs

Measuring performance seems complex (and it is) but let's not lose sight of the basics. The trick with key performance indicators (KPIs) comes down to three questions:
  • What is KEY? What are the long term objectives of the enterprise?
  • What is PERFORMANCE? What results do you aim for, what is success, what is failure, what are the acceptable ranges? 
  • What are INDICATORS? What metrics define desired performance, at reasonable cost? 
KEY
What do you really care about? You could apply this question to your personal preferences in life, but what we are talking about here is what is important to your organization. Organizations don't have a single identity, so you're often trying to amalgamate a variety of disparate views, values, and opinions. Certainly, you can be guided by policy statements, organizational objectives, and strategy documents, and they are probably the first place to start looking for answers. In the end, however, someone needs to be accountable for defining what is actually 'important'. Depending on the scope of your study, this could be the CEO, the budget holder, the Chairman of the Board, or simply the line manager responsible for that specific area.

Here are a few questions that might guide your investigations when establishing what is 'key':

  • What is the scope of this KPI?  The whole organization? A small project? My workgroup?
  • What couldn't the organization survive without? You could take a 'red team' approach to this by getting a team together to ask "How would I attack or cause this organization to fail?"
  • What are we trying to achieve here? That is, what are the objectives of this organization/group/project?


PERFORMANCE
As the adage goes, "what gets measured gets managed". It's essential to have a simple and clearly defined view of what performance means. That means you either need to ask binary questions (e.g., is our gross profit greater or less than 10% of turnover?) or measure performance quantitatively (e.g., what is our net profit?).

In some cases, you might also choose to use semi-quantitative questions. These aren't quite as good as pure quantitative data but can at least provide some sort of ordinal measure (e.g., is the organization improving its score each year?). A simple example of a semi-quantitative score can be found in the figure below, which converts some simple 'word pictures' into an ordinal scale. One limitation of this scale is that it's primarily subjective. That doesn't mean it isn't useful, however. If you can think of 10 or more areas to measure, and are rigorous in developing the criteria for each of the four grades, you'll end up with a fairly good idea of where your weaknesses are and whether the organization's performance is improving over time.
Figure 1: Example of a Semi-Quantitative Scale
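As a small sketch of the idea (the grade labels and assessment areas here are invented for illustration), word-picture grades can be mapped to an ordinal score and totalled so that the weakest areas surface first.

```python
# Sketch only: grade labels and assessment areas are invented for illustration.
# Each of the four grades maps to an ordinal score; summing across many areas
# gives a rough but repeatable picture of where the weaknesses lie.
GRADES = {"Poor": 1, "Fair": 2, "Good": 3, "Excellent": 4}

assessments = {
    "Hazard reporting culture": "Good",
    "Incident investigation": "Fair",
    "Management review": "Excellent",
    "Contractor induction": "Poor",
}

scores = {area: GRADES[grade] for area, grade in assessments.items()}
total, maximum = sum(scores.values()), 4 * len(scores)
print(f"Overall score: {total}/{maximum}")
for area, score in sorted(scores.items(), key=lambda kv: kv[1]):
    print(f"  {score} - {area}")  # lowest-scoring (weakest) areas print first
```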

Here are a few questions that might guide your investigations when establishing how to measure 'performance':

  • What is the timeframe that I'm interested in?
  • What would we consider as catastrophically bad?
  • What would we like to be able to report to the Board/boss/shareholders? 
  • What data would we need?
  • What information/data/statistics do we already have?
  • How much would it cost to get ALL the data that we'd need and how would we go about acquiring it?



INDICATORS

What are the metrics that can help us measure our performance? This question, above all others, is entirely dependent on context. An insurance company might have a wealth of data about its clients' risks, but if it is developing KPIs for financial performance, much of that data will be of little help.

Here are a few questions that might help with developing 'indicators':

  • What data do we already have?
  • What information would we like that we don't already have?
  • How much would it cost to get all the data that we'd need and how would we go about acquiring it?
  • Are there industry standards or legislated reporting requirements that we can piggyback on?
  • What compliance framework do we already have?
  • What systems (accounting, safety, incident reporting, sales, etc) do we have in place? How could I modify or integrate those systems to get a better result?
  • Which are lead and which are lag indicators?


LAST BUT NOT LEAST...
Keep at it. Ask lots of people for help and input, and go with the simple questions. What's important to you? How would you measure it? What information would help you understand your business? Once you get to these answers (which is every bit as hard as it sounds), you have the exact set of KPIs your organization needs. More is waste; less is mismanagement.

Friday, October 28, 2011

Is it a lag or a lead indicator?

Lag and lead indicators both have their place, but ultimately it's more important to build lead indicators. So what's the difference? Lag indicators tell you 'after the fact' whether your risk treatments have been effective.
Lag versus lead KPIs
By way of example, consider Lost Time Injury Frequency (LTIF) data. It is important to track but ultimately only tells you what has happened in the past. That's good to know, but if your safety initiatives aren't working (or are making things worse), you'll only find out after you've injured several more people than you might have otherwise. By which time, of course, you're already in the hurt locker and no wiser about how to fix things. You can probably think of (and are probably using) any number of lag indicators, but here are a few very common ones: quarterly profit statements, house fire data, motor vehicle accident rates, cancer cases, return on equity... and the list goes on.

Far better to look at lead indicators, which tell you whether things are working and can give you some information about the future. In the safety example, you might track the level of hazard reporting, or incidents of non-compliance with safety procedures. If hazard reporting is increasing, and/or non-compliance is dropping, then (other factors being equal) it's likely that you will see a reduction in lost time injuries. Some other examples of lead indicators include speeding tickets, hours per year of driver training, forward sales, blood sugar levels, etc.
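One rough way to test whether a candidate lead indicator actually leads is to shift its series forward and correlate it with the lag indicator some months later. This is only a sketch, and the monthly figures below are fabricated purely for illustration.

```python
from statistics import correlation  # Python 3.10+

# Invented monthly figures: hazard reports (candidate lead indicator) and
# lost time injuries (lag indicator) over the same 12 months.
hazard_reports = [5, 8, 9, 12, 15, 14, 18, 20, 22, 21, 25, 27]
lost_time_injuries = [6, 6, 5, 5, 5, 4, 4, 3, 3, 3, 2, 2]

def lead_correlation(lead: list[float], lag: list[float], offset: int) -> float:
    """Correlate the lead series with the lag series `offset` months later."""
    return correlation(lead[:-offset], lag[offset:])

for offset in (1, 2, 3):
    r = lead_correlation(hazard_reports, lost_time_injuries, offset)
    print(f"offset {offset} months: r = {r:+.2f}")
```

A consistently strong negative r at positive offsets would support treating hazard reporting as a lead indicator for injuries - correlation, of course, not proof of causation.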

The difference between lag and lead indicators is crucial, yet it is sadly too often overlooked. You might have an organizational objective of making a profit (not unusual in modern business), but if you only track profit, it can be far too late to do anything about an unprofitable year by the time it shows up in the numbers. Some lead indicators that might help include:
  • customer satisfaction
  • employee turnover
  • gross margins on new contracts
  • number of contracts won
  • risk exposures
The trick, of course, is to define which ones are relevant to you - and then refine that number downward. As a general rule, the fewer KPIs you're tracking, the better. You might be able to identify 100 relevant KPIs, but if one single KPI can give you the information you need, then stick with that one - and by now, I'm hoping you'll be a fan of making it a lead KPI.

Wednesday, October 26, 2011

Performance Benchmarking and Attribution Bias

One of the challenges in measuring the effectiveness of risk management (or any type of management system, for that matter) is a little glitch in human perception known as attribution bias. Attribution bias is simply our tendency to invent explanations and to attribute events to a particular cause (whether real or imagined). These attributions help us make sense of the world and give us reasons for a particular event.

For example, let’s say that Bill gets sacked from his job. He attributes his sacking to his boss being an asshole and decides that he is better off not working for that company anyway. He can thus make sense of his misfortune without having to accept any responsibility. The fact that Bill has been sacked from a series of jobs for poor performance can be easily overlooked. Bill goes even further and attributes his series of bad bosses to bad luck - despite the fact that he is the only common denominator each time. Without these attributional explanations, Bill would be very embarrassed and discomforted to accept that his own performance at work has been the cause of his sacking(s).

Psychologists describe attributional biases as a class of cognitive errors which are triggered when people evaluate the dispositions or qualities of others based on incomplete evidence. The key element here is that biases are 'errors' that are hardwired into us. There are many good reasons for us to have a range of biases. Attribution bias for example, allows us to maintain self-esteem and a sense of purpose without having to face our own behavior. When it comes to risk management benchmarking however, attribution bias is simply another way to introduce errors. 

Attribution bias and risk management performance benchmarking
When XYZ Corporation reviews its annual safety statistics and finds that Lost Time Injuries (LTIs) have fallen, it attributes this to 'greater management focus' on safety, or to the introduction of the 'new safety induction scheme'. Maybe it's right - or maybe the company was just lucky last year and the drop in LTIs is down to sheer random chance. Or maybe the drop in LTIs is due to some completely different factor.

For example, when crime in the United States started to decline in 1992, in complete contrast to the projections of criminologists, many politicians were quick to attribute the drop to their 'zero tolerance' policies, extra police, and so on. Steven Levitt of the University of Chicago and John Donohue of Yale University, meanwhile, suggest that the drop was due to a completely different and unexpected cause.

They suggest that the cause was the landmark legal case of Roe v. Wade, in which the United States Supreme Court controversially legalized abortion. Levitt and Donohue argue persuasively, with detailed statistical analysis, that the reduction in unwanted children following legalization in 1973 led to a reduction in crime 18 years later. The years from 18 to 25 would have been the peak crime-committing years of those children, hence the lower crime rates. For many years, criminologists and politicians alike took credit for various risk management treatments as the cause of this drop in crime. It probably led to a lot of wasted resources and unwarranted bonuses, or at least congratulatory back-slapping.

The inference, however, is clear: attribution bias can easily have you wasting your precious risk management resources. At the very least, you'll end up with risk management benchmarking based on the wrong performance indicators; at worst, you can actually make things worse.