Having identified a range of risks, we now need to consider which are the most serious, in order to decide where to focus our attention and resources. We need to understand both their relative priority and their absolute significance.
People are not generally good at analysing risk. We tend to take decisions swayed by our emotional response to a situation rather than an objective assessment of relative risk. Given half a chance most of us will believe what we want to believe and selectively filter out information that does not support our case.
We are similarly bad at looking at probability in a holistic way. People tend to focus on risks that have occurred recently, even though a risk that last occurred some time ago may have happened just as often over the last five years and so deserves the same probability rating.
We must nonetheless accept that most of the risk analysis done in our environment will be of a qualitative nature. Few of us have the skills, time or resources to undertake the kind of quantitative modelling that goes on in major projects in the commercial sector. This section aims to show that by taking a disciplined and structured approach it is possible to improve the objectivity of your analysis without getting into complex calculations or needing specialist software tools.
We have already said that it pays to involve a range of people in the identification and analysis of risk. Each will, of course, bring their own bias to the analysis, but if you understand your organisation and stakeholders it ought to be possible to separate the valuable experience from the personal agendas, e.g. ‘Fred always emphasises hardware security as he’s been in three colleges that have suffered major break-ins’ versus ‘Fred always emphasises hardware security as he’s been after funding to move the machine room for the last five years’.
One technique that is sometimes used to keep politics out of this type of discussion is the Delphi Technique. Using this technique, opinions are gathered anonymously and then cross-checked with a range of experts. The experts simply examine the data presented rather than dealing with the personalities involved.
Delphi technique - gathering opinions anonymously
The Delphi technique has been defined by Linstone and Turoff as:
…a method for structuring a group communication process so that the process is effective in allowing a group of individuals as a whole to deal with a complex problem.
Linstone, Harold A. and Murray Turoff (eds) (1975) The Delphi Method: Techniques and Applications, Addison-Wesley, p. 3
The technique was originally developed for the US Department of Defense during the early part of the Cold War, in the 1950s, when the Rand Corporation was charged with finding a way to establish a reliable consensus of opinion among a group of experts about potential Soviet military attacks.
Half a century later the technique is still widely used, but usually in much more peaceful endeavours. The underlying rationale continues to be:
to establish, as objectively as possible, a consensus on a complex problem in circumstances where accurate information does not exist or is impossible to obtain economically, or where inputs to conventional decision making (for example by a committee meeting face to face) are so subjective that they risk drowning out individuals’ critical judgements.
The approach tends to be a group of techniques rather than a single procedure. Typically it involves an expert panel and a number of information-gathering rounds, each of which ends with an analysis of the results that is fed back into subsequent rounds. Individuals are given the opportunity to revise their judgements in the light of this feedback. The number of iterations can vary (the more rounds, the closer the consensus is likely to be), as can the size of the panel – from a handful to several hundred participants.
The process has the following features: anonymity (to a greater or lesser extent, depending on how the exercise is structured), iteration, controlled feedback and statistical aggregation of the group response.
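The statistical aggregation step can be sketched in a few lines of Python. This is only an illustration of the idea: the round structure, function name and example figures are assumptions, not part of any formal definition of the technique.

```python
from statistics import median

def aggregate_round(estimates):
    """Summarise one anonymous round: the panel sees only these
    aggregate figures, never who gave which estimate."""
    ordered = sorted(estimates)
    return {"median": median(ordered), "low": ordered[0], "high": ordered[-1]}

# Round 1: five experts estimate the probability (%) of a risk occurring.
round_1 = aggregate_round([10, 25, 30, 40, 80])
# Round 2: having seen the round 1 summary, the outlier revises downwards
# and the spread narrows towards a consensus.
round_2 = aggregate_round([20, 25, 30, 35, 40])
```

Feeding each round's summary back to the panel, rather than the raw individual opinions, is what keeps the personalities out of the discussion.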
In deciding how serious a risk is we tend to look at two parameters:
- Probability – the likelihood of the risk occurring
- Impact – the consequences if the risk does occur.
Impact can be assessed in terms of its effect on:

- Time
- Cost
- Quality
There is also a third parameter that needs to be considered:
- Risk proximity – when will the risk occur?
Proximity is an important factor, yet it is one that is often ignored. Certain risks have a window of time during which they will have an impact. The natural tendency is to focus on risks that are immediate, when in reality it is often too late to do anything about them and we remain in ‘fire-fighting’ mode. By thinking now about risks that are 18 months away, we may be able to manage them at a fraction of the impact cost.
Another critical factor relating to risk proximity is the point at which we start to lose options. At the start of a project there may be a variety of approaches that could be taken, and as time goes on those options narrow. We said earlier that risk management is about making better decisions; very often in the education sector we put off taking decisions until the options disappear and only one way forward remains.
Assessment of both probability and impact is subjective, so your definitions need to be at an appropriate level of detail for your project. The scale for measuring probability and impact can be numeric or qualitative, but either way everyone involved must understand what the definitions mean. Very often the scale used is simply high, medium and low; this is probably too vague for most projects. On the other hand, a percentage scale from 1 to 100 is probably too detailed.
Use enough categories so that you can be specific but not so many that you waste time arguing about details that won’t actually affect your actions. Experience suggests that a five-point scale works well for most projects. A suggested scale is:
| Rating | Probability | Impact |
| --- | --- | --- |
| Very low | Unlikely to occur | Negligible impact |
| Low | May occur occasionally | Minor impact on time, cost or quality |
| Medium | Is as likely as not to occur | Moderate impact on time, cost or quality |
| High | Is likely to occur | Substantial impact on time, cost or quality |
| Very high | Is almost certain to occur | Threatens the success of the project |
Assigning numeric scales
To move from qualitative to quantitative risk assessment, you can assign a numeric scale and, by using a ‘traffic light’ system – assigning red, amber or green against pre-determined value ranges – break the risks into groups requiring different response strategies. The red, amber, green designation is known as a ‘RAG status’ and was referred to in the risk log section.
This table uses the same linear scale for both axes:
The next table doubles the numeric value each time on the impact scale. This is perhaps a more useful model as it gives more weight to risks with a high impact. A risk with a low probability but a high impact is thus viewed as much more severe than a risk with a high probability and a low impact. This avoids any ‘averaging out’ of serious risks.
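The difference between the two scoring schemes can be shown with a short Python sketch. The doubled impact values (1, 2, 4, 8, 16) follow the description above; the variable and function names are illustrative assumptions.

```python
# Two ways of weighting the five-point impact scale.
LINEAR_IMPACT = {1: 1, 2: 2, 3: 3, 4: 4, 5: 5}
DOUBLED_IMPACT = {1: 1, 2: 2, 3: 4, 4: 8, 5: 16}  # weight doubles each step

def severity(probability, impact, impact_scale=DOUBLED_IMPACT):
    """Score a risk as its probability rating times its weighted impact rating."""
    return probability * impact_scale[impact]

# Low probability (2) with very high impact (5), versus
# very high probability (5) with low impact (2):
linear_a = severity(2, 5, LINEAR_IMPACT)   # 10 - indistinguishable from...
linear_b = severity(5, 2, LINEAR_IMPACT)   # 10 - ...this much milder risk
doubled_a = severity(2, 5)                 # 32 - now clearly more severe than...
doubled_b = severity(5, 2)                 # 10 - ...the high-probability, low-impact risk
```

Under the linear scale the two risks score identically; the doubling scale makes the low-probability, high-impact risk stand out instead of being averaged away.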
It is questionable whether the amber risks warrant a separate response strategy; it is suggested that you examine each in turn and either ‘promote’ it to red or ‘demote’ it to green. This can be important in assessing the overall level of risk, especially if you opt for the straightforward linear scale in the first table, and it means being particularly clear about what you mean by a ‘medium’ level of probability.
Suppose a risk has a 50% likelihood of occurring and may cost you £20k. This is as likely to happen as not, yet people often ignore risks labelled ‘medium’. There is an argument that it is hardly worth breaking down categories of probability over 50%: once a risk is as likely to happen as not, you should plan for it.
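The arithmetic behind this example is worth making explicit: the exposure a risk represents is its probability multiplied by its impact cost. A minimal sketch (the function name is illustrative):

```python
def expected_cost(probability, impact_cost):
    """Probability-weighted cost of a single risk."""
    return probability * impact_cost

# A 50% likelihood of a £20k impact is a £10k exposure -- too large
# to set aside simply because the risk is labelled 'medium'.
exposure = expected_cost(0.5, 20_000)  # 10000.0
```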
The diagram below shows the previous example with the amber risks demoted or promoted (here, risks with a value of 10 or above have been promoted to red and those below 10 demoted to green).
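The promote-or-demote rule in that example reduces to a simple threshold. The cut-off of 10 comes from the example above; where you draw the line is a judgement for each project, and the function name here is an assumption.

```python
def rag_status(score, red_threshold=10):
    """Once amber is eliminated, a score at or above the threshold is
    promoted to red; anything below it is demoted to green."""
    return "red" if score >= red_threshold else "green"

scores = [4, 8, 10, 16, 32]
statuses = [rag_status(s) for s in scores]
# green, green, red, red, red
```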
Cutting your risk categories down in this way leaves you with two sets of risks requiring a response strategy:
- Red risks = unacceptable. We must spend time, money and effort on a response, most likely at the level of the individual risk.
- Green risks = acceptable. This does not mean they can be ignored: we will cover them by means of contingency, i.e. setting aside a sum of money to cover this group of risks as a whole. We will look at how you calculate contingency in the section on costing risk.
In the above example, progressing with the project itself may be called into question given that more than half the risks are now indicated in red.