Assuming appropriate objectives and measures have been established and comparator groups have been identified on the basis of similar characteristics, the actual benchmarking process at this point may comprise a simple comparison of metrics. However, within a large and diverse HE sector, direct comparisons of metrics will always need qualification and recognition of that diversity.
Even a carefully selected set of comparators based upon similar characteristics will display some diversity that might affect the interpretation of benchmarking results. This problem may be overcome through the expertise of individuals who understand the key differences between comparator institutions and use this knowledge to guide their interpretation of benchmarking results – an element of diagnostic benchmarking.
However, such knowledge may be incomplete. For example, if one were to benchmark non-completion rates between two institutions, one might need to consider their subject mix. If one institution had a medical school – a subject with traditionally low levels of non-completion – and the other did not, that might explain why overall non-completion rates differed. But how much of any observed difference might be explained by subject mix, and how much by other factors?
Using performance indicators
Fortunately there are statistical methods of benchmarking available that aim to make adjustments for factors that might legitimately be expected to influence any measure under consideration. The most notable of these is probably the methodology used within the national performance indicators (PIs) publication.
The national PIs define a range of measures covering areas of strategic or policy interest to a range of HE stakeholders. These measures are calculated and displayed for each HE institution in the UK. However, for more effective comparison of measures between HEIs, each measure is shown alongside a benchmark figure. This benchmark is calculated as a mean value across all HE institutions, weighted by factors that might explain observed differences.
Taking the above example of non-completion, clearly the subjects offered by an institution have an effect on the overall non-completion rate we might expect. The entry qualification profile also has an effect – an institution that only admits high-achieving A-level entrants would expect very different non-completion rates from one that focuses more on attracting students from diverse and non-traditional backgrounds and entry qualifications. The national PIs methodology defines a list of such factors for each of its indicators. These include:
- Subject of study
- Entry qualifications profile
- Age of students
- Region of domicile of students
In this way, rather than comparing indicators directly between institutions, the PIs allow comparison between an institution and the sector average, adjusted for the characteristics of that institution. If direct comparisons between institutions are required, then similar benchmark values might at least suggest that it is fair to compare those institutions.
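The idea of a benchmark as a sector mean weighted by an institution's own characteristics can be sketched as a simple calculation. The sketch below illustrates the principle only: the subject categories, student counts and sector rates are entirely hypothetical, and the real PIs methodology weights across several factors simultaneously, not subject alone.

```python
# Illustrative sketch of a sector-adjusted benchmark, in the spirit of the
# national PIs methodology. All category names and figures are hypothetical.

def adjusted_benchmark(institution_counts, sector_rates):
    """Sector rate per category, weighted by the institution's own
    mix of students across those categories."""
    total = sum(institution_counts.values())
    return sum(
        (count / total) * sector_rates[category]
        for category, count in institution_counts.items()
    )

# Hypothetical sector-wide non-completion rates by subject group.
sector_rates = {"medicine": 0.02, "humanities": 0.08, "sciences": 0.06}

# Two hypothetical institutions with different subject mixes.
with_med_school = {"medicine": 400, "humanities": 300, "sciences": 300}
without_med_school = {"humanities": 500, "sciences": 500}

bench_a = adjusted_benchmark(with_med_school, sector_rates)
bench_b = adjusted_benchmark(without_med_school, sector_rates)

# The institution with a medical school receives a lower benchmark (0.05
# vs 0.07 here), so part of any raw difference in non-completion rates is
# attributable to subject mix rather than performance.
print(round(bench_a, 4), round(bench_b, 4))
```

Each institution is then compared against its own benchmark rather than against other institutions' raw rates, which is the sense in which the PIs "allow for the characteristics of that institution".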
In practice, undertaking this type of statistical benchmarking can be challenging, as it requires the sourcing of detailed data sets. However, new functionality is planned for the Heidi system in 2012 that should make such benchmarking much more straightforward.
However, as explained in the opening sections of this guide, metric benchmarking, no matter how sophisticated, can only identify performance gaps. To understand those gaps and be in a position to address them, further techniques are often required, such as diagnostic and process benchmarking.
Diagnostic benchmarking may be a useful next stage after metric benchmarking: knowledgeable individuals use their own insights, possibly supported by qualitative data, to interpret and understand the results of the metric benchmarking. Such a combination of metric and diagnostic benchmarking may provide a good level of insight at relatively low cost in resource terms, but much depends on the knowledge and experience of the individuals concerned.
Greater insights can be obtained by progressing from metric to process benchmarking, but at higher cost in resources, due to the typically collaborative nature of such approaches. As previously stated, a well-executed intervening diagnostic benchmarking stage can help to shape subsequent process benchmarking to ensure maximum benefit and to target resources most effectively.
Of course, collaborative approaches to external benchmarking require willing partners, but with a wide range of professional groups, associations and mission groups, and a long tradition of collaboration for mutual benefit in the UK HE sector, opportunities are plentiful.
Metric, diagnostic, process or a combination of these?