- Gather data/information
- Select comparator groups
- Decide on periodicity of measurement
Amongst the challenges to benchmarking, as illustrated during the project, are the following:
The selection of comparator institutions is of key importance. Many institutions select comparators with similar characteristics, such as membership of the same mission group, e.g. the Russell Group, 94 Group, Million+ or University Alliance. However, it must be borne in mind that members of mission groups are self-selected, and whilst members may show similar characteristics in some areas, there may be high levels of diversity in others.
For example, members of a mission group may show similarities in research focus and mission, but may be quite divergent in subject mix or profile of graduate destinations.
There are also other specialist groupings; for example, the Leadership Foundation brought a number of smaller institutions together in the MASHEIN network. However, there are many institutions that do not subscribe to any of the national groupings.
The project found that the number of benchmark institutions for any one HEI can range from five to 50, though 50 would represent a very large comparator set.
For example, the University of Sheffield’s strategic plan is steered and monitored through 27 key performance indicators, which are compared against other Russell Group institutions.
Smaller and specialist institutions face particular challenges because of the lack of a sufficient number of similar institutions; the focus to date has therefore tended to be on comparisons within the institution rather than between institutions. However, aspirations for external benchmarking amongst such institutions are growing.
Comparators are not always selected from within mission groups, and different comparators may be selected for different purposes. Factors influencing the selection of comparator institutions can include:
- Number of student applications
- Overall teaching/research balance
- Student satisfaction
- Graduate employment
- Research performance
- Subject mix
- Estate (general characteristics or specific measures)
- UK and international standing
However, in many cases the selection of comparator groups is based on less tangible criteria and may be informed by historical notions of which institutions have operated within the same context or competed for students or staff.
Comparators may change over time. Some institutions identify not only current comparators but also aspirational comparators: institutions whose current performance represents a desired state or target.
Overall, the key lessons to be learned in identifying comparator groups are that:
- Different sets of comparators may be needed for different types of benchmarking
- The characteristics that make institutions relevant as comparators may change over time, so the most appropriate comparator group may also change. However, some stability in comparator groups is desirable, both to enable genuine longer-term comparisons and to avoid the overhead of frequent change. An appropriate cycle might be to review the comparator group every 3-5 years, ideally at a time of major strategy review.
- Rigidly sticking with the same comparator groups permanently, whether these are based on mission groups or historical views of one’s peer group, risks failing to recognise change and missing elements of good practice from which one could learn.
- Evidence-based approaches to the identification of comparator institutions, based on measurable characteristics, are recommended wherever possible (see the sketch after this list).
- Comparator groups need to be selected very carefully and ideally signed off by governing bodies. They need to include the right level of challenge and, often, diversity of type.
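To illustrate what an evidence-based approach might look like in practice, the sketch below ranks candidate institutions by similarity to a home institution across a handful of measurable characteristics drawn from the factors listed above. It is a minimal sketch only: the institution names and figures are hypothetical, and standardised Euclidean distance is one common similarity measure rather than a method prescribed by the project.

```python
# Minimal sketch: rank candidate comparator institutions by similarity
# to a home institution across measurable characteristics.
# All institution names and figures below are hypothetical.

from statistics import mean, pstdev

# Illustrative characteristics: applications per place, research income
# as a share of total income, student satisfaction (%), graduate
# employment rate (%).
METRICS = ["applications_per_place", "research_income_share",
           "student_satisfaction", "graduate_employment"]

institutions = {
    "Home HEI": {"applications_per_place": 6.1, "research_income_share": 0.28,
                 "student_satisfaction": 84.0, "graduate_employment": 93.5},
    "HEI A":    {"applications_per_place": 5.8, "research_income_share": 0.31,
                 "student_satisfaction": 82.5, "graduate_employment": 92.0},
    "HEI B":    {"applications_per_place": 8.9, "research_income_share": 0.12,
                 "student_satisfaction": 88.0, "graduate_employment": 95.1},
    "HEI C":    {"applications_per_place": 6.4, "research_income_share": 0.26,
                 "student_satisfaction": 85.2, "graduate_employment": 93.0},
}

# Standardise each metric (z-score) so that no single characteristic
# dominates purely because of its scale.
stats = {}
for m in METRICS:
    values = [inst[m] for inst in institutions.values()]
    stats[m] = (mean(values), pstdev(values) or 1.0)

def z_profile(inst):
    return [(inst[m] - stats[m][0]) / stats[m][1] for m in METRICS]

home = z_profile(institutions["Home HEI"])

# Euclidean distance in standardised space: smaller = more similar.
def distance(inst):
    return sum((a - b) ** 2 for a, b in zip(z_profile(inst), home)) ** 0.5

ranked = sorted((name for name in institutions if name != "Home HEI"),
                key=lambda n: distance(institutions[n]))
for name in ranked:
    print(f"{name}: distance = {distance(institutions[name]):.2f}")
```

In practice an institution would weight the characteristics to reflect the purpose of the benchmarking exercise, and would sense-check the resulting shortlist against qualitative knowledge of the sector before putting it to the governing body.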
Periodicity of measurement refers to the frequency at which measurements should be carried out or updated. This is linked to:
- The aim of the benchmarking activity
- The availability and update frequency of data
- The timescale over which an institution can react and implement change
If progress is being tracked over a long period of time, for example in monitoring progress against a strategic plan, then annual intervals will be appropriate, linked largely to the availability of national data.
By contrast, the project heard how Kaplan Europe, a large private education provider, will in some cases monitor internal management information on an hourly basis (against a target rather than a benchmark). Such frequent monitoring will be familiar during key operational processes such as confirmation and clearing or registration/enrolment.
However, there seems little point in monitoring so frequently if the institution cannot react and make operational changes within an appropriate timeframe; the periodicity of measurement must be linked to the institution’s ability to react.