Key tools to develop your understanding and use of benchmarking.
This content was archived in November 2017
About this guide
This resource has been developed as a key output from the Higher Education Statistics Agency (HESA) project ‘Realising business benefits through the use of benchmarking’. Integrated with the Jisc strategic planning and business intelligence resources, and building on the HESA report ‘Benchmarking to promote efficiency’, it is intended to be one of the key tools available to managers, supporting institutional strategic planning with appropriate business tools and evidence bases.
This resource does not aim to be a standalone or ‘one stop’ guide to every aspect of benchmarking, but instead a practical overview of the topic. It will evolve and change over time, as the sector discusses how best to develop and embed benchmarking processes.
This resource is intended to be:
A toolkit to help develop understanding and use of benchmarking
Practical and accessible, not overly theoretical
A tool to aid reflection
Case studies are offered either as examples of good practice, or to illustrate activity. The project has not attempted to identify ‘best practice’ because of the size and diversity of the higher education sector. Individual institutions may need to vary their approach according to their mission, strategies, priorities and structure.
Who is it for?
This resource is for wide use but will be primarily of interest to managers and planners and to senior staff responsible for strategic review and implementation of change.
The authors (Graham Fice and Jonathan Waller) would like to express their gratitude to all those who contributed material for this guide, and who are acknowledged individually within the text.
A special thank you to Dr Giles Carden (director of management information and planning at the University of Warwick, and consultant to the HESA benchmarking project) and Patrick Kennedy (former director of strategic planning and change at the University of Exeter) who kindly provided their time and expertise to review draft versions of the guide.
What is benchmarking?
A process through which practices are analysed to provide a standard measurement (‘benchmark’) of effective performance within an organisation (such as a university). Benchmarks are also used to compare performance with other organisations and other sectors.
During the course of the HESA benchmarking project many variations on this definition were found, sharing common themes. A simple and concise summary was provided by one university colleague:
"A way of not only doing the same things better but of discovering new, better and smarter ways of doing things and, in the process of discovery, understanding why they are better or smarter." John Gallacher, Director of Finance, York St John University
Types of benchmarking activity
Existing literature describes the following types of benchmarking:
Implicit (by-product of information gathering) or explicit (deliberate and systematic)
Conducted as an independent (without partners) or a collaborative (partnership) exercise
Confined to a single organisation (internal exercise), or involving other similar or dissimilar organisations (external exercise)
Focused on an entire process (vertical benchmarking) or part of a process as it manifests itself across different functional units (horizontal benchmarking)
Focused on inputs, process or outputs (or a combination of these)
Based on quantitative and/or qualitative information
This resource focuses on two main types of benchmarking – metric (sometimes referred to as ‘performance’) and process:
Metric benchmarking
Provides the information to identify those areas where there is an apparent performance gap. Unless a very complex set of data has been collected, it does not usually reveal the explanatory factors which are the key to understanding that gap: metric benchmarking often requires further investigation in order to understand the results. Put another way, metric benchmarking often doesn’t provide answers to a business problem but can usefully help to focus on the correct questions for further exploration. Metric benchmarking is often undertaken independently, by comparing one’s own performance statistics with similar statistics for other functional units or organisations derived from a data set.
Process benchmarking
Seeks to use the metric benchmarking output as a basis for understanding the apparent performance gap. This involves focusing on the examination and comparison of processes, and will often be undertaken on a collaborative basis between functional units within an organisation, or with other organisations, with the aim of identifying best practice.
These two types of benchmarking may be seen as the two ends of a spectrum of activity. Between them lies an intervening stage – diagnostic benchmarking – in which the self-evaluation prompted by metric benchmarking may be guided by performance criteria and/or facilitated by the insights of well-informed individuals.
The value of this intervening stage should not be underestimated. A diagnostic stage following metric benchmarking may be useful in shaping and focusing any subsequent review. It may be, for example, that a diagnostic approach at this stage identifies likely process deficiencies that are worthy of further scrutiny within a process benchmarking exercise. However, diagnostics might equally suggest the need for something rather more strategic and challenging, such as a major review at departmental level, possibly followed by a restructuring. In this way diagnostics are important in directing effort to achieve effective results.
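To make the distinction concrete, here is a minimal Python sketch of what metric benchmarking typically yields: an apparent gap against a comparator group, with no explanation attached. All institution names and rates are invented for illustration.

```python
# Minimal sketch of metric benchmarking, using invented figures:
# it reports an apparent gap against a comparator group, but cannot
# say *why* the gap exists.

from statistics import median

# Hypothetical non-completion rates (%) for a comparator group
comparators = {
    "University A": 4.2,
    "University B": 5.1,
    "University C": 3.8,
    "University D": 6.0,
    "University E": 4.9,
}

own_rate = 6.5  # our institution's (hypothetical) rate

benchmark = median(comparators.values())
gap = own_rate - benchmark

print(f"Comparator median: {benchmark:.1f}%")   # 4.9%
print(f"Our rate:          {own_rate:.1f}%")
print(f"Apparent gap:      {gap:+.1f} percentage points")
# Explaining the gap is a job for diagnostic or process benchmarking.
```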
Different types of benchmarking can deliver different benefits, but different approaches also require different levels of effort and resource.
Institutions should select the benchmarking approach that is appropriate to their needs, priorities and circumstances.
The pros and cons of metric and process benchmarking can be illustrated thus:
Process benchmarking (qualitative, often collaborative)
Pros: opportunities for higher education institutions to increase efficiency and improve particular functions; a forum to explore shared services
Cons: it can be difficult to identify a benchmark group; sensitive business information may be difficult to access

Metric benchmarking (quantitative, often non-collaborative)
Pros: individual measures can highlight strengths and weaknesses and offer insights; potential to boost international reputation
Cons: metrics may not be the most relevant or appropriate; danger of promoting homogeneity

Table: Sam Cole, University of Warwick
Benchmarking can be used along a continuum: at one end to demonstrate accountability to stakeholders, and at the other to develop the institution and its competitive advantage.
Two comments from the sector illustrate the two ends of the continuum:
"The HESA performance indicators and the University’s performance against benchmarks are included in the suite of performance management information which is shared routinely with the board of governors. Benchmarks and relative performance are good and objective ways of assuring that the university keeps on track with its plans. The indicators also help to provide assurance to stakeholders including the public and policy makers." Vice-chancellor of a post-92 university
"The overarching aim of a benchmarking process is to place performance in perspective against the sector or a more specific group of institutions. A key element of benchmarking is the identification of institutions that achieve high levels of performance which can act as examples of good practice.
By analysing, assessing and implementing actions based on examples of good practice, institutions can achieve more efficient processes and ultimately higher levels of performance. Sensible benchmarking can lead to realistic target setting processes in relation to a broad spectrum of performance indicators, which encourages a more efficient environment." A head of planning
It is clear that, given the seismic changes occurring in the UK HE sector, benchmarks and the uses of benchmarking are likely to change. Benchmarks that were important in a funding-led world will not necessarily be as relevant in the changed sector, and new benchmarks will be required.
There is no doubt that increasing pressure to demonstrate operational efficiency and to prosper in an increasingly competitive environment is a significant driver of greater use of benchmarking techniques. Benchmarking enables institutions to focus on specific problems, isolate improvement opportunities and identify strengths that should be preserved.
"HE needs to understand how to operate in the changing environment and there is a consequent need to understand the market and student needs. There is a fast-moving agenda, and the dynamics of the marketplace are being introduced to HE. ‘privatisation’ and private providers are entering the sector and the role of FE colleges may change."
"There are issues of cost and quality, the efficiency and effectiveness of processes, tracking student mobility, and the data and skills necessary to support business-like operation." Senior higher education managers at a ‘thinktank’ event during the project
Why benchmark? Potential benefits
Ten reasons to use benchmarking have been set out by the Benchmarking in European Higher Education project, which says that benchmarking strengthens an institution’s ability to do the following more successfully:
Self-assess its performance
Better understand the processes which support strategy formulation and implementation in increasingly competitive environments
Measure against and compare with other institutions or organisations, and assess the reasons for any differences
Encourage discovery of new ideas through a strategic look (inside or outside the institution)
Obtain data to support decision-making
Set effective targets for improvement
Strengthen institutional identity, strategy formulation and implementation
Respond to national (or international) performance indicators and benchmarks
Set new standards for the institution and sector
"Benchmarking the university’s performance against other higher education institutions allows the University to get a sense of where it is performing well in relation to others." University of Bristol website
During the project an illustration was offered of the value of benchmarking and its essential link to institutional improvement.
A strategy-contingent approach to benchmarking
This section proposes a process model which places benchmarking exactly where it can generate the most significant benefits – at the heart of the strategic management of an institution. When benchmarking activity is properly aligned with, and is used to support, the strategic objectives of the institution it can be a powerful tool for managers and decision-makers.
We begin this section by examining the preparation and foundations necessary to ensure success in any benchmarking process.
"Benchmarking is not a black-box technology. The success of a benchmarking exercise ultimately comes down to the capability of managers to use that information to better understand their institutional situation and produce an agenda for strategic change." Benchmarking in European Higher Education project
It is important to be aware at the beginning of the benchmarking process of:
The need for careful planning of the process
The need to assess, and ideally quantify, the change (impact) resulting from the process, in order to demonstrate benefit
Benchmarking is neither a ‘black-box’ technology nor a ‘tick-box’ exercise. The Benchmarking in European Higher Education project says benchmarking is not:
A fad or panacea which can be picked up and applied, or a ‘cookbook’ from which elements can be selected, but an integrated and integral part of strategic management
A measurement mechanism but a process of discovery and learning
The presentation of data without action on the data
A mechanism for resource reduction although resources may subsequently be redeployed in a more effective way to increase institutional performance
A range of studies on HE benchmarking identify certain key attributes as indicators of success:
Senior commitment and leadership
Clear objectives with key links to organisational mission, strategy, performance, structure and resources
Commitment to ongoing improvement and change on a wide scale
Willingness to become a learning organisation
Understanding of own business
Characteristics of benchmarking process
A dynamic but rigorous and professional approach
Clear measures at the outset
Appropriate data to support the process
Understanding of the data
Appropriate comparators and partners (for collaborative exercises)
Involvement of stakeholders as necessary
Understanding of comparators and partners’ business (to analyse and interpret data)
Ability to share innovation and good practice
Appropriate time and resources to support the process
Characteristics on completion of benchmarking process
Clear decisions on implementation of results
Management of change and improvement
Feed back into ongoing process
Ongoing learning, improvement and change
The process model
A strategy-contingent approach to benchmarking
Steps one and two: set objectives and investigate context
For a benchmarking exercise to be of value it must have clearly defined objectives from the outset. These objectives will be most effective when properly aligned with the strategic aims of the institution.
This essential linkage is illustrated below:
Benchmarking objectives will usually be derived from the decomposition of strategic plan objectives into delivery and operational objectives at functional level within the HEI. This process will often result in the setting of specific targets for improvement over a pre-defined time period, which may be informed by evaluation of the results of preliminary benchmarking.
However, benchmarking isn’t always about improvement. A valid benchmarking objective may simply be to establish the relative position of an institution as compared to its peers or competitors on some characteristic. This may provide reassurance to managers or governors if the relative position is deemed acceptable or may generate a requirement for improvement if not.
Having identified the objectives of any benchmarking process, the next requirement is to investigate the context within which the objective is sought. This must include a thorough and honest self-appraisal of the organisation’s current position. The following questions may help guide such a self-appraisal:
What is our current state of maturity with regards to the objective?
What evidence is available to establish our starting point or baseline position?
What environmental factors may impact our ability to meet the objective? Are these static or changing over time?
Environmental scanning and awareness is an essential part of strategic planning. It may be the precursor to a benchmarking exercise when information suggests the need for investigation or action.
“Benchmarking makes a valuable contribution to operational and strategic development but it must be aligned. Context matters in understanding business performance.” Ken Sloan, Director of Universities and Higher Education, Serco
Step three: research target
What to measure is related to the scope of the benchmarking activity. Clearly, measures must relate closely to the activity under investigation – ideally direct characteristics of the activity, but where direct characteristics cannot be measured proxies may be used instead. Whether or not direct characteristics can be measured will often depend upon the availability of relevant data.
Whilst relevant, detailed and up-to-date data may exist within one’s own institution, comparable data for other institutions may not be readily available. For example, if one’s objective was to benchmark tutor group sizes, one would probably have access to relevant and detailed data for one’s own institution, but for the purposes of inter-institutional comparisons one may be forced to use a proxy measure such as student:staff ratios instead.
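As a simple illustration of a proxy measure, the following sketch computes student:staff ratios from full-time-equivalent figures (all figures invented) where tutor group sizes themselves cannot be observed for other institutions.

```python
# Hypothetical illustration of a proxy measure: student:staff ratio
# (SSR) computed from full-time-equivalent (FTE) counts, standing in
# for tutor group sizes that cannot be observed at other institutions.

institutions = {
    # name: (student FTE, academic staff FTE) -- invented figures
    "Our institution": (14_500, 860),
    "Comparator X": (18_200, 1_240),
    "Comparator Y": (9_800, 540),
}

for name, (students, staff) in institutions.items():
    print(f"{name}: SSR = {students / staff:.1f}")
```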
Additionally, while the common emphasis may be on quantitative data, a survey of the HEI planning community in phase one of the HESA benchmarking project showed equal use of qualitative data. Qualitative data may support a more narrative-based analysis, which can set context and add depth to quantitative approaches. In this way qualitative analysis is often a valuable complement to quantitative analysis.
Selecting data sources
At this stage of any benchmarking process the next consideration is the identification of suitable sources of data to support the measurement. In the case of internal benchmarking, perhaps between functional units within an HEI, there may be a range of internal management information resources that are appropriate.
In the case of external benchmarking between HEIs (or indeed other types of organisation) there are a number of existing published summaries which provide details on the main data sources that are appropriate and useful for higher education benchmarking.
HESA is also preparing a list of data sources specifically for the purposes of supporting benchmarking processes, to be published shortly. It will include information on the utility of each source for this purpose, classified by the type of benchmarking for which each source is most relevant.
The characteristics of any potential data source should be assessed in making a judgement on its suitability for benchmarking. The following is a suggested checklist of the most important characteristics in ensuring that any external data source is fit for purpose:
Are the data relevant to what we are trying to measure?
Do they allow for direct measurement or do they provide proxy measures?
How readily accessible are the data? Are they published or available on request?
Is there a charge to access the data? Is there a budget available to cover any such costs?
Are there any restrictions on access and usage of data for particular purposes?
Are data available in formats that are useful to us?
Is full supporting information available – including any metadata, definitions, notes and caveats to aid interpretation?
How up to date are available data?
How frequently are the data updated?
Do the data provide the coverage we need? Geographically? By range and type of data subject?
Stability over time
Do we need time-series of data?
Is any time-series available over the time period we need?
Have there been any major changes in the data that might cause discontinuities in time-series over the period in which we are interested?
How are the data compiled? By census or survey?
How are the data quality-assured?
Are the data audited?
If data are collected by sample surveys, have the samples been designed to provide data that are sufficiently representative of the population in which we are interested?
Are the response rates for survey data sufficient for our purposes?
How comparable are the data between HEIs?
Have the data been collected using a standardised framework and methodology?
To what extent do the data utilise standard definitions and coding frames?
How much latitude is there for respondents and data suppliers to interpret the data requirements in different ways during collection?
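One way to make this checklist repeatable across candidate sources is to record the answers in a simple structure. The sketch below is purely illustrative; the field names are our own shorthand for the questions above, not part of any official framework.

```python
# Illustrative only: recording checklist answers for a candidate
# benchmarking data source.

from dataclasses import dataclass, asdict

@dataclass
class DataSourceAssessment:
    name: str
    relevant: bool            # relevant to what we are trying to measure?
    direct_measure: bool      # direct measurement rather than a proxy?
    accessible: bool          # published or available on request?
    affordable: bool          # any access charge within budget?
    usable_format: bool       # available in formats useful to us?
    documented: bool          # metadata, definitions and caveats available?
    timely: bool              # up to date and updated often enough?
    coverage_ok: bool         # geographic and subject coverage we need?
    stable_over_time: bool    # time-series without major discontinuities?
    quality_assured: bool     # quality assurance, audit, sample design OK?
    comparable: bool          # standard definitions and coding frames?

    def fit_for_purpose(self) -> bool:
        # Simplistically, every criterion must be satisfied; in practice
        # an institution would weight criteria by importance.
        answers = {k: v for k, v in asdict(self).items() if k != "name"}
        return all(answers.values())
```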
What should we measure?
How should we measure: quantitative or qualitative data?
What data/information are available?
Are the data of sufficient quality and coverage to support our intended measurement?
Step four: gather data / information
Select comparator groups
Decide on periodicity of measurement
Amongst the challenges to benchmarking illustrated during the project are the following:
The selection of comparator institutions is of key importance. Many institutions select comparators with similar characteristics, such as their mission group (eg Russell Group, 1994 Group, Million+, University Alliance). However, it must be borne in mind that members of mission groups are self-selected: whilst members may show similar characteristics in some areas, there may be high levels of diversity in others.
For example, members of a mission group may show similarities in research focus and mission, but may be quite divergent in subject mix or profile of graduate destinations.
There are also other specialist groupings, for example the Leadership Foundation brought a number of smaller institutions together in the MASHEIN network. But there are many institutions which do not subscribe to one of the national groupings.
The project found that the number of benchmark institutions for any one HEI can range from five to 50 institutions, but 50 would represent a very large comparator set.
The smaller and specialist institutions face some challenges because of the lack of a sufficient number of similar institutions, and the focus to date has tended to be within the institution rather than between institutions. However, aspirations for external benchmarking amongst such institutions are growing.
Comparators are not always selected within mission groups and different comparators may be selected for different purposes. Factors influencing the selection of comparator institutions can include:
Number of student applications
Overall teaching/research balance
Estate (general characteristics or specific measures)
UK and international standing
However, in many cases the selection of comparator groups is based on less tangible criteria and may be informed by historic notions of which institutions have operated within the same context or competed for students or staff.
Comparators may change over time. Some institutions identify not only current comparators but aspirational comparators: institutions whose current performance represents a desired state or target.
Overall the key lessons to be learned in identifying comparator groups are that:
Different sets of comparators may be needed for different types of benchmarking
The characteristics of institutions that may make them relevant as comparators may change over time, and therefore the most appropriate comparator group may change over time. However, some stability in comparator groups is desirable, to enable genuine longer-term comparisons and to avoid the overhead of frequent change. An appropriate cycle might be to review the comparator group every three to five years, ideally at a time of major strategy review.
Rigidly sticking with the same comparator groups permanently, whether these are based on mission groups or historical views of one’s peer group, risks failing to recognise change and missing elements of good practice from which one could learn.
Evidence-based approaches to the identification of comparator institutions, based on measurable characteristics, are recommended wherever possible (a simple sketch of such an approach follows this list).
Comparator groups need to be selected very carefully and ideally signed off by governing bodies. They need to include the right level of challenge and, often, diversity of type.
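As an illustration of an evidence-based approach, the following sketch standardises a few measurable characteristics and selects the institutions closest to one’s own. The characteristics, figures and institution names are all hypothetical, and a real exercise would use a richer set of measures.

```python
# Illustrative evidence-based comparator selection: choose the k
# institutions closest to us on standardised, measurable
# characteristics. All names and figures are hypothetical.

from math import sqrt
from statistics import mean, stdev

# Hypothetical characteristics: (student FTE, research income in £m,
# proportion of postgraduate students)
institutions = {
    "Us":     (15_000, 40.0, 0.25),
    "Inst A": (14_000, 35.0, 0.22),
    "Inst B": (30_000, 120.0, 0.40),
    "Inst C": (16_500, 48.0, 0.28),
    "Inst D": (6_000, 5.0, 0.10),
    "Inst E": (13_500, 30.0, 0.20),
}

# Standardise each characteristic (z-scores) so that differences of
# scale do not dominate the distance measure.
cols = list(zip(*institutions.values()))
means = [mean(c) for c in cols]
sds = [stdev(c) for c in cols]
z = {
    name: [(v - m) / s for v, m, s in zip(vals, means, sds)]
    for name, vals in institutions.items()
}

def distance(a, b):
    # Euclidean distance in standardised characteristic space
    return sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

k = 3
ranked = sorted(
    (name for name in z if name != "Us"),
    key=lambda name: distance(z["Us"], z[name]),
)
print("Closest comparators:", ranked[:k])
```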
Periodicity of measurement refers to the frequency at which measurements should be carried out or updated. This is linked to:
The aim of the benchmarking activity
The availability and update frequency of data
The timescale over which an institution can react and implement change
If progress is being tracked over a long period of time, for example in monitoring progress against a strategic plan, then annual intervals will be appropriate, linked largely to availability of national data.
But the project heard, for example, how Kaplan Europe, a large private education provider, will monitor internal management information in some cases on an hourly basis (against a target rather than a benchmark). Such regular monitoring will be familiar during key operational processes such as confirmation and clearing, or registration/enrolment.
However, there seems little point in monitoring on such a frequent basis if the institution cannot react and make operational changes in an appropriate timeframe. The periodicity of measurement must be linked to the institution’s ability to react.
Step five: measure
At this point in the benchmarking process, assuming appropriate objectives and measures have been established and comparator groups have been identified on the basis of similar characteristics, the actual benchmarking process may comprise a simple comparison of metrics. However, within a large and diverse HE sector, direct comparisons of metrics will always need qualification and recognition of the diversity.
Even a carefully selected set of comparators based upon similar characteristics will display some diversity that might impact on the interpretation of benchmarking results. This problem may be overcome through the expertise of individuals who understand the key differences between comparator institutions and use this knowledge to guide their interpretation of benchmarking results – an element of diagnostic benchmarking.
However, such knowledge may be incomplete. For example, if one were to benchmark on non-completion rates between two institutions one may need to consider the subject mix of those institutions. If one institution had a medical school – a subject with traditionally low levels of non-completion – and one did not, then that might explain why overall non-completion rates differed. But how much of any observed difference might be explained by subject mix and how much by other factors?
Using performance indicators
Fortunately there are statistical methods of benchmarking available that aim to make adjustments for factors that might legitimately be expected to influence any measure under consideration. The most notable of these is probably the methodology used within the national performance indicators (PIs) publication.
The national PIs define a range of measures which cover areas of strategic or policy interest to a range of HE stakeholders. These measures are calculated and displayed for each HE institution in the UK. However, for more effective comparison of measures between HEIs, each measure is shown alongside a benchmark figure. This benchmark figure is calculated as a mean value for all HE institutions, but weighted on factors that might explain observed differences.
Taking the above example of non-completion, clearly the subjects offered by an institution have an effect on the overall non-completion rate we might expect. The entry qualification profile also has an effect – an institution that only admits high-achieving A-level entrants would expect very different non-completion rates from one that focused more on attracting students from diverse and non-traditional backgrounds and entry qualifications. The national PIs methodology defines a list of such factors for each of its indicators. These include:
Subject of study
Entry qualifications profile
Age of students
Region of domicile of students
In this way, rather than compare indicators directly between institutions, the PIs allow the comparison between an institution and the average of the sector, allowing for the characteristics of that institution. If direct comparisons between institutions are required then similar benchmark values might at least suggest that it is fair to compare those institutions.
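The following sketch illustrates the general principle behind such adjusted benchmarks; it is not the national PI methodology itself, which uses more factors and refinements. The benchmark is the sector-wide rate in each category, weighted by the institution’s own mix of students across categories. All figures are invented.

```python
# Illustrative adjusted benchmark in the spirit of the national PIs
# (simplified; the real methodology uses more factors and detail).

# Sector-wide non-completion rates by subject group (hypothetical):
sector_rates = {"Medicine": 0.02, "Humanities": 0.08, "Engineering": 0.06}

# Our institution's student numbers in each subject group:
our_students = {"Medicine": 1_000, "Humanities": 3_000, "Engineering": 2_000}

total = sum(our_students.values())

# Benchmark = sector rate in each category, weighted by our own
# student mix -- the rate we would "expect" given our subject profile.
benchmark = sum(
    sector_rates[subj] * n / total for subj, n in our_students.items()
)

our_actual_rate = 0.055  # hypothetical observed rate

print(f"Expected (benchmark) rate: {benchmark:.3f}")   # 0.063
print(f"Actual rate:               {our_actual_rate:.3f}")
print(f"Difference:                {our_actual_rate - benchmark:+.3f}")
```

On these invented figures the institution’s non-completion rate is slightly lower (better) than its subject mix alone would predict, which is the kind of like-for-like reading an adjusted benchmark makes possible.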
In practice, undertaking this type of statistical benchmarking can be challenging and requires the sourcing of detailed data sets. However, new functionality is planned for the Heidi system in 2012 that should make such benchmarking much more straightforward.
However, as explained in the opening sections of this guide, metric benchmarking, no matter how sophisticated, can only identify performance gaps. In order to understand those gaps and be in a position to address them, further techniques are often required, such as diagnostic and process benchmarking.
Diagnostic benchmarking may be a useful next stage from metric benchmarking, in which knowledgeable individuals may use their own insights to interpret and understand the results of the metric benchmarking, and this may involve use of qualitative data. Such a combination of metric with diagnostic benchmarking may provide a good level of insight at a relatively low cost in resource terms, but much depends on the knowledge and experience of the individuals concerned.
Greater insights can be obtained by progressing from metric to process benchmarking, but at higher cost in resources, due to the typically collaborative nature of such approaches. As previously stated, a well-executed intervening diagnostic benchmarking stage can help to shape subsequent process benchmarking to ensure maximum benefit and to target resources most effectively.
Of course, collaborative approaches to external benchmarking require willing partners, but with a wide range of professional groups, associations and mission groups, and a long tradition in the UK HE sector of collaboration for mutual benefit, opportunities are plentiful.
Metric, diagnostic, process or combination of these?
Step six: evaluate
Having undertaken a benchmarking process by some combination of metric, diagnostic and process approaches, the next stage is to evaluate the results in the context of the original strategic objectives of the benchmarking exercise. A useful first stage of this can be to undertake a ‘sense-check’ of the results – are the results plausible and credible?
Some expert assessment at this stage is useful in ensuring that the benchmarking results represent a well-founded and reliable basis for decision making. Appropriate, insightful interpretation of the results is helpful in identifying which benchmarking outcomes offer the most promising improvement opportunities.
Presenting benchmarking data
The results of the benchmarking processes will typically be presented to managers and decision-makers at this point in the process. Effective presentation of benchmarking data (or indeed other forms of business intelligence) is very important in ensuring that the results are accessible, understood and applied.
Such presentation must reflect:
The audience, their expectations and level of technical knowledge
The objectives for the presentation (for example: for information or to prompt action)
Any caveats that must be considered in interpreting the data
The following two illustrations were offered during the project; the case study of the University of Warwick in step five also provides relevant examples.
Principles of effective presentation
Effective visual presentation
Effective presentation of benchmarking results to decision makers will often lead to searching discussions.
If we are doing badly compared to others – why?
If we are doing well, what are the underlying factors?
Decisions must be taken on areas of performance in which improvement is required and the setting of targets to direct improvement.
The results of benchmarking which highlight performance gaps and generate insight into causes will provide context for realistic target setting in terms of extent and timeframes for change. Of course this must be balanced against the costs and capability of an institution to implement change.
The University of Sheffield case study contains further information about this stage of the process.
Use measurements and benchmarks to evaluate performance
Determine gaps and/or establish process differences
Set targets and monitor
Step seven: manage improvement
During the project an expert panel was asked how the outcomes of a successful benchmarking exercise would be realised. The responses were:
By a change in institutional policy or process
Through change in institutional culture and leadership
Benchmarking usually involves change. Many institutions have offices focused on supporting change management. The Association of University Administrators (AUA) supports the change management group and both the Leadership Foundation and the Higher Education Academy provide development and support at more senior levels.
Change takes time, and the amount of time depends upon the scale of change being attempted. It is also important to keep unintended consequences in mind – a change to improve performance in one area can affect other areas. Implementing a benchmarking programme and acting on its results will help equip the organisation to improve continuously. The focus should be on developing a strategy to implement leading practices and on making the process changes needed to drive improved performance.
Communications to staff regarding the benchmarking initiative should be expansive, regular and delivered to many levels within the organisation.
Strategy execution – prioritise improvement initiatives by strategic need and track performance against targets
Manage change and improvement
Step eight: review strategic objectives
The strategy-contingent approach to benchmarking proposed in this resource is not a linear process but a cyclical one.
This reflects two facts:
Benchmarking is used to best effect when it becomes an intrinsic part of continuous development, refinement and implementation of strategy, rather than a one-off exercise
In a rapidly changing environment all aspects of the process intended to effect improvement, from monitoring of the operational actions taken to assessment of the strategic objectives on which those actions are based, must be regularly reviewed
Step eight is therefore an extremely important one that can easily be undervalued. Without such a stage, changes in the environment can overtake a programme of improvement and lessen or even negate its effects. A number of questions must be answered:
How effective are the improvements being made?
How quickly are improvements being seen? Are we on course to reach improvement targets in the scheduled timeframe? If not, then what corrective actions are required?
Are the targets originally set still the right ones in the current context? Do we need to update any aspect of the benchmarking work to re-establish the latest context for the targets? Do we need to change the targets as a result?
Do we need to re-assess any aspect of the original strategic objectives?
In this way assessment of progress and context can, depending on the outcome, result in a loop back to one of three points in the strategy-contingent benchmarking process:
Manage improvement – continue managing the improvement with the same strategic objective and existing targets but perhaps with modified actions to ensure correct trajectory of change
Evaluate – review/change targets in current context. Perhaps update benchmarking work to re-establish current context
Set objectives – fundamental review/change of original strategic objectives
Adjust plans or targets
Assess benefits of benchmarking exercise
Benchmarking - a maturity framework
The Benchmarking in European Higher Education project has underlined that there will be a different starting point for every higher education institution, dependent inter alia on:
Focus on improvement
Willingness to change
Degree of autonomy (of institution within sector or units within institution)
The HESA-commissioned report from PA Consulting ‘International benchmarking in UK higher education’ identifies the concept of maturity in relation to international benchmarking in higher education. Maturity is also identified as a key concept in our business intelligence guide as institutions assess their current position and measure progress.
In terms of the findings from the HESA benchmarking project, it is clear that:
Each institution has varying levels of understanding of, and gives varying priority to, benchmarking (often linked to mission and strategy)
Each institution has varying capacity and capability to undertake benchmarking (often a factor of size)
Within institutions, specific areas may be more or less advanced in benchmarking
A maturity framework for benchmarking can be drawn out of the success factors and case studies shown within this infoKit, which provides a useful means of self-appraisal and also a route for enhancing competence and capability to benefit from benchmarking. This framework addresses leadership and governance, alignment with corporate strategy, resources, comparator groups, types of benchmarking, technology and source data.
Leadership and governance
Level 1: Benchmarking undertaken by specialists/analysts. Results viewed by individual enthusiasts at middle management level. No appreciation of the value of benchmarking at more senior levels.
Level 2: Individual senior managers are advocates of benchmarking and may promote use of the technique within their departments on an ad hoc basis.
Level 3: Senior management team members are advocates of benchmarking and review outcomes of benchmarking in setting strategic objectives. Benchmarking analyses form part of the suite of information routinely shared with the head of institution and senior management team.
Alignment with corporate strategy
Level 1: Results of benchmarking reviewed by individuals in particular departments with no explicit relationship with corporate or departmental strategy. Analyses undertaken on an ad hoc basis to investigate particular issues.
Level 2: Benchmarking analyses used as context for departmental strategy but not on a systematic basis. No explicit link with overall corporate strategy.
Level 3: All benchmarking activity fully aligned with corporate strategy across the institution. Development of corporate strategy informed where relevant by benchmarking analyses.
Resources
Level 1: Lone individuals undertaking benchmarking when time and resources permit.
Level 2: Resources made available to support benchmarking but on an infrequent and ad hoc basis.
Level 3: Benchmarking considered an intrinsic part of strategic planning, with staffing and resources allocated appropriately.
Comparator groups (external benchmarking)
Level 1: Mission groups or other historic groups used for all benchmarking analyses.
Level 2: Different comparator groups used for different types of benchmarking, but composition not updated regularly using latest evidence.
Level 3: Different comparator groups for different types of benchmarking, selected through evidence. Groups updated at a frequency that aligns with the strategic planning cycle of the institution. Selection designed to include the correct level of challenge and diversity of type. Aspirational as well as current comparator groups.
Types of benchmarking used
Level 1: Simple comparisons based on metric benchmarking (internal or external). Provides more focused questions for further exploration.
Level 2: More sophisticated comparisons using metric benchmarking with diagnostic approaches to provide depth to the analysis. Targets areas for further investigation but also starts to generate insights and context.
Level 3: Sophisticated metric benchmarking, perhaps using techniques to normalise data by institutional characteristics. Diagnostic techniques used to focus and shape subsequent collaborative process benchmarking. Provides valuable and in-depth insights and gives practical depth to identification of good practice.
Technology
Level 1: Any benchmarking comparisons undertaken largely through use of standard spreadsheet applications.
Level 2: Use of specialist software applications providing more sophisticated benchmarking/dashboard functionality.
Level 3: Use of an enterprise-wide business intelligence system based on a comprehensive data warehouse application.
Source data
Level 1: Source data compiled on a one-off and ad hoc basis. Questionable quality and comparability of data. Localised sources of data held within departments that are not accessible to all staff and are not trusted across the institution.
Level 2: Data feeds taken from good-quality sources at regular intervals. Comparability ensured to a high degree. Visibility and sharing of data between departments. Increasing trust in the data arising from developing consistency and transparency of data gathering processes.
Level 3: Quality-assured internal and external data maintained as a central institutional resource. Integrated and coordinated approach to data gathering and update, promoting timely and consistent data – “one version of the truth” that is trusted across the HEI. Adherence to data standards ensuring comparability and stability over time.
It is important to understand that although level three in each of the above dimensions of benchmarking represents the highest level of maturity, this may not be attainable or even desirable for every institution.
Application of benchmarking must reflect the mission, characteristics and resources of each institution, and so decisions must be taken as to what the desired maturity level is within each dimension, together with the steps that are required to achieve it.