What to measure is determined by the scope of the benchmarking activity. Clearly, measures must relate closely to the activity under investigation – ideally direct characteristics of the activity, but where direct characteristics cannot be measured, proxies may be used instead. Whether direct characteristics can be measured will often depend upon the availability of relevant data.
Whilst relevant, detailed and up-to-date data may exist within one’s own institution, comparable data for other institutions may not be readily available. For example, if one’s objective was to benchmark tutor group sizes, one would probably have access to relevant and detailed data for one’s own institution, but for the purposes of inter-institutional comparisons one may be forced to use a proxy measure such as student:staff ratios instead.
Additionally, while the common emphasis may be on quantitative data, a survey of the HEI planning community in phase one of the HESA benchmarking project showed equal use of qualitative data. Qualitative data can support a more narrative-based analysis, setting context and adding depth to quantitative approaches. In this way, qualitative analysis is often a valuable complement to quantitative analysis.
Selecting data sources
At this stage of any benchmarking process, the next consideration is to identify suitable sources of data to support the measurement. In the case of internal benchmarking, perhaps between functional units within an HEI, there may be a range of internal management information resources that are appropriate.
In the case of external benchmarking between HEIs (or indeed other types of organisation), a number of published summaries detail the main data sources that are appropriate and useful for higher education benchmarking.
Our business intelligence guide provides one such summary, classified by functional area and entitled ‘what data can external sources supply?’. Another useful list of data sources, aimed at academic researchers, is provided by the Administrative Data Liaison Service.
HESA is also preparing a list of data sources specifically to support benchmarking processes, which will be published shortly. It includes information on the utility of each source for this purpose and is classified by the type of benchmarking for which each source is most relevant.
The characteristics of any potential data source should be assessed in making a judgement on its suitability for benchmarking. The following is a suggested checklist of the most important characteristics for ensuring that any external data source is fit for purpose:
- Are the data relevant to what we are trying to measure?
- Do they allow for direct measurement or do they provide proxy measures?
- How readily accessible are the data? Are they published or available on request?
- Is there a charge to access the data? Is there a budget available to cover any such costs?
- Are there any restrictions on access and usage of data for particular purposes?
- Are data available in formats that are useful to us?
- Is full supporting information available – including any metadata, definitions, notes and caveats to aid interpretation?
- How up to date are available data?
- How frequently are the data updated?
- Do the data provide the coverage we need? Geographically? By range and type of data subject?
Stability over time
- Do we need time-series of data?
- Is any time-series available over the time period we need?
- Have there been any major changes in the data that might cause discontinuities in time-series over the period in which we are interested?
- How are the data compiled? By census or survey?
- How are the data quality-assured?
- Are the data audited?
- If data are collected by sample surveys, have the samples been designed to provide data that are sufficiently representative of the population in which we are interested?
- Are the response rates for survey data sufficient for our purposes?
- How comparable are the data between HEIs?
- Have the data been collected using a standardised framework and methodology?
- To what extent do the data utilise standard definitions and coding frames?
- How much latitude is there for respondents and data suppliers to interpret the data requirements in different ways during collection?
Key questions
- What should we measure?
- How should we measure: quantitative or qualitative data?
- What data/information are available?
- Are the data of sufficient quality and coverage to support our intended measurement?