This section is taken from the work of the Open University’s OULDI project, which researched staff attitudes to adopting new tools to support curriculum design. The project identified and analysed five common objections:
“I haven’t enough time.”
Staff may perceive themselves to have ‘not enough’ time for a number of reasons. First, they may see themselves as less skilled than others, or as relatively inexperienced or inefficient in using the tool or approach.
The use of ‘I’ may be important here; i.e. “I haven’t the skills or knowledge to do this in the time available” rather than “there isn’t time for someone with sufficient skills or knowledge to do this.”
Furthermore, an individual’s estimate of the time required may include the time needed first to learn the new skills or knowledge. This extends the reason not to engage, but continues to attribute resistance to an external factor (limited time) rather than to a factor specific to the individual (lack of skill).
Even when depersonalised to “there isn’t enough time”, the claim is a statement of fact rather than an overwhelming reason to disengage. There are still options available, e.g.:
- Stop doing something else less important and do the proposed change instead
- Do other tasks more efficiently so as to free up time
- Expend more time by adding the change to existing practice
Invariably, the discourse of resistance favours the last of these. This is perhaps because the first option requires an appraisal of the current process and a determination of the relative value of the new tool or approach in question: compared to other tasks, what priority should the change be given?
“It doesn’t help me.”
Objections presented in terms of low value to the individual concerned are quite hard to argue against, because they relate so closely to personal experience.
The Open University notes, however, a discernible strand in this particular discourse of resistance: arguments about value that appear ill thought through or poorly articulated, or that focus on relatively superficial issues. It cites, as an example, lengthy arguments about the semantics used within design tools. In contrast, more positive discourses associated with value tend to refer to experience gained through extended and applied use of the tool or approach, to the benefits to others (such as colleagues or the institution), and to the benefit to students.
“Prove to me this works.” or “Where is the evidence?”
On the face of it, these are fair and reasonable requests. However, the link between demonstrable evidence of impact and convincing someone to use a tool or approach is not straightforward. Some academics are happy to pilot a new teaching idea with almost no evidence beyond their own ‘hunch’, whereas others continue to argue against practices (especially with regard to assessment and feedback) that are recognised sector-wide as good practice.
The Open University suggests that such anomalies in behaviour relate to the change adopter’s desire to understand the risks involved. Using valuable time to trial a new tool or approach involves risk; equally, doing nothing also involves risk. Individuals therefore seek reassurance that the risk is worthwhile and, presumably, the greater the perceived risk, the greater the demand for evidence to ‘prove’ it will work, and the greater the trust the individual or team must place in that reassurance. This may be compounded by the fear that a new approach will reveal deficiencies in existing practice or result in a loss of autonomy.
There are many examples of those resisting change continuing to ask for ever more evidence or guarantees. The University of Huddersfield has undertaken considerable research into electronic assessment management (EAM) and concludes that the risk aversion that leads institutions to procrastinate and demand ever more and better evidence results in them muddling along with costly and sub-optimal solutions.
"Senior managers in HEIs are keen to adopt new strategies for managing assessment, but many feel reluctant to do so because of the perceived risks involved. Key amongst these risks is the concern that implementing EAM strategies will invite strong resistance from academic staff and that a system which is not reliable and/or robust will generate distrust and dissatisfaction amongst students.
Lesser amongst the risks, but which are nevertheless significant, are training, procurement and data management issues; these all bring with them potentially significant costs as well as risk. In terms of training, concerns that students will struggle with electronic submission systems remain high.
So, while institutions are keen to adopt EAM strategies as quickly as they can, many are also feeling hesitant to do so. This leaves institutions running the risk of finding themselves constantly stalling or, alternatively, developing radically over-engineered solutions which are more cumbersome, costly and inflexible than they need to be."
University of Huddersfield, EBEAM project final report
“I don’t really need to use it.”
This objection questions the basis on which a problem, and hence a need, has been identified. Even where use is a contractual requirement or part of an institutional process, that fact alone will not guarantee effective or productive use.
Demonstrating need can often be difficult because many current measures of quality have emerged to measure existing practices, not new ones. It is therefore difficult to argue, for example, for greater collaboration across module production teams if no one has data to show this is an issue, or if management has not recognised it as such.
Our managing course information guide has some interesting examples of where analysing data about the curriculum and about assessment practice has led to large-scale change.
The broader organisational, further education and higher education context is important. Many change projects describe staff within their organisation as suffering from initiative fatigue: the technology you are trying to implement may be just one of many changes taking place internally and externally.
The perception that new tools or approaches represent a generalised shift to top-down control of teaching and learning can also distort how staff understand and respond to the case for adopting new tools and working practices.