Evaluating a change involving technology can be challenging for many reasons, from the number of variables involved to the ‘Hawthorne Effect’ (explained in the cost/benefit section). For mobile learning these complexities are often heightened by the added difficulty of evaluating across a variety of contexts.
'A major task for educational evaluation is to identify and analyse learning within and across contexts. For mobile learning, the interest is not only in how learning occurs in a variety of settings, but also how people create new contexts for learning through their interactions and how they progress learning across contexts.'
Vavoula and Sharples (2008)
Vavoula and Sharples argue that “in order to establish, document and evaluate learning within and across contexts” it is necessary to analyse:
- physical setting and layout of the learning space (the ‘where’)
- social setting (who, with whom, from whom)
- learning objectives and outcomes (why and what)
- learning methods and activities (how)
- learning progress and history (when)
- learning tools (with what)
To evaluate the effectiveness of a change management initiative you need a baseline from which to work, as well as clear success criteria. Although projects often yield unexpected benefits (and come up against unexpected barriers), the key elements against which the project will be judged should be agreed and shared in advance.
Jisc has a range of resources and publications to help institutions evaluate mobile learning initiatives:
- Different routes to evidencing value – blog post from the curriculum design and delivery team referencing a report summarising evaluation methods and techniques
- e-learning programme and project evaluation – Glenaffric-produced and Jisc-funded resource featuring a checklist, handbook and a six-step model for evaluation.
- Exploring tangible benefits of e-learning – publication discussing various ways ‘benefits’ relating to e-learning can be conceptualised and measured.
- Guidance on learner-centred evaluation – looking at evaluation from a pedagogical point of view, this resource provides guidance on developing learner-centred evaluation questions, gathering and analysing data from learners and on ‘purposive sampling’.
- Measuring benefits – a short but useful overview on how to measure benefits of a programme or project.
Traxler (2007, pages 8-9) points out that “there are no a priori attributes of a ‘good’ evaluation of learning” but that there are, however, some “tentative candidate attributes” of what would make a ‘good’ evaluation of a mobile learning initiative. These are that an evaluation of mobile learning should be:
- Rigorous (trustworthy and transferable conclusions)
- Efficient (cost, effort, time)
- Proportionate (“not more ponderous, onerous, or time-consuming than the learning”)
- Appropriate (technology, learners, ethos)
- Aligned (to chosen medium/technology)
Using these as headings for the evaluation of a mobile learning initiative allows organisations to focus on those aspects of mobile learning that make a real and sustainable impact on the institution.