Why is this important?
Clarity about criteria
Students undertaking an assessment need to understand what is required of them and the criteria against which their performance will be judged.
This sounds self-evident, but many learners find it difficult to derive this information from the available sources, such as course and module handbooks. Even where criteria and grade descriptors are provided, they may be couched in academic language that requires some skill to interpret.
It helps to have a common template for assignment briefs, so that the essential information is presented, in plain English, in a consistent way for each assignment.
Feedback for learning: a comprehensive guide, an e-book by FeedbackFruits, gives useful guidance on designing rubrics that support active learning. It explains how to create a rubric that is relevant, unambiguous and serves as a bridge to future performance by avoiding traps that can result in a rubric becoming a task-focused checklist.
By implementing this principle you will, however, take this a step further and engage students in activities that allow them to create meaning from the criteria and engage in discussions about quality. You may ask them to rewrite criteria in their own words and compare their understanding with that of others. Later they may even define the criteria by which they think their work should be judged.
The role of feedback
The capacity to recognise and interpret feedback, and to use it to drive further improvement, is key to developing the ability to recognise what good looks like.
Feedback provides information about where a learner is in relation to their learning goals so that they can evaluate progress, identify gaps or misconceptions and take action that results in enhanced performance in future.
Feedback should be constructive, specific, honest and supportive.
Effective feedback shouldn’t only focus on current performance or be used to justify a grade. It should also feed forward in an actionable way so the learner understands what they need to focus on in order to improve.
Feedback is conventionally thought of as a dialogue between student and tutor, but it can come from a variety of sources. Self and peer feedback can be equally valuable, and we discuss this further in relation to principle 4, develop autonomous learners.
Ipsative assessment is the idea that, rather than evaluating a learner's performance against an external benchmark, we simply look at whether they have improved on their own previous performance. This is, after all, arguably the most direct measure of learning gain.
At University College London this approach is proving useful with master’s level students and those studying MOOCs.
To grade or not to grade?
Defining what good looks like does not have to mean assigning a grade.
There is a school of thought that grading is counter-productive and diminishes students' broader curiosity about a subject. Read more about ungrading in Jesse Stommel's article.
Diverse assessment formats and different customs and practices across disciplines may distort marks. Even without these complications, experts question whether it is possible to distinguish the quality of work with the precision implied by percentage marks.
Higher education does indeed operate without grading in some areas. We recognise the value of a doctorate without questioning whether the holder achieved a few percentage points more or less than one of their counterparts.
The absence of a grade can oblige learners to focus on their feedback and encourage deeper reflection.
Technology can help
Some of the ways technology can help:
- Making information about the assessed task and criteria available in digital format helps ensure it can be accessed readily on a range of devices. It also helps with applying a consistent template
- Having the information in digital format allows cross-referencing to a wide range of examples
- Use of digital tools makes it easier to give ‘in-line feedback’ so students can see to which section of their work a feedback comment refers
- Digital tools can support better dialogue around feedback
- Storing feedback in digital format increases the likelihood that students will refer back to the feedback in future
- Digital tools that support self, peer and group evaluation are all means of actively engaging students with criteria and the process of making academic judgement. You can find examples of this in the section about putting each of our principles into practice throughout this guide
Putting the principle into practice
Guidance and templates
Sheffield Hallam University makes all of its guidance and templates available via its assessment essentials website.
Manchester Metropolitan University (MMU) has developed guidance on assessment grading, criteria and marking (pdf). This helps ensure consistency across the organisation. MMU uses an app to make this information available to students via any mobile device.
The University of Reading's A-Z of assessment methods (pdf) can help you choose an appropriate type of assessment.
Adaptive comparative judgement
Assessing a piece of work objectively against a rubric is not easy even for an experienced evaluator.
Comparative judgement works on the principle that people make better judgements if they compare two items, and decide which is better, than if they try to evaluate something in isolation.
Repeated comparison of pairs ultimately allows the items to be rank ordered. This usually takes nine to twelve rounds of comparison.
Adaptive comparative judgement (ACJ) tools automate the process of presenting a group of assessors with pairs to compare. Pairs are initially selected at random. Comparisons between clearly strong and clearly weak items are resolved quickly, so the algorithm can then select the pairs that will most improve the reliability of the ranking, concentrating effort on items that are closely matched.
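The core idea of ranking by repeated pairwise judgement can be sketched in a few lines. This is a minimal illustration, not how any particular ACJ product works: the item names, the `judge` function and the simple win-count ranking are all assumptions for the example, and real ACJ tools use a statistical model and adaptive pair selection rather than random pairing.

```python
import random

def rank_by_comparison(items, judge, rounds=12, seed=0):
    """Rank items from repeated pairwise judgements.

    `judge(a, b)` returns whichever item the assessor prefers.
    Each round the items are shuffled and paired off; wins are
    tallied and items are finally ranked by total wins.
    """
    rng = random.Random(seed)
    wins = {item: 0 for item in items}
    for _ in range(rounds):
        pool = list(items)
        rng.shuffle(pool)
        # pair adjacent items; with an odd pool the last item sits out
        for a, b in zip(pool[::2], pool[1::2]):
            wins[judge(a, b)] += 1
    return sorted(items, key=lambda item: wins[item], reverse=True)

# toy example: scripts with hidden quality scores; the "assessor"
# always prefers the higher-quality script
scripts = {"A": 72, "B": 55, "C": 91, "D": 40}
ranking = rank_by_comparison(
    list(scripts),
    judge=lambda a, b: max(a, b, key=scripts.get),
)
print(ranking)  # best-to-worst order
```

After enough rounds the consistently preferred items rise to the top of the ranking; an adaptive tool would speed this up by choosing informative pairs instead of shuffling at random.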
Research has been undertaken using this technique to compare the evaluative judgement of staff and students. It has also been applied to using artificial intelligence (AI) to partly automate marking of complex items such as essay scripts.
The most common application is, however, in engaging students with assessment criteria and ‘learning by evaluating’ to identify what good looks like.
Using ACJ tools to provide peer evaluation and feedback can provide learners with a rich body of evidence to help them improve their work.
ACJ is being used at many universities internationally. In the UK examples include use at Goldsmiths, University of London and the University of Manchester.
The technique is explained in this presentation and session recording, which includes a case study from the University of Manchester.