Students should be able to see how marks are arrived at in relation to the criteria, so as to understand the criteria better in future. They should be able to understand why their grade is neither lower nor higher than it is.
One way to do this is to use the sentence stems: "You got a better grade than you might have done because you ..." and "To have got one grade higher you would have had to ...".
Transforming the Experience of Students through Assessment (TESTA)
What does recording grades involve?
The end point of this stage is the culmination of the marking and moderation processes: a single grade is recorded against each piece of work. In practice this can involve a number of separate tasks, such as collating the marks from different assessors (who may be marking in various tools and/or on paper), profiling the marks in order to identify a sample to be moderated, reconciling anomalies and formally approving the marks via an exam board or similar.
There are therefore many iterative relationships with the marking and feedback production stages of the lifecycle.
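As an illustration, the profiling step described above, selecting a sample of scripts for moderation, might be sketched as follows. The sampling rule used here (highest mark, lowest mark, plus a random selection from the remainder) and the function name are assumptions for the example, not a prescribed approach:

```python
import random

def moderation_sample(marks: dict[str, int], n_random: int = 2, seed: int = 0) -> set[str]:
    """Select scripts for moderation: the highest and lowest marks,
    plus a few chosen at random from the remainder."""
    ordered = sorted(marks.items(), key=lambda kv: kv[1])
    sample = {ordered[0][0], ordered[-1][0]}   # lowest and highest
    middle = [sid for sid, _ in ordered[1:-1]]
    rng = random.Random(seed)                  # seeded so the sample is repeatable/auditable
    sample.update(rng.sample(middle, min(n_random, len(middle))))
    return sample

marks = {"s001": 42, "s002": 55, "s003": 68, "s004": 81}
print(moderation_sample(marks))  # always includes s001 (lowest) and s004 (highest)
```

Seeding the random selection is a deliberate choice in this sketch: it means the same sample can be regenerated later as part of an audit trail.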
Institutional regulations will determine who records the grade, how this is verified and in which system it is stored. However, in most cases, the student record system is the definitive source of grading information.
What are we trying to achieve?
The purpose of this stage is to give each summative assignment a definitive grade that describes how well the student has met the criteria for the assignment and hence the learning outcomes.
In order to do this there has to be quality assurance of the original marking process and a procedure for reconciling any anomalies.
How might we use technology at the recording grades stage of the lifecycle and what are the benefits?
Ideally there would be a seamless workflow whereby the work of each marker would be picked up from the marking tool they used, submitted for profiling and analysis and then transferred to the system that records the final mark along with any associated audit trail.
In practice most institutions are still a long way from achieving this, and a lot of manual intervention is usually required. Some institutions have, however, developed their own marks-recording systems. EMA can provide the following benefits:
- Recording marks in digital format can help avoid transcription errors
- Storing marks in digital format can make collation and profiling easier even when a number of different systems are involved
- An online audit trail makes quality assurance processes easier
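For instance, collating marks held in different systems might look something like the sketch below. The dictionary inputs (standing in for exports from a VLE and a spreadsheet) and the idea of flagging conflicts are illustrative assumptions, not a description of any particular product:

```python
def collate_marks(*sources: dict[str, int]) -> tuple[dict[str, int], dict[str, list[int]]]:
    """Merge mark sets from several systems, flagging any student
    who has conflicting marks recorded in different sources."""
    collated: dict[str, int] = {}
    conflicts: dict[str, list[int]] = {}
    for source in sources:
        for sid, mark in source.items():
            if sid in collated and collated[sid] != mark:
                conflicts.setdefault(sid, [collated[sid]]).append(mark)
            else:
                collated[sid] = mark
    return collated, conflicts

vle_export = {"s001": 62, "s002": 71}
spreadsheet = {"s002": 17, "s003": 55}   # s002 disagrees: a likely transcription slip
merged, conflicts = collate_marks(vle_export, spreadsheet)
print(conflicts)  # {'s002': [71, 17]}
```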
What are the common problems?
As academics mark using many different tools (including paper), marks often have to be transcribed from one system or medium to another, and transcription errors, such as mixing up 1s and 7s, are extremely common.
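One common safeguard against such slips is double keying: entering the marks twice, independently, and flagging any disagreement for checking. A minimal sketch (the function name and data shapes are assumptions for the example):

```python
def double_entry_mismatches(first: dict[str, int], second: dict[str, int]) -> set[str]:
    """Flag any student whose two independently entered marks disagree
    (or who appears in only one of the two entry passes)."""
    return {sid for sid in first.keys() | second.keys()
            if first.get(sid) != second.get(sid)}

pass_one = {"s001": 71, "s002": 64}
pass_two = {"s001": 17, "s002": 64}   # 71 mistyped as 17 on re-entry
print(double_entry_mismatches(pass_one, pass_two))  # {'s001'}
```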
The problems of manual intervention are often exacerbated by academics who, not trusting that central systems can be edited as needed, keep marks elsewhere 'under their control' until all adjustments have been made and the marks have been verified. In many cases the moderation process is carried out on shared drives and by exchanging emails back and forth.
In most cases it is insufficient to simply record a mark and know that it has been adjusted as a result of moderation: there needs to be an audit trail of the actual marks before and after moderation and the reason for the change. Currently this is a weakness in EMA systems.
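The kind of record such an audit trail needs can be sketched as a simple data structure, capturing the mark before and after moderation, who made the change, and why. The field names here are illustrative assumptions, not a standard schema:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass(frozen=True)
class ModerationRecord:
    """One entry in a moderation audit trail: the mark before and after
    the change, who changed it, and the reason given."""
    student_id: str
    mark_before: int
    mark_after: int
    reason: str
    moderator: str
    recorded_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

audit_trail: list[ModerationRecord] = []
audit_trail.append(ModerationRecord("s002", 58, 62,
                                    "Second marker judged criterion 3 under-rewarded",
                                    "moderator_a"))
```

Making each record immutable (`frozen=True`) reflects the audit-trail principle that changes are appended, not overwritten.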
The ways in which systems record and store marks can cause issues for institutions whose grading schemes do not match the way the software is configured, e.g. an institution may use a letter grading scheme while its IT systems only support percentage marks. There are also concerns about the rounding of numeric marks, and the possibility that rounding the same mark twice in different systems can give an inaccurate result.
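The double-rounding concern can be demonstrated concretely. In the sketch below (which assumes half-up rounding, one common convention; institutional rules vary), a raw mark of 69.45 rounds directly to 69, but passing through an intermediate system that rounds to one decimal place first yields 70:

```python
from decimal import Decimal, ROUND_HALF_UP

def round_half_up(value, places: int) -> Decimal:
    """Round with the 'half rounds up' convention, using Decimal to
    avoid binary floating-point surprises."""
    quantum = Decimal(10) ** -places   # 1, 0.1, 0.01, ...
    return Decimal(str(value)).quantize(quantum, rounding=ROUND_HALF_UP)

raw = "69.45"
rounded_once  = round_half_up(raw, 0)                    # 69.45 -> 69
rounded_twice = round_half_up(round_half_up(raw, 1), 0)  # 69.45 -> 69.5 -> 70
print(rounded_once, rounded_twice)  # 69 70
```

A whole grade boundary can thus hinge on whether one system or two performed the rounding, which is why the order of rounding steps needs to be agreed and documented.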
What resources can help?
Read our briefing paper on the Learning Tools Interoperability specification - a way of seamlessly connecting learning applications and remote content to virtual learning environments.