Why is this important?
"Feeling connected and valued really helps to improve learning as well as engagement."
Danny Liu, University of Sydney
"Principles that foster human connection are vital. How we develop relationships and maintain connections with and between peers, have become vital questions. Creating a sense of belonging in the virtual classroom has far reaching effects."
Ewoud de Kok, FeedbackFruits
"Collaboration is at its best when students truly believe they will have a greater chance of achieving their academic goals by working with peers."
Kareem Farah, Modern Classrooms Project
This ties in with principle number two, ‘support the personalised needs of learners’. Involving students in discussions around assessment practice helps make it more inclusive and allows students who may face barriers, for example due to a disability, to voice their concerns.
Institutions commonly focus on developing learners’ study skills, graduate attributes and digital literacies but none of these fully addresses student understanding of, and engagement in, the assessment process.
Study skills resources tend to help students develop their assessment ‘technique’ through essay writing, presentations and exam preparation, rather than understanding the nature and purpose of assessment and feedback practice.
Engaging students as active participants in making academic judgement develops their assessment literacy and gives assessment a sense of purpose, which is motivating.
Participating in discussions about assessment approaches, criteria and standards can help develop students’ academic judgement as well as skills important for employability. Self and peer review activities are the most active way of allowing students to practise making evaluative judgements.
Developing academic practice
It is just as important for staff to reflect on their assessment and feedback practice as it is for students to strive to improve their own performance.
New findings from education research and advances in technology may offer improved ways of implementing good practice even if the underlying pedagogy is still sound.
"We didn’t need to do anything different but we did need to do things differently."
Elizabeth Hidson, University of Sunderland
Course and module teams may spend a lot of time discussing approaches at the curriculum design stage only for assessment and feedback to be left to personal choice.
"In a competitive and time-poor working environment, investing in developing new and challenging feedback practices can be the last item on the list."
Dr Ian Davis, University of Southern Queensland
Feedback is the area that most often takes place in a ‘black box’. There may be little or no team discussion, leading to inconsistency of approach and variation in the quality and quantity of tutor feedback.
Tutors who haven’t had the opportunity to discuss feedback practice with colleagues may produce feedback that is skewed towards either praise or correction, or that is short-term and focused on the task in hand rather than developmental.
"New tutors often have a limited feel for what good feedback looks like or what standard of feedback, in terms of length and specificity, is expected. They may concentrate on proving their superior knowledge to the student rather than focussing on improving the students’ work in future."
Transforming the Experience of Students through Assessment (TESTA)
Despite rigorous quality processes, marking and grading may still be subjective. In the absence of whole team approaches, tutors may develop tacit and personalised standards that lead to inconsistency and inequality for learners.
Listening to the student voice can reveal how the lived experience of assessment and feedback practice matches the expectation of course designers.
Some of the ways technology can help
Our digital experience insights surveys can play a part by showing how your students and staff are using the technology you offer, what is making a difference to their learning and working experiences and where improvements can be made.
Digital tools supporting self, peer and group evaluation can help develop student assessment literacy.
Digital storage of marks and feedback can simplify analysis to identify anomalies. Analytics about the types, quantity and timeliness of tutor feedback can stimulate discussion about what is most appropriate in each context and the equity of the learning experience across an institution.
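By way of illustration, the short Python sketch below shows the kind of per-tutor summary that digitally stored feedback makes easy to produce. The file name and column names are hypothetical placeholders for whatever export your own assessment platform or VLE provides.

```python
# Illustrative sketch only: the file name and columns ("tutor", "submitted_at",
# "feedback_at", "feedback_text") are hypothetical and would need mapping to
# whatever your VLE or assessment platform actually exports.
import pandas as pd

df = pd.read_csv("feedback_export.csv",
                 parse_dates=["submitted_at", "feedback_at"])

# Turnaround time in days and a rough proxy for feedback quantity
df["turnaround_days"] = (df["feedback_at"] - df["submitted_at"]).dt.days
df["word_count"] = df["feedback_text"].fillna("").str.split().str.len()

# Per-tutor summary to prompt discussion about consistency and equity
summary = df.groupby("tutor").agg(
    assignments=("turnaround_days", "size"),
    median_turnaround_days=("turnaround_days", "median"),
    median_word_count=("word_count", "median"),
)
print(summary.sort_values("median_turnaround_days"))
```

Even a crude summary like this can surface large differences in turnaround time or feedback length that are worth a team conversation.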
Online communities of practice permit sharing of experience across a wider network than a single course or institution. Sometimes it can be easier to share ideas and get an objective view on a hard-to-solve problem with colleagues who are removed from the working practices and day-to-day politics of your own institution.
Putting the principle into practice
Find out more about our change agent network supporting student-staff partnerships across the UK.
University College London (UCL) is one of the active participants in the network. The university has established a central team of digital assessment advisors and runs ‘assessment mythbusting’ town hall events and assessment ‘hackathons’. UCL has also appointed a group of student assessment design advisors to help develop and pilot new forms of assessment. Find out more about UCL's partnership approach to assessment transformation.
Assessing the process of thinking
We tend to focus on immediate peers when we talk about learning communities, but academic development is also taking place at scale. Across France, open standards and data science are being applied to improve assessment practice.
The French Ministry of Education uses digital technology to develop more authentic ways to measure traditional competencies and 21st-century skills.
These assessments don’t only provide information about whether the answer is correct. They capture a rich set of data that reveals the students’ thought processes.
Solving problems in mathematics and science requires students to use cross-curricular skills such as calculating, modelling, and scientific reasoning. Until recently, it has been difficult to measure these skills because traditional maths and science assessments contain test items that are scored based on a student’s final answer.
Using an extension of the open standard QTI (question and test interoperability), the ministry is developing PCIs (portable custom interactions) to deploy authentic assessments that measure skills such as creativity, problem solving, collaboration and critical reasoning. The questions cover a wide range of types, including game-like situations and interaction with chatbots to measure creativity.
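As a rough sketch of how a bank of such items might be explored programmatically, the Python below inventories the interaction types used across a folder of QTI item files. It assumes QTI 2.1 element and namespace names, and that PCIs typically appear as customInteraction elements; the folder name is a placeholder, and tag names would need adjusting for other QTI versions.

```python
# Illustrative sketch: inventory the interaction types used across a bank of
# QTI item files. Namespace and tag names follow QTI 2.1 conventions and the
# folder name is a placeholder; adjust both for the QTI version you use.
from collections import Counter
from pathlib import Path
import xml.etree.ElementTree as ET

QTI_NS = "http://www.imsglobal.org/xsd/imsqti_v2p1"  # QTI 2.1 namespace

counts = Counter()
for item_file in Path("item_bank").glob("**/*.xml"):
    root = ET.parse(item_file).getroot()
    body = root.find(f"{{{QTI_NS}}}itemBody")
    if body is None:
        continue
    for element in body.iter():
        tag = element.tag.split("}")[-1]
        if tag.endswith("Interaction"):  # e.g. choiceInteraction, customInteraction
            counts[tag] += 1

for interaction, n in counts.most_common():
    print(f"{interaction}: {n}")
```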
Researchers make sense of a vast amount of data derived from these digital tests by defining patterns based on what knowledge and skills are involved in answering each question and common types of error.
In simple terms, there is a difference between the activity pattern of a student who has conceptual understanding and knows how to apply it in context and another who achieves the same answer via trial and error or guesswork.
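As a deliberately toy illustration of that idea, the Python sketch below applies two crude heuristics to a logged sequence of actions. The event names, thresholds and classification rules are invented for illustration; the French analysis itself relies on far richer process data and statistical modelling.

```python
# Toy illustration only: event names, thresholds and rules are invented.
# Real process-data analysis uses far richer logs and statistical modelling.
from dataclasses import dataclass

@dataclass
class Attempt:
    actions: list[str]      # ordered event log for one student on one item
    answer_correct: bool

def classify(attempt: Attempt) -> str:
    submissions = attempt.actions.count("submit_answer")
    used_model = any(a.startswith("build_model") for a in attempt.actions)
    if attempt.answer_correct and used_model and submissions <= 2:
        return "pattern consistent with conceptual understanding"
    if attempt.answer_correct and submissions > 4:
        return "correct answer, but pattern suggests trial and error"
    return "needs human review"

# Example: a student models the problem, calculates, then submits once
log = Attempt(
    actions=["open_item", "build_model:graph", "calculate", "submit_answer"],
    answer_correct=True,
)
print(classify(log))  # -> pattern consistent with conceptual understanding
```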
Large-scale research, such as this French example of educational data mining [2], has the potential to deliver valuable insights to help learning designers.
More immediately, the openly shared examples of authentic question types can provide inspiration and a rich reference source for others to use.
If you thought that item banks and automated marking could only be used with very basic multiple-choice questions (MCQs), think again.
View a set of presentation slides and watch the session recording on this case study.
- 1 Video-enhanced dialogic assessment (VEDA) of teaching practice portfolios: the dialogic construction of teachers’ standards evidence in an online space - https://sure.sunderland.ac.uk/id/eprint/13898/1/BERA_VEDA_Poster_2021.pdf
- 2 When didactics meet data science: process data analysis in large‑scale mathematics assessment in France - https://largescaleassessmentsineducation.springeropen.com/track/pdf/10.1...