Bias exists in numerous guises within higher education, and institutions continue to tackle this as part of wider efforts to address systemic inequalities. Whether through awareness-raising, unconscious bias training, or implementing new policies, it is important that these efforts are felt within all corners of an institution. Both students and staff must be impacted by the reduction or mitigation of bias on an individual level, and feel assured that they are receiving the fairest possible learning or employment experience. However, uncovering exactly where this bias exists can be a difficult and complex process.
Literature reviews on bias in higher education remain sparse, and Advance HE recognised a need to investigate the prevalence of bias within lesser-explored topic areas, such as teaching and learning. In the first of a series of three literature reviews, Advance HE has today published a review of bias within assessment and marking practices, alongside a presentation of ‘what works’ and current good practice initiatives put in place by institutions to tackle bias in this area. Two further literature reviews, exploring bias in the curriculum and online pedagogy, and bias in decision-making, will be published in 2021.
The topic of bias in assessment and marking practices continues to be contentious for institutions, namely because it is difficult to ascertain exactly where bias lies. The practice of anonymous marking is one such example, and several recent studies have been conducted into the differences in bias shown towards demographic student groups when using anonymous marking practices versus not using them. To date, few differences have been discovered and results among researchers remain inconclusive.
In any case, students’ own perceptions of fairness seem to outweigh the inconclusive evidence of bias in non-anonymous marking, leading to demands that institutions adopt anonymous marking practices regardless. Previous Advance HE research conducted in 2008 and 2011 highlighted that BME students’ main cause of dissatisfaction with assessment and feedback, compared to white students, was a perception of unfair assessment and a lack of transparency in marking arrangements. In the 2020 NUS results, the difference was again clear: 67.4% of BAME students agreed with the statement ‘marking and assessment has been fair’ compared to 74.9% of white students.
As there is little evidence of bias in anonymous marking, some may question whether institutions were instead swayed by students’ perceptions of fairness, in turn leading to a domino effect of institutions adopting the practice in the 19/20 academic year. A participant in a 2008 study by David Brennan on student anonymity in the assessment of written summative work encapsulated this, explaining that their institution proceeded with anonymous marking simply because it was ‘difficult to justify not doing it’: the risk of bias occurring without it was too great to ignore. This is perhaps why, after consulting with their students through focus groups and surveys, proceeding with anonymous marking ultimately seemed like the sensible – and fairest – option for institutions.
The anonymous marking policies adopted by institutions for the 19/20 academic year varied in their application, and some were more nuanced than others. But institutions were mostly consistent in ensuring that the anonymous marking policy applied only to summative work, rather than impacting the student-tutor relationship nurtured through the provision of feedback on formative work. Institutions must make an effort to ensure their anonymous marking processes do not become entangled with the feedback process, and that feedback remains personal and meaningful for students’ learning. The need for anonymity must be carefully considered, and institutions should be as transparent as possible if, and when, implementing new marking practices.
With the devastating impact of Covid-19 continuing to unfold, transparency in assessment and marking processes is essential. Following the negative side effects of Ofqual’s grades standardisation algorithm, rolled out earlier in 2020, it is understandable that students may feel increasingly sceptical about automated processes. While alternative, automated assessment approaches are gaining interest as a way to mark large volumes of work with minimised instances of bias, these – just like anonymous marking – will need to be consulted on and/or piloted with students. Whether or not there is evidence of bias, perceptions of fairness will continue to tip the balance in decision-making.
Hannah Borkin is a researcher at Advance HE, and has led on numerous mixed-method research projects pertaining to equality, diversity and inclusion.
Unconscious Bias Literature Review: Bias in Assessment