A paper from the STEM Annual Conference 2012.
Referring to an assessment system adopted by a Built Environment school in 2009, this paper presents a critical review of the strengths and weaknesses of assessing undergraduates' dissertations. Over the last two years a series of changes has been made to the module, and the impact of these changes has been analyzed and reviewed; this paper presents part of the findings of this completed project. Overall, the use of an electronic system to streamline the process of feedback and assessment for learning was very effective. Particular concerns were raised, for example, about improving the quality of intermediate and final feedback and about minimizing conflicting comments between markers. These are indeed the most difficult areas to address, as the timing of 'hands-on' or 'hands-off' supervision (that is, engagement in assessment for learning) depends both on the commitment of the student and on the quality of supervision provided by the individual lecturer. This leads to the question of whether the assessment is primarily a self-directed learning process, or whether the school should instead expect to get the most out of students through structured learning within the module.