
Sector-Wide Analysis of Free Text Comments

08 Jul 2022 | Dr Stuart Grey

Dr Stuart Grey, founder of Student Voice, discusses how Student Voice successfully analysed the free-text data generated in Advance HE’s annual nationwide UKES, PTES and PRES surveys.

Student Voice and Advance HE set out in early 2021 to see if it was possible, using the latest machine learning tools, to label (or “code”) all of the free-text comments generated by the nationwide UK Engagement Survey (UKES), Postgraduate Taught Experience Survey (PTES) and Postgraduate Research Experience Survey (PRES) that Advance HE conducts on behalf of the 104 UK universities that opted into the project.

The machine learning models developed over the course of the project allowed us, for the first time, to show institutions specifically what topics undergraduate, postgraduate taught, and postgraduate research students were talking about when they were discussing both the positive and negative aspects of their student experiences.

As well as the automated labelling/coding, we were able to generate individual analysis reports for each of the institutions taking part. These results also allowed institutions to benchmark the themes in their own students’ free-text comments against those at other institutions, at the institutional, course, and demographic levels, all without sharing any of their students’ comment data with any other individual institution.

Make better use of student comments

The project initially set out to answer the question, “How can we use student comments to improve teaching and the student experience?”. A key issue in answering this question is determining which of the areas of teaching and the student experience highlighted by students are problems within individual institutions and which are part of wider trends across the sector. The same applies across subject and demographic dimensions, and a key aim of our work was to enable institutions to see, for example, whether the issues raised by female engineering students at their institution were echoed by female engineers across the sector. This naturally led us to design the reporting outputs to highlight discrepancies between cohorts such as these, since these comparisons are of most use to institutions.

The Survey Data

For this project, Student Voice looked at all of the free-text comment data generated through the 2021 runs of the UKES, PTES and PRES surveys. In total, 104 UK universities took part in one or more of the surveys in 2021, and the dataset to be analysed totalled over 250,000 comments. An emphasis was put on analysing the responses to the general positive and general negative free-text questions, which made up the vast majority of responses, as this maximised our ability to compare across surveys, across institutions, and with historical data in the future.

How do we build an automatic classifier?

In order to classify and label/code this amount of data, we have to leverage the current state of the art in natural language processing (NLP), a subset of machine learning (ML) focused on processing written language. The models we developed are “supervised” models, which at a fundamental level means that the model learns from examples it is fed; more on this below.
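As an illustration of what learning from text/label pairs looks like in practice, here is a minimal sketch of a supervised text classifier. The library (scikit-learn), the example comments, and the label names are all illustrative assumptions rather than our production pipeline.

```python
# A minimal sketch of supervised text classification using
# scikit-learn; the comments and labels are invented examples.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Training data: text/label pairs, the examples the model learns from.
texts = [
    "The lecturer's feedback on my essays was really detailed",
    "I never heard back about my coursework marks",
    "The library was closed whenever I needed it",
    "Study spaces were easy to find on campus",
]
labels = ["feedback", "feedback", "facilities", "facilities"]

# Vectorise the text and fit a simple linear classifier.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(texts, labels)

# The trained model can now code a comment it has never seen.
print(model.predict(["I got no feedback on my assignments"]))
```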

Another interesting aspect of these ML models is that they are not trained from scratch but are built on top of huge pre-trained models of the English language. Importantly for our use case, these underlying models capture the concepts behind each word, allowing the model to understand the similarity between words even when there is no direct match. This is an incredibly useful feature when students may use myriad different ways to talk about the same fundamental idea.
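The sketch below shows this similarity property in action. Note that the library (sentence-transformers) and the model name are assumptions chosen for illustration; the post does not specify which underlying language model we built on.

```python
# Sketch of how a pretrained language model captures similarity
# between different wordings of the same idea; the library and
# model name are assumptions, not the production setup.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")

# Two comments about the same underlying issue with no shared
# keywords, plus one unrelated comment.
a = model.encode("My tutor never replies to emails")
b = model.encode("Staff are impossible to get hold of")
c = model.encode("The gym equipment is excellent")

# Cosine similarity: a and b should score much higher than a and c.
print(util.cos_sim(a, b), util.cos_sim(a, c))
```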

Training the Models

So how do we teach the model to label/code the comments? As we are using a supervised machine learning approach, we have to supply it with examples of text/label pairs. To make sure our classifiers can handle both short comments and the sometimes incredibly long comments students make, we split each comment into its constituent sentences and label each one separately. We also allow each individual sentence to have multiple labels if required before rolling them up into the parent comment. All of this ensures that we capture every issue raised in any given comment.
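A minimal sketch of this split, label, and roll-up step is below; the naive sentence splitter and keyword-based labeller are simplified stand-ins for the trained classifier, and the label names are invented.

```python
# Sketch of sentence-level labelling: split a comment into
# sentences, give each sentence (possibly several) labels, then
# roll the labels up into the parent comment.
import re

def split_sentences(comment: str) -> list[str]:
    # Naive splitter for illustration; a production system would
    # use a proper NLP tokeniser.
    return [s.strip() for s in re.split(r"(?<=[.!?])\s+", comment) if s.strip()]

def label_sentence(sentence: str) -> set[str]:
    # Stand-in for the trained classifier: a single sentence may
    # carry more than one label.
    labels = set()
    if "feedback" in sentence.lower():
        labels.add("assessment_feedback")
    if "online" in sentence.lower():
        labels.add("online_learning")
    return labels

comment = "Feedback on assignments was slow. The online lectures were hard to follow."
sentence_labels = [label_sentence(s) for s in split_sentences(comment)]

# Roll the sentence-level labels up into the parent comment.
print(set().union(*sentence_labels))  # {'assessment_feedback', 'online_learning'}
```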

The question then becomes: how many human-labelled comments do we need? The answer is a lot! For this project, over 70,000 comments were manually labelled by a team of labellers, with cross-checking between labellers on overlapping datasets to ensure consistency. The result is what is termed a “gold standard” human-labelled training dataset.
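The post does not name the metric used in that cross-checking, but inter-annotator agreement scores such as Cohen’s kappa are a standard way to quantify consistency between labellers coding the same overlapping comments; the sketch below uses invented labels.

```python
# Measuring agreement between two labellers on the same ten
# sentences with Cohen's kappa (scikit-learn); labels are invented.
from sklearn.metrics import cohen_kappa_score

labeller_a = ["feedback", "workload", "feedback", "facilities", "workload",
              "feedback", "facilities", "workload", "feedback", "facilities"]
labeller_b = ["feedback", "workload", "workload", "facilities", "workload",
              "feedback", "facilities", "workload", "feedback", "feedback"]

# Kappa of 1.0 is perfect agreement; 0.0 is chance-level agreement.
print(cohen_kappa_score(labeller_a, labeller_b))
```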

The reason we go to all this trouble is that these models can only recognise what they have already seen, so a broad range of human-labelled inputs is absolutely vital to ensuring the models’ accuracy.

Validating the Models

Given that these ML classifiers are fundamentally statistical models, they are never going to be perfect, so how do we measure their accuracy? At Student Voice, the accuracy of the models is paramount, so we validate them using an approach called “cross-validation” where, for example, we hold back 20% of our 70,000 human-labelled comments and train the models on the remaining 80%. Once the models are trained, we can test their performance against the 20% of human-labelled comments that the models have never seen, giving a rigorous view of how accurate they are.
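The sketch below shows a single 80/20 hold-out split of this kind, assuming scikit-learn; the tiny repeated dataset stands in for the 70,000 human-labelled comments. Strictly, full cross-validation rotates which 20% is held out so that every comment is tested exactly once; the single split here corresponds to one fold.

```python
# A toy 80/20 hold-out validation, assuming scikit-learn; the
# dataset is invented and repeated so the split has enough rows.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline

texts = [
    "Feedback on my essays was quick and detailed",
    "I waited months for my coursework marks",
    "The library opening hours were too short",
    "Plenty of quiet study space on campus",
] * 5
labels = ["feedback", "feedback", "facilities", "facilities"] * 5

# Hold back 20% of the labelled comments that the model never sees.
X_train, X_test, y_train, y_test = train_test_split(
    texts, labels, test_size=0.2, random_state=42, stratify=labels
)

# Train on the remaining 80%, then score only on the unseen 20%.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(X_train, y_train)
print(accuracy_score(y_test, model.predict(X_test)))
```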

Outcomes and Next Steps

The outputs generated from this project allowed over 100 universities to better understand the issues their students were facing. What’s more, the data we generated and the consistent set of models we trained on sector-wide data allow us to make useful comparisons between institutions without divulging sensitive data. This has allowed us to start working with institutions directly to analyse all of their free-text data, both current and historical, across all types of surveys, from the NSS and Pulse surveys to module evaluation questionnaires (MEQs) and student experience surveys. This has in turn made our models and reporting more effective, and we look forward to helping more institutions make sense of their free-text survey data and ultimately improve teaching and the student experience at their institutions.

 

Dr Stuart Grey is the Founder of Student Voice and a Senior Lecturer at the University of Glasgow. You can find out more about his work here.    



