If we are serious about preparing graduates for systemic risk, climate instability and geopolitical uncertainty, then the most consequential leadership decision we make may not be what we teach but how we assess it.
Over the past year, redesigning an assessment in my final-year International Finance module forced me to confront how profoundly assessment structures shape student thinking. And more uncomfortably, how some of our inherited designs may now be misaligned with both technological reality and the commitments we make under frameworks such as the Principles of Responsible Management Education (PRME).
Assessment is not neutral. It signals what counts as legitimate knowledge, what kinds of reasoning are valued and what intellectual risks are safe to take. In that sense, assessment is epistemic leadership.
The misalignment
Finance education has traditionally rewarded certainty. Apply the model correctly. Follow the steps. Derive the conclusion.
In international finance, we teach models of exchange rate determination, capital mobility, hedging strategies and cross-border investment appraisal. These models are elegant and powerful. They rely on structured reasoning, defined variables and equilibrium assumptions.
But we are teaching them in a very different context now.
Generative AI can produce fluent explanations of uncovered interest parity or derivative hedging strategies within seconds. Fluency is no longer reliable evidence of understanding. If assessment rewards structured exposition, students – or the tools available to them – will optimise for it.
At the same time, the phenomena these models attempt to illuminate – such as currency crises – resemble what Rittel and Webber (1973) described as “wicked problems”: interdependent, evolving, resistant to tidy resolution.
And there was a second misalignment. As a PRME institution, we commit to developing responsible leaders capable of navigating systemic challenges. Yet if our assessments reward procedural certainty over relational reasoning, we risk signalling that responsibility is secondary to technical fluency.
The redesign
So I changed the task.
Instead of constructing a linear essay, students were required to build a systems map of a contemporary international finance problem and defend the logic of their relationships in a concise commentary. Where do feedback loops occur? Which factors reinforce or dampen each other? What assumptions underpin the connections? How might outcomes shift under changing conditions?
On paper, the change looked modest. In practice, it altered the nature of engagement.
Students could no longer rely solely on exposition. They had to justify why one factor influenced another. They had to articulate conditionality. They had to explain uncertainty rather than smoothing it away.
A subtle shift occurred in their language. “It depends” stopped being a prelude to hesitation and became a disciplined analytical move: “It depends on expectations” or “This stabilises in the short term but amplifies risk over time”.
They began interrogating the assumptions embedded in models rather than treating them as fixed truths.
In-class and office-hour conversations changed. Instead of asking, “Is this right?”, students asked, “Does this relationship hold?” or “What happens if this variable moves more quickly than we assumed?” The dialogue shifted from correctness to coherence.
Why this matters
Across the sector, we often debate what content to add: sustainability, digital literacy, global citizenship. But assessment design sends a more powerful signal than any module descriptor. It shapes how students approach uncertainty, how they interpret disagreement and how they experience intellectual risk.
If ambiguity is penalised, students will avoid it.
If relational reasoning is rewarded, students will practise it.
Knight and Yorke (2003) remind us that graduate capability depends on managing complexity and uncertainty, not eliminating them. The PRME principles similarly call for responsible management education that recognises interdependence and long-term consequence. If we claim to develop graduates capable of navigating systemic risk, our assessments must require them to demonstrate precisely that.
In theoretical terms, this was a move from explanation to interrogation. It was also an attempt to restore constructive alignment (Biggs, 1996). If learning outcomes promise complexity and responsibility, the cognitive demand of assessment must reflect those commitments.
The leadership question
For me, the experience reinforced that educational leadership is embedded in curriculum architecture.
A carefully redesigned task can:
- alter the quality of student thinking
- reshape classroom dialogue
- surface disciplinary assumptions, and
- prompt collective reflection on what we value in graduate capability.
In an environment characterised by technological acceleration and complex global challenges, higher education faces pressure to demonstrate both relevance and integrity. One response is defensive: tighten controls and attempt to restore previous modes of assessment. Another is developmental: redesign assessment so that the intellectual work we claim to value becomes visible again.
The latter demands leadership at module and programme level. It requires us to ask whether our assessments truly align with the reasoning contemporary graduates need: tracing interconnections, articulating uncertainty and recognising systemic consequence.
These shifts rarely begin with institutional reform. They often begin with a module leader asking a deceptively simple question: what kind of thinking does this task actually reward?
Assessment architecture shapes academic culture. It determines whether students experience uncertainty as failure or as professional reality. It signals whether interdependence is peripheral or central to disciplinary competence.
Redesigning one finance assignment will not resolve systemic global risk. But it has reshaped how my students reason about it – and how I understand the responsibilities embedded in assessment design.
If we are serious about responsible management education, assessment is not a technical afterthought. It is leadership.
Victoria Willis is Subject Group Lead for Finance at the University of Bath’s School of Management. She holds a PRME Certificate of Excellence for embedding responsible management education and focuses on systems thinking, sustainability and assessment design that develops graduates capable of navigating complexity and systemic risk.
References
Biggs, J. (1996) ‘Enhancing teaching through constructive alignment’, Higher Education, 32(3), pp. 347–364.
Knight, P. T. and Yorke, M. (2003) ‘Employability and good learning in higher education’, Teaching in Higher Education, 8(1), pp. 3–16.
Rittel, H. W. J. and Webber, M. M. (1973) ‘Dilemmas in a general theory of planning’, Policy Sciences, 4(2), pp. 155–169.