
AI and the future of student teamwork: collaboration or collusion?

24 Nov 2025 | Dr Folashade Akinmolayan Taiwo

Folashade Akinmolayan Taiwo, Director of the Centre for Research in Engineering and Materials Education, SEMS (QMUL), explores how AI is reshaping student teamwork and collaboration in higher education.

Artificial intelligence (AI) is reshaping how students learn, work and collaborate. For many in higher education, that’s both thrilling and alarming. As universities adapt their understanding of what Generative AI means for assessment and integrity, a subtler question risks being overlooked: what happens to teamwork when AI joins the group? 

The promise of AI-enabled collaboration 

In principle, AI could help student teams overcome many of the well-known frustrations of group work. Tools like ChatGPT, Gemini or Copilot can summarise sources, generate ideas or turn a scattered brainstorm into a structured plan within minutes. That productivity boost has clear pedagogical benefits. Students can spend less time wrestling with formatting or blank-page anxiety, and more time engaging in critical discussion and creative decision-making. For some students, especially those for whom English is an additional language, or who experience anxiety about writing, AI can make collaboration more inclusive by lowering communication barriers. 

Used well, AI can even enhance collaboration. A project group that treats AI as a shared brainstorming partner might use it to prototype ideas, debate alternatives and collectively evaluate outputs. In this sense, AI can act as a catalyst for reflection, prompting teams to ask, “Why did we accept or reject this suggestion?” or “What does this reveal about our assumptions?” 

The perils of partnership 

But these opportunities come with risks. The “great deskilling dilemma” (as explored by colleagues at Queen Mary University of London’s Centre for Excellence in AI in Education) looms large. If students outsource too much cognitive labour to AI, they may lose precisely the skills that group projects are designed to develop: negotiation, critical judgement, and co-construction of knowledge. 

The division of labour within teams could also shift in unhelpful ways. Those most confident with AI tools might dominate early stages of a project, leaving others as passive reviewers. This could reproduce existing inequities, giving undue power to the “AI-fluent” while marginalising peers who are less experienced or sceptical. 

And then there’s integrity. If a group can collectively generate a polished draft via AI, how do we evaluate the authenticity of their collaboration? Group work has always been tricky to assess fairly, but AI introduces an extra layer of opacity: who did what, and what was the tool’s role? 

Loss of teamwork 

At its best, student teamwork is rarely a well-oiled machine. It involves negotiation, disagreement, compromise and co-creation. These are skills graduates need in workplaces where collaboration is complex and multidisciplinary. The danger of AI is that it smooths away this productive friction. 

A team that immediately turns to ChatGPT to “get the answer” misses out on the intellectual wrestling that builds deep understanding. The temptation to automate tasks can make teamwork transactional: rather than a social learning experience, it becomes a set of parallel interactions with the same machine. 

In that sense, AI threatens not just assessment design but a foundational pedagogical principle: that learning is social, and that grappling with difference and ambiguity is essential to it. 

Training students for the AI-augmented workplace 

Ignoring AI is not an option. Graduates will enter workplaces where human-AI collaboration is routine. If higher education doesn't prepare students for that reality, we risk another kind of deskilling: one of relevance. 

So what should universities do? 

First, we need to explicitly teach AI literacy as part of teamwork and digital capabilities. Students should understand what AI can and cannot do, how to prompt effectively and how to critique and verify its output. Embedding this into group projects, rather than bolting it on as a generic skills course, makes learning contextual and meaningful. 

Second, assessment design must evolve. Instead of trying to eliminate AI use, educators can require students to document and reflect on it. Asking teams to explain how they used AI, what they accepted or rejected, and why, makes invisible processes visible and assessable. 

Third, universities should clarify expectations and policy around AI in group work. Ambiguity breeds anxiety and inconsistency. A shared understanding of “acceptable AI use” supports fairness and integrity. 

Fourth, staff development is essential. Many educators are still grappling with their own relationship to AI. Without support, the burden of redesigning AI-infused group work risks burnout or resistance. Communities of practice, training sessions and shared examples can help academics feel confident in adapting their pedagogy. 

Finally, universities should ensure equitable access. If only some students can afford premium AI tools, disparities will widen. Providers should consider licensing or institutional access to ensure fairness across cohorts. 

Rethinking what teamwork means 

The arrival of AI in group work invites a deeper question: what is the purpose of teamwork in higher education? Is it simply to divide labour efficiently, or is it to cultivate the interpersonal, reflective and ethical skills that make collaboration meaningful? 

If the latter, then our role as educators is to design learning that resists full automation and preserves the human elements machines cannot replicate: empathy, negotiation, critical debate and collective accountability. This may require reframing success. Perhaps a “good” group project in the AI era is not the most polished report, but the one that demonstrates the most thoughtful integration and critique of AI assistance. 

From policy to pedagogy 

For policymakers and sector leaders, this moment also raises strategic questions. How should quality frameworks, employability strategies and digital education initiatives evolve to recognise AI-mediated collaboration as a core graduate competency? 

If we treat AI use as misconduct, we risk pushing it underground. If we embrace it uncritically, we risk undermining the educational value of teamwork. The challenge is to occupy the space in between: to cultivate critical collaboration with AI. 

The sector now needs a shared conversation about what good collaboration looks like in an AI-rich world. How do we maintain authenticity, fairness and inclusion when human and machine are co-authors? 

The answer will not be found in a policy document or a new piece of software. It will come from experimentation, reflection and open discussion. This is exactly the kind of teamwork that universities, at their best, model every day. 

 

Folashade Akinmolayan Taiwo is a Reader in Engineering Education at Queen Mary University of London. Her primary research focuses on innovative and inclusive pedagogies that embed graduate attributes, on embedded support mechanisms that enhance the student experience of group work, and on student team dynamics in problem- and project-based learning. 

 

Join the Artificial Intelligence Symposium 2026, a presenter-led event to explore, discuss and share the latest AI-driven practices and pedagogies.


