In the rapidly evolving landscape of higher education (HE), the bedrock of academic integrity and ethical conduct is set out by the International Centre for Academic Integrity. This cornerstone of academic and research practice is built on the pillars of trust, honesty, fairness, responsibility, respect, and courage. As we step into the era of Generative Artificial Intelligence (GenAI), however, higher education stands at a crossroads.
The recent Advance HE report, Academic Integrity in an Era of Generative AI, paints a nuanced picture, juxtaposing practitioners' hopes and fears about AI's impact on learning and teaching in higher education. Perspectives are manifold, from the optimistic prospect of enhanced learning experiences to looming concerns about job displacement and ethical dilemmas. Policy and practice vary too, ranging from meticulous attempts to design AI out of the curriculum to differing levels of adoption and incorporation. Amidst these developments, the core values of academic integrity continue to provide an invaluable underpinning for our discussions about transparency, equity, ethical use, dialogue, skills development, adaptability, and respect for diversity in relation to these new technologies.
A critical aspect of this dialogue revolves around the role of Generative AI in assessment. The University of Limerick's (UL) recent decision to disable the Turnitin AI detector underscores the complexities involved. Like other institutions, UL must strike a delicate balance between leveraging AI's affordances and potential and preserving academic integrity. The approach of UL's Academic Integrity Unit, guided by Perkins et al.'s framework, offers an example of a thoughtful response: one that delineates expectations, encourages critical engagement with AI tools, and emphasises data protection and compliance with European standards.
Like UL, we are all seeking to uphold the integrity of assessment while embracing technological advances. The challenges, though, are not inconsiderable, not least in relation to staff and learner development and support. Large student-staff ratios; insufficient time and resources for ongoing staff CPD; an absence of embedded learner development on academic integrity and GenAI for students; and a reluctance to rethink some traditional assessment methods all add to the risks. Equally risky is the potential for teachers to adopt AI in the assessment process primarily for its efficiency gains rather than for educational reasons.
The question then arises: what staff development and support will be most sustainable and effective in the long run? For the academic and senior leadership representatives attending the European Universities' Association Annual Conference at Swansea University this April, the need for coherent institutional strategies and policies was high on the agenda. So too was open internal debate within and across professional academic learning communities. Different disciplines will need to critique the continuing relevance of the learning outcomes in their courses, and to have the courage to align graduate attributes and skills with those demanded by the AI-influenced workplaces of the future. EUA colleagues saw a need to tackle redundant and outmoded educational cultures and practices, working instead towards a higher education culture in which AI is integrated in critical and proactive ways. If we can rise to these undoubted challenges, opportunities abound. Transforming the ubiquity of AI into a golden opportunity requires concerted effort. Properly resourced, high-quality staff development, encompassing curriculum and assessment design alongside IT competencies, is essential. Equally crucial is fostering learner autonomy and confidence, empowering students to navigate the complexities of AI with integrity.
The advent of Generative AI in HE therefore presents a paradoxical landscape, one in which doing nothing will prove fatal. The only constant will be change: 'panta rhei' (everything flows). Navigating this terrain will require a steadfast commitment to academic integrity, tempered by a willingness to embrace innovation. As we stand at this crossroads, it is our agency, the choices we make, that will shape the future of education and determine the answer to the question in our title.
Dr Angélica Rísquez is Lead Educational Developer (Learning Technologies and Learning Analytics Lead) at the Centre for Transformative Learning, University of Limerick (UL), Ireland. She has over 20 years' experience in higher education, is a Senior Fellow of SEDA, and won the KBS Pedagogical Awards 2024 (UL) with the project 'STELA Live: Learning Analytics for Student Success'. She has extensive experience in digital education, teaching in online and blended environments, and academic integrity. She is currently Secretary of the Active Learning SIG in the Association for Learning Technology (ALT) in the UK.
Professor Mark O’Hara is a Principal Fellow and Senior Consultant (Education) at Advance HE. He has over 30 years’ experience in higher education in a wide variety of roles including programme leadership, Head of Department, Head of Student Experience, Associate Dean and Associate PVC. Mark is both a National Teaching Fellow and a winner of the UK’s Collaborative Award for Teaching Excellence (CATE). Mark is also Vice Chair of the European Association of Institutional Research (EAIR).
Find out more about Advance HE's Member Project 23-24 - Generative AI: Beyond Assessment which takes us beyond current debates on AI and assessment to consider in-depth other ways the technology will impact higher education and the communities we serve.