Higher Education Research and Development Society of Australasia
Since widely-available generative artificial intelligence (genAI) exploded in late 2022, educators have been frantically working behind the scenes to ensure that their assessments remain fit for purpose.
How can we be sure that learning is occurring when many answers are just a few short prompts away?
It’s not hard to see why so much attention has been dedicated to attempting to secure assessments. But what is the cost of trying to secure our assessments? And what are we really trying to achieve by doing so?
The arguments put forward here come from a debate held at the Biomedicine Discovery Institute (BDI) education forum at Monash University. Tim, Georgina and I were on the affirmative side of the debate for the statement “Securing assessment is a waste of time and effort”.
It was certainly the trickier side of the debate, but what emerged were some interesting provocations about the role of assessment and its place in the educational ecosystem. So, while this blog post largely focuses on one side of the story, it is probably also the side you’ve heard less about. We recognise that assessment often requires some combination of restrictions and methods of observing and enforcing them; our purpose here, though, is to defend the position we took in the debate and to provoke broader thinking about assessment in the age of genAI.
We don’t assess students for the sake of assessing. A primary purpose behind assessing students is to help us understand whether students are developing the knowledge, skills, and professional capabilities they will need to thrive in the future. Another is to support that development. But if security of assessment dominates our thinking, it’s only natural for important aspects of these guiding purposes to take a back seat.
So, what exactly do we mean by securing assessment? Phill Dawson has defined assessment security as “adversarial, punitive and evidence-based”, involving “measures taken to harden assessment against attempts to cheat. This includes approaches to detect and evidence attempts to cheat, as well as measures to make cheating more difficult”1. In practice, this means approaches such as verifying identity, invigilation, and checking for plagiarism, unauthorised engagement with third parties, or indeed the use of genAI.
Securing assessments is increasingly becoming a poor use of our educators’ limited time. At the end of the day, do we really want our educators to be policing learning?
As Professor Danny Liu puts it:
“Academics are teachers, not police… we want to verify whether a student is learning, not whether they’re cheating - because if they’re not learning, they’re essentially cheating themselves.”
Every hour an educator spends attempting to secure an assessment is time taken away from designing new learning experiences, engaging with students, or working on other measures to help students learn. The cost of securing assessments is simply too high. And to put it bluntly, it’s a futile battle that we can’t possibly win. Or, if we can, we will have destroyed much that is valuable in our education.
So, perhaps some reframing is in order. Rather than focusing on how secure our assessments are, perhaps the question we should be asking ourselves is: how can we collect meaningful evidence that learning is occurring?
We’ve reached a point where the use of genAI is embedded across many industries. So, to make the university experience fit for purpose in 2026 and beyond, we need to address the reality of widespread genAI. Now that genAI is part of the educational ecosystem, we must find ways to work with it or, at least, learn about it, rather than fixating on controlling when and how it’s being used.
In many ways, the past few years have been characterised by an unwinnable arms race. As AI continues to improve, and as students develop more complex and integrated practices around it, educators are asked to try harder and harder to ensure that their assessments are secure. This is not a fight that we can win, and some would argue it's not even a fight worth winning. With every new security measure educators introduce, new workarounds emerge.
GenAI detection tools are certainly not the answer. They’ve been shown to be at best unreliable and at worst dangerous2. Many students have been falsely accused of AI use in their assessments, an accusation that is impossible to disprove.
Beyond detection, we have surveillance and invigilation. Relying on invigilated, time-bound settings risks further eroding trust with students, while also narrowing the scope of learning that students can demonstrate.
Where has this focus on assessment security and genAI detection got us? It’s eroding the trust between institutions, educators and students. And it comes back to the fundamental question of why we are putting so much effort into securing assessments.
Once students enter the workforce, they will largely have autonomy in how they choose to use genAI. It is unlikely that anyone will be checking which AI tools they’re using or restricting their use of these platforms. So, if we want our students to be equipped to use AI ethically once they enter the workforce, we need to provide them with opportunities to practise this behaviour at university. You could argue we’re actually doing students a disservice if we don’t help them learn to use genAI in academically and ethically appropriate ways.
Now, let’s consider the dark side of the surveillance culture that is proliferating in our attempt to secure assessment.
Professionalism and integrity are vital traits that we want our students to develop. But it’s impossible for students to develop these traits while being watched.
To cultivate professionalism, we need to create an environment where students are guided on how to behave ethically; and to trust them to do so when no one’s watching. If we were to secure all assessments, we would actually be removing opportunities for students to behave with integrity.
So far, we’ve seen a lot of issues with securing assessment, which raises the question: if securing assessment is so problematic, what should we be doing instead?
We want our assessments to provide evidence that learning has taken place. So, if we can reframe our thinking from ensuring our assessments are secure, to ensuring that learning has taken place, then it opens up a number of opportunities.
Instead of focusing, as we often do in higher education, solely on the product of the assessment, we can lean into seeing assessment as a journey – and collecting evidence along the way. Through designing assessments where we observe the processes used to work towards products, we can generate evidence of learning over a period of time.
How? Through talking to students, providing opportunities for collaboration and iteration, and, of course, reflection.
If we can see something of how students go about their work, and if they can explain the decisions they’ve made over the course of putting together an assessment, while also justifying these decisions and responding to feedback, we can have confidence that learning has indeed taken place. In such an assessment, whether a student has used AI or not becomes secondary to the more fundamental question: Did the student engage in authentic learning during the assessment task?
Ask yourself now: How sure are you that your students are engaging in authentic learning during your assessment tasks?
If you’re not sure they are, consider how you might be able to generate and collate evidence of learning through the processes your students go through while putting together their assessment work.
Collecting evidence of learning that extends beyond the final product is no small feat. It’s difficult and time-consuming. But we need to find a way to make it a reality. We cannot cling to a model of learning and assessment that is no longer serving educators or our students.
If universities are serious about preparing students for the complexity that the future will bring, we must find ways for educators to meaningfully encounter the work of students, and guide their learning processes.
So instead of focusing on securing assessments (a battle we will never win), let’s redirect our efforts to designing learning experiences and environments that facilitate meaningful encounters between students and educators.
If the ultimate goal of assessment is to support and verify meaningful learning, then security is the wrong framing. We don’t need surveillance; we need evidence of learning.
So, how might you develop assessments where you can gain insights into what your students know and how they are thinking during the task?

Banner image source: Microsoft Copilot
The HERDSA Connect Blog offers comment and discussion on higher education issues; provides information about relevant publications, programs and research and celebrates the achievements of our HERDSA members.
HERDSA Connect links members of the HERDSA community in Australasia and beyond by sharing branch activities, member perspectives and achievements, book reviews, comments on contemporary issues in higher education, and conference reflections.
Members are encouraged to respond to articles and engage in ongoing discussion relevant to higher education and aligned to HERDSA’s values and mission. Contact Daniel Andrews Daniel.Andrews@herdsa.org.au to propose a blog post for the HERDSA Connect blog.