Higher Education Research and Development Society of Australasia

Assessment has a history spanning millennia, with some of the earliest records traced to examinations in imperial China. Often, when assessment is mentioned, we think of examinations, pencil-and-paper testing halls, or perhaps an oral viva voce. These ideas, rooted in the test-taking experienced at school, reflect the traditional framing of assessment as measurement. However, these common forms of assessment are not the only way of understanding a learner's ability or proficiency, or of evidencing prior learning, and other ways of framing assessment, for example as a cultural practice, are important considerations.
We cannot talk about assessment without addressing feedback. The term feedback has evolved over time: the shift has been from viewing feedback as a tool to shape students' behaviour to viewing it as a way of consolidating information from different sources to develop one's own knowledge and enhance learning pathways. Feedback is a potent influence on learning. For this reason, every assessment should incorporate feedback that gives meaning to what has been assessed. Regardless of the type of assessment, it is essential to inform the test-taker about their strengths and areas for improvement.
While traditional assessment methods offer valuable insight into learner proficiency, AI presents opportunities to redefine these practices in both promising and challenging ways. On the challenging side, it is concerning if AI tools assume primary responsibility for assessment feedback. Beyond the documented risks of biased, toxic, or erroneous content in AI responses, the fundamental absence of a human-to-human relationship between machine and student may result in feedback that lacks important contextual understanding and empathy. The machine's inability to genuinely comprehend the human experience behind the work is a substantial limitation.
In our current educational landscape, feedback is often generic and impersonal, owing to educators' time constraints and growing class sizes. However, in this scenario, based on the work of Corbin and colleagues (2025), not only is AI-generated feedback fraught with issues similar to those of human feedback; the important dimension of recognition within the teacher-student relationship is also degraded.
What does an AI-integrated dystopia look like in our current context? Where AI technology has been met with bans and extreme guardrails, it stifles learners, restricts freedom and agency, and leads to a regime of surveillance, ultimately disempowering the learner. Much like Bentham's 'panopticon', a regime of permanent background surveillance might leave a learner feeling imprisoned, observed, and restricted. At the same time, such a dystopian form of assessment is insecure, offering opportunities for those so inclined to cheat, gain unfair advantages, or compromise the integrity of the assessment. AI-developed assessments might be open to security breaches, may be invalid and fraught with hallucinations or toxic and improper content, and may be developed with profit as the primary motivation.
Our goal is to draw on the praxis of assessment and feedback that we are all familiar with and to provide a provocative account of what dystopian and utopian approaches to assessment and feedback may look like. We invite the reader to consider our current lives as the dystopian scenario, to be compared with a collectively envisioned utopian future for AI, assessment, and feedback. The goal is not necessarily to represent a real, possible scenario, but rather to identify, through this exercise, what it is that we value in AI in assessment and feedback.
In these scenarios - our current lives and the story we will present - we have foregrounded several areas that we think represent the positive potential that AI might help us realise in assessment and feedback in education. The first of these is personalisation. Personalisation is a controversial aspect of AI's implementation in education: there is a schism between those who believe that the power of AI models such as ChatGPT lies in giving 'personalised feedback' and those who argue that feedback from teachers or instructors is already personalised - and who ask what personalised learning really means anyway.
Regardless, we take this aspect to mean feedback generated by AI systems that really understands the learner. At present, this is not possible and remains within the realm of science fiction - for now. Other aspects of AI-enabled assessment and feedback are more realisable: flexibility, rapidity, and the learner's ability to instantly question the feedback or probe for further information are clear potential benefits, and research suggests that students, in some cases, are open to receiving AI-supplemented feedback, although human feedback may still be preferable.
In our scenario, we consider not only these aspects - along with security, reliability, and, crucially, validity - but, overall, a form of AI-mediated assessment and feedback that empowers learners and enhances agency. Having examined the dystopian elements of our current assessment reality, we now present a utopian scenario that imagines how AI might transform assessment and feedback in more empowering and beneficial ways:
Francis didn't need to leave home for their assessment this morning. Since receiving McArthur, an advanced educational AI robot, in an elegant box before classes began last year, Francis had watched it transform from a mere study tool into an indispensable companion. Named after Francis' grandfather, their greatest supporter in life, the robot had become equally supportive in navigating the complexities of university education while balancing a full-time job.
McArthur wasn't just any AI—it was a personalised educational coach that would accompany Francis throughout their entire degree programme. Having this companion was Francis' choice, and the benefits were immediately apparent. The robot provided instantaneous feedback, maintained a comprehensive record of Francis' learning journey, and helped them stay on schedule with assignments and studies. Though Francis was developing a certain reliance on McArthur, they understood that their relationship with the educational robot was based entirely on their own inputs and choices.
On assessment days, Francis would place McArthur in a quiet room where it could observe them with 360-degree vision as they articulated and demonstrated their knowledge. The questions McArthur posed were brilliantly tailored, drawing from university staff inputs, Francis' past interactions, and core course objectives. Most impressively, McArthur could make interdisciplinary connections and validate results while protecting Francis' sensitive information.
The immediate feedback was perhaps the most valuable aspect of this system. After assessments, McArthur would analyse Francis' strengths and weaknesses, then suggest personalised resources - articles, books, videos - aligned with their learning style. The robot's perfect alignment with university regulations sometimes surprised Francis; it was like consulting the most knowledgeable staff member about policies, procedures, and content. Nevertheless, Francis still valued their human teachers as irreplaceable facilitators of learning and relationship-building. Going to campus twice a week was still something Francis enjoyed.
Beyond academics, McArthur held valuable data about industry needs and career opportunities. It designed assessments to prepare Francis for positions that matched their abilities and aspirations. This process was fully transparent—McArthur discussed its conclusions with Francis daily, providing guidance toward their dream career.
The robot represented a significant investment, made possible through university-industry partnerships. The results were undeniable: better-prepared graduates and a streamlined recruitment process. Francis maintained control of their data, with the option to share their comprehensive skills assessment with future employers when changing jobs.
As graduation approached, Francis occasionally contemplated the inevitable parting with McArthur. The robot would be reset, no longer recognising the student whose educational journey it had so intimately shared. "Better this way," Francis would think, acknowledging the bittersweet conclusion to their transformative educational partnership.
This scenario illustrates several key elements that contrast sharply with our current dystopian reality of assessment: genuine personalisation that goes beyond inserting a student's name into standardised templates; transformed assessment timing that is immediate and ongoing rather than fixed and delayed; preserved learner agency and control; and acknowledgment that human connection remains essential for meaningful education.
You are now invited to think about how we currently treat AI tool integrations in the educational context, and about the possibilities for the future of assessment and feedback. If our current lives are the dystopian scenario, the utopian scenario offers imaginaries of what this world of opportunities might become.
By contrasting these dystopian and utopian visions, we can identify several key values that should guide the integration of AI in assessment: learner agency and control must be preserved and enhanced; human relationships must remain central to meaningful assessment; transparency in how assessment data is collected and used is essential; and validity must be ensured through careful integration of human and machine capabilities.
As we navigate this rapidly evolving landscape, maintaining focus on these values can help us move beyond both dystopian surveillance models and unrealistic techno-utopian visions, toward assessment systems that genuinely serve the needs of learners and education. The dichotomy presented here is not meant to predict specific technological developments but rather to provoke reflection on what we truly value in assessment and how technology might help us better realise those values.

Banner Image: ChatGPT
The HERDSA Connect Blog offers comment and discussion on higher education issues; provides information about relevant publications, programs and research and celebrates the achievements of our HERDSA members.
HERDSA Connect links members of the HERDSA community in Australasia and beyond by sharing branch activities, member perspectives and achievements, book reviews, comments on contemporary issues in higher education, and conference reflections.
Members are encouraged to respond to articles and engage in ongoing discussion relevant to higher education and aligned to HERDSA’s values and mission. Contact Daniel Andrews Daniel.Andrews@herdsa.org.au to propose a blog post for the HERDSA Connect blog.