Higher Education Research and Development Society of Australasia
So, you're thinking about interactive orals at scale? They're everywhere in the conversation about secure assessment right now. JL recommended them in this TEQSA resource from last year, and for good reason.
But here's the thing: thinking about doing interactive orals and actually doing them with 230 students are very different beasts. Professor Phill Dawson's observation about drawing the owl is spot on here.

We've been drawing educational owls for decades, but this particular owl proved trickier than expected.
This is our story of what happened when we tried to implement interactive orals in a postgraduate course (UQ’s nomenclature for a unit/subject) on learning and development theories. The course feeds into two initial teacher education programs and attracts a reasonable number of international students.
Let's be clear from the start: we're not the experts in this matter. The brilliant work of Danielle Logan-Fleming and colleagues, Sarah Davey and colleagues, and Gordon Joughin (among many others) should be your first stop. Tracii Ryan, Chris Ziguras, and Raoul Mulder have also recently released a guide to interactive orals, which includes case studies. These are all great resources to refer to.
Many others in the sector have attempted interactive orals. We're also not even the first at our own institution to try this. Others deserve credit for being the real pioneers in testing and theorising interactive oral assessment.
What we do have is experience analysing what works and what doesn't in higher education. JL has been working with colleagues across tertiary and secondary contexts internationally on assessment reform for the age of AI for the last couple of years, so it was only fair and reasonable that he ate what he'd been dishing out. Between us, we have over 50 years of combined teaching and training experience. We (hopefully) know how to learn from our mistakes.
And boy, did we (mostly JL) make some mistakes (despite the generous advice we received from colleagues across the sector – thank you, by the way).
Here's what actually happened:
This is what we were dealing with: 230 students, individual 10-minute conversations about lesson plans they created, all needing to happen within a reasonable timeframe.
Room booking: Our first reality check came with room bookings. University booking systems generally aren't designed for this kind of mass individual assessment. We needed quiet spaces suitable for recording, and there are very few available. Booking out rooms for three solid weeks during the semester was far from straightforward. We needed a lot of help from our wonderful professional staff colleagues in the School of Education to manage this process.
Scheduling: We needed to schedule 230 individual appointments around our other commitments. We tried online booking systems, manual coordination, spreadsheets, you name it. Students got confused about when they were booked. We got overwhelmed coordinating schedules across multiple spreadsheets. People (both us and students) missed appointments because they thought they were booked for a different time slot. It was a logistical challenge, to put it mildly. There are effective tools for managing these kinds of situations, but the tool we were recommended has been disabled at UQ (nobody can tell us why).
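The scheduling arithmetic alone is sobering. Here is a minimal back-of-the-envelope sketch in Python; every parameter (session blocks, buffer times) is a hypothetical assumption for illustration, not our actual timetable:

```python
# Back-of-the-envelope capacity check for one-on-one orals.
# All parameters below are hypothetical assumptions, not a real timetable.
STUDENTS = 230
SLOT_MINUTES = 10      # the conversation itself
BUFFER_MINUTES = 5     # changeover, note-taking, overruns
SESSIONS_PER_DAY = 2   # e.g. one morning and one afternoon block
SESSION_HOURS = 3      # length of each block

slot_length = SLOT_MINUTES + BUFFER_MINUTES           # 15 min per student
slots_per_day = (SESSION_HOURS * 60 // slot_length) * SESSIONS_PER_DAY
days_needed = -(-STUDENTS // slots_per_day)           # ceiling division

print(f"{slots_per_day} interviews/day -> {days_needed} assessment days")
# → 24 interviews/day -> 10 assessment days
```

Even under these fairly generous assumptions, that is two solid weeks of back-to-back interviewing before a single no-show or reschedule, which goes some way to explaining why the room bookings stretched to three.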
Technical headaches: Then there's the recording and storage. Audio quality needs to be decent for later review and remark (if needed), but university systems are mostly set up to store either educational resources or research data. Dedicated storage is an essential consideration for this type of assessment.
Once we got past the setup drama, the actual interviews were... surprisingly energising. Who knew?
To clarify, our observations here are reflections on the process as educators and do not constitute data collected from students. We are not presenting student data.
Student reactions varied significantly: Some students were visibly nervous, and most had never participated in an oral assessment before. Others took to it like ducks to water. The really concerning thing? Several students attempted to read from prepared scripts. We had to gently redirect them toward actually having a conversation, rather than performing a monologue. Differentiating this form of assessment from formal presentations is critical. Even though we stressed it many times, some students still saw their task as delivering a presentation. There are undoubtedly cultural and language issues intertwined with this scenario for international students.
The good news: The assessment process worked. We could quickly distinguish between students who had deeply engaged with the course learning experiences and those who were winging it. Some students, who seem destined to become fantastic teachers, demonstrated sophisticated thinking about learning theories and could articulate their lesson planning decisions marvellously. Many openly told us that they put in extra work, knowing they would be chatting with us one-on-one later in the semester.
Others? Well, let's just say some students had clearly done minimal to no work, and it showed. There's no hiding a lack of preparation when you're having a real one-on-one conversation.
The surprising bit: We noticed interesting mismatches between written and oral performance. Some students who struggled with essays came alive in conversation and showed much deeper understanding than their writing suggested. A few high-performing writers struggled to articulate their thoughts verbally. It made us wonder what we'd been missing with a written-only assessment (on top of the obvious question about whether the student actually wrote it).
The real magic? Within about two minutes of starting each conversation, we knew where that student was. Experienced teachers develop an intuition for genuine understanding versus surface-level responses, and the interactive format allows us to probe in ways that written work simply can't. How to help less experienced colleagues develop this sense is worth considering.
Here's what we didn't think through properly: what happens after 230 individual appointments? Turns out, plenty.
The no-show problem: Students who didn't turn up or couldn't make their scheduled oral created headaches we hadn't anticipated. Is a missed appointment something that requires an extension to the due date? Is it a deferral? When 230 students have individual time slots instead of a single deadline or exam session, traditional policies and processes begin to break down.
The individual nature of everything meant tracking no-shows, managing rescheduling requests, and processing adjustments became incredibly time-intensive. We found ourselves juggling multiple calendars and constantly updating tracking sheets.
Feedback timing: JL hadn't properly planned how feedback would work. Do you give it during the session (which might affect the natural flow of conversation) or afterwards (which creates more admin work)? We learned this lesson the hard way. Ideally, feedback would have been systematically built into the conversations, but it wasn't.
Removing flexibility: This form of assessment can’t be done easily around other schedules and commitments. There’s no marking the work at night with a bottle of wine (not that any of us would ever dream of doing such a thing). Therefore, there are significant workload issues to consider when conducting these types of assessments at scale.
Based on our drawing of the owl, here's what we'd tell anyone considering interactive orals at scale:
Make it feel like a chat, not an exam: Students responded much better when we framed sessions as conversations rather than high-stakes assessments. Drop words like "examination" or "defence." Call it what it is: a professional discussion about their work.
Sort out your systems first (if possible): Check that your booking systems, storage requirements, and admin processes can actually handle what you're planning. Talk to professional colleagues early. Trust us on this one.
Plan for accommodations upfront: Most students can handle a conversation, but some will need alternatives or additional support. Students with anxiety, language difficulties, or other circumstances might need different approaches or flexibility in how the orals are carried out. Design these in from the beginning, not as afterthoughts.
Use artefacts to anchor the conversation: Conversations work best when students have something concrete to discuss, such as lesson plans, projects, or case studies. This provides a natural structure and reduces the likelihood of students relying on memorised scripts. However, we learned that it might not be best to allow students to bring additional notes. Many over-relied on their notes, which gave us little confidence in what they knew, understood, and could do without them.
Decide on feedback early: Will you give feedback during the session or afterwards? Both approaches can be effective, but you need to decide upfront and communicate that decision clearly. In-the-moment feedback feels more natural but can interfere with the conversation flow, and doing it well requires some training and development with assessors. In some cases, we asked students how they thought they did at the end of the session, which opened up some engagement with feedback. This is a possible starting point.
Absolutely. Despite the logistical, workload, and resourcing challenges, interactive orals provided us with insights into student learning that written assessments alone simply can't provide. As far as assuring learning goes, it was crystal clear which students had not done the work of learning.
Discussing ideas with students means that assessment is more formative, and feedback can be more targeted, personalised, and humanised. This hopefully results in better learning experiences for students, more aligned with the idea of assessment as learning. Verbal and non-verbal feedback are possible, and the relational nature of the conversations can promote development from the assessment in ways that aren’t possible through one-off feedback cycles on written work.
For teacher education, particularly, the oral format mirrors the professional conversations graduates will have throughout their careers. Many students appreciated the chance to actually discuss their thinking instead of just writing about it.
We're redesigning everything: This experience convinced us to redesign the entire course around interactive oral assessment, not as an add-on to existing structures, but as a central component. That means rethinking how oral and written assessments work together throughout the semester. In a redesign, we will consider the value of multiple opportunities for one-on-one conversations throughout the semester. However, this kind of change has implications for accreditation, as it represents a significant alteration to the course.
What we are going to stop doing: We're planning to remove substantial chunks of written assessment to make room for more oral components. This is about recognising that oral assessment gives us a richer picture of what students actually understand and better prepares them for professional practice. The change is also obviously about keeping workloads manageable.
The reality check: Timing, integration, experience, and workload all need serious consideration. You can't just bolt interactive orals onto existing course structures and expect them to work (as we found out).
Interactive orals are a valuable addition to our assessment toolkit, especially as we figure out how to assure learning in the age of AI. The logistics are challenging, and implementation at scale requires significant planning and resources, but the educational benefits make it worthwhile.
Would we do it again? Yes, but with much better planning from the start. The authentic, secure assessment of student learning, especially the ability to distinguish between genuine understanding and surface-level responses, justifies the investment. Interactive orals allowed us a better way to look for evidence of learning (or lack thereof), and that’s precisely what we found.
Blog Contributors:

Banner Image: Used with permission.
The HERDSA Connect Blog offers comment and discussion on higher education issues; provides information about relevant publications, programs and research and celebrates the achievements of our HERDSA members.
HERDSA Connect links members of the HERDSA community in Australasia and beyond by sharing branch activities, member perspectives and achievements, book reviews, comments on contemporary issues in higher education, and conference reflections.
Members are encouraged to respond to articles and engage in ongoing discussion relevant to higher education and aligned to HERDSA’s values and mission. Contact Daniel Andrews Daniel.Andrews@herdsa.org.au to propose a blog post for the HERDSA Connect blog.