Assessment is, of course, an integral part of teaching and learning at UQ. We are continually working to improve assessment as a certifiable system that both judges the extent to which students meet identified standards and engages them in productive learning. Improvements to assessment, such as increasing quality, authenticity, and engagement, should be evidence-based, drawing on data that illustrate assessment issues at course, program, and institutional levels. A system that has eighty-two types of assessment, plus extensive use of an ‘Other’ option, is clearly unable to provide the necessary data. We also found that the teaching changes prompted by the pandemic, together with increased concerns around academic integrity, highlighted the system’s inability to provide the data necessary to understand the assessment being offered.
In response, we undertook an investigative process to deliver a classification system that would provide the evidence necessary to improve assessment practices across the institution. The new system would be used to: strengthen academic integrity through user guidance and analysis of trends across programs; promote a range of authentic and engaging assessment tasks; support staff to design tasks that better assess learning outcomes and inform assessment decision-making; and bring clarity to students about their assessment tasks.
At UQ we found that approximately half of assessment items were classified using only six categories, with the most common category being ‘Other’. In 2019 and 2020, this meant that 1830 courses were offered with at least one assessment piece unhelpfully categorised as ‘Other’. A scan of practices across the Australian Group of Eight universities indicated three different approaches: classification of tasks from a prescribed list was found at three institutions; open classification of tasks with no prescribed list was found at four; and no classification of assessment tasks was found at one institution. We also noted that Griffith University had developed a tight classification system using five categories with twenty-three sub-categories, and similar examples were found at other national and international institutions.
A list of common elements was developed from these examples and current UQ practices and proposed as an improved classification system. This list was discussed at school, faculty and institutional levels in various fora, including Teaching and Learning Committees and Academic Board. There were several iterations, but it quickly became apparent that a tiered classification system was needed to capture the diversity of assessment practices, from Objective Structured Clinical Examinations to demonstration of a built prototype to a critical essay, and to provide sufficient data to fulfil the terms of reference. Thankfully, a system was finally agreed, although the process of consultation across disciplines increased the number of items in most classification tiers. Even so, the largest list contains twenty options, which is less than a quarter of the current eighty-two choices.
In addition to the title of the assessment, the new classification system has three sections, in each of which all options that apply can be selected. Section 1, Category, indicates the type of assessment activity from a list of twenty options. Section 2, Mode, asks how students will respond to the assessment task, with four options: Activity/Performance, Oral, Product/Artefact/Multimedia, or Written. Section 3, Conditions, records the rules associated with the assessment from a list of twelve options, for example hurdle, identity verified, in-person, peer-assessed, or team or group-based. Discipline-specific terminology and assessment practices that are not widely applicable will be captured using free-text ‘tags’ rather than appearing in a drop-down list.
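To make the three-tier structure concrete, here is a minimal sketch of how such an assessment record could be represented. This is an illustration only, not UQ’s implementation: the four Mode values come from the description above, while the category, condition, and tag values shown are hypothetical stand-ins for the institution-defined lists of twenty and twelve options.

```python
from dataclasses import dataclass, field
from enum import Enum


class Mode(Enum):
    """How students respond to the task (the four modes named above)."""
    ACTIVITY_PERFORMANCE = "Activity/Performance"
    ORAL = "Oral"
    PRODUCT_ARTEFACT_MULTIMEDIA = "Product/Artefact/Multimedia"
    WRITTEN = "Written"


@dataclass
class AssessmentItem:
    """One assessment record under the tiered classification.

    `category` is one of the twenty Section 1 options and `conditions`
    draws from the twelve Section 3 options; both lists are
    institution-defined, so plain strings stand in for them here.
    """
    title: str
    category: str                                         # e.g. "Essay" (illustrative)
    modes: list[Mode] = field(default_factory=list)       # all that apply
    conditions: list[str] = field(default_factory=list)   # e.g. ["hurdle", "in-person"]
    tags: list[str] = field(default_factory=list)         # free-text, discipline-specific


# Example: a hurdle, identity-verified written task (values illustrative).
item = AssessmentItem(
    title="Critical essay",
    category="Essay",
    modes=[Mode.WRITTEN],
    conditions=["hurdle", "identity verified"],
)
```

Because Mode and Conditions are multi-select rather than free text, records like this can be filtered and aggregated reliably, which is precisely what the old ‘Other’-heavy list could not support.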
We anticipate this work will provide a stronger evidence base for future assessment practices and ensure improved outcomes for students. It will provide an overview of assessment practices that allows us to describe an assessment item at the course level; indicate patterns of assessment over groups of courses on a program or plan basis; and identify academic integrity issues and considerations for particular types of assessment across the institution.
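As an illustration of how structured records could support such overviews, the following sketch builds on the hypothetical AssessmentItem above to profile modes and condition coverage across a set of items. The function names and reporting approach are assumptions for illustration, not UQ’s actual analytics tooling.

```python
from collections import Counter


def mode_profile(items: list[AssessmentItem]) -> Counter:
    """Count how often each mode appears across a set of assessment items,
    e.g. all items in the courses of one program."""
    counts = Counter()
    for record in items:
        for mode in record.modes:
            counts[mode.value] += 1
    return counts


def condition_share(items: list[AssessmentItem], condition: str) -> float:
    """Proportion of items carrying a given condition, e.g. 'identity verified';
    one hypothetical way integrity coverage could be monitored."""
    if not items:
        return 0.0
    hits = sum(1 for record in items if condition in record.conditions)
    return hits / len(items)


# Usage with the illustrative record above:
print(mode_profile([item]))                          # Counter({'Written': 1})
print(condition_share([item], "identity verified"))  # 1.0
```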