Ahmad et al. (2018) explain that the catalyst for writing their Guide was the number of requests they received from external stakeholders for educational programs to have ‘impact’. The call for impact is often driven by a concern that what is being done with other people’s money does not make a positive difference to people’s everyday lives. This is most obvious in universities’ research outputs, where journal articles and PhDs can languish on the shelf, largely unread and ignored. The expectation of social, environmental, and economic returns on the public investment in university research has forced researchers to account for their outcomes, not just their outputs, and to describe the impact of their research on the world beyond academia.
Any move to evaluating impact therefore requires a profound shift away from the usual top-down focus of evaluations. There are many parallels between research and evaluation; the primary difference lies in their goals. Research attempts to add to our store of knowledge, while evaluation provides information to its stakeholders. The lack of identifiable stakeholders makes realistic assessment of impact difficult in many areas of research. Ahmad et al. show that teaching and learning projects have readily identifiable stakeholders who can help define and demonstrate impact throughout a program’s development and delivery.
Fundamental to the writers’ shift from evaluating value to investigating impact is the incorporation into their evaluation framework of a theory of change, one that captures the desire for transformation as well as quality. Introducing a theory of change allows stakeholders to be involved in the early stages of evaluation planning, first by building an understanding of the current situation and what is needed for change to occur, and then by monitoring progress against the intended outcomes as the program proceeds.
The theory of change identifies what impact looks like to the stakeholder community and is then combined with an evaluation framework that provides practical steps for carrying out the evaluation. Having taught evaluation planning for many years, I’m aware that academics can struggle to decide what to evaluate when planning an evaluation. Most are comfortable interpreting the qualitative and quantitative data their institution collects on their behalf to judge their program’s effectiveness. Planning their own evaluation, however, becomes a complex task given the large number of frameworks available to choose from.
What most academics want is a tried and proven template that can be applied in a relatively straightforward manner. Ahmad et al. provide us with a six-step template that combines a theory of change with a road-tested evaluation framework. The steps in their framework are intended to be sequential, and, as in many sequential processes, each step is presented as if it carries equal weight. However, if I understood Ahmad et al. correctly, the critical step that makes this an investigation of impact is Step 2: defining the objectives and the impact anticipated. This involves a conversation with the stakeholders identified in Step 1 about the desired outcomes of the program. It differs from what would normally be described as objectives in that it involves “envisioning the long-term impact of the change you want to accomplish” (p. 18). The benefit for the evaluator is that you evaluate only matters that are meaningful to the stakeholders, and it helps to develop a hypothesis about what the evaluation might reveal.
The Guide provides two case studies to help demonstrate what the framework looks like in practice. The first is helpful for understanding why certain elements were included as the framework was developed. Don’t expect it to offer a model of flawless implementation, though; instead it serves as a cautionary tale about remaining flexible and adapting as your evaluation unfolds. The second case study provides a good example of the importance of the visioning step and of how impact evaluation can play a central role in project management.
The call for impact set the writers on the path of designing robust evaluations with the goals of the stakeholders in mind. By defining impact as something specific to the context of a particular stakeholder group, Ahmad et al. have written a Guide for those interested in co-designing their evaluation. The shift from outputs to outcomes described in this Guide will be familiar to every university teacher and, increasingly, to every university researcher. Going beyond being the subject of evaluations to being a collaborator in the process will be new territory for most. Framing your evaluation around impact will no doubt make it attractive to education administrators looking to justify the resources being spent. More than that, it will have tangible real-world benefits through stakeholder-engaged evaluation, something that HERDSA is right to champion.