Higher Education Research and Development Society of Australasia

Generative AI is in the guidelines and the strategy documents. At conferences like HERDSA 2025, it’s in the keynotes. Institutions are keen to understand AI readiness, and frameworks have been drafted. But amid all the noise about innovation, far less attention is being paid to the toll it’s taking on those implementing it.
I’m not talking about academic workload in the abstract. I’m talking about risk. Psychosocial risk: the risk that arises when job demands, workplace culture, or lack of control can cause psychological harm. That was the focus of my HERDSA talk: Reframing Generative AI as a Psychosocial Risk: Governance, Well-being, and Sustainability in Higher Education.
For two years, I’d watched conversations swirl around chatbots and prompt engineering, often celebrating AI’s promise of becoming a $32.27 billion industry by 2030. So yes, standing up at a major conference to talk about how GenAI might be causing psychosocial harm while others demo grading tools and share top prompt hacks felt like unplugging the WiFi at a tech party.
GenAI isn’t just changing how we assess or teach. It’s shifting how we work, which vendors claim ‘saves us time’. Yet those rolling out AI in the classroom were whispering to me about the time it takes to console students falsely accused of cheating, and about their increased workloads.
So I ran a pilot project, called But did they actually write it?, supported by the Australian Teacher Educators Association (ATEA), which surveyed 84 educators nationally and interviewed 14 educators from TAFE, higher education, and preservice teaching. When the WiFi was unplugged, the uncertainty, fatigue, and added work were quite clear. The emotional labour of caring for students, guiding peers, and interpreting AI expectations was falling far outside workload models, and it was impacting people’s wellbeing. As one TAFE educator said in frustration:
“That’s five students I now have to verbally assess. That’s five extra hours out of my week… it’s blowing my marking out by two or three hours just having to go back and check it.”
I used Safe Work Australia’s guidelines to make sense of the data. I checked whether people’s comments aligned with recognised hazards — low job control, poor organisational change management, role ambiguity, and cognitive overload — focusing on educators’ stress and coping associated with GenAI in their workplace.
While all academics experienced low job control and change with GenAI, as has happened with past technological disruptions, some were also holding space for the real, messy experiences of students, resulting in role ambiguity and increased job demands. Some were responding to hundreds of emails from distressed students and redesigning assessments without fanfare. They whispered their concerns, arguably, because voicing the toll of caring was less valued than celebrating the latest chatbot.
In my survey (N=84), 38% said they felt stressed trying to detect AI-generated work. Fourteen per cent reported high anxiety associated with the counselling students needed, coupled with the increased workload. Over half said GenAI was a threat, not to their jobs, but to their mental health. Not one participant said GenAI felt neutral. All wanted clearer policy. All had asked for support.
The interview data (N=14) showed, first, that GenAI had entered higher education in an emotionally charged environment, and that some were managing the care work associated with integrity and assessment more than others, without additional workload allocation. Second, that GenAI was a strain, not a support, for some. Third, that when GenAI was being used as a support, it was to cope with the additional workload that GenAI itself had caused.

Acknowledging that the sample size for this pilot was small, I made some leaps here to extend the thinking. I considered the massive body of work showing that care work is gendered, and linked ‘care work’ to ‘women’s work’, which is systematically under-recognised. This is reasonable, as women face systemic obstruction to promotion, career inequity, and a myriad of other factors that have long been exacerbated by structural bias and inadequate societal and institutional protections. I connected the dots to compare the funding for AI, and the apparent time efficiencies of chatbots, with the lack of reward and workload allocation provided for this care work. I concluded it is reasonable to suggest that women were potentially being exposed to increased psychosocial risk.
Acknowledging this potential, I began looking further afield for other gendered risks associated with GenAI. I considered the work of the eSafety Commissioner, which has shown that 98% of deepfakes are pornographic, and 99% of those target women. So not only are women in higher education potentially doing more of the care work associated with GenAI, assessment, and integrity, they are also at greater risk of being deepfaked because of GenAI.
Soon, those already doing the additional care work may need to also respond to emails from students who have discovered pornographic likenesses of themselves circulating while on school placement, all the while still whispering about the stress and workload, and now also about the three deepfake videos trending on OnlyFans.
Let’s pause the relentless noise of innovation, assessment reform, and integrity debates long enough to listen to the real workplace risks of increased care work, deepfake pornography, harassment, and sexualised abuse for women in higher education. We have the drivers: Psychosocial Risk Legislation and the new Code for the Prevention of Gender-Based Violence. The new code rightly acknowledges that both students and staff deserve protection, and the Psychosocial Risk legislation provides the scaffold for staff to have voice.
If we reposition GenAI as a psychosocial risk for women in higher education in this light, we might finally hear the whispers about gendered care work and be able to start there. Let’s do that, because I haven’t even mentioned intersectional harms yet. And we all know that some women are more impacted than others. Further, I haven’t even talked about doxing, impersonation, identity fraud, smart glasses, or algorithmic misogyny associated with GenAI yet, let alone the massive impact AI is having on Country. If AI’s promise of becoming a $32.27 billion industry by 2030 is real, and women in Australian universities already face entrenched gender inequality, let’s lead on how AI is located in higher education. This means recognising GenAI’s role in amplifying workplace risks for women.
Image source: ChatGPT
The HERDSA Connect Blog offers comment and discussion on higher education issues; provides information about relevant publications, programs and research and celebrates the achievements of our HERDSA members.
HERDSA Connect links members of the HERDSA community in Australasia and beyond by sharing branch activities, member perspectives and achievements, book reviews, comments on contemporary issues in higher education, and conference reflections.
Members are encouraged to respond to articles and engage in ongoing discussion relevant to higher education and aligned to HERDSA’s values and mission. Contact Daniel Andrews Daniel.Andrews@herdsa.org.au to propose a blog post for the HERDSA Connect blog.