GenAI – Fears, Opportunities & Possible Solutions


I was recently invited to an Atlassian showcase on the seamless integration of AI technologies in its product suite. This company serves over 300,000 customers across more than 200 countries.

This event crystallised the fact that AI is not going away. We can’t ignore it, and we can’t put the genie back into the bottle. We need to learn to live with it, which involves teaching students to use it ethically and productively.

AI vs GenAI

AI is not new; it was proposed back in 1955 and has been evolving quietly in the background. It was not until ChatGPT-3.5 (a GenAI technology) passed an exam that most of the world's population took much notice. One year managed to outdo the hype of the previous 68!

Has that much changed? My team explored the AI world before and after GPT-3.5. We found that academics appreciated common potential benefits, including personalised learning experiences, immediate and enhanced feedback and support, improved learning outcomes, and accessibility, to name a few. The ability of GenAI to create content sparked excitement and new fears: bias, relevance, accuracy, and eroded skill development — with academic integrity top of mind.

The Common Fears

However, common concerns persist: privacy, data security, dehumanisation of education, and integration barriers, such as lack of training, workload capacity to retrain, and ethical guidelines or institutional policies. No one batted an eye at the lack of integration action with traditional AI, as most students didn't even realise they were using it in everyday applications. With GenAI, everything is different: every student knows what it is, and most students use it, but many have limited GenAI literacy and do not use it very well.

The capabilities of GenAI are evolving rapidly, especially if you are prepared to pay. The problem is that even the students using GenAI ineffectively are ahead of the average academic, who may have had a quick play with ChatGPT-3.5, written a few poor prompts, and thought "not in my lifetime".

Solutions

Our study suggests that moving forward requires the structured development of comprehensive training programmes, the creation of institutional support structures, the implementation of pilot programmes, the development of ethical guidelines and policies, and the encouragement of student engagement, critical thinking, and the cultivation of human skills that surpass AI capabilities. The solutions below refer to both AI and GenAI simply as AI.

1. Institutional support and training shape behaviours. The scarcity of formal training, systematic guidelines, and policy frameworks currently limits effective integration.

Provide clear guidance to staff on how workload capacity can be modified to fit professional development needs. Simply asking staff to find time for AI training will likely mean it gets deprioritised in favour of immediate teaching and research demands. If proper training does not happen, the gap in understanding will grow, making future adaptation even harder.

One way to alleviate this is to allocate departmental blocks of time to all staff for structured, hands-on workshops focusing on:

1. Understanding AI tool functionalities, their potential for misuse, and ethical considerations.

2. Enhancing productivity through AI and adapting assessments accordingly.

3. Analysing multiple GenAI tools rather than relying on a single example, so that staff understand their varied strengths and limitations.

Short, structured video modules can supplement training, ensuring ongoing learning in manageable segments. However, passive learning alone is insufficient—staff need interactive sessions where they can experiment with AI tools in a guided environment.

To foster institutional change, identify and support departmental champions by providing workload relief so they can lead AI implementation efforts and mentor peers. In addition, a centralised AI hub should be established to serve as a repository for policies, best practices, AI tool comparisons, and assessment adaptation strategies.

Some may argue that structured training is too resource-intensive. However, institutions that fail to invest in AI capacity-building now will face greater challenges later, particularly in maintaining academic integrity and adapting assessments to future technological advancements.

2. Academics' intentions to use these technologies are contingent upon the development of robust ethical guidelines and supportive institutional policies.

The biggest concern most academics have with AI remains academic integrity. They are not wrong. Still using quizzes without security measures? It’s time for a risk assessment! (A risk assessment procedure is provided in the supplementary materials).

Our comprehensive study (see Table 10 for a summary) shows that across most assessment types, unsupervised and unsecured assessments are becoming obsolete for assuring learning. Now that we realise we can’t stop it, guidelines and policies that represent such concerns are vital. This will shift the mindset towards the numerous opportunities for AI integration into new, structured assessment methods.

Our study suggests that a long-term solution must be implemented, something akin to the two-lane assessment policy, which differentiates between assessment for learning and assessment of learning. Getting to that point is a difficult task. It requires substantial redesign and planning. While assessments get the most attention, they alone are not the solution to academic integrity problems.

To create these ethical guidelines and policies, engage with current AI literature, consult staff who have already implemented AI tools, and bring in internal and external AI ethics experts. AI is already embedded in students' habits, many of which were developed in high school, making early engagement critical to breaking bad habits. Institutions must address ethical AI use from the first year of study, ensuring students understand responsible application from the outset.

Collaboration across institutions can enhance policy development and create a unified approach to AI integration. At the AAIEEC, we have over 70 academics from 14 universities actively tackling challenges related to assessment integrity, AI ethics, and educational integration in fields such as project-based learning, programming, and tutoring.

3. Encouraging student engagement, critical thinking and capability beyond the machine:

Authentic learning experiences that draw out the human capabilities going beyond what AI can do, at least today, are a new frontier with great potential. This involves acknowledging AI as a co-intelligence that supports cognitive learning and shifting focus towards more human skills, such as those in the psychomotor and affective domains. However, we can't simply rush to implement authentic experiences to prevent cheating, because we may overlook validity.

For example, the teaching laboratory provides holistic learning opportunities, but our recent study shows that assessment validity beyond cognitive learning objectives is a major concern. We need to think before we act. This may include using diverse assessment types to triangulate capability, which is also a possible solution for assessing thesis work. It may also include reimagining assessment, such as transforming essay work into new formats. While transitioning to authentic assessments requires initial investment in redesign, the long-term benefits — enhanced engagement, deeper learning, and more reliable assessment of student capabilities — far outweigh the short-term challenges.

 

Banner image source:

- DALL-E AI image generator


The HERDSA Connect Blog offers comment and discussion on higher education issues; provides information about relevant publications, programs and research and celebrates the achievements of our HERDSA members.

 

HERDSA Connect links members of the HERDSA community in Australasia and beyond by sharing branch activities, member perspectives and achievements, book reviews, comments on contemporary issues in higher education, and conference reflections.

 

Members are encouraged to respond to articles and engage in ongoing discussion relevant to higher education and aligned to HERDSA’s values and mission. Contact Daniel Andrews Daniel.Andrews@herdsa.org.au to propose a blog post for the HERDSA Connect blog.

 
