Last week, I was at the University of Otago, presenting our research on the AI literacy levels of tertiary learners in New Zealand. This is a collaborative project with Geri Harris, built on the AI literacy framework from Kathryn MacCallum, David Parsons and Mahsa Mohaghegh. Together we are working to understand how supporting AI literacy can enable an equitable approach to assessment for tertiary learners.
The AI literacy framework is the result of a Delphi study that established four levels of AI literacy: know and understand AI, use and apply AI, evaluate and create AI, and beyond AI literacy. Geri and I were interested in how our learners were using and applying AI to support their assessments, so we turned the descriptors from level 2, 'use and apply', into 28 Likert-scale questions and added some questions focusing on assessment.
On the night before sharing the research, I recorded a reasonably raw video of my short 15-minute presentation. You can view it below (or at the previous link for the 2x speed version) to find out how I linked AI literacy to the computer game Lemmings(!), as well as other important things like the results. I have also collated all of the links (presentation slides, symposium short paper, AI literacy survey, AI literacy framework, AiEdCoP, and an opportunity to collaborate on the next stage of the research) on this flowpage so you can follow up on anything you find relevant.
We are interested in collaborating on subsequent iterations of this research to build a broader picture of AI literacy. If you are interested, please contact me by the 20th of September 2024, as we have a team ready and will be starting the next phase.
If you are on a bus or have screaming children running around your house and can't watch the video, you can read the summary of the presentation below. I copied the transcript from the video and asked ChatGPT to rewrite it as a blog post using my usual writing style here on the blog. I think it turned out quite well! AI-generated content follows…
Why AI Literacy Matters
AI literacy, in my view, is the cornerstone of integrating AI tools thoughtfully and ethically into education. It's not just about understanding how to use these tools—it's about knowing when to use them, the risks involved, and how to mitigate those risks. With AI playing an increasingly significant role in educational settings, it's crucial that educators and students alike understand both the potential and the pitfalls of these tools.
One of the key challenges we've encountered is around assessment. There's a growing concern that AI tools could undermine the integrity of assessments, leading many institutions to simply halt any experimentation with AI in their practice. This, I believe, is a roadblock to progress. Assessment is an integral part of learning, and rather than shutting down the conversation, we need to rethink how we approach it with AI in mind.
AI and the Lemmings Metaphor
A metaphor that often comes to mind when thinking about AI in education is the classic game Lemmings. In the game, the little creatures wander aimlessly until they fall off cliffs, and the player's job is to create a safe pathway so they can complete the level and reach the finish line. This is much like our students' journey through our courses: many educators are either blocking the use of AI tools entirely, essentially acting as the "stop lemming", or they're relying on AI detection tools at the bottom of the cliff, hoping to catch students in the act of misusing AI. Neither approach is particularly effective.
Instead, I propose that we focus on understanding AI literacy at a deeper level (a bit like the lemming that digs down to explore different outcomes), helping students navigate these tools more thoughtfully. In the same way that you could give a lemming an umbrella to soften its fall, we can equip students with the right knowledge and skills to use AI tools without falling into unethical practices ☂️.
Our Research on AI Literacy in Assessment
Our research focused on evaluating AI literacy through a survey of students, most of whom were enrolled in postgraduate programs at academyEX. We used a framework developed by Kathryn, Dave and Mahsa, which provides a broad lens for understanding AI literacy across various educational levels. The survey aimed to assess not only how students were using AI in their assessments but also why they were using these tools and how effective the framework was in identifying gaps in their AI literacy.
Some key findings from our research included:
Self-Rated AI Literacy: A surprisingly high percentage of students rated their AI literacy as either high proficiency or advanced. This could indicate a self-selection bias, as those who are already comfortable with AI are more likely to engage in a survey on the topic.
AI in Assessment: 75% of the students reported using AI in their assessments, with the primary reason being efficiency and time management. Many of the participants are working professionals, often time-poor, and they see AI as a way to streamline their study processes.
How AI is Used: Students reported using AI as a learning aid, to generate content, for research and analysis, and to support their writing. The ability to reduce cognitive load and organise ideas was frequently mentioned as a key benefit.
Challenges and Opportunities
One of the lowest-scoring areas in our survey was around the ethical use of AI tools. While we do discuss ethics in our classes and workshops, it’s clear that this is an area where more support is needed. Students want practical guidance on how to use these tools responsibly, and they want clear policies and guidelines from their institutions.
Looking Ahead
As we move forward, it’s critical that we continue to explore how AI can be integrated into assessment practices in a way that is both equitable and effective. AI literacy is not just a technical skill—it’s about understanding the broader implications of these tools and ensuring that students are equipped to use them thoughtfully.
For those interested in this area, I encourage you to join our growing AI in Education Community of Practice, where over 350 educators and professionals meet regularly to share ideas, trial new approaches, and discuss the future of AI in education. We are also looking for others to collaborate with for the next stage of the research, so please reach out if you are interested.