What could AI-assisted assessment look like?
Combining a critical friend and peer review to raise critical thinking, research quality and AI literacy
Over the past few months, I’ve used generative AI to help me scope ideas and scaffold research funding applications. With the support of AI, I've identified research gaps and potential projects that could fill them. I’ve iteratively refined a multilevel guided prompt that supports me in thinking critically about the purpose, aims, research questions, methods and methodologies.
This process is similar to one of the assessments our students submit, which requires them to identify a problem in practice or a challenge that will be met by a practice-based intervention. Currently, this is a formative assessment with a collaborative oral element: peer groups read each other's project proposals and then provide feedback to each other in a synchronous online meeting. This works reasonably well and is authentic in its purpose and outcomes; however, the feedback is sometimes not as critical as it needs to be.
Reflecting on this with students, we’ve identified two main hurdles: first, the discomfort of providing critical feedback to peers; second, the struggle to give feedback on unknown territory, the classic conundrum of not knowing what you don’t know. This is particularly challenging in a ‘live synchronous’ assessment. The existing project scope assessment aims to evaluate students’ critical understanding of applied research methods and tools in the context of practice-based change projects, as well as their ability to critically evaluate all the ethical considerations and mitigations relevant to such projects.
Drawing inspiration from the University of Sydney’s innovative ‘two-lane’ approach to assessment, I’ve started reimagining our assessments to better equip our students for an AI-integrated future. In the model proposed by Danny Liu and Adam Bridgeman, ‘lane 1’ assessments ensure students have mastered the skills and knowledge our programs require, while ‘lane 2’ assessments motivate students to learn and teach them to engage responsibly with AI.
I often encourage our students to use generative AI to support the development of research ideas. I have written about this in the past, and based on my experimentation with AI for developing research proposals, the project scope assessment seemed the ideal opportunity to introduce a ‘lane 2’ generative AI approach. I could also see its potential to mitigate some of the shortcomings identified in student feedback about the assessment, particularly the challenge of providing critical feedback to peers. Devoid of feelings (as it so often reminds us!), AI can serve as an impartial ‘critical friend’, offering feedback that students may not have considered. Because the AI has no emotions, it is also easier for students to critique: I’m hoping they can express their thoughts and criticisms freely, without fear of offending or hurting anyone. The aim is to build confidence here that, in turn, supports critical feedback for real human peers!
The revised ‘lane 2’ project scope assessment encourages students to input their emerging project ideas into the generative AI using the provided multi-staged mentoring prompt. The generative AI then attempts to provide critical feedback and suggest improvements based on the parameters written into the prompt, in this case the learning outcomes. This process culminates in the synchronous online meeting, where students collaboratively reflect on their original scope and the version with modifications suggested by the generative AI. They justify their critique against a framework based on the assessment outcomes, demonstrating the critical understanding and evaluation we aim to foster, and in doing so, they collaboratively navigate the lanes of learning in an AI-integrated world.
Another intentional outcome of this ‘lane 2’ approach is to raise AI literacy. While some students were using AI to support the development of their research ideas, others were avoiding it because they feared ‘being caught cheating’ or because they did not understand how to prompt effectively. By making generative AI use an integral part of the assessment, and by reflecting on the output collaboratively, students come to understand both the use cases for generative AI and the potential shortcomings of the tools.
In the revised assessment, I’ve intentionally included a comprehensive, multi-staged mentoring prompt designed to guide a student through the detailed process of developing a research proposal for a practice-based research project. It provides a structured and interactive framework in which the AI acts as a mentor, offering critique and suggestions and ensuring the student is prepared before moving to the next stage. This ensures thorough coverage of each aspect of the project scope, facilitating deep engagement and iterative refinement. It also gives all learners a consistent experience and a framework for creating their own prompts. I’ve also indicated that students are free to tinker with the prompt and make it work for their context. Further into the course, I will encourage students to create their own prompts and support their critical evaluation based on the rubric suggested by Danny and Adam.
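To give a flavour of how such a prompt can be structured, here is an abbreviated, illustrative sketch. It is not the full prompt (that is linked below), and the stage wording is a paraphrase rather than the exact text:

Act as an experienced research mentor helping me scope a practice-based change project. Work through the stages below one at a time. At each stage, ask me for my current thinking, critique it against the relevant learning outcome, suggest improvements, and check that I am ready before moving to the next stage.
Stage 1 (problem in practice): ask me to describe the problem or challenge my intervention will address, and probe its significance.
Stage 2 (purpose and aims): check that my purpose and aims follow logically from the problem.
Stage 3 (research questions): test whether my questions are answerable within the scope of the project.
Stage 4 (methods and methodology): challenge my choice of applied research methods and tools.
Stage 5 (ethics): ask me to identify the ethical considerations and mitigations relevant to the project.

The readiness check at the end of each stage is what keeps the AI in a mentoring role, rather than letting it simply rewrite the proposal in one pass.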
At this stage, we have tested the prompt using the generative AI models available to our students and trialled the concept with a small group of students. In the next iteration of the course, we will introduce the assessment with the AI prompt and aim to quantify the level of critical thinking based on a modified scale relevant to generative AI use. If anyone has any suggestions, it would be great to learn more!
A link to the prompt I developed can be found here. I’m sure there could be further refinements, so I’ll keep working on it. Any feedback would be appreciated.
You are leading the way! For someone like me, who is still wary of AI due to my own lack of effort to investigate it, the intentionality of your assessment design would be very useful. There are obvious benefits, such as the fact that AI is impartial and can be that true critical friend without the emotion involved. Providing critical feedback to peers about ‘unknown territory’ was something I struggled with, as it meant having to spend quite a lot of time reading and understanding their kaupapa (their topic and context) to enable me to give meaningful and actionable feedback. The prepared prompts are also valuable for first-time users as a guide before learning the nuances of this form of questioning and prompting for ourselves. Tim, I have no ideas for ‘refinements’ as all this is very new to me, but what you are doing is incredible. Kia kaha tonu (keep going strong)!