What If? Using Scenarios to Imagine the Future of AI in Your School and Beyond
Four different scenarios to use and adapt in your context
As we approach the end of the year, many organisations are grappling with the task of creating their AI policy for 2024. Artificial intelligence is reshaping every aspect of our society, from education to healthcare to business. The pivotal question is: how can we ensure that AI is used ethically, responsibly and equitably? How can we balance the benefits and risks of AI for different stakeholders and scenarios? These are some of the questions that schools, departments and workplaces need to address in their AI policies.
One way to approach this task is scenario planning, which helps us imagine and prepare for different possible futures. Scenario planning can help us identify the opportunities and challenges that AI may bring, and the actions and strategies we need to take. I was introduced to this approach by John Bai at ASCILITE 2023. John and his team invited educators working in higher education to evaluate four strategic scenarios of possible AIEd applications, developed from critical macro- and meso-factors. The research found that a collaborative approach is needed to unpack the complexity of applying AI in education. You can read more about John's study, and the scenarios he used, here: View of Future prospects of artificial intelligence in education (ascilite.org)
In this post, I will share four scenarios illustrating how AI could impact different elements of education in 2024. Feel free to modify and adapt the scenarios to fit your context.
One thing John and the team worked on in depth was how they developed the scenarios: they derived five strategy elements from a diverse range of AI readings and then discussed the future options that could apply to each element. They then placed the elements and options into a morphological box, which helped them break the complexity down into a more manageable system so they could investigate the full set of possible relationships, or configurations, between the elements. I am not suggesting that you do this, and I did not do it either; however, I did not want to oversimplify the work that went into the study. This approach might be worthwhile if you have specific values or goals you would like to build into your scenarios, and John's paper offers more guidance. Instead, I tried a different approach to developing scenarios, the same method anyone short on time at the end of the year would use... generative AI!
I used the scenarios from John's article, Bing Co-pilot and some editing to come up with the following scenarios. When I facilitate this session, I will split the room into four groups to discuss a scenario each, ask each group to present their scenario and their thoughts, and then put the broader questions to the whole room. There are some similarities across the scenarios, and I hope these surface in the more general discussion. It would be interesting to see how this works with different groups; for example, I think students would have some great perspectives.
*Scenarios edited 24/5/24 to reflect ‘omni’ generative AI.
Scenario 1 - Generating Original Work with AI
A student uses ChatGPT-Omni to generate a short story for their English assignment. The tool uses advanced natural language processing and deep learning to create a diverse, engaging story with a distinctive narrative style. The student submits the story without acknowledging the use of ChatGPT. The teacher suspects that the story is not the student's original work and runs it through a plagiarism checker, which flags the story as AI-generated.
Some possible questions for discussion are:
How should the teacher address the issue of academic integrity with the student? What steps should be taken to ensure a fair outcome?
What guidelines should be established for the ethical use of AI tools like ChatGPT-Omni in assignments?
How can students be educated about the responsible use of AI in creating content, balancing creativity, and originality?
Broader discussion:
How can educational institutions create policies that both embrace the use of advanced AI tools like ChatGPT-Omni and uphold academic integrity?
Scenario 2 - AI-Curated Teaching Resources
A teacher uses ChatGPT-Omni to generate and curate lesson plans and resources for a unit on climate change. The tool uses generative models and deep learning to produce and select various types of content and resources based on the topics and objectives of the curriculum, and it verifies and validates the quality and relevance of that material using web search and fact-checking. The teacher uses the content and resources generated by the tool without reviewing or modifying them. The students find some of the content and resources inaccurate, outdated, or inappropriate for their learning.
Some possible questions for discussion are:
How should the teacher handle the situation?
What responsibility does the teacher have in verifying the accuracy of AI-generated content before presenting it to students?
How can teachers effectively balance the efficiency of using AI tools with the need for accuracy and reliability in educational content?
What training should teachers receive to enhance their skills in evaluating AI-generated resources?
Broader discussion:
How can schools ensure that the integration of AI in teaching enhances educational quality without compromising content accuracy?
Scenario 3 - A Parent-Built AI Tutor
A parent has developed a GPT to support their child's learning and performance as the student is behind with their work and does not have access to private tutoring or extra support. The tool has been designed to guide the student through the learning process without providing the correct answer directly. It offers personalised feedback and guidance based on the student's performance and progress. The tool also adapts the curriculum and the teaching methods to suit the student's individual needs and preferences.
The student has become reliant on the tool for learning and assessment activities, without consulting or collaborating with their teacher or peers. As a result, however, the student's skills and attainment have improved dramatically, beyond curriculum expectations. Their confidence has grown, and they are more motivated to engage in class. Other students are asking if they can use the tool.
Some possible questions for discussion are:
How should the teacher handle the situation? Should the teacher intervene and communicate with the student and parent about the use of the tool? Should the teacher suggest that all students have access to the tool? Should the teacher attempt to create a similar tool for the whole class, and how would other parents feel about this?
How should teachers integrate AI tutoring tools in a way that encourages student autonomy and peer collaboration?
What guidelines should be in place to ensure that AI tools complement rather than replace traditional teaching methods?
Broader discussion:
How should the school regulate and monitor the use of AI tools that assist students' learning and performance?
Should the school provide or recommend standards or guidelines for the use of AI tools for educational purposes?
Should the school require or encourage students to share and reflect on their use of AI tools and their learning outcomes and experiences?
How should the school work with parents and caregivers to provide guidelines on AI use for schoolwork at home?
How can the educational system incorporate personalised AI tutoring tools like ChatGPT-Omni while fostering independent learning and social interaction among students?
Scenario 4 - AI-Generated Assessment and Feedback
A teacher uses an AI tool to assess and evaluate the performance and progress of their students. The tool uses a webcam and speech recognition to capture and analyse the students' work and responses, and data mining and analytics to generate scores and feedback for the students and the teacher. It is the end of term and the teacher is a bit tired, so they use the scores and feedback generated by the tool without reviewing or modifying them. The students find some of the scores and feedback unfair, inconsistent, or irrelevant to their work and responses.
Some possible questions for discussion are:
How should the teacher handle the situation?
How can teachers ensure that AI-generated assessment comments are fair and accurate?
How should the school regulate and monitor the use of AI tools to assess and evaluate the performance and progress of students?
Should the school require or encourage the teachers to disclose and justify the use of AI tools and the sources of the scores and feedback they generate and provide? What are our ethical responsibilities?
How can students be involved in the assessment process to ensure transparency and fairness?
Broader discussion:
What steps can be taken to balance the efficiency of AI-driven assessments with the need for human oversight and personalised feedback?
You could also create other scenarios: students using AI to create deepfake images of their teachers or peers, age restrictions on some tools alongside open access on others, or students using VPNs to access tools even when the school blocks them. The possibilities really are endless.
Hopefully, the scenarios I have shared today will prompt deep dialogue about how guidelines on generative AI use could be developed within the school community. If you have tried this approach or have other scenarios, please let us know in the comments.
In my next post, I will share my thoughts on what a flexible and supportive AI use policy could look like.