AI tools are making their way into classrooms: already today, students can create texts, presentations, images and translations at the click of a button. How should teachers deal with AI’s new possibilities?
For many teachers, homework, papers and tests raise the question of autonomy. How should they grade exams if it’s not clear who did the work – the examinee or an AI? A common reflex to digital developments in education is to regulate these possibilities, to put digital devices into exam mode without network access, or to ban AI tools.
The Evangelisch Stiftisches Gymnasium in Gütersloh, Germany, is taking the opposite approach: laptops and iPads have been in widespread use there for 20 years. GPT-3 and similar tools are being tested in German lessons and are even required for class tests.
Since the pandemic began, the school has been experimenting with digital exam formats that are more closely aligned with the realities of later life. Part of this new culture is the use of AI in exams: Students can do all or part of their work with the assistance of text AI.
But doesn’t the use of AI tools contradict the core idea of exams: that students must perform on their own?
Not necessarily. A rule guards against this: passages copied from the AI must be marked as AI citations. At the end, students have to justify in a written reflection why they adopted certain parts of their work from the AI or deliberately wrote them themselves.
Students learn personal responsibility through the use of AI tools
Class 8a waits tensely in the classroom on exam day. The students compose written arguments, as always, on their laptops – with no exam mode, no restrictions on network access, and free access to the AI tools. They type their prompts into the AI system’s input form, and it takes only a few seconds to generate a first text.
Anyone who thinks that the students can now sit back and relax is mistaken. Their work is just beginning: They have to critically examine the AI text.
In previous lessons, the students had learned that it is unwise to simply adopt AI-generated texts. In the week leading up to the class test, for example, they debated whether smartphone use should be allowed in schools.
Most of the texts produced by the AI advocated for a smartphone ban and extolled the virtues of all-analog teaching, sometimes with strange arguments that lacked evidence, such as that students send too many SMS messages at school.
Four problems with AI texts and how students deal with them
- AI texts sometimes reflect societal stereotypes and conservative views.
- AI texts rely on outdated information – GPT-3, for example, was trained on data only up to 2019.
- Grammatically and stylistically, many German AI texts need major improvement.
- Work assignments are often poorly fulfilled by AIs, or the AI strays from the topic and provides too little evidence for arguments.
Switching languages shows students how much room there is for improvement: English-language AI texts are significantly more up-to-date, sophisticated, and consistent than German texts.
This encouraged students to combine different AI tools: they started with an English-language prompt to generate an English text, used another AI tool (deepl.com) to translate it into German, and then revised it stylistically and grammatically with LanguageTool or Papyrus Author.
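For readers curious how such a tool chain fits together, the two automated steps can be sketched in a few lines of Python. The function names and client objects below are illustrative assumptions, not the school’s actual setup; the translation step assumes the interface of DeepL’s official Python client.

```python
# Illustrative sketch of the students' tool chain (assumed APIs, not the
# school's actual setup): English prompt -> text AI -> DeepL -> German text.

def build_prompt(topic: str) -> str:
    """Compose an English-language prompt, since English AI output tends to
    be more up-to-date and consistent than German output."""
    return (
        "Write a well-structured argumentative essay on the question: "
        f"{topic} Support each argument with evidence."
    )

def english_to_german_pipeline(topic: str, text_ai, translator) -> str:
    """Chain the two tools. `text_ai` is assumed to be a callable that maps
    a prompt to generated English text; `translator` is assumed to follow
    the DeepL Python client's interface (translate_text returns an object
    with a .text attribute)."""
    english_text = text_ai(build_prompt(topic))  # step 1: generate in English
    result = translator.translate_text(english_text, target_lang="DE")
    return result.text                           # step 2: translate into German
```

The final stylistic and grammatical polish with LanguageTool or Papyrus Author happens outside this sketch, in the editing tools themselves.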
In the exam, no student is satisfied with the AI text he or she generated. Students adopt suitable passages into their own argumentation. Mostly, however, the AI texts serve as a quarry for ideas or as a whetstone for their own positions, since they confront the students with new arguments that contradict their own.
Many students do additional research on the Internet to verify the information in the AI texts or to sift through additional evidence, expert opinions, and studies. Often, as described above, they combine multiple AI tools to achieve better results.
After 90 minutes, students submit both parts – the argumentation and the reflection. The two are weighted equally in the grading.
Students do not blindly rely on AI
The AI exams produced an important finding: no student relied blindly on the AI texts. At the same time, those who could not construct and write an argument before the test and lacked subject knowledge were overwhelmed by the AI texts, uncritically adopted incorrect information, and did not benefit from the suggested lines of reasoning.
The students found the AI text suggestions helpful as they made their work easier. They would also like to use AI tools in class in the future.
Perhaps their wish will soon come true: the school’s next idea is to feed literary texts into the image AI DALL-E 2. Students will then analyze the texts with the help of the generated images, which can be understood as visual interpretations of the text.
All in all, AI creates new opportunities for teaching, but also makes it more challenging: some students will use AI to do less work and pass off AI products as their own results. Others will use AI tools to outsource routine tasks they have long since mastered, freeing up time for complex and interesting questions. For this very reason, dealing with AI and using it reflectively in schools is also a question of educational equity.
The Evangelisch Stiftisches Gymnasium is looking for partners who want to test further possible uses of AI in schools. If you are interested in collaborating, please contact Hendrik Haverkamp via Twitter or LinkedIn.