How can I adapt assessment to deal with generative AI?
It is important to recognise that the use of GenAI in assessment should not, in all cases, be treated as plagiarism (see the University of Glasgow's plagiarism policy). Rather, we should carefully consider how we design assessment to ensure that students can demonstrate skills beyond pure knowledge recall.
We gathered examples from colleagues across the University of Glasgow outlining ways that you might change assessment. All of these examples can be adapted to your college, and most changes that colleagues will wish to make fall within the parameters set by the description of the assessment in the course specification. The following suggestions are small but effective changes, and we hope they inspire colleagues to think creatively about their assessment and help to bring meaningful change into assessment design.
Assessment Design Considerations
Context and scenarios are important!
Meaningful assessment (see Learning Through Assessment) is a key pillar of UoG assessment that has become even more important in the context of GenAI tools. Requiring students to reflect on or include their physical or social context is key because this cannot be created by GenAI. This approach could include reflections on personal events relevant to students’ own lives, classroom or study experience, or social factors. Examples might also draw on very recent sources of information from local or national news, which has the additional advantage of ensuring that student work is up to date. While it is accepted that content could be generated by GenAI, a more meaningful design requires the student to interject and integrate contemporary sources and context. Students will thus be required to interact with the GenAI-generated text to produce outcomes and, by extension, learn this process and skill.
Update examinations
In-person (invigilated) examinations are not the ideal solution to the challenges that GenAI tools pose for assessment. From extensive research, we know that invigilated examinations are not conducive to meaningful or inclusive assessment (see Learning Through Assessment, pp. 9 and 15). The recent and ongoing development of GenAI tools requires us to consider their role in different types of examinations. While there is still a place for invigilated examinations, we must consider how to bring GenAI into that setting. For example, rather than asking students to engage in knowledge recall in examinations (e.g., reproducing a mathematical proof or derivation, or writing down the definition or characteristics of a certain type of behaviour), you could use GenAI to generate these items and then require students to critique them or fix particular problems. Similarly, rather than asking students to write an article on a contemporary topic, you could generate one with GenAI and require students to identify areas for improvement and/or critique the way it is constructed. Alternatively, there is an opportunity to draw upon unique data sets introduced during the semester to create examinations that require students to engage in critical analysis and apply skills during the examination.
Generate resources for comparison
Generative AI tools can produce artefacts for comparison purposes, both inside and outside the classroom. Inside the classroom (and possibly for in-course assessment), you could ask students to generate an artefact via generative AI (e.g., a piece of prose, a poem or a business plan) and then work in groups to present and critique it. Assessment could be based upon what they can improve or extend beyond what has been generated, or indeed what mistakes are present in the generated response. To overcome the unreliability of information that AI tools may produce, comparison against another trustworthy source is key. This approach can be replicated for assessment undertaken outside the classroom: evidence of the AI content can be appended to the submission so that students show how they have used it. Generative AI output also serves as an additional artefact when students are generating their own feedback through Active Feedback.
Use AI to create assessment resources
It can often be challenging to create meaningful assessments for students that are personalised and specific, in particular generating the fictitious scenarios that give assessments background and a realistic edge. You could use AI to generate case studies, for example, which you can then edit with additional information and scenarios. This could significantly speed up the assessment-writing process and enable you to generate additional case studies for multiple groups of students, which aligns with our desire to work towards more meaningful assessment. Creating unique case studies also makes it almost impossible for groups to collude on their responses. Having spent less time on generating the case studies, you could invest that time in making them even more specific to students, their personality types and/or areas of interest.