From Plagiarism to Progress: Advance HE’s Assessment and Feedback Symposium
Published: 6 December 2024
Dimitar Karadzhov, Mia Wilson, Julie Langan Martin and Laura Sharp reflect on Advance HE's Assessment and Feedback Symposium, which took place on 5th November.
Dimitar Karadzhov and Laura Sharp delivered the presentation, ‘A Multi-Pronged Approach to Plagiarism Prevention and Academic Integrity Training’ (with co-authors Eric Davies, Ailsa Foley, Julie Langan Martin, and Mia Wilson). The key messages stemming from our work in this area are:
- Plagiarism is a complex, multi-layered issue and, therefore, requires a multi-pronged approach to prevention.
- Instructional guidance should consider the distinct needs and challenges of specific sub-cohorts, such as international and postgraduate students.
- Plagiarism can be viewed as a transitional issue, including in terms of cultural transitions.
- Gamification can be meaningfully introduced into academic integrity training. However, special attention to equality and inclusion is needed.
The conference session opened with a range of students highlighting challenges they face with assessment. These included:
- Over-Assessment: This is when the same skills are repeatedly assessed across courses within the programme.
- Unhelpful Responses to Clarification Requests: Students expressed that when they seek clarification about assessments, staff often appear to perceive this as a request for handholding rather than a simple request for one-off guidance.
- Assessment Overwhelm: The students felt there were times when they had an unrealistic number of activities and assessments to juggle and that there was little communication between courses to help them navigate this.
The day then unfolded with a range of presentations, workshops, and lightning talks, revolving around several themes including authentic assessments, rebalancing marking structures, and artificial intelligence (AI).
Authentic Assessments
Authentic assessments measure students' abilities to apply knowledge and skills in real-world contexts, emphasising understanding and critical thinking rather than memorisation. One of the presentations highlighted that a large proportion of students come from heavily exam-focused institutions, which can make it difficult for them to adjust to more authentic assessments. The presenters discussed a potentially valuable approach that involves engaging with students during the induction period to understand their prior assessment experiences, needs, and concerns. This enables course leads to provide more targeted support. For the Research Methods module of the Global Mental Health (GMH) on-campus programme, Dimitar enquired about students' academic backgrounds and found that most had conducted primary research before, but few had systematic review experience.
Others spoke about the importance of including not just students but industry partners in assessment design. For example, businesses could be asked what types of activity they would like students to be assessed on. Dr Gwenda Mynott of Liverpool John Moores University spoke of how they had involved a local business leader in marking. Another presenter highlighted that teaching teams should support students to discuss, during job applications and interviews, how their authentic assessments relate to the roles they are applying for.
Rebalancing Marking Structures
An innovative change in practice that caught our eye was the allocation of 25% of the dissertation grade to students' supervision experiences. Students create a portfolio in which they log their scheduled meetings and record the work they undertake around them (e.g. papers read or statistical analyses completed, and their implications). Students are encouraged to prepare agendas and minutes and share these as part of the process. The supervisors involved in this initiative reported increased engagement, and recorded attainment improved. Another observed benefit was that academic time spent on supervision was reduced, as meetings became more productive.
A second intriguing approach came from Dr Giulia Getti at the University of Greenwich, who, by reducing the length of an assessment to 500 words, had made it feasible to provide feedback on a draft. A percentage of the student's overall mark was then allocated according to how well they had applied the feedback in their final version.
AI
In recognition of the rapidly evolving digital landscape, there was a strong focus on the transformative effect AI is having on education. Students were recognised as increasingly tech-savvy, with a wealth of AI-related knowledge that often surpasses the expertise of academic teams. The need to continuously readjust and learn was a key message. Interesting recommendations included:
- Limited Paraphrasing is OK but Beware of Certain Sites: One discussion topic was whether students can use generative AI to paraphrase within assessments. The consensus was that they can use AI to a limited extent to re-word their own content for conciseness. However, some less rigorously managed AI sites may host predatory software from contract-cheating companies. Student work uploaded to these sites can be accessed by other students without the author's knowledge, potentially leading to plagiarism issues.
- Co-Produce Assessments Using AI: Co-creating assessments with students empowers them to take ownership of their learning and fosters a spirit of adventure and exploration. Educators were encouraged to draw on AI to co-create new assessments or adapt current ones. One example was to ask students to respond to an assessment brief using an AI-generated scenario that they had created using the brief and the intended learning outcomes.
- Teach Responsible AI Skills: It was argued there is value in staff and students co-learning how AI can be responsibly leveraged. For example, reducing the number of follow-up questions by optimising the quality of the initial prompts can help minimise the environmental impact of AI. One teaching activity proposed was to give the class a topic and have them collaborate in groups to refine possible questions for AI into five excellent prompts.
- Embed AI Use into Marking Rubrics: To effectively evaluate students’ AI skills, marking rubrics could be adapted to potentially reward those who document their AI use appropriately or demonstrate creative application of prompts.
- Using Research Trails: Dr Yvonne Jacobs at Royal Holloway, University of London talked about how, before generative AI, she required students to write a short paragraph at the end of their submission describing how they had found their sources (e.g. databases and keywords used) and checked their quality. She suggested that incorporating and expanding this approach to include AI interactions could enhance good academic practice and be drawn upon during misconduct hearings.
Amidst all these ideas, however, Gonsalves (2024) spoke urgently about the need for simplified AI guidelines with illustrative examples, and for faculty to communicate and enforce them consistently. This was based on their research indicating that non-adherence to AI-use declarations in assessments was largely rooted in confusion and in fear about the consequences of declaring such use.