Our November community meeting followed the usual Lean Coffee format, with a lively discussion covering three areas:
AI-Assisted Marking Tools: A Slow yet Steady Integration
The conversation revealed a cautious but growing interest in AI-assisted marking tools. While some institutions are experimenting with proprietary solutions, many are awaiting enhancements to existing systems such as Turnitin’s GradeMark, which is expected to incorporate AI features by 2025. Concerns were raised about how well these tools adapt to local academic standards, particularly whether referencing styles would align with the Americanised conventions prevalent in current AI tools.
Despite the potential benefits, there is a clear lag in fully integrating AI into popular grading tools such as Canvas’s SpeedGrader, which currently does not use AI. This slow adoption opens opportunities for institutions to develop and test their own AI-assisted solutions tailored to their specific needs and constraints.
Navigating Academic Integrity in the Age of AI
A significant portion of the meetup discussion centred on academic integrity. Participants shared approaches to ensuring that students understand and can account for the work they submit, particularly in the context of the increasing use of generative AI technologies. Some institutions have implemented work review meetings in which students are required to demonstrate their comprehension of the submitted work, escalating to academic integrity cases if they cannot.
To address the difficulty of verifying AI’s role in student work, some universities are incorporating AI usage declarations on coursework cover sheets. This measure aims to foster a culture of honesty while accommodating the educational use of AI. The discussions highlighted the need for clear, specific guidelines to reduce ambiguity about what constitutes acceptable AI use in academic work.
Challenges and Future Directions
The integration of AI in education is not without its challenges. Financial constraints, particularly in smaller or less-funded institutions, limit the ability to trial and adopt new technologies. Privacy concerns and data protection are also significant hurdles, with institutions wary of using tools that may compromise student privacy.
Looking ahead, the discussions emphasised the importance of supporting students in using AI tools safely and responsibly. Ensuring equitable access to AI technologies, especially through the provision of free tools, is critical to prevent any disadvantage based on financial or geographic barriers.
The discussion highlighted a blend of enthusiasm and caution about integrating GenAI into higher education, raising both practical challenges and ethical implications. Discussions like these are invaluable in helping the community understand the landscape and shape its responses.
Our next meeting is on 10th December at 3.30pm – do come along and join the discussions.
Links shared in the meeting:
- Graide Pilot blogpost
- Full article: Addressing student non-compliance in AI use declarations: implications for academic integrity and assessment in higher education
- BPP shared policy documents, which are available in our previous blog post, Navigating the Future: HE policies and guidance on GenAI
Find out more by visiting our Artificial Intelligence page to view publications and resources, join us for events, and discover what AI has to offer through our range of interactive online demos.
For regular updates from the team, sign up to our mailing list.
Get in touch with the team directly at AI@jisc.ac.uk