We are seeing the first guidance to students on the use of ChatGPT and AI tools in assessment starting to emerge. Getting the terminology right is a little challenging, so we are going to explore it here.
Much of the guidance is, for the time being at least, telling students not to use AI in assessment. We aren’t making a comment on this advice here – that is very much an academic issue, although we do think that these statements are going to be a holding position while assessment and curriculum are reviewed to reflect a world where AI is pervasive.
There are some risks in the wording of these statements that might not be immediately obvious though, so we’ll explore that specific issue in a bit more detail now.
Risks in getting the terminology wrong include:
- Inadvertently banning applications such as Microsoft Office or Grammarly.
- Inadvertently classifying the use of common features in tools such as Microsoft Office 365 as academic misconduct.
- Missing applications other than ChatGPT that use generative text that you’d wish to include.
We suggest that an alternative approach might avoid some of these issues: clarifying acceptable student behaviour rather than specifying prohibited technology.
The problem lies in how embedded AI is in our everyday tools already, and how this is likely to develop over the next few months.
We’ll now look at some phrases that might cause problems.
Prohibiting AI and AI tools
- “Students must not use AI tools in assessment”
- “Use of AI in assessment is prohibited”
The problem with this is that so many tools we use are AI tools, or contain large amounts of AI functionality, and this is accelerating all the time.
As an example, Microsoft regularly add new AI features to Microsoft Office, and many more are coming. The ‘Editor’ function includes a range of AI features, including tone suggestions, tools to rewrite text to make it more concise, and a generative AI feature to create summaries. Microsoft talk about this as ‘everyday AI’.
Grammarly is another commonly used application, which may not sound like an AI tool, but actually makes extensive use of AI.
As well as tools used in the writing of an assignment, students are likely to use many other tools which make use of AI, and again, this is increasing all the time. Academic examples of these include research tools such as Scite and Elicit.
We should also be mindful of the fact that AI tools are useful in supporting accessibility and so need to be careful not to inadvertently disadvantage particular groups. AI tools currently in use include Grammarly Premium (often used by dyslexic students) and AI-based transcription tools. Discussions on how tools such as ChatGPT might help accessibility are already underway.
- “Use of ChatGPT in assessment is prohibited”
It might seem tempting just to ban ChatGPT, but the core AI model behind ChatGPT has been available to developers for well over a year and is integrated into many other applications, typically marketed as writing tools. This includes tools such as Jasper, WriteSonic, Rytr and many, many more.
Creating a list of banned applications that covers all generative AI-based applications is an almost impossible task, and putting the responsibility on students is problematic, as it’s very hard to tell what technology these applications actually use.
Prohibiting text generated by AI
- “Students must not submit text generated by AI”
- “Use of generative text tools is prohibited”
On the face of it, this seems simpler, but then again there are problems with the detail. As I write this in Microsoft Word, I have predictive text enabled, and AI is indeed generating some of the words that I’m writing (it just wrote the word ‘writing’ for me then!).
I make extensive use of Grammarly and often utilize it to revise sentences, improving their readability and rectifying mistakes. Yes, that last sentence was rewritten for me by Grammarly. It’s been generated by AI. This is almost certainly not the kind of use case we actually want to prohibit though.
ChatGPT excels in improving the clarity and readability of text. It’s highly likely that the features it provides for this purpose will soon be integrated into Microsoft Office. And yes, I did get ChatGPT to rephrase that for me, asking it to make it clearer. This sort of functionality is built into many writing tools already and, again, is probably not the sort of use case we are trying to prohibit, especially since, as we noted above, it is so widely used to support dyslexic students, for example.
So AI-generated text is already part of the tools that we use every day, and these features are only going to increase. Arguably the first two examples we looked at weren’t created by ‘generative AI’ as the term is commonly used at the moment, but that’s more of an implementation detail, and something that’s going to be very hard for students to determine, as hopefully shown by the third example.
As an aside, there is a very real risk going forward that text modified or generated by everyday writing tools will be flagged by AI detection tools, if we end up going down that route – almost certainly not what we want.
So what should we do?
Trying to give a technical definition of what students can do is fraught with problems.
We’ve explored the academic regulations and policies of a number of universities, and many contain guidance around proofreading, grammar checking tools and contract cheating that could easily be adapted or applied directly to AI. These regulations and policies describe the behaviours that are unacceptable rather than trying to give a definition of technical tools that are prohibited, and this seems a better approach, especially given the pace of change in AI at the moment.
Plain language phrases along the lines of “Attempting to pass off work created by AI as your own constitutes academic misconduct” should avoid the pitfalls of trying to ban hard-to-define technology, for example as seen in this guidance from UCL.
We’re interested in feedback on this, so get in touch if you disagree with this approach or have any suggestions or additional ideas.
Find out more by visiting our National centre for AI page to view publications and resources, join us for events and discover what AI has to offer through our range of interactive online demos.
For regular updates from the NCAI sign up to our mailing list.
Get in touch with the team directly at NCAI@jisc.ac.uk