
Means, Motive, Opportunity: A Composite Narrative about Academic Misconduct

Academic misconduct seems to have been brought into sharp relief over the last few months, amid a huge amount of hyperbole around large language models, or “generative AI”. Probably the most talked-about development was the arrival of ChatGPT just before Christmas 2022; social channels quickly lit up with discussions about how it would disrupt education, often with polarising views. But academic misconduct has always been a concern in education. The most common question is “why do students cheat?”, but that just deflects from a range of other issues. It also diverts us from another question: “why do academics cheat?”

For several weeks I have been gathering anonymous stories from academics and students. In this post, I have sought to weave together some of the student and staff voices, along with other non-sourceable data, to create some understanding of how large language models and so-called generative AI have been used. However, as the interviews continued, other topics in adjacent areas were discussed and are presented in the research context too. These are the places where some of the voices may be viewed by some readers as having crossed a line into academic misconduct.

The term “composite” here refers to the narrative being constructed from multiple voices and other sources. This composite narrative aims to create an honest and open understanding of the cultural use of generative technologies, and of the adjacent activities brought to light in the course of the research, while allowing contributors to speak without fear of prosecution for actual or perceived misconduct.

I want to start by saying that I believe no one sets out to cheat or behave dishonestly. That may be seen as naive, but I do not want to work in a system that focuses on “policing behaviours”. I do, however, think it is worth looking at academic misconduct through a familiar police adage that writers of crime fiction seem to employ constantly: “Means, Motive, Opportunity”.

First, here are two composite stories that draw on some of the accounts I collected during the research; again, these are fictionalised, but they pull together real voices.

Sam  

Sam is a second-year student studying a STEM subject. They enjoyed the first year as none of it “counted”, but as they started their second year they felt more pressure, as the work now “counts” toward their final degree classification. Added to this, they work part-time in a local supermarket, though some weeks the job is part-time in name only: to make ends meet, they sometimes do more than 30 hours of work. Sam feels that the academic environment they are studying in is highly competitive. They want to do well, they are thinking of doing postgraduate studies, and they feel they need the grades to do that. “Every assessment is important and judged”, and in Sam’s mind they are all equally weighted. Sometimes lots of assessments from different modules come all at once. In January, with a deadline approaching, Sam started using ChatGPT to expand the word length of some of their work and to improve the grammar of their drafts. This felt like a good time-saving device and did not get flagged by a plagiarism detector. As the weeks went on, some of the assessments started to feel like “everything was coming at once”, and at this point Sam started thinking about using the tool to alleviate their workload. “I got an essay online, just a poor example really, I knew it would get flagged in the checker, but I also knew I could rewrite it quickly to suit the exact essay title, using ChatGPT, I did this by doing it in small sections where I had the references to back up the argument, only took me an hour to do the whole thing, and I did it for a module I didn’t like, I wouldn’t have done it for XXXX, I enjoy that.”

Jamie 

Jamie is employed as a researcher at a university and also teaches undergraduates in a STEM-based subject. They became aware of ChatGPT through people talking about it on Twitter. They initially viewed it as just another tool they might have to learn, or attend staff development sessions on, but then they started seeing people posting examples of its outputs. Admittedly, they considered most of the outputs they saw mediocre, and in some specific examples wrong. But they started to take it a little more seriously when they saw people using it for things like grant applications. The first time they used it was to shorten an abstract for a paper to the required length (they reduced over 700 words to exactly 500), and “it was still readable and carried the point well”. After the Christmas break, and with a lot of teaching commitments upcoming, they “used it to write an assignment brief and marking scheme. it took half an hour to finesse it, but it took the drudgery out of trying to formulate verbs and adjectives”. As a new employee, they were also under pressure to complete a professional accreditation. “As a scientist, I don’t particularly like writing reflections and feelings, I like that I can, with a few of the right prompts, get the chatbot to write a fictionalised reflection that looks about right, where’s the harm, it just ticks a box”.

Means 

Technology has provided students with many ways to access and share information, making it easier than ever to fall into accidental collusion or plagiarism, or simply to cheat. Essay mills offer students the chance to have their essays written for them by professionals. This is rarely cheap; essays cost more depending on the grade promised or the risk of plagiarism detection. Students no longer need to spend time researching and writing their essays; instead, they can simply pay for an essay and submit it as their own work. This presents a significant challenge for educators, as detecting this type of cheating can be incredibly difficult, especially if the actual author has checked and tested it for plagiarism. But essays of that quality can be expensive, and not all students will have the means. In the case of Sam, they bypassed the expensive element, the good-quality paid-for essay: they took a similar but lower-quality essay, and ChatGPT, which is currently free to everyone, gave them an accessible means to rewrite and improve it.

In the case of the academic, Jamie, they had to write a reflection on a piece of their teaching practice. They do not like writing in this way, and they decided that access to ChatGPT would provide the means of doing something they did not want to do. Jamie was clear that the work they submitted was “fictionalised” (their word).

Motive 

For a lot of students, the pressure to succeed academically can be overwhelming; family, peers, friends, and actual or imagined future employers can all contribute to how a student experiences that pressure. Logistically, pressure can build when students perceive or experience assessment congestion, where multiple assignments are due at the same time or exams are scheduled close together. Financial pressures can compound this stress further, especially if students also need to work while studying to support themselves. As a result, students may feel that they have no choice but to take shortcuts in order to keep up with their workload and maintain their grades. Academic misconduct may not necessarily be malicious, but rather a desperate attempt to cope with mounting pressures. The motives for Sam were many: assessment congestion, the perceived need to compete, and working while studying.

For the academic, Jamie, the motivation was less about work pressure and more about their perception of the work they had to submit. They felt the exercise was a “tick box”: the institution employed them to do research, and yes, they have to do a little teaching, but the research is the important thing. The university wants them to have this accreditation, but it is just a tick box, so they justify writing their reflections with a tool that generates fiction.

Opportunity 

Poorly designed assessments can provide students with ample opportunity to engage in academic misconduct, as can assessments that students cannot relate to the real application of what they are learning. It is harder to design good, authentic assessments that do not allow extensive use of a tool such as ChatGPT, or the purchase of essays online, but it can be done. The opportunity to engage in academic misconduct is greatly increased when we design assessments that neither promote critical thinking and problem-solving nor allow students to demonstrate what they have learned in authentic ways.

When it comes to the opportunity for academics to move along a continuum toward academic misconduct, we need to look at the same issue: are we designing, in the case of Jamie, an authentic and relevant assessment for their accreditation? Certainly, on social media and at events we have seen academics talking about using ChatGPT to help them write their arguments to achieve various forms of professional accreditation in their teaching practice. In a discussion with Professor Sally Brown, Emerita Professor of Higher Education Diversity in Teaching and Learning at Leeds Beckett University, she saw issues even before the advent of the chatbot technology:

 “I think the trouble with [some] Fellowship applications is that people now adapt quite a formulaic approach, and there’s a lot of coaching going on so the extent to which [some] fellowships are an individual achievement is sometimes stretched a bit” 

The opportunity for people to provide a response that will probably achieve their goal, without the learning behind it, is much greater when we don’t design our assessments effectively.

 Means, Motive, and Opportunity 

Means, motive, and opportunity are key elements to consider when we think about the assessment of students or our own professional accreditation. Staff and students have always had motive and opportunity, but the means have always been expensive. ChatGPT and similar tools have now lowered the bar to accessing the means. Coupled with increased workloads for staff and students, the pressure on some students of combining work and academic achievement, and the perception of assessment congestion and other logistical issues, this has strengthened the motives. Finally, the way we design assessment and accreditation gives students and staff the opportunity to use these tools in ways that ignore the learning process to a greater or lesser extent.

We need to ask ourselves serious questions. 

  • How can we create assessments that promote genuine learning, critical thinking, and problem-solving, rather than encouraging students to find ways to cheat the system? Can you cheat if it is an authentic assessment (one where you would be performing the same task after your education, with access to the same tools)? Are our own professional accreditation methods robust enough to negate the use of generative AI?
  • To what extent should we hold students accountable for academic misconduct when the pressures of assessment congestion and of balancing work with study may be driving their behaviours?
  • Are we modelling the right academic behaviours? We see chatbots being used for a range of academic activities and accreditation, so why shouldn’t students shortcut some of the work and go straight to the end goal?

If we can’t control the means (ChatGPT is out of the bottle and out of our hands; cost may eventually be a limiting factor, but not at the moment), then we need to look at the other two aspects, motive and opportunity. What can we do to reduce the pressure on students and re-engage them with learning? What can we do to take away the motivation to use these tools in ways that don’t help their learning outcomes? Finally, we need to look at what we are assessing and how we do it, reducing the opportunity.

Footnote 

Over the course of January through mid-March, confidential and sometimes anonymous text-based interviews took place through a range of media, including WhatsApp, Messenger, anonymous forms, and DMs on other social media. Approximately 40 students across STEM and humanities were included, and 27 academic and academic support/professional services staff also contributed data. The author is grateful for their honesty and trust.



 

By Lawrie Phipps

Senior Research Lead, Jisc


