
Student concerns around generative AI

Two women sit working at a shared laptop in an office setting.
Photo by Christina @ wocintechchat.com on Unsplash

Following our initial Student Perceptions of Generative AI report last year, we recognised the need to continue the discussion with students/learners as the technology continues to evolve.

Over this past winter, we have run a series of nine in-person student discussion forums with over 200 students across colleges and universities to revisit student/learner perceptions of generative AI. Our goal was to understand if and how views on generative AI have shifted, identify emerging usage and concerns, and explore the developing role students/learners want these tools to play in their educational experience.

As institutions grapple with developing policies and guidance, and with the complex pedagogical shifts involved, capturing the authentic student voice remains crucial to inform responsible AI integration that both empowers students/learners and maintains academic integrity.

We’ll now focus on the concerns students attending our forums raised around generative AI:

Information Literacy and Education

Students raised the increasing prevalence of misinformation, together with generative AI's ability to create plausible untruths. Students/learners want to be able to distinguish easily between reliable and unreliable information, and understand that the ability to sift through AI-generated information is as crucial as any traditional academic skill.

There was a diversity of views around information literacy, with some students/learners reporting that they felt they had the relevant skills to critically evaluate outputs, and others wanting their institutions to support them in developing this vital skill.

AI Plagiarism and Detection

Students/learners we spoke to understand the need to distinguish between those who cheat and those who do not. However, there was a strong feeling that the lack of clear guidance on how to use generative AI responsibly and ethically could lead to increases in misuse through errors of interpretation. There was a clear request for guidance that was fair to all.

Students stated they were concerned about the known bias in detection tools against non-native English speakers, and felt institutions using detectors needed to respond to this and take concrete steps to mitigate the bias. They also felt a way to challenge decisions was needed where they feel they have been disadvantaged.

Dependence and Originality

The students/learners mentioned a range of concerns, such as the risk of becoming dependent on generative AI to produce written content and losing the ability to create from scratch. They were also concerned that relying on these tools to research new areas could erode their ability to critically evaluate resources.

Students were concerned that over-reliance on AI for tasks such as writing essays or producing research could lead to a decline in critical thinking skills. They clearly stated a need to ensure that they did not lose out on intellectual development by using generative AI tools inappropriately or excessively.

Students/learners expressed a need to retain their individuality and unique voices, and articulated a fear of how this would be impacted as use of generative AI tools increased.

Ethics and Bias

Students/learners raised the question of whether any use of generative AI in education should be considered cheating, as it gives an advantage over those not using the technology. There was a range of opinions on this, with some stating that they didn't use generative AI for any educational purpose and others avoiding its use on assessments, due to the lack of clarity on this point.

Students recognised that there are inherent biases in generative AI systems, often reflecting disparities in race, gender, and socioeconomic status. They were concerned that these biases would be exacerbated by increasing employer use of AI to sift candidates. Students/learners felt quite strongly about the need for critical review of AI-generated content to avoid perpetuating stereotypes.

Students/learners felt quite strongly that where an institution's approach to generative AI isn't consistent, it will disadvantage some students – they want a fair, universal approach. They also raised the issue of increasing digital inequity, with those able to pay having access to better tools.

Data, Privacy and Copyright

Students/learners we talked to were, in the main, aware of the risks of AI systems containing or exposing personal data, but levels of concern over this varied tremendously, with many assuming they were covered by GDPR and/or their institutions' policies when using these tools.

Students discussed the trade-off between privacy and efficiency when using generative AI, with some expressing resignation about the loss of personal data privacy, whilst others were more sanguine about the trade-off, particularly those creative students wanting to develop an audience for their work.

Copyright was raised as a concern from two angles: ‘Who owns work co-created with generative AI tools?’ and ‘How can I ensure that I am not inadvertently plagiarising someone else’s work, without crediting or paying them, when I co-create using generative AI?’.

AI Skills and Employability

Students/learners in our discussion forums were concerned about acquiring the necessary generative AI skills for future workplaces due to potential bans or restrictions on these tools by their institutions.

They raised the issue of how they can keep pace with generative AI developments, and how these would be embraced and embedded by their institutions in current policies and teaching practices.

AI’s Impact on Human Behaviour and Society

Students/learners were deeply concerned about the potential for generative AI to influence human behaviour, particularly through the spread of misinformation and the rise in criminal activity enabled by deepfakes. One of the alarming examples discussed was the rise in celebrity deepfake pornography.

Developments since Spring 2023

Reflecting on the student dialogues from Spring 2023 to Summer 2024 shows that student/learner perspectives are still evolving and new priorities are emerging.

Students/learners have moved to a more determined request for information literacy and employment-related skills, with many remarking that this would influence their choices going forward.

Conversations around ethics and equity in AI use were initially a mix of unease and a plea for regulation. Now, students have moved to a more sophisticated understanding that while AI’s potential to advantage some is undeniable, it needs a responsible approach that promotes fair access and addresses the built-in biases that perpetuate societal disadvantage.

Copyright, and a lack of understanding around ownership, continues to be an issue, now with more sophisticated nuances around creative use.

These shifts illustrate students/learners are becoming more empowered and articulate in their demands for an educational environment that harnesses generative AI responsibly — one that equips them with the necessary skills and critical faculties to flourish in a future where generative AI is an inextricable part of employment and society.

Thanks

We’d like to thank the following institutions for supporting our student generative AI discussion forums:

  • University of the Arts London
  • Belfast Metropolitan College
  • University of Bolton
  • Gateshead College
  • Glasgow College
  • Midlands Innovation
  • Northern Regional College
  • Queen's University Belfast
  • Southern Regional College
  • University of Ulster

Find out more by visiting our Artificial Intelligence page to view publications and resources, join us for events and discover what AI has to offer through our range of interactive online demos.

For regular updates from the team sign up to our mailing list.

Get in touch with the team directly at AI@jisc.ac.uk

