Legacy post: AI is a fast-moving technology, and this post now contains out-of-date information. It remains available for readers who need to reference older articles.
Generative AI is a branch of AI that creates new content, such as text, images, or speech. Generative AI tools can be used for many purposes in education, including creating questions, lesson plans, summaries, feedback, or ideas for project titles. However, the technology also comes with myths, challenges, and limitations that need to be addressed. In this blog post, we aim to debunk some of the common myths surrounding generative AI in education and look into its difficulties and restrictions.
Common Myths Surrounding Generative AI in Education
There are many misconceptions and misunderstandings about generative AI in education, such as:
Myth 1: Anyone can use generative AI.
The answer to this is continually changing. For most AI tools, we found that users need to be 13 with parental permission. However, Google announced in November 2023 that teens will be able to use Bard from the age of 13. Google has said it has added guardrails to protect users: the tool has been trained to recognise topics that are inappropriate for teens, to help prevent unsafe content from appearing. Age requirements need to be considered before asking learners to use specific tools.
Myth 2: Generative AI can create perfect and original content.
This myth overlooks the limitations and risks of generative AI. Generative AI cannot create perfect, original content; instead, it estimates content based on a given prompt and its training data. Teachers and students still need to use their own knowledge and judgement to check AI-generated content.
Myth 3: AI detectors can distinguish between human- and AI-generated content, allowing teachers to check whether learners have used AI.
There are many AI detectors available, but they can give false positives. Text written by non-native English speakers is often mistakenly identified as AI-generated rather than human-written. Another point to consider is that generative AI may have been used for a perfectly reasonable purpose, for example to get feedback and improve writing, yet the result will still be flagged as AI-generated.
Myth 4: Generative AI tools such as ChatGPT and Google Bard are learning from our prompts.
This is partly true: some tools use user inputs and outputs to learn. We conducted research into tools' terms and conditions and whether they use user inputs for training. Some AI tools will hold the data for 30 days but not use it for training.
Some AI tool providers, such as Microsoft Copilot (previously called Bing Chat Enterprise), protect the data of A3/A5 licence users and organisations. Microsoft does not save chat data, no one at Microsoft can see organisational data, and prompts and other data are not used to train the generative AI tool. It is important to check where your data is going.
Myth 5: Qualification bodies have banned the use of all AI.
No, from our research, qualification bodies have not banned the use of all AI. However, they have issued guidance and regulations to ensure the integrity and quality of assessments that involve AI. Here are some of the key points:
The Joint Council for Qualifications (JCQ) released a document to help teachers and assessors handle AI during assessments, called “AI Use in Assessments: Protecting Qualifications.” The document covers topics such as what AI is, how AI can be used in assessments, the benefits and risks of AI, best practices and ethical principles for using AI, and the consequences of AI misuse.
The Department for Education (DfE) has released a policy paper on “Generative Artificial Intelligence (AI) in Education.” This paper sets out the DfE’s position on the use of generative AI, such as ChatGPT or Google Bard, in education, covering its opportunities, challenges, and responsible use.
Myth 6: Generative AI has bias and can create false information.
This is true. AI reflects biases in the data it has been trained on. For example, if someone asks an AI image generator to create a picture of a nurse, it will probably show a female nurse. To reduce bias, give clear instructions about the role and persona when prompting generative AI tools. A generative AI hallucination occurs when a tool presents incorrect or misleading information as if it were true. This can happen for various reasons, such as the quality or quantity of the data used to train the model. Teachers and students should use their own knowledge and judgement to verify AI-generated content.
Myth 7: It is possible to redesign all assessments to outwit generative AI.
In reality, no, although some assessments can be redesigned to reduce the risk posed by generative AI. However, learners benefit from different types of assessment; it would not be suitable for them to only take exams, which would also increase their workload. With such rapid developments, it is very difficult to stay ahead. An NCAITE working group led by University College London collaborated to provide assessment suggestions in ‘Designing Assessment in an AI Enabled World’, which offers educators ideas for redesigning assessments.
Myth 8: Anything can be input into generative AI.
It is important to understand what happens to data before using any generative AI tool. In some cases, once data is entered it will no longer remain confidential. This is more likely to be the case with a free tool, so you should never enter any personal or confidential data into a system unless you are sure that your university or college has appropriate contracts in place to ensure data security.
At the moment, the main generative AI solution you are likely to come across with an appropriate contract that will guarantee data security is Microsoft Copilot (Bing Chat Enterprise), although other solutions provided by your institution, such as TeacherMatic or Blackboard’s course generator, will also have appropriate contracts in place. This is likely to be a fast-moving space with many more systems arriving, so if in doubt, check with the appropriate person in your institution.
We hope this blog post helped you debunk myths and offered some solutions. Please feel free to leave your feedback, questions, or suggestions in the comments below.
Find out more by visiting our Artificial Intelligence page to view publications and resources, join us for events and discover what AI has to offer through our range of interactive online demos.
For regular updates from the team sign up to our mailing list.
Get in touch with the team directly at AI@jisc.ac.uk