Publishing an intro to generative AI is a challenge as things are moving so quickly. However, we think things have now settled down enough for us to bring together information in a single place, to create a short primer. We aim to publish this as a more formal guide that will be updated regularly, but we are posting an initial version as a blog post to get feedback on whether it is useful and if there is other information you would like included.
Version 1.1 – 22 May 2023
Table of Contents
1. Introduction
2. An Introduction to the Generative AI Technology
2.1 ChatGPT
2.2 Microsoft Bing Chat, Google Bard and Anthropic’s Claude
2.3 A summary of key capabilities, limitations, and concerns around ChatGPT and other Large Language Models
2.4 Image Generation
2.5 Microsoft Copilot and everyday AI
2.6 Beyond Chatbots
3. Impact of Generative AI on Education
3.1 Assessment
3.1.1 Guidance on advice to students
3.1.2 The role of AI detectors
3.2 Use in Learning and Teaching
3.2.1 Examples of use by students
3.2.2 Examples of use by teaching staff
3.2.3 Examples of uses to avoid
3.3 Adapting curriculum to reflect the use of AI in work and society
4. Regulation
5. Summary
1. Introduction
Generative AI tools such as ChatGPT are already having a significant impact on education. These tools are posing considerable challenges around assessment and academic integrity, but also present opportunities, for example saving staff time by helping with the creation of learning materials or presenting students with new tools to enhance the way they work. The impact of generative AI is being felt far beyond education, and it is already starting to change the way we work. This presents more challenges and opportunities, in making sure education prepares students for an AI-augmented workplace, and that assessments are authentic yet robust.
This primer is intended as a short introduction to generative AI, exploring some of the main points and areas relevant to education. It covers two main elements:
- An introduction to Generative AI technology
- The implications of Generative AI for education
2. An Introduction to the Generative AI Technology
Whilst we do not need a detailed technical understanding of the technology to make use of it, some familiarity with how it works helps us appreciate its strengths, weaknesses, and the issues to consider. In this section, we look at the technology in a little more detail.
This is a fast-moving space, and the information here is likely to age quickly! This edition was written in May 2023, and we aim to update it regularly to take into account significant developments.
This guide starts by looking at AI text generators, also known as Large Language Models (LLMs).
2.1 ChatGPT
ChatGPT has grabbed most of the headlines since its launch in November 2022. It was created by a company called OpenAI, which started as a not-for-profit research organisation (hence the name) but is now a fully commercial company with heavy investment from Microsoft. It is available in a free version, plus a premium version, ChatGPT Plus, at $20 a month, which provides faster, more reliable access, as well as access to its latest language models and features, including plugins, which change its behaviour significantly.
ChatGPT is based on a machine learning approach called ‘transformers’, first proposed in 2017, and is pre-trained on large chunks of the internet, which gives it the ability to generate text in response to user prompts, hence the name ‘Generative Pre-trained Transformer’. Whilst OpenAI provided some information on the approach used to train ChatGPT, they haven’t so far released any information about GPT-4, the latest model, released in early 2023.
In its standard mode, without plugins, ChatGPT works by predicting the next word given a sequence of words. This is important to understand: it is not, in any sense, understanding your question and then searching for an answer, and it has no concept of whether the text it is producing is correct. This makes it prone to producing plausible untruths or, as they are often known, hallucinations.
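To make this concrete, the sketch below shows what ‘predicting the next word’ looks like in practice. It uses the freely available, open-source GPT-2 model via the Hugging Face transformers library rather than ChatGPT itself (whose models are not public), so treat it as an illustration of the general principle only.

```python
# A minimal sketch of autoregressive (next-token) generation, using the open-source
# GPT-2 model from the Hugging Face transformers library. ChatGPT itself is not
# open source; this only illustrates the general principle.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

prompt = "Generative AI is likely to change education because"
input_ids = tokenizer(prompt, return_tensors="pt").input_ids

for _ in range(20):                                  # generate 20 more tokens
    with torch.no_grad():
        logits = model(input_ids).logits             # scores for every possible next token
    next_id = logits[0, -1].argmax()                 # pick the single most likely token
    input_ids = torch.cat([input_ids, next_id.view(1, 1)], dim=1)

print(tokenizer.decode(input_ids[0]))
```

Each pass through the loop scores every possible next token and appends the most likely one. Chatbot services add further steps on top (sampling, instruction tuning, safety filtering), but the underlying generation is broadly this kind of word-by-word prediction.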

As it stands today, the free version of ChatGPT doesn’t have access to the internet, so can’t answer questions beyond its training data cut-off date of September 2021. Users paying for the ChatGPT Plus service have access to a version that can access the internet.
ChatGPT Plus customers also have access to plugins which extend ChatGPT’s functionality. For example, a Wolfram plugin allows users to ask questions which are answered by Wolfram Alpha, which excels at mathematical and scientific information. Initial testing suggests this might resolve the issue of ‘hallucination’ in these domains. Many other plugins are available, and more are being developed.
OpenAI also makes its models available to other developers through an API, so many other applications make use of them, including many writing tools such as Jasper and Writesonic, as well as chatbots in popular applications such as Snapchat.
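For illustration, this is roughly what calling that service looks like for a developer. The snippet uses the openai Python package and model name as they existed at the time of writing (mid-2023); both are likely to change, so treat it as a sketch rather than a definitive reference.

```python
# A minimal sketch of calling OpenAI's chat API, the same kind of service that tools
# such as Jasper and Writesonic build on. Library usage and model name reflect mid-2023.
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder, never commit a real key

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "system", "content": "You are a helpful assistant for college staff."},
        {"role": "user", "content": "Suggest three discussion questions about photosynthesis."},
    ],
)
print(response["choices"][0]["message"]["content"])
```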
2.2 Microsoft Bing Chat, Google Bard and Anthropic’s Claude
Although ChatGPT has attracted most of the hype, there are other players in this space, and their number is likely to increase.
Microsoft has introduced Bing Chat, which is available for testing and is based on OpenAI’s GPT-4. It is focused on searching for information and does have access to the internet. It takes your question, performs one or more web searches based on it, and then attempts to summarise the results and answer, giving references for the sites it has used.
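The general pattern, sometimes called retrieval-augmented generation, is straightforward to sketch. The search and model calls below are hypothetical stubs, not Microsoft’s actual (non-public) implementation; the sketch only illustrates the idea of searching first and then asking the model to answer from the results, with references.

```python
# Illustrative only: the 'search, then summarise with references' pattern that tools
# like Bing Chat follow. web_search() and ask_llm() are hypothetical stubs.
from dataclasses import dataclass

@dataclass
class SearchResult:
    title: str
    snippet: str

def web_search(question: str, max_results: int = 3) -> list[SearchResult]:
    # Stub standing in for a real web search API.
    return [SearchResult("Example page", "An example snippet about the question.")][:max_results]

def ask_llm(prompt: str) -> str:
    # Stub standing in for a call to a large language model.
    return "An answer grounded in the sources, with citations like [1]."

def answer_with_references(question: str) -> str:
    results = web_search(question)
    sources = "\n".join(f"[{i + 1}] {r.title}: {r.snippet}" for i, r in enumerate(results))
    prompt = (
        "Answer the question using only the numbered sources below, citing them by number.\n\n"
        f"Sources:\n{sources}\n\nQuestion: {question}"
    )
    return ask_llm(prompt)

print(answer_with_references("What is photosynthesis?"))
```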

Bard is Google’s equivalent and is available for testing. Like Bing Chat, it can access the internet, but unlike Bing Chat, it doesn’t provide references for the sites it’s used to give its answers.

Claude, produced by Anthropic, is more similar to ChatGPT and is likely to be built into many applications going forward.
At the moment, both Google Bard and Claude feel a long way behind ChatGPT in terms of capabilities, although at the time of writing, Google has just announced significant improvements, so this may change quickly.
2.3 A summary of key capabilities, limitations, and concerns around ChatGPT and other Large Language Models
In considering generative AI, it’s important not only to understand its capabilities but also its limitations. We also believe users should have a broader understanding of the societal impact of generative AI. Some of the key themes are summarised here:
| Capabilities | Limitations | Concerns |
| --- | --- | --- |
| Generating fluent text on almost any topic, including essays, outlines, quizzes and feedback | Prone to ‘hallucination’: confidently producing plausible but incorrect information | Copyright of the material scraped from the internet for training |
| Summarising, rewriting and adapting existing text | The free version of ChatGPT has no internet access and a training data cut-off of September 2021 | Digital inequality, as the most capable versions sit behind paid subscriptions |
| Answering questions, including multiple-choice and shorter-form questions | Often falls short on highly mathematical answers | Data privacy: prompts may be used as training data, so personal information should not be entered |
2.4 Image Generation
It’s not all about text – image generation tools have made huge progress too, particularly with Midjourney, DALL-E 2 and Stable Diffusion.
As with the text generators, these have been trained on material scraped from the internet, and there is a lot of concern about the copyright of the training material.
These work in a similar way to text generators – the user gives a prompt and one or more variations of images are produced.
Image generation capabilities are being incorporated into general AI services, so Bing Chat, for example, can also generate images, using OpenAI’s DALL-E 2.
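As with the chat tools, developers can access image generation programmatically. The sketch below uses OpenAI’s image endpoint (DALL-E 2) via the openai Python package as it existed in mid-2023; the details may well have changed since, so treat it as illustrative.

```python
# A minimal sketch of prompt-based image generation with OpenAI's DALL-E 2 endpoint.
# Library usage reflects the openai Python package as of mid-2023.
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder

response = openai.Image.create(
    prompt="A watercolour painting of a college campus in autumn",
    n=2,              # ask for two variations of the image
    size="512x512",
)
for item in response["data"]:
    print(item["url"])  # each URL points to one generated image
```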
2.5 Microsoft Copilot and everyday AI
There is a lot more to generative AI than just accessing it via the ChatGPT web service. The developers of major AI services such as OpenAI make their services available to other developers. One of the most significant announcements has been from Microsoft, who are incorporating generative AI across the Microsoft 365 tools under the name ‘Copilot’. At the time of writing (May 2023) the licensing terms haven’t been confirmed, but it is likely that this will put generative AI directly into the hands of all our staff and students with access to Microsoft 365. Google have made similar announcements about their office tools.
2.6 Beyond Chatbots
Whilst the focus has been on chatbots and image generation, generative AI is finding its way into a wide range of tools and services that can potentially be used in education. As an example, Teachermatic can create a range of resources for teachers, including lesson plans and quizzes, Gamma can create websites and presentations, and Curipod can create interactive learning resources. Many of these applications are built on tools provided by OpenAI and powered by the same technology as ChatGPT.
3. Impact of Generative AI on Education
The impact of generative AI in education is still unfolding and is likely to continue doing so for some time to come. Key areas include assessment and academic integrity, its use in teaching and learning, its use as a time-saving tool, and its use by students.
Jisc has convened a number of working groups with representatives from universities and colleges to help us collate and present more detailed advice in these areas, particularly around sector-level advice, assessment and advice to students, and we’ll share more information from these groups over the coming weeks.
Here we give an overview of the core themes.
3.1 Assessment
Initial discussions about generative AI have focused on assessment, with the concern that students will use generative AI to write essays or answer other assignments. This has parallels with concerns around essay mills. These concerns are valid: whilst essays produced wholly by generative AI are unlikely to get the highest marks, the tools’ capability is improving all the time. And this ability isn’t limited to essays: ChatGPT is also highly capable of answering multiple-choice questions and will attempt most forms of shorter-form question. It will often fall short, especially when the answers are highly mathematical, although this will not be obvious to the student using the chatbot service.
There are three main options, each with their own challenges. There is broad acceptance that ‘embrace and adapt’ is the best strategy in most instances.
| Strategy | Approach | Challenges |
| --- | --- | --- |
| Avoid | Revert to in-person exams where the use of AI isn’t possible. | This moves away from authentic assessment and creates many logistical challenges. |
| Outrun | Devise an assessment that AI can’t do. | AI is advancing rapidly, and given the time between the assessment being set and it being taken, AI might well be able to do the assignment when it is taken. |
| Embrace and adapt | Embrace the use of AI, discuss the appropriate use of AI with students, and actively encourage its use to create authentic assessments. | Balancing authentic assessment and the use of generative AI with academic integrity is a challenge. |
The immediate action is for all staff to engage with generative AI and try it themselves, learning how their assessments will be impacted. Alongside this, institutions will need to consider their strategic approach to AI, review, and update policies, and communicate guidance to students.
For higher education, this aligns with guidance provided by QAA, and for both higher education and further education, general guidance has been provided by the Department for Education. In addition, the Joint Council for Qualifications has produced guidance on protecting the integrity of qualifications.
3.1.1 Guidance on advice to students
We are seeing the first guidance to students appear. Getting the wording right can be challenging, and this is discussed further in the Jisc blog post ‘Considerations on wording when creating advice or policy on AI use’. Our key messages are:
- Don’t try to ‘ban’ specific technology or AI as a whole.
- Describe acceptable and expected behaviours and provide examples.
- The detail of advice will vary between subjects, so supplement general advice with tailored advice for the subject.
A Jisc-convened working group is looking at best practices in this area and will be sharing its outcomes shortly.
3.1.2 The role of AI detectors
Discussions about academic integrity inevitably include the role of AI detection software. There are a number of tools in this space, but for most UK universities and colleges, Turnitin’s newly released AI detection tool will be the obvious tool to consider. The key things are:
- No AI detection tool can conclusively prove that text was written by AI.
- These tools will produce false positives.
- The tools won’t be able to differentiate between legitimate and other use of AI writing tools.
Best practice for using such tools is still being considered and developed across our sectors, and we will aim to provide examples as these develop.
A Jisc blog post explores the concepts and considerations around AI detection in more detail.
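To illustrate why such tools can never be conclusive: many detectors rely, at least in part, on statistical signals such as how ‘predictable’ a passage is to a language model. The sketch below shows that idea using the open-source GPT-2 model. It is emphatically not how Turnitin’s (proprietary) detector works; it also shows the underlying problem, since clear, formulaic human writing can score as ‘predictable’ too, which is one source of false positives.

```python
# Illustrative only: measuring how 'predictable' a passage is to a language model
# (its perplexity). Low perplexity is one signal detectors may use, but it cannot
# prove AI authorship, which is why false positives occur.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def perplexity(text: str) -> float:
    enc = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        out = model(enc.input_ids, labels=enc.input_ids)
    return torch.exp(out.loss).item()

# A plain factual sentence will usually score as far more 'predictable' than
# an unusual one, regardless of whether a human or an AI wrote it.
print(perplexity("The cell is the basic structural unit of all living organisms."))
print(perplexity("Crumbs of moonlight argued quietly with the kettle's stubborn elbow."))
```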
3.2 Use in Learning and Teaching
We are seeing a wide range of ideas for how to use generative AI in learning and teaching. It’s worth remembering that this is a fast-moving space and many of the tools are still under development or at the beta stage. Whilst exploration makes a lot of sense, individuals shouldn’t feel the need to rush generative AI into their teaching; now is definitely the time to explore and start planning.
3.2.1 Examples of use by students
We welcome the fact that student voices have been brought into the discussion around generative AI, for example ‘Generative AI: Lifeline for students or threat to traditional assessment?’. Jisc has run a number of student panels and will be reporting the findings in more detail in the near future.
Examples of uses by students:
| Use | Considerations |
| --- | --- |
| To formulate ideas, for example creating essay structures | Generative AI tools are generally effective in producing outlines as a starting point for an assignment. |
| To provide feedback on writing | Generative AI will proofread and correct text for students, in a similar way to grammar tools. It will also provide feedback on style and content. Students will need clear advice on when this should be declared. |
| As a research tool | A good understanding of the tool and its limitations is crucial here, particularly the tendency for generative AI to give misinformation. |
| Generating images to include in assignments | The best image generation tools come at a cost, and students need to be aware of copyright concerns. |
In section 2.3 we noted that digital inequality is a concern. For example, students who pay for ChatGPT Plus get faster, more reliable access, as well as access to the latest model, and those who pay for Midjourney image generation, for example, will get more and better images than most of the free options offer. At the moment there are limited options for licensing these tools institutionally, but we expect this to change, and consideration should then be given to licensing options to avoid inequality.
3.2.2 Examples of use by teaching staff
If used appropriately, generative AI has the potential to reduce staff workload, for example by assisting with tasks such as lesson plans, learning material creation, and possibly feedback. The key words here are ‘appropriately’ and ‘assisting’. In section 2.3 we noted some of the limitations of generative AI, such as incorrect information, so as things stand today, generative AI can assist by, for example, generating initial ideas, but its output should always be carefully reviewed and adapted.
Generative AI may be used directly through a general chat app such as ChatGPT, or via an application built on top of generative AI, such as Teachermatic. In the former case in particular, staff will need good prompting skills to make the most of generative AI (a short illustrative sketch follows the table below).
| Use | Consideration |
| --- | --- |
| Drafting ideas for lesson plans and other activities | The output may be factually incorrect or lack sound pedagogical foundations. Nonetheless, it may be a useful starting point. |
| Help with the design of quiz questions or other exercises | Generative AI can quickly generate multiple-choice quizzes and assessment ideas, but they should be reviewed carefully as above. |
| Customising materials (simplifying language, adjusting to different reading levels, creating tailored activities for different interests) | Generally, when asked to customise material, generative AI won’t introduce new concepts, and so is less likely to introduce factually incorrect information. |
| Providing custom feedback to students | At the moment, generative AI should not be used to mark student work, but it can be a useful tool for assisting with personalised feedback. |
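As an illustration of the kind of prompting mentioned above, the sketch below asks a model to draft multiple-choice questions in a clearly constrained format. The model name and openai package usage reflect mid-2023 and are assumptions; as the table notes, anything generated this way still needs careful review by the member of staff.

```python
# Illustrative sketch: drafting multiple-choice questions with a constrained prompt.
# Everything generated still needs checking before it goes anywhere near students.
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder

topic = "photosynthesis"
prompt = (
    f"Write 3 multiple-choice questions about {topic} for a GCSE-level class. "
    "For each question give four options labelled A-D, mark the correct answer, "
    "and add one sentence explaining why it is correct. Do not invent statistics."
)

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": prompt}],
)
print(response["choices"][0]["message"]["content"])
```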
3.2.3 Examples of uses to avoid
In general, you should avoid entering any work that isn’t your own, or any personal information, into ChatGPT, as it might well be used as part of future training data.
Things get a little more complex when considering tools built on top of ChatGPT, although it is almost certain that generative AI built into tools like Microsoft 365 will have the same data privacy protection as the rest of Microsoft 365.
| Use | Reason |
| --- | --- |
| Marking student work | There is no robust evidence of good performance, although ChatGPT will confidently do this if you ask. |
| Detecting whether work was written by AI | ChatGPT might claim it can detect whether it wrote a piece of text, but it can’t. |
| Anything involving personal information | You should never put personal information into any system with which your university or college doesn’t have a proper contract in place and hasn’t fully assessed, including its data privacy policies. Generative AI services like ChatGPT are no exception. |
3.3 Adapting curriculum to reflect the use of AI in work and society
We are seeing a broad acknowledgement that work will change too, but understandably limited action around this at the moment.
OpenAI has done some initial research on the impact of LLMs on jobs and has estimated that ‘around 80% of the U.S. workforce could have at least 10% of their work tasks affected by the introduction of LLMs, while approximately 19% of workers may see at least 50% of their tasks impacted.’ A 2021 report by the Department for Business, Energy and Industrial Strategy looks more broadly at jobs affected by AI, both in terms of those likely to be automated and growth areas.
In the short term, we expect generative AI to be adopted quickly into courses in areas where it is becoming the norm in the workplace. For example, the visual arts are making use of generative AI, and computer coding is making use of generative coding tools.
4. Regulation
Given the pace of change, regulation has struggled to keep up. In the UK, the government has published an AI white paper which aims to balance regulation with enabling innovation. The paper makes explicit reference to generative AI and large language models. The US government has produced a Blueprint for an AI Bill of Rights, and the EU’s AI Act is working its way through the European Parliament. The proposed UK approach will look to regulate the use of AI, rather than the AI technology itself, through the work of existing regulators, against a common set of principles. As progress is made in this area, we will endeavour to highlight any areas of particular concern for education.
5. Summary
Generative AI is progressing rapidly and is likely to have a significant impact on education for the foreseeable future. Keeping up with the advances is a challenge, and balancing authentic assessment and academic integrity is increasingly complex. Nonetheless, with care and an increase in staff and student knowledge, there are substantial gains to be made. This guide aimed to give a broad introduction to generative AI. Much more has been written on the topic, and for those who wish to explore further, we have included a range of resources for further reading.
Keeping updated:
To keep up to date with the work of Jisc’s National Centre of AI in Tertiary Education, join our Jisc mail list: aied@jiscmail.ac.uk.
Further Reading:
Generative artificial intelligence in education – Department for Education (March 2023)
Maintaining Quality and Standards in the ChatGPT Era: QAA Advice on the Opportunities and Challenges Posed by Generative Artificial Intelligence – QAA (May 2023)
ChatGPT and Artificial Intelligence in higher education – UNESCO (April 2023)
Artificial Intelligence (AI) Use in Assessments: Protecting the Integrity of Qualifications – JCQ (March 2023)
Relevant Jisc Blog Posts:
Staff and student use:
Generative AI: Lifeline for students or threat to traditional assessment?
Means, Motive, Opportunity: A Composite Narrative about Academic Misconduct
Considerations on wording when creating advice or policy on AI use
AI Detection:
AI writing detectors – concepts and considerations
A short experiment in defeating a ChatGPT detector
Bias and other ethical considerations:
Exploring the potential for bias in ChatGPT
Using Generative AI:
Getting started with ChatGPT Part 1 – Understanding ChatGPT
How to Explore AI Image Generation
Change log
V1.1 – 22 May 2023
- We’ve added a table of contents.
- Section 2.1 (ChatGPT) has been updated to reflect the fact that all ChatGPT Plus users have access to plugins and web browsing.
- We’ve added a CC BY-NC-SA license.
License: CC BY-NC-SA
2 replies on “A Generative AI Primer”
Re your claim it is “important to understand” that ChatGPT “works by predicting the next word given a sequence of words” – according to, as it were, the horse’s mouth: “ChatGPT’s responses are not solely based on predicting the next word in a sequence. The model has been trained on a wide range of text from the internet and can generate coherent and contextually relevant responses based on that training”.
While your claim can be thought accurate in the context of how the model operates (i.e. it uses autoregressive generation), surely it’s not the whole story and misses the crucial AI direction-of-travel – the integration of systems. Would an imaginary alien watching Earthlings writing text observe “they just write one word at a time”? Of course there’s lots more going on, and is it the ‘lots more’ which it would be useful to be considering here?
For example, the claim that “as it stands today, ChatGPT doesn’t have access to the internet” is, as of March 2023, out-of-date given that newly released OpenAI plugins now enable ChatGPT real-time internet access (e.g. the Chrome WebChatGPT browser extension).
In short, is it helpful to infer that all ChatGPT does is ‘predict the next word’?
Thanks for the feedback. I agree integration is really important. Now plugins are available to all ChatGPT Plus users I’ll include this in the next update to the guide.