
HE Generative AI Literacy Definition


Download a Welsh Language version of this resource

AI literacy is essential for navigating the rapidly evolving landscape of generative AI (GenAI). We have framed our definition around three fundamental areas, Terms, Tools, and Tasks, to create a comprehensive approach to understanding and applying GenAI effectively.

Adopting this model should ensure that staff members are not only equipped with a theoretical understanding of how GenAI functions and its broader implications, but are also able to integrate GenAI tools into their tasks responsibly and ethically.

This definition is aimed at setting a foundational level that all staff should aim to achieve to support practical, informed, and responsible use of GenAI tools.

AI literacy refers to the ability to understand and use generative AI (GenAI) responsibly and ethically across three fundamental areas: Terms, Tools, and Tasks. This competency requires developing an understanding of GenAI processes and outputs, recognising the capabilities and limitations of various AI models, and applying this knowledge practically in workplace tasks.

Effective AI literacy ensures staff members can integrate GenAI tools into their daily lives, fostering informed and ethical usage. By committing around ten hours to both study and hands-on application, individuals can develop a robust understanding of GenAI and enhance their proficiency.

Generative AI Outputs and Processes:  

A general understanding of how GenAI models function, including capabilities and limitations. Reading A Generative AI Primer may help.

It takes time to develop an understanding of GenAI capabilities, limitations and tools. This is not something that can be developed just by reading: you will need to use the tools yourself.

Understanding generative AI is a gradual process that cannot be mastered in a single session. Dedicate time to read about and experiment with these tools progressively, revisiting them regularly and applying them to your everyday tasks; committing at least ten hours will help you gain a solid grasp of their capabilities and limitations. Collaborating with others can also provide fresh perspectives, and observing how others use GenAI for specific tasks, e.g. through a micro-case study, is another good way to develop AI literacy.

Terms

Vocabulary to help you talk and read about generative AI

Basic Terms

    • Generative AI: Artificial intelligence systems that generate new content, from text to images, based on learned data.
    • Model and LLM (Large Language Model): An AI system designed to understand, predict, and generate human-like text based on the data it has been trained on.
    • Prompt: The input given to an AI model to generate a specific output.
    • Context window: The amount of text or data an AI model can consider at one time when generating its output.
    • Deepfakes: Techniques for creating highly convincing fake images, audio, and video, often used to generate realistic but false representations of people in media.
    • Hallucination: When an AI model generates incorrect or nonsensical information that is not supported by its training data.
    • Bias: The tendency of AI-generated outputs to reflect and perpetuate the underlying prejudices present in the training data.
    • Indeterminacy: The inherent unpredictability in the output of generative AI models due to their complex nature.
    • Intellectual property and copyright: The awareness that AI-generated outputs may be based on copyrighted materials, which may carry legal risks and ethical considerations depending on how they are used.
    • Chatbot: A computer program designed to simulate conversation with human users, especially over the Internet.
    • Conversational user interface: A way of interacting with computers which simulates human conversation. This may be through a visual interface or verbal interaction, and can be through a dedicated interface or embedded within a wider system such as email or word processing software.

Intermediate Concepts

  • Tokens: The pieces of text (words or parts of words) that AI models like LLMs process.
  • RAG (Retrieval-Augmented Generation): A method where the AI model retrieves information from a database to help generate responses.
  • Pretraining vs Finetuning: Pretraining is the initial training of an AI model on a large, diverse dataset. Finetuning is subsequent training on a more specific dataset to specialise the model’s responses.
  • Model benchmark: Tests designed to evaluate the performance of AI models under various conditions.
  • Prompt techniques: Methods used to craft prompts that effectively guide AI models to produce desired outputs.
  • CoT (Chain of Thought): A technique where the model is prompted to “think aloud” as it solves a problem, helping it reach a more accurate conclusion.
  • Personas: Predefined character profiles used to shape the style and tone of a model’s outputs in dialogue applications.
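Two of the terms above, tokens and the context window, are easier to grasp with a toy example. The sketch below splits text into words and punctuation, which is only a rough stand-in for the sub-word (byte-pair) tokenisers real LLMs use, and shows how a fixed-size window limits how much of the input a model can actually "see":

```python
import re

def toy_tokenize(text):
    """Split text into word and punctuation tokens.

    Real LLM tokenisers (e.g. byte-pair encoding) split words into
    sub-word pieces, so this is only an illustration of the idea.
    """
    return re.findall(r"\w+|[^\w\s]", text)

def fit_context_window(tokens, window=8):
    """Keep only the most recent tokens that fit the model's window.

    Anything beyond the context window is simply never seen by the model.
    """
    return tokens[-window:]

tokens = toy_tokenize("Generative AI models process text as tokens, not words.")
print(tokens)                               # punctuation counts as tokens too
print(fit_context_window(tokens, window=5)) # only the last five tokens survive
```

Note that the comma and full stop each become tokens of their own: token counts are always higher than word counts, which is why context-window limits are reached sooner than you might expect.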

Advanced Topics

  • Key models and their makers:
    • GPT family (OpenAI): A series of increasingly sophisticated models known for their broad generative capabilities.
    • Claude family (Anthropic): Designed to be safe and easy to use in practical applications.
    • Gemini family (Google): Focuses on multimodal abilities, integrating text, image, and possibly other data types.
    • Llama family (Meta): Known for efficient scaling and performance.
    • Mistral (Mistral AI): A French developer known for open-weight models that perform strongly at relatively small sizes.
  • LLMs vs diffusion models: LLMs generate text based on statistical likelihoods, while diffusion models generate images by starting from random noise and iteratively refining it into a coherent output.
  • Foundation model vs Frontier model: Foundation models are broadly capable across many tasks, trained on vast data. Frontier models are newer models pushing the limits of AI capabilities.
  • Meta knowledge: Advanced AI systems that can train other AI models, potentially reducing the need for human intervention in training processes.
  • Distinguish between key companies and their reputations: Different companies have distinct reputations based on their products, research focus, and ethical considerations in AI development.
  • On-device use: The capability for AI models to operate directly on a user’s device, enhancing privacy and speed by not relying on cloud processing.
  • APIs: Interfaces that allow developers to access the functionality of AI models programmatically for use in their own applications.
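Retrieval-Augmented Generation, defined in the intermediate list above, can be sketched in a few lines. This toy version ranks documents by simple word overlap (real systems use embedding-based similarity search) and builds an augmented prompt; a production system would then send that prompt to an LLM via an API. The document text and function names are purely illustrative:

```python
def retrieve(query, documents, k=1):
    """Rank documents by word overlap with the query and return the top k.

    Word overlap is a stand-in for the embedding-based similarity
    search that real RAG systems use for the retrieval step.
    """
    q = set(query.lower().split())
    scored = sorted(documents,
                    key=lambda d: len(q & set(d.lower().split())),
                    reverse=True)
    return scored[:k]

def build_prompt(query, documents):
    """Augment the prompt with retrieved context before it reaches the model."""
    context = "\n".join(retrieve(query, documents))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

docs = [
    "The library opens at 9am on weekdays.",
    "Staff can book study rooms through the portal.",
    "GenAI guidance is published on the intranet.",
]
print(build_prompt("When does the library open?", docs))
```

The point of the pattern is visible even at this scale: the model is grounded in retrieved text rather than relying solely on what it memorised during training, which reduces (though does not eliminate) hallucination.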

Understanding these terms can greatly enhance your ability to engage with the latest discussions and developments in the field of generative AI.

 

Metaphors for AI and pros/cons

Different metaphors for artificial intelligence (AI) can illustrate its role and the implications:

1. AI as an Intern

Pros: 

  • Learns Over Time: AI improves with experience, similar to an intern.
  • Supports Staff: It handles routine tasks, boosting overall productivity.

Cons:

  • Limited Judgement: AI might lack the nuanced decision-making of experienced workers.
  • Needs Supervision: Like interns, AI systems require monitoring to manage errors.

2. AI as a Partner, Assistant, Co-Creator

Pros:

  • Collaborative: AI works alongside humans, blending analytical strength with human insight.
  • Spurs Innovation: It can generate new ideas and simulate outcomes, aiding creativity.

Cons:

  • Dependency Issues: There’s a risk of becoming too reliant on AI.
  • Ethical Dilemmas: Joint decisions between humans and AI raise questions about responsibility and openness.

3. AI as Prediction vs Relation

AI as Prediction:

Pros:

  • Improves Forecasting: AI is great at using large data sets to predict future trends.
  • Helps with Risk Management: By foreseeing problems, AI aids in strategic planning.

Cons:

  • Data Dependence: Predictions are limited by the quality of the data used.
  • Struggles with Surprises: AI may falter when faced with unexpected conditions.

AI as Relation:

Pros: 

  • Focuses on Interaction: This view emphasises AI’s role in engaging with users and learning from interactions. 
  • Tailors Experiences: It adapts to individual preferences, enhancing service and satisfaction.

Cons:

  • Complex to Build: Crafting AI that effectively interacts like a human is challenging. 
  • Privacy Issues: These systems often need personal data, raising concerns about security and privacy.

Each metaphor helps define what we can expect from AI and points out the potential challenges in integrating AI into everyday activities. Whichever metaphor one is using, a common principle is to keep the human at the centre, taking responsibility for how the tool and output are used.

Critical Evaluation: 

GenAI tools are subject to bias and can produce false information (hallucinations), so reviewing the accuracy and relevance of the information created by generative AI is crucial. Apply critical thinking to AI outputs to determine their validity and reliability. Ask questions such as:

  • Is it true?
  • Is it complete?
  • Is it biased?
  • Is it overly generic?
  • Is it fit for purpose?
  • Is it overly repetitive?
  • Does it end up contradicting itself?

Safe, responsible use:

Complying with your institutional guidance on the use of GenAI to ensure data privacy and security. Ensuring that sensitive information and student work cannot be used to train GenAI models.

Ownership: 

Knowledge of copyright and intellectual property concerns related to AI-generated content, including ownership rights and usage permissions.

 

Tools

Range of AI Tools: 

Recognising that different GenAI tools are available, understanding their specific strengths and weaknesses, and selecting the best fit for the specific use. Evaluating tools before use, reviewing and understanding their terms and conditions, and thinking critically, beyond the sales pitch.

 

ChatGPT-4o

  • Strengths: The original; fast; uncluttered user experience.
  • Weaknesses: Falls back on the older GPT-3.5 when you hit the free limit; no corporate data protection features.

Copilot (formerly Bing Chat)

  • Strengths: Commercial data protection (A3 and A5 licences); data not used for model training; powered by GPT-4; cites sources (to a degree!); multimodal (images as well as text).
  • Weaknesses: No access to ChatGPT features such as GPTs; no access for those under 18; no chat history.

Google Gemini

  • Strengths: Powered by the new Gemini model; multimodal (images as well as text); access to the internet; a useful ‘double-check response’ feature.
  • Weaknesses: No corporate-level data protection features; data used for model training; no access for those under 18.

Anthropic Claude

  • Strengths: One of the most powerful models; an interesting ‘artefacts’ feature allowing interactive creation of certain visual outputs such as diagrams and websites.
  • Weaknesses: No access to the internet; limited free access.

 

In-depth tool knowledge: 

Focusing on gaining detailed knowledge of the preferred institutional tool, such as Copilot, rather than a superficial understanding of many tools; this depth of knowledge supports appropriate use.

Institutional tools usually have an enterprise licence which will provide data protection and also support equity. 

New developments: 

Recognising when new GenAI capabilities have been added to existing tools and evaluating them before use, ensuring that data privacy and security are not compromised.

 

Tasks

Application in Work: 

Use GenAI effectively to support work-related tasks. Examples include improving communications, drafting session plans, generating ideas, summarising research papers, and analysing data.

Acknowledging use of GenAI: 

Understand when to acknowledge or reference GenAI use. This is likely to change depending on use:

  • Work-related tasks such as improving communications, drafting session plans, generating ideas, or summarising research papers are unlikely to need acknowledgement.
  • Academic teaching use is likely to need acknowledgement.
  • Academic writing such as research papers will need referencing.
  • Writing published on public sites such as university web pages may or may not require acknowledgement, depending on context.

Evaluate GenAI output: Check to make sure the content is fit for purpose, accurate, complete, and as free of bias as it can be, editing the output as needed. 

 

Contributors

We’d like to thank our contributors to this working group:

  • Ailie Spence – University of East Anglia
  • Amy May – University of Nottingham
  • Chris Hack – Coventry University
  • Dominik Lukes – University of Oxford
  • Husna Ahmed – Royal Agricultural University
  • Kevin Campbell-Karn – Buckinghamshire New University
  • Kirsty Edginton – Architectural Association School of Architecture
  • Mary Jacob – Aberystwyth University
  • Matt Townsend – Cardiff University
  • Richard Nelson – University of Bradford
  • Vincent Bryce – University of Nottingham

Find out more by visiting our Artificial Intelligence page to view publications and resources, join us for events and discover what AI has to offer through our range of interactive online demos.

For regular updates from the team sign up to our mailing list.

Get in touch with the team directly at AI@jisc.ac.uk
