
Principles for the use of AI in FE colleges

Artificial intelligence is already transforming education, the workplace, and society.  

These principles have been created by Jisc in partnership with the Association of Colleges (AoC) Technology Reference Group. They are intended for colleges to adopt as a statement of intent, guiding strategic direction. They aim to help colleges navigate the challenges and maximise the opportunities of AI. They are centred on the fair and responsible use of AI, providing a framework to equip learners with the AI skills they need to thrive, and to allow college staff to take full advantage of AI in their daily activities.

These principles build on wider activity, such as advice from the Department for Education and Office for AI, alongside approaches taken in the university sector, led by the Russell Group.

A shared set of principles across colleges will benefit all. They will help provide equality of opportunity to learners regardless of location, and will free colleges to concentrate on providing the advice and guidance that is more specific to their staff, learners, and curriculum.

Whilst principles are universal, the best advice and guidance for staff and learners should be based on the tools and systems your staff and learners have access to, and the courses within a college. Jisc has created an example of learning guidance that could be used to support these principles.

Principles for the use of AI in FE colleges

Our colleges transform lives, and as part of this we are committed to ensuring our learners and teachers have the skills they need to thrive in a world increasingly influenced by AI.

Of course, this is not without its challenges, and there are many ethical and societal concerns about AI, including the impact of bias, the effect on the environment, and its contribution to misinformation. We aim to ensure our learners are equipped for this world.

We follow a similar approach to the one adopted by the Russell Group. There are, of course, differences in mission and aims, so our principles are similar but focussed on college needs. The Russell Group called for collaboration across sectors, which we welcome, and we have included a similar principle.

AI safety, the prevention and mitigation of harms from AI, for our learners and staff is of primary concern. The UK government has set out five principles for the responsible adoption of AI in its white paper ‘A pro-innovation approach to AI regulation’.  Whilst this paper is still under consultation, the principles align well with broader discussions about responsible use of AI and we have framed our third principle around this.

Our principles:

1. Colleges will place safe, ethical and responsible use of AI at the forefront of considerations

2. Colleges will provide learners with the skills they need to make appropriate use of AI tools in their studies and thrive in an AI enabled workplace and wider world.

3. Colleges will ensure staff have the skills to maximise the value of AI, to help reduce workload and support effective learning and teaching

4. Colleges will aim to ensure all learners have access to the AI tools that they need.

5. Colleges will ensure academic integrity is maintained, whilst allowing learners to develop the skills they need.

6. Colleges will work collaboratively to share best practice as the technology and its application in education evolves.

1. Colleges will place safe, ethical and responsible use of AI at the forefront of considerations

1.1 Safety, security and robustness

Colleges will place the safety of learners and staff at the forefront of the use of AI.  This includes ensuring all systems are fully evaluated before being used, and that they are appropriate for the age group of the learners, including obtaining informed parental consent when needed.

Learners will understand how AI will be used and they will be supported to make informed decisions about their own use of generative AI, including considerations about how their data might be used for model training, and how any personal data might be used.

Colleges will also consider intellectual property rights, including learners' work, which, following DfE guidelines, should not be used to train generative AI models without appropriate informed consent or an exemption to copyright.

1.2 Transparency and explainability

Colleges will be transparent about their use of AI, and provide information on how, when, and for which purposes an AI system is being used.  As well as aligning with the principles here, this also reflects the wishes and concerns of learners expressed in the Jisc report on student perceptions of generative AI.

Education institutions should also be open and transparent, ensuring learners understand when AI is used to create learning resources, support learning delivery, or within assessment and monitoring processes. Learners will also be informed how their personal data is being processed by AI tools.

Explainability in this context refers to explaining how an AI system makes its decisions. Colleges will ensure that all AI systems they deploy come with some explanation of how they work, obtained, for example, as part of the procurement process.

1.3 Fairness

Colleges will ensure the AI systems they use are fair for all, including considering issues around bias, data protection, privacy, and accessibility. This will be built into the procurement and selection process for AI tools used within colleges, ensuring no learner is disadvantaged through the use of inappropriate or ineffective AI tools.

1.4 Accountability and governance

As with any IT system, AI systems should have a clear governance structure, with a clear line of accountability for their use. As an AI system's performance may change over time, for example when the underlying AI models change or encounter new types of data, extra measures need to be put in place to periodically review the performance of any AI system, and this will be built into any AI project.

1.5 Contestability

AI systems in colleges are likely to be used increasingly in ways that directly impact outcomes for learners, for example when used to assist in marking, exam proctoring, or AI detection in assessment processes. Colleges will ensure learners and staff have clear guidance on how to contest the output of any AI system if they feel they have been unfairly disadvantaged.

2. Colleges will support learners to develop the skills they need to make appropriate use of AI tools in their studies and thrive in an AI enabled workplace and wider world.

2.1 AI skills and literacy

AI is evolving at a rapid pace, and therefore, while teaching learners to use the AI tools of today is valuable, this needs to be supplemented with a broader AI literacy, enabling learners to critically evaluate the tools of the future. Advice from the Department for Education, supported by guidance from UNESCO, recommends developing an understanding of the limitations, reliability, and potential bias of generative AI; the impact of technology, including disruptive and enabling technologies; and how to create and use digital content safely and responsibly.

2.2 AI Workplace literacy

Whilst many AI skills used in education translate directly to the workplace, a broader understanding of where AI fits into the workplace will also be needed, for example an understanding of data privacy and cyber security issues.

Colleges will work with employers, and initiatives such as the Innovate UK BridgeAI skills guidance, and other key stakeholders to ensure their learners are acquiring the AI skills needed.

2.3 AI Citizens and the Wider World

As well as preparing our learners for studies and work, we will help them become AI citizens, equipped to navigate the use of AI in their everyday lives. AI is becoming embedded into the services we all use on a daily basis, and is impacting broader societal issues, such as our democratic processes, climate and environment, and the way we consume and share information. We will ensure our learners have the critical AI skills to navigate this world safely and confidently.

2.4 Assessment for an AI enabled world

Authentic and relevant assessment, both formative and summative, needs to be aligned to this aim. Colleges will work with awarding bodies, through the Federation of Awarding Bodies, to move towards a consistent approach to the use of AI in assessments, with the aim of making assessments authentic and relevant to an AI enhanced workplace and society, for all learners.

3. Colleges will ensure staff have the skills to maximise the value of AI, to help reduce workload and support effective learning and teaching

3.1 Saving time

Initial pilots and reports, including those by Jisc and the Department for Education, are showing that the promise of this technology in helping staff save time is being borne out in practice.

Alongside making existing tasks quicker, AI makes possible activities that were previously impractical because of time constraints. Examples include improved differentiation for learners, using AI to create resources in multiple formats, and using AI to create formative assessment resources and materials.

We aim to ensure this benefit is felt by all staff by providing access to the AI tools they need, and the training to take full advantage of them, which in turn will help improve staff wellbeing.

3.2 New learning and teaching opportunities

We are already seeing examples of how AI can present new learning and teaching opportunities. Many of these are gathered in the Department for Education's Generative AI in education Call for Evidence: summary of responses, and include, for example, providing guidance on coding, helping learners optimise designs in engineering subjects, creating interactive simulations in the sciences, creating interactive conversations in language learning, generating ideas in English, and giving step-by-step explanations in maths.

4. Colleges will aim to ensure all learners have access to AI tools that they need.

4.1 Equality of access to AI tools

AI tools have the potential to improve equality, for example by providing proofreading and feedback expertise to all, and by enabling learners to obtain resources in a format and at a time that supports them. However, this will only be possible if access is available to all. Whilst there is a perception that generative AI is free to access, those who have the means to pay often have access to a much wider range of tools and will be at a significant advantage. Similarly, we will work to ensure access isn't restricted for learners with learning difficulties and/or disabilities. As colleges, we will work to level this playing field as much as possible.

4.2 Equality of access to data and devices

Of course, there are some more foundational issues that limit access to AI, including data and devices. We acknowledge this, and again will work towards levelling access as much as possible.

5. Colleges will ensure academic integrity is maintained, whilst allowing learners to develop the skills they need.

5.1 A college-wide approach

Aligning with advice from the Department for Education, colleges, working with awarding organisations, will continue to take reasonable steps where applicable to prevent malpractice involving the use of generative AI. A mixed approach is needed, with clear guidance, well designed assessment, and appropriate use of AI detection tools at its core, working with partners such as awarding bodies, as noted in section 1.3.

5.2  Clear guidance to students

All colleges will provide clear guidance to learners on appropriate use of AI in their assignments. This includes general principles and guidance, along with more specific guidance at assessment level. Jisc has a template set of guidance to help with this.

5.3 Appropriate use of technology such as AI detection

Whilst AI detection tools may have a part to play in maintaining academic integrity, they are by no means a full solution. As co-creation of content becomes the norm, and authentic assessment incorporating AI aligns with this, any use of AI detection, and what exactly is being detected, needs clear guidelines. There is a risk that AI detection can unfairly discriminate and compound existing bias; therefore, users of AI detection need a clear understanding that such systems cannot conclusively prove text was written by AI, can generate false positives, and are easy to defeat. Where they are used, staff will be given training and guidance to help them understand these limitations.

6. Colleges will work collaboratively and share best practice

Colleges support the Russell Group’s call for collaboration between universities, learners, schools, FE colleges, employers, sector and professional bodies.

The scale and speed of change mean we will be stronger if we work together. Best practice is still emerging, and we will work together to share what works, and what doesn't. This includes contributing to events and to Jisc's library of good practice, and looking outside the college sector to learn from and share ideas with businesses, universities and schools.


The Russell Group’s principles on the use of generative AI tools in education provide a template for principles of AI in education.

The Department for Education’s ‘Generative artificial intelligence (AI) in education’ lays the foundation for expectations of colleges.

The UK Government’s AI white paper ‘A pro-innovation approach to AI regulation’ provides us with principles for the safe and responsible use of AI.

Jisc ‘A Generative AI Primer’ lays out the main challenges and opportunities of AI in education.

Find out more by visiting our Artificial Intelligence page to view publications and resources, join us for events and discover what AI has to offer through our range of interactive online demos.

For regular updates from the team sign up to our mailing list.

Get in touch with the team directly at

2 replies on “Principles for the use of AI in FE colleges”

I found this information very useful; it does not however appear to align with the JCQ regulations which require colleges to report any misuse of AI directly to the awarding body. At my college there is a feeling that this will create a massive increase in workload for staff and may ‘demonise’ students who are unsure about the use of AI in an assessed piece of work. We were hoping to manage this initially at an internal and informal level; this suggested approach however appears to be out of line with JCQ regulations. Curious to know how other providers are approaching this matter.

Hi Liz, we try to align with awarding bodies in section 5. We don’t reference the JCQ advice explicitly as, at the time of writing, their advice hadn’t been updated – the text was agreed before the Feb update. JCQ talk about misuse, rather than use, of AI, and, for example, talk about appropriate acknowledgement of AI tools, referencing, and ensuring work is in a student’s own words. There are lots of other ways that students can use AI in their work, other than actually writing the text, that would align with the goal of ensuring students use AI responsibly. We are about to pull together a stream of work from our FE working group focussed on this, particularly around detection, so we’ll share what we find. I agree with your reading though: any misuse (not use) must be reported if the student has signed a declaration.
