Advice and Guidance

From Principles to Practice: Taking a whole institution approach to developing your Artificial Intelligence Operational Plan

The AI Maturity Model for Education sets out five stages of maturity for an institution's AI journey. The stages, beginning with the earliest at the far left of the model, are:

  1. Approaching and understanding – interested in AI; understanding how it has impacted or is transforming other sectors.
  2. Experimenting and exploring – experimentation and pilots within existing processes and with existing AI-enabled tools; data culture to support AI emerging; responsible AI processes established.
  3. Operational – institutional AI principles established; a systematic approach to staff AI skills and literacy; use of everyday AI; task-specific, institution-wide AI used for one or more processes, e.g. chatbots for a specific purpose or adaptive learning systems.
  4. Embedded – AI embedded in strategy; data maturity allows AI to be considered for all new systems and processes; mature processes to manage the lifecycle of all AI products, including procurement and continuous monitoring.
  5. Optimised/Transformed – AI is supporting the delivery of learning that optimises opportunities and outcomes for all learners; the right tasks are automated, freeing staff time for creativity and human interaction.

Below the maturity model is a horizontal arrow labelled "Data Maturity", indicating progress from left to right through the stages of AI maturity in education.
AI Maturity Model for Education

We’ve been delivering a lot of sessions recently on taking a whole institution approach to moving into operational Artificial Intelligence (AI), so we thought it would be useful to share some thoughts to support those embarking on this journey.

Adopting a comprehensive, institution-wide approach to operational AI is vital to ensure that all departments and stakeholders are aligned and contributing to a unified vision. It facilitates the integration of AI across the institution and enhances innovation. By involving staff and students/learners in the operational process, you can address diverse needs and perspectives, fostering an environment of collaborative learning and adaptation. This holistic approach helps in navigating the ethical and practical challenges associated with deploying AI in an educational setting.

Setting up an internal AI working group to lead the move to operational AI is key.  A cross-functional working group will drive AI efforts cohesively across the institution and promote responsible innovation.  It will serve as the central hub for AI-related activities, ensuring a coordinated effort across departments and providing the expertise and oversight needed to achieve institutional goals.  Such a group also facilitates communication, increases engagement, and mitigates risks.

When moving to operational AI, three levels of documentation are crucial: principles, policy, and guidance.  While there is variation in how these terms are interpreted, for the purpose of this discussion we define them as follows: principles provide the ethical foundation, outlining core values and setting the vision; policies translate these principles into specific, actionable rules, ensuring compliance and proper use of AI; and guidance offers detailed, practical advice for daily application, helping staff implement policies effectively. This structured approach ensures ethical, efficient AI integration aligned with institutional goals.

Does your institution already have a set of AI principles in place?  If not, then that should be the initial focus.  The Russell Group principles are a good starting point. For the purposes of this blog, we have assumed the following set of principles, which this blog is structured around:

  • We will provide our students/learners with the skills they need to make appropriate use of AI tools in their studies and thrive in an AI enabled workplace.
  • We will ensure staff have the skills to maximise the value of AI, to help reduce workload and support effective learning, teaching and administration.
  • We will ensure academic integrity is maintained, whilst allowing students/learners to develop the skills they need.
  • We will ensure use of AI is safe, ethical and responsible.
  • We will aim to ensure all learners have access to the AI tools that they need.

The pace of change in the field of AI presents particular challenges around planning.  Whatever approach you use, it needs sufficient flexibility to be able to accommodate and respond to the fast-paced advancements and changes inherent in the field of AI.

This involves not only staying informed about the latest developments but also having the flexibility within the plan to adjust in real-time as the technology evolves.  Striking the right balance between a well-structured approach and the agility to adapt as needed is key to success.

Staying informed about the latest developments in this dynamic field is crucial, yet it’s important to recognise the challenge of keeping abreast of every advancement. To manage this effectively, prioritise key areas for updates. Cultivating a culture of continuous learning, while assigning manageable responsibilities across the team, will help minimise stress and keep your operational plan current and focused.

For example, you might want to take a mandated approach to reviewing assessment, while simply encouraging use in learning and teaching.

We’ll now look in more detail at the specific areas of the plan, based on our principles, above.

We will provide our students/learners with the skills they need to make appropriate use of AI tools in their studies and thrive in an AI enabled workplace:

Our recent student discussions highlighted the following:

  • Students/learners want their institutions to help them gain the skills to make responsible and effective use of AI, not limited to specialised AI programmes.
  • Students/learners desire a focus on critical thinking and creativity in their education.
  • Students/learners are concerned about their employment prospects and want support from their institutions to prepare them for their futures in the AI enabled world.
  • Students/learners want clear and fair guidance on ethical AI use.

Consider how you might respond to these student needs: what can you plan to put in place immediately, in the short term, or over the longer term?

Tailoring your approach for different student/learner groups is essential, particularly for those under 18.  Learners under 18 will require special considerations regarding content suitability, ethical use, and data privacy.

For PhD students, the approach to integrating AI tools should be research oriented. These students require tools that can assist in complex research, data analysis, and in-depth study.

Questions to consider:

  • Do you have relevant policies and guidance in place?
  • If you have policies in place, are they clear and concise, with no ambiguity, for easy understanding?
  • Were students/learners involved in creating the guidance?
  • Are regular review points scheduled?
  • Are policies and guidance clearly signposted and easily visible to all students/learners?
  • Are critical thinking, creativity and other key employability skills embedded across the curriculum?
  • Are accessibility needs considered?
  • Do you have an approach to ensuring students gain the AI skills they need?

We will ensure staff have the skills to maximise the value of AI, to help reduce workload and support effective learning, teaching and administration:

You will probably need to take a different approach for different roles, but there is a core set of needs that apply to all staff.

Feedback from our recent interactions indicates that staff are looking for support to:

  • Develop their knowledge and skills on AI and in particular generative AI, giving them the skills to use the tools right now.
  • Understand the potential applications of AI tools to their work areas.
  • Learn what works well from existing best practice.
  • Stay up to date with the evolving world of AI.
  • Overcome difficulties in adapting to new technologies and AI systems.

All roles
Consider how best to support staff development in this rapidly evolving world.  How will you engage all staff in this key change?  Engaging staff in the development stage will ensure that you have staff buy in and commitment.  It is also crucial to address concerns staff may have regarding job security, emphasising AI as a tool to augment rather than replace.

Teaching staff needs:

  • Pedagogical integrity: Ensuring AI tools align with educational standards without compromising teaching quality.
  • Technical Proficiency: Addressing the challenge of integrating AI into teaching methods.
  • Student/Learner Engagement: Ensuring strategies are in place to keep student/learner interest and interaction in an AI-augmented learning environment.

Research staff needs:

In this blog we are focussing more on the challenges around generative AI and how it can save time in research administration, rather than the broader issue of AI in research.

  • Publishing process: How to comply with the AI policies of journals etc.

For teaching and research staff do you need to develop a whole new series of regular CPD sessions focused on specific areas of generative AI or can you embed sessions into existing learning and quality networks?

Professional services staff needs:

  • Increasing productivity: How to integrate generative AI tools in everyday operations.
  • Technical proficiency: The challenge of incorporating new technologies within their professional areas.
  • Data management: Challenges in managing and understanding data to ensure it’s ready for use by AI.

Is there a schedule to review and update existing policies and guidance to include AI and specifically generative AI considerations?   Can you encourage a culture of innovation and recognise and reward innovative uses of AI in workflows?

Questions to consider include:

  • Is there a structured programme of staff training in place?
  • Do staff have time to explore, experiment and understand AI tools?
  • Have staff been involved in developing policies and guidance?
  • Are regular reviews planned for all policy and guidance?
  • Do you have a network of mentors or AI champions?
  • Do you have a plan for staff to keep up to date in a manageable way?

We will ensure academic integrity is maintained, whilst allowing students/learners to develop the skills they need:

Since the announcement of ChatGPT in November 2022, there has been a focus on academic integrity and on how to ensure students/learners are not using generative AI tools in a way that breaches it.  Clear and unequivocal guidance is needed to prevent malpractice involving AI.  The QAA has provided guidance for HE, and for both higher education and further education general guidance has been provided by the Department for Education. In addition, the Joint Council for Qualifications has produced guidance on protecting the quality of qualifications.

Maintaining academic standards whilst responding to the needs of students/learners by adapting education to reflect the AI-enabled world they are living in is a challenge.  Areas of concern include:

  • Appropriate use of AI detection technology
  • Support to transform assessment to embrace AI
  • Integrating generative AI whilst maintaining intellectual growth

Questions to consider include:

  • Have you reviewed and, if necessary, updated your academic integrity policy?
  • Is it clear and explicit to students/learners without ambiguity?
  • Is there up to date assessment guidance in place for staff?
  • Are there regular reviews of awarding body and PSRB guidance?
  • If you are using AI detection tools, are training and guidance in place?
  • Are you supporting staff to review and redesign assessments for this AI enabled world?
  • Is there guidance on integrating AI across the curriculum?
  • Is good practice shared?

We will ensure use of AI is safe, ethical and responsible:

This section highlights the commitment to prioritise safety and ethical considerations in AI use, emphasising these concerns:

  • Both students/learners and staff are concerned about safe and fair use including data privacy and bias.
  • Anxiety around the proliferation of misinformation and increase in deep fake creation.
  • Concerns about understanding when, how and why AI tools are used, particularly around lack of transparency and explainability.
  • Apprehensions about Intellectual Property Rights (IPR) and copyright when using AI tools.
  • Governance staff are cautious of risks from AI tools gaining new capabilities and utilising new types of data; their key concern is understanding and mitigating generative AI’s inherent risks.

Questions to consider include:

  • Are there selection criteria in place in procurement to ensure that selected AI tools provide information about how they work and how data is used?
  • Do procurement criteria ensure no students/learners are disadvantaged by using inappropriate or ineffective AI tools?
  • Is there clear guidance in place for students/learners on how to contest the output of any AI system if they feel unfairly disadvantaged?
  • Do you plan to regularly review AI tools in use?
  • Are students/learners made aware of how, when, and why AI tools are used, particularly in the teaching space?
  • Is digital literacy embedded into the curriculum, teaching the skills needed to discern credible information sources?
  • Have your existing policies been reviewed and if necessary revised to incorporate the acceptable use of AI tools, specifically focusing on compliance to intellectual property rights?
  • Do you have guidance and training in place on acceptable use and intellectual property rights?
  • Are staff and students/learners aware of when data might be used for model training purposes?
  • Have risk logs been updated since the advent of generative AI, and are regular reviews planned?

We will aim to ensure all learners have access to the AI tools that they need:

AI tools can enhance equality by offering universal access to resources like proofreading and tailored learning formats. However, true equality depends on ensuring all students have equal access. While generative AI might seem free, those who can afford it often access a broader range of tools, gaining an advantage.

  • Are you working proactively to minimise the disadvantage created when those who can pay gain an advantage through access to a greater range of enhanced AI tools?

To support you in developing your own operational plan we have provided a downloadable checklist here.

We welcome your feedback and contributions.  If you have any other topics or viewpoints that you think we should consider in this discussion, please feel free to share your thoughts and comments below.  All comments are welcome.  Collaboration is crucial in enhancing our understanding and ensuring our approach to generative AI implementation is inclusive and impactful.


Find out more by visiting our Artificial Intelligence page to view publications and resources, join us for events and discover what AI has to offer through our range of interactive online demos.

For regular updates from the team sign up to our mailing list.

Get in touch with the team directly at
