AI Procurement Due Diligence


Why AI Due Diligence is Crucial for Education and Research

Artificial intelligence (AI) holds immense potential to revolutionize the educational landscape by enhancing teaching methods, personalizing learning experiences, and streamlining administrative tasks. These advancements can lead to more effective learning environments, tailored educational pathways for students, and more efficient operations within educational institutions. However, the implementation of AI in education also brings significant ethical, societal, and legal considerations that must be addressed. Concerns such as privacy, bias, accountability, research integrity, and intellectual property (IP) rights become particularly critical in the context of AI, as these technologies often rely on vast amounts of data and complex algorithms that can inadvertently perpetuate or exacerbate existing biases and privacy issues.

Proper due diligence is essential to navigate these challenges, especially given that comprehensive legislation or guidelines for AI use in education are still emerging. Institutions and procurement bodies need clear and detailed information on the utilization of generative AI solutions or features, including their development processes, the models they employ, and the downstream use of inputs and information.

This is included in Jisc’s AI Maturity model. As institutions move to the ‘embedded’ stage, we expect appropriate processes to be in place for the entire lifecycle of AI products, including procurement.

This detailed scrutiny aims to facilitate a better understanding and mitigation of potential risks associated with AI deployment. Additionally, it is crucial to ensure that the use of AI in educational and research settings does not infringe on IP rights and that the data used in AI models is appropriately managed to maintain research integrity and protect proprietary information.

As highlighted in the blog post “Regulating the Future: AI and Governance” by our Jisc colleague Manya Sikombe, the overall objective is to strike a delicate balance between fostering innovation and ensuring the responsible development and use of AI. This involves implementing robust frameworks and practices that promote transparency, accountability, and fairness in AI applications, thereby protecting the interests of all stakeholders involved. By addressing these ethical and societal considerations proactively, we can harness the transformative power of AI in education while safeguarding against its potential pitfalls. Furthermore, ensuring that AI systems respect IP rights and uphold research standards is vital for maintaining the trust and integrity of academic institutions.

In addition to ethical concerns, the integration of AI in education and research must also consider the implications for IP rights. AI systems that generate content or analyze data must do so in a way that respects the ownership and proprietary nature of academic and research outputs. Institutions must be vigilant in protecting their intellectual property and ensuring that AI tools do not inadvertently lead to IP violations. This includes establishing clear guidelines on how AI can be used in research, ensuring proper attribution, and maintaining control over how AI-generated content is disseminated and utilized. We also recommend the blog post by another Jisc colleague, Ben Taplin, “Guidance on resisting restrictive AI clauses in licences”. It highlights how such clauses can impede legitimate educational and research activities and emphasizes the importance of a unified sector stance to strengthen negotiations with publishers.

By implementing comprehensive due diligence processes and robust regulatory frameworks, we can ensure that AI is used responsibly in education and research. This will enable us to maximize the benefits of AI while minimizing its risks, fostering an environment where innovation can thrive alongside ethical and legal integrity.

Key Questions

Jisc’s Due Diligence questions are used by our Procurement and Supplier Management team, as well as other Jisc teams that negotiate and license the agreements institutions need to support academic research, teaching and learning, and corporate needs. This rigorous due diligence process is applied across various procurement methods, including frameworks, Dynamic Purchasing Systems (DPS), and direct brokered negotiations. The information requested and analysed covers areas such as supplier information, financial stability, insurance coverage, modern slavery compliance, information security, and data protection. By thoroughly vetting these aspects, we aim to ensure that any solutions are not only innovative and effective but also ethical and compliant with all relevant regulations and standards.

Jisc Licensing has worked with the Artificial Intelligence (AI) team and the Procurement and Supplier Management team to develop these questions, which were kindly reviewed by our strategic groups. They are intended to be dynamic and will be reviewed to reflect advances in technology or legislation. Please inform your relationship manager if your institution would like to receive any news related to the update of these questions.

The following questions, created using the sources noted in parentheses after each one, should be answered by the developers or representatives of the AI system or solution being assessed:

1 Outline which AI features of your system use third-party AI models, and which use your own proprietary or in-house AI models. Please provide details of any third-party technologies used, including the name of the provider and an outline of the features used. Note that for major suppliers in the LLM supply chain, such as OpenAI, Google DeepMind, Anthropic, etc., due diligence should be conducted separately; there is no need to request information about them from all third-party providers built on these large language models.
2 Where you are either creating your own model or fine-tuning a third-party model, how is performance defined and measured? Include details of initial training and monitoring over time.

(UK AI Principle: Safety, security and robustness)

3 What data do your AI models require for initial training or fine-tuning? If you are using third-party models, you should only describe data that is unique to your application.

(UK AI Principle: Safety, security and robustness)

4a/4b Is data from user interactions with the system used to enhance model performance? If so, please elaborate on the mechanisms involved. Furthermore, please clarify whether institutional data is integrated into external models.

(UK AI Principle: Safety, security and robustness)

5 What features does your solution have to make it clear when the user is interacting with an AI tool or AI features?  

(UK AI Principle: Safety, security and robustness)

6 Could you please provide comprehensive information about the safety features and protections integrated into your solution to ensure safe and accessible use by all users, including those with accessibility needs and special education requirements?  

(UK AI Principle: Safety, security and robustness)

7 Can you specify any special considerations or features tailored for users under the legal age of majority?

(UK AI Principle: Safety, security and robustness)
8 What explainability features does your AI system provide for its decisions or recommendations?

(UK AI Principle: Safety, security and robustness)

9 What steps are taken to minimize bias within models you either create or fine-tune?

(UK AI Principle: Fairness)

10 Does your company have a public statement on Trustworthy AI or Responsible AI? Please link to it here.  

(UK AI Principle: Accountability and governance)

11/11a/11b/11c Does your solution promote research, organizational or educational use by:

A) Not restricting the use of parts of your solution within AI tools and services;

B) Not preventing institutions from making licensed solutions fully accessible to all authorized users in any legal manner;

C) Not introducing new liability on institutions, or requiring an institution to indemnify you, especially in relation to the actions of authorized users?

(Gartner, Inc, ICOLC statement and legal advice obtained by Jisc)

12 Does your solution adequately protect against institutional intellectual property (IP) infringement, including scenarios where third parties are given access to and may harvest institutional IP?

(Gartner, Inc and ICOLC statement)
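As a purely illustrative sketch (not part of Jisc's guidance), an institution tracking supplier responses could encode the questions above as structured data; the `Question` data model, field names, and the `unanswered` helper below are hypothetical, while the question references and principle tags are taken from the list above:

```python
# Hypothetical sketch: representing the due diligence questions as structured
# records so an institution can track which supplier answers are outstanding.
from dataclasses import dataclass


@dataclass
class Question:
    ref: str          # question number from the list above, e.g. "4a/4b"
    text: str         # abbreviated question text
    basis: str        # UK AI principle or other source noted in parentheses
    answer: str = ""  # supplier's response, filled in during the review


checklist = [
    Question("1", "Which AI features use third-party models vs. your own?",
             "UK AI Principle: Safety, security and robustness"),
    Question("9", "What steps minimize bias in models you create or fine-tune?",
             "UK AI Principle: Fairness"),
    Question("12", "Does your solution protect against institutional IP infringement?",
             "Gartner, Inc and ICOLC statement"),
]


def unanswered(items):
    """Return the refs of questions still awaiting a supplier response."""
    return [q.ref for q in items if not q.answer]


# Record one supplier response, then list what remains open.
checklist[0].answer = "Summarisation uses a third-party LLM; search ranking is in-house."
print(unanswered(checklist))  # → ['9', '12']
```

The structure also makes it straightforward to group outstanding questions by principle when reporting back to a procurement team.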

Collaboration and Transparency: The Path Forward

The integration of new technologies and resources in education and research requires a collaborative approach to ensure these tools are negotiated, developed and deployed responsibly. By addressing key questions and involving stakeholders from various sectors, institutions can make informed decisions that align with ethical guidelines and protect the interests of researchers, content providers, students, and educators. Participation in Jisc’s online communities, open members’ meetings and events, surveys, and strategic groups is essential in this collaborative effort.

Together, we can embrace AI with confidence and ensure it serves as a force for good in education.


Find out more by visiting our Artificial Intelligence page to view publications and resources, join us for events and discover what AI has to offer through our range of interactive online demos.

For regular updates from the team sign up to our mailing list.

Get in touch with the team directly at AI@jisc.ac.uk

By Luciana Piccoli and Michael Webb

Head of Corporate Information Systems, Jisc Licensing
Director of Technology and Analytics, AI Team
