Understanding AI in Education

AI Bias and Explainability Webinar Resources and Further Reading


In January 2023 we held a webinar to introduce the concepts of bias and explainability in AI, and how we should consider these when thinking of using AI tools in education.

This post is primarily for attendees and viewers of the webinar and aims to list the resources we mentioned, so they are presented largely without commentary.

We’ve also included a few extra useful resources that expand on some of the topics.

Context – our maturity model

We started by looking at how the topic fits into our maturity model for AI in education, and how our guide ‘A pathway towards responsible, ethical AI’ can help you consider risks, mitigations, and opportunities:

Bias In AIEd

We then explored what bias is, looked at some examples, and examined how it arises in AI. Here are some of the resources we drew on:

  • To explore bias in education in more detail we’d recommend: Algorithmic bias in Education (Baker and Hawn, 2021)

We briefly looked at an example of non-AI algorithmic bias – the UK’s 2020 A-level grading – to show that this kind of bias also exists outside of AI.

We then looked at some general examples of bias in AI:

We looked at more examples of bias in learning analytics, which we noted is one of the more mature uses of AI in education and therefore has many more documented examples. These are some we showed in the webinar:

  • Many algorithms predict higher course failure rates for African American students compared to white American students, but the results varied by college course. (Hu & Rangwala, 2020)
  • An algorithm incorrectly predicted that female students would perform better than male students at the undergraduate level. (Yu et al., 2020)
  • Massive open online courses (MOOC) dropout prediction algorithms performed worse for female students as compared to male students. (Gardner et al., 2019)
  • If a student’s personal background is taken into account, models that predict college achievement are more likely to anticipate unfavourable results for students from lower-income homes. (Yu et al., 2020)
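Findings like these are typically surfaced by comparing a model’s error rates across demographic groups. As a minimal sketch of that idea (the records, group labels, and numbers below are invented purely for illustration, not taken from any of the studies above), comparing false negative rates between two groups in a dropout-prediction setting might look like:

```python
# Hypothetical predictions from a dropout-prediction model.
# Each record: (group, actually_dropped_out, predicted_to_drop_out)
records = [
    ("A", True, True), ("A", True, False), ("A", False, False), ("A", True, True),
    ("B", True, False), ("B", True, False), ("B", False, False), ("B", True, True),
]

def false_negative_rate(records, group):
    """Share of actual dropouts the model missed, within one group."""
    actual = [r for r in records if r[0] == group and r[1]]
    missed = [r for r in actual if not r[2]]
    return len(missed) / len(actual)

fnr_a = false_negative_rate(records, "A")  # misses 1 of 3 dropouts
fnr_b = false_negative_rate(records, "B")  # misses 2 of 3 dropouts
print(f"Group A FNR: {fnr_a:.2f}, Group B FNR: {fnr_b:.2f}")
```

In this toy data the model misses twice as many at-risk students in group B as in group A – exactly the kind of disparity the papers above measure, though real analyses use larger samples and several fairness metrics, not just one.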

We then looked at a resource developed by the Institute for Ethical AI in Education, which covers questions to ask suppliers about bias.

Exploring Explainability

We explored a diagram showing interpretable models, taken from an Amazon Web Services document:
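To make “interpretable” concrete: in a linear model, the contribution of each input to a prediction can be read directly from its coefficients, which is why such models sit at the interpretable end of diagrams like this. A minimal sketch (the feature names and weights are invented for illustration, not from any real system):

```python
# A toy interpretable model: a linear score over named features.
# Weights and feature names are invented for illustration only.
weights = {"attendance_rate": 2.0, "assignments_submitted": 1.5, "forum_posts": 0.5}
intercept = -1.0

def predict_with_explanation(features):
    """Return a score plus each feature's individual contribution to it."""
    contributions = {name: weights[name] * value for name, value in features.items()}
    score = intercept + sum(contributions.values())
    return score, contributions

score, why = predict_with_explanation(
    {"attendance_rate": 0.9, "assignments_submitted": 0.8, "forum_posts": 0.2}
)
print(score, why)
```

Because each contribution is visible, we can say exactly why the model scored a student the way it did – something a deep neural network cannot offer without additional explainability tooling.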

And the tool we used to show what a system with no explainability features looks like:

As part of this, we also looked at a tool that appeared to be a version of ChatGPT that could explain its sources, but in fact it was doing something very different.

And for those who want to dig much deeper into explainability in education, a paper we have mentioned before:

Exploring ChatGPT/GPT-3

We then explored ChatGPT in more detail. We’ll produce a separate blog post on this, so for now we’ll just list the resources mentioned in the webinar.

Some of the research on bias in GPT-3

We looked at some of the research into bias in GPT-3, noting that because GPT-3 was released in 2020 there is much more research into it than into ChatGPT; these are just some examples. The last is the one for which we showed an example we had reproduced ourselves.

The moderation API

We ended by exploring the moderation API, which can also introduce bias. We’ve included the link to the Time report on the treatment of the workers involved in the data labelling.
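For readers curious what calling such an endpoint involves, here is a hedged sketch: it only assembles the HTTP request for OpenAI’s moderation endpoint as publicly documented at the time of writing, and does not send it. The endpoint path and field names should be checked against the current OpenAI API reference before use.

```python
import json

def build_moderation_request(text, api_key):
    """Assemble (url, headers, body) for a moderation call without sending it."""
    url = "https://api.openai.com/v1/moderations"
    headers = {
        "Content-Type": "application/json",
        "Authorization": f"Bearer {api_key}",
    }
    # The API classifies the submitted text against its content categories;
    # which categories exist, and how they were labelled, is where bias can enter.
    body = json.dumps({"input": text})
    return url, headers, body

url, headers, body = build_moderation_request("some text to check", "YOUR_API_KEY")
```

The key point for bias is not the request itself but what happens server-side: the moderation model’s judgements reflect the human labelling decisions behind its training data, which is why the labelling workforce discussed in the Time report matters.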


As noted at the start, this post is intended to support our webinar on bias and explainability rather than as a standalone post. Let us know if we have missed any references that you’d like further information on.


Find out more by visiting our National centre for AI page to view publications and resources, join us for events and discover what AI has to offer through our range of interactive online demos.

For regular updates from the NCAI sign up to our mailing list.

Get in touch with the team directly at

