Figure 1 – Andrew’s examples of how applications of facial recognition fit into his Creepiness Matrix
When Jisc’s self-paced Artificial Intelligence and Ethics course was first launched back in 2021, ChatGPT hadn’t yet appeared on the scene. OpenAI was still a niche name. Generative AI tools weren’t generating headlines—or policies. So you’d be forgiven for wondering whether a course developed pre-ChatGPT might feel out of date.
In practice, the opposite is true.
Created by the late Andrew Cormack, Jisc’s former chief regulatory advisor, this self-paced module offers something that’s arguably even more important today than it was when it launched: space to think.
Figure 2 – Andrew’s examples of how language-oriented AI tools fit into the creepiness matrix
The course covers foundational ideas that have only grown more relevant with the rise of generative AI. One particularly enduring example is Andrew’s creepiness matrix, a deceptively simple tool designed to help us think through how comfortable—or not—we should feel about different uses of AI. It asks us to weigh up factors like the visibility of the system, the amount of user control, and the scope of its function.
That framework has sparked fresh conversation in the course forums, especially now that tools like ChatGPT and Gemini have shifted from novelties to near-daily companions for many. At first glance, these tools might seem to tick the “low-creepiness” boxes: they’re visible, time-limited, and single-function—open a chat, ask a question, close it when you’re done.
But as participants have pointed out, the reality is a little murkier.
Several contributors have reflected on the fact that newer iterations of generative AI systems have begun to remember parts of conversations or carry context across sessions—features that add convenience but also reduce user control. Others have raised concerns about how prompts and inputs might be stored or used to train future models, feeding a growing sense of uncertainty about where the boundaries of the tool actually lie.
Despite being a product of the pre-GPT era, Andrew’s matrix still holds up as a lens through which to explore these grey areas. It gives us a vocabulary—and a mindset—for asking more searching questions of the tools that permeate our daily lives.
Another recurring thread in the forum discussions relates to agency: who’s really in charge when AI is involved?
Andrew’s original course framing explores whether AI is used as a human assistant or a de facto decision-maker. That’s an especially timely question when it comes to generative tools. Used without reflection, these tools make it easy to fall into a “content bashing” mode: let the AI do the work, copy and paste, job done. But as many participants have shared, the most powerful uses of generative AI are those where the user remains firmly in the driving seat.
It’s not about asking the tool to create something for you. It’s about co-creating with it: asking, shaping, challenging, editing. There’s nothing passive about it. And that active, discerning role is exactly what the module encourages throughout.
Final thoughts
It’s easy to feel like everything in AI is changing fast, and much of it is. But the reasons we care about AI, the values we bring to its use, and the questions we ask of it remain surprisingly constant. Andrew Cormack might have written this course before the generative AI boom, but its usefulness remains intact.
If you’re a Jisc member, you can register for the module now.
Find out more by visiting our Artificial Intelligence page to view publications and resources, join us for events and discover what AI has to offer through our range of interactive online demos.
Join our AI in Education communities to stay up to date and engage with other members.
Get in touch with the team directly at AI@jisc.ac.uk