
Pilots going forward

We’d like to share some thoughts about the way forward with pilots.  First, we’d like to thank everyone who’s expressed interest or taken part – they have been every bit as successful as we hoped, and then some.

They have proven to be extremely popular – almost too popular. We now get far more expressions of interest than when we started three years ago (anyone would think AI was a hot topic!), which unfortunately also means more unsuccessful applicants. We are always impressed by the quality of the applications, but there is a real downside to this popularity – it means many great applications aren't successful. I worked in education institutions for many years before joining Jisc, so I've been on the other side of the fence and appreciate how frustrating and time-consuming applying can be.

We thought it would be useful to share some stats about the pilots:

  • Total pilots: 9
  • Total applications: 222
  • Total unique institutions applying: 167
  • Pilot spaces available: 51
  • Unique institutions accepted: 47
  • Unique institutions applied but never successful: 120
  • Institutions unsuccessful more than once with no pilot: 20

We have given a lot of thought to how we can open pilots to more people going forward, and how to reward those who have spent time and effort on submissions so far.

It’s worth going back over our initial aims for the pilot:

  • The overarching aim was to find out how AI could be used effectively in multiple institutions. At the time, nobody had access to AI tools, so we had to find a way to provide them so we could start learning.
  • The second aim was to identify which of those tools were most useful and work out how to make them more widely available, typically through Jisc licensing.

We are happy to be completely transparent about how this has worked. We talk a lot to institutions to understand what issues you face and what your priorities are, both through the wider Jisc processes and by talking to staff at all levels when we visit. We also initially ran a series of what we called 'Deep Dives' to gain more complete insights.

We also scan the market looking for any interesting applications that might help address these priorities.  We then talk to the company providing the tool to get access to fully working accounts and test them thoroughly to see if they do what they say and look like a good fit. Quite a few don’t pass this stage, or they show early promise but aren’t quite ready.

We then match the best products to the institutional priorities we see. We work to a fixed budget for this, so we negotiate the best pilot deal we can within it (typically around £10k). This is why the number of participants varies between pilots. We pay for the licenses to avoid participants getting held up by procurement processes, and we pay rather than request free licenses because we expect good commitments from our pilot partner companies.

When we first started, interest in AI was still fairly limited, and it was feasible for us to just ask who was interested and arrange a short meeting to find the most suitable matches.  As AI became a hot topic, and more and more people engaged with our work, that was no longer feasible, and we had to move to a more formal form-based approach that you see today.  The big downside here is that it requires a lot more time and effort on your part, and as the number of applicants increased, the chance of this being rewarded decreased.

So going forward we are proposing two things:

  • New pilot types that more institutions can join
  • For the current style of pilots, prioritising existing applicants first

New types of pilots – ‘practice based’

One of the reasons for the current pilot model is that initially we needed to provide institutions with AI tools if we were to learn about and evaluate effective use. This is no longer the case: you all have access to a range of tools, be it ChatGPT, Google Gemini (formerly Bard) or Microsoft Copilot (formerly Bing). We think there would be a lot of value in pilots using these tools for particular activities. This might be something around creating learning resources, assessment, or perhaps an administrative process. We'd create a set of activities and resources to support the programme, run, for example, a series of webinars, pilot events and community meetings – all online – and evaluate the outcome as we do at the moment.

We see two big advantages of this:

  • Everyone who can meet the required commitments to the project can join, as the scaling factor of licenses is removed, and we are all used to large online events, so these can scale, even the interactive ones.
  • There's a much lower barrier to taking successful pilots forward in your institution, as there are no license costs.

For current style of pilots, prioritising existing applicants first

We still see a lot of value in the existing pilots, as not everything will be covered by tools you already have. We also have a big pool of interested institutions who have already applied for previous pilots, many more than once. So instead of putting out a call as we do now, for the time being we are proposing to contact previous applicants and see if the pilots would be of interest. We'd do this based on:

  • First, those that have applied the most and not been successful. As I said at the start, we get so many great applications (in fact I'm happy to say that all the applications are good) that this won't in any way reduce the quality of pilots.
  • There are still a lot of those – more than we can fit into one, or even two, pilots. So we then offer places to those that applied earliest for the earlier pilots.
  • Once we get there, we will still have a few that are tied, so we may have to select at random.
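The ordering described above amounts to a simple sort with a random tie-break. As a minimal sketch (the field names here are illustrative, not Jisc's actual data or tooling): shuffle first so any institutions still tied after both criteria end up in random order, then sort by number of unsuccessful applications (most first) and by earliest previous application.

```python
import random

def prioritise(applicants, seed=None):
    """Order previous applicants for pilot invitations.

    `applicants` is a list of dicts with hypothetical keys:
      - "name": institution name
      - "unsuccessful": number of unsuccessful applications so far
      - "first_applied": sequence number of their earliest application
    """
    rng = random.Random(seed)
    shuffled = applicants[:]   # copy so the caller's list is untouched
    rng.shuffle(shuffled)      # a stable sort then leaves ties in random order
    # Most unsuccessful applications first, then earliest first application.
    return sorted(shuffled, key=lambda a: (-a["unsuccessful"], a["first_applied"]))
```

Because Python's sort is stable, the initial shuffle only affects institutions whose two sort keys are identical, which is exactly the "select at random" case above.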

We’ll contact you, see if you are interested in the next pilot, and if you are we’ll collect a little bit of information as we do currently, but as long as you can meet the time commitment and have a senior sponsor, you will be in.  If that pilot isn’t for you, you’ll just move to the top of the list for the next pilot.

We've got 17 institutions who have applied more than once, so we will go through this process for a few cycles.

We realise this closes the door to new applicants for a while for the existing style of pilots, but the new 'practice based' pilots will be open to all, and we think it is fair to reward the effort already put in by those that have applied.

Feedback

We think this gives a good mix of updating the pilots to follow the latest developments in AI, maximising the number of participants, and reducing the time and effort required on your part.

We'd like feedback though, especially on whether the new 'practice based' type of pilot sounds of interest. Feedback either in the comments or to NCAI@jisc.ac.uk would be welcome.


Find out more by visiting our Artificial Intelligence page to view publications and resources, join us for events and discover what AI has to offer through our range of interactive online demos.

For regular updates from the team sign up to our mailing list.

Get in touch with the team directly at AI@jisc.ac.uk

By Michael Webb

I co-lead our AI team. We support the responsible and effective adoption of artificial intelligence across the tertiary education sector through a range of pilots, advice, guidance, and community support activities. Before joining Jisc I worked in the university sector, leading IT and educational technology services. Since joining Jisc, I have worked on a wide range of projects, including leading the development of our learning analytics service.
