At the Jisc AI Team we’re piloting a number of promising AI solutions.
As well as allowing us to evaluate different solutions, running these pilots gives us an opportunity to learn more about how to develop and deploy AI effectively.
In this blog series – Chatbot Pilots: Learning Together – we want to give members a window into the questions we’re grappling with as we run the chatbot pilot.
In most cases, we won’t yet have definitive answers to give. That said, we very much hope this blog series will offer useful insights into what’s involved in developing and deploying chatbots.
Developing and maintaining high-quality question and answer sets
The chatbot we are piloting has been set up so that students can ask it questions and receive appropriate answers in response. By performing this simple yet effective task, the chatbot could benefit colleges and universities that handle high volumes of student queries.
Chatbots like this have three core components:
- A user interface (anything from an on-screen pop-up to a holographic avatar)
- A chatbot engine (which uses Natural Language Processing to work out what a user is asking)
- A question-and-answer set
A chatbot’s quality hinges on the performance of each of these components. None are dispensable.
For instance, an effective chatbot engine might be able to work out that a student asking:
“What is the maximum number of courses on which I can enrol?”
…is asking a question that’s equivalent to:
“How many courses can I enrol in?” or “What’s the total number of courses I can take?”
If this cluster of equivalent questions (along with the appropriate answer) has been included within the question-and-answer set, then the chatbot will be able to give its response.
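To make this concrete, here is a minimal sketch of how a question-and-answer set might be structured, with a naive token-overlap matcher standing in for the real Natural Language Processing engine. The example cluster, the answer text, and the matching threshold are all illustrative assumptions on our part, not details of the chatbot we are piloting:

```python
# A minimal sketch (not the pilot's actual engine): a question-and-answer
# set stored as clusters of equivalent questions, with a naive token-overlap
# matcher standing in for real Natural Language Processing.
QA_SET = [
    {
        "questions": [
            "What is the maximum number of courses on which I can enrol?",
            "How many courses can I enrol in?",
            "What's the total number of courses I can take?",
        ],
        # Hypothetical answer, written purely for illustration.
        "answer": "Full-time students can enrol in up to six courses per "
                  "year. See the enrolment guide for part-time limits.",
    },
]

def tokens(text: str) -> set[str]:
    """Lower-case a question and split it into a set of word tokens."""
    return set(text.lower().replace("?", "").replace("'", " ").split())

def best_answer(user_question: str, qa_set=QA_SET, threshold=0.3):
    """Return the answer whose question cluster best overlaps the user's
    question, or None if nothing clears the threshold."""
    user = tokens(user_question)
    best, best_score = None, threshold
    for entry in qa_set:
        for question in entry["questions"]:
            cluster = tokens(question)
            score = len(user & cluster) / len(user | cluster)  # Jaccard similarity
            if score > best_score:
                best, best_score = entry["answer"], score
    return best

print(best_answer("How many courses can I take?"))
```

A real engine would use far more robust language processing than token overlap, but even this toy version shows why the question clusters and the written answers matter as much as the engine itself.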
But what if an unhelpful answer has been written into the question-and-answer set? Would you be satisfied if you were given the following response to this question?
“The total number of courses you can take depends on what courses you are taking, and on other factors related to your situation.”
…probably not.
Quality content is key
We have been considering two core factors whilst developing the chatbot’s question-and-answer sets:
- Comprehensiveness
- Answer quality
A comprehensive question-and-answer set will anticipate the full range of questions a student is likely to ask. And whilst a chatbot doesn’t need to know the answer to every question, its question-and-answer set should be comprehensive enough to allow it to do its job. If a chatbot is being used to support university admissions, for instance, then it needs to be able to answer the majority of questions that students might reasonably ask on this topic.
A quality answer will address a user’s question directly and in full. It will be clear and concise, whilst including all the most salient information. It will also be written in an appropriate tone (which will vary from context to context) and be short enough to fit into the chatbot’s user interface.
If a chatbot’s question-and-answer set hasn’t been set up to provide quality answers, the chatbot is unlikely to be well-received by users.
Developing comprehensive question-and-answer sets
A number of approaches can be used to help ensure that question-and-answer sets are as comprehensive as possible.
These include:
- Involving students in the development of the question-and-answer sets
- Pooling question-and-answer sets from different institutions
- Building in feedback mechanisms
As part of the pilot, we are particularly interested in learning how feedback mechanisms can be used to continuously improve question-and-answer sets. For instance, we have developed a student testing platform that may help us identify gaps in the current question-and-answer set. On this platform, students are given prompts, such as “ask me a question about exams” or “what question would you ask if you’d forgotten your password?”. Students can then give feedback on whether the chatbot’s response was appropriate, as well as broader feedback on the chatbot’s performance.
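As a rough illustration, the kind of record such a testing platform might capture could look like the following. The field names here are our own assumption, not the pilot platform’s actual schema:

```python
# A minimal sketch of the kind of record a student testing platform might
# capture; the fields are illustrative, not the pilot platform's schema.
from dataclasses import dataclass

@dataclass
class TestFeedback:
    prompt: str            # e.g. "ask me a question about exams"
    question_asked: str    # what the student actually typed
    chatbot_response: str  # the answer (or fallback) the chatbot gave
    appropriate: bool      # did the student judge the response appropriate?
    comments: str = ""     # optional broader feedback on performance

record = TestFeedback(
    prompt="ask me a question about exams",
    question_asked="Can I resit an exam I failed?",
    chatbot_response="Sorry, I don't know the answer to that yet.",
    appropriate=False,
    comments="The bot should at least link to the exams page.",
)
```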
Another potential feedback mechanism is to make use of data from the chatbot engine (including all questions students asked, and whether they got a response). By analysing this data, educational institutions could potentially identify questions that are frequently being asked but that are not yet reflected in the question-and-answer set.
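Here is a hedged sketch of that log-mining idea, assuming the chatbot engine exports a simple log of questions and whether each one was matched (the field names are illustrative):

```python
# Mining the engine's question log for gaps in the question-and-answer set.
# We assume a log of (question, matched) records; the schema is our own
# illustration, not the pilot engine's actual export format.
from collections import Counter

log = [
    {"question": "how do i reset my password", "matched": True},
    {"question": "when is the enrolment deadline", "matched": False},
    {"question": "when is the enrolment deadline", "matched": False},
    {"question": "where is the library", "matched": True},
]

# Count the questions that got no response; frequently asked ones are
# candidate gaps in the question-and-answer set.
unanswered = Counter(
    entry["question"] for entry in log if not entry["matched"]
)
for question, count in unanswered.most_common(10):
    print(f"{count:>3}  {question}")
```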
Through the pilot, we hope to learn more about which mechanisms work best in practice (and how to make them work well).
Giving quality answers
From what we’ve learned so far, we have a few tips on how to write quality answers.
- Where possible and appropriate, include keywords from the question in the answer. Doing so can make interactions with the chatbot seem more natural and conversational. And it also makes it easier for users to establish whether the response is actually answering their question, rather than another question.
- Check whether the answer assumes prior knowledge that users may not have. For instance, if a question about enrolment refers to a handbook or guide (which a new student may not be aware of), it may be helpful to link to that document.
- College style guides are a good place to start for deciding on the chatbot’s tone and style.
We have also identified several questions around what makes a good-quality answer.
For one thing, we have seen a tension between answers being short and snappy, and answers giving all the relevant information. In some contexts, more comprehensive answers seem preferable. But in other cases, brief answers that give links to further information appear to be more appropriate. At the moment, we can make educated guesses about what students will prefer in which contexts, but through talking to students as part of the pilot evaluation, we hope to learn more about what types of responses are most satisfactory.
Moving forward
The chatbot has not yet gone live with students. As students begin to use it, we will learn more about how to develop effective content. This will no doubt lead to further questions. We’ll keep you updated as we learn.
Please do contact us at ai@jisc.ac.uk if you have any questions or comments on chatbots (or on any other relevant matter).
Find out more by visiting our Artificial Intelligence page to view publications and resources, join us for events and discover what AI has to offer through our range of interactive online demos.
For regular updates from the team sign up to our mailing list.