Exploring innovations in assessment

Introducing AI into the assessment process could bring numerous advantages. In the sphere of formative assessment, using AI to partially automate marking and feedback could make educators’ workloads more manageable. Meanwhile, for high-stakes assessment, AI could provide an alternative to summative examinations, which, it has been argued, are not always the most effective way to assess the range of skills and attributes desired of students.

To understand more about how learners and educators are currently benefiting from innovations in assessment, we spoke to Dr Karen Henderson about the University of the West of England’s (UWE Bristol) e-assessment system, Dewis.

Developed by a team led by Dr Rhys Gwynllyw, Dewis was launched in 2006 after UWE Bristol resolved to build a robust assessment system in-house, partly out of a desire not to be locked in to any particular commercial provider. Since then, the system has been used to assess students’ work (both summatively and formatively) across a range of STEM and Business School courses – and it has been well received by students, who frequently request that it be rolled out across more courses.

Dewis uses an algorithmic approach, which allows for both the generation of questions (based on examiner-set parameters) and the marking of the students’ final answers. Examiners can set up the marking algorithm for any particular set of questions, and then choose the method by which the system recognises a correct answer.

In some cases, the algorithm simply compares a student’s answer to a unique correct answer; in other cases, the system analyses the student’s answer to verify whether or not it is correct.
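
As an illustration, here is a minimal sketch of how parameterised question generation and algorithmic marking might fit together. The question template, parameter ranges, tolerance and function names are assumptions made for illustration, not Dewis’s actual implementation:

```python
import random

def generate_question(seed=None):
    """Generate one question instance from examiner-set parameter ranges."""
    rng = random.Random(seed)
    a = rng.randint(2, 9)    # illustrative examiner-set parameter ranges
    b = rng.randint(10, 50)
    question = f"Solve for x: {a}x = {b}"
    correct_answer = b / a
    return question, correct_answer

def mark_answer(student_answer, correct_answer, tolerance=1e-3):
    """Stand-in for the examiner-configured marking algorithm: here, a
    simple numeric comparison within a tolerance."""
    return abs(student_answer - correct_answer) <= tolerance

question, answer = generate_question()
print(question)
print(mark_answer(answer, answer))      # True: the exact answer is accepted
print(mark_answer(answer + 1, answer))  # False
```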

When it comes to creating assessments, Dewis automatically generates questions that meet criteria set by an examiner. As Dr Henderson notes, this can save staff “oodles of time”.

In addition to changing how students are assessed, Dewis also allows for subtle shifts in what is being assessed. Whereas students might ordinarily be graded solely based on their overall scores in assessments, with Dewis, factors such as students’ levels of engagement can also be taken into account.

With some assessments, in order to incentivise and reward perseverance, students are able to reattempt questions until they reach the correct answer. They then receive a separate engagement score, which, along with their attainment score, contributes to their final mark. In other cases, students may be given marks on a sliding scale for reattempts (e.g. 100% of the mark for a correct answer on the first attempt, 90% on the second attempt, and so on), as sketched below.
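
Here is a small sketch of both schemes in code, assuming the 10% decrement from the example above and a purely illustrative 80/20 weighting between attainment and engagement:

```python
def sliding_scale_mark(attempts_to_correct, decrement=0.10):
    """Fraction of the mark awarded, given the attempt on which the
    student first answered correctly (1 = first attempt)."""
    if attempts_to_correct < 1:
        raise ValueError("attempts_to_correct must be at least 1")
    return max(0.0, 1.0 - decrement * (attempts_to_correct - 1))

def final_mark(attainment, engagement, engagement_weight=0.2):
    """Blend attainment and engagement scores (both on a 0-1 scale).
    The 80/20 weighting is an assumption, not Dewis's actual scheme."""
    return (1 - engagement_weight) * attainment + engagement_weight * engagement

print(sliding_scale_mark(1))   # 1.0 -> 100% of the mark, first attempt
print(sliding_scale_mark(2))   # 0.9 -> 90%, second attempt
print(final_mark(attainment=0.75, engagement=1.0))  # 0.8
```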

Looking to the broader benefits of AI-based assessments, Professor Rose Luckin, an expert in the applications of AI in education and assessment, has argued that one of their key advantages is that skills such as persistence and motivation could be measured and recognised to a greater extent [1]. Given that such metacognitive skills are becoming increasingly valued by employers, the rationale for capturing them within the assessment process is clear. With Dewis, we get a glimpse of how this might be achieved at scale.

Putting aside the potential benefits of innovating the assessment process, there are reasons to question whether it is desirable to make assessments too reliant on technology. Wherever assessments are completed digitally, there could, for instance, be issues with submissions not saving correctly and getting lost. The creators of Dewis, however, have a clear response to this issue.

Dewis operates under the principle of lossless data. One example of how this works: as students attempt their assessments, Dewis constructs an encoded string locally, containing all data and metadata (including timestamps) relating to the student’s work. If a student has trouble submitting their answers (perhaps because of a poor internet connection), Dewis presents them with this encoded string; once their connection recovers, they can email the string as an alternative method of submission and as proof of work.
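
To make the idea concrete, here is a minimal sketch of such an encoded string, assuming a JSON payload and base64 encoding; the actual format Dewis uses is not specified here:

```python
import base64
import json
import time

def encode_attempt(student_id, answers):
    """Bundle the student's answers and metadata (including a timestamp)
    into a single string that can be emailed if submission fails."""
    payload = {
        "student_id": student_id,
        "answers": answers,
        "timestamp": time.time(),
    }
    raw = json.dumps(payload).encode("utf-8")
    return base64.b64encode(raw).decode("ascii")

def decode_attempt(encoded):
    """Staff-side recovery of the original data from the emailed string."""
    return json.loads(base64.b64decode(encoded))

token = encode_attempt("s1234567", {"Q1": 5.67, "Q2": "x = 3"})
print(token)                  # the opaque string shown to the student
print(decode_attempt(token))  # nothing is lost in the round trip
```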

This feature, Dr Henderson explains, can be particularly useful where students are unable, through no fault of their own, to press submit within the timeframe allowed for assessments – even if they have done the work in the allotted time.

Dewis also has a decisive answer to the question of how people can trust algorithmic technologies to mark students’ work when there is the potential for inaccuracies and incorrect marks.

After students’ marks and grades have been given, they can go back into the system and see all the work they submitted along with the marks given for each question (another manifestation of the lossless data principle).

Not only does this engender trust in Dewis amongst students and staff, but it also makes it easier for students to query and challenge their marks than is possible with work submitted on paper.

For the past 16 years, both staff and students at UWE Bristol have benefited from e-assessments on Dewis. Looking to the future, there are a number of exciting developments in the pipeline, which could further enhance the assessment process.

One such development is the incorporation of increased levels of personalised feedback for common student errors (CSEs). Graduate tutor and PhD candidate Indunil Sikurajapathi is conducting research to identify the common errors students make on Dewis, focusing on first-year Engineering Mathematics e-assessment questions. To date, she has detected 65 CSEs, and this research is ongoing.
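
One way such feedback could work, sketched below, is to compare a wrong numeric answer against a dictionary of anticipated wrong answers, each derived from a known common error. The question, error values and feedback messages here are hypothetical:

```python
def mark_with_cse_feedback(student_answer, correct_answer, cses, tolerance=1e-6):
    """Return (is_correct, feedback). `cses` maps each anticipated wrong
    answer to feedback explaining the likely underlying mistake."""
    if abs(student_answer - correct_answer) <= tolerance:
        return True, "Correct."
    for wrong_value, feedback in cses.items():
        if abs(student_answer - wrong_value) <= tolerance:
            return False, feedback
    return False, "Incorrect. Check your working and try again."

# Hypothetical question: differentiate x**3, then evaluate at x = 2 (answer: 12).
cses = {
    8.0: "Did you evaluate x**3 itself rather than its derivative?",
    4.0: "Did you forget to multiply by the power? d/dx(x**3) is 3x**2, not x**2.",
}
print(mark_with_cse_feedback(4.0, 12.0, cses))   # targeted feedback
print(mark_with_cse_feedback(12.0, 12.0, cses))  # (True, 'Correct.')
```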

A further planned development is the introduction of item analysis into Dewis, which would give examiners and academics a better understanding of the difficulty and appropriateness of each question set, allowing them to pitch questions at the right level while improving the reliability of assessments.
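
Item analysis typically means classical test-theory statistics such as a difficulty index (the proportion of students answering an item correctly) and a discrimination index (how well the item separates stronger from weaker students). Below is a sketch of these standard measures; how Dewis will implement item analysis is, of course, for its developers to decide:

```python
from statistics import mean, pstdev

def difficulty_index(item_scores):
    """Proportion of students answering the item correctly (1.0 = easy)."""
    return mean(item_scores)

def discrimination_index(item_scores, total_scores):
    """Point-biserial correlation between item scores (0/1) and totals."""
    p = mean(item_scores)
    sd_total = pstdev(total_scores)
    if sd_total == 0 or p in (0, 1):
        return 0.0  # the item cannot discriminate
    mean_correct = mean(t for i, t in zip(item_scores, total_scores) if i == 1)
    return (mean_correct - mean(total_scores)) / sd_total * (p / (1 - p)) ** 0.5

# One entry per student: item score (1 = correct) and overall total (%).
item = [1, 1, 0, 1, 0, 0, 1, 1]
totals = [78, 85, 40, 90, 35, 50, 70, 88]
print(f"difficulty: {difficulty_index(item):.2f}")
print(f"discrimination: {discrimination_index(item, totals):.2f}")
```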

The Jisc AI Team looks forward to observing UWE Bristol’s progress, and hopes that Dewis will continue to serve as an inspiring example of how technology can be utilised to improve both formative and summative assessments.

If you want to stay up to date with the work of the Jisc AI Team, you can join our AIED JiscMail list and get involved in our AI community group.

Notes

  • [1] Luckin, R. (2017). Towards artificial intelligence-based assessment systems. Nature Human Behaviour, 1, 0028.

Find out more by visiting our Artificial Intelligence page to view publications and resources, join us for events and discover what AI has to offer through our range of interactive online demos.

For regular updates from the team sign up to our mailing list.

Get in touch with the team directly at AI@jisc.ac.uk

By Tom Moule

Senior AI Specialist at The National Centre for AI in Tertiary Education
