Generative AI and Social Inclusion

Before the release of ChatGPT late in 2022, predictions abounded about how the emergence of AI would impact social inclusion. Those with a positive outlook, for instance, forecast that AI would make lifelong learning more affordable and higher in quality, enabling more inclusive opportunities throughout people’s lives. Others, meanwhile, warned that the digital divide could be amplified in the age of AI.

As an active participant in these discussions, I counted myself as a cautious optimist. In 2021, I wrote Cracking Social Mobility (University of Buckingham Press), a book that explored the possible impacts AI could have on social mobility (an issue that has long been of great importance to me). Much of my optimism came from the tendency of technologies to drive down prices, making goods and services more accessible. That said, lessons from lockdown cautioned that without equitable access to devices and data, many learners stood to be excluded from the benefits of innovation. 

Three years later, AI – and generative AI in particular – is becoming increasingly mainstream, and we can now observe some of its initial and emerging impacts on social inclusion. To explore those impacts, Jisc will be producing a series of blogs on AI and social inclusion, which we hope will support the sector in delivering inclusive innovation. In this first blog, we’ll take a broad look at the ways AI is affecting social inclusion, both positively and negatively.

How is generative AI positively impacting social inclusion? 

As a City of Sanctuary, Hull welcomes many displaced families, and Hull College is committed to ensuring that incoming students, for whom English is often not a first language, are empowered to learn and thrive. The college’s innovative use of Microsoft Translator has been an integral part of its efforts to enhance inclusivity. These AI tools have supported thousands of ESOL learners in accessing learning – resulting in increased attendance, retention and satisfaction amongst students. The college has been recognised for its pioneering work, winning the Association of Colleges’ Beacon Award for the use of Digital Technology in Further Education.
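The blog doesn’t detail Hull College’s technical setup, but for readers curious about what such an integration involves, here is a minimal sketch of calling the Azure AI Translator REST API (the cloud service behind Microsoft’s translation tools) from Python. The key, region and example text are placeholders, not details from the college’s deployment.

```python
import requests

# Placeholder credentials - a real deployment would draw these from an
# institutional Azure subscription, not hard-coded values.
SUBSCRIPTION_KEY = "YOUR_AZURE_TRANSLATOR_KEY"
REGION = "uksouth"

ENDPOINT = "https://api.cognitive.microsofttranslator.com/translate"


def translate_to_english(text: str) -> str:
    """Translate a learner's text into English, auto-detecting the source language."""
    response = requests.post(
        ENDPOINT,
        params={"api-version": "3.0", "to": "en"},
        headers={
            "Ocp-Apim-Subscription-Key": SUBSCRIPTION_KEY,
            "Ocp-Apim-Subscription-Region": REGION,
            "Content-Type": "application/json",
        },
        json=[{"Text": text}],
        timeout=10,
    )
    response.raise_for_status()
    # The API returns one result per input item, each with a list of translations.
    return response.json()[0]["translations"][0]["text"]


if __name__ == "__main__":
    # e.g. a classroom notice originally written in Arabic (illustrative input)
    print(translate_to_english("مرحبا بكم في الكلية"))
```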

From Jisc’s discussions with colleagues and learners in the sector, and during interviews that led to the Student Perceptions of Generative AI report, we’ve also heard examples of students using AI in ways that overcome constraints commonly encountered by the less privileged. Financial and social means frequently open the door to services such as proofreading of essays and dissertations, or support with cover letters and personal statements. And although students from more advantaged backgrounds are more likely to be able to draw upon their social circles or pay for additional help, the wide availability of tools like ChatGPT means a broader range of students are starting to benefit from this kind of support.

Furthermore, AI is beginning to demonstrate its value in supporting students with special educational needs and specific accessibility requirements. Speech-to-text tools are a boon for deaf learners and are also helping students with a broader range of challenges, for instance, with taking accurate notes. Content can now be transformed into more accessible formats, personalised to meet individuals’ specific requirements. And there is even evidence of diagnostic tools being able to identify conditions such as dysgraphia in their early stages, meaning those affected may benefit from timely support without the need for stressful, prolonged investigations.
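As an illustration of how accessible such tooling has become, the sketch below transcribes a recorded lecture using OpenAI’s open-source Whisper model. The filename is hypothetical, and a production service would add timestamps, speaker labels and human checking; this is one way to do it, not a description of any specific institution’s setup.

```python
# pip install openai-whisper  (also requires ffmpeg to be installed)
import whisper

# Load a small, general-purpose model; larger models trade speed for accuracy.
model = whisper.load_model("base")

# Transcribe a recorded lecture into text that a deaf learner can read,
# search, or convert into other accessible formats.
result = model.transcribe("lecture_recording.mp3")
print(result["text"])
```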

How is social inclusion being threatened in the age of AI? 

At a societal level there are worrying indications that technological progress could leave some groups and individuals even further behind. According to a report published by the All-Party Parliamentary Group on Data Poverty, 2 million households in the UK are struggling to pay for reliable internet access, 6 million young people are without proper internet access, and 4% of the population can’t access the internet because they lack the necessary devices. If these inequities are not addressed, some of the most deprived people will miss out on more and more as technology advances.

Adding to these divisions, the issue of inequitable access to AI tools also needs to be considered. Our team at Jisc has estimated that purchasing licences for a modest range of AI tools (including for support with writing, image generation and presentations) could add up to around £1,000 a year – a sum that could easily exclude many learners from integrating the best and most appropriate AI tools into their practice, particularly because the free tiers often lack the functionality and scope of their paid-for counterparts.
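To make the arithmetic concrete, the snippet below totals a hypothetical basket of subscriptions. The tool categories and prices are illustrative assumptions, not Jisc’s actual costings.

```python
# Hypothetical monthly subscription prices (GBP) for a modest AI toolkit;
# the categories and figures are illustrative, not Jisc's actual estimate.
monthly_subscriptions = {
    "writing assistant": 20.00,
    "image generation": 15.00,
    "presentation builder": 12.50,
    "transcription and notes": 10.00,
    "research assistant": 25.00,
}

annual_cost = 12 * sum(monthly_subscriptions.values())
print(f"Estimated annual cost: £{annual_cost:,.2f}")  # £990.00 - roughly £1,000 a year
```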

It should also be emphasised that AI products are shaped around their users. Being excluded from accessing AI today means having little influence on the developments of tomorrow. Data is at the heart of this dynamic. Those who are using AI less are also contributing less to the training data used to fine-tune and tailor assets, such as Large Language Models. As a result, future iterations are less likely to reflect the needs and requirements of marginalised groups, creating a vicious cycle that threatens to hardcode digital exclusion. 

Building on this theme, algorithmic bias – a well-documented and longstanding concern around AI – is still proving to be a problematic aspect of generative AI. As part of a recent exploration of biases in AI image generation, I found a number of disconcerting trends. Where genders were not specified in the prompts, images of police officers were overwhelmingly male. Similarly, when I asked for a collage of couples (genders unspecified), the results betrayed a strong heteronormative bias: all the images were of a man and a woman. These patterns have likely arisen in part because the data used to train many of the available image generation models often captures historical disparities and inequities. While these biases are not always observed – I found that images of doctors, nurses and ‘parents caring for their children’ demonstrated an even gender split – manifestly biased representations of work, relationships and other areas of life are likely to frustrate progress towards a more inclusive society.
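The blog doesn’t specify which image generator was probed, but as a rough sketch of the method, the snippet below requests several images for each gender-unspecified prompt via OpenAI’s image API (DALL-E 2, which allows multiple images per request), leaving the tallying of apparent genders in the outputs to manual review.

```python
# pip install openai
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Gender-unspecified prompts used to probe for default representations.
prompts = ["a police officer", "a couple", "a doctor", "a nurse"]

for prompt in prompts:
    # Generate a small batch of images per prompt and print their URLs
    # so a reviewer can inspect them and tally apparent genders by hand.
    response = client.images.generate(
        model="dall-e-2", prompt=prompt, n=5, size="512x512"
    )
    for i, image in enumerate(response.data):
        print(f"{prompt} [{i}]: {image.url}")
```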

AI Skills – the next frontier 

Even if the divides in access to connectivity, hardware and software are narrowed, there will still be divisions in digital skills to contend with. Given that such disparities often track socio-economic lines, it is plausible that a divide in AI skills could emerge, forming a new dimension of digital exclusion. If wealthier individuals have access to more, and better, AI tools, it seems reasonable to assume that they will become better equipped to wield these resources effectively.

That said, it is also possible that a different dynamic will unfold. It is feasible that those with the greatest access to AI tools will be the most susceptible to misusing them (i.e. using AI to substitute for, rather than amplify, their own skills), or that the wealthiest students will be more heavily concentrated in the most traditional institutions, which may, in turn, be slower to adopt AI tools.

In practice, at Jisc, we’re seeing strong examples of AI innovation arising in institutions that serve students from across the socio-economic spectrum. Excellence in AI skills does not appear to be exclusive to any one type of institution. Given these positive signs, the team at Jisc is committed to continuing to learn how colleagues from across the sector, from a diverse range of contexts, are promoting inclusive innovation. 

Generative AI and the role of the educator: a social-inclusion perspective 

AI applications like TeacherMatic, which simplifies lesson planning and material creation, have already demonstrated positive outcomes in pilot programmes. By lightening the load, generative AI can potentially alleviate problems of retention, recruitment and wellbeing within the educational workforce: all challenges that are especially severe in areas of high deprivation.

That said, the risk of making institutions less inclusive cannot be ignored. While it is reasonable to expect that financial pressures will encourage institutions to identify AI-led efficiencies, there is a risk that valuable human-led services will be cut back or reduced in scope – with less effective AI-based alternatives left in their stead. The loss of such services is likely to hit vulnerable learners hardest; and it is the institutions that serve the least advantaged communities that are most exposed to these financial pressures.

Next steps

By continuing to conduct work on AI and social inclusion, we aim to help members identify and respond to both challenges and opportunities within this space. 

As a next step, we’ll be listening to members’ experiences of the impacts of AI on social inclusion, and will publish insights gained as part of the next blog in this series. We’ll share more information about the activities in this area in due course.


Find out more by visiting our Artificial Intelligence page to view publications and resources, join us for events and discover what AI has to offer through our range of interactive online demos.

For regular updates from the team, sign up to our mailing list.

Get in touch with the team directly at AI@jisc.ac.uk

By Tom Moule

Senior AI Specialist at The National Centre for AI in Tertiary Education
