Regulating the Future: AI and Governance

In the rapidly evolving landscape of artificial intelligence (AI), the need for effective regulation has become more crucial than ever. As AI technologies continue to advance at an unprecedented pace, the ethical and societal implications of their deployment demand careful consideration and oversight. AI is revolutionising industries and reshaping the way we live and work, and this has sparked a global conversation about the need for regulation, policy, and appropriate safeguards around the use of the technology.

If we look at the technological advances of the last decade, we see how regulation often falls behind innovation. Technology moves at such a rapid pace that policymakers are frequently left playing catch-up. Take social media as an example: regulators have struggled in their efforts to control the technology and its impacts on privacy, misinformation, bias, and our data. The overall goal is to strike a delicate balance between encouraging innovation and ensuring that AI is developed and used responsibly. Regulation will help establish boundaries for the use of this technology and will be paramount in ensuring AI safety, protecting fundamental human rights, and promoting equity and equality wherever AI systems are used.

A primary concern driving the call for appropriate regulation of AI is the ethics of its use. As AI systems develop, questions arise about their potential impact on privacy, bias, and accountability. AI systems are trained on the data we give them, and since much of that data can contain biases, these systems may replicate real-world biases in digital form. Without suitable guidelines, there is a risk of unintended consequences, such as the continuation of discriminatory practices or the violation of individual rights. These violations could occur on a small scale, such as automated decision-making systems in a hiring process inadvertently discriminating against certain groups of people, or on a larger scale, such as the unfair use of predictive algorithms by law enforcement agencies.

Some of the fundamental issues and rights that AI regulation should aim to protect include:

Accountability
Accountability in regulating AI refers to the principle that those who develop, deploy, or use AI systems should be held responsible for the consequences of their actions, and that the systems themselves can be held to account. Ensuring accountability is crucial for maintaining trust, particularly where AI is used in areas such as decision making.

Privacy
This involves protecting our data and ensuring confidentiality for sensitive information. AI applications often involve the collection and analysis of vast amounts of personal data. Protecting consumer data, especially in data-sensitive areas such as education, healthcare, or banking, will be important in ensuring the safe use of AI. Striking the right balance between harnessing the power of data for innovation and safeguarding individual privacy will require clear regulations that outline the boundaries of permissible and responsible use.

Transparency
Transparency in the regulation of AI refers to the principle of making the processes, decisions, and outcomes of AI systems understandable, explainable, and accountable to stakeholders. Users should be able to see how a service works, what it does, and what its strengths and weaknesses are. Giving people insight into what data is being used, and how, helps build trust and confidence in AI systems.

Fairness
This relates to the equitable treatment of individuals or groups by an AI system. Because so much bias creeps into AI, it is important to ensure that protected groups and communities, who may not have been considered in the data sets or when the systems were designed, do not have decisions made against them unfairly.

Explainability
Explainability is about making sure AI models can be understood and explained across departments and organisations: the technology should be translatable, so that we can explain what the systems do and what they output.

Regulations should ensure transparency in AI systems, allowing for scrutiny and accountability to mitigate potential biases in decision-making processes. It is also argued that the creators and organisations involved should take greater responsibility for the development and deployment of AI systems. In the current absence of clear guidelines, it becomes challenging to attribute responsibility when something goes wrong. A robust regulatory framework will be important for AI and its development, but at this stage of the journey, ensuring the right kind of regulation is in place to protect individuals and support responsible development will be far more achievable and beneficial.

The UK

Domestically, the current approach to regulation is minimal. The government is not planning to introduce primary legislation to regulate AI, so no dedicated laws exist to govern the development, implementation, and use of AI. In the absence of AI-specific legislation, existing laws, including the Human Rights Act, the Data Protection Act, and the UK GDPR, are applied to regulate its use.

AI is viewed as a huge opportunity within the UK. The ambition is to be a major player on the world stage, treating AI as a chance to boost the economy and to attract the talent and companies that drive technological advancement. The objective is to accelerate the adoption of AI across the UK to maximise the economic and social benefits the technology can deliver, while attracting investment and stimulating the creation of high-skilled AI jobs.

In 2023, the UK published its AI white paper, which aims to address some of the challenges around regulation. The paper sets out the government's 'pro-innovation' approach, emphasising regulatory strategies designed to create favourable economic conditions and attract AI companies to the UK.

The UK’s approach acknowledges that new laws are not always the most effective way to support responsible innovation, and the framework ensures that regulatory measures are proportionate to context and outcomes by focusing on the uses of AI rather than the technology itself. This is understandable, as the effects of AI vary depending on the use case and the industries in which the systems are applied: AI in the automotive industry has different implications to AI systems in finance or healthcare, for example.

The UK framework is underpinned by five principles to guide and inform the responsible development and use of AI in all sectors of the economy:

  • Safety, security, and robustness
  • Appropriate transparency and explainability
  • Fairness
  • Accountability and governance
  • Contestability and redress

Regulators will be required to consider and abide by these principles when developing context-specific rules and guidance for their sectors. The concern, however, is that this approach, and the absence of primary, cross-sector AI legislation, creates uncertainty and inconsistency. Conversely, the government argues that by rushing to legislate too early it risks placing undue burdens on businesses and producing regulation that lags behind fast-moving innovation. The framework set out in the white paper is deliberately designed to be flexible and adaptable.

Recently the UK government abandoned plans to create an industry led agreement on a new AI copyright code of practice. This was a voluntary framework intended to address the copyright issues posed by generative AI. The aim was to strike a balance between AI developers’ desire to access data to train their AI models and content creators’ rights to control and commercialise access to their copyrighted works. Such an approach to governing AI will rely on collaboration between government, regulators, and business.

Similar to the white paper framework, this hesitancy to set clear guidelines on issues like copyright and generative AI suggests that an alternative solution, one that does not involve formal law, could emerge to address the challenges AI brings: solutions built on collaboration and on greater transparency over the data developers use to train their AI models. This emphasises the pro-innovation stance the UK government wants to take; AI development is high in the pecking order, and they are not yet prepared to draw red lines or pass concrete legislation at this moment in time.

Notably, this approach could position the UK as an innovation hub for AI technologies. Companies may see it as an attractive location for research, development, and experimentation because of fewer regulatory hurdles and faster approval processes, attracting AI companies to test and deploy their technologies more rapidly. With fewer regulatory barriers, organisations could also be more inclined to adopt AI solutions across various sectors, and this early adoption may improve the UK's efficiency, productivity, and competitiveness on a global scale. The government's prioritisation of innovation in its race to become a global leader in AI has raised concerns that commercial interests are driving its response to AI regulation. The hope, however, is that individual rights will be balanced with the commercial interests that support economic growth.

The EU

The European Union (EU) has been proactive in establishing a more unified approach to AI regulation. The European Commission proposed the Artificial Intelligence Act, legislation which aims to create a harmonised regulatory framework across EU member states. The EU AI Act classifies AI systems according to risk, and the proposal outlines rules for high-risk AI applications, stressing transparency, accountability, and human oversight. Examples of high-risk areas include vehicles, law enforcement, and education. Additionally, the Act prohibits certain practices considered a clear threat to the safety, livelihoods, and rights of individuals, such as social scoring for general purposes or emotion recognition systems in the workplace and education.

The AI Act is primarily a rules-based regulation that looks at the nature of AI use; it introduces new obligations on providers and users based on the level of risk involved. The legislation also carries heavy penalties: non-compliance with the AI Act may lead to significant fines of up to €35 million or 7% of an organisation’s annual global turnover.

The Act also addresses transparency and explainability, requiring that high-risk AI systems be transparent, provide users with information on a system’s capabilities and limitations, and inform users when they are interacting with an AI system.

When comparing the approaches of the EU and the UK, they appear contrasting. The UK adopts a vertical approach to regulation, relying on the expertise of existing regulators and their extensive sector knowledge to tailor the application of the principles to the specific context in which an AI system is used. In contrast, the EU adopts a horizontal approach, with legislation applying across all AI systems situated or used within the EU. Although different, there are points of overlap: both acknowledge risk, and although the UK is unlikely to legislate initially, it is likely to do so eventually. It could be said that each approach misses out on the benefits of the other, so the best approach may lie somewhere in the middle.

The United States

In the United States, AI regulation is characterised by a more distributed, market-driven approach. The emphasis is on fostering innovation and competition, with a reliance on industry self-regulation. However, this approach has sparked concerns regarding issues such as data privacy, bias, and accountability. Given that the US took a self-regulation approach to social media technology, some of the problems with this style of regulation are already apparent:

  • Inconsistency: Self-regulation can lead to inconsistencies in policies across different platforms. Each platform has its own rules and standards, creating a fragmented approach. This inconsistency can result in confusion for users, who encounter different standards for content moderation, privacy protection, and other critical issues across platforms.
  • Limited accountability: Without external oversight, social media companies have had less accountability for their actions. This can lead to a lack of transparency and responsiveness to user concerns, with a risk of platforms prioritising profit over ethical considerations.
  • Insufficient data protection: Self-regulation may not adequately address data protection concerns. Social media companies often collect vast amounts of user data without clear guidelines on how it should be handled. This can lead to privacy breaches, unauthorised data sharing, and the potential exploitation of user information for targeted advertising or other purposes.

In relation to AI, there are currently no horizontal US federal laws, but rather several state AI laws. AI governance is a priority in the US, and the government is shifting away from its historically free-market approach to technology regulation. Similarly to the UK, US AI policy proposals attempt to balance global AI leadership with protecting individuals from harm.

The US and UK recently signed a Memorandum of Understanding to collaborate on developing tests for advanced AI models. This partnership aims to align scientific approaches, improve evaluation and testing, and promote AI safety globally. The UK and US AI safety institutes will collaborate on research, safety evaluations, and guidance for AI safety, and will develop shared capabilities through information sharing, cooperation, and personnel exchanges. This collective effort underscores the importance of international collaboration in addressing AI risks. By sharing expertise, conducting robust evaluations, and promoting responsible AI development, both nations are attempting to lay the groundwork for safe AI usage.

As shown, different global standards can exist in the regulation of AI. China, for example, intends to issue its own guidelines, seeking to ensure the technology does not challenge its well-established censorship regime. Cultural nuances affect how countries deploy and govern the technology, as do the different agendas they want to meet, which creates an argument for broad international standards to help establish cross-border principles.

International collaboration could be essential in regulating AI effectively; there are common themes across regions, including ethical considerations, transparency, accountability, and the fairness of systems. Given the global nature of AI development and deployment, consistent regulations across borders could help avoid fragmented approaches that may hinder innovation, and help create a set of ethical standards with no geographical boundaries.

A counter-intuitive approach to regulation

There is an argument for a counterintuitive approach to AI regulation: shifting the focus from legal frameworks to encouraging developers to prioritise building safe and ethical AI systems voluntarily.

The prevailing approach to AI safety at most of the top AI companies focuses on attempting to mitigate unacceptable behaviour after an AI system has been built. However, there is evidence suggesting this approach does not work, in part because of our limited understanding of the internal processes of current AI systems. We cannot ensure that behaviour conforms to any desired checks, except in a trivial sense, because we do not understand how the behaviour is generated in the first place.

One approach to regulation could be to ensure AI safety is built in by design. It should be possible for developers to say with confidence that their systems will not exhibit harmful behaviours, and to back up those claims with formal arguments. It could also be beneficial to consider legislation that draws red lines around the types of outputs AI systems should not be capable of generating.

This would move the onus onto AI developers and away from the law, which in turn could create more accountability. In high-risk technology industries such as nuclear and aerospace, safeguards, prohibitions, and red lines are already used in regulation. The key point is that the onus falls on developers and industry stakeholders to proactively address safety and ethical concerns in development and deployment, not on regulators. Creating these responsibilities for organisations and developers in AI could lead to high-confidence assertions based on assumptions that can be checked and developed. Such an approach could allow for greater flexibility and agility when responding to the rapid advancements and ethical challenges AI brings: developers can adapt their practices and incorporate ethical considerations into AI development processes more efficiently than traditional regulatory frameworks, which take time to pass.

The UK’s Educational Landscape

Under the European Union's recently developed AI Act, the use of AI systems within education and vocational training is classed as high risk.

The UK education sector is increasingly incorporating AI to enhance teaching and learning experiences. However, this integration requires careful examination of the ethical implications. Rapid adoption, driven by economic factors, is seen as a problem because it may worsen existing educational inequalities. Balancing the benefits of AI-driven education with the need for data protection and privacy is a key challenge. The UK government recognises these challenges and is exploring ways to regulate AI in education to safeguard students and educators.

Within education, the legal issues surrounding AI relate to responsible use and student consent. The collection, storage, and analysis of student data by AI systems raises concerns about data privacy and security. Other significant areas include bias in assessment and decision making: AI algorithms could unintentionally amplify biases present in training data, leading to unfair assessments and recommendations. Generative AI also raises questions around intellectual property and ownership. In summary, the safe use of AI in education will involve many considerations, and we will need to ensure that AI systems align with educational values.

Policy needs to be considered within education, creating ethical guidelines for the use of AI in educational settings. Transparency and openness in communicating how AI is used should be encouraged among all stakeholders, including staff, students, parents, and educators. This could create a safe and collaborative environment in which everyone can understand, trust, and benefit from the technology.

It is suggested that the sector should establish clear guidelines on how student data is collected, stored, and used, ensure compliance with existing data protection laws (e.g. GDPR), and develop specific regulations tailored to AI in education. Although the widespread use of AI in certain educational areas may currently lack proper regulation, this is due to the absence of clear guidelines in the law itself. As regulation evolves and case law develops, the advice and implications for the sector will become clearer.

Summary

A single AI regulator, with one simple solution that covers everything, is unlikely to happen because the problem is not straightforward. Instead, a more effective approach is likely to be multidisciplinary in nature, combining robust guiding frameworks with tailored expertise and relying on existing regulators with domain-specific knowledge to adapt policies to their appropriate contexts.

Having a common set of approaches across sectors would facilitate connectivity, ensuring there are no major gaps in how we govern AI technology. But as shown, the landscape of AI regulation worldwide is diverse, reflecting the complexities of trying to balance technological advancement, ethics, and regulation.

Although the current environment may seem uncertain, there will likely be more concrete proposals for how AI is governed in the future. Countries are looking at their specific contexts, but 2023 also saw nations working together to regulate the use of AI, with global nations including the UK and the EU reaching a world-first agreement, the Bletchley Declaration, at the AI Safety Summit 2023. We will likely see similar collaborations and the development of international standards continue into 2024 and beyond. The future of AI governance should rest on a shared understanding of both the opportunities and risks posed by AI and the need for governments to work together to meet the most significant challenges.

Regulations and concrete policies will arrive in due course, but it’s important that the speed of regulation and governance keeps up with the pace of AI technology. As AI continues to evolve and shape our sectors and societies, finding the right balance between innovation and regulation remains a key challenge. It is essential for stakeholders to collaborate, share best practices, and work towards common principles to ensure that AI technologies are developed and deployed responsibly, for the benefit of all. 


Find out more by visiting our Artificial Intelligence page to view publications and resources, join us for events and discover what AI has to offer through our range of interactive online demos.

For regular updates from the team sign up to our mailing list.

Get in touch with the team directly at AI@jisc.ac.uk
