On 18 July 2022, the UK Government set out a proposal for a new set of UK regulations on the use of artificial intelligence (AI).
Regulators will be required to implement new “core principles” intended to encourage innovation and foster public trust while avoiding excessive regulatory hurdles for businesses.
The new AI Policy Paper summarises the UK Government’s approach to regulating the use of this revolutionary technology in the UK in light of its potential hazards and opportunities, so that developers know how they can deploy AI systems and consumers can be confident that those systems adhere to security measures.
The proposed regulations will also enable businesses to share relevant data about the testing of their AI technology and will introduce parameters to avoid unfair bias.
The multiple laws, regulators and bodies currently in place to address AI-related risks and rules lead to an inconsistent approach, with discrepancies, overlaps and loopholes in the existing AI framework. This often discourages developers and society at large from trusting the use of AI. If the regulatory framework around AI in the UK does not keep pace with the ever-evolving technology, innovation could be curbed, making it increasingly challenging for regulators to protect users at large. By addressing the regulations around the use of AI and implementing a proportionate, risk- and outcomes-based approach, the UK Government aims to give organisations more clarity and confidence in the use of AI technologies.
The UK Government’s approach outlined in the AI Policy Paper consequently intends to promote a unified, balanced and flexible regulatory framework that allows AI to continue to be embraced in the UK while increasing its efficiency and potential for growth.
With a context-based approach in mind, the UK Government proposed six core principles, focused on governing the use of AI rather than implementing a new framework of individual rights.
The six principles are overarching and are expected to apply to any organisation within the AI lifecycle. The UK Government’s initial proposal is that businesses must:
- Ensure that AI is used safely;
- Ensure that AI is technically secure and functions as designed and uses data that is high-quality, representative and contextualised;
- Make sure that AI is appropriately transparent and explainable. In some high-risk circumstances, regulators may decide to prohibit decisions that cannot be explained. The UK Government suggests that transparency might, for example, require businesses to explain the nature and purpose of the AI, what data, logic and process it uses and how it is accountable for the AI’s decisions;
- Consider fairness;
- Identify a legal person to be responsible for AI;
- Clarify routes to redress or contestability.
Light-touch regulatory approach
The AI Policy Paper describes its proposals as “light-touch” regulation and states that any new regulation should take account of specific contexts and be coherent across different sectors, as well as being as simple as possible, without the introduction of AI-specific legislation. It describes its approach as less centralised than that taken by the EU.
Rather than making new law, the UK Government anticipates that the aims of the AI Policy Paper will be pursued through regulator-led guidance, risk assessments, other measures and access to sandboxes. Regulators such as the Financial Conduct Authority (FCA), the Competition and Markets Authority (CMA), the Information Commissioner’s Office (ICO), Ofcom and the Medicines and Healthcare products Regulatory Agency (MHRA) will accordingly be required to interpret and apply the above core principles.
Less intrusive alternatives are also being encouraged, including self-regulation and the creation of so-called regulatory sandboxes where developers can ascertain the safety and reliability of their AI technologies before officially launching them to the wider marketplace.
In other words, the UK Government is considering implementing the principles “on a non-statutory basis”, supplemented by clear guidance. This approach would be kept under review; however, the UK Government does not rule out the need for legislation as part of the delivery and implementation of the principles. The UK Government expects to work closely with the Digital Regulation Cooperation Forum and other regulators and mechanisms to ensure coherence and support for innovation.
The UK regulators will have to introduce rules that define who is legally responsible for decisions made by AI, and ensure that individuals and groups are given ways to contest AI decisions and obtain redress where a wrong decision has affected them.
The UK Government says that it will publish a White Paper for consultation in late 2022 to set out a proposed framework, and its implementation and monitoring.
The call for views and evidence closes on 26 September 2022, and the responses will be incorporated into the White Paper.
Please contact Jose Saras and Xavier Prida if you have any questions about the data protection implications around AI technologies.
The material contained in this article is only a general review of the topics covered and does not constitute any legal advice. No legal or business decision should be based on its content.