
The new draft of the Artificial Intelligence (AI) Act: what could this mean for the UK?

On the morning of 14th June 2023, the European Parliament adopted its negotiating position on the AI Act, the world’s first comprehensive AI law. The Act aims to provide a legal framework for ensuring AI is used in a safe and transparent way, protecting individuals’ rights under the EU Charter of Fundamental Rights, which includes the right to privacy, human dignity, a fair trial, and gender equality.

Defining a risk-based approach

The Act aims to achieve this through a risk-based approach, assigning AI systems to one of four risk categories.

Unacceptable risk

The Act outlines a list of prohibited AI systems and practices that pose a significant risk to the EU’s fundamental rights. These include:

  • Systems and practices that have a significant potential to manipulate individuals through subliminal techniques, or that exploit the vulnerabilities of specific vulnerable groups, where physical or psychological harm is likely to occur.
  • AI-based social scoring for general purposes carried out by public authorities, due to the risk of discriminatory outcomes.
  • The use of real-time biometric identification in public spaces for law enforcement purposes — although there are limited exceptions — due to the risk of evoking a feeling of constant surveillance and the impact this may have on freedom of assembly.

High risk

The Act identifies these systems as posing a high risk to public health and safety or to the fundamental rights of an individual. AI systems within these categories are not prohibited, but they must comply with mandatory requirements covering:

  • the quality of the data used to train the AI system;
  • technical documentation and record keeping;
  • transparency and access to information for end users; and
  • a ‘conformity assessment’ — an assessment carried out by the organisation to demonstrate that it complies with the new requirements for high-risk AI systems.

Low and minimal risk

Low and minimal risk systems are those carrying specific risks of manipulating an individual’s behaviour. The focus for these AI systems and practices is on transparency. Therefore, where systems interact with humans, or generate or manipulate content, this should be disclosed to the user so they can make an informed choice about whether to continue using the system.

So what? It’s EU legislation and, you know, Brexit?

While this is EU draft legislation and therefore will not be directly made law in the UK, there is one piece of the legislation which could have a significant impact on UK businesses’ ability to maximise opportunities through developing AI technologies. This is where the legislation specifically states that the draft law applies to ‘…providers and users of AI systems that are established in a third country, to the extent the output produced by those systems is used in the Union’.

What this means is that if a UK business intends to sell, promote, or use AI software within the EU, that software must meet the requirements set out in the legislation. With Europe accounting for around 18% of global GDP, it’s a significant market to lose access to.

While there is a direct link to businesses that build software using AI to benefit other users, there may be wider implications for businesses that want to use AI to create internal efficiencies — for example, using generative AI to create marketing content. You could argue that the primary purpose of marketing is to influence an individual’s behaviour. Will content created by generative AI need to say it was created this way? Will this reduce the impact of the content, making the use of generative AI less attractive?

Along the same lines, will emails or reports written with the aid of AI need to include a similar disclosure, further reducing the usefulness of generative AI as a business tool? I appreciate that emails and reports are not always designed to persuade an individual, but businesses may feel it’s best to have such a disclosure on all emails and remove any risk of falling foul of the legislation.

What do I need to do?

AI has been used in businesses for years, with the most recent excitement prompted by the introduction of generative AI — AI that can create original content, not simply identify trends or make predictions based on specific data sets. While I fully support the need for regulation in the AI space, the new legislation does pose a significant risk of stifling the efficiency gains that businesses working in, or with, the EU can realise with AI technology.

Businesses in the UK need to be aware of how their staff are using this technology and understand the wider benefits it presents. They need to begin writing policies that strike a balance between maximising the efficiency gains available and protecting themselves from falling foul of the incoming legislation. By preparing now, you’ll be able to maintain business as usual when the regulations come into effect, while your competitors scramble to catch up.

Have a question about this? Let’s talk!

Author: Nathan Davis, CT: Evolve

Chiene + Tait LLP | Established 1885 - © 2025 | Registered in Scotland No. SO303744