Artificial Intelligence (AI) has attracted growing attention for its potential to transform various sectors. Recognising this potential and the need to ensure that “AI systems are overseen by people, are safe, transparent, traceable, non-discriminatory, and environmentally friendly”, the European Commission (EC) took a significant step by proposing an Artificial Intelligence Act in 2021.

The Act aims to set out a general framework for AI across the EU. This centralised approach differs from the UK’s sector-based approach as set out in its recent White Paper. It will therefore be interesting to see the potential impact of these regulations on the development of AI and on each country’s ambitions in this sector.

Key Highlights of the EU Artificial Intelligence Act

Scope

The Act applies to providers of AI systems, whether established within the EU or in a third country, who place AI systems on the EU market or put them into service in the EU, and to users of AI systems based in the EU. It also applies to providers and users of AI systems outside the EU where the output produced by those systems is used in the EU.

However, it does not apply to AI systems developed or used exclusively for military purposes. Nor does it apply to public authorities in a third country, or to international organisations, where those authorities or organisations use AI systems in the framework of international agreements for law enforcement and judicial cooperation.
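To make the scope rules easier to follow, here is a minimal, purely illustrative sketch of the applicability logic in Python. It is a simplification, not a legal test, and every name and flag in it is an assumption introduced for illustration:

```python
def act_applies(actor: str,
                located_in_eu: bool,
                placed_on_eu_market: bool,
                output_used_in_eu: bool,
                exclusively_military: bool,
                exempt_intl_cooperation: bool) -> bool:
    """Rough, hypothetical approximation of the draft Act's scope provisions."""
    # Exemptions first: military-only systems, and third-country public
    # authorities or international organisations acting under international
    # agreements for law enforcement and judicial cooperation.
    if exclusively_military or exempt_intl_cooperation:
        return False
    if actor == "provider":
        # Providers are caught when they place systems on the EU market or put
        # them into service in the EU, wherever they are established, and also
        # when the output their systems produce is used in the EU.
        return placed_on_eu_market or output_used_in_eu
    if actor == "user":
        # Users are caught when based in the EU, or when the output is used there.
        return located_in_eu or output_used_in_eu
    return False
```

Real scope questions will turn on the final text and legal advice; the sketch only mirrors the broad structure described above.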

Definition

“Artificial intelligence system” is defined under the Act as “software that is developed with [specific] techniques and approaches [listed in Annex 1] and can, for a given set of human-defined objectives, generate outputs such as content, predictions, recommendations, or decisions influencing the environments they interact with”.

Annex 1 sets out a list of techniques and approaches currently used to develop AI, including machine learning, logic- and knowledge-based approaches, and statistical approaches. “AI system” is defined broadly to cover both standalone and component AI technologies, as well as future AI technological developments.

Risk-Based Approach

The Act adopts a risk-based approach to AI regulation. In particular, it categorises AI systems into four levels of risk (unacceptable, high, limited, and minimal) based on their potential impact on fundamental rights and users’ safety. Depending on its risk level, an AI system is subject to corresponding obligations, as illustrated in the sketch after this list.

  • Unacceptable risk: These AI practices are prohibited as they are considered harmful and a clear threat to people’s safety, livelihoods and rights. Examples include AI systems that exploit the vulnerabilities of certain groups of natural persons, social scoring by public authorities, and real-time remote biometric identification systems in publicly accessible spaces, except in limited circumstances.
  • High risk: High-risk AI systems are those that create an adverse impact on people’s safety or their fundamental rights. These fall into two main groups: (i) systems used as a safety component of a product or falling under EU health and safety harmonisation legislation (e.g. toys, aviation, cars, medical devices, lifts); and (ii) systems deployed in eight specific areas identified in Annex III (biometric identification, critical infrastructure, education, employment, public services, law enforcement, border control and administration of justice), which the EC may update as necessary through delegated acts. High-risk systems are subject to various requirements, including conformity assessment, risk management, testing, technical robustness, training data and data governance, transparency, human oversight, and cybersecurity.
  • Limited risk: These include systems that interact with humans (e.g., chatbots), emotion recognition systems, biometric categorisation systems, and systems that generate or manipulate image, audio or video content (e.g., deepfakes). Providers of these AI systems must meet lighter transparency obligations than those imposed on high-risk AI systems.
  • Low or minimal risk: AI systems not falling under the other risk categories are considered low risk and are generally not subject to any legal obligations under the Act. However, providers of low-risk AI systems may choose to comply voluntarily with codes of conduct.
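The tiered structure can be summarised in a short, purely illustrative sketch. The mapping below is an assumption-laden simplification of the Act’s four tiers (the example systems and shorthand consequences are ours, not the Act’s):

```python
from enum import Enum

class RiskLevel(Enum):
    """The draft Act's four risk tiers, paired with a shorthand of the consequences."""
    UNACCEPTABLE = "prohibited outright"
    HIGH = "conformity assessment, risk management, human oversight, etc."
    LIMITED = "transparency obligations (e.g. disclose the use of a chatbot)"
    MINIMAL = "no mandatory obligations; voluntary codes of conduct"

# Hypothetical example systems mapped to tiers for illustration only.
EXAMPLES = {
    "social scoring by public authorities": RiskLevel.UNACCEPTABLE,
    "CV-screening tool used in recruitment": RiskLevel.HIGH,
    "customer-service chatbot": RiskLevel.LIMITED,
    "spam filter": RiskLevel.MINIMAL,
}

for system, tier in EXAMPLES.items():
    print(f"{system}: {tier.name} -> {tier.value}")
```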

Transparency and Accountability

Transparency is a key principle of the Act, and the obligations vary with the level of risk. Providers of high-risk AI systems must provide detailed documentation on the system’s capabilities, limitations, and the data used. They must also establish mechanisms for human oversight and are accountable for the AI system’s behaviour and outcomes.

Data

The Act encourages the use of high-quality, unbiased, and diverse data to avoid discriminatory outcomes. In addition, personal data must be handled securely and in line with existing data protection regulations such as the General Data Protection Regulation (GDPR).

Formality

Providers of high-risk AI systems may need to undergo third-party assessment or register their systems in an EU-wide database managed by the EC before those systems can be placed on the market or put into service within the EU. They must also comply with existing third-party conformity frameworks where applicable (e.g., for medical devices). For AI systems not currently governed by EU legislation, self-assessment and CE marking can be used to show compliance with the new requirements.

Sanctions

Non-compliance with the Act may attract administrative fines of up to €30 million or 6% of total worldwide annual turnover, whichever is higher, depending on the severity of the infringement.
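As a purely illustrative sketch of that arithmetic (the function name and inputs are assumptions, and the figures reflect the maximum cap for the most serious infringements), the ceiling works out as follows:

```python
def max_fine_eur(worldwide_annual_turnover_eur: float) -> float:
    """Upper bound of the administrative fine for the most serious infringements
    under the draft Act: €30 million or 6% of total worldwide annual turnover,
    whichever is higher."""
    FLAT_CAP_EUR = 30_000_000   # €30 million floor
    TURNOVER_RATE = 0.06        # 6% of total worldwide annual turnover
    return max(FLAT_CAP_EUR, TURNOVER_RATE * worldwide_annual_turnover_eur)

# Example: a company with €1 billion in worldwide annual turnover faces a
# ceiling of €60 million, since 6% of turnover exceeds the €30 million floor.
print(max_fine_eur(1_000_000_000))  # 60000000.0
```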

Conclusion

The EU’s draft Artificial Intelligence Act represents a significant milestone in the regulation of AI technologies. The EU is taking a centralised approach, regulating AI under a separate, general framework, which may help ensure clarity and transparency. However, debates and challenges remain: striking the right balance between regulation and innovation will require careful and continuous evaluation and adaptation as AI technologies evolve.

Globally, countries are at various stages of regulating AI, and each may take a different view on the best approach, but the draft Act is important in drawing an initial line in the sand for AI regulation. In addition, although the Act may not have direct effect in the UK following Brexit, it may still affect UK businesses, including those operating in the EU or providing services to EU users. It will also be interesting to see whether the UK goes further or takes up any suggestions from the draft Act.

Given the fast pace of change and development in this area, it is crucial that you take comprehensive legal advice on the requirements applicable to you and your business, and that you build those requirements into your AI systems early on to ensure compliance and avoid breaches.