Artificial intelligence (AI) is software that can perform tasks associated with human intelligence, such as perceiving, learning, and problem-solving. Examples of AI include Alexa, self-driving cars, chatbots, and targeted marketing tools. Given the broad extent of its applications and possibilities, AI has fundamentally changed, and will continue to change, how we live and do business in the digital age. It is an area of rapid technological development that poses novel challenges to a legal framework struggling to catch up with it.
Developing Landscape
The potential uses for AI and machine learning are seemingly endless. Yet, there is an ever-growing concern that regulation is needed as soon as possible to keep up with this fast-changing environment. Currently, there is no specific regulation in the UK aimed directly at regulating AI. However, as with all fast-evolving areas, this is set to change soon.
The UK is expected to release a white paper setting out its position on AI regulation in “early 2022”. It has already set up various task forces, institutes, and frameworks intended to position the UK as a world leader in the development of AI.
We have recently seen the EU publish its own proposal for AI regulation, in which it has chosen to adopt a strict regulatory approach. This may differ from the UK’s plans: the UK has historically said it would like to take a slightly more relaxed approach, with commerciality in mind, to entice users into its jurisdiction. We shall, for now, have to wait and see what is proposed and how AI is regulated in the UK.
Key Legal Issues with AI
Whilst there is no specific regulation currently in place, there are still key legal principles to bear in mind when dealing with AI and machine learning. The Equality Act 2010, the Human Rights Act 1998, the Consumer Rights Act 2015, and data protection legislation must still be adhered to.
We often find that with new and innovative technologies, basic legal principles have to be revisited to ensure, for example, that a properly constituted contract is in place. A key issue to consider is how to deal with liabilities caused by the negligence of the AI system.
Several contractual areas need clarification over responsibility. They include:
- Who is responsible for how the AI system is used and applied?
- Who is responsible for the output of the AI?
- Where does liability fall if something goes wrong?
The answers are likely to differ depending on what the AI system is being set up to do and how it is applied. There should also be provisions covering cybersecurity breaches and procedures for disruption or failure of the AI system.
Data Law
AI requires a large amount of data (including personal data) to build and enhance its functions. As a result, data protection and security are central issues in AI. Businesses engaging in AI must comply with the relevant legal and regulatory requirements on data protection, security, and privacy, including the EU General Data Protection Regulation (GDPR) and the Data Protection Act 2018. It is important that the business carefully considers the flow of data and what data it expects to own and use in order to operate the software.
In cases where the company and the AI systems operate in different jurisdictions, it is important to consider whether and how the rules differ between countries. For example, certain countries and US states have biometric data privacy laws or laws that restrict the use of AI and automated decision-making involving personal information.
In Europe, it is essential to establish whether the provider should be characterised as a data processor or a data controller for the purposes of the GDPR. Depending on that characterisation, specific measures may need to be adopted to safeguard data subject rights, improve security and privacy, and ensure compliance with the relevant jurisdictional requirements.
Intellectual Property Law
The development of AI, particularly machine and cognitive learning, also raises intellectual property (IP) law concerns.
Terms and conditions should include terms on ownership, assignment, licensing, liabilities, and use of all intellectual property required to operate the AI. The content, technologies, and material required for the AI to function properly often form a complex web. Machine learning software, which uses content to learn new processes and develop the software, raises significant issues around the ownership of IP.
AI systems may also generate new inventions capable of patent protection. Thus, contracting parties should again expressly include provisions regarding ownership, assignment, and licensing of AI-generated inventions and patent rights in their agreement.
An AI system is not considered a natural person and thus cannot be an “inventor” under the Patents Act 1977. In September 2020, the UK High Court dismissed a patent application in which the inventor was listed as the machine itself, holding that an “inventor” had to be a natural person. The European Patent Office and the US Patent and Trademark Office have reached similar decisions.
Other Liability
AI products are susceptible to negligence claims, given that a duty of care is likely to extend to most scenarios in which AI is used. Other uses of AI, such as self-driving cars, other autonomous vehicles, and robotics, may also give rise to disputes about product liability and even claims in nuisance. For this reason, those developing AI products should take specific legal advice on their offering and work closely with their insurers to ensure they are properly covered.
Competition Law
Where the development of AI involves pricing algorithms or similar tools that respond to changes in the market, companies should also monitor and manage antitrust and competition law risks. This is particularly important in relation to anti-competitive arrangements such as price-fixing agreements, collusion, and cartels, especially agreements reached between AI systems themselves. This is a highly complex and specialist area of the law.
Conclusion
AI is going to be implemented widely across society, and developers and innovative businesses are already rushing to capitalise on this space. It is therefore paramount for business leaders to keep on top of the fast-moving regulatory landscape and look out for the new rules likely to come into force this year.
It is also fundamental for businesses and developers to give thought to the existing laws and regulations that already apply to the use of AI. This will help ensure compliance from the initial concept through to development and delivery.
If you have any questions on the regulatory compliance of current or future AI applications, please reach out to the Tech Team here at A City Law Firm for specific advice.