
Legal Liability Options for Artificial Intelligence

Background and Context

This article provides a whistle-stop tour of the legal liability issues for Artificial Intelligence (AI). Briefly, AI can be defined as “technologies with the ability to perform tasks that would otherwise require human intelligence” (the definition used in the Government’s Industrial Strategy White Paper). AI is already here in many forms, such as Siri on our iPhones, OK Google, or Amazon’s Alexa in our homes. These assistants use the information they hold about us to decide what we will find useful, for example choosing when to provide traffic or weather updates because they have learned the time of day we commute. AI is also used to make important decisions about us, and this is only set to increase with time. For instance, AI can already decide whether an individual receives a mortgage or progresses to the next round of a job interview, and it makes decisions on the road in autonomous vehicles. It is therefore important to consider the consequences when AI makes a decision that causes harm.

The Legal Liability Problem

The question to consider is: who bears legal responsibility when an AI system or device makes a decision that results in harm? There is currently no regulatory framework that answers this question across the broad range of AI. There are multiple candidates, ranging from the insurer to the product manufacturer, the user, or even the AI itself.
It is also possible that different types of legal liability may be appropriate for different forms of AI. For example, a different liability model may be appropriate for a computer system that makes decisions about job applicants than for an autonomous driving system in a car.

Legal Liability Options for AI

Data Protection Regulation

Data protection law already offers individuals some protection against automated decision-making that uses their personal data. UK law provides that individuals may not be subject to a significant decision based solely on automated processing unless it is required by law. In addition, individuals must be told when a significant decision about them has been made solely on the basis of automated processing.

Contractual

Some AI systems may give rise to contractual liability. For instance, if your Roomba (a home vacuuming robot) malfunctions and damages your carpet, you may be able to claim some form of compensation (subject to the limitations in the usual terms and conditions).

Negligence

It could be argued that the general principles of negligence can apply to the widespread use of AI. Liability in negligence arises where there is a duty of care, and it seems logical that a person who has suffered loss because of a decision made by AI may be owed such a duty. However, it may be unclear who owes it. The AI is not responsible for its own actions because it is not a legal person; liability could therefore rest with the owner, the manufacturer, the user or the service provider. Whilst there is potential to recover losses through the negligence route when AI malfunctions, the exact mechanism is not yet clear.

Legal personality (allowing for vicarious liability and criminal liability)

There is some debate as to what extent AI could be given legal personality, so that it could be held accountable for its actions in a similar way to an individual or a company. It would also mean that the individuals in charge of the AI could be legally responsible for its actions through vicarious liability (in the same way that an employer is responsible for the actions of its employees in the course of their employment).
Some form of criminal liability may also be attributable to AI should it take the form of a legal person, similar to how a company can be criminally liable. For example, a company can be liable for corporate manslaughter under the Corporate Manslaughter and Corporate Homicide Act 2007.

Insurance

Insurance is the chosen answer for autonomous vehicles under the Automated and Electric Vehicles Act 2018. Insurance works well for products we would already expect to insure (cars, for instance, before they became autonomous). However, it may not be an appropriate answer for products we would not normally insure.

Animal Liability

Under the Animals Act 1971, where an animal strays onto another person’s property and causes damage, the animal’s owner is liable for that damage. This is strict liability, so there is no need to prove negligence or intent (on the human’s or the animal’s behalf!). It may be appropriate for some forms of physical AI (e.g. robots) to have a similar legal framework put in place.

Conclusions

As the technology rapidly evolves, so too does the law that sits alongside it. The debate is still ongoing as to whether AI should take some form of legal personality, and any conclusion on this may change how the world implements both future and currently integrated AI systems. This area of law will develop in the coming years, and different forms of AI are likely to attract different legal liability models.


These notes have been prepared for the purpose of an article only. They should not be regarded as a substitute for taking legal advice.
