Determining Liability for AI-Generated Works

Much of the discussion on artificial intelligence (AI)-generated works has centred on ownership of, and ownership rights to, the works AI generates, rather than on the liability obligations those works create.  As Chris Temple puts it: “What is new in today’s marketplace are tasks and functions being undertaken without human input or intervention. In a situation where decisions are made and solutions are executed by artificial intelligence without human involvement, how does the law assess potential liability when things go wrong? And perhaps more importantly, how will the law assess liability in the future?”

One central question in this discussion is whether the legislature or the courts will take the lead in developing guidance for AI-generated works.  Until liability obligations are fully clarified, the best option for individuals and entities is to rely on contractual provisions “which covenant that the technology will operate as intended, and that if unwanted outcomes result then contractual remedies will follow”. Such provisions would help parties limit their liability exposure from AI-generated works.

Another liability obligation arises in tort, under the doctrine of negligence, where a series of questions presents itself: “Who is responsible? Who should bear liability? In the case of AI, is it the programmer or developer? Is it the user? Or is it the technology itself? What changes might we see to the standard of care or the principles of negligent design? As the AI evolves and makes its own decisions, should it be considered an agent of the developer and, if so, is the developer vicariously liable for the decisions made by the AI that result in negligence?”

In addressing these open questions about where liability begins and ends, the central issue for the courts would be the test of reasonable foreseeability, that is, “whether a reasonable person would predict or expect the general consequences that would result because of his or her conduct, without the benefit of hindsight”. Where foreseeability is lacking, the law might replace the negligence test with strict liability, under which a defendant is held legally responsible even when neither an intentional nor a negligent act has been found, so long as it is proven that the defendant’s act caused injury to the plaintiff.

In conclusion, organizations should brace themselves for both the positive consequences and the possible negative consequences, such as liability, that AI-generated works could bring.  With guidance from the legislature or the courts, it will become easier to ascertain who should be liable for claims arising from AI-generated works.

Eniola Olawuyi is an LLM student at Osgoode Hall Law School studying Intellectual Property Law.