The (R)evolutionary Impact of AI-Generated Work and Big Data on Intellectual Property Law and Commercialization

Who should own the Intellectual Property (IP) rights for Artificial Intelligence (AI)-generated work? The current global legal regime does not allow for patent and copyright protection of AI inventions and works, and some argue they may ultimately fall into the public domain. The issue of ownership of AI creations and big data, and its impact on commercialization, sparked lively debate at the “Bracing for Impact: The Artificial Intelligence Challenge” Conference hosted by IP Osgoode on February 2nd, 2018. The panelists canvassed the current legislative framework, identified existing gaps, and put forward potential solutions to address the hurdles that rapid-paced technological innovation poses from an IP standpoint. They also delineated the commercial practices that providers of AI tools and big data employ to navigate the twilight zones of IP law.

The Current Legal System on AI and Big Data

AI today creates music and art with minimal human input. This seems to create the expectation that AI machines will eventually reach fully autonomous decisions, despite the debate over how long that will take. Yet, regardless of significant strides in the field, such as “AIVA”, the AI-powered music composer, and the news that Saudi Arabia became the first state to grant citizenship to an AI robot, “Sophia”, IP law stands as a passive observer, with legislators hesitating to attribute authorship or inventorship, and thus ownership, of AI-generated work.

As Osgoode Hall Law School PhD Candidate Aviv Gaon explained from a theoretical IP perspective, “there are three stages of development: computer-assisted, computer-generated, and AI works”. In computer-assisted works the computer is nothing more than a tool, like a pen. This was the conclusion in Express Newspapers v Liverpool Daily Post, which maintained that ascribing rights to a computer is as absurd as attributing authorship to a pen. But, as Gaon pointed out, even where there is strong evidence of minimal to zero human influence on the creative process, “courts will try to find some human ingenuity to establish authorship within the computer assisted work safe haven”. He pointed to the Alberta Court of Appeal’s decision in Geophysical Service Incorporated v Encana Corporation, which addressed copyright protection of the collection and computer assessment of seismic data.

The issue of IP rights over AI-generated work seems intertwined with ownership of data and databases, since AI algorithms employ big data. While in the United States (US) raw data and databases are not ordinarily copyright-protected, Dov Greenbaum, Director at the Zvi Meitar Institute of IDC Herzliya, highlighted that, if a database is uploaded online bearing digital rights management protection, that database, along with its underlying data, is deemed copyright-protected under the Digital Millennium Copyright Act (DMCA). Raw data can also be protected as trade secrets or under cybersecurity law. Moreover, accessing password-protected online data constitutes unauthorized access and is therefore a criminal offence under US law. That being said, Greenbaum argued that the real value lies not in the data but in the analytics, pointing to Celera Genomics, which gave away for free genomic data that cost USD 3 billion to sequence.


Commercial Practices on Big Data and AI-Generated Work

Given that under no regime is AI considered an author or inventor, an invention is either owned by people or falls into the public domain. That is why establishing a high degree of human influence on an invention is important, a point underscored by Carole Piovesan, Associate at McCarthy Tétrault LLP. From a legal perspective, Piovesan also urged AI stakeholders to illustrate the nature, purpose, and control of the system in order to strengthen their claims to IP rights before the courts: for instance, by elucidating whether the system is a product or a service, a tool or an agent, and whether it is controlled by the programmer or the user.

On the other hand, although the law on IP protection of big data is not settled, big data is already being commercially exploited. Maya Medeiros, Partner at Norton Rose Fulbright LLP, illustrated some practical aspects of commercializing big data: industry trends in licensing include collaboration agreements between “owners” of a (unique) data set and those who have the AI tools to process it. Additional value can be generated by “clearing data in a bad state”. Collaboration agreements, as Medeiros stressed, must draw a clear line around who is expected to own what. Data providers will aim to control access and to retrieve their data when the agreement ends, while AI providers will try to own or control the aggregation of data from different sources. Such expectations should be spelled out in the pertinent contract, even though, as Medeiros emphasized, the question of IP protection over the ownership of transformed data aggregates has not yet been legally settled. Given the above, Medeiros would not recommend co-ownership between data providers and AI technology providers; instead, she pointed to sublicensing data in exchange for a share of commercialization revenue from the ultimately refined AI tool as an emerging compensation trend.


Addressing the Legislative Gap

While AI is growing exponentially and robots are developing social skills, regulators can address the legislative gap, for example, by framing AI as an employee, which would require minimal amendments to copyright law, assuming it encapsulates vicarious liability provisions. As Piovesan argued, the gap could also be overcome by extending legal personality to AI systems, as is being considered by legislators in Estonia.

Alternatively, Gaon proposed that a joint authorship model for computer-generated work could prevent unlawful exploitation of AI works by promoting the integration of knowledge and the recreation of works. Under this model, IP rights could be divided between the programmer and the AI computer, which, in practice, means that the computer’s rights would become available to the public for a short time or that profits would be invested for the public good. Naturally, as Gaon noted, this model requires the development of a test to establish the degree of human impact on the creation.

Finally, Alexandra George, Senior Lecturer at UNSW Sydney Law, foresees the AI challenge being resolved by espousing the same principles on which the law has always evolved, since the metaphysics have not changed. However, she argued that truly revolutionary change would have required such a framework to be in place already, as AI is already upon us. Prof. Carys Craig, Associate Professor at Osgoode Hall Law School, echoed George’s views and posited that we ought not to think about expanding the confines of IP without revisiting the normative justifications and rationales on which existing IP rights are premised. Yet, Piovesan noted that some of the fundamental principles of IP law are being challenged by AI’s very nature, referring to the fact that AI cannot be incentivized to innovate through recognition and reward. Moreover, although we have yet to reach “singularity”, AI’s developing “emotional nature is pushing the boundaries of how we conceptualize and identify humans”.

All in all, the longer we delay addressing the issue of AI rights, the more radical a reform will be required, as the existing legal “boxes” may not be sufficient to fit AI’s growing capabilities, with personhood being the least of the recognitions it could attract. On the other hand, it seems contentious to approach this issue based on what AI can do; the focus should be on what AI is, especially in light of the prospect that it will eventually reach fully autonomous decisions. AI remains a tool, even if it ultimately behaves like a human. It is therefore imperative that, as a tool, AI remain under control, something that may be disincentivized if AI-generated work were to fall into the public domain.


Yonida Koukio is an IPilogue Editor and an LL.M. Candidate at Osgoode Hall Law School.