“Artificial Intelligence Systems as Inventors?” – The Max Planck Institute on Machine Autonomy and AI Patent Rights

Photo by KirillM (depositphotos)

Emily Xiang is an IPilogue Writer, the President of the Intellectual Property Society of Osgoode, and a 2L JD Candidate at Osgoode Hall Law School.

 

The emergence and rapid development of highly advanced technologies affect virtually every aspect of our day-to-day lives, and in the absence of more explicit legislative guidance, the time has finally come for courts to wrestle with the legal questions these technologies raise.

In July 2021, the Federal Court of Australia affirmed in Thaler v Commissioner of Patents [2021] FCA 879 that artificial intelligence (AI) systems may be deemed “inventors” under Australian patent law. Since this decision, numerous scholars have questioned the validity of the Court’s reasoning, including researchers at the Max Planck Institute for Innovation and Competition. In their position statement entitled “Artificial Intelligence Systems as Inventors?”, Dr. Daria Kim, Dr. Josef Drexl, Dr. Reto M. Hilty, and Peter R. Slowinski expound on the evolving case law in this subject area. They identify some of the assumptions made by the Federal Court of Australia regarding the technical capabilities of AI systems and question the potential consequences of attributing inventorship to non-human entities in the absence of a more comprehensive analysis.

The principal question at hand is whether non-human entities, such as AI systems, should have legal capacity. Employing a social welfare perspective in their position statement, Kim et al. contend that, since patent protection imposes welfare costs, a finding that AI systems are entitled to legal rights can only be justified by benefits that would not arise in the absence of legal protection. The authors also advance several counter-arguments to the reasoning of Beach J at paragraph 10 of the Thaler decision:

“First, an inventor is an agent noun; an agent can be a person or thing that invents. Second, so to hold reflects the reality in terms of many otherwise patentable inventions where it cannot sensibly be said that a human is the inventor. Third, nothing in the Act dictates the contrary conclusion.”

Firstly, Kim et al. argue that classifying “inventor” as an “agent noun” is insufficient to conclude that non-human entities can be considered inventors under patent law. Rather, even highly advanced AI systems can only reasonably be regarded as “tools” that follow algorithmic instructions and computational techniques to achieve certain ends, and such adherence should not be equated with the act of inventing. The Court overlooks the fact that, in most cases, AI systems require meticulous human adjustment and decision-making to produce useful results.

In paragraph 126 of the Thaler decision, the Court notes that “machines have been autonomously or semi-autonomously generating patentable results for some time now”, and that by according AI the label of ‘inventor’, one is simply “recognizing the reality” of the state of the art. However, Kim et al. point out that it has not yet been proven that AI systems can act and invent “autonomously”, and that the Court in this case made an unfounded assumption. As one commentator from the American Intellectual Property Law Association noted, “the current state of AI technology is not sufficiently advanced at this time and in the foreseeable future so as to completely exclude the role of a human inventor in the development of AI inventions”. Moreover, there is a danger that such a finding of autonomous or semi-autonomous inventing on the part of AI might spread public misinformation about the current state of AI technology.

Kim et al. explain that the silence of Australian patent law (and other jurisdictions’ patent frameworks) on the subject of non-human inventors cannot be interpreted as implicit recognition, and that such an attribution could contribute to legal uncertainty. After all, it only invites further questions about how and when the Australian patent system must now recognize an AI system as having “autonomously” invented, as opposed to cases where AI techniques are employed as mere tools to achieve a certain end.

Finally, Beach J in paragraph 56 of the Thaler decision justifies the Court’s broader view of the concept of “inventor” by stating that to do otherwise would “inhibit innovation not just in the field of computer science but all other scientific fields which may benefit from the output of an [AI] system”. However, it is important to note that surges of patenting activity involving AI techniques and applications have occurred even in the absence of such recognition, and that taking inventorship recognition away from human inventors may in fact disincentivize the filing of AI-related patent applications.

Legal frameworks around the world remain unsettled on the status of AI as inventor. For instance, the US District Court for the Eastern District of Virginia recently held in Thaler v Hirshfeld that AI machines cannot be considered “inventors” under the US Patent Act. However, as cutting-edge technology continues to develop at a rapid pace, courts around the world should expect to further engage with and clarify the law in this area.