Artificial Intelligence and Intellectual Property: Non-Human Innovation and the Ownership of Rights

Initiatives are needed to regulate this emerging area of technology lest global competition be severely skewed.


Sam Norris


With AI on the rise, IP law finds itself tied in knots, unable to utilise its traditional statutory tests in response to intelligent, innovative algorithms.

In this article, I give a run-down of the main contentious issues relating to ownership of rights. Ultimately, I find that more initiatives are needed to regulate this emerging area of technology lest global competition be severely skewed.


Last year, the music AI known as Endel was signed by Warner Music. This AI, a specialist in composing mood music, digests personal data such as location, heart rate and weather to create fully original soundscapes tailored to each individual with the Endel app.

Intellectual Property law has always tussled with this kind of fast-developing technology. Ever since the printing press required more stringent protection of authors’ rights, initiating conversations around proto-copyright, IP has had the tricky task of responding to ever-accelerating developments in technology-assisted creation.

Its latest adversary is Artificial Intelligence. Broadly defined, AI is a construct of code that is able to learn and respond to complex stimuli.

Whereas traditional algorithms require all foreseeable parameters to be mapped out by their coder, AI can improve itself and respond to unfamiliar situations beyond the boundaries of its own coded function. As such, AI can actively invent new processes based on its own intelligent capacity.
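The contrast can be made concrete with a deliberately simple sketch (a hypothetical toy, not any real AI system): in the first function every case is decided by a rule the programmer wrote down in advance, while in the second the decision boundary is derived from examples the program is shown, so the programmer never writes the rule itself.

```python
# A hard-coded rule: every outcome must be anticipated by the
# programmer and written into the function in advance.
def classify_fixed(x):
    return "large" if x > 10 else "small"

# A minimal "learning" routine: the threshold is not chosen by the
# programmer but derived from labelled examples shown to the program.
def learn_threshold(examples):
    # examples: list of (value, label) pairs, label "large" or "small"
    large = [v for v, lbl in examples if lbl == "large"]
    small = [v for v, lbl in examples if lbl == "small"]
    # Place the decision boundary midway between the two groups.
    return (min(large) + max(small)) / 2

threshold = learn_threshold([(2, "small"), (4, "small"),
                             (20, "large"), (30, "large")])

def classify_learned(x, t=threshold):
    return "large" if x > t else "small"
```

Real machine-learning systems are vastly more sophisticated, but the legal point is the same: the behaviour of the second function emerges from the data, not from any rule its author specified.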

Accordingly, AI presents serious problems for IP law. Up until now, the “intellect” in Intellectual Property has been the preserve of human beings. AI, however, now also possesses the intellectual ability to do “work” of a creative or inventive nature.

Some key questions arise: to what extent should rights be afforded to protect the work, what should these rights be, and to whom do they belong?

Clearly, it is possible for an AI to create things that are worthy of being protected, whether by a patent or through copyright. We see this with AI such as Nissan’s DriveSpark, launched in 2016, which assists in drafting new vehicle designs, as well as creative AI such as MuseNet, a deep neural network that creates its own classical and jazz compositions.

Indeed, AI is capable of completing tasks that humans cannot achieve with their limited brainpower, lending itself to work in industries where complex processes are required.

Under copyright, AI creations such as MuseNet’s compositions would satisfy the conditions set out in s. 3(1) of the Copyright, Designs and Patents Act 1988.

Similarly, an AI that invents new technical processes in industry would satisfy the test for patentable subject matter under s. 1(1) of the Patents Act 1977, as well as under the EPC 2000. It is indisputable that an AI could produce literary or musical works as defined in the CDPA, and its inventions clearly meet tests such as industrial application in the case of patents.

Conceptually, the nature of AI invention is not considerably different from human invention. An AI takes in a set of data, or responds to a stimulus, uses a defined set of rules, and innovates based on what it knows. Humans fundamentally do the same: artists study the Old Masters to hone their technique before making original works, and inventors often have highly specialised knowledge of their field before they are able to develop new things.

The more contentious issue relates to ownership of rights. If an AI creates or invents a new piece of music or algorithm, who owns the rights over this creation? In copyright, s. 9(3) of the CDPA states that, where a work is computer-generated, the author is “the person by whom the arrangements necessary for the creation of the work are undertaken”. In such a scenario, the coder of an AI would theoretically have a claim to the IP of an AI’s invention.

But this logic immediately runs aground: in the case of AI, the programmer only has a removed involvement in creating the work. Granted, the programmer may have had significant input through implementing operational constraints, commands and learning patterns, but the AI itself has fundamentally created the new process.

For example, in some cases, advanced AI could reasonably create something totally unforeseen by its programmers. Given this, it would be disproportionate to attribute rights to the programmer for simply facilitating the creation of something they never anticipated.

It becomes clear that the CDPA 1988 makes little sense in the context of AI: indeed, the Act was drafted long before modern AI was a practical reality, when computers were passive instruments rather than active architects of emergent processes.

Until now, the law has been unchallenged on the issue, but a handful of recent patent applications in the UK and EU have begun to probe the legal framework. Most notably, a recent patent application was made by Stephen Thaler for an AI named DABUS.

Thaler made the application to the EPO arguing that the EPC contained no requirement for the inventor to be human, and that as the AI’s owner, he was therefore assignee of the rights.

In line with the UKIPO’s response to an application by Thaler, the EPO uncompromisingly asserted that the inventor must be a “natural person”. It did not dispute the argument that DABUS was the inventor of the work, but refused to accept the AI as an inventor for the purposes of granting a patent.

Its reasoning was threefold: first, that this approach was consistent with nearly all domestic law in European countries; second, that the presence of a human inventor was required in order to establish the inventor’s legitimacy; and third, that the inventor must be human in order to benefit from rights.

There are a number of policy considerations underpinning such an approach. Looking at the incentivist justification for IP, which sees IP as a framework to encourage innovation by preserving economic reward, we can see that it makes little sense to treat the AI as the inventor as it cannot enjoy the benefit of exclusion or utilise its rights commercially.

Its programmers, on the other hand, would have obvious uses for such rights. But applications to grant the programmers themselves patents would still likely fail as technically speaking they have not “invented”, merely facilitated the creation of new processes.

That outcome seems likely unless they can prove that their programming was involved enough to have substantively contributed to the invention of the new work.

This is where the law stands currently, but there is a real danger that such an approach will become outdated very soon. Most technology forecasters are quick to stress AI’s exponentially increasing rate of advancement. In a short amount of time, AI innovations will be abundant, and as things stand, their inventions will be unpatentable.

Looking farther into the future, the very distinction between natural and non-natural persons becomes problematic. Intelligent AIs are already engaging in activities typically reserved for flesh-and-blood humans: speech, work, problem-solving. The famous robot AI, Sophia, is now an actual citizen of Saudi Arabia and Innovation Ambassador for the United Nations Development Programme. It is well known that citizenship and personhood are closely related concepts legally. Ultimately, it is conceivable that AIs could reap the rewards of their own innovations as beings with similar rights and freedoms to humans.

This aside, there is nonetheless an imperative for IP law to respond more adequately to the issue of AI invention in the near term. As Huw Jones notes in his decision on the UK DABUS application, AI inventions upset the intricate balance of incentivised innovation held in place by IP. An AI cannot be made to share information about its invention, with or without a patented monopoly over its industrial use.

There is also no requirement for its programmers or organisational owners to do so. Given that most advanced AI is created by huge tech companies — Google, Microsoft, Amazon, IBM, Tencent — we could see a situation where advanced AIs, operating at an exponentially accelerating work rate, create huge amounts of innovative industrial processes kept entirely out of the reach of the rest of the world.

Leaving the law alone therefore places massive power into the hands of tech giants, which could be dangerous for global competition.

There are two potential solutions. First, the law could grant patents to AI inventors in exchange for public disclosure. We have already seen, however, that this makes little sense in practice as AI cannot benefit from a patent monopoly, and it is disproportionate to allow its programmers to do so.

Second, a different route would be to implement a global imperative to place AI innovations into the public realm. This would require the creation of some kind of open platform, probably based online, that operates across domestic jurisdictions as a repository of AI creations and inventions with industrial application.

It would likely have to be a WIPO initiative. While this would neutralise some of the problems of AI invention, there are many further issues surrounding such an approach.

For instance, it may be difficult for an AI’s owner to distinguish between their original code and the AI’s newly developed process. Would the entire piece of code then have to be disclosed? And how would such a scheme be monitored, regulated and enforced?

It may also be very difficult to coordinate such an initiative internationally. Until legal frameworks are challenged more frequently on this issue, it is unlikely that IP authorities will turn their heads and take collective action. As we have seen, however, it may be too late by the time they do.

IP has a real opportunity to get ahead of the curve on this issue, but it will be down to domestic and international legal institutions to take the initiative and create some form of unique regulation for AI innovation.

