‘Legal minefield’: The risk of commercialising AI-generated images

20 Sep 2022


Jonathan Løw of JumpStory says the legal risk may fall on the end user if their AI-generated image enters a copyright dispute.

AI-generated images have rapidly become more prevalent this year as new text-to-image models become available.

These models, which can create images based on a text prompt, can offer new possibilities for users – including the right to use their generations for commercial purposes.

OpenAI made this claim in July when it expanded the beta of its text-to-image generator, DALL-E 2.

“Users get full usage rights to commercialise the images they create with DALL-E, including the right to reprint, sell and merchandise,” OpenAI said.

However, legal concerns have been raised around these AI-generated images, such as who truly owns the images and whether they might infringe on existing copyrighted works.

Jonathan Løw is the co-founder of JumpStory, which uses AI to find original, authentic images that can be legally used. Løw told SiliconRepublic.com that using AI-generated images for commercial purposes could put people at risk of being sued.

“Right now the legal minefield is still not packed with mines, because legal has a tendency to follow after technological disruption. But the minefield is there, and it’s real,” Løw said.

Copyrighted images

Text-to-image generators such as DALL-E, Stable Diffusion and Midjourney are able to understand the relationship between an image and the words used to describe it.

When a user types in a text prompt, these AI models are able to create an image based on how they interpret the text, combining different concepts, attributes and styles. To make this possible, the models are trained using a massive amount of images.

For example, OpenAI said its DALL-E 2 is trained on around 650m images, from a mix of publicly available sources and “sources we licensed”.

However, the company has not made the dataset public, leading to concerns that copyrighted material could be within it. OpenAI said in a GitHub post in April that it had taken measures to prevent copyright issues from occurring.

“The model can generate known entities including trademarked logos and copyrighted characters,” it explained.

“OpenAI will evaluate different approaches to handle potential copyright and trademark issues, which may include allowing such generations as part of ‘fair use’ or similar concepts, filtering specific types of content, and working directly with copyright/trademark owners on these issues.”

Løw said the datasets that these AI models are trained on are “crucial”, as the models could potentially create new images that “mirror” an original, leading to risks of copyright infringement.

“You can’t just scrape other people’s work to generate your own and then claim ownership of this afterwards,” Løw said. “It doesn’t matter how advanced or intelligent your AI code is.”

An analysis of some of the data used to train the text-to-image generator Stable Diffusion suggests that some of these training images may be copyright protected.

Of the 12m images analysed, around 47pc were sourced from only 100 domains, with the largest number of images (around 8.5pc) coming from Pinterest. The analysis also found that images from famous artists were included, along with images of celebrities and political figures.

Lack of fair use

IP law expert Bradley J Hulbert recently told TechCrunch that AI-generated images could cause various problems from a copyright perspective. He said that artwork that bears a resemblance to a “protected work” such as a Disney character or logo needs to be “transformative” to be legally protected.

If a piece of work qualifies as fair use under a legal defence such as this, then it would not be considered a copyright infringement.

However, the issue around fair use protection becomes confusing when AI is involved. An article by The Verge last year noted that “there is no direct legal precedent in the US that upholds publicly available training data as fair use”.

This was according to Mark Lemley and Bryan Casey of Stanford Law School, who published a paper in 2020 about AI datasets and fair use. This paper was supportive of the use of copyrighted material in machine learning platforms, however.

“Fair use is about more than just transforming copyrighted works into new works,” Lemley and Casey wrote. “It’s about preserving our ability to create, share and build upon new ideas. In other words, it’s about preserving the ability to learn – whether the entity doing the learning is a person or a robot.”

‘You can’t just scrape other people’s work to generate your own and then claim ownership of this afterwards’
– JONATHAN LØW

Meanwhile, a decision issued by the US Copyright Office in February implies that AI-generated images can’t be copyrighted at all as an element of “human authorship” is required.

Some online art communities have also raised issues with the ethics of AI-generated images and have started banning them from their sites.

Polish digital artist Greg Rutkowski recently claimed that many of his landscape illustrations are being used by the Stable Diffusion AI to create new images based on his work.

And an AI-generated artwork sparked debate last month after it won a prize in the Colorado State Fair’s fine art competition. The winning image was generated using the Midjourney text-to-image AI, and the creator was criticised by some for what they saw as a flagrant disregard for artistic practices. Others pointed out that the judges of the competition may not have known what Midjourney was when the piece was submitted.

Lack of ownership or legal support

While many of these AI models claim people can use the generated images for their own purposes, it isn’t always clear which party owns the images.

“When people are told that they can use their works for commercial purposes, they believe that the works belong to them, but they actually don’t,” Løw said.

OpenAI states that users can lose the rights to use DALL-E generations if they breach the company’s terms or content policy.

“We will provide you written notice and a reasonable opportunity to fix your violation, unless it was clearly illegal or abusive,” OpenAI said.

The terms and conditions of AI models such as Stable Diffusion and Midjourney imply that the ownership of the images lies with the company, even if images are being used for commercial purposes. Stable Diffusion states that by using its services, “you hereby agree to forfeit all intellectual property rights claims, worldwide”.

Løw said that the legal risk may still fall on the end user if their commercially used image enters a copyright dispute.

“Even though OpenAI, Midjourney and others claim that their images can be used commercially, their terms and conditions still state that they don’t offer you any kind of insurance or financial help if you get into legal trouble,” Løw said. “So at the end of the day you are running all the risks as a user.”

In DALL-E’s terms of use, which were last updated on 20 July, one of the sections is titled ‘No Guarantees’. “We plan to continue to develop and improve DALL-E, but we make no guarantees or promises about how DALL-E operates or that it will function as intended, and your use of DALL-E is at your own risk,” OpenAI said.

Midjourney also claims to have no liability in the event of a legal dispute in its terms of service.

“You are responsible for your use of the service. If you harm someone else or get into a dispute with someone else, we will not be involved.

“If you knowingly infringe someone else’s intellectual property, and that costs us money, we’re going to come find you and collect that money from you. We might also do other stuff, like try to get a court to make you pay our attorney’s fees. Don’t do it.”


Leigh Mc Gowran is a journalist with Silicon Republic

editorial@siliconrepublic.com