What legal implications await generative AI in 2023?


14 Feb 2023

Image: © XaMaps/Stock.adobe.com

As part of SiliconRepublic.com’s AI & Analytics Week, William Fry’s Barry Scannell discusses the legal trends expected for 2023 in relation to generative AI.


One of the main trends in AI for 2023 will surely be the maturing of generative AI and its relationship with copyright law.

In January, Getty Images initiated High Court proceedings in London against Stability AI, the company behind Stable Diffusion, for copyright infringement. The following month, it was announced that Getty Images is also filing proceedings against Stability AI in the US.

Separately, a class-action lawsuit has been launched in California against Stability AI, Midjourney and DeviantArt over their generative AI systems. Stability AI created Stable Diffusion – the text-to-image diffusion model that apps such as Lensa AI use for their Magic Avatars feature.

Meanwhile, the US Copyright Office is deliberating whether or not to grant copyright registration to a graphic novel which was in part created by generative AI.

The issue being addressed in both sets of proceedings is whether using copyright works to train AI constitutes infringement.

In the Californian class action, amongst other things, the plaintiffs claim that the defendants reproduced the works, prepared derivative works, distributed copies of the works, performed the works and displayed the works without the necessary authorisation.

The derivative works point is somewhat unclear. In the text-to-image diffusion systems that many generative AI technologies use, an input such as an image is encoded from ‘pixel space’ into a compressed ‘latent space’, and the AI then derives its output from that latent space – not from the original input.
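The pixel-space-to-latent-space distinction can be sketched with a toy example. This is a deliberately simplified illustration, not the actual Stable Diffusion architecture: real diffusion models use a trained variational autoencoder, whereas here a fixed random linear projection stands in for the encoder and decoder. The point it demonstrates is that generation operates on the compact latent representation, not on the original pixels.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 'pixel space': a 64x64 greyscale image flattened to 4,096 values.
PIXEL_DIM = 64 * 64
LATENT_DIM = 128  # far smaller than pixel space

# Hypothetical fixed linear encoder/decoder standing in for a trained VAE.
encoder = rng.standard_normal((LATENT_DIM, PIXEL_DIM)) / np.sqrt(PIXEL_DIM)
decoder = rng.standard_normal((PIXEL_DIM, LATENT_DIM)) / np.sqrt(LATENT_DIM)

def encode(image: np.ndarray) -> np.ndarray:
    """Map a flattened image from pixel space into latent space."""
    return encoder @ image

def generate(latent: np.ndarray) -> np.ndarray:
    """Derive an output image from a latent vector alone, not from input pixels."""
    return decoder @ latent

training_image = rng.standard_normal(PIXEL_DIM)
latent = encode(training_image)
output = generate(latent)

# The output is computed only from the 128-dimensional latent vector.
print(latent.shape, output.shape)  # (128,) (4096,)
```

Whether an output derived this way counts as a "derivative work" of the training images is precisely the legal question the plaintiffs raise.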

The plaintiffs also claim that ‘passing off’ is occurring because of the systems’ ability to create art “in the style of” a particular artist. They say this has led to imposters selling fake artworks purporting to be by established artists, and that the defendants are liable for this on the basis of vicarious liability.

Training datasets

Many generative AI systems are trained on LAION-5B, which is one of the largest text-image datasets available today. It has been used by myriad companies to create deep learning models. One such deep learning model is called Stable Diffusion – on which new AI apps such as Lensa AI rely.

LAION-5B is a dataset of 5.85bn image-text pairs, which is 14 times larger than LAION-400M, the previous biggest openly accessible image-text dataset in the world.

According to LAION (Large-scale Artificial Intelligence Open Network): “To create image-text pairs, we parse through WAT files from Common Crawl and parse out all HTML IMG tags containing an alt-text attribute. At the same time, we perform a language detection on text with three possible outputs.”

The Common Crawl corpus contains petabytes of data collected since 2008, including raw web page data, extracted metadata and text extractions. In effect, LAION identifies every internet image file that has associated text accompanying it.
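The parsing step LAION describes – extracting IMG tags that carry an alt-text attribute – can be sketched in a few lines. This is a simplified illustration using Python's standard library, not LAION's actual pipeline, which operates on Common Crawl WAT files at petabyte scale:

```python
from html.parser import HTMLParser

class ImgAltExtractor(HTMLParser):
    """Collect (src, alt) pairs from <img> tags that carry an alt attribute."""

    def __init__(self):
        super().__init__()
        self.pairs = []

    def handle_starttag(self, tag, attrs):
        if tag != "img":
            return
        attrs = dict(attrs)
        # Keep only images with non-empty alt text, as LAION describes.
        if attrs.get("alt"):
            self.pairs.append((attrs.get("src"), attrs["alt"]))

html = """
<html><body>
<img src="cat.jpg" alt="a tabby cat on a sofa">
<img src="spacer.gif">
<img src="dog.png" alt="a golden retriever puppy">
</body></html>
"""

parser = ImgAltExtractor()
parser.feed(html)
print(parser.pairs)
# [('cat.jpg', 'a tabby cat on a sofa'), ('dog.png', 'a golden retriever puppy')]
```

Each surviving pair becomes a candidate image-text training example; LAION then applies language detection and other filtering on top.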

It will be interesting to see how such cases progress in Europe, where the recent Copyright in the Digital Single Market Directive provides a reproduction-right exception for text and data mining (TDM). Under EU law, there are copyright exceptions permitting reproductions for TDM for research purposes. Under the new directive, TDM reproductions may also be permitted for commercial purposes, unless rightsholders have expressly reserved their rights.

So what does this mean? Well, organisations using these datasets to train their deep learning models need to satisfy themselves that they have the necessary copyright permissions, or that a copyright exception applies, permitting them to use the images in those datasets. Otherwise, there could be copyright issues. This also applies to other types of generative AI, including music.

If the datasets contain images of people, those images may constitute personal data, and training on them could amount to large-scale automated processing of personal data, which comes with its own set of data protection requirements under the GDPR.

In addition to copyright considerations, organisations using large scale datasets in their AI technology should always satisfy themselves that they are compliant with data protection laws and have taken the necessary precautionary measures, such as a data protection impact assessment, when required.

Musical AI

This issue also applies to music, and Google recently announced that it has developed MusicLM.

While there have been a number of music-based generative AI systems, from Sony’s Flow Machines to Jukebox to AIVA, apparently none has achieved the reported fidelity and complexity of MusicLM. This is apparently due to the limited availability of training data, as music datasets are harder to come by than image datasets.

TechCrunch reports that: “MusicLM was trained on a dataset of 280,000 hours of music to learn to generate coherent songs for descriptions of – as the creators put it – ‘significant complexity’, such as ‘enchanting jazz song with a memorable saxophone solo and a solo singer’ or ‘Berlin ’90s techno with a low bass and strong kick’. Its songs, remarkably, sound something like a human artist might compose, albeit not necessarily as inventive or musically cohesive.”

However, the technology potentially raises significant copyright considerations. The research paper released by Google on MusicLM says that in an experiment, Google researchers found that about 1pc of the music the AI generated was directly reproduced from the songs on which it was trained.

In a relatively recent case, the Court of Justice of the EU held that sampling without authorisation can infringe a phonogram producer’s rights. However, the use of a sound sample taken from a phonogram in a modified form unrecognisable to the ear does not infringe those rights, even without such authorisation.

Google isn’t releasing MusicLM for now, with the researchers saying: “We acknowledge the risk of potential misappropriation of creative content associated to the use case…we strongly emphasise the need for more future work in tackling these risks associated to music generation.”

Given the breadth of music rights – from performance rights to distribution, adaptation, performer, recording, mechanical and synchronisation rights – litigation is bound to happen.

By Barry Scannell

Barry Scannell is a consultant in William Fry’s Technology department.
