The app adds subtle “perturbations” to art to interfere with AI models when they read the image’s data.
Since the rise of AI-generated artwork last year, artists have raised concerns that their work is being used and mimicked by these systems without their consent.
Last month, Getty Images sued US-based image generation start-up Stability AI for allegedly infringing its intellectual property “on a staggering scale”.
To help address these concerns, US researchers have created an app to prevent art from being used by these AI models.
The app, called Glaze, adds subtle changes to artworks to interfere with the ability of AI models to read the image’s data.
In a research paper, the University of Chicago team said the app adds “barely perceptible perturbations” which aim to mislead AI models that are trying to mimic specific artists.
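The paper does not disclose a simple formula, and Glaze’s actual method is more sophisticated, but the general idea of a “barely perceptible” perturbation can be loosely sketched as follows. This is an illustrative stand-in, not the Glaze algorithm: it nudges each pixel by a tiny, capped amount in a direction (here a made-up `gradient`) chosen to confuse a model, while keeping the image visually unchanged to a human.

```python
import numpy as np

def cloak(image: np.ndarray, gradient: np.ndarray, epsilon: float = 0.03) -> np.ndarray:
    """Add a barely perceptible, budget-bounded perturbation to an image.

    NOT the Glaze algorithm -- a generic sketch of the idea: shift each
    pixel in a model-confusing direction (the stand-in `gradient`), with
    the change capped at `epsilon` so the art looks the same to a human.
    Pixel values are assumed to be in the range [0, 1].
    """
    perturbation = epsilon * np.sign(gradient)         # small step, capped at epsilon per pixel
    cloaked = np.clip(image + perturbation, 0.0, 1.0)  # keep values in the valid pixel range
    return cloaked

# Toy usage: a 2x2 grayscale "artwork" and a made-up gradient.
art = np.array([[0.2, 0.8], [0.5, 0.5]])
grad = np.array([[1.0, -1.0], [0.5, 0.0]])
protected = cloak(art, grad, epsilon=0.03)
print(np.max(np.abs(protected - art)))  # no pixel moves by more than 0.03
```

The key property is the perturbation budget: no pixel changes by more than `epsilon`, so the cloaked image looks identical to the original, while a model training on it sees systematically shifted features.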
The team said it tested this app with more than 1,000 artists and claimed Glaze is “highly successful” at disrupting AI mimicry, with more than a 92pc success rate under normal circumstances.
The research team claims AI models such as Stable Diffusion and Midjourney are causing a range of problems for the art community, while “customised diffusion models” are being created to mimic specific artists.
“Beyond open legal questions of copyrights, intellectual property, and consent, it is clear that these AI models have had significant negative impacts on independent artists,” the researchers said.
“As synthetic art mimicry continues to grow for popular artists, they displace original art in search results, further disrupting the artist’s ability to advertise and promote work to potential customers.
“Finally, these mimicry attacks are demoralising art students training to be future artists. Art students see their future careers replaced by AI models even if they can successfully find and develop their own artistic styles,” the researchers said.
Despite the high success rate, the researchers said the app is “not a permanent solution” due to the rapid pace at which AI is advancing.
“Techniques we use to cloak artworks today might be overcome by a future countermeasure, possibly rendering previously protected art vulnerable,” the team said.
Last month, we spoke to Colin Foran, head of game at Shrapnel, about the rise of AI and the debate around whether it can, or should, replace the human element in art.
Meanwhile, the legal aspect of AI-generated art became more complicated this month with new guidance by the US Copyright Office. This guidance said that in some cases, art generated by AI can be copyright protected, depending on circumstances such as “how the AI tool operates and how it was used to create the final work”.