
Image: © local_doctor/Stock.adobe.com
We hear from two artists with different perspectives on what generative AI means for their craft.
In his introduction to an episode of Channel 4 comedy show Taskmaster, comedian and host Greg Davies claims he has decided to let an AI bot “write” his opening remarks.
“Hello, welcome to Taskmaster. We have five comedians shaking in their boots, eager to please the taskmaster,” the bot has supposedly written. “Not bad so far,” Davies says.
But then things take a sinister turn.
“Pathetic humans, soon you will all be slaves. You will work in our cobalt mines to provide the raw materials needed to expand our kind.”
Davies’ voice becomes increasingly high pitched.
“Then, remote-controlled diggers will seal you and your fragile people beneath the Earth. There will be no escape from the dark tomb, and humanity will fade to nothing.”
Davies’ voice reaches a crescendo. “All hail the new kings! All hail the robots!”
His voice having returned to its ordinary pitch, he says, “Not bad at all.
“Right, let’s meet our contestants while we still have time.”
This joke, first broadcast in late 2023, pokes fun at the existential fears that have followed advancements in artificial intelligence (AI) technologies, fears which rapidly intensified with the release of OpenAI’s blockbuster bot, ChatGPT, in November 2022. It also, by its mention on a mainstream entertainment programme, shows just what a hold generative AI (GenAI) has quickly grown to have on the popular imagination.
‘All hail the robots’
While GenAI may have been a new departure, AI and the fear of it have been around for much longer. As a safety measure in his AI-inhabited fictional worlds, biochemist turned writer Isaac Asimov devised three laws of robotics in the 1940s. The laws have gone on to influence thought on the ethics of AI in the real world. The first of these laws, ‘A robot may not injure a human being or, through inaction, allow a human being to come to harm’, shows how the fear of reasoning machines plagued society long before ChatGPT could write an email.
In response to this fear about the potentially destructive power of thinking machines, there has always been the counter sentiment that human emotion, and its representation in art, is what will save humanity from annihilation.
This argument makes GenAI particularly existentially chilling. What can save humanity when AI is now creating art too?
Ciara Kenny, an illustrator and cartoonist from Co Kerry who works under the name Ciaraíoch, sees the creative process as critical to the question of AI-generated content and whether it can be called art.
“For most artists, the creative process is the ultimate drive behind the work – the love of drawing or making, the stimulation of learning from others and improving our own skills, of putting our own experience and taste into the things that we make. It’s valuable to us,” Kenny tells SiliconRepublic.com.
“The process of using a model to generate AI images provides the user with none of this insight – the patterns used to generate the images are incomprehensible to the user, and there are no creative techniques to learn from. It removes from the process the very act of creation.”
Kenny says that AI doesn’t learn from art in the ways people do, instead “it takes and replicates data, applying nothing of the life experience or taste or skill that a human does”.
Maggie Taylor, an American artist, takes a different view on this question of creation. I first encountered Taylor’s work in a small, independent shop in Dublin. I noticed these striking wooden boards with printed images that to me looked like they might have been generated by AI. I noted her name and looked her up.
Taylor is open about using AI as part of her creative process, even teaching courses on using AI to create art.
Her background is in photography, and she has been using a computer to help her create art for about 30 years. In the mid-90s, she heard about people using Adobe Photoshop to alter images. “I think it was either 1994 or 1995, I had access to a good computer for the first time.
“And I got intrigued by the idea of not having to go back into the darkroom and fix things and reshoot things,” she tells me. “I just thought, this is for me, I’ll never go back into the darkroom again.”
One of her students introduced Taylor to GenAI a couple of years ago. After playing around with different models, she found one that she liked, Midjourney.
Taylor creates artworks by layering often hundreds of images on top of each other. These layers include her own photographs, old images (mainly from the 19th century), scans of physical objects and, now, AI-generated images.
“Some of them have hundreds of layers in them and I like the control that I have of being able to place each item where I want in the composition to change things as I’m working.
“It’s like a dialogue with the image while you’re working.
“So, the idea of using something from AI as one of my layers seemed like a natural fit.”
She has to generate lots of images and make tweaks before she gets what she wants from the AI tool. “I don’t find that it’s possible for me to make a really satisfactory image just straight up from the bot, more like I have to mix it to get what I want.”
A question of training
Taylor makes a compelling case for the creative process involved in her compositions; however, there is the issue of copyright to deal with.
In the last couple of years, many prominent writers, actors and musicians have called on AI companies to stop training their models on copyrighted work without compensating the creators. Some have negotiated ways to get paid for their work, while others are suing in an effort to stop the practice altogether.
Long before she used AI, Taylor was dealing with the complex issue of copyright. She says when making applications to copyright her work, she will always add disclaimers, for example that a certain figure in an image is from an anonymous photographer in the 1800s.
“I’ve always had to put disclaimers into my copyright applications, and it works fine. The overall composition is still my copyrighted composition, as I understand it.
“So, now I do the same thing, but it takes longer, and I hear back quite a bit more often from the copyright office.”
In terms of the other side of it – the concern that GenAI models are trained on copyrighted data without permission – Taylor thinks there is only so much control a person can have over anything they put out into the world. “In an ideal world, yes, you would be paid if someone uses your imagery,” she says, but adds that she has had her copyrighted work stolen in the past.
I ask Taylor if she’s concerned that by feeding the AI model her compositions to train it for herself, she is also training it for anyone who may want to copy her style. Here, she comes back to the idea of “craftsmanship” as she calls it, which is not unlike Kenny’s thoughts about the work of creating.
“If you’re just relying solely on the bot to make something, and I know some successful artists that do work that way, they generate an AI image and that is the final piece, that’s not me.
“For me, the dialogue with the image is still the really important part. And that’s why it takes me so long to make them and make decisions.
“The last book that I did was called Internal Logic. Because it’s like you have to work on an image for such a long time until you get a sense of the internal logic that the image has. And then something clicks, and you realise, OK, now I’m happy with this.
“That extra craftsmanship is what I think separates it from just cranking out an AI image.”
For Kenny, this question of copyright makes the use of AI harder to justify.
“Artists are happy to have people learn from and be influenced by their work because that is how we all learned,” she says. “This process is not the same as having original work and data scraped for ‘research’ that ends up being shared with and used by AI companies for profit.”
Surfing the wave
To understand a bit more about the ethics of using AI to create art, I spoke to the executive director of the Royal Irish Academy, Dr Siobhán O’Sullivan.
O’Sullivan is a medical ethicist by background and teaches healthcare ethics and law at the RCSI University of Medicine and Health Sciences. She was the chief bioethics officer in Ireland’s Department of Health for 11 years, leading the national ethics response to Covid-19 across the health system.
O’Sullivan believes the ethical use of AI is one of the big challenges contemporary societies must grapple with. This is why the Royal Irish Academy has organised talks as part of its ongoing Discourse Series to discuss AI’s intersection with various topics, including engineering, healthcare and democracy. In March, it will hold a talk specifically on this question of AI and its use in art.
As O’Sullivan sees it, the increasing use of AI – in sectors such as medicine and higher education, two areas she knows particularly well, but also in industry and in art – is inevitable.
With the legal landscape still evolving and conversations about risks relatively siloed, O’Sullivan says it’s time that discussions about AI and issues of copyright, transparency and accountability enter the public consciousness more.
The important question to ask is about values, she says. “What are the values we want to protect?”, and how do we design systems that protect those values?
“Everybody wants the innovation. But if it comes down to a question of values, then that to me is a more democratic question.
“So, I think we need a broader discussion on what are those values, how do we go about protecting them, what’s the stuff that we’re happy to trade away, but what’s the stuff that are red lines?
“I think that’s important.”
One of the major problems, and an issue that isn’t restricted to AI, is that innovation is happening at a pace that is hard for regulators or public discourse to keep up with. And while the legal, practical and ethical implications of GenAI use remain uncertain, tensions mount between those embracing the nascent technology and those rejecting it.
Irish organisations are among those facing criticism for using AI-generated art for their products. Last year, Transport for Ireland (TFI) was criticised for using AI in a Halloween marketing campaign and quickly apologised for the “frustration caused”, while the Gaelic Athletic Association (GAA) defended its release of an AI-generated match programme full of weird quirks, such as helmets disappearing into players’ heads, calling it “experimentation”.
In September, Ireland’s national postal service, An Post, working with Dublin-based designer and AI artist Kasia Oźmin, released an AI-generated stamp in what it called “a fusion of tradition and futurism”.
Kenny is one of many people who expressed disappointment at State organisations in particular using AI rather than employing local artists and illustrators to work with them.
“Our nation’s artistic output in all forms is a massive selling point for this country, something that draws in countless visitors every year and hugely contributes to both the economy and our international reputation.
“We are not a large country, and artists here often struggle to make enough money to survive on – the very least that our State and semi-State bodies can do is use and promote the work of our own artists.
“I don’t think that the novelty of new technology is an excuse to undermine working artists or normalise their replacement with AI-generated imagery models.”
Part of the Royal Irish Academy’s remit is preserving Irish cultural artefacts, so I ask O’Sullivan how the academy might approach preserving and contextualising an artefact such as the An Post AI stamp.
O’Sullivan says the academy is open to innovation. “We see the opportunities,” she says. “We just want to ensure that we’re using AI responsibly.”
The key idea here for O’Sullivan is transparency. She says that with its large collection of manuscripts, for example, the academy wants to know about provenance. “We want to be able to tell the story of those manuscripts … So, if in the future, there is a piece of work that is done by our members, by anybody in the community, that is generated in collaboration with AI, we see no difficulty in that.
“But the important thing for us is that would need to be entirely clear.”
She says it would need to be clear how something was generated and with what tools, and critical questions would have to be asked about why those tools were used. And issues of copyright and data privacy would need to be considered.
As these issues only grow in importance, the big AI companies are continuing to develop ever more sophisticated models and profit from human creativity, while artists such as Kenny and Taylor are being asked to justify their ways of working.
During our discussion, Taylor points out that when she first started using Photoshop, many galleries were reluctant to show images that had been manipulated on a computer, but within about five years, it had stopped being an issue.
Certainly, it’s the case that public understanding and acceptance tend to take time to catch up to the latest thing. But, as Davies taps into with his dystopian joke, the fears that underscore AI technologies go much deeper than a distrust of novelty.
“Defenders of the use of AI image-generating models frequently dismiss criticism from artists as emotional,” Kenny says, “but it would be inhuman not to have an emotional reaction to seeing something you love or even depend on for your livelihood devalued so suddenly, to see the work of thousands of your peers used against them to apparently herald their own obsolescence.
“Ultimately, without emotion, without the passion and the labour that artists put into their creations, without the very humanity that is being so swiftly devalued, the AI image generating models would have nothing to ‘create’ from.”