How risky is AI for elections and democracy?

30 Apr 2024


With the rise of powerful chatbots and deceptive deepfake content, experts believe AI could be used to influence elections, and it is unclear whether governments or tech giants can deal with the threat.

2024 is shaping up to be a big year for elections worldwide, but in an increasingly digital world, concerns about technological interference are rising.

Social media is one example: platforms such as Facebook and Twitter have been used as tools to manipulate voters – a concern that persists to this day.

But another rapidly growing technology is also raising concerns worldwide over its potential to create vast amounts of deceptive information. That technology is artificial intelligence – or AI.

While AI has been around in some form for decades, its capabilities and impact have surged in recent years, particularly after the rise of generative AI models such as ChatGPT. Experts have been discussing the benefits and dangers this technology presents – very much painting it as a “double-edged sword” for many sectors.

But could this technology be used to influence voters and interfere with elections? Meta’s president of global affairs Nick Clegg recently described such fears as overblown and said it was “striking” how little AI was used to disrupt elections in places such as Taiwan, Pakistan and Indonesia. He also said AI should be viewed as “a sword, not just a shield” in tackling bad content.

However, this comes from a company that has heavily invested in AI technology in recent years – and one that has created dedicated teams in response to the threat of AI misuse in elections.

Has it already begun?

Despite what people like Clegg say, there is already evidence that AI has been used to interfere in elections. A report by Microsoft earlier this month claimed threat actors linked with the Chinese government have been using AI-generated content in attempts to “influence and sow division” in multiple countries.

This report also claimed that in January, there was a surge of AI-generated content from China-linked actors to influence the Taiwanese elections – a counter to Clegg’s recent comment. However, the report also claimed there was “little evidence” that the actions of these accounts managed to sway public opinion.

So how exactly can AI be used to influence an election and how impactful could it be? Tim Callan, chief compliance officer at Sectigo, spoke to SiliconRepublic.com about the rising sophistication of deepfakes – AI-generated content that is made to resemble real events or people.

This content is usually in the form of photos, audio or videos. While this content was easier to spot in the past, the technology has advanced to the point where it is more likely to trick people.

Callan said this type of AI-generated content could be used against politicians or political parties in many ways that change “the impression that the average voter is going to have” of them.

“Usually, this is something defamatory,” Callan said. “They want to make that politician look bad. But it could be the opposite. There was a rather famous deepfake here in the US during the New Hampshire primary, where a deepfake voice of Joe Biden was being used.”

Callan said that if an election is a “landslide”, then it’s unlikely these deceptive techniques will change the final outcome, but he noted that a lot of elections are “really close”.

“And in an election that’s really close, just swinging a few votes can make the difference,” he said. “So, [if] people credibly believe that they’re looking at a video of a politician saying something stupid, or falling asleep at the G20 Summit, or some of the other things that we’ve seen, then those things might influence elections. It’s a very real risk.”

A study last year also suggested that AI models can trick people into believing false information better than humans can. This study also referred to a certain AI model as a “double-edged sword”.

Preparing for the threat

AI technology has been making an impact in many sectors for a while now, but efforts to monitor, regulate and better understand the technology are also taking place worldwide.

A recent report from The Washington Post suggests the US Senate is looking for ways to tackle AI deepfakes in election campaigns, while tech giants such as Meta and Google say they are addressing the threat the technology poses to elections.

Meanwhile, governments around the world are making note of AI’s potential to interfere in elections. The EU has sent requests to various tech companies such as Microsoft, Alphabet, Meta and TikTok to see how they are handling the threat generative AI presents to elections.

This followed new requirements for online tech giants announced last month – such as setting up internal election teams and adopting measures to reduce the risks posed by generative AI – to ensure the smooth functioning of democratic processes.

Ireland has also taken note of the threat that deceptive AI content poses to elections, recently releasing a framework on online electoral process information, political advertising and deceptive AI content.

“The risks to the integrity of elections are already apparent, with many examples across the world of inauthentic behaviour, particularly involving AI-manipulated audio or video,” said Minister of State Malcolm Noonan, TD. “It is important that we do all we can to protect our electoral and democratic processes from such interference.”

Experts are also being brought together in Irish-led studies to address the threats facing democracy, including those posed by AI. Dr Joseph Lacey, founding director of the University College Dublin Centre for Democracy Research, is leading an international research project called Elect, which is investigating the challenges stemming from recent changes in how political campaigns are run and won.

Watching the US closely

Speaking to SiliconRepublic.com, Lacey noted various developments in recent years that have altered political campaigns, such as “big data and analytics, new media and the emergence of new electoral forces”.

When it comes to AI, Lacey said its potential use and impact on the campaign environment is still “largely unknown”, but noted that AI can greatly enhance productivity and efficiency, which suggests it will help “both those who want to use it for positive and more nefarious purposes”.

“For example, parties may be able to reach more voters online with the help of AI, but anyone wanting to flood the public sphere with misleading information or lies will also be better able to do this, at least in the absence of countervailing measures,” Lacey said. “One thing we know about political parties is that they have limited resources and expertise. So they can be slow in really maximising what can be gotten out of new technologies to help them campaign.”

Lacey said the main exception to political campaigns having limited resources is in the US, where the two main parties are usually “the real trailblazers in integrating technology into their campaign apparatus”.

“Loose campaign financing means there is plenty of money around and plenty of professionals to be hired who can really add to the technological prowess of campaigns,” he said. “Less well-resourced parties around the world – as ever – will be keeping a close eye on what is being done in US campaigns with the hope of learning from any experimentation with AI that they could potentially adapt within their own resource constraints.”

How to protect democracy?

With research and regulation moving forward, it looks like there will be some tools available to protect elections from AI-generated content. But how soon these defences will be in place – and how effective they will be – remain unclear.

Callan noted that legislation to restrict the use of deepfake technology will “help somewhat”, but said it’s much more difficult to tackle the “various underground clandestine stuff”, such as chatbots on social media sites that are not identified as AI.

But the real issue – according to Callan – is that there is a “lag” between the average voter’s understanding of this technology and “the actual state of the technology”.

“It has been true for more than a year that AI is badly tainting your online experience,” he said. “It could be through deepfakes, it could be through other things that are not deepfake but just AI-driven chatbots.

“There’s a whole bunch of places where the online dialogue, or the set of online voices that you get, are probably being partially or largely generated by AI, and they’re doing that to a purpose.”

Callan believes one answer lies in consumer education – initiatives to make the general public more aware of the threat and of the fact that online content may be AI-generated. But this presents another issue: it becomes harder for people to trust anything online.

Last year, Wasim Khaled of Blackbird.AI claimed there were examples of AI being used for “distributed denial-of-trust” attacks, which he described as attacks designed to drown out “any notion of trust” associated with a particular entity or narrative by “pulling a lot of other things into it”, such as conspiracy theories.

Callan shared a similar concern when it comes to the rise in AI technology, as he said it has reached the point where he is “sceptical of anything I see” online.

“We’re in a dangerous time where nothing that occurs online is reliable, but most people don’t realise that,” Callan said. “That can be the cause of real damage.”


Leigh Mc Gowran is a journalist with Silicon Republic

editorial@siliconrepublic.com