Google is taking aim at AI-generated political ads

7 Sep 2023


The tech giant wants political advertisers to clearly disclose when they use ‘synthetic content’ about real people or events.

As AI technology becomes more prominent, Google is making a push to prevent this tech from creating misleading political content.

The company plans to make political advertisers clearly disclose if they have used AI in their adverts. This change will require advertisers to reveal if their ads contain “synthetic content that inauthentically depicts real or realistic-looking people or events”.

In a blog post, Google said the disclosure must be “clear and conspicuous” and will apply to image, video and audio content. The update to Google’s political content policy is planned for mid-November this year.

The tech giant shared examples of when an ad will require a disclosure, such as if an ad makes it appear that “a person is saying or doing something they didn’t say or do”, alters footage of a real event or depicts realistic-looking scenes that did not take place.

AI-generated ads have already been used for political purposes. For example, the US Republican Party shared an advert in April criticising US President Joe Biden, which contained multiple AI-generated images of riots and warfare.

There has been a push this year to make AI-generated content more transparent, following its surge in popularity. In July, the US White House secured “voluntary commitments” from leading tech companies including Google, Meta and OpenAI to develop “robust” mechanisms such as watermarking to ensure users know when content is generated by AI.

California takes a look at AI risks

Meanwhile, the governor of the US state of California has signed an executive order to examine the benefits and risks associated with generative AI.

The order contains directives for state agencies and departments to look at the potential risks this technology presents to individuals, communities and government staff. The risks to be investigated include those related to cybersecurity, unintended impacts, legal processes, public safety and the economy.

One goal of this new order is to develop a “joint risk analysis” by March 2024 to help develop recommendations for the safe use of generative AI.


Leigh Mc Gowran is a journalist with Silicon Republic