Kickstarter is forcing creators to disclose if AI was used in their projects, while a screenshot suggests Instagram is working on its own AI content label.
The push to easily spot AI-generated content online is getting a boost, as more companies have shared plans to put informative labels on this content.
The funding platform Kickstarter is adding a new policy that requires users to disclose if their project was created by AI.
The policy requires creators to be “transparent and specific about how they use AI in their projects”. Kickstarter said the policy doesn’t ban the use of AI in these projects, but aims to ensure that any funded project includes “human creative input and properly credits and obtains permission for any artist’s work that it references”.
“In the past year, there’s been a lot of discussion about AI,” Kickstarter said in a statement. “It’s particularly top of mind for creatives – some who are exploring ways that AI can enhance their work and others who are thinking through what the latest advancements mean for credit, consent and their livelihoods.
“As a platform for creators, we have a responsibility to directly engage in this ongoing conversation.”
The new policy will go into effect on 29 August. Kickstarter said any project that fails to properly disclose its use of AI could be suspended.
“Attempts to skirt our guidelines or intentionally misrepresent a project will result in restrictions from submitting a Kickstarter project in the future,” the company said.
Instagram takes note
Meanwhile, a screenshot suggests that Instagram is working on its own label, which will identify if AI has had a role in creating a piece of content.
The screenshot was shared on Twitter on 30 July 2023 by app researcher Alessandro Paluzzi. It shows an example of a post generated by “Meta AI”, along with a description of what generative AI is.
Eduardo Azanza, the CEO of identity platform Veridas, said the move to a more transparent media landscape is “extremely positive”, as AI tools have the potential to “completely erode trust in the news cycle and what the public perceives as true”.
“We’ve already seen a dramatic increase in abuses of deep fake images and videos circulating online,” Azanza said. “As artificial intelligence advances, it will become more and more challenging to distinguish between authentic and artificially generated media.
“Without some sort of label, the public is left to rely on their personal intuition alone and the spread of misinformation becomes easier.”
The potential AI-generated content label follows Meta, Google and OpenAI giving “voluntary commitments” to the US White House last month to ensure the safe and transparent development of AI technologies. These commitments include developing “robust” mechanisms such as watermarking to ensure users know when content is generated by AI.
“This action enables creativity with AI to flourish but reduces the dangers of fraud and deception,” a White House statement read.
In June, the EU urged social media companies to “immediately” start labelling content and images generated by AI in order to curb the spread of disinformation by Russia, The Guardian reported.