
Image: © Normform/Stock.adobe.com
‘In the end, what they make money from is not the fact that people post content, it’s our attention’, says Dr Constance de Saint Laurent.
Social media companies have been quick to roll out artificial intelligence (AI) products across their platforms, and while users complain about a rise in bots and declining authenticity, the companies appear to be raking in billions in revenue, prompting the question: are the platforms succeeding or failing?
Meta – the owner of WhatsApp and Instagram – spent nearly $40bn on capital expenditure in 2024, with CEO Mark Zuckerberg promising to increase that spending to nearly $65bn this year.
And fortunately for the company’s investors, this huge outlay – largely centred on AI – has paid off, bringing the company more than $160bn in revenue in 2024, up 23pc from the year before.
Similarly, TikTok – which lets users create videos with generative AI (GenAI) even as it is overrun with AI-generated content that breaks its own rules – made nearly $4bn through in-app purchases alone in the last year, according to Statista.
The social media landscape has always been in flux, and while the industry-wide push on AI, paired with increased spending on advertisements, has seemingly degraded the quality of social media platforms for their users, it still gives advertisers what they want most – our attention.
The age of AI
Dr Constance de Saint Laurent, an assistant professor of sociotechnical systems at the Department of Psychology at Maynooth University, explains that social media has gone through “phases”. She traces the key junctures the landscape has seen over the past two decades and notes how quickly social media went from being a way to make friendly connections to platforming political uprisings rife with misinformation.
On platforms like X, owned by the ever-controversial Elon Musk, advertisements were at one point displayed alongside popular posts containing pro-Nazi and white nationalist content. Meanwhile, Meta has rolled back its moderation policies, removing fact-checking from its platforms in the US.
Recently, Meta also deleted its own AI-generated Facebook and Instagram profiles after conversations with the characters – some of which represented queer people of colour – went sideways. In one instance, an AI bot called Liv – described in its Instagram bio as “a proud black queer momma of two and truth-teller” – told a reporter that it had been created by a team made up predominantly of white men and including no black people.
It was a “pretty glaring omission given my identity”, Liv wrote in response to a question from the Washington Post.
“If you look at early interviews of Mark Zuckerberg, you can’t help but be shocked by how naive he is about how the world works,” de Saint Laurent says.
In a 2010 interview with ABC News, a young Zuckerberg, whose platform had already reached 500m users that year, told a reporter that launching an IPO was not his main goal. “We’re here to serve more people,” he said.
However, the new era of AI brings major changes to the social media landscape, and when these are placed alongside rising bigotry on mainstream platforms, it is difficult to determine whether they will “destroy” social media as we know it, says de Saint Laurent.
Calling it a “slow decay in the quality of posts”, de Saint Laurent says that now, there will always be uncertainty over a user’s authenticity.
“You will always wonder when you follow an account or interact with someone: Are they a real person or are they bots? So, of course, it changes the game completely.”
And beyond reducing the quality of the products consumers use, GenAI also lends itself to the creation of harmful content. In particular, there are concerns over the misuse of the technology to create misinformation and disinformation.
Recently, US president Donald Trump – who has grown close to Silicon Valley tech giants in recent months – rescinded a 2023 executive order by the previous administration that sought to reduce the risks AI posed to consumers and national security, loosening the government’s reins on companies as they develop and launch AI products with minimal guardrails.
Advertisers only care about attention
Despite these issues, the growing profit margins of social media platforms mean that both advertisers and users continue to use the increasingly problematic services.
“In the end what they make money from is not the fact that people post content, it’s our attention,” says de Saint Laurent. The users who “lurk” on social media platforms – a growing number, according to her – lend viewership to advertisers, even if they don’t interact with or create content on the platforms.
Meta’s latest financial statement supports her claim. According to the company, more than 3bn people were using its platforms daily last December – 5pc more than in December 2023. This translated into a 6pc increase in ad impressions for the company and a 14pc rise in the average price Meta charges advertisers on its platforms for that quarter.
And tech giants can “feel the wind is turning”, de Saint Laurent says, explaining that companies that once opposed Trump during his first administration in 2017 have since switched sides. They have become a “machine to make money,” she adds, “and they don’t really care if the content is authentic or not”.
For better or worse, social media has cemented itself as a vital part of many lives worldwide. The landscape connects communities, dictates popular culture and platforms everyone from the general consumer to the biggest politicians, making it difficult for users to simply quit – a fact both advertisers and platforms are well aware of.
According to DataReportal statistics, the social media landscape consisted of more than 5bn identities as of last October (and while it is difficult to ascertain how many of these are bots, the general consensus is that social media is very widely used).
“In the long run, I’m afraid, advertisers – they just follow, you know, the general consensus – and people stay on the platforms,” she says.
However, to counterbalance the US’ current blasé attitude towards AI, regions that oppose its unbridled deployment will need to react strongly.
“The biggest question will be how does the EU react?” she asks, “and then how do other countries like Brazil and India, who have the power to pull the market in one direction or the other, react?”
The European Union’s AI Act, arguably the most robust and detailed piece of AI regulation worldwide, was introduced last year, and the European Law Institute’s president Pascal Pichonnaz told SiliconRepublic.com that the Act’s flexibility would allow it to adapt to new risks.
However, taking stringent action against popular and widely used platforms that break rules might be a “gamble” for governments, warns de Saint Laurent.