What does the year ahead look like for artificial intelligence?

11 Jan 2024

Image: © Andrew Ink/Stock.adobe.com

From generative AI going mainstream to increased regulatory scrutiny, here are a few trends that experts think lie in store for the fast-growing technology in 2024.

If the world of tech had its own Spotify Wrapped, artificial intelligence would undoubtedly be the top playlist for 2023. The emerging technology took the world by storm last year, not least because of the warp speed at which generative AI is advancing.

Just yesterday, we reported on how global chip sales have seen a resurgence after a prolonged lull caused in part by a trade war. This growth is driven largely by the AI gold rush that is changing the face of everything from HR tech to automation.

Its effects will be felt by individual employees and enterprises alike. So, as we near the end of the first two weeks of 2024, we look at what experts in the industry have to say about the tunes that tech’s top playlist has in store for us.

Generative AI becomes pervasive

As artificial intelligence becomes more advanced, it will also become democratised, reaching more people across a wider range of locations and economic strata.

According to Andy Patel, a researcher at cybersecurity company WithSecure, open-source AI will continue to improve and be taken into “widespread” use.

“These models herald a democratisation of AI, shifting power away from a few closed companies and into the hands of humankind. A great deal of research and innovation will happen in that space in 2024,” said Patel.

“And while I don’t expect adherents in either camp of the safety debate to switch sides, the number of high-profile open-source proponents will likely grow.”

Rodrigo Liang, CEO of Silicon Valley-based SambaNova Systems, thinks that this year marks a significant shift from the “task-based use” of generative AI to what he calls pervasive AI, with companies deploying the tech across all business functions, employees and workstreams.

“The key to getting there will lie in the development of platforms that address companies’ security, privacy and widespread adoption challenges, and models that increase value year over year, ultimately making AI a major asset for company-wide transformations rather than a helpful tool to improve isolated, rote tasks.”

With democratisation comes disinformation

Like any technology, the pervasiveness of AI will bring with it a Pandora’s box of problems, starting with an increased level of disinformation.

Patel said that one of the most critical uses of AI to create disinformation will come in the months leading up to elections in 2024. It will take many forms: synthetic text, audio and even video content.

“Disinformation is going to be incredibly effective now that social networks have scaled back or completely removed their moderation and verification efforts. Social media will become even more of a cesspool of AI and human-created garbage,” he said.

According to Patel, the cybercriminal ecosystem has become “compartmentalised” into multiple service offerings such as access brokers, malware writers and spam campaign services that use generative models to create social media content, synthetic images and deepfakes.

“On the disinformation front, there are many companies that pose as PR or marketing outfits, but who provide disinformation and influence operations as services. Where relevant, cybercriminals and other bad actors will turn to AI for the sake of efficiency,” he explained.

“The creation of such content requires expertise in prompt engineering – knowing which inputs generate the most convincing outputs. Will prompt engineering become a service offering? Perhaps.”

Naturally, regulators rush to keep up

AI has already been top of the agenda for regulators on both sides of the Atlantic, the most notable manifestation of which was the landmark AI Act passed by the EU last June.

Chris Dimitriadis, chief global strategy officer at international IT governance association ISACA, said that as more and more businesses look to implement AI in their processes and services, governments are looking at options to follow in the EU’s footsteps and rein in the high-risk tech.

“Big tech companies in their droves started testing generative AI this year. And consumers and employees alike have benefitted from the emerging technology. Now that the world has been privy to the benefits of generative AI, we can expect businesses beyond the tech space to adopt it to innovate, optimise costs and boost productivity,” he explained.

“As AI becomes mainstream, governments will look to follow in the EU’s footsteps and create their own comprehensive laws around AI. That translates to additional responsibility for businesses to comply with new legislation to avoid regulatory breaches and ensure they’re making the most of what AI has to offer.”

According to Ian van Reenen, chief technology officer at digital employee experience company 1E, it will be imperative to take a deeper look at the technology organisations already have in place to determine if it is “truly scalable” with AI.

“Before diving full in, we must ask how our businesses can specifically benefit from investing in AI and if it will provide the right outcomes,” he said.


Vish Gain is a journalist with Silicon Republic