The Stanford AI Index highlights the dramatic changes the sector has undergone in recent years, with more powerful, expensive models and a greater focus from lawmakers worldwide.
AI development has taken a dramatic turn in the past decade, becoming heavily dominated by industry while academia trails far behind, according to a new report on the sector.
The AI Index – an independent initiative at Stanford University – has compiled its annual report on global AI developments.
The detailed report claims that academia produced most significant machine learning models as recently as 2014, but the picture has changed starkly since then.
The report claims that industry produced 32 "significant" machine learning models in 2022, while academia produced only three.
“Producing state-of-the-art AI systems increasingly requires large amounts of data, computing power, and money, resources that industry actors possess in greater amounts compared to non-profits and academia,” the researchers said.
The US leads the way in machine learning systems, producing 16 significant models last year, followed by the UK at eight and China at three. The report's statistics suggest the US has outpaced all other competitors, including the EU, in this field since roughly 2013.
The index also highlighted the significant rise of large language models in recent years, with more real-world examples of these systems being released such as ChatGPT and text-to-image generators.
The report shows that more than half of the researchers behind these models came from US institutions last year, followed by the UK at nearly 22pc of researchers.
However, the report suggests that the US is losing its lead in terms of “AI conference and repository citations”, while China has the most total AI journal, conference and repository publications.
Larger and more expensive AI
As industry focuses more heavily on AI models, the report notes how much some of these models have grown in scale and cost.
The report states that in 2019, GPT-2 had 1.5bn parameters and cost an estimated $50,000 to train. By contrast, Google's large language model PaLM, released last year, had 540bn parameters and cost an estimated $8m, making it 360 times larger and 160 times more expensive to train.
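The scale factors quoted for PaLM follow directly from the figures cited; a quick sketch to verify the arithmetic, using only the numbers in the report:

```python
# Figures cited by the AI Index report for GPT-2 (2019) and PaLM (2022).
gpt2_params, gpt2_cost = 1.5e9, 50_000      # 1.5bn parameters, ~$50,000 to train
palm_params, palm_cost = 540e9, 8_000_000   # 540bn parameters, ~$8m to train

print(f"PaLM is {palm_params / gpt2_params:.0f}x larger")       # 360x
print(f"PaLM cost {palm_cost / gpt2_cost:.0f}x more to train")  # 160x
```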
“It’s not just PaLM: Across the board, large language and multimodal models are becoming larger and pricier,” the report said.
Despite this, the report claims global private investment into AI decreased last year, marking the first decline in a decade.
Global private investment in AI was $91.9bn in 2022, nearly 27pc less than in 2021. Last year also saw a decline in the total number of AI-related funding events and in the number of newly funded AI companies.
Tech companies began a period of cost cutting last year, due to a looming energy crisis and various economic headwinds.
“Still, during the last decade as a whole, AI investment has significantly increased,” the report said. “In 2022 the amount of private investment in AI was 18 times greater than it was in 2013.”
Last month, a report by the International Data Corporation predicted that global spending on AI systems will reach $154bn this year and surpass $300bn by 2026.
Policymakers focus more on AI
As industry leaders focus more heavily on AI, the report also highlights a growing volume of legislation directed at the sector.
In an analysis of 127 countries, the AI Index claims that the number of bills containing “artificial intelligence” that passed into law grew from just one in 2016 to 37 in 2022.
“An analysis of the parliamentary records on AI in 81 countries likewise shows that mentions of AI in global legislative proceedings have increased nearly 6.5 times since 2016,” the report said.
The report also noted a rise in AI-related controversies over the years, with these incidents increasing 26-fold since 2012.
“This growth is evidence of both greater use of AI technologies and awareness of misuse possibilities,” the report said.
The EU is currently in the process of finalising its AI Act, the first-ever legal framework on AI proposed by the European Commission.
This set of proposals is expected to classify different AI applications depending on their risks and implement varying degrees of restrictions.
However, the legislation is reportedly struggling to keep pace with AI advances, with Politico reporting that the rise of ChatGPT has caused delays for the act.
ChatGPT is also facing scrutiny in some countries, such as being banned in Italy by the country’s privacy regulator last week due to alleged privacy violations. The chatbot and its creator, OpenAI, are currently being investigated by a Canadian privacy watchdog for similar concerns.