OpenAI, Google and others form safety body for ‘frontier’ AI

27 Jul 2023

Sam Altman in 2019. Image: Steve Jennings/Getty Images for TechCrunch (CC BY 2.0)

Open to any organisation building advanced AI systems, the Frontier Model Forum will promote responsible research and development.

Four of the world’s biggest names in artificial intelligence have come together to form an industry body that will aim to ensure the safe and responsible development of AI models that have the potential to pose a significant risk to humanity.

Announced yesterday (26 July), the Frontier Model Forum is an initiative by OpenAI, Google, Microsoft and Anthropic to self-regulate the emerging area of tech while they wait for government regulation to take shape.

The forum takes its name from the term ‘frontier AI’, which refers to advanced AI and machine learning models that can cause significant harm to humans if left unchecked.

In a joint statement, the four tech companies said the forum – which is open to any organisation building frontier AI – will identify best practices for the development of new and existing models while also collaborating with policymakers, academics and civil society.

OpenAI vice-president of global affairs Anna Makanju said that the industry body is “well-positioned to act quickly to advance the state of AI safety”.

“Advanced AI technologies have the potential to profoundly benefit society and the ability to achieve this potential requires oversight and governance. It is vital that AI companies – especially those working on the most powerful models – align on common ground and advance thoughtful and adaptable safety practices to ensure powerful AI tools have the broadest benefit possible.”

The announcement comes just days after all four companies, joined by Amazon, Meta and Inflection, made voluntary commitments to the White House – not unlike the objectives of the new forum – to ensure the safe and transparent development of AI technologies.

One of the major commitments is to test the safety of their AI products both internally and externally before releasing them to the public, while remaining transparent about their processes.

The companies also committed to developing “robust” mechanisms, such as watermarking, to ensure users know when content is generated by AI. They will prioritise research on the societal risks of AI, including avoiding harmful bias and discrimination and protecting privacy.

“The track record of AI shows the insidiousness and prevalence of these dangers, and the companies commit to rolling out AI that mitigates them,” a White House statement about the companies’ commitment reads.

Last month, US lawmaker Chuck Schumer launched his plan for Congress to regulate the technology without hindering innovation in the space. “The first issue we must tackle is encouraging, not stifling, innovation,” he said at the time. “But if people don’t think innovation can be done safely, that will stifle AI’s development and even prevent us from moving forward.”



Vish Gain is a journalist with Silicon Republic

editorial@siliconrepublic.com