OpenAI’s GPT-4 is now available to paying developers

7 Jul 2023


News of GPT-4 going public comes a day after OpenAI said it is assembling a team to prevent ‘superintelligent’ AI systems from going rogue and harming humans.

GPT-4, the latest AI model developed by ChatGPT maker OpenAI, is being made publicly available for all existing paying developers.

The San Francisco-based company made the announcement yesterday (6 July), saying that access has begun rolling out so developers can use the model to build new use cases for the generative AI technology.

“Millions of developers have requested access to the GPT-4 API since March, and the range of innovative products leveraging GPT-4 is growing every day,” OpenAI wrote in its announcement.

“Today, all existing API developers with a history of successful payments can access the GPT-4 API with 8K context. We plan to open up access to new developers by the end of this month, and then start raising rate limits after that depending on compute availability.”

The company also said it is making the APIs for GPT-3.5 Turbo, DALL-E and Whisper, its open-source speech-recognition system, generally available to developers.
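For developers on the paid tier, using GPT-4 amounts to pointing an existing chat-completions request at the GPT-4 model name. The snippet below is a minimal sketch using the openai Python package as it existed around the time of the announcement; the prompt, temperature setting and environment-variable handling are illustrative assumptions rather than anything specified by OpenAI, and the SDK interface may differ in later versions.

```python
# Minimal sketch of a GPT-4 API call with the openai Python package
# (interface as of mid-2023; later SDK versions expose a different client).
import os
import openai

# Assumes an API key is available in the environment (illustrative choice).
openai.api_key = os.environ["OPENAI_API_KEY"]

response = openai.ChatCompletion.create(
    model="gpt-4",  # the 8K-context model referenced in the announcement
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Summarise the GPT-4 API announcement in one sentence."},
    ],
    temperature=0.7,  # arbitrary example value
)

print(response["choices"][0]["message"]["content"])
```

The same request shape works for GPT-3.5 Turbo by swapping the model name, while DALL-E and Whisper use their own image and audio endpoints.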

GPT-4 was first revealed in March, when OpenAI claimed it was the company’s most reliable AI system to date. The large language model can accept both text and image inputs and is able to “solve difficult problems with greater accuracy”.

“We spent six months making GPT-4 safer and more aligned,” OpenAI said at the time. “GPT-4 is 82pc less likely to respond to requests for disallowed content and 40pc more likely to produce factual responses than GPT-3.5 on our internal evaluations.”

Soon after its release, Microsoft revealed that its AI-boosted Bing was running on a customised version of GPT-4. Other early adopters of GPT-4 included Stripe, Duolingo and Intercom.

News of GPT-4’s general availability comes just a day after OpenAI said it is forming a new team to develop ways to control “superintelligent” AI systems and keep them in check.

To be co-led by Ilya Sutskever, the company’s chief scientist and one of its co-founders, the team will be tasked with preventing such powerful AI systems – which OpenAI believes could arrive within the decade – from going “rogue” and acting against the interests of humanity.

“Currently, we don’t have a solution for steering or controlling a potentially superintelligent AI, and preventing it from going rogue,” Sutskever and Jan Leike wrote in a blogpost on Wednesday (5 July). Leike is head of alignment at OpenAI and will co-lead the team with Sutskever.

“Our current techniques for aligning AI, such as reinforcement learning from human feedback, rely on humans’ ability to supervise AI. But humans won’t be able to reliably supervise AI systems much smarter than us.”


Vish Gain is a journalist with Silicon Republic

editorial@siliconrepublic.com