Anthropic turns up the heat in the AI race with Claude 2

12 Jul 2023

Image: Anthropic

Founded by former OpenAI employees, Anthropic said Claude 2 is even less likely to produce harmful outputs than the previous model.

Alphabet-backed AI company Anthropic is taking another shot at OpenAI’s GPT-4 with its latest Claude 2 model.

In an announcement yesterday (12 July), Anthropic said that the latest iteration to its flagship Claude generative AI chatbot is now generally available in the US and UK, with more locations in the pipeline.

The company claims Claude 2 has improved performance, longer responses and can be accessed via API as well as a new public-facing beta website, Claude.ai.

“We have heard from our users that Claude is easy to converse with, clearly explains its thinking, is less likely to produce harmful outputs and has a longer memory. We have made improvements from our previous models on coding, math and reasoning,” Anthropic said.

It gave the examples of standardised tests in which Claude 2’s performance marked an improvement over the first Claude model, such as the Bar exam and the Graduate Record Examinations.

“Think of Claude as a friendly, enthusiastic colleague or personal assistant who can be instructed in natural language to help you with many tasks,” Anthropic went on, adding that the Claude 2 API for businesses is being offered for the same price as the previous Claude 1.3 version.

Claude is pitched as a relatively “harmless” AI system that is capable of a wide variety of conversational and text processing tasks while maintaining “a high degree of reliability and predictability”. Some of these tasks include summarisation, search, creative and collaborative writing, Q&A and coding.

Anthropic was co-founded by former OpenAI employees in 2021 and is based in San Francisco.

Earlier this year, Google parent Alphabet invested $300m in Anthropic for a 10pc stake, according to a Financial Times report. As a result, Anthropic agreed to make Google Cloud its “preferred cloud provider” with the companies “co-develop[ing] AI computing systems.”

Anthropic also said it has tried to make Claude 2 safer for users by making it harder to prompt the chatbot to produce “offensive or dangerous” output.

“We have an internal red-teaming evaluation that scores our models on a large representative set of harmful prompts, using an automated test while we also regularly check the results manually,” the company wrote, claiming Claude 2 is twice as good as the previous version at giving harmless responses.

“Although no model is immune from jailbreaks, we’ve used a variety of safety techniques as well as extensive red-teaming, to improve its outputs.”

10 things you need to know direct to your inbox every weekday. Sign up for the Daily Brief, Silicon Republic’s digest of essential sci-tech news.

Vish Gain is a journalist with Silicon Republic

editorial@siliconrepublic.com