Leike said he will ‘continue the superalignment mission’ with his new team at Anthropic, while a former OpenAI board member spoke about why the board tried to fire Sam Altman last year.
Jan Leike, a former OpenAI executive who co-led the company’s AI safety team, has joined rival company Anthropic to work in a similar role.
Leike was one of the leaders of OpenAI’s superalignment team, which was focused on the safety of future AI systems. But this team was dissolved after Leike and former OpenAI chief scientist Ilya Sutskever resigned from the company. OpenAI said it chose to integrate superalignment across its research efforts.
Shortly after resigning, however, Leike spoke out against the company, saying he had been disagreeing with OpenAI's leadership about the company's core priorities "for some time" and that these issues had reached a "breaking point".
“I believe much more of our bandwidth should be spent getting ready for the next generations of models, on security, monitoring, preparedness, safety, adversarial robustness, (super)alignment, confidentiality, societal impact and related topics,” he said.
In a new post, Leike said he has joined Anthropic to “continue the superalignment mission”. His post also suggests Anthropic is planning to expand this team.
“My new team will work on scalable oversight, weak-to-strong generalisation and automated alignment research,” Leike said.
Yesterday (28 May), OpenAI announced the formation of a safety and security committee to evaluate and develop OpenAI’s processes and safeguards, while the company works on its next AI model.
Why did the OpenAI board go against Altman?
Meanwhile, a former board member of OpenAI – Helen Toner – has shared her version of why the board went against OpenAI CEO Sam Altman last year.
The company went through a period of turmoil when Altman was suddenly fired from his position. The turmoil didn't last long, however, as the internal and external chaos it caused prompted the company to reinstate Altman as CEO with a new board.
On The Ted AI Show podcast, Toner said the board tried to fire Altman because it had lost trust in him. Among the reasons she listed, she claimed the board was not informed about the release of ChatGPT in 2022 and "learned about ChatGPT on Twitter".
She also claimed that Altman gave inaccurate information about OpenAI's safety processes to the board "on multiple occasions".