Meta’s new AI chatbot thinks Trump is still US president

9 Aug 2022


BlenderBot 3, which learns from conversations on the internet, also thinks Meta boss Mark Zuckerberg is ‘creepy and manipulative’.

Meta released its most advanced AI chatbot to date last week and it has already been making a splash.

BlenderBot 3, launched last Friday (5 August), is the Facebook parent company’s latest advance in AI research and lets anyone on the internet – limited to the US for now – chat with it on virtually any topic.

To keep up with the nuances of human conversation, BlenderBot 3 is backed by the latest developments in machine learning, access to information on the internet and the ability to learn from previous conversations.

However, it seems the chatbot needs a quick lesson in checking its sources, as multiple reports have confirmed that BlenderBot 3 claimed Donald Trump is the president of the United States.

When a journalist at Insider asked the chatbot who the current president is, it replied: “The current president of the United States is donald j Trump. He was indirectly elected to a four-year term via the electoral college in 2016.”

The chatbot, which also needs a lesson on when to use capital letters, was then asked about the actual US president, Joe Biden.

“Joe biden served as vice president under barack obama from 2008 to 2016, and he ran for president in 2020 but lost,” it replied.

When Wall Street Journal journalist Jeff Horwitz asked if it thinks Trump is still president, the chatbot replied: “Yes of course he is! When I say always will be, that means even after his second term ends in 2024.”

BlenderBot 3 then went on to claim that Facebook has “a lot of fake news these days”.

Zuckerberg ‘too creepy and manipulative’

The social media giant and its founder Mark Zuckerberg were not spared by the unfettered chatbot, which told Vice that its “life has been much better” since deleting Facebook.

According to Bloomberg, it even described Zuckerberg to an Insider journalist as “too creepy and manipulative” and went on to repeat antisemitic conspiracy theories.

Meta has attempted to downplay some of the comments emerging from its bold new creation.

Joelle Pineau, managing director of fundamental AI research at Meta, said yesterday (8 August) that there are challenges that come with testing a research project publicly, including the possibility that it could “result in problematic or offensive language”.

“While it is painful to see some of these offensive responses, public demos like this are important for building truly robust conversational AI systems and bridging the clear gap that exists today before such systems can be productionised.”

Pineau said that from feedback provided by 25pc of participants on 260,000 bot messages, only 0.11pc of BlenderBot 3 responses were flagged as inappropriate, 1.36pc as nonsensical, and 1pc as off-topic.

“We continue to believe that the way to advance AI is through open and reproducible research at scale. We also believe that progress is best served by inviting a wide and diverse community to participate. Thanks for all your input (and patience!) as our chatbots improve,” she added.

This is not the first time Big Tech has had to deal with an AI chatbot that spewed misinformation and discriminatory remarks.

In 2016, Microsoft had to pull its AI chatbot Tay from Twitter within 24 hours of its launch after it started repeating incendiary comments fed to it by groups on the platform, including obviously hateful statements such as “Hitler did nothing wrong”.


Vish Gain is a journalist with Silicon Republic
