ChatGPT banned on Q&A site over ‘substantially harmful’ answers

6 Dec 2022


The AI chatbot has been used by more than 1m users, but some have complained that it produces biased content or false answers that look accurate.

OpenAI’s ChatGPT chatbot has been temporarily banned from Stack Overflow over concerns of inaccurate content.

Stack Overflow, the Q&A site for programmers, claims the number of correct answers created by ChatGPT is “too low” and that posting answers made by the AI could be “substantially harmful” to the site and its users.

“The primary problem is that while the answers which ChatGPT produces have a high rate of being incorrect, they typically look like they might be good and the answers are very easy to produce,” Stack Overflow said in a blogpost.

“There are also many people trying out ChatGPT to create answers, without the expertise or willingness to verify that the answer is correct prior to posting.”

The coding Q&A site said the temporary ban is intended to slow down the influx of content from ChatGPT. Stack Overflow said the final policy on the use of this chatbot and similar tools will “need to be discussed” with the website’s staff.

What is ChatGPT?

ChatGPT is an experimental chatbot designed to answer questions in a conversational way.

A public demo of the AI was released last week and rapidly gathered attention. OpenAI CEO Sam Altman said on Twitter that the tool surpassed 1m users yesterday (5 December).

Users have been sharing examples of the AI responding to a large variety of questions, from cooking recipe suggestions to coding tips.

However, while many users have praised the experimental AI for its ability to respond to questions, some are posting examples of the chatbot creating biased or racist content.

Other examples show the chatbot giving incorrect answers that read as if they are accurate.

This issue was seen with another AI tool recently. In November, a demo for Meta’s Galactica AI was pulled after only three days for generating inaccurate content. The science-focused model was met with criticism from users who said it produced misleading content that looked real but was essentially “pseudo-science”.

In August, Meta’s BlenderBot 3 – an advanced chatbot designed to learn from its conversations – was also shown to claim that Donald Trump was still the US president and that Meta CEO Mark Zuckerberg was “creepy and manipulative”.

Issues surrounding experimental chatbots have been around for years. In 2016, Microsoft was forced to turn off its chatbot Tay after a mass of internet users bombarded the AI with inappropriate comments in the hope that these would be repeated in future conversations.


Leigh Mc Gowran is a journalist with Silicon Republic