OpenAI launches tool that can detect if a text was written by AI

1 Feb 2023

Image: © Akaberka/

While still unreliable for use as a decision-making tool, the latest OpenAI service is a good indicator for longer English texts.

OpenAI has launched a new tool that can detect if a piece of text has been written by a human or AI – including its own ChatGPT chatbot.

In an announcement yesterday (31 January), the artificial intelligence research company said that while it is “impossible” to reliably detect all AI-written text, good tools “can inform mitigations for false claims” that AI-generated text was written by a human.

This could have applications for detecting AI-written automated misinformation campaigns, identifying instances of academic dishonesty in university settings and even exposing AI chatbots posing as humans.

However, OpenAI warned that its latest tool released to the public has a number of important limitations and is not yet fully reliable.

“It should not be used as a primary decision-making tool, but instead as a complement to other methods of determining the source of a piece of text,” the company said.

Based on evaluations of English texts, OpenAI said its tool correctly identified 26pc of AI-written text as “likely AI-written” while incorrectly labelling human-written text as AI-written 9pc of the time.
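To see what those rates imply in practice, here is a hypothetical illustration (not OpenAI's own methodology): applying Bayes' rule to the reported 26pc true-positive rate and 9pc false-positive rate, under an assumed share of AI-written text among submissions.

```python
# Reported rates from OpenAI's evaluation of English texts
TPR = 0.26  # P(flagged "likely AI-written" | text is AI-written)
FPR = 0.09  # P(flagged "likely AI-written" | text is human-written)

def posterior_ai(prior_ai: float) -> float:
    """P(text is AI-written | flagged), for an assumed prior share
    of AI-written text -- the prior here is purely illustrative."""
    p_flagged = TPR * prior_ai + FPR * (1 - prior_ai)
    return TPR * prior_ai / p_flagged

# If, say, half of submitted texts were AI-written, a flag would
# suggest roughly a 74pc chance the text really is AI-written:
print(round(posterior_ai(0.5), 2))  # ~0.74
```

This makes concrete why OpenAI advises against using the tool as a primary decision-maker: even a flagged text carries meaningful odds of being human-written, and the odds worsen when AI-written text is rare.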

Importantly, the tool is also very unreliable for short pieces of text under 1,000 characters.

“Our classifier’s reliability typically improves as the length of the input text increases. Compared to our previously released classifier, this new classifier is significantly more reliable on text from more recent AI systems.”

OpenAI also cautioned that the detection tool should only be used for English texts because it performs “significantly worse” in other languages.

Recognising the impact of ChatGPT in academic circles, where it has been misused for assignments and subsequently banned from use in some universities, OpenAI is now working with educators to get their feedback on the technology.

“We are engaging with educators in the US to learn what they are seeing in their classrooms and to discuss ChatGPT’s capabilities and limitations, and we will continue to broaden our outreach as we learn,” the company said, asking educators for direct feedback.

“These are important conversations to have as part of our mission is to deploy large language models safely, in direct contact with affected communities.”


Vish Gain is a journalist with Silicon Republic