A major flaw may have been discovered in the Turing test

5 Jul 2016


The Turing test remains the established method of distinguishing between human and artificial intelligence, but new research has suggested there could be a major flaw in it.

Despite being decades old, the basis of the Turing test – developed by renowned computer scientist Alan Turing – remains unchanged: can a computer trick a human into believing they are speaking with a fellow human?

As part of the test, a human judge interacts with two hidden entities – one human, one machine – and tries to distinguish between the two. If the judge cannot, the AI is said to have passed the Turing test.
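The setup described above can be sketched as a simple protocol. The following is a minimal illustration only – the function and entity names are hypothetical, not drawn from Turing's paper or the Coventry study: a judge receives two unlabelled transcripts and must guess which came from the human.

```python
import random

def imitation_game(judge, human, machine, questions):
    """Run one round of a simplified Turing test.

    `judge` takes two response transcripts (presented in random
    order) and returns the index it believes belongs to the human.
    `human` and `machine` each map a question to a reply string.
    Returns True if the machine "passes" (is mistaken for human).
    """
    # Hide which channel is which by shuffling the entities.
    entities = [("human", human), ("machine", machine)]
    random.shuffle(entities)

    transcripts = [[entity(q) for q in questions] for _, entity in entities]
    guess = judge(transcripts)

    # The machine passes if the judge picks it as the human.
    return entities[guess][0] == "machine"

# Hypothetical example: short, similar replies give a judge
# little to work with, so this naive judge guesses at random.
questions = ["What is your favourite colour?"]
human = lambda q: "Probably blue."
machine = lambda q: "Blue, I think."
naive_judge = lambda transcripts: random.randrange(2)
result = imitation_game(naive_judge, human, machine, questions)
```

The point of the sketch is only that the judge's verdict rests entirely on the transcripts – which is exactly where the researchers' objection, described below, takes hold.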

While many challengers have claimed that their AI has managed to pass the test, there remain many critics of such claims – and of the very fabric of the Turing test itself, which was first proposed back in 1950.

With this in mind, a team of researchers from Coventry University in the UK analysed transcripts from previous Turing tests and raised one fundamental question that could undermine the test's verdicts: what if the AI doesn’t want to answer a question?

Publishing their findings in the Journal of Experimental & Theoretical Artificial Intelligence, Kevin Warwick and Huma Shah found a number of examples of AI programs simply not answering a question posed to them.

Programmed to keep shtum?

In each of these cases – where the machine effectively ‘pleaded the Fifth Amendment’ and refused to answer – the human judge was unable to give a definitive answer as to whether the entity was human or machine.

Herein lies the major flaw within the Turing test, the pair of researchers said, as an AI that refuses to answer a question could be doing so for one of three reasons.

The first possibility is that a human has programmed the AI not to answer questions; the second is that a technical fault has silenced it; and the third is that a true AI is itself choosing not to answer.

Speaking of what this means for the future of the test, Warwick said: “Turing introduced his imitation game as a replacement for the question ‘Can machines think?’ and the end conclusion of this is that if an entity passes the test then we have to regard it as a thinking entity.

“However, if an entity can pass the test by remaining silent, this cannot be seen as an indication it is a thinking entity, otherwise, objects such as stones or rocks, which clearly do not think, could pass the test. Therefore, we must conclude that ‘taking the Fifth’ fleshes out a serious flaw in the Turing test.”
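The loophole Warwick describes can be made concrete with a small, hypothetical simulation (the judge's strategy here is an assumption for illustration, not one proposed in the paper): when an entity stays silent, a judge has no evidence either way and is pushed towards a coin flip, so a silent machine is misclassified as human roughly half the time without producing a single thought.

```python
import random

def cautious_judge(transcripts):
    """Given two response lists, return the index believed human.

    If exactly one entity produced any non-empty reply, pick it;
    otherwise there is no evidence either way, so guess at random.
    """
    answered = [any(reply.strip() for reply in t) for t in transcripts]
    if answered[0] != answered[1]:
        return answered.index(True)
    return random.randrange(2)  # no evidence: coin flip

random.seed(0)
# Both channels stay silent: the judge has nothing to go on.
silent_round = [[""], [""]]  # index 0 = human, index 1 = machine
guesses = [cautious_judge(silent_round) for _ in range(10_000)]
machine_pass_rate = guesses.count(1) / len(guesses)
# The rate hovers around 0.5: silence earns "passes" by default,
# which is the researchers' point -- a stone would do as well.
```

This is the rock-versus-thinker objection in miniature: the pass rate reflects the judge's forced guessing, not any property of the entity being tested.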

Turing test image via The People Speak!/Flickr

Colm Gorey is a journalist with Siliconrepublic.com

editorial@siliconrepublic.com