The “Human or not” game is over: Here’s what this latest Turing test teaches us


The game


AI21 Labs ran a social experiment last spring in which more than 2 million participants took part in more than 15 million online conversations. At the end of each conversation, participants had to guess whether their interlocutor was a human or a bot. Almost a third of them guessed wrong (and the French fared quite well in this exercise).


For the “Human or Not” experiment, AI21 Labs drew on Alan Turing’s proposal for assessing whether a machine can display intelligence indistinguishable from that of a human being.


This type of experiment came to be known as the “Turing test,” after the prediction the mathematician made in 1950: “I believe that in about fifty years’ time it will be possible to make computers play the imitation game so well that an average interrogator will not have more than 70 per cent chance of making the right identification after five minutes of questioning.”

The results of the Human or Not experiment confirm Turing’s prediction


The results bear out Turing’s prediction: overall, participants guessed correctly in 68% of cases. When paired with an AI chatbot, they guessed correctly only 60% of the time; when the interlocutor was another human, they were right 73% of the time.


Although it is not a perfect Turing test, AI21 Labs’ Human or Not experiment has shown that AI models can imitate human conversation convincingly enough to fool people. This challenges assumptions about the limits of AI and could have implications for AI ethics.


The experiment also showed that human participants used various strategies to try to spot the AI bots, in particular asking personal questions, bringing up current events, and gauging how polite the answers were.

Bots tricked players by adopting human-like behaviors


Conversely, the study’s authors found that the bots deceived players by adopting human-like behaviors, such as using slang, making typos, and even being rude in their answers.

“We created ‘Human or Not’ with the aim of allowing the general public, researchers and policymakers to better understand the state of AI in early 2023,” said Amos Meron, head of creative products at AI21 Labs at the time of the experiment. One of the goals, he added, was “not to consider AI only as a productivity tool, but as future members of our online world, at a time when people are wondering how AI should be implemented in our future.”

Source: ZDNet.com
