Glyn Heath

Transforming the Turing Test

Why we must rethink the education and assessment of AI

Advancements in AI are moving at such a remarkable speed that it’s hard for those outside, or even within, the industry to keep up. Traditional benchmarks, such as the Turing Test, have already been surpassed, forcing a rethink of how we assess AI. While it’s undoubtedly an exciting time, developers must be wary of alienating the public, which could have consequences ranging from minimal to dire.


AI versus human

As artificial intelligence continues to dominate news cycles and column inches, the public’s awareness of AI is certainly increasing. But is their understanding? A recent Public First report found that just 5% of those surveyed said they were not familiar with artificial intelligence, yet only 55% said they would be able to explain what it is.


Clearly, a disparity is growing between people who are familiar with AI and those who grasp its capabilities. The same people may also be unaware of the many forms artificial intelligence takes; given ChatGPT’s notoriety, you’d be forgiven for thinking generative AI is the limit of the field.


Where understanding is lacking, criminality can thrive, and developers are soon playing catch-up. Utilising AI to test and monitor AI – an approach also touted to play a role in upcoming regulation – may hold the key.


Surpassing the Turing Test


Introduced in 1950 by Alan Turing, the Turing Test evaluates whether a computer’s responses in conversation are indistinguishable from those of a human. Since then, it has functioned as the gauge for new advancements; only now, it’s becoming obsolete due to the sophistication of large language models.


In a recently published study – the largest-ever Turing-style test – more than 1.5 million users completed over 10 million chat sessions. Participants correctly guessed they were interacting with AI in only 60% of conversations, a rate “not much higher than chance” according to the researchers.


Evidently, AI is forcing us to reassess our understanding of pretty much everything, and processes such as the Turing Test, as insightful as it has been for over half a century, must now be updated for the modern day.


What takes its place remains to be seen. Any test we construct in a similar vein is likely to be surpassed quickly once again. Instead of introducing frequent stopgaps, the solution appears to be empowering AI to oversee and inspect itself. Whatever the solution, the fact that the Turing Test is now being comfortably exceeded by some programs means that their human-like ‘qualities’ are becoming all the more convincing. And that’s potentially bad news for the general public.


Better education


As the capabilities of AI accelerate, the gulf in comprehension will only widen if left unchecked. The majority of the general public rely on what the media depicts – and what is headline-worthy is not always the most useful. Prophecies of doom often guarantee clicks, and there is a tendency to focus predominantly on generative AI – an interesting story, but one that barely scratches the surface.


In truth, AI takes many forms and performs countless functions, which means it will play an increasing role in everyday life for years to come. Even at this relatively early stage, a discrepancy is emerging that will only grow worse over time. But clear and concise education – which lays out exactly how AI works alongside its strengths, weaknesses and, most importantly, its potential for exploitation – can tackle that widening gap.


Not understanding the potential of artificial intelligence leaves the public more susceptible to falling victim to crime. Armed with knowledge, they can better protect themselves from exploitation. But what are some of the tangible solutions?


AI and fraud: Tools and teaching


Alongside wider education, tools must be provided that enable the general public to detect fraudulent AI. Scams involving deepfakes – the AI-driven manipulation of facial appearance and voices – are on the rise. Detection could follow the banking sector’s lead: customers already receive automated phone calls asking them to authorise transactions via a code, protecting them from fraud.


In a similar vein, potential victims could ask a series of questions to reveal whether the caller is harmless – or operated by fraudsters. Empowering people to report these encounters will also prove essential; only then will developers be able to identify threats and make progress on counteracting them.


With collaboration characterising much of the development of AI up to now, it’s important that industry leaders now work with the general public to help them protect themselves. Only then will we see the knowledge gap bridged, and society armed with the means to fight fraudsters.

Image attribution: Free Stock photos by Vecteezy
