There was a lot in the news last week about a program that supposedly passed the Turing test. I’m not so sure about that, for several reasons.
First of all, I don’t think that Turing ever proposed a test as such; rather, he made a statement that it was foreseeable that one day a machine might be mistaken for a human three times out of ten, and others cobbled together a set of rules for such a test. Along the way those rules have been refined and seem to have lost sight of the original spirit of the thing.
I haven’t seen the full transcripts, but the little bits I have seen are very unconvincing. I’ll admit that making it simulate a Ukrainian 13-year-old is interesting, but did the test use real 13-year-old Ukrainian boys as the other samples? I would like to think they would make more sense than this program did.
The thing is, the whole testing procedure is very artificial. All the judges know they are conducting a test, which will affect their judgement. When the tests were first devised, the only way for a computer to converse with a person was very artificial, and it required the real subjects to do something equally artificial – to sit in a room typing stuff.
We have now reached the point where real people routinely do conduct conversations in that way, so why not use that?
Here is my proposal. Set up a Twitter account for an AI and see if anybody notices. Maybe even plug it into the APIs for Facebook and some BBS-type forums.
If you then persist in giving your AI the identity of a 13-year-old, you have a ready-made criterion for success: if somebody tries to groom your AI, then they were fooled.
The danger is that even an ELIZA-level pseudo-AI would make more sense than quite a few Twitter accounts.