Until recently, mentioning Alan Turing outside computer science circles would rarely have generated any recognition. In recent months, however, the British technology pioneer has been brought into the public eye by the release of the film “The Imitation Game.”
The film itself combines Turing’s personal biography with an account of his pivotal work cracking the German Enigma cipher machine during World War II. Cracking Enigma allowed the Allies to decrypt the coded messages coordinating German military activity. This work was not only critical in shortening the war and saving millions of lives, but also a huge step forward in the field of computer science: Turing designed an electromechanical codebreaking machine – the “Bombe” – to decipher Enigma.
Turing’s brilliance was not confined to his cryptography; it was also demonstrated in how he used the decrypted messages. Turing was a mathematician and logician, and using statistics he could determine which messages to act upon for the greatest effect on the war without alerting the Axis forces to the leak in their communications. This was critical to the Allied war effort, and demonstrated how statistics could be used effectively in conjunction with computing technology.
Ironically, perhaps, the Bombe was not what we now call a Turing machine, since it was not a general-purpose device. Turing developed a hypothetical model of a general-purpose computing device, now referred to as a Turing machine. These machines remain among the most widely used theoretical models of computation, and are a key foundation of modern computer science.
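As an illustration (not from the original article), the essence of a Turing machine – a tape of symbols, a read/write head, and a table of state transitions – can be sketched in a few lines of Python. The simulator and the example transition table below are hypothetical teaching examples, not any historical machine:

```python
# A minimal Turing machine simulator: a tape, a head position, a current
# state, and a transition table mapping (state, symbol) -> (write, move, next).
def run_turing_machine(tape, transitions, state="start", blank="_", max_steps=1000):
    tape = list(tape)
    head = 0
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = tape[head] if head < len(tape) else blank
        write, move, state = transitions[(state, symbol)]
        if head < len(tape):
            tape[head] = write
        else:
            tape.append(write)
        head += 1 if move == "R" else -1
    return "".join(tape).strip(blank)

# Example machine: flip every bit, then halt on reaching the blank at the end.
flip_bits = {
    ("start", "0"): ("1", "R", "start"),
    ("start", "1"): ("0", "R", "start"),
    ("start", "_"): ("_", "R", "halt"),
}

print(run_turing_machine("10110", flip_bits))  # -> 01001
```

The point of the model is that this tiny loop – read a symbol, consult a table, write, move, change state – is, in principle, enough to compute anything any modern computer can.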
Although Turing himself passed away almost 61 years ago, his legacy remains significant in modern artificial intelligence (AI) research. For example, the Imitation Game itself (also known as the Turing Test) is still the best-known litmus test for true artificial intelligence. The test itself is rather simple. The standard interpretation involves three players – a Judge, a Computer and a Human. The Judge does not know which is the computer and which the person, but may ask questions to try to determine this. The aim of the test is to demonstrate a machine’s ability to exhibit human-equivalent intelligent behavior. The test is not without detractors, but it is unarguably highly influential in the field of artificial intelligence. And as AI in applications and video games becomes increasingly sophisticated, the prospect of AIs that cannot be distinguished from real humans becomes ever more likely.
Recently, a software program passed the Turing Test at the Royal Society in London. Passing required fooling at least 30% of the judges into thinking the program was human; the program, posing as a 13-year-old boy named Eugene Goostman, succeeded by convincing 33% of the judges. The chatbot was created by Vladimir Veselov and Eugene Demchenko, and the event itself took place on the 60th anniversary of Turing’s death.
The Turing test has also manifested itself more indirectly in everyday web applications. Although most people wouldn’t know exactly what a CAPTCHA is, anyone who spends a little time on the internet will run into one. A CAPTCHA – a Completely Automated Public Turing test to tell Computers and Humans Apart – usually presents a user with one or more distorted words or pictures and asks them to type in what they see. Its purpose is to stop automated programs from entering secure parts of websites. In other words, a CAPTCHA is a type of Turing test in which the judge itself is also a computer program.
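To make the challenge/response pattern concrete, here is a hypothetical sketch in Python of the round a CAPTCHA server performs: generate a random token, remember the expected answer, and compare what the user types back. The function names are illustrative, and the image-distortion step that defeats machine readers is omitted:

```python
import random
import string

def make_challenge(length=6):
    # Generate a random token; a real CAPTCHA would render this
    # as a distorted image rather than returning it as plain text.
    return "".join(random.choices(string.ascii_uppercase, k=length))

def check_response(expected, user_input):
    # Case-insensitive comparison, since many CAPTCHAs accept either case.
    return user_input.strip().upper() == expected.upper()

challenge = make_challenge()
print(check_response(challenge, challenge.lower()))  # -> True (correct answer)
print(check_response(challenge, "nonsense"))         # -> False (wrong answer)
```

The security of the scheme rests entirely on the distortion step: the comparison itself is trivial, but only a human (in principle) can read the distorted image well enough to produce the matching response.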
Alan Turing was a key figure in the birth of computer science and artificial intelligence. Thanks to “The Imitation Game,” he will hopefully get the broader recognition he deserves, and perhaps help attract a new generation to technology and the practical benefits of computer science.