
Coded Humanity

Kester Brewin

He cracked the Enigma code, designed the first computers, and invented a test to sort man from machine - before being criminalised for his homosexuality. One hundred years after Alan Turing's birth, Kester Brewin believes the father of artificial intelligence ultimately taught us to be more human.

 

The 23rd of June seems a rather unremarkable date. But if you use a computer or a mobile phone - even if you have occasionally considered hurling it out of the window in frustration - you may want to pause a moment to honour the birthday of the man who made these complex digital machines a possibility in the first place.

Born in 1912, Alan Turing developed an early expertise that saved countless thousands - perhaps millions - of lives, before laying the foundations for what can only be described as a new enlightenment. Though kept secret at the time, his work cracking the German Enigma code was pivotal in turning the Second World War in favour of the Allies. He made key breakthroughs in the design of the 'bombes' - the machines built to perform the complex calculations needed to decipher the coded messages that were being intercepted.

After the war, he worked at the National Physical Laboratory, and then in the computing department at Manchester University, where he created the designs for some of the very first computers that could store and run programs. Before the war he had worked on the theory of computable numbers, and in a ground-breaking paper in 1936 showed that a hypothetical device - now called a Turing machine - could be postulated that would be capable of performing any conceivable mathematical computation.
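
To make the idea concrete, here is a minimal sketch of such a machine in Python. It is an illustration only - a finite table of rules driving a read/write head over a tape - and the binary-increment rules are invented for the example, not taken from Turing's paper.

    # A minimal sketch of a Turing machine: a finite rule table drives a
    # head left and right over an unbounded tape. This particular table
    # adds one to a binary number written on the tape.

    def run_turing_machine(tape, rules, state="start"):
        """Run a rule table over a tape (dict of position -> symbol)."""
        head = 0
        while state != "halt":
            symbol = tape.get(head, "_")          # '_' marks a blank cell
            state, write, move = rules[(state, symbol)]
            tape[head] = write
            head += 1 if move == "R" else -1
        return tape

    rules = {
        ("start", "0"): ("start", "0", "R"),   # scan right over the digits
        ("start", "1"): ("start", "1", "R"),
        ("start", "_"): ("carry", "_", "L"),   # past the end: start carrying
        ("carry", "1"): ("carry", "0", "L"),   # 1 + carry = 0, carry onwards
        ("carry", "0"): ("halt",  "1", "L"),   # 0 + carry = 1, done
        ("carry", "_"): ("halt",  "1", "L"),   # overflow into a new digit
    }

    tape = {i: d for i, d in enumerate("1011")}   # eleven, in binary
    result = run_turing_machine(tape, rules)
    print("".join(result[i] for i in sorted(result) if result[i] != "_"))  # 1100

Everything a modern computer does is, in principle, just a vastly larger table of such rules.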

It is hard to imagine the significance of this result in a world where computers now do almost anything (except turn on quickly or print things properly when we need them yesterday). Yet in 1936 a 'computer' was still a person whose job it was to compute, to work a machine which performed calculations. If machines needed reprogramming back then, one turned not to complex coding, but to a screwdriver. To think that a machine could be developed which could do more than basic arithmetic, which could store information, and recall it, and display text, and handle symbolic algebra ... all of this was in the realm of science fiction.

ARTIFICIAL INTELLIGENCE
Turing was just 24 when his extraordinary paper ('On Computable Numbers, with an Application to the Entscheidungsproblem'[1]) was published. His youth had not yet calloused into a vision of what was only probable. He was able to see far beyond the materials and components of his own time and, in his later work, imagine machines that could do virtually anything. Such a machine, which could read, write, understand and respond - playfully or seriously - would be almost human. Who would be able to distinguish it from a real person? And if nobody could, on what grounds could anyone say that such a machine did not, in fact, deserve the same status and rights as a person?

Turing thus opened up the field of artificial intelligence, along the way proposing what has become known as the Turing test - a baseline by which we might define a machine, a computer, as genuinely 'intelligent.' Turing's view was that a computer could be said to be thinking if a human who interrogated it could not tell it apart, through typed conversation, from a fellow human being.

The problem of creating such a machine can be approached in two ways. The first is to program a computer to have the same knowledge, understanding, nuance and thought patterns as an adult human. This can be seen as a 'top down' approach, whereby every possible question that could be asked of it is given an appropriate response: the question 'Was Hamlet really mad?' could be answered with reference to the relevant facts and opinions, all of which would need to be pre-stored. This top down model quickly ran into the sand. The difficulties of getting a machine to answer questions in a human way, in conversation with a fellow human, were enormous.

While not working directly on a machine to pass the Turing test, one academic at MIT in the 1960s made significant progress towards developing a machine that at least appeared intelligent. Joseph Weizenbaum realised that many statements contain within themselves an appropriate follow-up question which can lead the conversation on. Hence, 'I'm having a bad day' is very easily turned round to 'Why are you having a bad day?' This pattern of turning statements back into questions is the basis of non-directive Rogerian therapy, whereby the therapist mirrors what the client is saying and, empathetically, turns it back to them as a question. 'I'm feeling depressed' is thus turned back to the client: 'I'm sorry you're feeling depressed. Tell me more. Why are you feeling depressed?'
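
A minimal sketch in Python shows how little machinery this statement-to-question trick needs. The patterns and word swaps below are invented for illustration - this is not Weizenbaum's code - but they reproduce the two exchanges above.

    import re

    # Swap first-person words for second-person ones ('my day' -> 'your day').
    REFLECTIONS = {"i": "you", "my": "your", "am": "are", "me": "you"}

    # Each rule pairs a pattern with a question template that mirrors it back.
    RULES = [
        (re.compile(r"i'?m having (.*)", re.I),
         "Why are you having {0}?"),
        (re.compile(r"i'?m (?:feeling )?(.*)", re.I),
         "I'm sorry you're feeling {0}. Tell me more. Why are you feeling {0}?"),
        (re.compile(r"(.*)", re.I), "Tell me more."),   # fallback
    ]

    def reflect(fragment):
        return " ".join(REFLECTIONS.get(w.lower(), w) for w in fragment.split())

    def respond(statement):
        for pattern, template in RULES:
            match = pattern.match(statement.strip().rstrip("."))
            if match:
                return template.format(*(reflect(g) for g in match.groups()))

    print(respond("I'm having a bad day"))   # Why are you having a bad day?
    print(respond("I'm feeling depressed"))  # I'm sorry you're feeling depressed. ...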

ALONE WITH ELIZA
With just a hundred or so lines of code Weizenbaum developed what was essentially the first 'chat-bot', Eliza. 'Clients' could (and still can, as Eliza is available online[2]) sit at a terminal and 'talk' to Eliza, who would respond and lead the conversation on, using a very crude form of natural language processing. Weizenbaum was initially tickled by the way Eliza worked. His amusement quickly turned to concern, however, when he found that people within his team were asking others to leave the room so they could be left alone with 'her.' Weizenbaum was shocked to realise that, even though people knew they were talking to a 'dumb' program, they were still opening up the deepest parts of their lives. He once came into his office to find his secretary pouring out her secrets to the machine; she asked him to give her and the machine some privacy.

Although Eliza was not a machine that attempted to pass the Turing test, it broke new ground in highlighting the complex relationship humans can have with machines. The fact that Eliza was obviously no more than a piece of code didn't matter. People suspended their rational understanding that a machine could not know them, love them, care about them or give them any advice, because the desire to open up and talk about deep issues was so great. Talking to a machine that pretended to be a human was in fact easier and safer than talking to a human. This greatly concerned Weizenbaum, who withdrew his support for Eliza, which was being touted as a huge step forward in cheap and easily accessible therapeutic practice.

DATING A MACHINE
Weizenbaum refused to believe that a pseudo-human machine was any replacement for genuine human interaction. In other words, we might be able to create a machine that could pass the Turing test, but would we want to have such a machine? In a recent episode of the US podcast Radiolab, the technology writer Robert Epstein described joining a dating site and connecting with a woman in Russia.[3] They began exchanging emails over a number of months, and he freely admits that he fell in love with her.

Having some family in Russia, he proposed in one email that he come over and visit. Her response didn't address the question of his visit at all, nor would she respond to his requests to speak on the phone. Eventually, after he inserted some nonsensical information into his emails to her - 'Yesterday I went to Disdnsdciomtycx...' - it dawned on him that there was no real woman at the end of the line at all, just a more sophisticated Eliza, programmed to respond in a mechanically loving way: 'My love, how was your trip to Disdnsdciomtycx?' If we can no longer trust that the people we are corresponding with are actually people at all, how does that affect the way we trust anybody?

Turing looked into the future and saw machines that could be said to be intelligent. What he perhaps overlooked was the way that we might respond to a world of intelligent machines, and how we might become more guarded, more careful - and yet, paradoxically, more foolishly open too.

SMARTPHONE STUPIDITY
This year's story of Liam Stacey, who was jailed for posting racist tweets about Fabrice Muamba after the footballer collapsed with a cardiac arrest during a match, is a case in point. Interacting with a machine - his mobile phone - seems to have given voice to racist feelings that Stacey normally repressed. What he forgot was that Twitter is a social network: a human-machine-human network, in which people are not interacting with a dumb piece of software, but with an application that mediates responses between real people. In this case, interacting with a machine - no matter how 'smart' Stacey's phone was - appears to have made him more stupid.

What Turing realised, in a paper on how a machine might pass his own test, was that rather than pre-programming a machine to know everything and respond intelligently, its creators should let it start as a human does: unintelligent, but with a vast potential to learn. Humans do not learn to talk and engage in conversation by going to lessons on speech; they learn by doing: copying, experimenting and being corrected.

CLEVERBOT LEARNING
This is the idea behind one of the projects that has passed a low-level version of the Turing test: Cleverbot. Having created it in 1988, Rollo Carpenter and his team spent some ten years engaging it in conversation. What Cleverbot does is draw on the archive of conversations it has had previously, and come up with responses that match certain keywords in the phrase it is being asked to respond to. So, having programmed it, the first thing Carpenter did was type 'Hello,' and Cleverbot, being dumb at this point, could only say 'hello' back. But then, once he typed again - 'How are you?' - Cleverbot had two choices: 'hello' again, or 'how are you.' Thus, over time, it was able to 'learn' how best to respond. Having reached a certain level of 'fluency,' Cleverbot was made available online[4], and has, to date, engaged in around 65 million 'conversations,' learning and honing its conversational skills each time.
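
To make the mechanism concrete, here is a minimal sketch of this kind of archive-and-match learning in Python. It is an illustration only, not Carpenter's actual algorithm: the keyword-overlap scoring and the seed conversations are invented for the example.

    import random
    from collections import defaultdict

    archive = defaultdict(list)   # prompt keywords -> replies humans gave

    def keywords(phrase):
        return frozenset(phrase.lower().strip("?!. ").split())

    def learn(prompt, reply):
        """Record what a human said in reply to a prompt."""
        archive[keywords(prompt)].append(reply)

    def respond(phrase):
        """Borrow a stored reply whose prompt shares the most keywords."""
        best = max(archive, key=lambda k: len(k & keywords(phrase)), default=None)
        if best is None or not (best & keywords(phrase)):
            return "Hello."                   # the dumb starting point
        return random.choice(archive[best])

    # Each conversation teaches it a little more:
    learn("Hello", "How are you?")
    learn("How are you?", "Fine, thanks. And you?")
    print(respond("Hello"))          # How are you?
    print(respond("How are you?"))   # Fine, thanks. And you?

The more conversations flow into the archive, the more plausible the borrowed replies become - which is why ten years of chat, and then millions of online conversations, mattered more than the code itself.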

Cleverbot was subjected to a version of the Turing test in 2011, and of the thousand or so people who observed the conversations (held over five minutes, according to the rules of the original test) and voted on whether they were human-human or human-machine, 59 per cent judged Cleverbot to be human.

This represents a 'pass' of sorts - Cleverbot can be held, under these conditions, to be intelligent. But here is the interesting statistic: the same voters judged the real humans to be human only 63 per cent of the time. In other words, humans were still able to be, on average, slightly 'more human' than the machines.

PRIZING HUMANITY
Cleverbot has also entered the Loebner Prize, an annual event for machines wishing to be judged on the Turing test, the winner being the machine that convinces the most people that it is human. Early winning machines used quite simple bluffing techniques - adding typographical errors to responses, which hoodwinked the human interlocutors. The humans would not be fooled so easily in later years, though, and so the algorithms had to change in response.

Yet the competition has also produced a parallel effect: the actual humans who participate as respondents are themselves keen to 'win' - to convince the most people that they are human. So a prize has also been created for 'the most human human': the person who is most frequently judged to have been a human rather than a machine. The event has thus become a complex ecosystem of bluff and counter-bluff, with humans responding like robots, or saying 'I don't have enough RAM to answer that question,' in order to convince their interlocutors that they are not machines, because machines try too hard not to sound like machines. And, correspondingly, machines have begun to act more machine-like in order to seem more human. The 2008 winner, Elbot, used precisely this double-bluff strategy.

STORIES NOT ANSWERS
So what were the characteristics of the 'most human human'? Brian Christian, who participated as a human in the 2009 Loebner Prize, was awarded 'most human human' and wrote a book on the subject.[5] He has spoken about how the experience forced him to reflect on just how complex genuine human conversation is. Taken outside the clinical conditions of the Turing test, with its curtains and keyboards and timed interactions, conversation relies hugely on body language, tone, absurd interjection, humour, gesture, interruption, irony - and ignorance. (This is perhaps why, one year at the Loebner Prize, a human Shakespeare expert was judged to definitely be a machine: the voters thought no one could possibly know that much about the plays.)

But, more importantly, human conversations almost always involve human stories. Picasso once said, 'Computers are useless. They can only give you answers.' When asked what the weather is like, the answer is straightforward, as the advertisement for the 'Siri' assistant on the iPhone 4S shows: it's going to be sunny in Paris this weekend. But for the human, the words 'Paris' and 'sunshine' do more than activate associations between other phrases. They stir emotions and memories of warmth, of heartache, of smells and tastes, aches and longings. And it is this interweaving of stories that Christian saw in genuine human conversation, which flowed and responded - and then jumped off in an entirely new direction, triggered by some strange new association.

BEYOND DATA
This human ability to decode information that has been twisted and presented in new and surprising ways is what coders now exploit in a reverse Turing test, used widely on websites to distinguish between humans and bots. The 'Captcha' test uses text that has been distressed and distorted into an image that the human brain finds easy to decode, but which a machine finds very difficult.
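
Here is a minimal sketch of this kind of distortion in Python, assuming the Pillow imaging library is installed; the tile sizes and rotation range are arbitrary choices for illustration, not any particular Captcha scheme.

    import random
    from PIL import Image, ImageDraw, ImageFilter

    # Draw each character on its own transparent tile, randomly rotate and
    # offset it, then blur the whole image: easy for a human eye to decode,
    # awkward for a naive text-recognition program.

    def make_captcha(text, filename="captcha.png"):
        canvas = Image.new("RGB", (40 * len(text) + 20, 70), "white")
        for i, char in enumerate(text):
            tile = Image.new("RGBA", (40, 50), (0, 0, 0, 0))
            ImageDraw.Draw(tile).text((12, 15), char, fill="black")
            tile = tile.rotate(random.randint(-35, 35), expand=True)
            canvas.paste(tile, (10 + 40 * i, random.randint(0, 15)), tile)
        canvas.filter(ImageFilter.GaussianBlur(0.8)).save(filename)

    make_captcha("TURING")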

Machines, for all their nuance and artificial intelligence, still love purity: what is presented to them needs to be clean. And this is the final irony of Turing's legacy: in his universal machine everything can be encoded digitally as zeroes and ones. Everything can be represented, in other words, as data. Yet to truly pass the Turing test, a machine based on these universal principles needs something beyond the sterility of binary code. Truth and wisdom are not about content, but about context. Who we are is not about what we know; it is about what we do with what we know, and how we came to know it.

Yet even this does not encapsulate 'us.' Beyond information and context, beyond stories and histories, there is what Slavoj Žižek calls an 'abyss of being' within each one of us. No matter how many pieces of data you might read about me, no matter how many stories you may hear or how many measurements you take, you will never get to the heart of the 'me-ness' that only I know.

But at the heart of what we believe there is more, even beyond this, for faith teaches us that there is another who knows, and accepts, this abyss of our being too. We are deeply and wonderfully made, and every deep and rich part of us is known and accepted. This is what a machine, no matter how successfully it dupes a human into thinking it too is human, will never do: as much as we can love an iPad, it can never love us, and it is in the reciprocity of love that life exists.

ENIGMA OF TURING
Herein lies the tragedy of the way Turing was treated by those he served so brilliantly: though able to break the German Enigma code, he was part of a society that could not accept the enigma that was Turing himself. His love for another man was incomputable by the society that hosted him, and, machine-like, it sent him for chemical castration as a 'cure' for his homosexuality. It was that, or face prison.

This mechanistic approach to the human condition - the idea that the deepest things, such as sexual orientation, can be adjusted and tweaked like the circuits of a computer - is perhaps an ironic by-product of Turing's own thought. Following his treatment, he descended into a profound depression. The female hormones administered meant that he grew breasts, and he felt his whole mind and body being dulled. He was also sure that his scientific legacy would forever be tainted by his sexuality. Feeling unable to be himself, he decided to take his own life. In his suicide, performed by eating a poisoned apple, we might see his own horror at the unfeeling and mechanically reactive society he had helped create.

Yet his legacy is profound and hugely positive. All digital technologies owe some debt to the ground-breaking work that he did. As one of the principal creators of this world that we enjoy - whether we tweet or use the internet, or simply enjoy the convenience of a word processor - Turing should be honoured for the complex and very human person that he was.

This, then, is the new 'Turing test' that faces each of us: as people of faith, who believe in a world beyond the material and beyond the digital, we should lend our voices to those calling for him to be pardoned for his 'crime' of homosexuality,[6] and work to strengthen and celebrate interactions between humans and humans, not humans and machines.

NOTES
1. http://en.wikipedia.org/wiki/Entscheidungsproblem
2. E.g. http://nlp-addiction.com/eliza/
3. http://www.radiolab.org/2011/may/31/clever-bots/
4. http://cleverbot.com/
5. Christian, B., The Most Human Human (Viking, 2011); see also www.brchristian.com
6. http://epetitions.direct.gov.uk/petitions/23526