
LINGUA EX MACHINA
Revisiting the Turing Test in 2020

by Ong Kar Jin


For decades, the Turing Test has been the gold standard for artificial intelligence: the measure of “the thinking machine”. A human participant would enter a text conversation with several subjects, some of them human, some of them machine, and would guess which was which.

Mathematician Alan Turing proposed the test in his 1950 paper Computing Machinery and Intelligence, two years before he was convicted of “gross indecency” (for his homosexuality) and four years before he was found dead from cyanide poisoning, a half-eaten apple by his side. In the paper, he eschewed the question of whether machines could in fact “think”, dismissing it as “absurd” and “ambiguous”. Instead, he openly called his test an “imitation game”, one in which the machine’s goal would simply be to pass the discerning eye of a human interrogator.

Indeed, the Turing Test is not so much a test of “intelligence” as of human-ness, or of what we consider intelligent in our human way. Put flippantly, our grand measure of machine intelligence is whether it can pull off a cocktail party conversation. Seen through this lens, the Loebner Prize, awarded to the chatbot most successful at deceiving participants in a Turing Test, could be regarded as less the Nobel Prize and more the Oscars or Golden Globes of computer science: the best actor wins. Some of the most successful programs have relied on interesting if contrived strategies: ELIZA personified a psychotherapist and simply reflected questions back at participants (“How do you feel?” “I’m sorry to hear that, tell me more.”), while Eugene Goostman imitated a 13-year-old Ukrainian boy, which allowed its grammatical errors and awkward phrasing to be partly attributed to language and cultural barriers. The secret to winning is not so much to be bona fide intelligent as to be humanly so: typos, sympathy, awkwardness, and all.
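ELIZA’s trick is simple enough to sketch. The Python below is a minimal reconstruction assuming nothing beyond regular-expression matching and pronoun reflection; the patterns and canned responses are illustrative, not Joseph Weizenbaum’s original script.

```python
import re

# Pronoun swaps so "I am sad" reflects back as "you are sad".
REFLECTIONS = {"i": "you", "me": "you", "my": "your", "am": "are",
               "you": "I", "your": "my"}

# A tiny rule set in the spirit of ELIZA's DOCTOR script (illustrative only).
RULES = [
    (re.compile(r"i feel (.*)", re.I), "Why do you feel {0}?"),
    (re.compile(r"i am (.*)", re.I), "How long have you been {0}?"),
    (re.compile(r"my (.*)", re.I), "Tell me more about your {0}."),
    (re.compile(r"(.*)\?", re.I), "Why do you ask that?"),
]
FALLBACK = "I see. Please go on."

def reflect(fragment: str) -> str:
    """Swap first- and second-person words in the matched fragment."""
    return " ".join(REFLECTIONS.get(w.lower(), w) for w in fragment.split())

def respond(utterance: str) -> str:
    """Apply the first rule whose pattern matches, reflecting the capture."""
    for pattern, template in RULES:
        match = pattern.search(utterance)
        if match:
            return template.format(*(reflect(g) for g in match.groups()))
    return FALLBACK

if __name__ == "__main__":
    print(respond("I feel lost in this conversation"))
    # -> Why do you feel lost in this conversation?
```

The whole “psychotherapist” lives in a handful of templates; any sense of being understood is supplied by the human participant.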

Core to this proposed AI paradigm is that it casts the human participant as the judge of human capability: the participant is in fact an exemplar, a “Turing machine” of sorts in a hypothetical machine society (an assertion that is itself open to debate).

We are obsessed with this imitation game. Humans think in metaphors, in stories, in scenarios. We test Siri or Alexa by trying to have conversations with them, and we are enamoured with the possibility of one day falling in love with our very own assistant like Samantha from Spike Jonze’s Her. We sensationalise fears of killer robots while cooing over the cute robot vacuum tripping over wires in our living room like a puppy. We cannot escape the mirror of ourselves in seeking to anthropomorphise the machines around us.

Of course, this is not to belittle the efforts of these actors. A conversation could be a good measure of general intelligence: it requires a degree of flexibility of thought, the ability to discuss diverse topics, an all-too-elusive common sense, and an understanding of the amazingly complex and layered nuances of language. The ability to mimic human-ness is no small feat; as with CGI human faces, crossing the uncanny valley will be a giant milestone for artificial intelligence. The question left unanswered by all these quests for mimicry is: at what point will a machine actually be indistinguishable from a human? And what would that actually mean?

Yet, as the flaws and arbitrariness of the Turing Test have been recognised, the standards for “human intelligence” have changed. The Loebner Prize has shifted from recruiting “average persons” to hiring journalists, scientists, and psychologists as judges. Many have derided the test altogether, such as MIT AI Lab co-founder Marvin Minsky, who in 1995 hilariously offered the “Annual Minsky Loebner Prize Revocation Prize” to anyone who could stop the competition. In 2005, moving away from acts of imitation towards feats of performance, John McCarthy, one of the coiners of the term “artificial intelligence”, declared in The Future of AI: A Manifesto that “the long-term goal of AI is human-level AI”, encompassing feats such as “playing master-level go” and “learning science from the internet”. Part of this idea of “human-level AI” lies in the assumption that these activities, when conducted by a human, require more than brute-force logic: they require the ability to learn, and a certain degree of intuition.

In 2017, Google’s AlphaGo beat the world’s number one ranked go player, Ke Jie, three to zero, after racking up 60 straight wins against professionals. AI systems have since trained their sights on the notoriously complex video games Dota 2 and StarCraft II. “Human intelligence” is once again redefined.

In the wake of all the buzz and shock from the impressive performance of AlphaGo, I can’t help but think of this excerpt from programmer Ellen Ullman’s excellent memoir Close to the Machine:

“We think we are creating the system for our own purposes. We believe we are making it in our own image. We call the microprocessor the ‘brain’; we say the machine has ‘memory.’ But the computer is not really like us. It is a projection of a very slim part of ourselves: that portion devoted to logic, order, rule, and clarity. It is as if we took the game of chess and declared it the highest order of human existence.”

In short, we have become what we would have thought impossible: slaves of the thought process. The joke of this whole thing is that the ‘machine’ has taken control of the human being. We are part of the machine; we are what makes the machine work. The real problem is not how machines work. It is how we want them to work.

Perhaps it is time, then, to ask another question: to reconfigure the Turing Test. Instead of asking the observer to guess human or machine, could we ask them: What do you find interesting? What do you enjoy about this interaction? What kind of personality are you interacting with?

In doing so, maybe we can open up a different kind of inquiry, and with it, new possibilities of living with non-human intelligence. Can we appreciate the different kinds of production from code? Can we cease to treat human-ness as the telos of artificial intelligence?

So, how about a species that is literally composed of machine code? If a pattern of fractals can generate fractal time, then we might expect that a pattern of algorithms that generate code that generates code could generate code that generates code. What would it look like if a pattern of code could generate code?

In other words, can we let go of the imitation game?

Some attempts at approximating human behaviour have already morphed into something else. NaNoGenMo (National Novel Generation Month), a riff on the better-known NaNoWriMo (National Novel Writing Month), challenges participants to generate novels of 50,000 words or more from huge datasets using various text-generation techniques, including OpenAI’s GPT-2 language model. There are similar initiatives such as NaPoGenMo (the poetry equivalent) and Talk To Transformer, where anyone can generate new text by keying in a custom prompt. AI-generated art and music are already in use. Many publications and news agencies, such as Thomson Reuters, Forbes, the Associated Press, the Los Angeles Times, Bloomberg, and the Washington Post, have been using robot reporters for routine financial stories since as early as 2014.
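To give a sense of how low the barrier to this kind of generation now is, here is a minimal sketch of the prompt-driven generation that Talk To Transformer exposes, assuming the Hugging Face transformers library and the publicly released “gpt2” checkpoint; the prompt itself is invented for illustration.

```python
# A minimal sketch of prompt-driven GPT-2 generation, in the spirit of
# Talk To Transformer. Assumes `pip install transformers` and the public
# "gpt2" checkpoint; the prompt is a made-up example.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

prompt = "The machine wrote its first poem about"
# Sample a continuation; do_sample=True makes each run non-deterministic.
outputs = generator(prompt, max_length=60, num_return_sequences=1,
                    do_sample=True)
print(outputs[0]["generated_text"])
```

Each run samples a different continuation from the same prompt, which is precisely what makes month-long projects like NaNoGenMo feasible at scale.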

An entry from NaPoGenMo 2017, for which the coder input various acclaimed poems as the basis, seeded the generation with a single letter, removed “stray words”, and then edited the output for grammar:

Here, however, he had happened; here he had held her hidden hands.
It is seen in its present strange way, in its considerable joy,
in every picture just before and just in June between the joining season,
which they judge to kill the stars,
as we know that the knowledge knows a kind of evening, of keen keep in kind.

There is a certain beauty to the words, even if they are not quite fluid; they read like a work of translation. Should this matter? Is the coder or the code the poet? What would either answer mean for the act of writing, or for the nature of poetry itself? Is this a kind of translation from binary code?
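The generation step behind an entry like this can be approximated with a simple word-level Markov chain, sketched below in Python. The seed corpus here is invented, and the single-letter seeding, “stray word” removal, and manual grammar editing described above are not reproduced.

```python
import random
from collections import defaultdict

def build_chain(corpus: str) -> dict:
    """Map each word to the list of words that follow it in the corpus."""
    words = corpus.split()
    chain = defaultdict(list)
    for current, nxt in zip(words, words[1:]):
        chain[current].append(nxt)
    return chain

def generate(chain: dict, seed: str, length: int = 20) -> str:
    """Walk the chain from a seed word, sampling a successor at each step."""
    word, line = seed, [seed]
    for _ in range(length - 1):
        successors = chain.get(word)
        if not successors:
            break  # dead end: the word never appeared mid-corpus
        word = random.choice(successors)
        line.append(word)
    return " ".join(line)

# Hypothetical seed text; a real run would ingest whole poems.
corpus = "here he held her hidden hands here he had happened here"
chain = build_chain(corpus)
print(generate(chain, seed="here"))
```

The output is statistically plausible but semantically unmoored, which is much of why such poems read like works of translation.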

Is that which cannot be typed automatically art? The questions that these poems raise are endlessly fascinating. I found myself, however, thinking most deeply about the strange ways in which poems shaped their authors. To what degree can we attribute their creation to automated learning algorithms and yet still consider them "natural" poems?

Or, for that matter, can we consider this very essay “natural”? Parts of this essay were generated via algorithm, with some paragraphs prompting the generation of others. Which parts of this essay would pass the Turing Test, and which would not?

Does that matter to the reader, who is presumably already persuaded of the case for intelligent design? By the same token, parts of this essay were generated with specific words in mind, and these words caused certain ideas to arise in my mind. Is that in fact natural?

Instead of policing the border between human and machine, a new Turing Test perhaps ought to turn towards the possibilities of the new forms produced: embracing the poetry within the machine.