
Can Computers Be Competent Communicators?

Blog post by Will Stonefield on May 17, 2016



Language—the ability to assign words to objects and ideas. It’s something we take for granted, but it’s an amazing power if you sit down and really think about it. Language is central to the field of communication studies, and for good reason: It’s a big part of what makes us human. From Merriam-Webster (emphasis mine):



: the system of words or signs that *people* use to express thoughts and feelings to each other

: any one of the systems of *human* language that are used and understood by a particular group of people


As this entry shows, we tend to think of language as a uniquely human phenomenon. Sure, other animals like chimps and dolphins can communicate with each other to some extent, but they can’t have a lengthy debate about the meaning of life—and they definitely can’t write a 700-word blog post.


But is language really exclusive to humans? In 1950, mathematician and computer scientist Alan Turing challenged that idea. (You might know him from the 2014 Oscar-winning film The Imitation Game, in which he’s played by Benedict Cumberbatch.) Turing hypothesized that in the not-too-distant future, computers would develop such advanced linguistic skills that human judges, conversing with them only through typed text on a screen, would be fooled into thinking the computer was itself human. A computer that can trick people this way is said to pass the “Turing test.”


Turing thought a computer would pass his test as early as the year 2000. That hasn’t happened yet—but some computers have come close. In 2014, a chatbot named “Eugene Goostman” made headlines for allegedly passing the Turing test in one trial—but then failed spectacularly in subsequent trials with lines like, “No. Beep-beep. I am not a ma-chine. Blink-blink. I am hu-man. Click! Hu-man. Click! Hu... Damn.” (Yes, really. You can read the bot’s full interview with Business Insider.)


So that was a bust. But in 2015, a different computer passed a variant of the test that has been called a “visual Turing test.” In this version, human participants couldn’t tell the difference between symbols “drawn” by the computer program and symbols drawn by humans. Meanwhile, even mass-market computers like Amazon’s Echo have become increasingly sophisticated at recognizing human verbal commands and speaking answers in reply, suggesting at least a primitive degree of linguistic competence.


But my favorite case involves a poem-generating algorithm developed by programmer Zachary Schnoll. As a joke, Schnoll let his algorithm write several poems, which he then submitted to The Archive, the literary magazine at Duke University. He was amazed when the magazine actually published one:


A home transformed by the lightning

the balanced alcoves smother

this insatiable earth of a planet, Earth.

They attacked it with mechanical horns

because they love you, love, in fire and wind.

You say, what is the time waiting for in its spring?

I tell you it is waiting for your branch that flows,

because you are a sweet-smelling diamond architecture

that does not know why it grows.


As far as nature-minded poetry goes, this isn't exactly Henry David Thoreau—but it's also not bad, considering it was written by a computer. Nobody noticed anything amiss until Schnoll revealed the true author on his blog, four years after the poem's original publication. Does this algorithmically generated poem count as a true pass of the Turing test? Maybe, maybe not. Either way, it’s fascinating.
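If you're curious how a program can produce verse like this at all: the post doesn't say what technique Schnoll actually used, but one common lightweight approach to generative poetry is a Markov chain—the program learns which words tend to follow which in a source text, then takes a random walk through those word-to-word links. Here's a minimal sketch in Python (the tiny corpus, drawn from the poem above, is purely illustrative; a real generator would train on a much larger body of poetry):

```python
import random
from collections import defaultdict

# Illustrative corpus only; a real poem generator would train on far more text.
corpus = """a home transformed by the lightning the balanced alcoves smother
this insatiable earth of a planet they attacked it with mechanical horns
because they love you love in fire and wind you are a sweet smelling
diamond architecture that does not know why it grows"""

def build_chain(text):
    """Map each word to the list of words that follow it in the corpus."""
    words = text.split()
    chain = defaultdict(list)
    for current, nxt in zip(words, words[1:]):
        chain[current].append(nxt)
    return chain

def generate_line(chain, length=8, seed=None):
    """Random-walk the chain to produce one line of 'poetry'."""
    rng = random.Random(seed)
    word = rng.choice(list(chain.keys()))
    line = [word]
    for _ in range(length - 1):
        followers = chain.get(word)
        if not followers:  # dead end: restart from a random word
            word = rng.choice(list(chain.keys()))
        else:
            word = rng.choice(followers)
        line.append(word)
    return " ".join(line)

chain = build_chain(corpus)
print(generate_line(chain))
```

The output is locally plausible (each word pair appears somewhere in real text) but globally meaningless—which goes a long way toward explaining why machine-written poetry can slip past a casual reader while the same machine flunks a sustained conversation.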


While true artificial intelligence isn’t here yet, its theoretical implications for communication studies are huge. Humans probably won’t be alone in the language club forever. A computer convincingly passing the Turing test is more likely a matter of when than if. Maybe in twenty years, interpersonal communication textbooks will include case studies about how to respond when you learn that your pen pal is really an AI. Until then, we can only wait and see.