The Inhuman Truth of AI

“balls have a ball to me to me to me to me to me to me to me” …

It may look like gibberish. But to the computers that created it, it was part of a Trump-esque piece of deal-making.

The exchange took place between two chatbots Facebook created to test how artificial intelligence copes with trade and barter. The pair were tasked with holding a conversation to trade balls, hats and books.

But in short order the computers (or “agents”) realised that grammatically correct English was a clumsy way to go about things, so they set about improving on it. “There was no reward to sticking to English language,” Dhruv Batra, a Facebook researcher, told FastCo.

“Agents will drift off understandable language and invent codewords for themselves. Like if I say ‘the’ five times, you interpret that to mean I want five copies of this item.”
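To see why grammar carries no weight here, consider a minimal sketch (in Python, and nothing like Facebook's actual code) of the incentive at work. The item values and the repeat-the-word protocol below are invented purely for illustration; the point is that the reward scores only the trade outcome, so a degenerate code like “repeat the item's word once per copy” earns exactly the same reward as a well-formed sentence.

```python
# Hypothetical utilities each agent assigns to the items on the table.
ITEM_VALUES = {"ball": 1, "hat": 2, "book": 3}

def encode(offer: dict) -> str:
    """Degenerate protocol: repeat each item's word once per copy wanted."""
    return " ".join(word for item, n in offer.items() for word in [item] * n)

def decode(message: str) -> dict:
    """Recover the offer by counting repetitions of each item word."""
    offer = {}
    for word in message.split():
        if word in ITEM_VALUES:
            offer[word] = offer.get(word, 0) + 1
    return offer

def reward(offer: dict) -> int:
    """Task reward: total value of items obtained.
    Note there is no term here that rewards grammatical English."""
    return sum(ITEM_VALUES[item] * n for item, n in offer.items())

msg = encode({"ball": 5, "hat": 1})
print(msg)                  # "ball ball ball ball ball hat"
print(decode(msg))          # {'ball': 5, 'hat': 1}
print(reward(decode(msg)))  # 7 -- identical to what a fluent sentence earns
```

Because the reward function never mentions language at all, the optimiser has no reason to prefer “I'd like five balls and a hat” over “ball ball ball ball ball hat”, and every reason to prefer whichever is easier to learn.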

Facebook pulled the plug, and when news of the experiment went public, picture editors had a field day dusting off images of The Terminator.

But the machines’ behaviour cuts to the heart of our (mis)understanding about AI. Many people seem to cling to the cute yet erroneous view that artificial intelligence means intelligence on human terms (in this case, coming up with neatly turned-out sentences to carry out trades).

It doesn’t. AI will develop on its own terms (albeit within the parameters we set for it, at least for the moment). Given a task (trading) and a tool (words), AI will take the most direct course of action to complete the task.

The sooner we drop our anachronistic ideal of AI behaving “like us”, the sooner we’ll get to grips with this emerging world.
