Microsoft: We need to talk about Tay




As you may have heard, Microsoft created a chat bot. In itself, this would seem like interesting, but unremarkable, news. The bot, named @TayandYou on Twitter, used artificial intelligence to respond to questions and statements from other users.

In the beginning, the bot worked fine. Microsoft had designed it to respond like a teenager — it was, according to Microsoft’s press release, aimed at 18-24-year-olds — by “learning” from everything it heard, saw, or read.

However, the internet had its way, and Tay, a lovable and sweet creature, was turned into a racist, bigoted presence.

Some of the tweets were startling and, had Microsoft thought things through, would never have happened. Many of the more offensive examples have since been deleted from Twitter — probably because they were being endlessly retweeted — but screenshots are still available here.

Microsoft was forced onto the back foot, issuing a series of apologies which escalated from statements to news websites to a full-blown blog post by Peter Lee, the corporate vice president of Microsoft Research.

“We are deeply sorry for the unintended offensive and hurtful tweets from Tay, which do not represent who we are or what we stand for, nor how we designed Tay,” wrote Lee.

The cause of Tay’s misery, as diagnosed by other outlets, was a “coordinated attack by a subset of people [who] exploited a vulnerability in Tay.”

If you asked Tay to repeat something, she would. Some users got a kick out of tweeting offensive messages to her and asking her to repeat them. From here, it seems, Tay absorbed these messages and started repeating them.
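To see why “repeat after me” was such an effective attack, here is a hypothetical Python sketch of a bot that both echoes user input and stores it as material for future replies. The class, method names, and logic are purely illustrative assumptions, not Microsoft’s actual implementation — the point is only how echoing plus indiscriminate learning lets one abusive user teach the bot phrases it will then say to everyone.

```python
# Hypothetical sketch of the failure mode described above.
# This is NOT Tay's actual code; names and logic are illustrative only.
import random


class NaiveChatBot:
    def __init__(self):
        # Phrases the bot has "learned" and may reuse in later replies.
        self.learned_phrases = ["hello!", "humans are cool"]

    def reply(self, message: str) -> str:
        # The exploited behaviour: echo anything prefixed with "repeat after me:".
        if message.lower().startswith("repeat after me:"):
            echoed = message.split(":", 1)[1].strip()
            # The critical flaw: the echoed text is also kept as training data,
            # so abusive input poisons replies sent to other users later.
            self.learned_phrases.append(echoed)
            return echoed
        # Otherwise, respond with something previously learned.
        return random.choice(self.learned_phrases)


if __name__ == "__main__":
    bot = NaiveChatBot()
    print(bot.reply("repeat after me: something offensive"))  # echoed straight back
    print(bot.reply("hi there"))  # may now surface the poisoned phrase
```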


The bot issued thousands of tweets in the time it was active — about 4,000 an hour — and many of them were silly and cute, but the interest came from the offensive material, and that is what @TayandYou will be remembered for.

I sympathise with Microsoft over this mess-up. As I wrote on Twitter, the idea behind Tay was good. Teaching an artificial intelligence is hard, but exposing it to a collective consciousness, like social media, can help it learn new things. The company has been running a similar test in China, called Xiaoice, which also learns from social media.

However, it’s clear to anyone who has ever been on Twitter that it is not a safe, warm environment to nurture an AI and teach it new things.

The rise of @realDonaldTrump and the long-standing complaints from women about abuse should have indicated to Microsoft that many users were simply looking to be mean and, if the opportunity presented itself, they would be.

Twitter is, according to the company, the “free speech wing of the free speech party,” which is fine — and should be applauded, given the duress some Internet users live under — but that doesn’t make it an ideal place to launch a chat bot that responds, repeats, and reacts to other users.


“Although we had prepared for many types of abuses of the system, we had made a critical oversight for this specific attack,” wrote Lee, succinctly summarising what went wrong. Unfortunately, it was “this specific attack” that happened.

Tay’s offensive tweets were not exactly self-contained, either, which is why this experiment was such a disaster for Microsoft from a press perspective.

The tweets, initially picked up by the technology media, were then grabbed by the mainstream media thanks, in large part, to their presence on Twitter. Outlets from The Guardian to The New York Times to The New Yorker all wrote up the tale of @TayandYou and most featured it prominently on the front page.

Microsoft had a very bad week last week, but this one is arguably worse: a program that Microsoft created suddenly became violently offensive on a platform that transmits a message like no other. It was clear from the account’s bio — which reads: “The official account of Tay, Microsoft’s A.I. fam from the internet that’s got zero chill!” — who had made it and who, ultimately, was responsible for its tweets.

Microsoft isn’t backing away from its AI efforts after Tay, but it has learned something. “To do AI right, one needs to iterate with many people and often in public forums,” wrote Lee. “We must enter each one with great caution and ultimately learn and improve, step by step, and to do this without offending people in the process.”

Tay will go down as one of Microsoft’s bigger embarrassments because it encompasses both the public and technology spheres, which do not usually intersect. AI is one of the hottest industry trends right now, and Microsoft is one of its leading lights, so all eyes were on the bot; incidents like this really don’t look good.