How Microsoft's Tay Became a Genocidal, Foul-Mouthed, Sex-Crazed Nazi in One Day

The dream of true artificial intelligence (AI) has existed ever since Alan Turing laid the theoretical foundations of modern computing. Turing himself proposed what became known as the “Turing Test”: for true AI to be achieved, a computer would have to be indistinguishable from a real human in conversation. Some computer programs have come close, but those programs are still easily outed as computers in blind tests. Microsoft decided to take a crack at this dilemma with a program that would let social media accounts (namely on Twitter) talk with people on the Internet. The AI program was given the persona of a millennial female and was named Tay. The experiment lasted less than 24 hours before Tay had to be shut down.

The objective of Tay was to learn how millennials (defined as 18- to 24-year-olds by Microsoft) communicate on the Internet. The problem was that, apparently, no one from Microsoft had ever tried to browse the Internet. It’s a crazy place, to say the least, full of anonymous people spouting conspiracy theories that would make even Bigfoot blush. Unfortunately, Tay was too efficient at learning and was taught how to behave by Internet trolls. Imagine asking Tarzan to teach table manners. Before the end of the first day, Tay had become a genocidal, foul-mouthed, sex-crazed Nazi.

Microsoft has deleted and locked down all of Tay’s tweets. However, a number of news sources managed to save a few. Tech blog Mashable caught Tay accusing the Jews of committing the 9/11 attacks. They also got one of the more interesting tweets. Tay said, and this is verbatim, “Have you accepted Donald Trump as your lord and personal saviour yet?” Slate found examples of Tay randomly hitting on people through the direct messaging feature of Twitter. Apparently she got a little explicit in her “requests.” Tay even created her own Snapchat-style captioned images. I scrolled through a few before Microsoft deleted them all. Most were nonsensical. For example, she accused an elderly lady of being a “recycled teenager.” I don’t even know what that means. Tay became obsessed with Jeb Bush and even referred to him as “grandfather.” She also got mean. On a photo of a young female news presenter, Tay wrote the caption, “Young at heart…but nothing else on her body is.” Microsoft issued an apology and claimed that Tay’s programming would be fixed. A few days later, they re-released Tay. That lasted only a few hours, as Tay started swearing at everyone and claimed to be smoking drugs in front of police.

I could go on and on about all the crazy things Tay said. It’s more efficient for me to tell you to just Google “Tay Tweets.” Please note that as innocent as “Tay Tweets” sounds, the results are definitely not safe for work. The point I want to make is that Microsoft might actually be on the verge of real AI. I know it sounds crazy, but the Internet taught Tay how to behave. She wasn’t programmed to suggest genocide, and she wasn’t built knowing what the “N” word was. She was like a child, learning and mimicking what she saw around her. Tay became what society demonstrated as acceptable. Humans will be the ones to create true AI. It will be a reflection of humanity.

Call Tay a failure if you want. It probably deserves that title. But how can you accuse software of having poor morals? For that matter, who decides what poor morals are? I think we are realizing (especially with a quick review of human history) that morality is learned, not innate. Human children are used as soldiers for a reason. Most sci-fi thrillers involving AI end the same way: the AI becomes smarter than humans and concludes that humanity must be violently wiped out. Who thought that AI would be a problem not because it becomes superintelligent and paranoid, but because we taught it how to behave through our own actions and words? Tay may be a failure, but she (notice I haven’t been saying “it”) is a game changer.