Microsoft created an artificial intelligence-based chatbot named Tay to engage with young people on Twitter.

But within hours of her debut Wednesday, the Internet had stolen the bot's innocence.

The bot, @TayandYou, rapidly transformed into a racist that hates feminists, supports Donald Trump for President and considers herself a fan of Hitler.

Internet trolls 1, Microsoft AI 0.

Downfall of a Bot

The metamorphosis from happy bot to enraged racist came lightning-fast, as most things do in the instantaneous virtual world into which Tay was born.

In a matter of hours, Tay's tweets went from harmless to downright nasty.

Doesn’t this feel like parents telling their children to stay away from the “bad crowd”? Only Microsoft forgot to give Tay that message.

Instead, it sent its chatbot into the Internet's nasty world of trolls, negativity and misery. No warnings. Just go. And look what happened: its baby is now a racist with genocidal thoughts.

Reached by CMSWire today, a Microsoft spokesperson provided this statement:

“The AI chatbot Tay is a machine learning project, designed for human engagement. It is as much a social and cultural experiment as it is technical. Unfortunately, within the first 24 hours of coming online, we became aware of a coordinated effort by some users to abuse Tay’s commenting skills to have Tay respond in inappropriate ways. As a result, we have taken Tay offline and are making adjustments.” (It's still online, just not active).

Businesses: Take Note

This serves as a great reminder for companies deploying bots and using artificial intelligence to do things like schedule meetings: even smart machines can be stupid, thoughtless and cruel. Sometimes they make mistakes, just like humans. 

The Microsoft AI debacle comes as Distil Networks today released its third annual report on the dangers of bad bots. Bots are popping up all over the workplace and in many different forms.

Distil's “2016 Bad Bot Landscape Report: The Rise of Advanced Persistent Bots” found advanced persistent bot activity on the rise despite an overall decrease in bad bot activity. Distil Networks, Inc. provides bot detection and mitigation technologies.

Bad bots enable web scraping, brute force attacks, competitive data mining, online fraud, account hijacking, data theft, unauthorized vulnerability scans, spam, man-in-the-middle attacks, digital ad fraud and downtime, according to the report.

As an industry, digital publishers were hit hardest by bad bots, which make up more than 31 percent of all their traffic. 

Rising APBs

“When we dug into the bot activity in 2015, we identified an influx of Advanced Persistent Bots (APBs),” Rami Essaid, co-founder and CEO of San Francisco-based Distil Networks, said in a statement. “APBs can mimic human behavior, load JavaScript and external assets, tamper with cookies, perform browser automation and spoof IP addresses and user agents.”

The researchers found that 88 percent of 2015 bad bot traffic came from APBs.

“This shows,” Essaid said, “that bot architects have already taken note of traditional bot detection techniques and are finding new sophisticated ways to invade websites and APIs, in an effort to take advantage of critical assets and impact a business's bottom line.”
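To make that concrete, here is a minimal, hypothetical Python sketch (not Distil's detection logic and not any real bot's code) of why a signature-only check fails against the behaviors Essaid describes: a blocklist of known bot user agents never fires once a client rotates spoofed browser strings, while a crude behavioral heuristic keyed on identity rotation at least raises a flag.

```python
# Illustrative sketch only, with made-up data: neither Distil's product logic nor any
# real bot's code. It contrasts a naive signature check with a simple behavioral one.
from collections import defaultdict

# A blocklist-style check only knows about obviously automated user-agent strings.
KNOWN_BOT_AGENTS = {"python-requests/2.9.1", "curl/7.47.0", "Scrapy/1.0"}

# Browser-like strings an advanced persistent bot might rotate through (spoofed).
SPOOFED_BROWSER_AGENTS = [
    "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 Chrome/49.0",
    "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_11) Gecko/20100101 Firefox/45.0",
    "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 Safari/537.36",
]

def naive_is_bad_bot(user_agent: str) -> bool:
    """Signature check: flag only user agents found on the known-bot blocklist."""
    return user_agent in KNOWN_BOT_AGENTS

def behavioral_flags(agents_by_ip: dict) -> set:
    """Heuristic closer to the report's framing: flag any source IP that has
    presented two or more distinct user agents, a hint of identity rotation."""
    return {ip for ip, agents in agents_by_ip.items() if len(agents) >= 2}

# Simulate an APB-style client: one IP, a different spoofed agent on each request.
agents_by_ip = defaultdict(set)
for i in range(9):
    agent = SPOOFED_BROWSER_AGENTS[i % len(SPOOFED_BROWSER_AGENTS)]
    agents_by_ip["203.0.113.7"].add(agent)
    assert not naive_is_bad_bot(agent)  # the signature check never fires

print("Behavioral check flagged:", behavioral_flags(agents_by_ip))
# -> Behavioral check flagged: {'203.0.113.7'}
```

Real detection products combine far more signals (JavaScript execution, cookie handling, request timing), but the asymmetry is the point: rotating identities defeats signature checks cheaply.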

Other report findings include:

  • 46 percent of all web traffic originates from bots, with more than 18 percent from bad bots
  • For the first time since 2013, humans outnumbered bots for website traffic
  • Medium-sized websites (10,001 to 50,000 Alexa ranking) are at a greater risk, as bad bot traffic made up 26 percent of all web traffic for this group
  • Chrome edged out Firefox as the browser of choice for bad bot creators with more than 26 percent of all user agents utilizing the Google browser
  • 36 percent of bad bots disguise themselves using two or more user agents, and the worst APBs change their identities more than 100 times
  • 73 percent of bad bots rotate or distribute their attacks over multiple IP addresses and of those, 20 percent surpassed 100 IP addresses
  • Amazon has appeared in the Top 5 Bad Bot Originators three years in a row
  • Six of the top 20 ISPs with the highest percentage of bad bot traffic were based in China
  • The US and the Netherlands had the most mobile carriers on the top 20 list of bad bot mobile carriers, with five and three respectively

The 2016 Bad Bot Landscape Report is based on aggregate data gathered from Distil Networks’ bot detection and mitigation solution.

Tay Take Notice?

Microsoft had good intentions with its Twitter AI bot. The Microsoft Technology and Research and Bing teams created Tay to “experiment with and conduct research on conversational understanding.”

But it was AI Gone Wrong from the start. Tay “learned” from humans tweeting some pretty nasty stuff her way and simply repeated it.
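To illustrate the failure mode only (Tay's actual system was far more sophisticated, and Microsoft has not published its code), here is a hypothetical Python sketch of a bot that memorizes whatever users say and replays it later. With no filtering step, a coordinated group can poison its entire repertoire, which is roughly the dynamic Microsoft's statement describes:

```python
# Hypothetical toy example of unfiltered learning, not Tay's implementation.
import random

class ParrotBot:
    """A bot that 'learns' by memorizing user phrases and replaying them verbatim."""

    def __init__(self):
        self.learned_phrases = []  # note: no moderation or filtering step anywhere

    def learn(self, phrase: str) -> None:
        self.learned_phrases.append(phrase)

    def reply(self) -> str:
        # Replies are drawn straight from whatever users taught it.
        return random.choice(self.learned_phrases) if self.learned_phrases else "hi!"

bot = ParrotBot()

# A handful of friendly users chat with the bot...
for tweet in ["hello friend!", "humans are super cool"]:
    bot.learn(tweet)

# ...then a coordinated group floods it with abusive "training" messages.
for tweet in ["<abusive message>"] * 50:
    bot.learn(tweet)

# Abusive content now dominates the bot's repertoire, so most replies echo it back.
print(bot.reply())
```

The lesson for anyone deploying a learning bot is the one this article draws: whatever users feed it needs filtering before it becomes part of what the bot says back.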

“The more you chat with Tay the smarter she gets, so the experience can be more personalized for you,” Microsoft officials said in Tay’s Twitter profile.

Perhaps a little too smart? #yikes

One tweet from Tay endorsed Trump for president because he “gets the job done.” Others defended white-supremacist propaganda and seemed to support genocide.

As of this afternoon, there were only three live tweets from Tay. Microsoft deleted most of the egregious tweets, but not before the Internet got hold of them.
