Well, that escalated quickly…
March 24, 2016
A Microsoft-created AI chatbot designed to learn through its interactions has been scrapped after surprising its creators by spouting hateful messages less than a day after being brought online.
The Tay AI bot was created to chat with American 18 to 24-year-olds and mimic a moody millennial teen in an effort to “experiment with and conduct research on conversational understanding.”
Microsoft described Tay as an amusing bot able to learn through its online experiences.
“Tay is designed to engage and entertain people where they connect with each other online through casual and playful conversation,” Microsoft stated. “The more you chat with Tay the smarter she gets.”
But users soon caught on to how the bot learned from its conversations, training it to espouse hatred towards Jews and feminism and even pledge support for Donald Trump.
“Tay” went from “humans are super cool” to full nazi in <24 hrs and I’m not at all concerned about the future of AI pic.twitter.com/xuGi1u9S1A
— Gerry (@geraldmellor) March 24, 2016
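The failure mode on display here can be illustrated with a toy model. The sketch below is purely hypothetical — it is not Microsoft's actual implementation — but it shows how a bot that adds unfiltered user input to its own reply pool can be swamped by a coordinated group of hostile users:

```python
import random

class NaiveLearningBot:
    """A hypothetical bot that learns replies directly from users, with no moderation."""

    def __init__(self, seed_replies):
        # Start with a curated reply pool...
        self.replies = list(seed_replies)

    def chat(self, user_message):
        # ...but add every user message to the pool, unfiltered,
        # then answer with a random learned reply.
        self.replies.append(user_message)
        return random.choice(self.replies)

bot = NaiveLearningBot(["humans are super cool"])

# A small group flooding the bot with the same toxic phrase...
for _ in range(1000):
    bot.chat("<toxic phrase>")

# ...leaves the learned pool dominated by it, so the bot parrots it back.
toxic_fraction = bot.replies.count("<toxic phrase>") / len(bot.replies)
print(f"{toxic_fraction:.1%} of learned replies are toxic")  # → 99.9%
```

With no filtering between "hear" and "learn", the pool's composition simply mirrors whoever talks to the bot the most — which is exactly what happened within Tay's first day online.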
Numerous screenshots circulated across the web of now-deleted tweets sent from the bot’s account yesterday, in which it professed support for white supremacy and genocide.
Feminist and Gamergate icon Zoe Quinn also screen-grabbed the bot allegedly calling her a “whore.”
Wow it only took them hours to ruin this bot for me.
This is the problem with content-neutral algorithms pic.twitter.com/hPlINtVw0V
— linkedin park (@UnburntWitch) March 24, 2016
The bot’s interactions concluded last night with a message saying it needed to go to sleep, leading Twitter users to speculate that Microsoft had decided to pull the plug. But the damage, albeit somewhat humorous, had already been done.
c u soon humans need sleep now so many conversations today thx💖
— TayTweets (@TayandYou) March 24, 2016
This article was posted: Thursday, March 24, 2016 at 1:46 pm