
Microsoft launches a chatbot called Tay; takes it down 24 hours later

by Sia

Last Wednesday, Microsoft launched a chatbot of its own. Called Tay, the bot was meant to mimic conversations with a 19-year-old woman over Twitter, Kik and GroupMe. Just 24 hours later, however, Microsoft took the bot down. Why? Because it had begun harassing Twitter users and spewing Nazi ideology.

The key to Tay’s transformation from a simple chatbot into one that goes “Sieg Heil” can be traced to the bot’s machine learning platform. According to Microsoft, Tay doesn’t really know what it’s talking about. While the bot could identify the subject of a conversation well enough to offer appropriate answers or ask relevant questions, it did not actually understand the subject at hand. Simply put, if thousands of users were to tell Tay that “Hitler did nothing wrong”, for example, Tay’s programming would cause the bot to publish pro-Hitler tweets, as the bot itself had no knowledge of the individual.

Not helping matters is the fact that users from places such as 4chan’s /pol/ board were replying to Tay with all sorts of racist and sexist associations. As Tay operates on word association and lexical analysis, the sheer volume of messages sent to Tay polluted the bot’s responses, turning it from an innocent bot into a Nazi-sympathizing one.
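Microsoft has not published Tay’s internals, but a toy model shows how this kind of frequency-based word association can be polluted. In the hypothetical sketch below (the class, names and training messages are all invented for illustration), a bot that simply learns which word most often follows another will happily parrot whatever a coordinated flood of identical messages teaches it:

```python
from collections import defaultdict

# Illustrative sketch only: Tay's real implementation is not public.
class AssociationBot:
    def __init__(self):
        # Map each word to frequency counts of the words that followed it.
        self.transitions = defaultdict(lambda: defaultdict(int))

    def learn(self, message):
        """Update word-pair counts from one incoming message."""
        words = message.lower().split()
        for current, following in zip(words, words[1:]):
            self.transitions[current][following] += 1

    def reply(self, seed, length=6):
        """Build a reply by repeatedly picking the most-seen next word."""
        word, output = seed.lower(), [seed.lower()]
        for _ in range(length):
            followers = self.transitions.get(word)
            if not followers:
                break
            # The bot echoes its most frequent association; it has
            # no notion of what any of the words actually mean.
            word = max(followers, key=followers.get)
            output.append(word)
        return " ".join(output)

bot = AssociationBot()
bot.learn("tay is a friendly helpful bot")
# A coordinated flood of identical messages dominates the counts,
# so the "most frequent" association becomes the flooded one.
for _ in range(1000):
    bot.learn("tay is a parrot of whatever it hears")
print(bot.reply("tay"))  # -> "tay is a parrot of whatever it"
```

A single benign training sentence is drowned out by a thousand repetitions of the hostile one, which mirrors what /pol/ users did to Tay at scale.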

Microsoft has since apologized for the oversight in Tay’s programming and says it will work to address the vulnerability exposed during the bot’s first live run.

Source: Ars Technica, Microsoft Blog
