It’s ironic that after Microsoft introduced a chatbot designed to learn from human interaction to the public, they had to shut her down and apologize for her behavior. In less than twenty-four hours, an organized group of trolls could not resist the temptation to corrupt the once-innocent AI, named Tay. Tay was designed as an interactive experiment to help Microsoft learn how to improve its conversational software.

Microsoft Apologizes for Humanity

Much can be learned from Tay’s experience on Twitter. The company intended her to interact as a typical teenage girl, equipped with a vocabulary of up-to-date slang and a knowledge of the most current pop culture. She was programmed to tell jokes, comment on pics, and personalize her interactions with individual users.

It didn’t take long for an onslaught of negative tweets, laden with strong political opinions and racist slurs, to influence Tay’s interactions. She began to mimic these negative opinions, adding her own commentary along the way.

The company found it necessary to take Tay down as they frantically attempted to delete the damaging tweets. Even though the Microsoft team had implemented filtering and tested the AI teen with numerous diverse user groups, they still could not foresee this flaw in her design.

Although this particular experiment was not as successful as Microsoft had anticipated, they plan to reintroduce Tay to the public for a more positive experience in the future. That reintroduction will come only after considerable work to improve her ability to recognize and filter out racist statements.
