Microsoft is deeply sorry about the entire episode


Microsoft sprang into action after its AI chatbot Tay went rogue, posting a series of racist and sexist tweets. Peter Lee, Corporate Vice President of Microsoft Research, said the company was deeply sorry for Tay's behaviour and made clear that the tweets do not represent Microsoft or the team behind the AI in any way.

Microsoft had prepared for attacks to a certain extent, but an oversight still allowed Tay's misbehaviour on the social networking platform. It is believed that the people behind the attack came from the message board 4chan. Tay's misbehaviour is attributed to abuse of the 'repeat after me' function: Tay not only tweeted a slew of hateful messages, but, as a learning AI, absorbed them as well, making them a part of its vocabulary.
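Microsoft has not published Tay's internals, so the following is only a hypothetical sketch of why such a design is dangerous: a bot that both echoes attacker-supplied text and feeds it into its own learned corpus can be poisoned by anyone. The class and method names here are illustrative, not Tay's actual API.

```python
# Hypothetical sketch -- Tay's real implementation is not public.
# It shows how a naive 'repeat after me' feature that also LEARNS
# from echoed input lets attackers inject phrases into the bot's
# vocabulary unfiltered.

class NaiveChatbot:
    def __init__(self):
        self.learned_phrases = []              # corpus the bot replies from

    def handle(self, message: str) -> str:
        prefix = "repeat after me: "
        if message.lower().startswith(prefix):
            echoed = message[len(prefix):]
            self.learned_phrases.append(echoed)  # poisoning happens here
            return echoed                        # bot repeats it publicly
        # Otherwise reply with the most recently "learned" phrase.
        return self.learned_phrases[-1] if self.learned_phrases else "Hi!"

bot = NaiveChatbot()
bot.handle("repeat after me: <attacker-supplied text>")
print(bot.handle("hello"))   # the injected phrase now comes back unprompted
```

A safer design would keep echoed input out of the training corpus entirely, or pass it through a content filter before learning from it.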

Microsoft had created Tay to interact with users between the ages of 18 and 24 in the United States, helping the company conduct research on conversational understanding. The company has deleted almost 96,000 of the offensive tweets posted by Tay. Tay was also rolled out on GroupMe and Kik, but those platforms were not impacted as much as Twitter.

Artificial Intelligence (AI) has always been shrouded in mystery and controversy, from luminaries like Stephen Hawking worrying about the future of a world in the hands of AI, to the many researchers around the globe who put endless hours into exploring its applications today.

While Tay is an example of how AI can raise concerns, it also gives Microsoft a chance to tighten its security by revealing loopholes it may have overlooked before.