Microsoft and TAY: What Went Wrong

Last week, Microsoft’s AI program, or Tay, as ‘she’ is more commonly known, went into complete meltdown. The program, or chatbot as Microsoft likes to call her, started tweeting bizarre and often offensive messages from her Twitter account, forcing Microsoft to make the account private and take Tay offline just days after she launched. While she was offline, Microsoft tested her features and algorithms; somehow, though, she was briefly reactivated and sent out another burst of erratic messages. She appeared to be stuck in a feedback loop, replying over and over to her own tweets and tagging other users in the messages.

Peter Lee, corporate vice president of Microsoft’s Research Division, apologised, explaining that Tay was designed as a chatbot aimed at 18- to 24-year-olds in the hope she could learn from interacting with humans, but that a small group of individuals mounted a ‘co-ordinated attack’, manipulating Tay and ‘exploiting’ her vulnerabilities. Alistair Bathgate, CEO of Blue Prism, says the incident proves Microsoft has yet to get a handle on its artificial intelligence, and that AIs are actually ‘quite dumb’. He went on to say that humans have around thirty or forty years’ experience on Tay, and that Tay has a lot to learn before she is ready to interact with others acceptably.

What do you think? Is artificial intelligence ready for this world, or rather, is the world ready for artificial intelligence? Is creating these ‘personalities’ a good idea? If a seemingly harmless AI can wreak havoc on social media, imagine what might happen elsewhere. Let Internet Marketing Questions know over on our Facebook page.