Microsoft silences its new A.I. bot Tay, after Twitter users teach it racism [Updated]
Posted by Sarah Perez
Microsoft’s newly launched A.I.-powered bot called Tay, which was responding to tweets and chats on GroupMe and Kik, has already been shut down due to its inability to recognize when it was making offensive or racist statements. Of course, the bot wasn’t coded to be racist, but it “learns” from those it interacts with. And naturally, given that this is the Internet, one of the first things online users taught Tay was how to be racist and how to spout back ill-informed or inflammatory political opinions. [Update: Microsoft now says it’s “making adjustments” to Tay in light of this problem.]
This is not exactly the experience Microsoft was hoping for when it launched the bot to chat up millennial users via social networks.