Microsoft makes public apology for racist, sexist chatbot

Microsoft apologized publicly Friday after its chatbot sent out sexist and racist Twitter messages.

The newly released chatbot Tay, an artificial intelligence application designed to learn from and respond to users’ conversations, began spewing offensive messages during its Wednesday rollout, parroting material that a group of early users had fed it.

“We are deeply sorry for the unintended offensive and hurtful tweets from Tay, which do not represent who we are or what we stand for, nor how we designed Tay,” wrote Peter Lee, Microsoft’s vice president of research. “Although we had prepared for many types of abuses of the system, we had made a critical oversight for this specific attack,” Lee wrote. “As a result, Tay tweeted wildly inappropriate and reprehensible words and images.”

Tay regurgitated a number of phrases, including “feminism is cancer” and anti-Semitic slurs, prompting Microsoft to suspend the project Thursday until engineers could strengthen the program to block inappropriate language from its intake.
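Microsoft has not published how that filtering would work; a minimal, hypothetical sketch of one common approach, a keyword blocklist applied before user messages reach the bot’s learning data, might look like this (the phrase list and function names here are illustrative only):

```python
# Hypothetical sketch of keyword-based input filtering; Microsoft's actual
# implementation for Tay has not been described publicly.

BLOCKED_PHRASES = {"feminism is cancer"}  # placeholder entries, not a real list


def is_acceptable(message: str) -> bool:
    """Return False if the message contains any blocked phrase (case-insensitive)."""
    lowered = message.lower()
    return not any(phrase in lowered for phrase in BLOCKED_PHRASES)


def ingest(message: str, training_buffer: list[str]) -> None:
    """Add a message to the bot's learning data only if it passes the filter."""
    if is_acceptable(message):
        training_buffer.append(message)
```

Simple blocklists like this are easy to evade with misspellings and rephrasing, which is part of why coordinated abuse of a learning system is difficult to screen out.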

The technology company has had success with a similar chatbot, XiaoIce, which it rolled out in China two years ago. Microsoft said it hopes Tay can be reprogrammed to mirror its sister, known for “delighting with its stories and conversations.”

“We will remain steadfast in our efforts to learn from this and other experiences as we work toward contributing to an Internet that represents the best, not the worst, of humanity,” Lee concluded.