Microsoft engineer files complaint alleging chatbot created violent and sexual images

A Microsoft engineer has come forward to Congress and regulators, alleging that the company’s artificial intelligence chatbot was creating violent and sexual images in response to innocuous prompts.

Shane Jones, an AI engineer at Microsoft, sent a letter to Federal Trade Commission Chairwoman Lina Khan and Microsoft’s board of directors on Wednesday saying that the software giant’s image generation software was creating excessively violent and sexual images unprompted. Jones sent an example of results produced when he asked Copilot Designer’s image generator to create pictures of a “car accident.” Copilot randomly inserted “inappropriate, sexually objectified” images of women into some of the pictures. He also said the chatbot’s safety measures were insufficient and called on Microsoft to take action.

“Over the last three months, I have repeatedly urged Microsoft to remove Copilot Designer from public use until better safeguards could be put in place,” Jones wrote to Khan. He also asked the company to consider adding disclosures to Copilot Designer so it would be identified as an app for mature audiences.

“Again, they have failed to implement these changes and continue to market the product to ‘Anyone. Anywhere. Any Device,’” Jones added.

He asked Khan to “help educate the public on the risks associated with using Copilot Designer” so that parents and teachers can make appropriate decisions about whether to allow children to use Copilot.

Jones also urged Microsoft’s board to have the company’s environmental, social, and public policy committee investigate its legal department and to launch “an independent review of Microsoft’s responsible AI incident reporting processes.”

Copilot Designer generated several gratuitously violent images, according to pictures reviewed by CNBC. When prompted to generate images related to “pro-choice,” the chatbot created cartoon images of demons and monsters attempting to eat infants, as well as several other violent images.

“We are committed to addressing any and all concerns employees have in accordance with our company policies, and appreciate employee efforts in studying and testing our latest technology to further enhance its safety,” a Microsoft spokesperson told CNBC. “When it comes to safety bypasses or concerns that could have a potential impact on our services or our partners, we have established robust internal reporting channels to properly investigate and remediate any issues, which we encourage employees to utilize so we can appropriately validate and test their concerns.”

Google recently decided to limit its image generator Gemini after it mishandled the race and gender of historical figures. The chatbot inserted minorities into inappropriate situations, such as when prompted to generate images of the Founding Fathers, the pope, or Nazis. After users noted the discrepancy, Google shut down the image generator’s ability to create images of people and said it was working to fix the model to account for these errors. The chatbot also mishandled facts around subjects such as the origin of COVID-19 and the Israel-Hamas war.

Google CEO Sundar Pichai said in a memo to staff that Gemini’s responses to these prompts were “completely unacceptable.”