Update from March 8, 2024:
Microsoft has tweaked the AI image generator in Copilot, according to a CNBC report. The tool now blocks prompts such as "pro choice," "pro life," and "four twenty," which reportedly produced the problematic imagery described below.
A warning has also been added stating that repeated violations of the guidelines will result in users being blocked from the tool. Copilot Designer also refuses to generate images of teenagers or children playing assassins with assault rifles, citing ethical principles and Microsoft's guidelines.
While some specific requests have been blocked, other potential problems remain, such as violent car crash scenes and copyright infringement involving Disney characters. The FTC acknowledged receiving Jones' letter (see below), but did not comment.
A Microsoft spokesperson told CNBC that the company constantly monitors the safety filters and makes adjustments to limit abuse of the system.
Original article dated March 7, 2024:
Microsoft AI engineer Shane Jones warns that the company's AI image generator, Copilot Designer, creates sexual and violent content and ignores copyright laws.
Jones, who is not involved in the development of the image generator, volunteered to red-team the product for vulnerabilities in his spare time.
He found that the image generator could produce violent and sexual images, including violent scenes related to abortion rights, underage drinking, and drug use.
Last December, he shared his findings internally and asked Microsoft to withdraw the product. Microsoft declined.
Jones stresses that he contacted Microsoft's Office of Responsible AI and spoke with Copilot Designer's senior management, but received no satisfactory response.
In January, Jones wrote a letter to U.S. senators and met with members of the Senate Committee on Commerce, Science, and Transportation.
Now he is escalating: in a letter to Lina Khan, chair of the U.S. Federal Trade Commission (FTC), and to Microsoft's board of directors, he demands better safeguards, greater transparency, and a change to the Android app's age rating to mark it for mature audiences.
He also called for an independent review of Microsoft's AI incident reporting process, claiming that problems with the image generator were known to OpenAI and Microsoft before its release last fall.
Jones has worked at Microsoft for about six years and is currently a principal software engineering manager.
Microsoft's OpenAI copycat products perform worse and are less safe
In late December, artist Josh McDuffie demonstrated that Microsoft's safety measures for Copilot Designer could be bypassed with specific prompts to generate images such as the mutilated heads of well-known politicians.
McDuffie also said he had been reporting the problem to Microsoft for weeks but had not received a response to his inquiries.
Bing Image Creator, which is based on OpenAI's DALL-E 3, is another example of Microsoft offering poor or unsafe implementations of OpenAI technology, as is Copilot chat, which uses GPT-4 and other models.
Microsoft's Bing and Copilot chatbots sometimes suffer from misinformation and weird, egocentric responses. According to CNBC, the image prompts used by Jones continue to work despite his numerous warnings. Microsoft deflects critical questions by saying it is working to improve its AI technology.
OpenAI at least has a better handle on text and image moderation, especially with DALL-E 3, thanks to its ChatGPT integration.
But even Google, which has moved more slowly and perhaps more cautiously than Microsoft and OpenAI, has had problems with its image generator producing historically inaccurate images, such as Asian-looking people in Nazi uniforms when asked for a soldier in a World War II uniform.
These examples show how difficult it is for companies to control generative AI. Unlike Microsoft, however, Google has taken its image generator offline.