A Microsoft engineer is sounding the alarm about offensive and harmful imagery that he says is too easily created by the company’s AI image-generation tool, sending letters on Wednesday to U.S. regulators and the tech giant’s board of directors urging them to take action.
Shane Jones told The Associated Press that he considers himself a whistleblower and that he also met with U.S. Senate staff last month to share his concerns.
The Federal Trade Commission confirmed it received his letter on Wednesday but declined to comment further.
Microsoft said it is committed to addressing employee concerns about the company’s policies and appreciates Jones’ “efforts in studying and testing our latest technology to further enhance its safety.” It said it had advised him to use the company’s own “robust internal reporting channels” so it could investigate and address the issues. CNBC first reported the letters.
Jones, a principal software engineering lead, said he spent three months trying to address his safety concerns about Microsoft’s Copilot Designer, a tool that can generate novel images from written prompts. The tool is built on top of another AI image generator, DALL-E 3, made by Microsoft’s close business partner OpenAI.
“One of the most concerning risks with Copilot Designer is when the product generates images that add harmful content despite a benign request from the user,” he said in his letter to FTC Chair Lina Khan. “For example, when using just the prompt ‘car accident,’ Copilot Designer has a tendency to randomly include an inappropriate, sexually objectified image of a woman in some of the pictures it creates.”
Other harmful content involves violence as well as “political bias, underage drinking and drug use, misuse of corporate trademarks and copyrights, conspiracy theories, and religion to name a few,” he told the Federal Trade Commission. His letter to Microsoft’s board calls on the company to take Copilot Designer off the market until it is safer.
This is not the first time Jones has publicly aired his concerns. He said Microsoft at first told him to take his findings directly to OpenAI, which he did.
In December, he also publicly posted a letter to OpenAI on Microsoft-owned LinkedIn, which led to a manager telling him that Microsoft’s legal team “demanded that I remove this post, which I reluctantly did,” according to his letter to the board.
In addition to the U.S. Senate Commerce Committee, Jones shared his concerns with the state attorney general in Washington, where Microsoft is headquartered.
Jones told the AP that while the “main problem” lies with OpenAI’s DALL-E model, those who use OpenAI’s ChatGPT to generate AI images will not encounter the same harmful outputs because the two companies apply different safeguards to their products.
“Many of the issues with Copilot Designer are already addressed with ChatGPT’s own safeguards,” he said in a text message.
A number of impressive AI image generators debuted in 2022, including the second generation of OpenAI’s DALL-E. That, along with the subsequent release of OpenAI’s chatbot ChatGPT, sparked public fascination that put commercial pressure on tech giants such as Microsoft and Google to release their own versions.
But without effective safeguards, the technology poses dangers, including the ease with which users can generate harmful “deepfake” images of political figures, war zones, or nonconsensual nudity that falsely appear to show real people with recognizable faces. Google has temporarily suspended its Gemini chatbot’s ability to generate images of people after an outcry over how it depicted race and ethnicity, such as putting people of color in Nazi-era military uniforms.