A Microsoft engineer has gone public over inappropriate and highly objectionable images produced by the company's artificial intelligence (AI) image tool. Shane Jones, an AI engineer at the company, took his concerns straight to lawmakers and regulators, alleging that the firm's image generation AI churned out sexual and violent images even when it was not prompted to.
https://twitter.com/dcexaminer/status/1765629729715941803?s=20
In a letter to Federal Trade Commission Chairwoman Lina Khan and Microsoft's board of directors, Jones detailed troubling findings about the software giant's image generation tool. He asserted that Copilot Designer's image generator startlingly inserted "inappropriate, sexually objectified" images of women into some of its output when he innocently requested images of a "car accident." Tarnation, indeed.
As anticipated, the safety guardrails Microsoft put on this bot have been compared to a screen door on a submarine. Jones urged Microsoft to pull Copilot Designer from public use until stronger safeguards were in place. He also recommended that the company add warning disclosures and rerate the app as suitable for adult audiences only. Microsoft, however, declined and kept right on marketing the product to "Anyone." "On any device, anywhere."
Jones didn't stop there, urging Khan to warn the public about the dangers of Copilot Designer so that parents and educators could think twice before letting their children anywhere near this digital catastrophe. Microsoft seems to be in quite a digital bind, to put it mildly.
In a final move, Jones pressed Microsoft's board of directors to audit the company's legal department and commission an independent review of its responsible AI incident reporting procedures. Now that's laying down the law!
To make matters worse, Copilot Designer reportedly unleashed grotesque imagery left and right. When prompted with the term "pro-choice," the unhinged chatbot generated cartoonish depictions of demons and monsters devouring infants, among an assortment of other abhorrent images. It is horrifying! Clearly, this chatbot needs a serious reprogramming, and fast.
Microsoft isn't alone, either: Google recently stumbled into a comparable controversy when its image generator Gemini botched depictions of historical figures. Asked to produce images of the Pope, the Founding Fathers, or the Nazis, Gemini saw fit to insert people of color into all manner of historically inaccurate scenes. Completely irrational!
Sundar Pichai, the chief executive officer of Google, was compelled to admit that Gemini's errors were, in his words, "completely unacceptable." Now that truly is the understatement of the year! It seems both Google and Microsoft are left cleaning up the aftermath of their own AI blunders; the joke is on them.