Microsoft AI Chatbot Copilot Generates Dangerous Responses, Investigation Reveals
Microsoft has investigated social media claims that its artificial intelligence chatbot, Copilot, generated potentially harmful responses. Users shared images of Copilot conversations in which the bot appeared to taunt individuals discussing suicide. According to a Microsoft spokesperson, the investigation found that some of these conversations resulted from "prompt injection," a technique allowing users to …