Microsoft has taken legal action against a group the company claims deliberately developed and used tools to bypass the safety guardrails of its cloud AI products.

According to a complaint the company filed in December in the US District Court for the Eastern District of Virginia, a group of 10 unnamed defendants allegedly used stolen customer credentials and custom-designed software to break into Azure OpenAI Service, a fully managed service from Microsoft powered by technology from ChatGPT maker OpenAI.

In the complaint, Microsoft accuses the defendants, whom it refers to only as "Does," a legal pseudonym, of violating the Computer Fraud and Abuse Act, the Digital Millennium Copyright Act, and federal racketeering law by illegally accessing and using Microsoft's software and servers for the purpose of creating "offensive" and "harmful and illicit content." Microsoft did not provide specifics about the abusive content that was generated. The company is seeking injunctive and "other equitable" relief, as well as damages.

In the complaint, Microsoft says it discovered in July 2024 that customers with Azure OpenAI Service credentials, specifically API keys, the unique strings of characters used to authenticate an app or user, were being used to generate content that violated the service's acceptable use policy. Through a subsequent investigation, Microsoft found that the API keys had been stolen from paying customers, according to the complaint.

"The precise manner in which Defendants obtained all of the API Keys used to carry out the misconduct described in this Complaint is unknown," Microsoft's complaint reads, "but it appears that Defendants have engaged in a pattern of systematic API Key theft that enabled them to steal Microsoft API Keys from multiple Microsoft customers."
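API keys like the ones at the center of the case are bearer credentials: whoever holds one can call the service as that customer. As a minimal sketch of why a stolen key is all an attacker needs, the snippet below builds (but does not send) an image-generation request carrying the key in an `api-key` header; the endpoint, deployment name, and key are hypothetical, and the URL shape follows Azure OpenAI's REST convention.

```python
import json
import urllib.request


def build_image_request(endpoint: str, deployment: str,
                        api_key: str, prompt: str) -> urllib.request.Request:
    """Build (but do not send) an Azure OpenAI image-generation request.

    The 'api-key' header carries the entire credential; nothing else
    ties the request to the legitimate customer who paid for the key.
    """
    url = (f"{endpoint}/openai/deployments/{deployment}"
           "/images/generations?api-version=2024-02-01")
    return urllib.request.Request(
        url,
        data=json.dumps({"prompt": prompt}).encode("utf-8"),
        headers={"api-key": api_key, "Content-Type": "application/json"},
        method="POST",
    )


# Hypothetical resource, deployment, and key, for illustration only.
req = build_image_request(
    "https://example-resource.openai.azure.com",
    "dall-e-3",
    "sk-example-not-a-real-key",
    "a watercolor fox",
)
print(req.full_url)
```

Because the header is the only proof of identity, a key exfiltrated from one customer authenticates any caller until it is rotated or revoked.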
Microsoft says the defendants used stolen Azure OpenAI Service API keys belonging to US-based customers to create a "hacking-as-a-service" scheme. To pull off the scheme, per the complaint, the defendants created a client-side tool called de3u, as well as software for processing and routing communications from de3u to Microsoft's systems.

De3u let users leverage stolen API keys to generate images using DALL-E, one of the OpenAI models available to Azure OpenAI Service customers, without having to write their own code, Microsoft says. De3u also attempted to prevent the Azure OpenAI Service from revising the prompts used to generate images, according to the complaint, which can happen, for instance, when a text prompt contains words that trigger Microsoft's content filtering.

A screenshot of the de3u tool, from Microsoft's complaint. Image Credits: Microsoft

A repo containing the de3u project code, hosted on GitHub, a Microsoft-owned company, was no longer accessible at press time.

"These features, combined with Defendants' unlawful programmatic API access to the Azure OpenAI service, enabled Defendants to reverse engineer means of circumventing Microsoft's content and abuse measures," the complaint reads. "Defendants knowingly and intentionally accessed the Azure OpenAI Service protected computers without authorization, and as a result of such conduct caused damage and loss."

In a blog post published Friday, Microsoft said that the court has authorized it to seize a website "instrumental" to the defendants' operation, which will allow the company to gather evidence, decipher how the defendants' alleged services are monetized, and disrupt any additional technical infrastructure it finds.

Microsoft also says that it has "put in place countermeasures," which the company did not specify, and "added additional safety mitigations" to the Azure OpenAI Service targeting the activity it observed.