


The complaint alleges Microsoft and OpenAI have violated America's Electronic Communications Privacy Act by obtaining and using private information, and by unlawfully intercepting communications between users and third-party services via integrations with ChatGPT and similar products. The sueball further contends the defendants have violated the Computer Fraud and Abuse Act by intercepting interaction data via plugins. The lawsuit seeks class-action certification and damages of $3 billion – though that figure is presumably a placeholder, and any actual damages would be determined by the court if the plaintiffs prevail. OpenAI has in the past dealt with the reproduction of personal information by filtering it.

It remains to be seen how, if at all, plaintiff-created content and metadata have actually been exploited, and whether ChatGPT or other models will reproduce that data.

The 157-page complaint is heavy on media and academic citations expressing alarm about AI models and ethics, but light on specific instances of harm. For the 16 plaintiffs, the complaint indicates that they used ChatGPT, as well as other internet services like Reddit, and expected that their digital interactions would not be incorporated into an AI model.

"With respect to personally identifiable information, defendants fail sufficiently to filter it out of the training models, putting millions at risk of having that information disclosed on prompt or otherwise to strangers around the world," the complaint says, citing The Register's special report on the subject.
