ChatGPT promised to forget user conversations. A federal court ended that.
When people use ChatGPT, they assume their words are fleeting — part of a private conversation with a machine that forgets. Millions now trust ChatGPT with their most sensitive questions about health, relationships and money. And indeed, under OpenAI’s own policy, deleted and temporary chats were supposed to be purged within 30 days, and users could opt out of having their conversations used to train the model.
Now, however, a federal court has overruled that policy.
In an ongoing copyright lawsuit brought by The New York Times, a federal judge last month ordered OpenAI to preserve all ChatGPT user logs, including “temporary chats” and API requests, even if users opted out of training and data sharing. The court did not allow oral argument before issuing this decision.
Users may still delete chats containing sensitive information from their accounts, but OpenAI must now retain every log to comply with the court order. And for business users connecting to OpenAI’s models, the stakes may be even higher. Their logs could contain their companies’ most confidential data, including trade secrets and privileged information.
To lawyers, this is unremarkable. Courts routinely issue preservation orders to ensure that evidence isn’t lost during litigation.
But to anyone who has been paying attention to the slow erosion of digital privacy, this is a seismic event. We now know that privacy policies are not self-enforcing. In many cases, they are evidently not even binding.
A privacy promise — such as “we don’t store your chats” — can be overwritten by a judge, a shareholder vote, an acquisition or a quiet update to the terms of service. In this case, it was overwritten by the demands of legal discovery. The court didn’t ask whether the data should exist. It asked only whether, once it did, it could be preserved.
We have built a digital world on an illusion of control. Companies offer toggles, checkboxes, encryption and promises of deletion. But none of these are rights with legal force. They more closely resemble marketing copy: easily changed, largely unenforceable and meaningless when challenged by law or capital.
Google, Zoom, Slack and Adobe have each changed data practices in ways that retroactively altered users’ privacy expectations.
And when companies get acquired, the results can be worse. Skiff was a private alternative to Gmail and Google Docs — an email and document suite built on end-to-end encryption. By design, the company behind it couldn’t access user data. But then Notion, a rising Silicon Valley productivity company, acquired it. Skiff users were given a short window to export their data and transfer to Notion. But the encryption didn’t survive, and the product’s privacy protections disappeared.
This is the reality we live in: Our privacy depends not on what we’re told, but on whether the company telling us survives long enough to honor it — and isn’t bought, sued or restructured along the way.
Which brings us back to the courtroom. The issue isn’t whether OpenAI is complying with the order, or whether it could have limited the fallout by isolating the relevant logs earlier in the litigation. It is that no one — not the company, not its users and not its privacy policy — has the power to prevent the fallout.
Our legal system was designed for physical documents and corporate servers. It has not yet reckoned with persistent data collection, behavioral advertising or AI tools that blend personal and public inputs. And in the absence of clear statutory limits, courts will default to over-preservation.
If OpenAI must preserve all user logs because of a copyright claim, what happens when law enforcement demands access to text messages in a domestic violence case? Or when a state attorney general issues a subpoena for location data from a reproductive health app? The preservation burden is contagious. The legal precedent being set here is that if data could be useful in a dispute, it should be saved — even if that undermines the privacy of millions of uninvolved users.
All this is a reminder that privacy, if not backed by law or hardened by design, is contingent. The court didn’t just preserve evidence. It reshaped the architecture of trust around one of the most popular AI tools in the world.

If we want privacy, we need laws that require data minimization; legal firewalls that stop courts from turning temporary messages into permanent evidence; and tools that are designed from the start not to remember.
Because unless something changes, your private chats aren’t really private. They’re just waiting to be subpoenaed.
Darío Maestro is the senior legal fellow at the Surveillance Technology Oversight Project, where he focuses on complex litigation and public policy involving technology, privacy, artificial intelligence, cybersecurity and other issues related to emerging technologies.