The bottom line
For the kind of conversations Forum Sage is built for -- preparing for meetings, thinking through leadership challenges, exploring forum dynamics -- the privacy risk is real but small, and on the dimensions that matter (persistence, number of copies, discoverability) smaller than the risk of sending an email. Every documented case where AI chats caused problems involved confessions to crimes, uploads of trade secrets, or active litigation strategy.
How Forum Sage handles your data
There are two ways to use Forum Sage, and they handle privacy differently.
The sidebar chat on this website sends your messages through our own server to OpenAI's API. Under API terms, your inputs and outputs are retained by OpenAI for up to 30 days for abuse monitoring, then permanently deleted; they are never used for training. There is no chat history, no account, no memory. When you close the browser tab, the conversation is gone from your end, and within 30 days it is erased from OpenAI's servers too. This is temporary by design -- no settings to change, nothing to opt out of. OpenAI retention policy
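For readers who want to see the mechanics, here is a minimal sketch of how a stateless sidebar proxy like this can work -- an illustration, not our production code. It assumes Node 18+ with Express; the route path and model name are placeholders, while the endpoint URL and request format are OpenAI's standard chat completions API.

```typescript
// Minimal sketch of a stateless chat proxy -- illustrative, not production code.
import express from "express";

const app = express();
app.use(express.json());

app.post("/api/sidebar-chat", async (req, res) => {
  // The conversation lives only in the visitor's browser tab: the client
  // sends the full message list with every request, and this server keeps
  // no logs, no database, and no session state.
  const { messages } = req.body; // e.g. [{ role: "user", content: "..." }]

  const upstream = await fetch("https://api.openai.com/v1/chat/completions", {
    method: "POST",
    headers: {
      Authorization: `Bearer ${process.env.OPENAI_API_KEY}`,
      "Content-Type": "application/json",
    },
    // Data sent via the API is retained by OpenAI for up to 30 days for
    // abuse monitoring and is never used for training.
    body: JSON.stringify({ model: "gpt-4o-mini", messages }),
  });

  const data = await upstream.json();
  // Return only the assistant's reply; nothing is written anywhere.
  res.json({ reply: data.choices[0].message.content });
});

app.listen(3000);
```

The design choice that matters is statelessness: because the server never stores the exchange, the only copy outside your browser is the one inside OpenAI's 30-day abuse-monitoring window.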
The standalone Forum Sage on ChatGPT runs through ChatGPT's consumer product. By default, conversations are saved to your chat history, persist indefinitely, and may be used for model training unless you've opted out. To get temporary behavior here, you need to enable Temporary Chat mode before starting a conversation. With Temporary Chat on, the conversation never appears in your history, is never used for training, and auto-deletes from OpenAI's servers within 30 days -- same as the sidebar. Temporary Chat FAQ
The sidebar is the more private option by default. The standalone version on ChatGPT is more capable -- it has access to the full knowledge files, remembers context within a session, and gives deeper responses -- but it requires you to manage your own privacy settings.
After 30 days, is the data really gone?
When OpenAI deletes a conversation -- whether through the 30-day API window or a Temporary Chat expiration -- it is removed from their production systems. Internal backups may retain it for up to 30 additional days, making the practical maximum roughly 60 days before complete erasure. OpenAI retention policy
There are two exceptions. First, data that was already de-identified and separated from your account may persist -- but that only happens if you left training enabled, which doesn't apply to the sidebar or to Temporary Chat. Second, data OpenAI is under a legal obligation to retain, such as under a court order. OpenAI privacy policy
After 60 days with no legal hold in place, the data is gone. Not hidden, not archived, not recoverable. If it doesn't exist, it can't be subpoenaed.
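If you want the timeline concretely, the arithmetic is two 30-day additions. A toy sketch, with illustrative dates:

```typescript
// Worst-case erasure timeline for one conversation -- dates are illustrative.
const addDays = (d: Date, days: number): Date =>
  new Date(d.getFullYear(), d.getMonth(), d.getDate() + days);

const conversation = new Date(2026, 2, 1); // March 1, 2026

// Day 30: removed from production systems (end of abuse-monitoring window).
const removedFromProduction = addDays(conversation, 30);
// Day 60: worst case for internal backups to cycle out as well.
const purgedFromBackups = addDays(removedFromProduction, 30);

console.log(removedFromProduction.toDateString()); // Tue Mar 31 2026
console.log(purgedFromBackups.toDateString());     // Thu Apr 30 2026
```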
Does anyone at OpenAI read your conversations?
OpenAI runs automated classifiers on all conversations to flag potentially harmful content. When the classifiers flag something, a human reviewer on OpenAI's Trust and Safety team may look at it. Human review also occurs when users report violations, during security incident investigations, and in response to law enforcement requests. OpenAI safety overview
OpenAI has never disclosed what percentage of conversations are reviewed by humans. One third-party analysis estimated 1-2% through random quality assurance sampling, but that figure has not been confirmed. What is known: the review team is small relative to the hundreds of millions of weekly conversations; access requires secure workstations with audited just-in-time approval; and even if you opt out of training, conversations may be reviewed during the 30-day abuse monitoring window. OpenAI safety practices
The realistic scenario: a conversation about preparing for a forum meeting or thinking through a leadership challenge is vanishingly unlikely to trip a safety classifier, and without a flag, a user report, or a legal request, no human at OpenAI will see it.
Could a conversation be subpoenaed?
Yes. Federal courts have established that AI chat logs are electronically stored information, treated the same as emails and text messages for litigation purposes. There is no "AI privilege" protecting them. Sam Altman confirmed this directly in July 2025, noting that therapists and lawyers have legal privilege but ChatGPT does not, and that OpenAI could be compelled to hand over conversations. VentureBeat
In February 2026, a federal judge rejected the argument that conversations with an AI could be protected by attorney-client privilege -- even when the user was feeding attorney advice into the chatbot to develop a defense strategy. The ruling: an AI is not an attorney, holds no license, and owes no duty of loyalty. Hunt Ortmann analysis
But look at what actually ended up in court. A CEO used ChatGPT to develop a strategy for avoiding a $250 million contractual payout, executed the AI's recommendations, and deleted the logs. A financial executive fed his attorneys' privileged advice into an AI to build a defense strategy, and the FBI found it when they seized his devices. An 18-year-old consulted an AI chatbot about self-defense law hours before a fatal shooting; prosecutors used the logs to prove premeditation. Cyber Security News · Lynchburg News
Every one of these cases involved criminal conduct, trade secret misappropriation, or active litigation strategy. There is no documented case of someone's AI conversations about personal feelings, leadership challenges, or interpersonal dynamics being used against them.
The NYT preservation order
In May 2025, a federal judge ordered OpenAI to preserve all ChatGPT conversation logs -- including deleted ones -- as part of the New York Times copyright lawsuit. The order ended on September 26, 2025. OpenAI returned to standard 30-day deletion for all new conversations. As of March 2026, there is no active preservation order, though the underlying lawsuit is still live. OpenAI's response
How does this compare to email, text, and other channels?
Forum members discuss sensitive topics across many channels -- email threads about scheduling and logistics, texts between members, Google Docs with shared notes, conversations with therapists, and sometimes with attorneys. The table below compares these channels on the dimensions that matter most for privacy. Therapist and attorney are included because they're the two relationships where confidentiality is most often expected, and they illustrate where legal privilege exists and where it doesn't.
| | Email | Text | Google Docs | Forum Sage (sidebar) | Therapist | Attorney |
|---|---|---|---|---|---|---|
| How long it exists | Indefinitely | Indefinitely on device + cloud | Indefinitely | 30-60 days, then gone | 7-15 years (varies by state) | Indefinitely (firm records) |
| Copies | 6-12+ (servers, devices, archives, backups) | 2-4 (devices, cloud backup) | 1 primary + version history + backups | 1 (OpenAI API servers) | 1-3 (EHR, notes, billing) | 1-3 (firm files, email, cloud) |
| Can be subpoenaed | Yes -- routine | Yes -- common | Yes -- routine | Yes -- but only if data still exists | Yes -- when privilege is pierced | Rarely -- strong privilege |
| Legal privilege | None | None | None | None | Yes, with exceptions | Yes, with exceptions |
| Employer / admin can read | Yes (corporate email) | No (personal phone) | Yes (Workspace admin) | No | No | No |
| Used for AI training | Varies by provider | No | Varies by Google policy | No (API) | N/A | N/A |
| Risk window | Years to permanent | Until device is wiped | Years to permanent | 30-60 days | Years (EHR records) | Decades (firm archives) |
The asymmetry is striking. An executive who discusses a difficult board relationship over Gmail creates a record that may persist across a dozen systems for years, is readable by corporate IT, and will be one of the first things opposing counsel requests in discovery. The same discussion in the Forum Sage sidebar creates a record in one system for 30-60 days that no employer can access. Data Studios comparison
Text messages are somewhere in the middle. Carriers retain almost no SMS content -- AT&T and T-Mobile keep none, Verizon keeps content for 3-5 days. But messages persist on devices and cloud backups indefinitely. Texts have been central evidence in high-profile trials, congressional investigations, and divorce proceedings. TIME
Google Docs are retained indefinitely, with full version history. Google Workspace administrators can access any document in the organization. Google accepts civil subpoenas and has formal processes for responding to legal requests for user data. Google legal process · Google retention policy
How does this compare to talking to a therapist?
Forum conversations often touch on the same territory as therapy -- family, identity, purpose, difficult relationships. So the comparison is natural. Both carry risks; they're just different risks.
Therapist-patient privilege is one of the strongest confidentiality protections in American law, established by the Supreme Court in Jaffee v. Redmond (1996). But it has real limits. In 34 states, therapists must warn third parties if a patient presents a serious danger of violence. All 50 states mandate reporting of suspected child abuse. And when someone puts their mental health "at issue" in litigation -- claiming emotional distress damages, contesting custody, or asserting an insanity defense -- courts routinely compel therapists to testify, effectively piercing the privilege. SimplePractice · EBSCO Research
Therapists are also human. A 2009 study found that 76% of psychologists surveyed were misinformed about their own state's confidentiality laws. Therapist notes live in electronic health record systems -- the same systems that have suffered over 7,400 large data breaches since 2009. A therapist can be careless with notes. A therapist's office staff can see records. A therapist can mention something to a colleague. HIPAA Journal · PMC study
An AI chatbot has no human memory, no social circle, no mandatory reporting obligation, and no ability to share what you said with anyone over coffee. It doesn't file notes in a medical records system or bill your insurance company. A therapist offers legal privilege that AI lacks. But a therapist also creates a permanent record in a human mind and a medical records system, both of which have been compromised in ways a 30-day auto-deleting API conversation has not.
What to actually worry about -- and what not to
Never share -- on any platform
Passwords, financial account numbers, Social Security numbers, government IDs. These are dangerous regardless of the platform. Don't put them in an email either.
Proprietary source code or trade secrets. This is what made headlines in 2023 when Samsung engineers pasted semiconductor code into ChatGPT.
Active criminal defense strategy. A February 2026 federal ruling confirmed that sharing privileged attorney communications with an AI can destroy the privilege.
Discuss freely
Personal feelings, leadership challenges, career decisions, meeting pre-work. Forum Sage can help structure your thinking around challenging problems. There is no documented case of anyone being harmed by an AI conversation about a difficult board relationship, a career crossroads, or preparing a forum presentation. The content isn't dangerous. It's the same kind of thing people discuss with friends and coaches, and write in journal entries.
People in your life. You're going to mention your spouse, your kids, your colleagues, your board members by name -- especially over time, across multiple conversations. That's normal. The realistic risk of saying "I'm frustrated with my COO, David" in a conversation that auto-deletes from one server in 30 days is smaller than texting a friend the same sentence, which lives on both phones forever.
Emotional and family situations. Working through a difficult conversation with a teenager, processing a health scare, navigating a divorce. These are human experiences, not state secrets. An AI chatbot is not going to gossip about your family, and it is hard to construct a realistic scenario in which "I'm having trouble with my teenager" becomes evidence in a proceeding.
Use judgment
Legal situations. Discussing "I'm navigating a partnership dispute and trying to figure out my options" is fine. Pasting in the full text of a settlement agreement your attorney sent you and asking for strategic advice carries more risk -- not because of the AI, but because sharing privileged communications with any third party can weaken privilege protections. The practical test: would you be comfortable if this conversation appeared in a filing? For most legal questions that aren't about hiding assets or circumventing obligations, the answer is yes.
Details about other people's health, legal situations, or finances. Be thoughtful about the level of identifying detail. "A member in my forum is going through a health crisis" is different from sharing their name, diagnosis, and hospital. It's not that the AI will do something harmful with the information -- it's that other people didn't consent to having their private information entered into a third-party system, even a temporary one.
Three settings that matter
These apply to the standalone Forum Sage on ChatGPT. The sidebar on this website doesn't need any of them -- it's already configured for maximum privacy.
Turn off training. Go to Settings, then Data Controls, then toggle off "Improve the model for everyone." This prevents your conversations from being used in any future model training. Takes five seconds. OpenAI controls
Use Temporary Chat for sensitive conversations. Click the pill-shaped "Temporary" button when starting a new chat. The conversation won't appear in your history, won't be used for training, and will auto-delete from OpenAI's servers within 30 days. This works with custom GPTs, including Forum Sage. Temporary Chat FAQ
Delete conversations when you're done. Once deleted, the conversation is removed from your account immediately and from OpenAI's servers within 30 days (plus up to 30 days for backups). If it doesn't exist, it can't be subpoenaed.