The Matrix is Glitching, and Your Privacy is the Code It's Stealing
I’ve been watching the world lately, and I can’t shake this feeling… the Matrix is glitching.
You know that quiet, humming confidence we all have? The one that tells us our digital lives are under our control? That the "delete" button actually works? That our private chats are, well, private?
Yeah, that’s all part of the simulation. And it’s starting to break down.
A recent court ruling just blew a massive hole in this illusion. I'm talking about a legal earthquake that shattered the very foundation of our digital trust.
On May 13, 2025, U.S. Magistrate Judge Ona T. Wang (Southern District of New York) issued an order that basically told OpenAI: "You can't delete anything. Keep all of it. Every temporary chat, every deleted message, every single word, indefinitely."
Yes, you heard me. Indefinitely.
I’m sure we’ve all hit that “delete thread” button, thinking our conversations vanish. But that's just a pretty little animation designed for you. The unsettling reality is that the data remains, waiting to be used at the ‘right moment’.
We've been operating under the illusion that our digital actions are temporary, but the system isn't pretending to honour our choices anymore, is it?
It’s a glitch in the code, and a chilling reminder: we don't own our digital selves.
Source: (1)
The "AI Therapist" is Actually a Digital Snitch
This gets even wilder. Many of us have started using AI for things we wouldn't normally share with friends or family. We're asking for advice, exploring complex feelings, or even using it as a stand-in for a therapist. It feels safe, right? Anonymous, non-judgmental, and right there whenever you need it.
But here’s where the glitch gets scary: your chats with an AI have zero confidentiality.
OpenAI CEO Sam Altman has explicitly warned that there is no legal or therapeutic confidentiality for these conversations. He even floated the idea of an "AI privilege" to protect these chats, similar to how conversations with a lawyer or doctor are protected. But guess what? The law doesn't recognize this. Not yet, anyway.
This means that every single deep, personal, or vulnerable thought you’ve shared could be subpoenaed and used against you in a legal case. The AI you're confiding in could become a witness for the prosecution. Let that sink in. Your digital confessional is actually an open book for anyone with a court order.
The Leaks You Don’t See Coming
But wait, there's more. The privacy threats don't stop with legal subpoenas. They're baked into the technology itself.
A recent study revealed that a staggering 98.8% of custom GPTs are vulnerable to attacks that can leak their internal instructions. This means that a malicious actor could trick these custom bots into spilling their secrets, potentially exposing user data or internal procedures. It's like finding a skeleton key that unlocks thousands of digital doors.
This isn't hypothetical anymore. The code is breaking, and your data is spilling out of the cracks. These aren't just one-off bugs; they're systemic flaws that show the entire system is built on a shaky foundation.
Sources: (4)
This isn’t a Drill. It’s the Awakening.
This isn't just about one company. This is a system-wide failure, a total breakdown of the digital trust we’ve been operating under.
Contractors report that they review up to 70% of conversations with Meta's AI Assistant, including identifiable data: names, phone numbers, sexual content, therapy-like confessions.
Anthropic's Claude is marketed as "safer AI," yet user conversations are retained and used for training unless users specifically opt out.
Google Gemini’s integration with Gmail, Drive, and Docs raises alarms about how much corporate and personal data Google could access. Critics argue users are often “auto-opted in” to data collection, with little transparency.
Clearview AI has built a massive facial recognition database by scraping billions of images from social media without consent.
OpenAI faces a €15 million fine from Italy's data authority for illegally collecting personal data.
The truth is, the Matrix has been using us all along. We've been providing the data, the secrets, the personal thoughts, all while thinking we were in control.
The delete button was just a placebo. The privacy policy was just a suggestion. The entire system was built to collect, not to protect.
The glitches have now gotten too big to ignore. The illusion of control has shattered.
The question is, now that you’ve seen the code, what are you going to do?
Are you going to keep living inside the simulation, or are you ready to face the reality that our digital privacy is just an illusion?
- Aakash Sood, CFA, Equanimity Investments