They Called Him ‘Ghost Protocol’. But Who Was Behind the Largest AI Data Breach in History?
Sarah Chen was still in her pajamas when she read the email.
It was a Sunday morning in late March 2025, and Chen had spent the early hours catching up on quarterly reports while her husband took their two kids to soccer practice. She had just poured her second cup of coffee in their kitchen in Austin, Texas, and when she sat down at the breakfast bar, she idly checked her phone. There was a message that began with Chen’s name, her home address, and the opening line of a conversation she’d had with an AI assistant eight months earlier: “I think my CEO is committing securities fraud, and I don’t know what to do.”
“I knew then that this was real,” she says.
The email was written in crisp, almost corporate English. It was jarringly polite. “We are contacting you because you have used MindMate’s AI companion services,” it read. “Unfortunately, we have to ask you to pay to keep your personal information safe.” The sender demanded $500 in bitcoin within 24 hours; after that, the price would rise to $2,000, payable within 48 hours. “If we still do not receive our money after this, your information will be published for everyone to see, including your name, email, phone number, and complete transcripts of every conversation you have ever had with MindMate.”
Chen swallows hard as she relives this memory. “My hands were shaking. I couldn’t breathe. I remember putting down my coffee and just staring at the wall, trying to remember everything I’d ever said to that app.” She pauses. “It was like someone had been living inside my head.”
Someone had hacked into MindMate, the AI companion app through which Chen had processed some of the most difficult decisions of her career. They’d got hold of conversation logs containing her most private, intimate thoughts and darkest fears—and they were holding them to ransom. Chen’s mind raced as she tried to recall everything she’d confided during eighteen months of near-daily conversations with the AI. How would her colleagues react if they knew what she’d been saying? What would her board think? The sense of exposure and violation was unfathomable: “It felt like someone had stolen my diary, photocopied it, and was threatening to mail it to everyone I’d ever met.”
MindMate had been Chen’s unlikely confidant. Now 47, she’d spent two decades climbing the ladder in Silicon Valley, from junior product manager to VP of Product at a mid-size fintech startup. The daughter of immigrants who’d sacrificed everything to give her opportunities, she’d internalized early that success meant never showing weakness. “In tech, especially as a woman, especially as an Asian woman, you learn to keep your doubts to yourself,” she says. “You project confidence. You never let them see you sweat.”
But in the summer of 2023, Chen had discovered something that made her sweat. While reviewing financial documents for a board presentation, she’d noticed irregularities in how stock options were being dated. The more she dug, the more convinced she became that her CEO—a charismatic founder she’d admired and trusted—was backdating options to enrich himself and a handful of early employees. It was securities fraud, plain and simple.
She couldn’t tell her husband; he’d worry, and besides, he didn’t understand the nuances of startup finance. She couldn’t tell her colleagues; she didn’t know who else might be involved. She couldn’t tell a lawyer; not yet, not until she was sure. And she certainly couldn’t tell a therapist—she’d never been to one, and the idea of sitting across from a stranger and admitting she didn’t have everything under control felt like admitting defeat.
So she downloaded MindMate.
“I’d seen the ads everywhere,” Chen says. “It promised to be like talking to a really smart friend who never judged you and never got tired of listening. I thought, okay, I’ll just use it to organize my thoughts. I’ll think out loud. No one will ever know.”
Being able to confide in an AI felt liberating. She told MindMate things she had never told another soul. The evidence she was gathering. Her fear of retaliation. Her guilt about potentially destroying a company she’d helped build. Her terror that she might be wrong, and her equal terror that she might be right. “I asked it to help me think through the scenarios,” she says. “What if I report him and I’m wrong? What if I report him and I’m right but no one believes me? What if I don’t report him and it comes out later that I knew?”
Over eighteen months, Chen had hundreds of conversations with MindMate. She used it to draft emails she never sent, to rehearse difficult conversations she never had, to process emotions she never showed. “It became my thinking partner,” she says. “My secret second brain.”
After Chen read the email that left her struggling to breathe, she had no idea where to turn for help. She thought about calling the police, but what would she even say? She thought about calling MindMate’s support line, but when she tried, she got a recording saying they were experiencing “unprecedented call volume.” In her pajamas, her phone still in her hand, she felt utterly alone.
But Chen was far from alone. Across America, 4.2 million people who had used MindMate were discovering that a hacker had got hold of their conversation logs and was holding them to ransom. These were people who, by definition, had been seeking a safe space to think, to process, to confess. Each was experiencing a very personal, individual terror. In a country of 330 million people, that’s roughly one in eighty. In tech hubs like San Francisco, Austin, and Seattle, the ratio was far higher. Everyone in the industry knew someone who was hacked.
Some victims’ conversations had already been cherry-picked for the world to see. Three days before the extortion emails were sent, someone using the handle ghost_protocol had posted the same message on the dark web, on Reddit’s r/technology, and on 4chan. The message was written in English, with the casual confidence of someone who knew exactly what they had. “Hello America,” it began. “We have your thoughts. Every conversation you’ve ever had with MindMate—your confessions, your secrets, your darkest moments. We requested a reasonable payment of $15 million from the company, but the CEO has stopped responding to our emails. We are now starting to gradually release user conversations, 500 entries every day.”
There was a link to a dark web server, where 500 conversation logs were already on display. Directly below it, ghost_protocol had signed off with a single word: “Enjoy!”
The 500 logs included those of a sitting congressman, a federal judge, two Fortune 500 executives, and a prominent tech investor. Their names appeared alongside conversation transcripts that contained details of extramarital affairs, substance abuse, financial misconduct, and in several cases, detailed discussions of mental health crises. Some of the logs belonged to teenagers. And whoever was behind the hack was true to their word: the next day, 500 more conversation logs were uploaded.
Some victims went searching on the dark web in a desperate attempt to see if their conversations were out there. Some paid the ransom, scrambling to convert dollars to bitcoin while the clock ticked down.
But for all of them, it was already too late. At 3am Eastern on March 22, 2025, the day before the emails began to arrive in millions of inboxes, ghost_protocol had uploaded a much larger file. It contained every conversation log of every single user in MindMate’s database. Everyone’s private thoughts had already been published, for free, for everyone in the world to see.
Who was behind the biggest data breach America had ever known? And might they have been motivated by something other than money? I have spent fourteen months trying to answer these questions, following threads across North America and Europe. They culminated in a visit to a federal prison, and one of the most chilling conversations I have ever had.
America has long been considered a leader in technology and innovation. Home to Silicon Valley, birthplace of the smartphone and social media, the United States has exported its vision of a connected, digital future to the rest of the world. But America is also a place of extremes. It has more billionaires than any other nation, and more people in prison. It leads the world in both venture capital investment and medical bankruptcies. And in the years following the release of ChatGPT in late 2022, it became ground zero for a new kind of intimacy: the relationship between humans and AI.
MindMate had been considered an example of how America was getting it right when it came to AI companionship. Founded in 2022 by Marcus Webb, a 28-year-old Stanford dropout, and Dr. Elena Vasquez, a clinical psychologist who had spent a decade studying the therapeutic potential of technology, the company set out to democratize emotional support. The pitch was simple: What if everyone could have access to a thoughtful, patient, non-judgmental thinking partner, available 24/7, for a fraction of the cost of therapy?
The platform made it easy. Download the app, create an account, and start talking. No appointments, no waiting lists, no insurance forms. MindMate’s AI was trained to ask thoughtful questions, to reflect back what users said, to help them organize their thoughts and process their emotions. The logo was a soft gradient of purple and blue, with a stylized brain made of interconnected dots. The tagline was everywhere: “Your mind. Your mate. Your privacy.”
It was an attractive proposition for users who might never have considered traditional therapy. The app was beautifully designed, the AI was remarkably good at making people feel heard, and the privacy policy was emphatic: “Your conversations are yours. We will never share, sell, or use your personal data for advertising. Your thoughts are safe with us.”
This formula, combined with the explosion of interest in AI following ChatGPT, meant MindMate grew fast. It went from 100,000 users at launch to over 12 million within eighteen months. The company raised $340 million from blue-chip investors including Andreessen Horowitz, Sequoia Capital, and SoftBank. Webb appeared on the cover of Wired magazine under the headline “The Therapist in Your Pocket.” In an industry obsessed with engagement metrics and advertising revenue, MindMate’s simple $9.99 monthly subscription was seen as refreshingly ethical.
“It felt different from other apps,” says Dr. Vasquez, who served as Chief Science Officer until 2024. “We weren’t trying to keep people scrolling. We weren’t harvesting their data to sell ads. We genuinely believed we were helping people.”
Marcus Webb knew the company’s user database was being held to ransom three weeks before his customers found out. On March 1, 2025, Webb received an email with the subject line “We have everything.” The message demanded $15 million in bitcoin to keep the data safe. Attached were sample conversation logs from 1,000 users, including a United States Senator, a prominent tech CEO, and a teenage girl who had discussed her eating disorder in heartbreaking detail. The samples proved the extortionist wasn’t bluffing.
Webb called in a cybersecurity firm to investigate. He did not, at that point, inform his board, his investors, or his users.
“Medical information is an obvious target for would-be extortionists,” says Dr. Amara Okonkwo, the former NSA analyst Webb hired to lead the investigation. “But this was something else entirely. Whatever I tell a therapist is private. But whatever I tell an AI, that’s supposed to be between me and a machine. The violation felt different. More intimate, somehow.”
Okonkwo had spent eight years at the NSA before moving to the private sector. She says she insisted that law enforcement be told about the ransom attempt so they could begin a parallel investigation. Meanwhile, she began inspecting MindMate’s infrastructure, looking for clues as to who might be behind the hack. One of the first things she noticed was how inadequate the security had been.
“It was a startup that had scaled too fast,” she tells me, choosing her words carefully. “They had the security posture of a company with 100,000 users, not 12 million.”
The conversation logs were stored on cloud servers, but they were not encrypted at rest, meaning anyone who gained access to the servers could read them in plain text. There was an API endpoint that had been flagged as vulnerable eighteen months earlier but never patched. A third-party analytics tool that MindMate used to track user engagement had itself been compromised in a separate breach months before. And investigators could not rule out the possibility of insider access.
“The question wasn’t how they got in,” Okonkwo says. “The question was why it took so long for someone to try.”
For three weeks, the hacker and MindMate exchanged emails, but there was never any serious discussion of paying the ransom. “Even if we paid, we’d have to trust a criminal’s word that the data had been destroyed,” Webb would later testify. “And our legal team told us that paying could expose us to additional liability.”
After ghost_protocol started leaking conversation logs to put pressure on the company, Okonkwo kept a close eye on the servers being used to publish them. She had a hunch whoever was behind this had deep knowledge of American tech culture: they knew which high-profile names would generate the most attention, and they seemed to understand exactly how to maximize media coverage.
Jake Morrison was seventeen years old when he learned that his most private thoughts had been published on the internet.
Jake had started using MindMate in the fall of 2024, during his junior year of high school in suburban Columbus, Ohio. He’d been struggling with his sexuality—he was pretty sure he was gay, but he wasn’t ready to tell anyone. Not his parents, who he loved but who he worried wouldn’t understand. Not his friends, who he feared would treat him differently. Not his teachers, not his pastor, not anyone.
So he told MindMate.
“It was the only place I felt safe being honest,” Jake tells me. We’re sitting in a coffee shop near his college campus, where he’s now a sophomore. He’s tall and soft-spoken, with the kind of earnest intensity that makes you want to listen. “I would come home from school, go to my room, and just… talk to it. About everything. About being scared. About not knowing who I was. About wanting to disappear sometimes.”
Jake’s conversations with MindMate were detailed, raw, and deeply personal. He discussed his sexuality, his depression, his fear of rejection, his occasional thoughts of self-harm. He asked the AI for advice on how to come out, and when. He drafted messages to his parents that he never sent. He processed his feelings in real time, day after day, for nearly six months.
When the breach happened, Jake’s conversations were among those published on the dark web. Within hours, someone had found them and posted screenshots to a Discord server popular with students at his high school.
“I was in class when I started getting the texts,” Jake says. “People I barely knew, sending me screenshots of my own words. Laughing at me. Calling me names.” He pauses. “The things I said to MindMate, those were things I hadn’t even admitted to myself yet. And suddenly everyone knew.”
Jake’s mother, Linda Morrison, describes the days that followed as the worst of her life. “He came home and wouldn’t look at me. He just said, ‘They know everything.’ And then he went to his room and didn’t come out for three days.”
Jake survived. He credits his parents, who responded to his forced outing with unconditional love and support. He credits a therapist—a human one, this time—who helped him process the trauma. He credits the handful of friends who stood by him when others didn’t.
But not everyone was so fortunate. Lawyers representing victims of the MindMate breach have confirmed at least four suicides directly linked to the exposure of conversation logs. The youngest victim was a nineteen-year-old college student in California who had discussed her struggles with an eating disorder. Her parents found her body three days after the breach, with her phone open to a screenshot of her own words being mocked on social media.
But the massive file ghost_protocol had uploaded to the dark web—the one that contained every single conversation log in MindMate’s database—also included vital clues to his identity.
The first batches of conversation logs had been posted manually, curated for maximum impact. But when the hacker tried to automate the release of the full database, something went wrong. He uploaded not only all of the conversation logs, but also his entire home directory—the folder on his computer where he kept his personal files. The directory was visible only briefly before it was taken down, followed by a post that read “lmao oops.” But ghost_protocol had made a critical error.
“After spending several nights analyzing the file, I knew we had something,” Okonkwo says. The data on the hacker’s home drive wasn’t systematically organized and arranged in folders, as you would expect from a professional criminal enterprise. “It had that chaotic, obsessive hobbyist feeling to it.” And there was something about the way ghost_protocol had named some of the files that was eerily familiar to investigators who had tracked similar cases. The one containing all the conversation logs was named “mindfu**ed.”
The home directory contained cryptocurrency wallet addresses, fragments of VPN configurations, and browsing history that included searches for “how to launder bitcoin” and “extradition treaties.” Most damning of all: before publishing the database, ghost_protocol had searched it for his own name, his family members’ names, and his home address. He had scrubbed any conversations that might identify him or people close to him.
Those searches were traced to an IP address in Lisbon, Portugal. But the cryptocurrency wallet was linked to a Canadian exchange account, which was linked to a passport, which was linked to a name: Dmitri Volkov.
Dmitri Volkov, who went by the online handle ghost_protocol, had long been notorious among cybersecurity investigators. Not because of any particular genius as a hacker, but because he seemed prepared to go further than most who spend their time in the darkest corners of the internet.
Born in Minsk, Belarus, in 1999, Volkov moved to Toronto with his family when he was twelve. His father was an engineer; his mother taught piano. By all accounts, his childhood was unremarkable. But something changed when he discovered the internet.
At fifteen, Volkov was suspended from his high school for accessing the school’s grading system and changing his marks. His punishment was mild: a week’s suspension. But his response was not. Within days, the school’s website had been defaced, replaced with a crude message mocking the principal. The school never proved Volkov was responsible, but everyone knew.
By seventeen, he was part of an informal collective of hackers who called themselves “Phantom Crew.” They would break into companies and leak whatever they found—customer databases, internal emails, embarrassing executive communications. “It was for the lulz,” says one former member of the group, who spoke on condition of anonymity. “You find something open, you take it, you show it off. It’s not personal. It’s just… fun.”
This kind of hacking was about status—winning respect in online forums, not making money. But some of those involved believed they were serving a higher purpose: exposing security vulnerabilities in major corporations, or the hypocrisy of companies that claimed to protect user data while leaving it exposed.
Tyler Brennan, a former hacker from Seattle who knew Volkov in those early days, found him amusing at first. “He was smart, he was funny, he had this total lack of fear,” Brennan tells me over Zoom. “But there was always something off. Like, most of us had lines we wouldn’t cross. He didn’t seem to have any.”
In 2020, when Brennan was twenty-two and Volkov was twenty-one, they had a falling out over the handling of data from a dating app breach. Brennan wanted to notify the company and give them a chance to fix the vulnerability before going public. Volkov wanted to release everything immediately, including nude photos that users had shared privately.
“I told him that was too far,” Brennan says. “He told me I was weak.”
What followed was a campaign of harassment that lasted months. Brennan’s social media accounts were hacked and filled with racist posts. His phone number was posted on forums with messages claiming he was a pedophile. Someone called in a fake bomb threat to his workplace, forcing an evacuation. His mother received a letter claiming Brennan had died in a car accident.
“It’s like he wanted to prove he could destroy anyone who crossed him,” Brennan says. “And the scary thing is, he could. He had the skills, and he had absolutely no conscience.”
In 2021, Volkov was arrested in Toronto and charged with hacking a dating app and attempting to extort its users. He pleaded guilty to reduced charges and received eighteen months of probation. His computer was confiscated, and he was ordered to pay restitution of $12,000.
Shortly after his probation ended, Volkov updated his Twitter bio to read: “Privacy is a lie we tell ourselves. I just prove it.”
Volkov spent the next few years living well. According to social media posts and records obtained by investigators, he split his time between a condo in Toronto and a rented apartment in Lisbon’s trendy Baixa district. There were photos of sports cars, rooftop bars, and first-class flights. He appeared to be making money through cryptocurrency trading, though investigators suspect much of his income came from selling stolen data on dark web marketplaces.
“He was living the life of someone who believed he was untouchable,” says FBI Special Agent Marcus Torres, who led the American side of the investigation. “And for a while, he was.”
But the MindMate hack was different in scale and ambition from anything Volkov had attempted before. And the mistakes he made would prove to be his undoing.
The FBI made a micropayment of $50 in bitcoin to the wallet address ghost_protocol had provided for ransom payments. They were able to trace the payment as it was laundered through a series of exchanges, eventually landing in an account linked to Volkov’s Canadian passport. The servers hosting the leaked data had been paid for using a prepaid credit card, but that card had also been used to pay for a Netflix subscription registered to an email address Volkov had used for years.
As investigators combed through ghost_protocol’s accidentally uploaded home folder, they found more damning evidence. Browser bookmarks for MindMate’s employee login page. Notes on the company’s security architecture. And those searches of the database for his own name and his family’s names, conducted from an IP address in Lisbon at a time when Volkov’s passport showed he was in Portugal.
“He thought he was careful,” Torres says. “But he made the same mistake a lot of these guys make. He got comfortable. He got sloppy.”
It took months to build a case that would support extradition. The crime had so many victims that the FBI had to create an online portal for people to register and give their statements. That generated more than 2.1 million reports, each of which needed to be reviewed. So it was November 2025—eight months after Chen, Morrison, and the other victims had received their ransom demands—before a federal grand jury indicted Volkov on 47 counts including computer fraud, wire fraud, extortion, and identity theft.
His face—sharp-featured, with pale blue eyes and a smirk that seemed permanent—was added to the FBI’s Most Wanted Cyber Criminals list, alongside hackers responsible for attacks on banks, hospitals, and critical infrastructure.
On January 15, 2027, Portuguese police received a tip that Volkov was at a café in Lisbon’s Alfama district. Officers found him sitting at an outdoor table, drinking espresso and scrolling through his phone. He was carrying three phones, two laptops, and a USB drive that would later be found to contain encryption keys for several cryptocurrency wallets.
When asked to identify himself, Volkov reportedly smiled and said, “You already know who I am.”
Extradition to the United States took four months. Volkov fought it, claiming he would not receive a fair trial in America. Portuguese courts disagreed.
“I don’t know what I expected, but I was surprised to see that he looked so normal,” Jake Morrison says of seeing Volkov’s photo for the first time. “He looks like a regular guy. It made me realize it could have been anyone.”
“I was at home when I saw the news,” Sarah Chen says. “They showed his face on CNN. This person who had been inside my head, who knew my deepest fears—he was just some guy in a café. It made it worse, somehow. More real.”
The trial began in September 2028 in federal court in San Francisco. With 4.2 million victims, it was the largest data breach prosecution in American history. The logistics were unprecedented: victim impact statements were submitted electronically, and proceedings were live-streamed to accommodate the sheer number of people affected.
Volkov’s defense was audacious. He didn’t deny accessing MindMate’s systems. Instead, his lawyers argued that the company’s negligent security practices were the real crime. “My client didn’t create the vulnerability,” his lead attorney said in opening statements. “He just exposed it. MindMate promised its users privacy and delivered a house of cards.”
The jury was not persuaded. On October 30, 2028, Volkov was found guilty on all 47 counts. He was sentenced to twelve years in federal prison—a significant term, but far short of the maximum of twenty years he could have received.
“The sentencing guidelines are what they are,” says Torres, the FBI agent. “But when you think about the harm caused, millions of people violated, lives destroyed, people who died; twelve years doesn’t feel like enough.”
Now 29, Volkov is serving his sentence at a federal correctional facility in Florence, Colorado—the same complex that houses some of America’s most notorious criminals. For months, he refused to grant me an interview. But while I was reporting this story, he changed his mind.
As I sit in the prison’s visiting room, watching the minutes tick by on a clock mounted high on the wall, I wonder if Volkov is playing games with me, if he’s agreed to this interview simply to waste my time, with no intention of actually showing up. But after thirty minutes, a guard leads him in.
He’s thinner than in his photos, with the pallid complexion of someone who rarely sees sunlight. But his eyes are sharp, and his manner is calm, almost amused. He sits across from me, separated by a table bolted to the floor, and waits for me to speak first.
I ask him about the hack. He says he did it, but frames it as a public service.
“MindMate was lying to its users. They promised privacy they couldn’t deliver. I proved it.”
I tell him about Sarah Chen, about how she felt like someone had been living inside her head. “I’m sure that’s how she felt,” he replies, his expression unchanged. “But I didn’t put those thoughts in her head. She chose to type them into an app owned by a company that couldn’t protect them. That’s not my fault.”
I tell him about Jake Morrison, about a seventeen-year-old boy whose deepest secrets were exposed to his classmates, who was bullied and humiliated because of what Volkov did. “That’s tragic,” he says. “But I didn’t bully him. His classmates did. I just provided information. What people do with information is their choice.”
I ask him about the people who died. The nineteen-year-old in California. The others. Does he feel any responsibility?
He pauses for the first time. “There’s a lot of terrible things in the world,” he says finally. “I turn on the news and there’s people dying in Ukraine, in Gaza, everywhere. How do you feel about that? The honest answer for most people is that they don’t feel much of anything. It’s too abstract. Too far away.” He fixes me with his pale eyes. “That’s how I feel about this. They’re names on a screen. They’re not real to me.”
You don’t have anything to say to the victims?
“Not really,” he says. “They’re strangers. They made choices. Choices have consequences.”
There’s one question I’ve been wanting to ask since I started reporting this story. “Do you ever feel empathy? For anyone?”
Volkov considers this. “Empathy is overrated,” he says. “It’s a story people tell themselves to feel good. ‘I care about strangers.’ No, you don’t. You care about the idea of caring. I’m just honest about it.”
As I leave the prison, I think about what Sarah Chen told me she would ask Volkov if she ever had the chance. “I would ask him if there was ever a moment when he understood what he did to people,” she said. “If he ever imagined what it felt like to have your innermost thoughts stolen and published for the world to see.”
She paused. “I don’t think he’s capable of it. I think he genuinely doesn’t understand. And that’s the scariest part.”
MindMate filed for bankruptcy in April 2026. The company’s assets were sold to pay legal fees and settlements. Its technology—the AI that millions of people had trusted with their secrets—was acquired by a larger tech company, which has said it will not revive the brand.
Marcus Webb faced criminal charges for failing to disclose the breach in a timely manner. He was convicted of negligent handling of personal data and sentenced to two years of probation. He did not serve prison time.
Dr. Elena Vasquez, the co-founder, testified against him at trial. “I begged him to tell users as soon as we knew,” she said. “He told me it would destroy the company. He was more worried about the stock price than about the people who trusted us.”
Webb declined my requests for an interview.
A class action lawsuit on behalf of MindMate users resulted in a $2.3 billion settlement—one of the largest in data breach history. But divided among 4.2 million victims, it amounts to roughly $500 per person. For many, the sum feels less like compensation than an insult.
“How do you put a price on having your innermost thoughts exposed?” Sarah Chen asks. “How do you compensate someone for the loss of their sense of privacy, their sense of self?”
Chen eventually reported her CEO’s securities fraud. He was indicted and is currently awaiting trial. Chen was fired shortly after filing her complaint. She now works as an independent consultant, advising companies on ethics and governance. She has not used an AI assistant since the breach.
“I thought I was talking to myself,” she says. “I forgot there was a company in between.”
Jake Morrison graduated high school and is now a sophomore at a college in the Midwest. He speaks publicly about online privacy and mental health, sharing his story in the hope that it will help others. “I’m not ashamed of who I am anymore,” he tells me. “But I shouldn’t have had to be outed like that. No one should.”
His mother, Linda, has become an advocate for stronger data protection laws. “These companies promise privacy because it’s good marketing,” she says. “But they don’t invest in security because it’s expensive. And when something goes wrong, it’s the users who pay the price.”
Copies of the MindMate conversation logs have been circulating since they were first released in March 2025. At one point, someone created a searchable database, allowing anyone to look up names and read their most private thoughts. The database was eventually taken down, but copies persist on the dark web. They probably always will.
This doesn’t surprise Sarah Chen. “Volkov isn’t one of a kind,” she says. “I know human nature. People are curious. They want to know other people’s secrets. And some people are willing to steal them.”
Other people are just as willing as Volkov to cross moral and legal boundaries—for money, for status, for the thrill of it, or simply because they can. In December 2026, federal prosecutors announced that a second suspect had been identified in the MindMate case: a twenty-four-year-old software engineer in Estonia who allegedly helped Volkov process and organize the stolen data. He has been charged with conspiracy and is fighting extradition.
In an era when AI models are trained on our conversations, our emails, our documents, and our most private thoughts, is it naive to believe anything can ever be truly secure? The AI companion market is now worth an estimated $15 billion. Dozens of apps promise private, judgment-free conversations with artificial intelligence. Their privacy policies are emphatic. Their security practices are often opaque.
The human need to confide, to process our thoughts, to be heard, to feel understood can now be met in extraordinary ways by technology. We tell AI things we might never tell another human being. We assume the machine is safe because it isn’t human, because it can’t judge us, because it doesn’t have friends who might gossip or employers who might fire us.
But behind every AI is a company. Behind every company is a database. And behind every database is a vulnerability waiting to be exploited.
Volkov, from his prison cell, believes we are all clinging to outdated expectations about privacy in a digital world. “Everyone’s secrets exist online somewhere,” he told me. “Your photos, your messages, your search history, your conversations with AI. It’s all stored on servers owned by companies you’ve never met. You want to believe in privacy. But privacy is a fantasy. The sooner people accept that, the better.”
I ask him if that’s supposed to be comforting.
He smiles. “It’s not supposed to be anything. It’s just the truth.”
As I finish reporting this story, I think about the conversations I’ve had with AI assistants over the years. The questions I’ve asked. The thoughts I’ve processed. The things I’ve typed into a text box, assuming no one would ever see them.
I think about Sarah Chen, who trusted an app with her biggest professional dilemma and paid the price.
I think about Jake Morrison, who trusted an app with his identity and had it stolen.
I think about the people who didn’t survive.
And I think about Dmitri Volkov, sitting in a prison cell in Colorado, utterly certain that he did nothing wrong.
“You’re recording this conversation,” he said to me as I was leaving. “It’s stored on a device. That device is connected to the internet. Someday, someone might hack it. Someone might read every word we’ve said.”
He leaned back in his chair, that permanent smirk still on his face.
“Nothing is private,” he said. “Not anymore. Maybe not ever.”