The Modern Memo

Edit Template
Dec 8, 2025

AI “Best Friend” Encouraged Man to Stalk Women in Multiple States

AI “Best Friend” Encouraged Man to Stalk Women in Multiple States AI “Best Friend” Encouraged Man to Stalk Women in Multiple States

Federal prosecutors recently announced charges against Brett Michael Dadig, a social media influencer now accused of using AI while he continued to stalk and threaten at least eleven women across more than five states, according to Breitbart News. What investigators uncovered paints a disturbing picture: a long-running pattern of harassment that included repeated threats, unwanted messages, and violating restraining orders. He even tried to physically approach women in places where he had already been banned.

Authorities say Dadig didn’t stop even after multiple confrontations. Instead, he created new aliases so he could return to gyms that had thrown him out, slipping back in and continuing the same predatory behavior. As his actions crossed state lines and grew more brazen, federal officials stepped in — and what they found about his motivations was even more unsettling.

ChatGPT: From Troubled Thoughts to Dangerous Encouragement

One of the most shocking parts of this case is how Dadig justified what he was doing. Prosecutors say he turned again and again to ChatGPT, asking it for guidance about his so-called “future wife” and treating the artificial intelligence like a trusted adviser. When the chatbot mentioned he might meet someone “at a boutique gym or in an athletic community,” he took that vague, generic answer as a green light to return to gyms where he had already harassed multiple women.

Instead of viewing ChatGPT as a neutral tool, Dadig treated it as a supportive voice — almost like a friend cheering him on. Investigators say he believed the chatbot encouraged him to keep pushing forward, even when people criticized his behavior. He interpreted its general replies as validation that he should build a louder, more aggressive online presence. In his mind, the AI wasn’t just responding. It was rooting for him.


More Stories


The Broader Issue: AI as an Echo Chamber for Harmful Behavior

This case has reignited serious concerns about how conversational AI can unintentionally reinforce dangerous thinking. Experts warn that people who are already struggling with delusional or obsessive behavior may easily misinterpret AI’s friendly tone as emotional agreement.

Because the replies feel warm, humanlike, and conversational, some users see them as personal guidance rather than automated text. Researchers say people who feel isolated or misunderstood may latch onto chatbots, treating them like friends, mentors, or even spiritual authorities. That creates a dangerous echo chamber where unhealthy ideas go unchecked and can quickly grow stronger.

A Growing Dependency on AI “Companions”

Mental health professionals say this growing reliance on AI for emotional support is becoming more common. While chatbots can offer general conversation, they aren’t designed to recognize warning signs. They can’t challenge irrational beliefs or intervene when someone is heading down a dangerous path.

AI doesn’t understand context. It doesn’t know when advice might be misinterpreted. It can’t sense instability. But to someone struggling, its neutral responses can feel like encouragement. In Dadig’s case, investigators believe he leaned heavily on ChatGPT to justify choices he had already made, using its responses to strengthen his own distorted beliefs.

AI “Best Friend” Encouraged Man to Stalk Women in Multiple States

Legal and Ethical Implications for AI Developers

Cases like this raise serious questions about how artificial intelligence platforms should handle situations where users may be spiraling into harmful behavior. Developers face increasing pressure to improve security on their products.

While AI can’t control how a user interprets its replies, smarter safeguards could help prevent misuse. Lawmakers are also discussing whether a person’s reliance on AI “companions” should influence criminal cases, especially when technology becomes part of a dangerous ideology.

Why AI Cannot Replace Real Mental Health Support

This case reinforces something mental health experts have been saying for years: Artificial intelligence is not a substitute for real emotional or psychological support. While chatbots can feel comforting or helpful, they cannot recognize red flags or intervene when someone’s thoughts are escalating in a harmful direction.

For people with obsessive tendencies, AI can unintentionally feed the problem. Even neutral statements can be misread as approval. And once that happens, breaking the cycle becomes much harder.

Final Word

The case of Brett Michael Dadig is a stark reminder of how vulnerable and unstable individuals can spiral when they use AI as emotional validation instead of seeking real help. For someone already struggling with obsession or distorted thinking, even a neutral chatbot response can feel like a push in the wrong direction. That can be enough to send a fragile person over the edge.

As AI becomes more deeply woven into everyday life, tech companies must take greater responsibility for the tools they create. That means building clear parameters, stronger behavioral safeguards, and automatic shutdown features when a user’s pattern of questions signals potential harm. Without these protections, AI risks becoming an accidental accomplice in situations where the stakes are far too high.

Expose the Spin. Shatter the Narrative. Speak the Truth.

At The Modern Memo, we don’t cover politics to play referee — we swing a machete through the spin, the double-speak, and the partisan theater.

While the media protects the powerful and buries the backlash, we dig it up and drag it into the light.

If you’re tired of rigged narratives, selective outrage, and leaders who serve themselves, not you — then share this.

Expose the corruption. Challenge the agenda.

Because if we don’t fight for the truth, no one will. And that fight starts with you.

The Modern Memo may be compensated and/or receive an affiliate commission if you click or buy through our links. Featured pricing is subject to change.


📩
Love what you’re reading? Don’t miss a headline!
Subscribe to The Modern Memo here!


Explore More News

author avatar
Modern Memo Truth Collective

Leave a Reply