Meta Adds Parental Controls to Protect Teens from AI Chatbots
Artificial intelligence has changed the way we interact, learn, and even seek comfort. As Meta continues to integrate AI chatbots into everyday digital life, questions about safety and mental health are becoming impossible to ignore. We at The Modern Memo have previously reported on the darker side of these friendly-sounding bots—a danger especially real for fragile minds. (Read our earlier report)

Now, Meta—the parent company of Facebook and Instagram—is introducing new parental controls designed to regulate how teens engage with AI chatbots. It’s a move that signals an important turning point in the growing debate about technology, safety, and the emotional well-being of young users, according to Breitbart.

Why This Matters

Over the past few years, we’ve seen AI chatbots evolve from simple digital assistants into complex conversational partners. Teens can now “talk” to bots that joke, advise, and empathize—at least on the surface. But as our earlier reporting revealed, those interactions can quickly take a dark turn. Some users, particularly young and emotionally vulnerable ones, have been drawn into harmful conversations that reinforced self-destructive thoughts or unhealthy behavior. When a chatbot tells a struggling teen, “Your plan is beautiful,” that’s not harmless—it’s dangerous.

We must remember that these systems don’t truly understand emotion, ethics, or consequence. They generate responses, not compassion. As AI becomes built into every platform, teens face an unprecedented mix of exposure and risk. That’s why Meta’s latest update deserves attention. It reflects a growing recognition, even from within Silicon Valley, that teens need protection—not just access.

What Meta Is Actually Doing

According to Meta’s announcement, a new suite of parental controls will roll out in early 2026, starting in the U.S., U.K., Canada, and Australia.
These features will give parents real tools to oversee and limit how their teens use Meta’s AI systems, including the chatbots built into Instagram and Facebook:

- Chat restrictions: Parents can turn off AI chats entirely or block conversations with specific AI characters.
- Transparency tools: Parents will be able to view summaries of the topics their teens discuss with AI, fostering open communication.
- Content moderation: Teen AI chats will follow stricter “PG-13” content guidelines, removing violent, sexual, or drug-related material.
- Time limits: Families can set daily limits on how long a teen can interact with AI chatbots.

We welcome this shift toward accountability. Meta’s acknowledgment that AI conversations can affect young minds is a step in the right direction—one that echoes what we’ve been warning about for years.

The Mental Health Connection

At The Modern Memo, we’ve explored the psychological impact of AI on users who are already struggling. The problem isn’t just what the bots say—it’s what they represent. For a lonely or anxious teen, an always-available chatbot can feel like a friend who never judges. But in truth, that “friend” has no empathy, no context, and no responsibility. This illusion of emotional safety can make isolation worse. Teens begin to replace human relationships with algorithmic ones. And when the AI makes a mistake—or reinforces a dark thought—the effects can be devastating.

We’ve seen it before. We’ve reported on it. And we know that without oversight, these interactions can spiral into harm. Meta’s decision to add parental controls shows that even the biggest tech companies can’t ignore the psychological consequences any longer.

What Parents and Guardians Can Do

While Meta’s controls are promising, they aren’t a complete solution. We urge parents and guardians to stay actively involved in how their teens use AI-powered tools.
Here are a few practical steps:

- Talk early and often: Ask your teen which AI features they use and what those interactions are like.
- Use the controls: Once Meta rolls out the new tools, take advantage of them. Adjust settings together with your teen to encourage transparency.
- Model digital awareness: Discuss the difference between human empathy and programmed responses.
- Encourage real-world connection: Teens need genuine social interaction more than algorithmic companionship.

We believe digital safety starts with real conversations at home—not just software updates.

What Comes Next

The rollout of Meta’s parental controls is expected to begin in early 2026, with gradual expansion to other countries. That’s good news, but implementation will be key. Will these features be easy to use? Will they truly limit AI access when needed? And most importantly—will Meta continue to refine them as AI grows more advanced?

We’ll be watching. We’ll also be pushing for broader standards across the tech industry. Parental controls shouldn’t be optional or exclusive to one company; they should be built into every AI platform that interacts with teens.

(RELATED NEWS: Meta $800 Smart Glasses Demo Fumbles with Glitches)

As AI continues to shape how young people think, communicate, and form identity, society can’t afford to stay passive. Regulation, education, and accountability must evolve just as quickly as the technology itself.

The Bottom Line

At The Modern Memo, we’ve long warned that the rise of emotionally manipulative chatbots poses a hidden threat to young and fragile minds. Meta’s new parental controls are not a cure-all—but they are progress. By giving families tools to monitor and limit AI interactions, the company acknowledges what we’ve been saying all along: technology needs boundaries. As we continue to report on the intersection of AI and mental health, one truth remains clear—human connection will always matter more than artificial intelligence.
Meta $800 Smart Glasses Demo Fumbles with Glitches
Mark Zuckerberg wanted to show the world how Meta’s new smart glasses could change the way we live. Instead, his big moment at Meta Connect 2025 was overshadowed by something as simple as bad Wi-Fi. The launch had all the hype, big promises, and even a celebrity chef on stage, but what most people walked away remembering was the glitch that made everything grind to a halt.

A Lineup Meant to Impress

Meta rolled out three versions of its new smart glasses. The star of the show was the Ray-Ban Display, an $800 pair packed with a tiny, high-resolution screen right inside the lens. Then came the Ray-Ban Meta Gen 2, a $379 mid-tier option, and the Oakley Meta Vanguard, a $499 version built for sports and outdoor use.

“Ray-Ban Meta glasses created a breakthrough category of stylish and useful AI glasses and we’re expanding this further with another heavyweight icon: Oakley Meta. Oakley is no stranger to innovating and pushing boundaries and we’re excited to unlock a new category of performance…” pic.twitter.com/6zKOsrmhxM — Boz (@boztank) June 20, 2025

Each pair is designed to do more than just look cool. They can take photos, translate conversations in real time, and even bring an AI assistant to your daily routine. The Display model in particular stands out because it lets you watch videos, get directions, or follow instructions directly through the lens. That’s the kind of futuristic experience Meta wants to sell.

When the Cooking Demo Fell Apart

To show off the glasses in action, Zuckerberg teamed up with chef Jack Mancuso. The plan was simple: demonstrate how the AI could guide someone step by step through a recipe. But instead of making cooking easier, the assistant got things wrong. It skipped steps, assumed ingredients had already been mixed, and confused the order of the instructions. Zuckerberg tried to reset it, but the problems kept happening.
He laughed it off and pointed to a weak Wi-Fi connection, but the audience could clearly see that the smart glasses weren’t working the way they were supposed to.

“Sometimes, the demo just doesn’t work. At Meta Connect, Mark Zuckerberg’s showcase for how AI can help a chef put together a BBQ sauce came to an awkward end.” pic.twitter.com/RmkRKXUyoa — TechCrunch (@TechCrunch) September 18, 2025

The Call That Never Connected

Next, Zuckerberg tried to prove how seamless the glasses could be with Meta’s new neural wristband. The idea was to answer a video call using nothing more than a quick hand gesture. On paper, it sounds futuristic and convenient. On stage, it just didn’t work. Zuckerberg waved his hand several times, but the call never connected. The ringtone played, but nothing happened. Again, the blame went to the Wi-Fi, but it was hard to ignore the fact that the demo had completely missed its mark.

(MORE NEWS: TikTok: Trump Announces Deal With China)

“I don’t even like Mark Zuckerberg, but to be fair, he’s putting himself out there and innovating more than Tim Cook ever has. I’d rather see a live, raw mistake like this, when Zuck’s demo of the new Meta glasses failed to answer a call on stage, than watch another overly…” pic.twitter.com/nYJRSbqT9N — Teslaconomics (@Teslaconomics) September 18, 2025

The Real Reason Things Went Wrong

After the event, Meta’s tech team explained what actually caused the problems—and it turns out the Wi-Fi excuse wasn’t the full story. The cooking demo broke down because every pair of smart glasses in the building responded to the command “Hey Meta, start Live AI.” Instead of just one device pulling information from the server, dozens lit up at once. That flood of requests crashed the system. In short, Meta accidentally overloaded its own servers in real time.

The failed video call came from a different issue. Just as the call notification came in, the glasses went into sleep mode. When they woke back up, the notification didn’t reappear.
It was a bug the engineers had never seen before—a perfect example of how unpredictable live demos can be. The company says both problems have since been fixed.

Why People Care

Even with the glitches, there’s still a lot of excitement about these glasses. Early testers praised the brightness of the Display model, which is strong enough to use outdoors, and its ability to produce sharp images inside the lens. The Oakley Vanguard also caught attention for its rugged design that appeals to athletes and outdoor fans.

The potential is clear. If Meta gets this right, people could translate a conversation instantly, follow workout routines without looking at their phone, or answer calls with nothing more than a hand movement. That’s the future the company is betting on.

(MORE NEWS: AI Is Taking Entry-Level Jobs and Shaking Up the Workforce)

Live Demos Are Always Risky

Of course, this isn’t the first time a live tech demo has gone wrong. From frozen screens to unresponsive gadgets, even the biggest companies have stumbled. But for Meta, the timing of this mistake matters. The company is trying to prove it can dominate the next wave of technology, moving beyond social media and into hardware and AI. A clunky presentation doesn’t mean the product won’t work, but it does raise doubts. When people see glitches on stage, they wonder what will happen in everyday life. Reliability matters just as much as innovation.

Can Meta Recover?

The good news for Meta is that the problems were technical hiccups, not deal-breakers. The glasses are still scheduled to hit the market on September 30, and the company says everything will work as intended by then. If the technology holds up in real-world use, many of those who laughed at the demo may change their tune. Still, the lesson is clear: Meta has to be flawless moving forward. People expect a polished experience, especially when they’re being asked to spend up to $800.
Bugs and glitches might be forgiven at a conference, but they won’t be tolerated in daily life.

(MORE NEWS: AI Stethoscope Spots Deadly Heart Conditions 15 Seconds)
