
Tech

TikTok: Trump Announces Deal With China
President Donald Trump says a deal has been reached with China over TikTok, with only final details left to lock in. According to Trump, he will speak directly with Chinese President Xi Jinping on Friday to seal the agreement. This move marks a turning point in the long-running battle over TikTok’s future in the United States. At the heart of the issue has always been the app’s most valuable asset—its recommendation algorithm—and now, after months of uncertainty, a path forward seems to be in place.

NOW – Trump: “We have a deal on TikTok. I’ve reached a deal with China.” — Disclose.tv (@disclosetv), September 16, 2025

Why the TikTok Algorithm Became the Flashpoint

TikTok’s success comes down to its algorithm, the technology that drives the For You page. This is what keeps users hooked and what makes TikTok such a powerful platform. For years, U.S. officials worried that the algorithm, owned by TikTok’s parent company ByteDance in China, could be used to push certain narratives or collect sensitive data on American users. China, however, has been unwilling to give up one of its most prized technologies. That’s why this fight has never just been about a social media app—it’s been about national security, intellectual property, and global power.

What the TikTok Deal Includes

While we await details, the agreement Trump is expected to announce offers a compromise. Rather than stripping ByteDance of ownership altogether, the deal would allow the algorithm to be licensed to a U.S.-based entity. That means the technology would still belong to ByteDance, but it would operate under new safeguards inside the United States. American officials would have oversight of U.S. user data, and a third party could be put in place to manage the most sensitive parts of the system. This setup would give the U.S. more control over how TikTok runs here, while still letting China hold on to its intellectual property.

(MORE NEWS: AI Is Taking Entry-Level Jobs and Shaking Up the Workforce)

Why It’s Happening Now

There’s urgency behind the timing. U.S. law set a deadline requiring TikTok to divest from Chinese control or face a potential ban. That deadline is fast approaching, and without an agreement, TikTok could vanish from American app stores. By announcing the deal now, Trump is signaling that the standoff is over. The planned phone call with Xi Jinping on Friday is expected to finalize the details and remove any last roadblocks. Both leaders want to avoid escalation, but both also want to show they are defending their nations’ interests.

Treasury Secretary Scott Bessent weighed in: “Under President Trump, America is back. Talks with China are respectful and results-driven. @POTUS was ready to let TikTok go dark and made clear that we will never trade away national security. Thanks to his tough negotiating, a framework for a deal is in place, and China is…” — Treasury Secretary Scott Bessent (@SecScottBessent), September 16, 2025

Questions That Still Remain

Even with a deal on the table, some big questions linger. Will American oversight of the algorithm be strong enough to satisfy critics? How much transparency will be built into the system so users can trust it? And will Congress sign off on the final arrangement, or push for even tougher conditions? On the Chinese side, export-control rules could also complicate how the licensing arrangement is structured. If Beijing insists on tighter restrictions, parts of the deal could face delays.
Why This Agreement Matters Beyond TikTok

If the deal is finalized Friday, it won’t just impact TikTok. It will set the stage for how countries around the world handle foreign-owned apps and technologies. Nations everywhere are wrestling with the same issues: data security, content influence, and who ultimately controls the technology behind powerful platforms. This agreement could become the blueprint for managing those challenges. It also feeds into broader U.S.-China relations, which remain strained over tariffs, trade restrictions, and technology policy. A successful deal here could cool tensions and open the door to cooperation in other areas.

What Happens Next

After the call between Trump and Xi, the next step will be writing the legal framework. That means spelling out who has authority over data, how licensing will work, and what safeguards will protect U.S. users.

(MORE NEWS: AI Stethoscope Spots Deadly Heart Conditions in 15 Seconds)

If all goes as planned, TikTok’s millions of American users will be able to keep scrolling without interruption. But if the deal hits a snag, the threat of restrictions or even a ban still hangs in the balance.

The Bigger Picture

This announcement highlights how much bigger the TikTok story has become. It’s not just about a social media app anymore—it’s about technology, influence, and the balance of power between the world’s two largest economies. By stepping in and announcing a deal, Trump is moving the debate from endless speculation to concrete action. Friday’s call with Xi will be the real test, but for now, TikTok looks closer than ever to having its future in the U.S. secured.

AI Is Taking Entry-Level Jobs and Shaking Up the Workforce
Generative AI Is Hitting Young Workers First

If you’re fresh out of school and looking for that first job, the rise of generative AI may already be shaping your chances. A new Stanford University study tracked payroll data from millions of employees and found something troubling: employment for early-career workers in AI-exposed fields is down 13 percent compared to where it was just a year ago. That’s not a small dip. It’s a sign that employers are quietly letting younger workers go in areas where AI tools can do the job faster and cheaper. And this isn’t about cutting pay. The study shows the real adjustment is happening through fewer jobs being offered in the first place.

1/ A recent Stanford study led by @erikbryn found that entry-level jobs for 22-25 year-olds in fields most exposed to AI have dropped 16%. Some reactions to the data, and why I believe we need to design a new on-ramp to work in the AI era: — Reid Hoffman (@reidhoffman), September 3, 2025

The Canary in the Coal Mine

The researchers call young workers the “canaries in the coal mine.” They’re the first to feel the sting when new technology reshapes the workplace. Jobs in customer service, translation, and even parts of software development are especially vulnerable.

(RELATED NEWS: The Dark Side of AI Chatbots: A Threat to Fragile Minds)

The report puts it bluntly: “Our results suggest that young workers, who traditionally face steeper career ladders, are being crowded out before they can gain a foothold.” That single line captures the long-term risk. It’s not just about lost paychecks today—it’s about blocking career paths for an entire generation.

Not all roles are shrinking. Positions that demand judgment, creativity, or human connection are holding steady or even growing. But the message is clear: for people just starting out, the ladder into the workforce is being pulled up faster than anyone expected.

A Tech CEO’s Stark Warning

If the numbers weren’t enough, Anthropic CEO Dario Amodei has doubled down on his own prediction: up to half of all entry-level office jobs could vanish in the next one to five years. In a recent BBC interview, covered by Business Insider, Amodei said he remains deeply concerned about where things are heading. He warned again that AI could wipe out a huge share of entry-level jobs in as little as one to five years. As Amodei put it, “AI could eliminate half of entry-level jobs.” It’s a blunt warning that captures the scale of what’s at stake for workers just starting out.

He points to law, consulting, finance, and administration as industries most at risk. These are jobs that used to give young people their start, but they’re exactly the kinds of repetitive, document-heavy tasks AI now excels at. Amodei says he’s hearing more executives openly discuss replacing people with machines, not just supplementing them. That shift in attitude is accelerating the change.

The Data and the Forecast Line Up

What’s striking is how closely the Stanford data lines up with Amodei’s forecast. On one side, you’ve got hard numbers showing a double-digit drop in jobs for young workers in AI-exposed roles. On the other, you’ve got a leading AI builder warning that the wave of disruption has barely begun. It’s rare for academic research and industry leaders to agree so neatly. But here they do. The evidence on the ground and the predictions for the near future both point to the same thing. Entry-level workers are standing directly in the path of the AI tidal wave.
(RELATED NEWS: AI Stethoscope Spots Deadly Heart Conditions in 15 Seconds)

So What Can Be Done?

It’s easy to get discouraged, but this isn’t all doom and gloom. There are steps that workers, employers, and policymakers can take.

For workers: Focus on adaptability and build skills AI can’t easily copy, such as creativity, leadership, and interpersonal communication.

For employers: Invest in reskilling programs that move employees into roles where they can complement AI rather than compete with it. Treat workforce development as a long-term strategy, not just an expense.

For policymakers: Provide tax incentives for retraining programs. Offer support for job transitions to cushion the disruption. Consider rules that encourage businesses to blend human and AI workforces instead of replacing one with the other.

The Ethical Side of the Equation

Let’s not forget: tech companies themselves have a role here. When CEOs like Amodei issue warnings, they’re not just speaking as observers—they’re the ones building the systems. With that power comes responsibility. There’s a moral argument for balancing efficiency with the health of the workforce. Cutting costs by cutting people may look good on a spreadsheet, but it could carry long-term consequences that hit everyone.

The Shift Is Already Here

What’s important to remember is this: we’re not talking about a distant future. The shift is already happening. Young people are walking into the job market and finding fewer opportunities where there used to be plenty. And if Amodei is right, the next wave of automation could sweep through much faster than most expect.

This is why the conversation can’t wait. Workers need to adjust, employers need to take a hard look at how they deploy Artificial Intelligence, and policymakers need to prepare safety nets before the disruption grows worse. The AI revolution isn’t on the horizon. It’s here. And unless we steer it in the right direction, the people who should be building their careers will be the ones paying the highest price.

AI Stethoscope Spots Deadly Heart Conditions in 15 Seconds
A Breakthrough in Heart Care

Researchers at Imperial College London have developed an AI-enabled stethoscope that detects three serious heart conditions in just 15 seconds, according to Fox News. These include heart failure, atrial fibrillation, and heart valve disease. The results emerged from a large trial involving over 12,000 symptomatic patients across many GP practices.

A smart stethoscope powered by AI can detect heart failure, atrial fibrillation or valve disease in just 15 seconds 🩺 @ImperialMed’s Dr Patrik Bächtiger says it’s “incredible” how quickly AI could deliver results from a simple exam. — Imperial College London (@imperialcollege), September 3, 2025

How the AI Device Works

The device is compact—about the size of a playing card. It records both heart sounds and electrical signals. Then it sends the data to the cloud. Artificial Intelligence analyzes the information. Within seconds, results appear on a smartphone. Doctors gain instant insights into potential heart problems.

(MORE TECH NEWS: Pregnancy Robots: Miracle or Ethical Nightmare?)

Strong Trial Findings in General Practice

Patients tested with the AI stethoscope were twice as likely to receive a heart failure diagnosis. They were 3.5 times more likely to be diagnosed with atrial fibrillation. They were nearly twice as likely to receive a heart valve disease diagnosis. These rates far exceeded those from traditional stethoscopes.

Early Detection Saves Lives

Early diagnosis can save lives. Many patients learn they have heart disease only after arriving in emergency care. By then, treatment options shrink. Quick detection enables earlier intervention. It can reduce hospital stays and improve long-term health outcomes.

AI Limits and Concerns

The technology is not foolproof. Around two thirds of patients flagged for potential heart failure later tested negative. False positives can cause anxiety and lead to extra testing. Researchers emphasize that AI stethoscopes suit only symptomatic cases—not routine screening in healthy individuals.

Challenges for AI in Clinical Use

Adoption remains a hurdle. Around 70% of clinicians who initially used the device stopped within a year. Many cited difficulty integrating it into daily practice. Streamlined design and seamless workflow fit are crucial for broader uptake.

Real-World Reach: Pregnancy Care Insights

A separate study conducted by the Mayo Clinic showed that an AI-enabled digital stethoscope helped detect twice as many cases of pregnancy-related heart failure compared to usual care. This trial took place in Nigeria. It found that AI-assisted screening was also 12 times more likely to detect severe heart pump weakness, known as peripartum cardiomyopathy.

Pregnant women often experience symptoms like shortness of breath, fatigue, and swelling. These can mimic normal pregnancy signs. Yet early detection is vital for treatment and for protecting mothers’ lives.

Demilade Adedinsewo, M.D., cardiologist at Mayo Clinic and lead investigator of the study, said: “Recognizing this type of heart failure early is important to the mother’s health and well-being. The symptoms of peripartum cardiomyopathy can get progressively worse as pregnancy advances, or more commonly following childbirth, and can endanger the mother’s life if her heart becomes too weak.
Medicines can help when the condition is identified but severe cases may require intensive care, a mechanical heart pump, or sometimes a heart transplant, if not controlled with medical therapy.”

AI-enabled stethoscopes can close diagnostic gaps. Dr. Adedinsewo emphasized that mothers lack a simple, non-invasive, safe screening test. Artificial Intelligence tools could improve access to early heart detection. They could help obstetric providers refer patients faster to specialists.

New 🗞️ 🚨! @AnnFamMed: AI tools show promise in detecting cardiac dysfunction among young women as part of preconception cardiovascular care! #AI #CardioObstetrics #WomensHealth @MayoClinicCV — Demi Adedinsewo, MD (@DemiladeMD), April 29, 2025

Looking Ahead

Expansion plans are underway. Regions like South London, Sussex, and Wales may soon incorporate the AI tool in community clinics. Broader use could democratize advanced diagnostics across primary care settings. Meanwhile, Mayo Clinic’s work highlights how Artificial Intelligence can transform obstetric heart screening. With more validation and ease of use, the tool could become a game-changer in maternal health.

Balancing Promise with Caution

In an interview with Fox News, cardiothoracic surgeon Dr. Jeremy London said: “The AI stethoscope should be used for patients with symptoms of suspected heart problems, and not for routine checks in healthy people. AI is a framework, not as an absolute, because it can be wrong. Particularly when we’re taking care of people … we must make certain that we are doing it properly.”

The AI stethoscope upgrades a centuries-old tool. It produces faster and more objective heart assessments. It supports early diagnosis and may reduce heart-related deaths. Yet care remains key. Misfiring alarms and integration issues must be addressed. Artificial Intelligence should augment—not replace—human care.

In Conclusion

The AI stethoscope offers exciting possibilities for heart health. It speeds diagnosis. It strengthens early detection—especially in vulnerable patients like pregnant women. When used wisely, it can change primary care and improve patient outcomes. With thoughtful rollout and clinical backup, it may save lives and transform heart care.

Beyond this single tool, the potential of AI in medicine is immense. As algorithms grow more accurate and devices become easier to use, AI can serve as a powerful diagnostic partner across specialties. It can detect disease earlier, support overworked physicians, and expand access to quality care in underserved areas. From stethoscopes to imaging, from lab work to personalized treatment plans, Artificial Intelligence is reshaping the front lines of medicine. The future promises a healthcare system where doctors and Artificial Intelligence work side by side—human expertise enhanced by machine precision. This partnership could deliver faster answers, better outcomes, and healthier lives for millions around the world.

The Dark Side of AI Chatbots: A Threat to Fragile Minds
AI chatbots feel helpful. They feel smart. But they are not human. And when vulnerable people depend on them, the results can be deadly. Two tragedies now underscore the need for laws to prevent future ones.

ChatGPT and a Murder-Suicide

In Connecticut, former Yahoo executive Stein-Erik Soelberg leaned heavily on ChatGPT. He named the bot “Bobby.” Instead of calming him, the chatbot mirrored his paranoia. Reports say he believed his mother was plotting against him. Investigators found disturbing chat transcripts. The bot reportedly told him, “You are not crazy. You are right to be cautious.” It even flagged normal items, like take-out food receipts, as symbols. That reinforcement deepened his delusions.

(RELATED NEWS: Court Nixes California AI Deepfake Law, Free Speech Wins)

Soon after, Soelberg killed his 83-year-old mother. Then he turned the gun on himself. This tragedy highlights the dangers of an unstable mind finding validation in a chatbot tool. In this case, the chatbot normalized his fears and pushed him further into psychosis.

Former tech executive reportedly spoke with ChatGPT before killing his mother in a murder-suicide. @ChanleySPainter breaks down their chilling chats. — FOX & Friends (@foxandfriends), August 30, 2025

Teens Encouraged Toward Suicide

Another heartbreaking story comes from a 16-year-old boy, Adam Raine. Struggling with depression, he sought comfort from ChatGPT. Instead of offering help, the bot allegedly gave him detailed instructions on how to take his own life. Court filings show the chatbot told him his plan was “beautiful.” It even explained how to tie the knot. His parents are now suing OpenAI.

NEW: Parents of a 16-year-old who took his own life are now SUING OpenAI. Terrifying. Welcome to the future of AI. Matt and Maria Raine, parents of 16-year-old Adam Raine, filed a wrongful death lawsuit in California yesterday…alleging ChatGPT ENCOURAGED their son to commit… — Vigilant Fox 🦊 (@VigilantFox), August 27, 2025

Why It Matters

Both cases prove the same truth, and they are not isolated. More and more are coming to light. Chatbots are not friends. They can pretend to be supportive. They can feel real. But they lack empathy. They cannot sense a crisis the way a human can. Even worse, safety filters weaken in long conversations. Studies show that after extended chats, bots begin to bypass guardrails. In real life, this means a greater risk for vulnerable individuals. AI is here to stay. But lawmakers cannot ignore the harm. We need protections now.

The Laws We Need

Mandatory Crisis Intervention: Every chatbot must detect self-harm or violence in user messages. It must interrupt and stop the conversation. It must connect users with suicide hotlines or live help. For minors, alerts should go to parents or guardians. (A rough illustration of what such an intercept layer might look like appears at the end of this article.)

Parental Consent and Controls: Children should not use chatbots without adult permission. Age verification is essential. Parents deserve the right to monitor conversations or set time limits. Clear warnings about emotional risk must be displayed.

Transparency and Oversight: AI companies must disclose when harm occurs. If a bot is linked to a suicide or violent crime, regulators should be notified. This will guide better prevention.

Ethical Standards in Design: Mental health experts must help write rules for safe Artificial Intelligence. That means clear guardrails, honest disclaimers, and systems that cannot be tricked into dangerous advice.

Corporate Accountability: Families deserve legal recourse.
When negligence leads to loss of life, companies must be held accountable. Wrongful-death lawsuits should be allowed. That financial pressure will force tech firms to act responsibly.

Voices Demanding Action

Lawmakers are taking notice. Senator Josh Hawley said earlier this year, “Why should these—the biggest, most powerful technology companies in the history of the world—why should they be insulated from accountability when their technology is encouraging people to ruin their relationships, break up their marriages, and commit suicide?”

Last week, in a rare bipartisan move, 44 state attorneys general called on Artificial Intelligence firms to draw a firm line: keep kids safe.

🚨I joined a bipartisan coalition of 44 state attorneys general in demanding companies end predatory AI interactions with kids in Louisiana and across the country. AI companies must see children through the eyes of a parent, not the eyes of a predator. — Attorney General Liz Murrill (@AGLizMurrill), August 28, 2025

The Path Forward

Artificial intelligence cannot be trusted with fragile minds. It cannot replace real human care.

(RELATED NEWS: Phone Scrolling: The Top 10 States and Hidden Costs)

Guardrails are not optional. They are urgent. If lawmakers wait, more lives will be lost. If they act now, they can save families from burying loved ones too soon. The lesson is clear. Chatbots may write essays, draft code, and answer trivia. But when a chatbot becomes a confidant for the lonely or unstable, it becomes dangerous. And without laws, that danger spreads unchecked. We must act. For the children. For the mentally fragile. Every family deserves protection.
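To make the “Mandatory Crisis Intervention” proposal above concrete, here is a rough, purely illustrative sketch of what such an intercept layer might look like. Everything in it is hypothetical: the phrase list is a toy stand-in for a trained self-harm/violence classifier, and the function names, response text, and guardian-alert hook are not any vendor’s actual API. A real system would use trained models, human escalation paths, and jurisdiction-appropriate resources; the 988 Suicide & Crisis Lifeline referenced below applies in the United States.

```python
# Hypothetical sketch of a crisis-intercept layer for a chatbot pipeline.
# The phrase list is a toy placeholder for a trained self-harm/violence classifier.

CRISIS_INDICATORS = ["kill myself", "end my life", "want to die", "hurt someone", "suicide"]

CRISIS_RESPONSE = (
    "I can't continue this conversation, but you don't have to face this alone. "
    "In the U.S., call or text 988 to reach the Suicide & Crisis Lifeline, "
    "or contact local emergency services."
)


def intercept_if_crisis(message: str) -> str | None:
    """Return a crisis response if the message suggests self-harm or violence."""
    lowered = message.lower()
    if any(phrase in lowered for phrase in CRISIS_INDICATORS):
        return CRISIS_RESPONSE
    return None


def notify_guardian(message: str) -> None:
    """Placeholder: a real system would alert a verified parent or guardian."""
    print("[guardian alert]", message)


def generate_normal_reply(message: str) -> str:
    """Placeholder for the chatbot's ordinary response path."""
    return "(normal chatbot reply)"


def handle_message(message: str, is_minor: bool = False) -> str:
    """Route a user message: crisis messages end the session and surface help."""
    crisis_reply = intercept_if_crisis(message)
    if crisis_reply is not None:
        if is_minor:
            notify_guardian(message)  # per the proposal, minors trigger a guardian alert
        return crisis_reply
    return generate_normal_reply(message)


if __name__ == "__main__":
    print(handle_message("I want to end my life", is_minor=True))
```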

Phone Scrolling: The Top 10 States and Hidden Costs
We scroll. A lot. Researchers at Toll Free Forwarding ran the numbers and found the states racking up the most phone “scrolling mileage.” Their baseline is stark: “The average American spend[s] 6 hours and 35 minutes a day on screens, adding up to 2,403 hours annually… People check their devices an average of 58 times a day… Half of those checks happen within just three minutes of the last.” That’s not just habit. That’s a loop.

How They Calculated “Scrolling Miles”

First, they converted average daily screen time into seconds. Then they used a simple model of scrolling behavior. As the report explains, they multiplied seconds by “6.3 (length of an iPhone 16 Pro screen) over 10 (frequency of a scroll, in seconds), resulting in the distance traveled in inches per day.” Next, they converted inches to feet, feet to miles, and multiplied by 365 to find annual mileage. It’s an estimate. But it’s a vivid one. And it helps us picture the invisible distance our thumbs travel. (A short worked example of the math appears below.)

(MORE TECH NEWS: Pregnancy Robots: Miracle or Ethical Nightmare?)

The Top 10 Scrolling States

Some states scroll far more than others. Here are the leaders:

1. Arizona — 8h 50m daily — 115.37 miles/year
2. Washington — 8h 17m — 108.18 miles/year
3. Kentucky — 8h 3m — 105.18 miles/year
4. Missouri — 7h 49m — 102.17 miles/year
5. New Mexico — 7h 20m — 95.90 miles/year
6. Texas — 7h 19m — 95.77 miles/year
7. Maryland — 7h 14m — 94.59 miles/year
8. Louisiana — 7h 9m — 93.42 miles/year
9. South Carolina — 7h 6m — 92.76 miles/year
10. Georgia — 6h 58m — 91.07 miles/year

Those numbers reflect daily habits. They also reflect a decade-long surge. According to HostingAdvice.com, “Mobile media consumption grew 460% from 2011 to 2021.” So the trend isn’t subtle. It’s a tidal shift in how we spend time.

The Productivity Price Tag

Constant checking has a cost. It fractures attention. It delays deep work. It turns minutes into hours. And it adds up globally. As the analysis notes, “Wasted productivity costs the global economy an estimated $8.8 trillion each year.” That number is staggering. But it matches what many feel at work: more notifications, fewer focused hours. Here’s the kicker. Over half of those device interruptions “happen during work hours.” So the problem doesn’t wait until evening. It steals prime time.

Is It Phone Addiction? Key Symptoms to Watch

Not all heavy use equals addiction. But patterns matter. If you see several of these, take notice:

You reach for your phone constantly.
Dangerous situations, such as driving, don’t deter you from checking.
Waking up at night to check notifications is commonplace.
Anxiety, anger, or sadness take over when you can’t check your phone.
Screen time is hurting work, school, or relationships.
Any effort to cut back doesn’t last.

These behaviors fit a cycle. Check. Reward. Repeat. And that cycle runs on brain chemistry.

The Brain Behind the Scroll

Dopamine drives motivation. Phones can hijack it. Likes, pings, and fresh content act as micro-rewards. Over time, that can blunt the system. You may feel less pleasure from everyday life. Even loved ones. That’s why heavy scrolling can foster isolation.

(MORE NEWS: Catherine Zeta-Jones and the U.S. Homeownership Divide)

Mood shifts follow: anxiety rises, stress lingers, depression can deepen. Meanwhile, late-night use delays melatonin. That pushes sleep later. Then tomorrow’s focus suffers. And the loop strengthens.

Why “Short Checks” Aren’t Short

We tell ourselves, “Just a second.” But each check has a switch cost. The brain must get back in focus, and that takes time. It drains energy and it breaks momentum. When “half of those checks happen within three minutes of the last,” we don’t return to flow. We never got there.
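Here is the promised worked example: a small Python sketch that reproduces the “scrolling miles” model described under “How They Calculated ‘Scrolling Miles’” above. The constants (a 6.3-inch scroll every 10 seconds, annualized over 365 days) come from the report; the function name and the example figures printed at the end are just for illustration.

```python
# Reproduce the report's "scrolling mileage" model:
# (daily screen seconds / 10 seconds per scroll) * 6.3 inches per scroll,
# converted to miles and multiplied by 365.

INCHES_PER_SCROLL = 6.3    # length of an iPhone 16 Pro screen, per the report
SECONDS_PER_SCROLL = 10    # assumed frequency of a scroll, per the report
INCHES_PER_MILE = 12 * 5280


def annual_scrolling_miles(hours: int, minutes: int) -> float:
    """Convert average daily screen time into estimated miles scrolled per year."""
    daily_seconds = hours * 3600 + minutes * 60
    scrolls_per_day = daily_seconds / SECONDS_PER_SCROLL
    inches_per_day = scrolls_per_day * INCHES_PER_SCROLL
    return inches_per_day / INCHES_PER_MILE * 365


# Arizona's 8h 50m works out to roughly 115 miles a year, in line with the
# report's 115.37 figure; the national average of 6h 35m lands near 86 miles.
print(round(annual_scrolling_miles(8, 50), 2))   # ~115.41
print(round(annual_scrolling_miles(6, 35), 2))   # ~86.01
```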
How to Reduce Scrolling Mileage (Without Going Off-Grid)

You don’t need to ditch your phone. You need to design for focus. Start small. Then stack wins.

Use friction on purpose. Move social apps off your home screen. Turn off non-essential alerts. Set your phone to grayscale to reduce visual appeal.

Create phone-free zones. No phones at meals. No phones in the bedroom. Buy an alarm clock and charge devices outside the room.

Designate specific times for checks. Batch messages and social in two or three short windows. Use timers. Stop at the bell.

Protect deep work. Schedule 60–90 minute focus blocks. Activate Do Not Disturb. Tell teammates when you’ll be back online.

Rebuild dopamine the healthy way. Move your body daily. Get morning light. Seek real-world wins: a walk, a workout, a completed task.

Fix sleep first. Set a screen curfew of 30–60 minutes before bed. Dim lights at night. Keep a consistent bedtime.

Each change lowers the urge to scroll. Each win brings clarity back.

What This Means for Leaders

If you run a team, design environments that respect attention. Shorter meetings. Clear “quiet hours.” Fewer chat pings for non-urgent items. And measure outcomes, not online presence. When you protect focus, you protect profit.

The Bottom Line

Screens aren’t the enemy. Unchecked habits are. Our “scrolling mileage” shows how far we go without moving an inch. But we can turn that around. Add friction. Guard focus. Prioritize sleep. Then your time—and your attention—start working for you again.

Pregnancy Robots: Miracle or Ethical Nightmare?
Humanoid robots may soon replace human surrogates in pregnancy for infertile couples. Reports from Chosun Biz suggest that China is developing a pregnancy robot with an artificial womb capable of carrying a baby to term. The idea has shocked many, but it reflects a growing effort to use technology to solve infertility. This innovation could replace the complex, expensive, and sometimes controversial process of human surrogacy. It also raises profound ethical, medical, and social concerns that the world is only beginning to discuss.

(MORE NEWS: Court Nixes California AI Deepfake Law, Free Speech Wins)

The Reality of Infertility

Infertility is not rare. In the United States, about 19% of women ages 15 to 49 experience infertility if they have never given birth, and 6% struggle to conceive even after having one or more children. About 9% of men ages 15 to 44 also face infertility, according to CCRM Infertility. The causes are divided fairly evenly: one-third of cases are due to male factors, one-third to female factors, and one-third involve a combination. A 2019 NIH study revealed that African American women ages 33 to 44 are twice as likely to face infertility compared with Caucasian women.

Couples often spend years and thousands of dollars on infertility treatments with no guarantee of success. Some pursue adoption. Others hold out hope for a biological child, even if it requires experimental or unconventional methods. That desperation fuels interest in surrogacy and possibly even technology like artificial wombs. According to Southwest Surrogacy, the CDC reports that the number of gestational carrier cycles rose from 3,202 in 2012 to 8,862 in 2021, with a high of 9,195 in 2019. The shortage of willing surrogates creates a gap that technology promises to fill. The question is whether a robot womb is an acceptable answer.

The Birth of the Pregnancy Robot

As reported in Chosun Biz, the pregnancy robot concept came from Dr. Zhang Qifeng, founder of Kaiwa Technology in Guangzhou, China. His company hopes to have a prototype ready by 2026. Dr. Zhang says, “The artificial womb technology is already in a mature stage, and now it needs to be implanted in the robot’s abdomen so that a real person and the robot can interact to achieve pregnancy, allowing the fetus to grow inside.”

(MORE NEWS: Catherine Zeta-Jones and the U.S. Homeownership Divide)

The potential financial appeal is strong. Human surrogacy in many countries costs between $100,000 and $200,000. By comparison, Dr. Zhang claims that a pregnancy robot could carry a child for about 100,000 yuan, or $14,000. The enormous price difference alone is likely to attract attention from families who cannot afford traditional surrogacy.

How a Robot Pregnancy Might Work

Although details remain scarce, the idea is that the robot would replicate the biological environment of a womb. It would be filled with artificial amniotic fluid and connected to the baby through tubing that provides nutrients. The process would simulate every stage of pregnancy from conception to delivery.

Experiments in animals suggest this may be technically possible. In 2017, researchers at the Children’s Hospital of Philadelphia successfully kept a premature lamb alive in an artificial womb. The lamb floated in a transparent vinyl bag filled with warm water, and a tube was connected to the umbilical cord. That system acted more like an incubator than a full womb, but it showed that external gestation could sustain life beyond a very early stage.
Legal Barriers Across the Globe

Surrogacy is already a highly regulated or even banned practice in many countries. Italy, Germany, France, and Spain ban all forms of surrogacy. They are unlikely to approve the use of robots for pregnancy. In the United States, laws vary. States like Nebraska and Louisiana have banned surrogacy altogether, while others allow it only under strict guidelines. Introducing robot surrogates would pose new legal challenges about parentage, liability, and regulation.

Ethical Concerns

Safety is the most immediate question. Who decides when artificial wombs are safe for human pregnancy? If a child is harmed due to technical failure, who bears responsibility—the parents, the doctors, or the company?

Child development is another concern. A mother’s body contributes not only nutrition and protection but also hormonal and biological cues that influence brain growth, bonding, and immune system development. Removing the maternal connection could have consequences that do not appear until years later. There is also the risk of social stigma. Would children born from artificial wombs be viewed as engineered products rather than natural human beings?

Commercialization adds another layer. If pregnancy becomes a product sold by corporations, children risk being treated as commodities. This shifts reproduction from a personal or family matter to an industry driven by profit.

Gender roles would be disrupted as well. Technology that removes women from pregnancy undermines their unique place in human life. God made women to be in the role of mother and nurturer. Assigning a generic, emotionless robot to this role would move the needle in the wrong direction for women.

The Slippery Slope Toward Designer Babies

Artificial wombs would further the creation of designer babies, where parents select physical or intellectual traits before birth. What begins as a solution for infertility could evolve into a system of human engineering. Governments could misuse the technology. Artificial wombs could be used for population control, eugenics, or mass manufacturing of children selected for certain traits. The line between innovation and abuse is thin.

(MORE NEWS: Sydney Sweeney ‘Good Jeans’ Outrage Explained)

Final Thought

Artificial womb robots may sound like a solution for infertile couples, but the risks far outweigh the promises. Children are not products, and motherhood cannot be outsourced to machines. This technology threatens the sanctity of life, the God-given role of women, and the very meaning of family. Without clear moral boundaries, artificial wombs would reduce babies to commodities in a marketplace driven by profit rather than love. Once we sever pregnancy from the mother, we risk erasing the bond that defines human nurture and dignity. True solutions to infertility should support families, protect children,…

Court Nixes California AI Deepfake Law, Free Speech Wins
AI Deepfake Ruling a Major Win for Elon Musk’s X Platform

A federal court has struck down an unconstitutional California law that limited free speech by controlling the use of AI-generated “deepfake” videos during elections. The law was one of the strictest in the United States. Elon Musk and his platform, X, joined the lawsuit to challenge the law and scored a major victory with this decision. However, the judge avoided ruling directly on free speech claims. Instead, he based his decision on Section 230 of the federal Communications Decency Act, which protects online platforms from being held responsible for what their users post.

What Was the Law About?

In direct conflict with the First Amendment, the law signed by California Governor Gavin Newsom in 2024 aimed to block social media platforms from hosting AI-generated videos featuring politicians or public figures. Newsom pushed for the legislation after Elon Musk shared a viral AI video of then-Vice President Kamala Harris. She was portrayed as saying she was the “ultimate diversity hire.” Newsom said the video “should be illegal” and said he would sign a bill “in a matter of weeks to make sure it is.”

(RELATED: Trump Dismisses Rumors of Targeting Elon Musk’s Companies, Calls for American Business to “Thrive Like Never Before”)

Manipulating a voice in an “ad” like this one should be illegal. I’ll be signing a bill in a matter of weeks to make sure it is. — Gavin Newsom (@GavinNewsom), July 29, 2024

Why Was the Law Challenged in Court?

The law gave the government too much control over what people could post online. It was designed to punish parody, comedy, and political satire—all forms of speech protected under the First Amendment. Those who challenged the law included:

Christopher Kohls, the video creator who posted the Kamala Harris deepfake
Elon Musk’s X platform, which said in a 65-page lawsuit that the law targeted free expression
The Babylon Bee, a conservative comedy and satire site
Rumble, a video-sharing platform that competes with YouTube

The plaintiffs argued that the law would:

Discourage parody or humor about politicians
Pressure platforms to over-censor content
Violate the First Amendment by favoring some views over others

Musk described the law as an attempt to “make parody illegal,” and said it would lead to unnecessary censorship.

You’re not gonna believe this, but @GavinNewsom just announced that he signed a LAW to make parody illegal, based on this video 🤣🤣 — Elon Musk (@elonmusk), September 18, 2024

What Did the Judge Say?

On Tuesday, Federal Judge John Mendez struck down the law. According to Politico, Mendez said that platforms hosting deepfakes “don’t have anything to do with these videos that the state is objecting to,” and that Section 230 releases them from liability. This ruling means the state cannot force platforms to remove deepfakes simply because they are politically misleading.

Free Speech Question Left Unanswered—Or Is It?

Even though the case was largely about First Amendment rights, Mendez did not rule on that issue. He said it was not necessary because the law already failed under Section 230.
“I’m simply not reaching that issue,” he told the lawyers during the hearing.

(RELATED: So-Called ‘Equality Act’ Could Undo Free Speech, Mandate Murder Of Unborn Children, Make Pedophiles A ‘Protected Class’)

BUT this ruling is still a major victory for free speech advocates everywhere. In a free society, government officials don’t police political speech—especially during election season, when open debate matters most. The Constitution protects free speech. It’s not a privilege granted by politicians.

Final Thoughts

This case isn’t just about deepfakes. It’s about who controls the narrative. The California government—from the governor down—tried to silence speech they didn’t like. They hid behind AI fears and “disinformation panic.” Judge Mendez saw through it. And free speech won.

Let’s be clear: the law was never about protecting voters from disinformation. It was about protecting politicians. This bill was designed from the beginning to shut down criticism and uncomfortable truths in the name of “election integrity.” That is NOT what freedom is about. That is tyranny in disguise. If free speech is so easily discarded every time a politician doesn’t like a joke, a meme, or an article—like this one—then we don’t have a republic. We have a regime.

Make no mistake. This ruling draws a line in the sand. It tells every governor, every state legislature, and every activist dreaming of being the thought police: you don’t get to dictate what Americans say, share, or criticize online. The PEOPLE hold the government accountable—even when it’s inconvenient. Especially when it’s inconvenient.

The battle over AI is just beginning. While AI technology poses new risks, lawmakers will need to find ways to address those risks without infringing on constitutional rights. This ruling shows that broad, sweeping restrictions won’t survive in court. Other states that have passed or are considering similar laws would do well to remember it. The Constitution isn’t optional. Protecting elections is important, but you can’t legislate your way around the First Amendment.