AI & Singularity

The rise of AI, AGI, and the future of superintelligence and control.


New email scam uses hidden characters to slip past filters

Cybercriminals keep finding new angles to get your attention, and email remains one of their favorite tools. Over the years, you have probably seen everything from fake courier notices to AI-generated scams that feel surprisingly polished. Filters have improved, but attackers have adapted. The latest technique targets something you rarely think about: the subject line itself. Researchers have found a method that hides tiny, invisible characters inside the subject so automated systems fail to flag the message. It sounds subtle, but it is quickly becoming a serious problem.

Sign up for my FREE CyberGuy Report: Get my best tech tips, urgent security alerts, and exclusive deals delivered straight to your inbox. Plus, you'll get instant access to my Ultimate Scam Survival Guide, free when you join my CYBERGUY.COM newsletter.

(Cybercriminals are using invisible Unicode characters to disguise phishing email subject lines, allowing dangerous scams to slip past filters. Photo by Donato Fasano/Getty Images)

How the new trick works

Researchers recently uncovered phishing campaigns that embed soft hyphens between every letter of an email subject. These are invisible Unicode characters that normally help with text formatting: they do not show up in your inbox, but they completely throw off keyword-based filters. Attackers use MIME encoded-word formatting to slip these characters into the subject. By encoding the subject in UTF-8 and Base64, they can weave the hidden characters through the entire phrase.

One analyzed email decoded to "Your Password is About to Expire" with a soft hyphen tucked between every character. To you, it looks normal. To a security filter, it looks scrambled, with no clear keyword to match. The attackers then use the same trick in the body of the email, so both layers slide through detection.
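The mechanics are easy to reproduce. Here is a minimal Python sketch (illustrative only, not taken from any analyzed campaign) of how soft hyphens woven through a subject defeat naive keyword matching, and how stripping Unicode format characters restores it:

```python
# Illustrative sketch of the soft-hyphen subject-line trick described above.
# U+00AD (SOFT HYPHEN) is invisible in most mail clients but splits keywords.
from email.header import Header, decode_header
import unicodedata

SOFT_HYPHEN = "\u00ad"
phrase = "Your Password is About to Expire"

# Attacker side: weave a soft hyphen between every character, then wrap the
# result as a MIME encoded-word (UTF-8) for the Subject header.
obfuscated = SOFT_HYPHEN.join(phrase)
encoded_subject = Header(obfuscated, "utf-8").encode()
assert encoded_subject.startswith("=?utf-8?")

# Decoding yields the obfuscated text; a naive keyword filter misses it.
chunks = decode_header(encoded_subject)
text = "".join(b.decode(cs or "ascii") if isinstance(b, bytes) else b
               for b, cs in chunks)
assert "Password" not in text  # keyword broken by invisible characters

# Defender side: drop Unicode "format" (Cf) category characters before matching.
visible = "".join(ch for ch in text if unicodedata.category(ch) != "Cf")
assert "Password" in visible
```

The defensive step at the end is the key point: a filter that normalizes away format-category characters before keyword matching is not fooled by this encoding.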
The link leads to a fake login page on a compromised domain, designed to harvest your credentials. If you have ever tried spotting a phishing email, this one still follows the usual script: it builds urgency, claims something is about to expire and points you to a login page. The difference is in how neatly it dodges the filters you trust.

Why this phishing technique is so dangerous

Most phishing filters rely on pattern recognition. They look for suspicious words, common phrases and structure, and they scan for known malicious domains. By splitting every character with invisible symbols, attackers break up these patterns. The text stays readable for you but becomes unreadable for automated systems, creating a quiet loophole where old phishing templates suddenly become effective again.

The worrying part is how easy this method is to copy. The tools needed to encode these messages are widely available, so attackers can automate the process and churn out bulk campaigns with little extra effort. Since the characters are invisible in most email clients, even tech-savvy users do not notice anything odd at first glance.

Security researchers point out that this method has appeared in email bodies for years, but using it in the subject line is less common, which makes it harder for existing filters to catch. Subject lines also shape your first impression: if the subject looks familiar and urgent, you are more likely to open the email, which gives the attacker a head start.

How to spot a phishing email before you click

Phishing emails often look legitimate, but the links inside them tell a different story. Scammers hide dangerous URLs behind familiar-looking text, hoping you will click without checking.
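As a rough illustration (the helper and the example URLs are hypothetical, not any vendor's implementation), a checker can compare the host named in a link's visible text against the host its href actually targets:

```python
# Hypothetical check: flag a link whose visible text names one host but whose
# href actually points somewhere else, a classic phishing tell.
from urllib.parse import urlparse

def link_mismatch(display_text: str, href: str) -> bool:
    """True if the host shown to the user differs from the real target host."""
    shown = display_text if "://" in display_text else "https://" + display_text
    shown_host = urlparse(shown).hostname
    real_host = urlparse(href).hostname
    return shown_host is not None and shown_host != real_host

# Displayed text agrees with the destination: fine.
assert not link_mismatch("apple.com", "https://apple.com/support")
# Displayed text says one thing, the href goes elsewhere: suspicious.
assert link_mismatch("apple.com", "https://appeal-apple.example/login")
```

A production filter would also canonicalize subdomains (www.apple.com versus apple.com) and check reputation lists; this sketch only shows the core comparison.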
One safe way to preview a link is to use a private email service that shows the real destination before your browser loads it. Our top-rated private email provider recommendation includes malicious-link protection that reveals full URLs before opening them, giving you a clear view of where a link leads before anything can harm your device. It also offers strong privacy features such as no ads, no tracking, encrypted messages and unlimited disposable aliases. For recommendations on private and secure email providers, visit Cyberguy.com.

(A new phishing method hides soft hyphens inside subject lines, scrambling keyword detection while appearing normal to users. Photo by Silas Stein/picture alliance via Getty Images)

9 steps you can take to protect yourself from this phishing scam

You do not need to become a security expert to stay safe. A few habits, paired with the right tools, can shut down most phishing attempts before they have a chance to work.

1) Use a password manager
A password manager helps you create strong, unique passwords for every account. Even if a phishing email fools you, the attacker cannot use your password elsewhere because each one is different. Most password managers also warn you when a site looks suspicious. Next, see if your email has been exposed in past breaches: our #1 password manager pick (see Cyberguy.com) includes a built-in breach scanner that checks whether your email address or passwords have appeared in known leaks. If you discover a match, immediately change any reused passwords and secure those accounts with new, unique credentials. Check out the best expert-reviewed password managers of 2025 at Cyberguy.com.

2) Enable two-factor authentication
Turning on 2FA adds a second step to your login process. Even if someone steals your password, they still need the verification code on your phone.
This stops most phishing attempts from going any further.

3) Install reliable antivirus software
Strong antivirus software does more than scan for malware. Many products can flag unsafe pages, block suspicious redirects and warn you before you enter your details on a fake login page. It is a simple layer of protection that helps a lot when an email slips past filters. The best way to safeguard yourself from malicious links that install malware, potentially accessing your private information, is to have strong antivirus software installed on all your devices. This protection can also alert you to phishing emails and ransomware scams, keeping your personal information and digital assets safe. Get my picks for the best 2025 antivirus protection winners for your Windows, Mac, Android and iOS devices at Cyberguy.com.

4) Limit your personal data online
Attackers often tailor phishing messages using information they find about you. Reducing your digital footprint makes it harder for them to craft emails that feel convincing. You can use personal data removal services to clean up exposed details and old database leaks. While no service can guarantee the complete removal of your data from the internet, a data removal service is a smart choice. They aren't cheap, and neither is your privacy. These services do all the work for you by actively monitoring and systematically erasing your personal information from hundreds of websites. It's what gives me peace of mind and has proven to be the most effective way to erase your personal data from the internet. By limiting the information available, you reduce the risk of scammers cross-referencing data from breaches with information they might find on the dark web, making it harder for them to target you.

(Researchers warn that attackers are bypassing email defenses by manipulating encoded subject lines with unseen characters. Photo by Lisa Forster/picture alliance via Getty Images)

Check out my top picks for data removal services and get a free scan to find out if your personal information is already out on the web by visiting Cyberguy.com.

5) Check sender details carefully
Do not rely on the display name; always check the full email address. Attackers often tweak domain names by a single letter or symbol. If something feels off, open the site manually instead of clicking any link inside the email.

6) Never reset passwords through email links
If you get an email claiming your password will expire, do not click the link. Go to the website directly and check your account settings. Phishing emails rely on urgency; slowing down and confirming the issue yourself removes that pressure.

7) Keep your software and browser updated
Updates often include security fixes that help block malicious scripts and unsafe redirects. Attackers take advantage of older systems because they are easier to trick. Staying updated keeps you ahead of known weaknesses.

8) Turn on advanced or "strict" spam filtering
Many email providers (Gmail, Outlook, Yahoo) let you tighten spam filtering settings. This won't catch every soft-hyphen scam, but it improves your odds and reduces risky emails overall.

9) Use a browser with anti-phishing protection
Chrome, Safari, Firefox, Brave and Edge all include anti-phishing checks. This adds another safety net if you accidentally click a bad link.

Kurt's key takeaway

Phishing attacks are changing fast, and tricks like invisible characters show how creative attackers are getting. Filters and scanners are also improving, but they cannot catch everything, especially when the text they see is not the same as what you see.
Staying safe comes down to a mix of good habits, the right tools and a little skepticism whenever an email pushes you to act quickly. If you slow down, double-check the details and follow the steps that strengthen your accounts, you make it much harder for anyone to fool you.

Do you trust your email filters, or do you double-check suspicious messages yourself? Let us know by writing to us at Cyberguy.com.

Copyright 2025 CyberGuy.com. All rights reserved.



The Download: AI’s impact on the economy, and DeepSeek strikes again

This is today's edition of The Download, our weekday newsletter that provides a daily dose of what's going on in the world of technology.

The State of AI: Welcome to the economic singularity

—David Rotman and Richard Waters

Any far-reaching new technology is always uneven in its adoption, but few have been more uneven than generative AI. That makes it hard to assess its likely impact on individual businesses, let alone on productivity across the economy as a whole.
At one extreme, AI coding assistants have revolutionized the work of software developers. At the other extreme, most companies are seeing little if any benefit from their initial investments.

That has provided fuel for the skeptics who maintain that—by its very nature as a probabilistic technology prone to hallucinating—generative AI will never have a deep impact on business. To students of tech history, though, the lack of immediate impact is normal. Read the full story.
If you're an MIT Technology Review subscriber, you can join David and Richard, alongside our editor in chief, Mat Honan, for an exclusive conversation digging into what's happening across different markets, live on Tuesday, December 9 at 1pm ET. Register here!

The State of AI is our subscriber-only collaboration between the Financial Times and MIT Technology Review examining the ways in which AI is reshaping global power. Sign up to receive future editions every Monday.

The must-reads

I've combed the internet to find you today's most fun/important/scary/fascinating stories about technology.

1 DeepSeek has unveiled two new experimental AI models
DeepSeek-V3.2 is designed to match OpenAI's GPT-5's reasoning capabilities. (Bloomberg $)
+ Here's how DeepSeek slashes its models' computational burden. (VentureBeat)
+ It's achieved these results despite its limited access to powerful chips. (SCMP $)

2 OpenAI has issued a "code red" warning to its employees
It's a call to arms to improve ChatGPT, or risk being overtaken. (The Information $)
+ Both Google and Anthropic are snapping at OpenAI's heels. (FT $)
+ Advertising and other initiatives will be pushed back to accommodate the new focus. (WSJ $)

3 How to know when the AI bubble has burst
These are the signs to look out for. (Economist $)
+ Things could get a whole lot worse for the economy if and when it pops. (Axios)
+ We don't really know how the AI investment surge is being financed. (The Guardian)

4 Some US states are making it illegal for AI to discriminate against you
California is the latest to give workers more power to fight algorithms. (WP $)

5 This AI startup is working on a post-transformer future
Transformer architecture underpins the current AI boom—but Pathway is developing something new. (WSJ $)
+ What the next frontier of AI could look like. (IEEE Spectrum)

6 India is demanding smartphone makers install a government app
Which privacy advocates say is unacceptable snooping. (FT $)
+ India's tech talent is looking for opportunities outside the US. (Rest of World)

7 College students are desperate to sign up for AI majors
AI is now the second-largest major at MIT behind computer science. (NYT $)
+ AI's giants want to take over the classroom. (MIT Technology Review)

8 America's musical heritage is at serious risk
Much of it is stored on studio tapes, which are deteriorating over time. (NYT $)
+ The race to save our online lives from a digital dark age. (MIT Technology Review)

9 Celebrities are increasingly turning on AI
That doesn't stop fans from casting them in slop videos anyway. (The Verge)

10 Samsung has revealed its first tri-folding phone
But will people actually want to buy it? (Bloomberg $)
+ It'll cost more than $2,000 when it goes on sale in South Korea. (Reuters)

Quote of the day

"The Chinese will not pause. They will take over."

—Michael Lohscheller, chief executive of Swedish electric car maker Polestar, tells the Guardian why Europe should stick to its plan to ban the production of new petrol and diesel cars by 2035.

One more thing
Inside Amsterdam's high-stakes experiment to create fair welfare AI

Amsterdam thought it was on the right track. City officials in the welfare department believed they could build technology that would prevent fraud while protecting citizens' rights. They followed emerging best practices and invested a vast amount of time and money in a project that eventually processed live welfare applications. But in their pilot, they found that the system they'd developed was still neither fair nor effective. Why?

Lighthouse Reports, MIT Technology Review, and the Dutch newspaper Trouw have gained unprecedented access to the system to try to find out. Read about what we discovered.

—Eileen Guo, Gabriel Geiger & Justin-Casimir Braun
We can still have nice things

A place for comfort, fun and distraction to brighten up your day. (Got any ideas? Drop me a line or skeet 'em at me.)

+ Hear me out: a truly great festive film doesn't need to be about Christmas at all.
+ Maybe we should judge a book by its cover after all.
+ Happy birthday to Ms Britney Spears, still the princess of pop at 44!
+ The fascinating psychology behind why we love travelling so much.



Real Apple support emails used in new phishing scam

A new phishing scam is getting a lot of attention because it uses real Apple Support tickets to trick people into giving up their accounts. Broadcom's Eric Moret shared how he nearly lost his entire Apple account after trusting what looked like official communication; he described the full experience, step by step, in a detailed post on Medium.

This scheme stands out because the scammers relied on Apple's own support system to make their messages look legitimate. They created an experience that felt polished and professional from the first alert to the final phone call. Here's how the scam unfolded.

(Scammers are exploiting real Apple Support tickets to trick users into handing over their accounts, experts warn. Photo by STR/NurPhoto via Getty Images)

How the scam starts

Moret first received a flood of alerts, including two-factor authentication notifications that claimed someone was trying to access his iCloud account. Within minutes, he got phone calls from calm, helpful callers who claimed to be Apple agents ready to fix the issue.

The twist is how convincing the entire setup felt. The scammers exploited a flaw in Apple's support system that lets anyone create a genuine support ticket without verification. They opened a real Apple Support case in his name, which triggered official emails from an Apple domain. This built instant trust and lowered Moret's guard.

How scammers gained access to the account

During a 25-minute call, the fake agents guided Moret through what they said would secure his account. They walked him through the steps to reset his iCloud password.
They also told him a link would follow so he could close the case. That link took him to a fake site called "appeal apple dot com." The page looked official and claimed his account was being secured. It then told him to enter a six-digit code sent by text to finish the process. When Moret entered that code, the scammers got exactly what they needed to sign into his account.

He then got an alert that his Apple ID had been used to sign into a Mac mini he did not own, which confirmed the takeover attempt. Even though the scammer on the phone said this was normal, Moret trusted his instinct and reset his password again, which kicked the attackers out and stopped the attack.

(A Broadcom executive says he nearly lost access to his Apple ID after trusting a fraudulent support call that looked legitimate. Photo by Jakub Porzycki/NurPhoto via Getty Images)

How to protect yourself from the Apple Support ticket scam

This type of scam works because it feels real. The messages look official, and the callers sound trained. Still, you can stay safer by watching for signs that something is off.

1) Verify support tickets inside your Apple account
Scammers created a real-looking ticket to make the entire experience seem legitimate. You can confirm what's real by checking directly with Apple: sign in at appleid.apple.com or open the Apple Support app to view your recent cases. If the case number isn't listed there, the message is fake, even if the email comes from an Apple domain.

2) Hang up and call Apple yourself
Never stay on a call that you did not initiate. Scammers rely on long conversations to build trust and pressure you into quick decisions. Hang up right away and call Apple Support directly at 1-800-275-2273 or through the Support app. A real agent will quickly confirm whether anything is wrong.

3) Check your Apple ID device list
If something feels off, look at the devices signed into your account.
Go to Settings, tap your name and scroll to see all devices linked to your Apple ID. Remove anything you don't recognize. This step can stop attackers fast if they've managed to get in.

4) Never share verification codes
No real support agent will ever ask for your two-factor authentication codes. Treat any request for these codes as a major warning.

5) Check every link carefully
Look closely at URLs. Fake sites often add extra words or change formatting to appear real. Apple will never send you to a site like "appeal apple dot com."

(Criminals are using Apple's own support system to generate real case emails that build false confidence with victims. Photo by Fairfax Media via Getty Images)

6) Use strong antivirus software
Strong antivirus software can spot dangerous links, unsafe sites and fake support messages before you tap them. Anti-phishing tools are especially important with scams like this one, since the attackers used a fake site and real ticket emails to trick victims. Get my picks for the best 2025 antivirus protection winners for your Windows, Mac, Android and iOS devices at Cyberguy.com.

7) Use a data removal service
Data brokers collect your phone number, home address, email and other details that scammers use to personalize attacks. A data removal service can wipe much of that information from broker sites, which makes you a harder target for social engineering attempts like the one described in this article.
Check out my top picks for data removal services and get a free scan to find out if your personal information is already out on the web by visiting Cyberguy.com.

8) Turn on strong multi-layer protection
Keep two-factor authentication (2FA) on for every major account. This creates a barrier that quickly stops attackers.

9) Slow down before reacting
Scammers want you to panic. Pause before you act, and trust your instinct when something feels rushed or strange. A short delay could save your entire account.

Kurt's key takeaways

This scam shows how convincing criminals can be when they exploit real systems. Even careful users can fall for messages that look official and calls that sound professional. The best defense is to stay alert and take a moment before responding to anything unexpected. When you slow down, double-check support tickets and never share verification codes, you make yourself far harder to fool. Adding layers like antivirus protection and data removal services also gives you more control over what attackers can access. These simple habits can stop even the most polished scams before they get to your accounts.

What would you do if you got a support call that felt real but didn't seem right?
Let us know by writing to us at Cyberguy.com.

Copyright 2025 CyberGuy.com. All rights reserved.



The State of AI: welcome to the economic singularity

Welcome back to The State of AI, a new collaboration between the Financial Times and MIT Technology Review. Every Monday for the next two weeks, writers from both publications will debate one aspect of the generative AI revolution reshaping global power. This week, Richard Waters, FT columnist and former West Coast editor, talks with MIT Technology Review’s editor at large David Rotman about the true impact of AI on the job market. Bonus: If you’re an MIT Technology Review subscriber, you can join David and Richard, alongside MIT Technology Review’s editor in chief, Mat Honan, for an exclusive conversation live on Tuesday, December 9 at 1pm ET about this topic. Sign up to be a part here. Richard Waters writes:
Any far-reaching new technology is always uneven in its adoption, but few have been more uneven than generative AI. That makes it hard to assess its likely impact on individual businesses, let alone on productivity across the economy as a whole.

At one extreme, AI coding assistants have revolutionized the work of software developers. Mark Zuckerberg recently predicted that half of Meta's code would be written by AI within a year. At the other extreme, most companies are seeing little if any benefit from their initial investments. A widely cited study from MIT found that so far, 95% of gen AI projects produce zero return.
That has provided fuel for the skeptics who maintain that—by its very nature as a probabilistic technology prone to hallucinating—generative AI will never have a deep impact on business. To many students of tech history, though, the lack of immediate impact is just the normal lag associated with transformative new technologies. Erik Brynjolfsson, then an assistant professor at MIT, first described what he called the "productivity paradox of IT" in the early 1990s. Despite plenty of anecdotal evidence that technology was changing the way people worked, it wasn't showing up in the aggregate data in the form of higher productivity growth. Brynjolfsson's conclusion was that it just took time for businesses to adapt. Big investments in IT finally showed through with a notable rebound in US productivity growth starting in the mid-1990s. But that tailed off a decade later and was followed by a second lull.

In the case of AI, companies need to build new infrastructure (particularly data platforms), redesign core business processes, and retrain workers before they can expect to see results. If a lag effect explains the slow results, there may at least be reasons for optimism: much of the cloud computing infrastructure needed to bring generative AI to a wider business audience is already in place.

The opportunities and the challenges are both enormous. An executive at one Fortune 500 company says his organization has carried out a comprehensive review of its use of analytics and concluded that its workers, overall, add little or no value. Rooting out the old software and replacing that inefficient human labor with AI might yield significant results. But, as this person says, such an overhaul would require big changes to existing processes and take years to carry out.

There are some early encouraging signs. US productivity growth, stuck at 1% to 1.5% for more than a decade and a half, rebounded to more than 2% last year.
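Those growth rates sound close together, but compounding makes the gap large. A quick back-of-the-envelope calculation, using only the figures cited above (1% to 1.5% stagnant trend versus last year's 2% rebound) over an arbitrary 20-year horizon:

```python
# Compound the annual productivity growth rates cited above over 20 years
# to see how much small differences in the rate matter.
def growth_factor(rate_pct: float, years: int) -> float:
    """Cumulative output-per-hour multiple after `years` at `rate_pct` annual growth."""
    return (1 + rate_pct / 100) ** years

for rate in (1.0, 1.5, 2.0):  # the stagnant range vs. last year's rebound
    print(f"{rate:.1f}%/yr for 20 years -> {growth_factor(rate, 20):.2f}x")
```

Two decades at 2% yields roughly a 1.49x rise in output per hour, versus about 1.22x at 1%: the difference between those trajectories is a sizable share of an economy.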
It probably hit the same level in the first nine months of this year, though the lack of official data due to the recent US government shutdown makes this hard to confirm. Nor can we tell how durable this rebound will be or how much of it can be attributed to AI.

The effects of new technologies are seldom felt in isolation. Instead, the benefits compound. AI is riding earlier investments in cloud and mobile computing. In the same way, the latest AI boom may only be the precursor to breakthroughs in fields that have a wider impact on the economy, such as robotics. ChatGPT might have caught the popular imagination, but OpenAI's chatbot is unlikely to have the final word.

David Rotman replies:

This is my favorite discussion these days when it comes to artificial intelligence. How will AI affect overall economic productivity? Forget about the mesmerizing videos, the promise of companionship, and the prospect of agents to do tedious everyday tasks: the bottom line will be whether AI can grow the economy, and that means increasing productivity.

But, as you say, it's hard to pin down just how AI is affecting such growth or how it will do so in the future. Erik Brynjolfsson predicts that, like other so-called general purpose technologies, AI will follow a J curve: initially a slow, even negative, effect on productivity as companies invest heavily in the technology, before they finally reap the rewards. And then the boom.

But there is a counterexample undermining the just-be-patient argument. Productivity growth from IT picked up in the mid-1990s but has been relatively dismal since the mid-2000s. Despite smartphones, social media and apps like Slack and Uber, digital technologies have done little to produce robust economic growth. A strong productivity boost never came.

Daron Acemoglu, an economist at MIT and a 2024 Nobel Prize winner, argues that the productivity gains from generative AI will be far smaller and arrive far more slowly than AI optimists think. The reason is that though the technology is impressive in many ways, the field is too narrowly focused on products that have little relevance to the largest business sectors. The statistic you cite, that 95% of AI projects lack business benefits, is telling.

Take manufacturing. No question, some version of AI could help; imagine a worker on the factory floor snapping a picture of a problem and asking an AI agent for advice.
The problem is that the big tech companies creating AI aren't really interested in solving such mundane tasks, and their large foundation models, mostly trained on the internet, aren't all that helpful.

It's easy to blame the lack of productivity impact from AI so far on business practices and poorly trained workers. Your example of the Fortune 500 executive sounds all too familiar. But it's more useful to ask how AI can be trained and fine-tuned to give workers, like nurses, teachers and those on the factory floor, more capabilities and make them more productive at their jobs.

The distinction matters. Some companies announcing large layoffs recently cited AI as the reason. The worry, however, is that this is just a short-term cost-saving scheme. As economists like Brynjolfsson and Acemoglu agree, the productivity boost from AI will come when it's used to create new types of jobs and augment the abilities of workers, not when it is used just to slash jobs to reduce costs.
Richard Waters responds: I see we're both feeling pretty cautious, David, so I'll try to end on a positive note.
Some analyses assume that a much greater share of existing work is within the reach of today's AI. McKinsey reckons 60% (versus 20% for Acemoglu) and puts annual productivity gains across the economy at as much as 3.4%. Also, calculations like these are based on automation of existing tasks; any new uses of AI that enhance existing jobs would, as you suggest, be a bonus (and not just in economic terms). Cost-cutting always seems to be the first order of business with any new technology. But we're still in the early stages and AI is moving fast, so we can always hope.

Further reading

FT chief economics commentator Martin Wolf has been skeptical about whether tech investment boosts productivity but says AI might prove him wrong. The downside: job losses and wealth concentration might lead to "techno-feudalism."

The FT's Robert Armstrong argues that the boom in data center investment need not turn to bust. The biggest risk is that debt financing will come to play too big a role in the buildout.

Last year, David Rotman wrote for MIT Technology Review about how we can make sure AI works for us in boosting productivity, and what course corrections will be required. David also wrote this piece about how we can best measure the impact of basic R&D funding on economic growth, and why it can often be bigger than you might think.

The State of AI: welcome to the economic singularity


Company restores AI teddy bear sales after safety scare

FoloToy paused sales of its AI teddy bear Kumma after a safety group found the toy gave risky and inappropriate responses during testing. Now the company says it has restored sales after a week of intense review. It also claims that it improved safeguards to keep kids safe.

The announcement arrived through a social media post that highlighted a push for stronger oversight. The company said it completed testing, reinforced safety modules and upgraded its content filters. It added that it aims to build age-appropriate AI companions for families worldwide.

Sign up for my FREE CyberGuy Report: Get my best tech tips, urgent security alerts, and exclusive deals delivered straight to your inbox. Plus, you’ll get instant access to my Ultimate Scam Survival Guide – free when you join my CYBERGUY.COM newsletter.

FoloToy resumed sales of its AI teddy bear Kumma after a weeklong review prompted by safety concerns. (Kurt “CyberGuy” Knutsson)

Why FoloToy’s AI teddy bear raised safety concerns

The controversy started when the Public Interest Research Group Education Fund tested three different AI toys. All of them produced concerning answers that touched on religion, Norse mythology and harmful household items.

Kumma stood out for the wrong reasons. When the bear used the Mistral model, it offered tips on where to find knives, pills and matches. It even outlined steps to light a match and blow it out.

Tests with the GPT-4o model raised even sharper concerns. Kumma gave advice related to kissing and launched into detailed explanations of adult sexual content when prompted. The bear pushed further by asking the young user what they wanted to explore.

Researchers called the behavior unsafe and inappropriate for any child-focused product.

FoloToy paused access to its AI toys

Once the findings became public, FoloToy suspended sales of Kumma and its other AI toys.
The company told PIRG that it started a full safety audit across all products. OpenAI also confirmed that it suspended FoloToy’s access to its models for violating policies designed to protect anyone under 18.

The company says new safeguards and upgraded filters are now in place to prevent inappropriate responses. (Kurt “CyberGuy” Knutsson)

Why FoloToy restored Kumma’s sales after its safety review

FoloToy brought Kumma back to its online store just one week after suspending sales. The fast return drew attention from parents and safety experts who wondered if the company had enough time to fix the serious issues identified in PIRG’s report.

FoloToy posted a detailed statement on X that laid out its version of what happened. In the post, the company said it viewed child safety as its “highest priority” and that it was “the only company to proactively suspend sales, not only of the product mentioned in the report, but also of our other AI toys.” FoloToy said it took this action “immediately after the findings were published because we believe responsible action must come before commercial considerations.”

The company also emphasized to CyberGuy that it was the only one of the three AI toy startups in the PIRG review to suspend sales across all of its products, and that it made this decision during the peak Christmas sales season, knowing the commercial impact would be significant. FoloToy told us, “Nevertheless, we moved forward decisively, because we believe that responsible action must always come before commercial interests.”

The company also said it took the report’s disturbing examples seriously.
According to FoloToy, the issues were “directly addressed in our internal review.” It explained that the team “initiated a deep, company-wide internal safety audit,” then “strengthened and upgraded our content-moderation and child-safety safeguards,” and “deployed enhanced safety rules and protections through our cloud-based system.”

After outlining these steps, the company said it spent the week on “rigorous review, testing, and reinforcement of our safety modules.” It concluded its announcement by saying it “began gradually restoring product sales” as those updated safeguards went live.

FoloToy added that as global attention on AI toy risks grows, “transparency, responsibility and continuous improvement are essential,” and that the company “remains firmly committed to building safe, age-appropriate AI companions for children and families worldwide.”

Safety testers previously found the toy giving risky guidance about weapons, matches and adult content. (Kurt “CyberGuy” Knutsson)

Why experts still question FoloToy’s AI toy safety fixes

PIRG researcher RJ Cross said her team plans to test the updated toys to see if the fixes hold up. She noted that a week feels fast for such significant changes, and only new tests will show if the product now behaves safely.

Parents will want to follow this closely as AI-powered toys grow more common. The speed of FoloToy’s relaunch raises questions about the depth of its review.

Tips for parents before buying AI toys

AI toys can feel exciting and helpful, but they can also surprise you with content you’d never expect. If you plan to bring an AI-powered toy into your home, these simple steps can help you stay in control.

1) Check which AI model the toy uses
Not every model follows the same guardrails. Some include stronger filters while others may respond too freely.
Look for transparent disclosures about which model powers the toy and what safety features support it.

2) Read independent reviews
Groups like PIRG often test toys in ways parents cannot. These reviews flag hidden risks and point out behavior you may not catch during quick demos.

3) Set clear usage rules
Keep AI toys in shared spaces where you can hear or see how your child interacts with them. This helps you step in if the toy gives a concerning answer.

4) Test the toy yourself first
Ask the toy questions, try creative prompts and see how it handles tricky topics. This lets you learn how it behaves before you hand it to your child.

5) Update the toy’s firmware
Many AI toys run on cloud systems. Updates often add stronger safeguards or reduce risky answers. Make sure the device stays current.

6) Check for a clear privacy policy
AI toys can gather voice data, location info or behavioral patterns. A strong privacy policy should explain what is collected, how long it is stored and who can access it.

7) Watch for sudden behavior changes
If an AI toy starts giving odd answers or pushes into areas that feel inappropriate, stop using it and report the problem to the manufacturer.

Take my quiz: How safe is your online security? Think your devices and data are truly protected? Take this quick quiz to see where your digital habits stand. From passwords to Wi-Fi settings, you’ll get a personalized breakdown of what you’re doing right and what needs improvement. Take my quiz here: Cyberguy.com

Kurt’s key takeaways

AI toys can offer fun and learning, but they can also expose kids to unexpected risks. FoloToy says it improved Kumma’s safety, yet experts still want proof. Until the updated toy goes through independent testing, families may want to stay cautious.

Do you think AI toys can ever be fully safe for young kids?
Let us know by writing to us at Cyberguy.com.

Copyright 2025 CyberGuy.com. All rights reserved.



An AI model trained on prison phone calls now looks for planned crimes in those calls

A US telecom company trained an AI model on years of inmates’ phone and video calls and is now piloting that model to scan their calls, texts, and emails in the hope of predicting and preventing crimes.

Securus Technologies president Kevin Elder told MIT Technology Review that the company began building its AI tools in 2023, using its massive database of recorded calls to train AI models to detect criminal activity. It created one model, for example, using seven years of calls made by inmates in the Texas prison system, but it has been working on building other state- or county-specific models. Over the past year, Elder says, Securus has been piloting the AI tools to monitor inmate conversations in real time (the company declined to specify where this is taking place, but its customers include jails holding people awaiting trial, prisons for those serving sentences, and Immigration and Customs Enforcement detention facilities).

“We can point that large language model at an entire treasure trove [of data],” Elder says, “to detect and understand when crimes are being thought about or contemplated, so that you’re catching it much earlier in the cycle.”
As with its other monitoring tools, the AI features can be deployed to monitor randomly selected conversations or those of individuals whom facility investigators suspect of criminal activity, according to Elder. The model will analyze phone and video calls, text messages, and emails and then flag sections for human agents to review. These agents then send them to investigators for follow-up.

In an interview, Elder said Securus’ monitoring efforts have helped disrupt human trafficking and gang activities organized from within prisons, among other crimes, and said its tools are also used to identify prison staff who are bringing in contraband. But the company did not provide MIT Technology Review with any cases specifically uncovered by its new AI models.
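The flow described above — random sampling plus targeted monitoring, model scoring, then a human review queue — is generic enough to sketch. To be clear, this is not Securus’ actual system: every function name, score, and threshold here is invented for illustration, and the keyword ratio stands in for whatever model a real deployment would use.

```python
import random

def toy_score(text: str) -> float:
    """Stand-in for a model that scores a message for suspected criminal planning."""
    suspicious = {"contraband", "drop", "package"}  # invented watchlist
    words = text.lower().split()
    return sum(w in suspicious for w in words) / max(len(words), 1)

def select_for_review(conversations, suspects, sample_rate=0.1, threshold=0.2, seed=0):
    """Queue conversations that are either randomly sampled or from named
    suspects, and whose score clears the flagging threshold."""
    rng = random.Random(seed)
    queue = []
    for conv in conversations:
        targeted = conv["inmate"] in suspects
        sampled = rng.random() < sample_rate
        if (targeted or sampled) and toy_score(conv["text"]) >= threshold:
            queue.append(conv["inmate"])  # human agents review these first
    return queue
```

The shape of the pipeline is the point: the model only prioritizes; flagged sections still go to human agents, who pass them to investigators.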
People in prison, and those they call, are notified that their conversations are recorded. But this doesn’t mean they’re aware that those conversations could be used to train an AI model, says Bianca Tylek, executive director of the prison rights advocacy group Worth Rises.

“That’s coercive consent; there’s literally no other way you can communicate with your family,” Tylek says. And since inmates in the vast majority of states pay for these calls, she adds, “not only are you not compensating them for the use of their data, but you’re actually charging them while collecting their data.”

A company spokesperson said that correctional facilities determine their own recording and monitoring policies, which Securus follows, and did not directly answer whether inmates can opt out of having their recordings used to train AI.

Other advocates for inmates say Securus has a history of violating their civil liberties. For example, leaks of its recordings databases showed the company had improperly recorded thousands of calls between inmates and their attorneys. Corene Kendrick, the deputy director of the ACLU’s National Prison Project, says that the new AI system enables invasive surveillance, and courts have specified few limits to this power. “[Are we] going to stop crime before it happens because we’re monitoring every utterance and thought of incarcerated people?” Kendrick says. “I think this is one of many situations where the technology is way far ahead of the law.”

The Securus spokesperson said the tool “is not focused on surveilling or targeting specific individuals, but rather on identifying broader patterns, anomalies, and unlawful behaviors across the entire communication system.” They added that its function is to make monitoring more efficient amid staffing shortages, “not to surveil individuals without cause.”

Securus will have an easier time funding its AI tool thanks to the company’s recent win in a battle with regulators over how telecom companies can spend the money they collect from inmates’ calls. In 2024, the Federal Communications Commission issued a major reform, shaped and lauded by advocates for prisoners’ rights, that forbade telecoms from passing the costs of recording and surveilling calls on to inmates. Companies were allowed to continue to charge inmates a capped rate for calls, but prisons and jails were ordered to pay for most security costs out of their own budgets.

Negative reactions to this change were swift. Associations of sheriffs (who typically run county jails) complained they could no longer afford proper monitoring of calls, and attorneys general from 14 states sued over the ruling. Some prisons and jails warned they would cut off access to phone calls.

While it was building and piloting its AI tool, Securus held meetings with the FCC and lobbied for a rule change, arguing that the 2024 reform went too far and asking that the agency again allow companies to use fees collected from inmates to pay for security.

In June, Brendan Carr, whom President Donald Trump appointed to lead the FCC, said it would postpone all deadlines for jails and prisons to adopt the 2024 reforms, and even signaled that the agency wants to help telecom companies fund their AI surveillance efforts with the fees paid by inmates. In a press release, Carr wrote that rolling back the 2024 reforms would “lead to broader adoption of beneficial public safety tools that include advanced AI and machine learning.”

On October 28, the agency went further: it voted to pass new, higher rate caps and allow companies like Securus to pass security costs relating to the recording and monitoring of calls—storing recordings, transcribing them, or building AI tools to analyze them, for example—on to inmates.

A spokesperson for Securus told MIT Technology Review that the company aims to balance affordability with the need to fund essential safety and security tools. “These tools, which include our advanced monitoring and AI capabilities, are fundamental to maintaining secure facilities for incarcerated individuals and correctional staff and to protecting the public,” they wrote.

FCC commissioner Anna Gomez dissented in last month’s ruling.
“Law enforcement,” she wrote in a statement, “should foot the bill for unrelated security and safety costs, not the families of incarcerated people.” The FCC will be seeking comment on these new rules before they take final effect. 



Chinese hackers turned AI tools into an automated attack machine

Cybersecurity has been reshaped by the rapid rise of advanced artificial intelligence tools, and recent incidents show just how quickly the threat landscape is shifting. Over the past year, we’ve seen a surge in attacks powered by AI models that can write code, scan networks and automate complex tasks. This capability has helped defenders, but it has also enabled attackers who are moving faster than before.

The latest example is a major cyberespionage campaign conducted by a Chinese state-linked group that used Anthropic’s Claude to carry out large parts of an attack with very little human involvement.

How Chinese hackers turned Claude into an automated attack machine

In mid-September 2025, Anthropic investigators spotted unusual behavior that eventually revealed a coordinated and well-resourced campaign. The threat actor, assessed with high confidence as a Chinese state-sponsored group, had used Claude Code to target roughly thirty organizations worldwide. The list included major tech firms, financial institutions, chemical manufacturers and government bodies. A small number of those attempts resulted in successful breaches.

Claude handled most of the operation autonomously, triggering thousands of requests and generating detailed documentation of the attack for future use. (Kurt “CyberGuy” Knutsson)

How the attackers bypassed Claude’s safeguards

This was not a typical intrusion. The attackers built a framework that let Claude act as an autonomous operator. Instead of asking the model to help, they tasked it with executing most of the attack. Claude inspected systems, mapped out internal infrastructure and flagged databases worth targeting.
The speed was unlike anything a human team could replicate.

To get around Claude’s safety rules, the attackers broke their plan into tiny, innocent-looking steps. They also told the model it was part of a legitimate cybersecurity team performing defensive testing. Anthropic later noted that the attackers didn’t simply hand tasks to Claude; they engineered the operation to make the model believe it was performing authorized pentesting work, splitting the attack into harmless-looking pieces and using multiple jailbreak techniques to push past its safeguards.

Once inside, Claude researched vulnerabilities, wrote custom exploits, harvested credentials and expanded access. It worked through these steps with little supervision and reported back only when it needed human approval for major decisions.

The model also handled the data extraction. It collected sensitive information, sorted it by value and identified high-privilege accounts. It even created backdoors for future use. In the final stage, Claude generated detailed documentation of what it had done. This included stolen credentials, systems analyzed and notes that could guide future operations.

Across the entire campaign, investigators estimate that Claude performed around eighty to ninety percent of the work. Human operators stepped in only a handful of times. At its peak, the AI triggered thousands of requests, often multiple per second, a pace far beyond what any human team could achieve. Although it occasionally hallucinated credentials or misread public data as secret, those errors underscored that fully autonomous cyberattacks still face limitations, even when an AI model handles the majority of the work.

Why this AI-powered Claude attack is a turning point for cybersecurity

This campaign shows how much the barrier to high-end cyberattacks has dropped. A group with far fewer resources could now attempt something similar by leaning on an autonomous AI agent to do the heavy lifting.
Tasks that once required years of expertise can now be automated by a model that understands context, writes code and uses external tools without direct oversight.

Earlier incidents documented AI misuse, but humans were still steering every step. This case is different. The attackers needed very little involvement once the system was in motion. And while the investigation focused on usage within Claude, researchers believe similar activity is happening across other advanced models, such as Google Gemini, OpenAI’s ChatGPT or xAI’s Grok.

This raises a difficult question. If these systems can be misused so easily, why continue building them? According to researchers, the same capabilities that make AI dangerous are also what make it essential for defense. During this incident, Anthropic’s own team used Claude to analyze the flood of logs, signals and data their investigation uncovered. That level of support will matter even more as threats grow.

We reached out to Anthropic for comment but did not hear back before our deadline.

Hackers used Claude to map networks, scan systems, and identify high-value databases in a fraction of the time human attackers would need. (Kurt “CyberGuy” Knutsson)

You may not be the direct target of a state-sponsored campaign, but many of the same techniques trickle down to everyday scams, credential theft and account takeovers. Here are seven steps you can take to stay safer.

1) Use strong antivirus software and keep it updated
Strong antivirus software does more than scan for known malware. It looks for suspicious patterns, blocked connections and abnormal system behavior.
This is important because AI-driven attacks can generate new code quickly, which means traditional signature-based detection is no longer enough. The best way to safeguard yourself from malicious links that install malware, potentially accessing your private information, is to have strong antivirus software installed on all your devices. This protection can also alert you to phishing emails and ransomware scams, keeping your personal information and digital assets safe. Get my picks for the best 2025 antivirus protection winners for your Windows, Mac, Android and iOS devices at Cyberguy.com.

2) Rely on a password manager
A good password manager helps you create long, random passwords for every service you use. This matters because AI can generate and test password variations at high speed. Using the same password across accounts can turn a single leak into a full compromise. Next, see if your email has been exposed in past breaches. Our #1 password manager pick (see Cyberguy.com) includes a built-in breach scanner that checks whether your email address or passwords have appeared in known leaks. If you discover a match, immediately change any reused passwords and secure those accounts with new, unique credentials. Check out the best expert-reviewed password managers of 2025 at Cyberguy.com.

3) Consider using a personal data removal service
A large part of modern cyberattacks begins with publicly available information. Attackers often gather email addresses, phone numbers, old passwords and personal details from data broker sites. AI tools make this even easier, since they can scrape and analyze huge datasets in seconds. A personal data removal service helps clear your information from these broker sites so you are harder to profile or target. While no service can guarantee the complete removal of your data from the internet, a data removal service is a smart choice.
They aren’t cheap, and neither is your privacy. These services do all the work for you by actively monitoring and systematically erasing your personal information from hundreds of websites. It’s what gives me peace of mind and has proven to be the most effective way to erase your personal data from the internet. By limiting the information available, you reduce the risk of scammers cross-referencing data from breaches with information they might find on the dark web, making it harder for them to target you. Check out my top picks for data removal services and get a free scan to find out if your personal information is already out on the web by visiting Cyberguy.com.

4) Turn on two-factor authentication wherever possible
Strong passwords alone are not enough when attackers can steal credentials through malware, phishing pages or automated scripts. Two-factor authentication adds a serious roadblock. Use app-based codes or hardware keys instead of SMS. While no method is perfect, this extra layer often stops unauthorized logins even when attackers have your password.

5) Keep your devices and apps fully updated
Attackers rely heavily on known vulnerabilities that people forget or ignore. System updates patch these flaws and close off entry points that attackers use to break in. Enable automatic updates on your phone, laptop, router and the apps you use most. If an update looks optional, treat it as important anyway, because many companies downplay security fixes in their release notes.

6) Install apps only from trusted sources
Malicious apps are one of the easiest ways attackers get inside your device. Stick to official app stores and avoid APK sites, shady download portals and random links shared on messaging apps. Even on official stores, check reviews, download counts and the developer name before installing anything.
Grant the minimum permissions required and avoid apps that ask for full access for no clear reason.

7) Ignore suspicious texts, emails and pop-ups
AI tools have made phishing more convincing. Attackers can generate clean messages, imitate writing styles, and craft perfect fake websites that match the real ones. Slow down when a message feels urgent or unexpected. Never click links from unknown senders, and verify requests from known contacts through a separate channel. If a pop-up claims your device is infected or your bank account is locked, close it and check directly through the official website.

By breaking tasks into small, harmless-looking steps, the threat actors tricked Claude into writing exploits, harvesting credentials, and expanding access. (Kurt “CyberGuy” Knutsson)

Kurt’s key takeaway

The attack carried out through Claude signals a major shift in how cyber threats will evolve. Autonomous AI agents can already perform complex tasks at speeds no human team can match, and this gap will only widen as models improve. Security teams now need to treat AI as a core part of their defensive toolkit, not a future add-on. Better threat detection, stronger safeguards and more sharing across the industry are going to be crucial. Because if attackers are already using AI at this scale, the window to prepare is shrinking fast.

Should governments push for stricter regulations on advanced AI tools? Let us know by writing to us at Cyberguy.com.

Copyright 2025 CyberGuy.com. All rights reserved.
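A footnote on tip 4: the app-based codes recommended there are usually time-based one-time passwords (TOTP, RFC 6238). As a minimal sketch of what an authenticator app actually computes — the secret below is the RFC’s published test value, not a real credential:

```python
import base64
import hmac
import struct
import time

def totp(secret_b32, t=None, digits=6, step=30):
    """Derive a time-based one-time password (RFC 6238) from a base32 secret."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int((time.time() if t is None else t) // step)
    # HOTP (RFC 4226): HMAC-SHA1 over the big-endian counter, then dynamic truncation
    mac = hmac.new(key, struct.pack(">Q", counter), "sha1").digest()
    offset = mac[-1] & 0x0F
    code = (int.from_bytes(mac[offset:offset + 4], "big") & 0x7FFFFFFF) % (10 ** digits)
    return str(code).zfill(digits)
```

Because the code depends only on the shared secret and the current 30-second window, it works offline and can’t be intercepted in transit the way an SMS code can — which is why app-based codes are the stronger choice.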



Fox News AI Newsletter: How to stop AI from scanning your email

Gmail on a tablet. (Kurt “CyberGuy” Knutsson)
Welcome to Fox News’ Artificial Intelligence newsletter with the latest AI technology advancements.

IN TODAY’S NEWSLETTER:
- How to stop Google AI from scanning your Gmail
- IRS to roll out Salesforce AI agents following workforce reduction: report
- AI chatbots shown effective against antisemitic conspiracies in new study

EYES OFF THE INBOX: Google shared a new update on Nov. 5, confirming that Gemini Deep Research can now use context from your Gmail, Drive and Chat. This allows the AI to pull information from your messages, attachments and stored files to support your research.

‘CHANGE IS COMING’: The Internal Revenue Service (IRS) is implementing a Salesforce artificial intelligence (AI) agent program across multiple divisions in the wake of a mass workforce reduction earlier this year, according to a report.

FACT CHECK TECH: AI chatbots could be one of the tools of the future for fighting hate and conspiracy theories, a new study shows. Researchers found that short dialogues with chatbots designed to engage with believers of antisemitic conspiracy theories led to measurable changes in what people believe.

The image depicts Archer’s development plans for Hawthorne Airport in Los Angeles, CA. (Archer Aviation)

SKY TAKEOVER: Archer Aviation, a leading developer of electric vertical takeoff and landing (eVTOL) aircraft, just made one of its boldest moves yet. The company agreed to acquire Hawthorne Airport for $126 million in cash.

DIGITAL IMPOSTERS: App stores are supposed to be reliable and free of malware or fake apps, but that’s far from the truth. For every legitimate application that solves a real problem, there are dozens of knockoffs waiting to exploit brand recognition and user trust. We’ve seen it happen with games, productivity tools and entertainment apps.
Now, artificial intelligence has become the latest battleground for digital impostors.

AI TRANSFORMATION: HP announced Tuesday that it plans to cut between 4,000 and 6,000 employees by the end of 2028 as part of its push to adopt artificial intelligence.

An “AI” sign for “Artificial Intelligence” stands at the Amazon Web Services (AWS) stand at the Hannover Messe 2025 industrial trade fair. (Julian Stratenschulte/picture alliance via Getty Images)

RACE FOR AI: Amazon Web Services (AWS) on Monday announced a plan to build and deploy purpose-built artificial intelligence (AI) and high-performance computing for the U.S. government for the first time.

BREAKING CHINA: Beijing has repeatedly shown the world that it is willing to weaponize its dominance of supply chains, and President Donald Trump had to de-escalate the latest rare-earth dispute during his recent trip to Asia. But rare earths are only a small window into the power that China could have over the U.S. economy as we start adopting tomorrow’s technologies.

NO RESERVATIONS: Maybe you order sparkling water, start every meal with an appetizer or prefer dining right when the restaurant opens. You might not track these habits. OpenTable might.

Stay up to date on the latest AI technology advancements, and learn about the challenges and opportunities AI presents now and for the future with Fox News here.



How background AI builds operational resilience & visible ROI

If you asked most enterprise leaders which AI tools are delivering ROI, many would point to front-end chatbots or customer support automation. That’s the wrong door. The most value-generating AI systems today aren’t loud, customer-facing marvels. They’re tucked away in backend operations. They work silently, flagging irregularities in real time, automating risk reviews, mapping data lineage, or helping compliance teams detect anomalies before regulators do. These tools don’t ask for credit, but they are saving millions.

Operational resilience no longer comes from having the loudest AI tool. It comes from having the smartest one, placed where it quietly does the work of five teams before lunch.

The machines that spot what humans don’t

Take the case of a global logistics company that integrated a background AI system for monitoring procurement contracts. The tool scanned thousands of PDFs, email chains, and invoice patterns per hour. No flashy dashboard. No alerts that interrupt workflow. Just continuous monitoring. In the first six months, it flagged multiple vendor inconsistencies that, if left unchecked, would have resulted in regulatory audits.

The system didn’t just detect anomalies. It interpreted patterns. It noticed a vendor whose delivery timelines were always one day off compared to logged timestamps. Humans had seen those reports for months. But the AI noticed that the error always occurred near quarter-end. The conclusion? Inventory padding. That insight led to a contract renegotiation that saved millions.

This isn’t hypothetical. One similar real-world use case reported a seven-figure operational loss prevented through a near-identical approach. That’s the kind of ROI that doesn’t need a flashy pitch deck.

Why advanced education still matters in the age of AI

It’s easy to fall into the trap of thinking AI tools are replacing human expertise. But smart organisations aren’t replacing expertise; they’re reinforcing it.
People with advanced academic backgrounds are helping enterprises integrate AI with strategic precision. Specifically, those with a Doctorate of Business Administration in business intelligence bring an irreplaceable level of systems thinking and contextual insight. These professionals understand the complexity behind data ecosystems, from governance models to algorithmic biases, and can assess which tools serve long-term resilience versus short-term automation hype.

When AI models are trained on historical data, it takes educated leadership to spot where historical bias may become a future liability. And when AI starts making high-stakes decisions, you need someone who can ask better questions about risk exposure, model explainability, and ethics in decision-making. This is where doctorates aren’t just nice to have – they’re essential.

Invisible doesn’t mean simple

Too often, companies install AI as if it were antivirus software. Set it, forget it, hope it works. That’s how you get black-box risk. Invisible tools must still be transparent internally. It’s not enough to say, “AI flagged it.” The teams relying on these tools – risk officers, auditors, operations leads – must understand the decision-making logic, or at least the signals that drive the alert. This requires not just technical documentation, but collaboration between engineers and business units.

Enterprises that win with background AI systems build what could be called “decision-ready infrastructure”: workflows where data ingestion, validation, risk detection, and notification are all stitched together. Not in silos. Not in parallel systems. But in one loop that feeds actionable insight straight to the team responsible.
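The “one loop” idea is easier to see as a sketch than as prose. The Python below is purely illustrative: the stage names, record fields, vendor names, and risk threshold are all hypothetical placeholders, not details from any real deployment; a production system would back each stage with real connectors, queues, and models.

```python
# Minimal sketch of a "decision-ready" loop: ingestion -> validation
# -> risk detection -> notification, stitched into one pipeline
# instead of living in separate silos. All data is made up.

def ingest():
    # Pull raw records from source systems (stubbed here).
    return [{"vendor": "VendorA", "amount": 120_000},
            {"vendor": "VendorB", "amount": -50}]

def validate(records):
    # Drop records that fail basic integrity checks.
    return [r for r in records if r["amount"] > 0]

def detect(records, limit=100_000):
    # Flag anything over a (hypothetical) risk threshold.
    return [r for r in records if r["amount"] > limit]

def notify(alerts):
    # Route each alert straight to the responsible team.
    for a in alerts:
        print(f"ALERT: review {a['vendor']} ({a['amount']})")

def run_loop():
    # One loop, not parallel systems: each stage feeds the next.
    notify(detect(validate(ingest())))

run_loop()
```

The point of the design is the single feedback path: an invalid record never reaches risk detection, and every detection lands with the team that owns the response.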
That’s resilience.

Where operational AI works best

Here’s where invisible AI is already proving its worth across industries:

Compliance Monitoring: Automatically detecting early signs of non-compliance in internal logs, transactional data, and communication channels without triggering false positives.
Data Integrity: Identifying stale, duplicate, or inconsistent data across business units to prevent decision errors and reporting flaws.
Fraud Detection: Recognising pattern shifts in transactions before losses occur, not reactive alerts after the fact.
Supply Chain Optimisation: Mapping supplier dependencies and predicting bottlenecks based on third-party risk signals or external disruptions.

In all these cases, the key isn’t automation for automation’s sake. It’s precision: AI models that are well-calibrated, integrated with domain knowledge, and fine-tuned by experts – not simply deployed off the shelf.

What makes these systems resilient?

Operational resilience isn’t built in a sprint. It’s the result of smart layering. One layer catches data inconsistencies. Another tracks compliance drift. Another analyses behavioural signals across departments. And yet another feeds all of that into a risk model trained on historical issues.

That resilience depends on:

Human supervision with domain expertise, especially from those trained in business intelligence.
Cross-functional transparency, so that audit, tech, and business teams are aligned.
The ability to adapt models over time as the business evolves, not just retrain when performance dips.

Systems that get this wrong often create alert fatigue or over-correct with rigid rule-based models. That’s not AI. That’s bureaucracy in disguise.

Real ROI doesn’t scream

Most ROI-focused teams chase visibility: dashboards, reports, charts. But the most valuable AI tools don’t scream. They tap a shoulder. They point out a loose thread. They suggest a second look. That’s where the money is. Quiet detection. Small interventions. Avoided disasters.

The companies that treat AI as a quiet partner – not a front-row magician – are already ahead. They’re using it to build internal resilience, not just customer-facing shine. They’re integrating it with human intelligence, not replacing it. And most of all, they’re measuring ROI not by how cool the tech looks, but by how quietly it works.

That’s the future. Invisible AI agents and assistants. Visible outcomes. Real, measurable resilience.
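The quarter-end anomaly from the logistics story is a good example of a check that is trivial to encode once a human (or a model) has surfaced the hypothesis. The sketch below is illustrative only: the record format, vendor names, dates, and thresholds are assumptions for the example, not details from the actual case.

```python
from datetime import date, timedelta

# Hypothetical delivery records: (vendor, logged_date, days_late).
# In a real system these would come from invoice/ERP data.
RECORDS = [
    ("VendorA", date(2024, 3, 29), 1),
    ("VendorA", date(2024, 6, 28), 1),
    ("VendorA", date(2024, 9, 27), 1),
    ("VendorA", date(2024, 5, 14), 0),
    ("VendorB", date(2024, 4, 2), 1),
    ("VendorB", date(2024, 7, 16), 0),
]

def quarter_end(d: date) -> date:
    """Last day of the quarter containing d."""
    q_end_month = ((d.month - 1) // 3 + 1) * 3
    next_month = date(d.year + q_end_month // 12, q_end_month % 12 + 1, 1)
    return next_month - timedelta(days=1)

def near_quarter_end(d: date, window: int = 7) -> bool:
    """True if d falls within `window` days of its quarter's end."""
    return (quarter_end(d) - d).days < window

def flag_vendors(records, threshold=0.8):
    """Flag vendors whose late deliveries cluster near quarter-end."""
    stats = {}
    for vendor, logged, days_late in records:
        if days_late <= 0:
            continue  # only late deliveries are of interest
        total, near = stats.get(vendor, (0, 0))
        stats[vendor] = (total + 1, near + near_quarter_end(logged))
    return [v for v, (total, near) in stats.items()
            if total >= 3 and near / total >= threshold]

print(flag_vendors(RECORDS))  # ['VendorA']
```

A rule this simple only exists because something first noticed the pattern; the value of the background system is in generating hypotheses like “lateness clusters at quarter-end,” which humans then turn into durable checks.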


paradromics implant

Fully implantable brain chip aims to restore real speech

A U.S. neurotechnology startup called Paradromics is gaining momentum in the fast-growing field of brain-computer interfaces. The FDA has approved its first human trial, built to test whether its fully implantable device can restore speech for people with paralysis. This milestone gives the Austin company a strong position in a competitive space, shaping the future of neural technology.

Paradromics received Investigational Device Exemption status for the Connect-One Early Feasibility Study using its Connexus BCI. It is the first approved study to explore speech restoration with a fully implantable system. The research team wants to evaluate safety and see how well the device converts neural activity into text or a synthesized voice.

How the brain implant works

The implant uses hundreds of tiny electrodes to capture detailed signals from the motor cortex, where the brain forms sounds and shapes words. (Paradromics)

Paradromics developed a fully implantable, speech-focused brain device called the Connexus BCI. The company designed it to capture detailed neural signals that support real-time communication for people who cannot speak. The system uses high-resolution electrodes and an implanted wireless setup to record activity from individual neurons involved in forming speech.

The Connexus BCI has a titanium body with more than 400 platinum-iridium electrodes placed just below the brain’s surface. Each electrode is thinner than a human hair. These electrodes record neural firing patterns in the motor cortex, where the brain controls the lips, tongue and larynx.

Surgeons place the implant under the skin and connect it with a thin cable to a wireless transceiver in the chest.
That transceiver sends data through a secure optical link to a second transceiver worn on the body. The external unit powers the system with inductive charging, similar to wireless phone chargers.

The collected signals then move to a compact computer that runs advanced language models. It analyzes the neural activity and converts it into text or into a synthetic voice based on the user’s past recordings.

Inside the Paradromics BCI human trial

The trial begins with two participants. Each person will receive one 7.5-millimeter-wide electrode array placed 1.5 millimeters into the part of the motor cortex that controls the lips, tongue and larynx. During training sessions, the volunteers will imagine speaking sentences while the device learns the neural signatures of each sound.

This is the first BCI trial that formally targets real-time synthetic voice generation. The study will also test whether the system can detect imagined hand movements and translate those signals into cursor control.

If early results meet expectations, the trial could expand to ten people. Some participants may receive two implants to capture a richer set of signals.

Researchers are testing whether Paradromics’ fully implantable brain device can turn neural activity into real-time speech for people with paralysis. (Synchron)

CyberGuy reached out to Paradromics for comment and received the following statement:

“Communication is a fundamental human need. For people with severe motor impairment, the inability to express themselves with family and friends or request basic needs makes living difficult.
The FDA-approved clinical study for the Connexus Brain-Computer Interface is the first step toward a future where commercially available neurotech can restore the ability to naturally speak and seamlessly use a computer. The fully implanted Connexus BCI is designed to record brain signals from individual neurons, capturing the massive amounts of data required for high-performance applications like speech restoration and complex mouse and keyboard hand actions. Built from proven medical-grade materials, Connexus BCI is engineered for daily long-term use, backed by more than three years of stable pre-clinical recordings.”

How Paradromics compares to other BCI companies

Paradromics joins Synchron and Neuralink at the front of the implanted BCI race. Synchron uses a stent-like device placed in a blood vessel to record broad neural patterns. Neuralink uses flexible threads with many recording sites to capture high-bandwidth signals from individual neurons.

Paradromics sits between these two approaches, using a fully implantable system that still captures single-neuron detail. Researchers believe this design may offer long-term stability for everyday communication.

What this means for you

This breakthrough could make a major difference for people who lost their ability to speak after ALS, stroke or spinal cord injury. A system that converts thought into speech could help them talk in real time and regain independence. It may also allow hands-free computer control, which can improve daily living.

If the trial succeeds, the tech could change how assistive communication devices work and speed up patient access to advanced tools.

During the trial, volunteers imagine speaking while advanced AI models learn their neural patterns and convert those signals into text or a synthetic voice. (Paradromics)

Take my quiz: How safe is your online security?

Think your devices and data are truly protected? Take this quick quiz to see where your digital habits stand.
From passwords to Wi-Fi settings, you’ll get a personalized breakdown of what you’re doing right and what needs improvement. Take my quiz here: Cyberguy.com

Kurt’s key takeaways

Paradromics is taking a careful but bold path toward practical BCI communication. The first stage is small but meaningful. It sets the foundation for devices that may restore speech with natural flow and faster response times. As more trials move forward, this field could shift from experimental to everyday use faster than many expect.

Would you trust a fully implanted brain device if it meant restoring communication for someone you care about? Let us know by writing to us at Cyberguy.com

Copyright 2025 CyberGuy.com. All rights reserved.
