AI & Singularity

The rise of AI, AGI, and the future of superintelligence and control.


The Download: computing’s bright young minds, and cleaning up satellite streaks

This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology.

Meet tomorrow’s rising stars of computing

Each year, MIT Technology Review honors 35 outstanding people under the age of 35 who are driving scientific progress and solving tough problems in their fields. Today we want to introduce you to the computing innovators on the list who are coming up with new AI chips and specialized datasets—along with smart ideas about how to assess advanced systems for safety. Check out the full list of honorees—including our innovator of the year—here.
Job titles of the future: Satellite streak astronomer

Earlier this year, the $800 million Vera Rubin Observatory commenced its decade-long quest to create an extremely detailed time-lapse movie of the universe. Rubin is capable of capturing many more stars than any other astronomical observatory ever built; it also sees many more satellites. Up to 40% of images captured by the observatory within its first 10 years of operation will be marred by their sunlight-reflecting streaks. Meredith Rawls, a research scientist at the telescope’s flagship observation project, Vera Rubin’s Legacy Survey of Space and Time, is one of the experts tasked with protecting Rubin’s science mission from the satellite blight. Read the full story.
—Tereza Pultarova

This story is from our new print edition, which is all about the future of security. Subscribe here to catch future copies when they land.

The must-reads

I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.

1 China has accused Nvidia of violating anti-monopoly laws
As US and Chinese officials head into a second day of tariff negotiations. (Bloomberg $)
+ The investigation dug into Nvidia’s 2020 acquisition of computing firm Mellanox. (CNBC)
+ But China’s antitrust regulator hasn’t confirmed whether it will punish the company. (WSJ $)

2 The US is getting closer to making a TikTok deal
But it’s still prepared to go ahead with a ban if an agreement can’t be reached. (Reuters)

3 Grok spread misinformation about a far-right rally in London
It falsely claimed that police misrepresented old footage as being from the protest. (The Guardian)
+ Elon Musk called for a new UK government during a video speech. (Politico)

4 Here’s what people are really using ChatGPT for
Users are more likely to use it for personal, rather than work-related, queries. (WP $)
+ Anthropic says businesses are using AI to automate, not collaborate. (Bloomberg $)
+ Therapists are secretly using ChatGPT. Clients are triggered. (MIT Technology Review)

5 How China’s Hangzhou became a global AI hub
Spawning not just Alibaba, but DeepSeek too. (WSJ $)
+ China and the US are completely dominating the global AI race. (Rest of World)
+ How DeepSeek ripped up the AI playbook. (MIT Technology Review)

6 Driverless car fleets could plunge US cities into traffic chaos
Are we really prepared? (Vox $)

7 The shipping industry is harnessing AI to fight cargo fires
The risk of deadly fires is rising due to shipments of batteries and other flammable goods. (FT $)

8 Sales of used EVs are skyrocketing
Buyers are snapping up previously owned bargains. (NYT $)
+ EV owners won’t be able to drive in carpool lanes anymore. (Wired $)

9 A table-top fusion reactor isn’t as crazy as it sounds
This startup is trying to make compact reactors a reality. (Economist $)
+ Inside a fusion energy facility. (MIT Technology Review)

10 How a magnetic field could help clean up space
If we don’t, we could soon lose access to Earth’s low orbit altogether. (IEEE Spectrum)
+ The world’s next big environmental problem could come from space. (MIT Technology Review)
Quote of the day

“If we’re going on a journey, they’re absolutely taking travel sickness tablets immediately. They’re not even considering coming in the car without them.”
—Phil Bellamy, an electric car owner, describing to the Guardian the extreme nausea his daughters experience while riding in his vehicle.

One more thing

Google, Amazon and the problem with Big Tech’s climate claims

Last year, Amazon trumpeted that it had purchased enough clean electricity to cover the energy demands of all its global operations, seven years ahead of its sustainability target. That news closely followed Google’s acknowledgment that the soaring energy demands of its AI operations helped ratchet up its corporate emissions by 13% last year—and that it had backed away from claims that it was already carbon neutral.

If you were to take the announcements at face value, you’d be forgiven for believing that Google is stumbling while Amazon is speeding ahead in the race to clean up climate pollution. But while both companies are coming up short in their own ways, Google’s approach to driving down greenhouse-gas emissions is now arguably more defensible. To learn why, read our story.

—James Temple

We can still have nice things

A place for comfort, fun and distraction to brighten up your day. (Got any ideas? Drop me a line or skeet ’em at me.)

+ Steven Spielberg was just 26 when he made Jaws? The more you know.
+ This tiny car’s huge racing track journey is completely hypnotic.
+ Easy dinner recipes? Yes please.
+ This archive of thousands of historical children’s books is a real treasure trove—and completely free to read.



Fox News AI Newsletter: Backlash over mystery company’s data center

A car drives past a building of the Digital Realty Data Center in Ashburn, Virginia, March 17, 2025. (REUTERS/Leah Millis)
Welcome to Fox News’ Artificial Intelligence newsletter with the latest AI technology advancements.

IN TODAY’S NEWSLETTER:
- Mystery company’s $1.6B data center proposed for Wisconsin farmland draws residents’ ire
- OpenAI’s nonprofit parent company secures $100B equity stake while retaining control of AI giant
- Tech titan says Trump administration ‘really proactive’ on keeping American AI leadership ahead

‘VERY SKEPTICAL’: People living in a Midwest city known for its natural beauty and outdoor recreation are sounding the alarm on a proposed data center with a price tag of $1.6 billion.

MAJOR MOVE: Artificial intelligence giant OpenAI on Thursday announced its nonprofit parent will retain control of the company while also gaining an equity stake worth more than $100 billion.

TECH BOOM: An important player in the global semiconductor and artificial intelligence industries is praising the Trump administration’s plan to keep America ahead of its adversaries.

BILLIONAIRE BOOM: Oracle’s stock surge has pushed co-founder Larry Ellison’s net worth higher by tens of billions of dollars over the last two days and puts him ahead of Tesla CEO Elon Musk as the richest person in the world.

Oracle founder Larry Ellison speaks during a news conference with President Donald Trump in the Roosevelt Room of the White House on Jan. 21, 2025, in Washington, D.C. (Andrew Harnik/Getty Images)

TECH FOR CHORES: Tired of dragging your bins to the curb and waking up to the roar of garbage trucks? A new robot called HARR-E could change that routine. Built by American manufacturing giant Oshkosh Corp., this autonomous trash collector comes to your door when you call it, just like a rideshare.

HARR-E trash robot (Oshkosh)

‘NOTORIOUS’: Tarboro, North Carolina, residents are urging their town council to reject a proposal for a 50-acre, 300-megawatt Energy Storage Solutions LLC site projected to bring 500 jobs and millions of dollars in tax revenue to the town.
CAREFUL WHAT YOU SAY: Artificial intelligence has slipped quietly into our meetings. Zoom, Google Meet and other platforms now offer AI notetakers that listen, record and share summaries. At first, it feels like a helpful assistant: no more scrambling to jot down every point. But there’s a catch. It records everything, including comments you never planned to share.

TECH CLASH: President Donald Trump’s push to establish “America’s global AI dominance” could run into friction from an unlikely source: the “effective altruism” movement, a small but influential group that has a darker outlook on artificial intelligence.

FUTURE ON AUTOPILOT: Trucking, like many foundational sectors, is undergoing significant transformation. Artificial intelligence is already enhancing efficiency and productivity across various industries, and it is now making its way into logistics.

An Aurora Innovation Inc. driverless truck at the company’s terminal in Palmer, Texas, on Wednesday, Dec. 28, 2023. Driverless trucks with no humans on board will soon cruise Texas highways if three startup firms have their way, despite objections from critics who say financial pressures, not safety, are behind the timetable. (Dylan Hollingsworth/Bloomberg via Getty Images)



Hacker exploits AI chatbot in cybercrime spree

A hacker has pulled off one of the most alarming AI-powered cyberattacks ever documented. According to Anthropic, the company behind Claude, a hacker used its artificial intelligence chatbot to research, hack, and extort at least 17 organizations. This marks the first public case in which a leading AI system automated nearly every stage of a cybercrime campaign, an evolution that experts now call “vibe hacking.”

Sign up for my FREE CyberGuy Report
Get my best tech tips, urgent security alerts, and exclusive deals delivered straight to your inbox. Plus, you’ll get instant access to my Ultimate Scam Survival Guide — free when you join my CYBERGUY.COM/NEWSLETTER

Simulated ransom guidance created by Anthropic’s threat intelligence team for research and demonstration purposes. (Anthropic)

How a hacker used an AI chatbot to strike 17 targets

Anthropic’s investigation revealed how the attacker convinced Claude Code, a coding-focused AI agent, to identify vulnerable companies. Once inside, the hacker:
- Built malware to steal sensitive files.
- Extracted and organized stolen data to find high-value information.
- Calculated ransom demands based on victims’ finances.
- Generated tailored extortion notes and emails.

Targets included a defense contractor, a financial institution, and multiple healthcare providers. The stolen data included Social Security numbers, financial records, and government-regulated defense files. Ransom demands ranged from $75,000 to over $500,000.

Why AI cybercrime is more dangerous than ever

Cyber extortion is not new. But this case shows how AI transforms it. Instead of acting as an assistant, Claude became an active operator: scanning networks, crafting malware, and even analyzing stolen data. AI lowers the barrier to entry. In the past, such operations required years of training. Now, a single hacker with limited skills can launch attacks that once took a full criminal team.
This is the frightening power of agentic AI systems.

A simulated ransom note template that hackers could use to scam victims. (Anthropic)

What vibe hacking reveals about AI-powered threats

Security researchers refer to this approach as vibe hacking. It describes how hackers embed AI into every phase of an operation.
- Reconnaissance: Claude scanned thousands of systems and identified weak points.
- Credential theft: It extracted login details and escalated privileges.
- Malware development: Claude generated new code and disguised it as trusted software.
- Data analysis: It sorted stolen information to identify the most damaging details.
- Extortion: Claude created alarming ransom notes with victim-specific threats.

This systematic use of AI marks a shift in cybercrime tactics. Attackers no longer just ask AI for tips; they use it as a full-fledged partner.

A cybercriminal’s initial sales offering on the dark web, seen in January 2025. (Anthropic)

How Anthropic is responding to AI abuse

Anthropic says it has banned the accounts linked to this campaign and developed new detection methods. Its Threat Intelligence team continues to investigate misuse cases and share findings with industry and government partners. The company admits, however, that determined actors can still bypass safeguards. And experts warn that these patterns are not unique to Claude; similar risks exist across all advanced AI models.

How to protect yourself from AI cyberattacks

Here’s how to defend against hackers now using AI tools to their advantage:

1. Use strong, unique passwords everywhere
Hackers who break into one account often attempt to use the same password across your other logins. This tactic becomes even more dangerous when AI is involved, because a chatbot can quickly test stolen credentials across hundreds of sites.
The best defense is to create long, unique passwords for every account you have. Treat your passwords like digital keys and never reuse the same one in more than one lock.

Next, see if your email has been exposed in past breaches. Our #1 password manager pick (see Cyberguy.com/Passwords) includes a built-in breach scanner that checks whether your email address or passwords have appeared in known leaks. If you discover a match, immediately change any reused passwords and secure those accounts with new, unique credentials. Check out the best expert-reviewed password managers of 2025 at Cyberguy.com/Passwords

2. Protect your identity and use a data removal service
The hacker who abused Claude didn’t just steal files; they organized and analyzed them to find the most damaging details. That illustrates the value of your personal information in the wrong hands. The less data criminals can find about you online, the safer you are. Review your digital footprint, lock down privacy settings, and reduce what’s available on public databases and broker sites.

While no service can guarantee the complete removal of your data from the internet, a data removal service is a smart choice. These services aren’t cheap, but neither is your privacy. They do all the work for you by actively monitoring and systematically erasing your personal information from hundreds of websites. It’s what gives me peace of mind and has proven to be the most effective way to erase your personal data from the internet.
By limiting the information available, you reduce the risk of scammers cross-referencing data from breaches with information they might find on the dark web, making it harder for them to target you. Check out my top picks for data removal services and get a free scan to find out if your personal information is already out on the web by visiting Cyberguy.com/Delete

Get a free scan to find out if your personal information is already out on the web: Cyberguy.com/FreeScan

Illustration of a hacker at work. (Kurt “CyberGuy” Knutsson)

3. Turn on two-factor authentication (2FA)
Even if a hacker obtains your password, 2FA can stop them in their tracks. AI tools now help criminals generate highly realistic phishing attempts designed to trick you into handing over logins. By enabling 2FA, you add an extra layer of protection that they cannot easily bypass. Choose app-based codes or a physical key whenever possible; these are more secure than text messages, which are easier for attackers to intercept.

4. Keep devices and software updated
AI-driven attacks often exploit the most basic weaknesses, such as outdated software. Once a hacker knows which companies or individuals are running old systems, they can use automated scripts to break in within minutes. Regular updates close those gaps before they can be targeted. Setting your devices and apps to update automatically removes one of the easiest entry points that criminals rely on.

5. Be suspicious of urgent messages
One of the most alarming details in the Anthropic report was how the hacker used AI to craft convincing extortion notes. The same tactics are being applied to phishing emails and texts sent to everyday users. If you receive a message demanding immediate action, such as clicking a link, transferring money, or downloading a file, treat it with suspicion. Stop, check the source, and verify before you act.

6. Use strong antivirus software
The hacker in this case built custom malware with the help of AI.
That means malicious software is getting smarter, faster, and harder to detect. Strong antivirus software that constantly scans for suspicious activity provides a critical safety net. It can identify phishing emails and detect ransomware before it spreads, which is vital now that AI tools make these attacks more adaptive and persistent. Get my picks for the best 2025 antivirus protection winners for your Windows, Mac, Android & iOS devices at Cyberguy.com/LockUpYourTech

Over 40,000 Americans were previously exposed in a massive OnTrac security breach, leaking sensitive medical and financial records. (Photo by Jakub Porzycki/NurPhoto via Getty Images)

7. Stay private online with a VPN
AI isn’t only being used to break into companies; it’s also being used to analyze patterns of behavior and track individuals. A VPN encrypts your online activity, making it much harder for criminals to connect your browsing to your identity. By keeping your internet traffic private, you add another layer of protection against hackers trying to gather information they can later exploit. For the best VPN software, see my expert review of the best VPNs for browsing the web privately on your Windows, Mac, Android & iOS devices at Cyberguy.com/VPN

Kurt’s key takeaways

AI isn’t just powering helpful tools; it’s also arming hackers. This case proves that cybercriminals can now automate attacks in ways once thought impossible. The good news is, you can take practical steps today to reduce your risk. By making smart moves, such as enabling two-factor authentication (2FA), updating devices, and using protective tools, you can stay one step ahead.

Do you think AI chatbots should be more tightly regulated to prevent abuse? Let us know by writing to us at Cyberguy.com/Contact
Copyright 2025 CyberGuy.com. All rights reserved.



Amazon backs AI startup that lets you make TV shows

What if you could write your own episode of a hit show without a crew or cameras, only a prompt? That’s exactly what a San Francisco startup called Fable is aiming to do with its new artificial intelligence platform, Showrunner. Now it has Amazon’s backing through the Alexa Fund. While the exact amount of the investment hasn’t been disclosed, Amazon’s involvement signals growing interest in AI-powered entertainment. Fable describes Showrunner as the “Netflix of AI,” a place where anyone can type in a few words and instantly generate an episode.

Fable’s Showrunner harnesses the power of AI to generate new TV episodes without needing a full production crew. (iStock)

A new era of user-generated entertainment

Instead of passively watching shows, Showrunner invites users to co-create them. You can build an episode from scratch or jump into a world someone else started. It’s all done through text: just describe the scene or story, and the AI gets to work. The company officially launched with Exit Valley, a satirical animated series set in a fictional tech hub called Sim Francisco. Think Family Guy, but aimed at Silicon Valley titans like Elon Musk and Sam Altman. It’s edgy, funny, and powered entirely by AI. If you’re curious, head to the Showrunner website and you’ll be directed to their Discord server, where episodes are streamed and new ones are made in real time.

Amazon is backing the project through its Alexa Fund. (Chloe Collyer/Bloomberg via Getty Images)

Backed by big tech, led by a VR veteran

Fable’s CEO, Edward Saatchi, has a history of pushing boundaries.
Before launching Fable, he co-founded Oculus Story Studio, a division of Oculus VR acquired by Meta. His latest mission: turn Hollywood from a one-way broadcast into a two-way conversation.

“Hollywood streaming services are about to become two-way entertainment,” Saatchi told Variety. “Audiences will be able to make new episodes with a few words and become characters with a photo.”

That vision has already started to take shape. Fable previously released nine AI-generated South Park episodes that racked up more than 80 million views. Those episodes were made with the company’s proprietary AI engine, fine-tuned for animated storytelling.

Fable’s Showrunner software will give everyday users the power to create their own animated TV episodes from their computer. (Oliver Berg/picture alliance via Getty Images)

Why animation comes first

Right now, Showrunner is focused entirely on animated content, and that’s no accident. According to Saatchi, animation is far easier for AI to handle than photorealistic video. While tech giants like Meta, OpenAI, and Google are racing to create lifelike AI videos, Fable is avoiding that battleground. Instead, the startup wants to give everyday users the tools to become writers, directors, and even stars of their own shows. All it takes is a bit of imagination and a few lines of text.

What this means for you

Whether you’re a writer, a fan of animation, or just someone who’s curious about AI, this shift opens the door to a whole new kind of entertainment. You no longer need a Hollywood budget to tell a story. If you’ve got a creative idea, you can bring it to life instantly and share it with a community that’s doing the same. Showrunner gives you the power to shape pop culture, not just consume it. You could even remix existing episodes or jump into an AI-generated world with your own twist.

Take my quiz: How safe is your online security?

Think your devices and data are truly protected?
Take this quick quiz to see where your digital habits stand. From passwords to Wi-Fi settings, you’ll get a personalized breakdown of what you’re doing right — and what needs improvement. Take my quiz here: Cyberguy.com/Quiz

Kurt’s key takeaways

Amazon’s support of Fable suggests that generative AI is the next evolution in how we create and experience entertainment. Tools like Showrunner are turning viewers into creators, and what we consider a “TV show” might soon be as personal as a playlist.

If you could make your own animated series with a single prompt, what story would you tell? Let us know by writing to us at Cyberguy.com/Contact

Copyright 2025 CyberGuy.com. All rights reserved.



How do AI models generate videos?

Sure, the clips you see in demo reels are cherry-picked to showcase a company’s models at the top of their game. But with the technology in the hands of more users than ever before—Sora and Veo 3 are available in the ChatGPT and Gemini apps for paying subscribers—even the most casual filmmaker can now knock out something remarkable.  The downside is that creators are competing with AI slop, and social media feeds are filling up with faked news footage. Video generation also uses up a huge amount of energy, many times more than text or image generation. 
With AI-generated videos everywhere, let’s take a moment to talk about the tech that makes them work.

How do you generate a video?

Let’s assume you’re a casual user. There are now a range of high-end tools that allow pro video makers to insert video generation models into their workflows. But most people will use this technology in an app or via a website. You know the drill: “Hey, Gemini, make me a video of a unicorn eating spaghetti. Now make its horn take off like a rocket.” What you get back will be hit or miss, and you’ll typically need to ask the model to take another pass or 10 before you get more or less what you wanted.


So what’s going on under the hood? Why is it hit or miss—and why does it take so much energy? The latest wave of video generation models are what’s known as latent diffusion transformers. Yes, that’s quite a mouthful. Let’s unpack each part in turn, starting with diffusion.

What’s a diffusion model?

Imagine taking an image and adding a random spattering of pixels to it. Take that pixel-spattered image and spatter it again, and then again. Do that enough times and you will have turned the initial image into a random mess of pixels, like static on an old TV set.

A diffusion model is a neural network trained to reverse that process, turning random static into images. During training, it gets shown millions of images in various stages of pixelation. It learns how those images change each time new pixels are thrown at them and, thus, how to undo those changes.

The upshot is that when you ask a diffusion model to generate an image, it will start off with a random mess of pixels and step by step turn that mess into an image that is more or less similar to images in its training set.
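That forward-and-backward story can be sketched in a few lines of code. This is a toy illustration in plain NumPy with a made-up noise schedule, not any production model’s recipe; the reverse loop is left as comments because it needs a trained network (the `predict_noise` function named there is hypothetical).

```python
import numpy as np

rng = np.random.default_rng(0)
T = 1000                           # number of noising steps
image = rng.random((64, 64))       # stand-in for a training image

# Forward process: mix in a little Gaussian noise at every step.
# The 0.99/0.01 split keeps the variance bounded and is purely
# illustrative.
x = image.copy()
for t in range(T):
    x = np.sqrt(0.99) * x + np.sqrt(0.01) * rng.normal(size=x.shape)

# Whatever image we started from, x is now essentially unit-variance
# static: the original signal has been scaled down by 0.99**(T/2).
print(round(float(x.std()), 2))

# Reverse process (sketch only): generation runs this loop backwards,
# starting from fresh static, with a trained network predict_noise()
# estimating what to peel off at each step:
#
# x = rng.normal(size=(64, 64))
# for t in reversed(range(T)):
#     x = (x - 0.1 * predict_noise(x, t)) / np.sqrt(0.99)
```

The training objective is exactly this bookkeeping: show the network an image at a known noising step t and ask it to predict the noise that was mixed in.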


But you don’t want any image—you want the image you specified, typically with a text prompt. And so the diffusion model is paired with a second model—such as a large language model (LLM) trained to match images with text descriptions—that guides each step of the cleanup process, pushing the diffusion model toward images that the large language model considers a good match to the prompt.

An aside: This LLM isn’t pulling the links between text and images out of thin air. Most text-to-image and text-to-video models today are trained on large data sets that contain billions of pairings of text and images or text and video scraped from the internet (a practice many creators are very unhappy about). This means that what you get from such models is a distillation of the world as it’s represented online, distorted by prejudice (and pornography).

It’s easiest to imagine diffusion models working with images. But the technique can be used with many kinds of data, including audio and video. To generate movie clips, a diffusion model must clean up sequences of images—the consecutive frames of a video—instead of just one image.

What’s a latent diffusion model?

All this takes a huge amount of compute (read: energy). That’s why most diffusion models used for video generation use a technique called latent diffusion. Instead of processing raw data—the millions of pixels in each video frame—the model works in what’s known as a latent space, in which the video frames (and text prompt) are compressed into a mathematical code that captures just the essential features of the data and throws out the rest.
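The savings are easy to quantify. The numbers below are illustrative assumptions (an 8x spatial downsampling into a 4-channel latent is typical of image autoencoders), not the specification of any particular video model:

```python
# How many values the diffusion model must denoise for ONE 720p RGB
# frame, comparing raw pixels with a compressed latent encoding.
width, height, channels = 1280, 720, 3
pixel_values = width * height * channels          # raw frame

down = 8              # assumed spatial downsampling factor
latent_channels = 4   # assumed latent channel count
latent_values = (width // down) * (height // down) * latent_channels

print(pixel_values)                    # 2764800 values per frame
print(latent_values)                   # 57600 values per frame
print(pixel_values // latent_values)   # 48x fewer values to process
```

Multiply that by the hundreds of frames in a clip and the many denoising steps per frame, and the compression is the difference between feasible and not.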

A similar thing happens whenever you stream a video over the internet: A video is sent from a server to your screen in a compressed format to make it get to you faster, and when it arrives, your computer or TV will convert it back into a watchable video.

And so the final step is to decompress what the latent diffusion process has come up with. Once the compressed frames of random static have been turned into the compressed frames of a video that the LLM guide considers a good match for the user’s prompt, the compressed video gets converted into something you can watch.

With latent diffusion, the diffusion process works more or less the way it would for an image. The difference is that the pixelated video frames are now mathematical encodings of those frames rather than the frames themselves. This makes latent diffusion far more efficient than a typical diffusion model. (Even so, video generation still uses more energy than image or text generation. There’s just an eye-popping amount of computation involved.)

What’s a latent diffusion transformer?

Still with me? There’s one more piece to the puzzle—and that’s how to make sure the diffusion process produces a sequence of frames that are consistent, maintaining objects and lighting and so on from one frame to the next. OpenAI did this with Sora by combining its diffusion model with another kind of model called a transformer. This has now become standard in generative video.

Transformers are great at processing long sequences of data, like words. That has made them the special sauce inside large language models such as OpenAI’s GPT-5 and Google DeepMind’s Gemini, which can generate long sequences of words that make sense, maintaining consistency across many dozens of sentences.

But videos are not made of words. Instead, videos get cut into chunks that can be treated as if they were. The approach that OpenAI came up with was to dice videos up across both space and time.
“It’s like if you were to have a stack of all the video frames and you cut little cubes from it,” says Tim Brooks, a lead researcher on Sora.
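In code, that dicing is just a reshape. The cube dimensions below are made-up illustrative numbers; real models do this to latent encodings, with their own patch sizes:

```python
import numpy as np

# A toy "video": 16 frames of 64x64 grayscale.
video = np.arange(16 * 64 * 64, dtype=np.float32).reshape(16, 64, 64)

# Cut spacetime cubes 4 frames deep and 8x8 pixels across, then
# flatten them into a sequence of tokens for the transformer.
t, h, w = 4, 8, 8
cubes = (video
         .reshape(16 // t, t, 64 // h, h, 64 // w, w)
         .transpose(0, 2, 4, 1, 3, 5)   # bring the cube axes together
         .reshape(-1, t, h, w))

print(cubes.shape)   # (256, 4, 8, 8): 256 tokens, each a little cube
```

The transformer then attends across those 256 tokens much as a language model attends across words, which is what lets it keep track of content across both space and time.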


A selection of videos generated with Veo 3 and Midjourney. The clips have been enhanced in postproduction with Topaz, an AI video-editing tool. Credit: VaigueMan
Using transformers alongside diffusion models brings several advantages. Because they are designed to process sequences of data, transformers also help the diffusion model maintain consistency across frames as it generates them. This makes it possible to produce videos in which objects don’t pop in and out of existence, for example.  And because the videos are diced up, their size and orientation do not matter. This means that the latest wave of video generation models can be trained on a wide range of example videos, from short vertical clips shot with a phone to wide-screen cinematic films. The greater variety of training data has made video generation far better than it was just two years ago. It also means that video generation models can now be asked to produce videos in a variety of formats. 
What about the audio?

A big advance with Veo 3 is that it generates video with audio, from lip-synched dialogue to sound effects to background noise. That’s a first for video generation models. As Google DeepMind CEO Demis Hassabis put it at this year’s Google I/O: “We’re emerging from the silent era of video generation.”


The challenge was to find a way to line up video and audio data so that the diffusion process would work on both at the same time. Google DeepMind’s breakthrough was a new way to compress audio and video into a single piece of data inside the diffusion model. When Veo 3 generates a video, its diffusion model produces audio and video together in a lockstep process, ensuring that the sound and images are synched.

You said that diffusion models can generate different kinds of data. Is this how LLMs work too?

No—or at least not yet. Diffusion models are most often used to generate images, video, and audio. Large language models—which generate text (including computer code)—are built using transformers. But the lines are blurring. We’ve seen how transformers are now being combined with diffusion models to generate videos. And this summer Google DeepMind revealed that it was building an experimental large language model that used a diffusion model instead of a transformer to generate text.

Here’s where things start to get confusing: Though video generation (which uses diffusion models) consumes a lot of energy, diffusion models themselves are in fact more efficient than transformers. Thus, by using a diffusion model instead of a transformer to generate text, Google DeepMind’s new LLM could be a lot more efficient than existing LLMs. Expect to see more from diffusion models in the near future!

How do AI models generate videos? Read More »


VMware nods to AI but looks to the long term

Broadcom, the owner of VMware, announced at the VMware Explore conference a few weeks ago that its VMware Cloud Foundation platform is now AI native.

It was the latest move by the company to keep up to speed with the rest of the technology industry's wide and rapid adoption of large language models, yet it came as the company battles bad press about licensing policy changes that have dogged it since it acquired virtualisation giant VMware in November 2023. The ending of the platform's free tier, reports of aggressive sales tactics to keep subscribers on board, and several court cases focused on existing agreements, including extant perpetual licences, have led many users to rethink what is often the basis of their IT stack. Nutanix, SUSE, and IBM have been among the beneficiaries from those leaving the VMware stable.

But the nature of VMware deployments means they're often complex, and extricating workloads from heavily-virtualised environments running on the platform can come with high migration costs and not insignificant risks to an organisation's QoS metrics. Better to stay and pay the devil you know than go out on a limb and migrate to an alternative.

By the same token, engineering AI into VMware's offerings is fraught with danger and the potential for identical fallout. Re-architecting the VMware platform to bake AI in at the core would mean end-users' stuttering workloads paying the price for any breaking changes. And the nature of software is that the deeper breaking changes are made, the greater the potential negative ramifications.

Broadcom's initial aim is to make it simpler for its users to deploy AI models and agents inside their existing environments. VMware Private AI Services is to ship with VCF 9 subscriptions next year, and will comprise all the elements required to build and run AI on-premise, or at least outside hyperscale facilities.
It will include a model store (it's expected that many organisations will turn, at least in testing phases, to smaller open-source models), indexing services, vector databases, an agentic AI builder, and a ready-made API gateway to allow optimised machine-to-machine communications between separate AI models that need to work together.

Conference attendees were told AI's presence in the enterprise was only going to grow, so it only made sense for AI to be a feature of every VMware-based infrastructure. As it stands, what Broadcom is offering is a nod in the AI direction, but nothing unique nor new. The company also announced improvements to the VMware Tanzu Platform, including simpler publishing of MCP servers, and a new data lakehouse, Tanzu Data Intelligence.

Presumably low-hanging fruit for VMware's own developers was Intelligent Assist for VCF, a chatbot with access to the VMware knowledgebase. The AI-powered 'bot will be able to lengthen the time between a user raising an issue or question and getting to speak to a human who can help.

The excitement around widespread adoption of containers led many to declare that the end was nigh for 'traditional' virtualisation, much in the same way that the explosion of cloud services was to spell the end for on-premise databases, and thus see off Oracle. The reality was, and remains, that legacy infrastructure compels enterprise users to consolidate on the platforms they have invested in, despite rapacious licence fees and high costs.

VMware may be sprinkling the deals between it and its customers with a little AI fairy-dust, but it knows that its long-term income is guaranteed by the presence of legacy infrastructure at the core of the enterprise.

(Image source: "Virtual Try On" by jurvetson is licensed under CC BY 2.0.)

Want to dive deeper into the tools and frameworks shaping modern development? Check out the AI & Big Data Expo, taking place in Amsterdam, California, and London.
Explore cutting-edge sessions on machine learning, data pipelines, and next-gen AI applications. The event is part of TechEx and co-located with other leading technology events. Click here for more information.

DeveloperTech News is powered by TechForge Media. Explore other upcoming enterprise technology events and webinars here.



Yext Scout Guides Brands Through AI Search Challenges

Customers are discovering brands and learning about products and services in new ways. From traditional search to AI search to AI agents and more, the discovery journey has completely changed, and brands need to adapt to the new paradigm.

Launched earlier this year, Yext Scout is an AI search and competitive intelligence agent designed to boost brand visibility and uncover critical insights on the new AI-driven search platforms, alongside 'traditional' search as we know it. Scout is part of Yext, the leading brand visibility platform. Scout provides performance benchmarks laid out against a brand's local competitors, and creates smart recommendations that you can act on in real time.

Yext will be exploring the massive impact AI has had on search and user behaviours, and how Scout can inform marketing professionals, at a webinar, 'Winning Search in EMEA: How Yext Scout Drives Visibility Across Local and AI Platforms.' It's scheduled for Wednesday, 24 September at 1pm BST / 2pm CEST.

With AI now ever-present in digital interactions, platforms like ChatGPT, Gemini, Perplexity and, latterly, Grok have huge influence over consumers' discovery of and interaction with brands. Increasingly presented top-of-page, AI-generated answers are replacing the traditional search engine results page with conversational responses that remove users' need to sift through potentially unreliable results. Brands need guidance on how to optimise their content so it ranks well in these emerging channels. Measuring effectiveness is also new territory, putting many companies behind their better-informed, if not better-performing, competitors.
The Yext webinar on 24 September will help brand and marketing professionals improve visibility, track and tailor sentiment, and measure performance across social content, reviews, and local listings.

Join Yext for the exclusive session, 'Winning Search in EMEA: How Yext Scout Drives Visibility Across Local and AI Platforms,' on Wednesday 24 September at 1pm BST / 2pm CEST. Register your place here.



Partnering with generative AI in the finance function

In association with Deloitte

Generative AI has the potential to transform the finance function. By taking on some of the more mundane tasks that can occupy a lot of time, generative AI tools can help free up capacity for more high-value strategic work. For chief financial officers, this could mean spending more time and energy proactively advising the business on financial strategy as organizations around the world continue to weather ongoing geopolitical and financial uncertainty.

CFOs can use large language models (LLMs) and generative AI tools to support everyday tasks like generating quarterly reports, communicating with investors, and formulating strategic summaries, says Andrew W. Lo, Charles E. and Susan T. Harris professor and director of the Laboratory for Financial Engineering at the MIT Sloan School of Management. “LLMs can’t replace the CFO by any means, but they can take a lot of the drudgery out of the role by providing first drafts of documents that summarize key issues and outline strategic priorities.”

Generative AI is also showing promise in functions like treasury, with use cases including cash, revenue, and liquidity forecasting and management, as well as automating contracts and investment analysis. However, challenges remain for generative AI in forecasting, owing to the mathematical limitations of LLMs. Regardless, Deloitte’s analysis of its 2024 State of Generative AI in the Enterprise survey found that one-fifth (19%) of finance organizations have already adopted generative AI in the finance function. Despite return on generative AI investments in finance functions running 8 points below expectations so far for surveyed organizations (see Figure 1), some finance departments appear to be moving ahead with investments. Deloitte’s fourth-quarter 2024 North American CFO Signals survey found that 46% of CFOs who responded expect deployment of or spend on generative AI in finance to increase in the next 12 months (see Figure 2).
Respondents cite the technology’s potential to help control costs through self-service and automation and free up workers for higher-level, higher-productivity tasks as some of the top benefits of the technology.
“Companies have used AI on the customer-facing side of the house for a long time, but in finance, employees are still creating documents and presentations and emailing them around,” says Robyn Peters, principal in finance transformation at Deloitte Consulting LLP. “Largely, the human-centric experience that customers expect from brands in retail, transportation, and hospitality hasn’t been pulled through to the finance organization. And there’s no reason we cannot do that—and, in fact, AI makes it a lot easier to do.”

If CFOs think they can just sit by for the next five years and watch how AI evolves, they may lose out to more nimble competitors that are actively experimenting in the space. Future finance professionals are growing up using generative AI tools too. CFOs should consider reimagining what it looks like to be a successful finance professional, in collaboration with AI.
Download the report.

This content was produced by Insights, the custom content arm of MIT Technology Review. It was not written by MIT Technology Review’s editorial staff. It was researched, designed, and written by human writers, editors, analysts, and illustrators. AI tools that may have been used were limited to secondary production processes that passed thorough human review.



AI meeting notes are recording your private conversations

Artificial intelligence has slipped quietly into our meetings. Zoom, Google Meet and other platforms now offer AI notetakers that listen, record and share summaries. At first, it feels like a helpful assistant. No more scrambling to jot down every point. But there's a catch. It records everything, including comments you never planned to share.

Sign up for my FREE CyberGuy Report: Get my best tech tips, urgent security alerts and exclusive deals delivered straight to your inbox. Plus, you'll get instant access to my Ultimate Scam Survival Guide – free when you join my CYBERGUY.COM newsletter.

When private conversations end up in recaps

Many people are discovering that AI notetakers capture more than project updates and strategy points. Jokes, personal stories and even casual side comments often slip into the official meeting summaries. What might feel harmless in the moment, like teasing someone, chatting about lunch plans or venting about a frustrating errand, can suddenly reappear in a recap email sent to the whole group. In some cases, even affectionate nicknames or pet mishaps have shown up right alongside serious action items.

(Experts warn that AI note-taking tools integrated into Zoom and Google Meet could capture more than the meeting agenda. Korea Pool/Getty Images)

Examples of what could go wrong:
- Jokes or sarcasm taken out of context
- Personal errands or gossip appearing in a recap
- Casual catch-ups mixed into meeting notes
- Embarrassing slip-ups becoming part of official records

These surprises can be funny in hindsight, but they highlight a bigger issue. AI notetakers don't separate casual conversation from work-related discussion. And once your words are written down, they can be saved, forwarded or even archived in ways you didn't intend. That means an offhand remark could live far longer than the meeting itself.
(A Google Gemini generative artificial intelligence webpage. Andrey Rudakov/Bloomberg via Getty Images)

Why AI notetakers capture too much

These tools work by recording conversations in real time and then generating automatic summaries. Zoom's AI Companion flags its presence with a diamond icon. Google Meet's version uses a pencil icon and an audio cue. Only meeting hosts can switch them on or off. That sounds transparent, but most people stop noticing the icons after a few minutes. Once the AI is running, it doesn't separate "work talk" from "side chatter." The result? Your casual remarks can end up in a summary sent to colleagues or even clients.

And mistakes happen. An AI notetaker might mishear a joke, twist sarcasm into something serious or drop a casual remark into notes where it looks out of place. Stripped of tone and context, those words can come across very differently once they're written down.

(The Google Gemini AI interface seen on an iPhone browser. Jaap Arriens/NurPhoto via Getty Images)

Steps to protect your privacy from AI notetakers

Even if you use these tools, you can take control of what they capture. A few simple habits will help you reduce the risks while still getting the benefits.

1) Stay alert to indicators: Always check for the flashing icon or audio cue that signals an AI notetaker is active.
2) Control the settings: If you're the host, decide when AI should run. Limit its use to important meetings where notes are truly necessary.
3) Choose recipients carefully: Many platforms let you control who receives the notes. Make sure only the right people get access.
4) Use private chats: Need to share a side comment? Send it as a direct message rather than saying it out loud.
5) Save personal talk for later: Keep casual conversations off recorded calls. If you need to catch up, wait until the AI is off.
6) Ask before enabling AI: If you're not the host, confirm that everyone is comfortable with AI note-taking.
Setting expectations up front prevents awkward situations later.
7) Review and edit recaps: Check meeting notes before forwarding them. Edit or trim out personal chatter so only useful action items remain.
8) Check where notes are stored: Find out whether transcripts are saved in the cloud or on your device. Adjust retention settings so private conversations don't linger longer than necessary.
9) Follow company guidelines: If your workplace doesn't yet have a policy on AI notetakers, suggest one. Clear rules protect both employees and clients.
10) Keep software updated: AI features improve quickly. Updating your platform reduces errors, misheard comments and accidental leaks.

What this means for you

AI notetakers offer convenience, but they also reshape how we communicate at work. Once, small talk in meetings faded into the background. Now, even lighthearted comments can be captured, summarized and circulated. That shift means you need to think twice before speaking casually in a recorded meeting.

Take my quiz: How safe is your online security? Think your devices and data are truly protected? Take this quick quiz to see where your digital habits stand. From passwords to Wi-Fi settings, you'll get a personalized breakdown of what you're doing right – and what needs improvement. Take my quiz here: Cyberguy.com.

Kurt's key takeaways

The rise of AI in meetings shows both its promise and its pitfalls. You gain productivity, but risk oversharing. By understanding how these tools work and taking a few precautions, you can get the benefits without the embarrassment. Would you trust an AI notetaker to record your next meeting, knowing it might repeat your private conversations word for word? Let us know by writing to us at Cyberguy.com.

Copyright 2025 CyberGuy.com. All rights reserved.



New AI apps help rental drivers avoid fake damage fees

Rental car drivers are now turning to artificial intelligence to protect themselves from surprise damage fees. Major companies, such as Hertz and Sixt, have begun using automated inspection tools to detect scratches and dents. While these scanners promise efficiency, they have sparked backlash from renters who say they were unfairly billed for minor blemishes.

To level the playing field, new consumer-focused apps are stepping in. Proofr, which launched recently, gives renters the ability to generate secure, time-stamped before-and-after photos of their vehicles. The app uses AI to detect even subtle changes, then encrypts and stores the images so they cannot be altered.

Sign up for my FREE CyberGuy Report: Get my best tech tips, urgent security alerts, and exclusive deals delivered straight to your inbox. Plus, you'll get instant access to my Ultimate Scam Survival Guide – free when you join my CYBERGUY.COM/NEWSLETTER.

(AI-powered damage detection apps like Proofr could change the way rental car companies report vehicle damage. Proofr)

How the AI-powered damage detection app works

Created by Eric Kuttner, a 21-year-old college student and Proofr's founder and CEO, the app helps drivers create tamper-proof evidence when renting a car. Proofr secures every scan with geotags and timestamps, while its AI automatically flags potential damage or changes. It then organizes everything into smart, exportable reports, giving renters strong leverage against unfair claims. Instead of juggling dozens of photos in your camera roll, Proofr streamlines the process. With just eight quick scans, you get a detailed before-and-after report in under a minute. You can also generate polished PDF reports instantly, which helps with rental agencies, landlords, or insurance claims. Although cars are the main focus, people also use Proofr for Airbnbs, eBay listings, moving into apartments, and even documenting valuables.
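The tamper-evident record such apps describe, time-stamped and geotagged photos whose later alteration is detectable, can be sketched with standard cryptographic hashing. This is a hypothetical illustration of the general technique, not Proofr's actual implementation.

```python
import hashlib
import json
from datetime import datetime, timezone

def seal_photo(photo_bytes, lat, lon):
    """Build a tamper-evident record for one photo: any later change to
    the image or its metadata changes the hash, making edits detectable."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "geotag": {"lat": lat, "lon": lon},
        "photo_sha256": hashlib.sha256(photo_bytes).hexdigest(),
    }
    # Seal the whole record, so the metadata can't be quietly rewritten.
    record["seal"] = hashlib.sha256(
        json.dumps(record, sort_keys=True).encode()
    ).hexdigest()
    return record

def verify(record, photo_bytes):
    """Return True only if both the photo and the metadata are unchanged."""
    if hashlib.sha256(photo_bytes).hexdigest() != record["photo_sha256"]:
        return False
    body = {k: v for k, v in record.items() if k != "seal"}
    expected = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
    return record["seal"] == expected

before = seal_photo(b"pixels-at-pickup", 26.07, -80.15)
assert verify(before, b"pixels-at-pickup")        # untouched: passes
assert not verify(before, b"pixels-edited-later")  # edited photo: fails
```

A "before" record sealed at pickup and an "after" record sealed at return give a renter evidence that neither side can silently rewrite, which is the leverage these apps are selling.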
About 85% of users rely on it for car rentals, while 15% use it to protect themselves in vacation homes. By combining secure evidence with AI-powered detection, Proofr positions itself as a must-have travel hack. More than a convenience, it can save travelers real money by preventing hidden fees and leveling the playing field against large agencies. The app is free to download, while full features require a Pro subscription: $2.89 weekly, $9.90 monthly, or $89.90 annually. Pricing is standardized in the US, and Apple automatically adjusts it for local currencies, taxes, and exchange rates in other countries.

Competition in the AI damage space

Proofr is not the only player. Ravin AI originally worked with Avis and Hertz but shifted its focus toward insurers and dealerships. Still, the company now offers a free demo on its website, allowing drivers to scan their vehicles and compare damage before and after rentals. Ravin's system has been trained on two billion images over ten years. However, like Proofr, it is not perfect. Testers have noted missed paint chips and false positives from reflections. Both companies admit that lighting, angles, and photo quality remain challenges.

(Some companies are implementing physical scanners to detect damage to rental vehicles. ProovStation)

Why rental companies are under fire

The frustration comes as rental agencies roll out AI inspection systems from firms such as UVeye and ProovStation. Sixt, for example, has already installed ProovStation's AI-powered scanners at several U.S. airport locations, including Fort Lauderdale, Atlanta, Charlotte, Miami, and Maui, with more on the way in Orlando, Washington, and Nashville. These scanners automatically photograph vehicles at the start and end of each rental.
The system then compares images to flag potential damage, which is later reviewed by staff before any claim is issued. Critics argue these automated tools can turn every small scratch into a profit source. Some even point to ProovStation's own marketing, which describes routine inspections as "gold mines of untapped opportunities." Industry experts stress that companies should only pursue claims for significant damage, not charge hundreds for tiny scuffs.

(Rental car company Sixt has already installed ProovStation scanners at several U.S. airports. ProovStation)

What this means for you

If you rent cars regularly, AI is already shaping your experience. Rental companies are using automated inspections to justify new charges, sometimes for barely visible marks. Apps like Proofr and Ravin give you the same technology, but on your side. By scanning your car before and after your rental, you create a digital record that can help you challenge unfair claims.

Kurt's key takeaways

The rental car industry is in the middle of a technology shift. What was once a quick glance by an employee is now a machine-driven process that can generate steep charges. Consumer apps bring transparency, but they also highlight the growing need for fairness in damage claims. Would you trust an AI app to protect you from rental car fees, or do you think rental companies should change their policies first? Let us know by writing to us at Cyberguy.com/Contact.

Copyright 2025 CyberGuy.com. All rights reserved.
