

Google’s electricity demand is skyrocketing

We got two big pieces of energy news from Google this week. The company announced that it’s signed an agreement to purchase electricity from a fusion company’s forthcoming first power plant. Google also released its latest environmental report, which shows that its energy use from data centers has doubled since 2020. Taken together, these two bits of news offer a fascinating look at just how desperately big tech companies are hunting for clean electricity to power their data centers as energy demand and emissions balloon in the age of AI. Of course, we don’t know exactly how much of this pollution is attributable to AI, because Google doesn’t break that out. (Also a problem!) So, what’s next, and what does this all mean?

Let’s start with fusion: Google’s deal with Commonwealth Fusion Systems is intended to provide the tech giant with 200 megawatts of power. This will come from Commonwealth’s first commercial plant, a facility planned for Virginia that the company refers to as the Arc power plant. The agreement represents half that plant’s capacity. What’s important to note here is that this power plant doesn’t exist yet. In fact, Commonwealth still needs to get its Sparc demonstration reactor, located outside Boston, up and running. That site, which I visited in the fall, should be completed in 2026.
(An aside: This isn’t the first deal between Big Tech and a fusion company. Microsoft signed an agreement with Helion a couple of years ago to buy 50 megawatts of power from a planned power plant, scheduled to come online in 2028. Experts expressed skepticism in the wake of that deal, as my colleague James Temple reported.)

Nonetheless, Google’s announcement is a big moment for fusion, in part because of the size of the commitment and also because Commonwealth, a spinout company from MIT’s Plasma Science and Fusion Center, is seen by many in the industry as a likely candidate to be the first to get a commercial plant off the ground. (MIT Technology Review is owned by MIT but is editorially independent.)
Google leadership was very up-front about the length of the timeline. “We would certainly put this in the long-term category,” said Michael Terrell, Google’s head of advanced energy, in a press call about the deal.

The news of Google’s foray into fusion comes just days after the tech giant’s release of its latest environmental report. While the company highlighted some wins, some of the numbers in this report are eye-catching, and not in a positive way. Google’s emissions have increased by over 50% since 2019, rising 6% in the last year alone. That’s decidedly the wrong direction for a company that’s set a goal to reach net-zero greenhouse-gas emissions by the end of the decade.

It’s true that the company has committed billions to clean energy projects, including big investments in next-generation technologies like advanced nuclear and enhanced geothermal systems. Those deals have helped dampen emissions growth, but keeping up with the energy demand the company is seeing is an arguably impossible task. Google’s electricity consumption from data centers was up 27% from the year before. It has doubled since 2020, reaching over 30 terawatt-hours. That’s nearly the annual electricity consumption of the entire country of Ireland.

From the outside, it’s tempting to point the finger at AI, since that technology has crashed into the mainstream and percolated into every corner of Google’s products and business. And yet the report downplays the role of AI. Here’s one bit that struck me: “However, it’s important to note that our growing electricity needs aren’t solely driven by AI. The accelerating growth of Google Cloud, continued investments in Search, the expanding reach of YouTube, and more, have also contributed to this overall growth.”

There is enough wiggle room in that statement to drive a large electric truck through. When I asked about the relative contributions, company representative Mara Harris said via email that the company doesn’t break out what portion comes from AI. When I followed up to ask whether the company didn’t have this information or just wouldn’t share it, she said she’d check but didn’t get back to me.
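A quick back-of-the-envelope check, using only the rounded figures quoted above (a sketch, not the report’s own math):

```python
import math

# Figures as quoted above (rounded; the report gives more precise data).
dc_twh_2024 = 30.0        # data center consumption: "over 30 TWh"
growth_last_year = 0.27   # up 27% from the year before

# "Doubled since 2020" over roughly four years implies an average
# compound growth rate of 2^(1/4) - 1, or about 19% per year.
avg_rate = 2.0 ** (1 / 4) - 1
print(f"average annual growth, 2020-2024: {avg_rate:.1%}")  # ~18.9%

# Last year's 27% is well above that average, so growth is accelerating.
# At a steady 27% per year, consumption would double again in about:
doubling_years = math.log(2) / math.log(1 + growth_last_year)
print(f"doubling time at 27%/yr: {doubling_years:.1f} years")  # ~2.9
```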

I’ll make the point here that we’ve made before, including in our recent package on AI and energy: Big companies should be disclosing more about the energy demands of AI. We shouldn’t be guessing at this technology’s effects. Google has put a ton of effort and resources into setting and chasing ambitious climate goals. But as its energy needs and those of the rest of the industry continue to explode, it’s obvious that this problem is getting tougher, and it’s also clear that more transparency is a crucial part of the way forward.

This article is from The Spark, MIT Technology Review’s weekly climate newsletter. To receive it in your inbox every Wednesday, sign up here.



Don’t let hype about AI agents get ahead of reality

Google’s recent unveiling of what it calls a “new class of agentic experiences” feels like a turning point. At its I/O 2025 event in May, for example, the company showed off a digital assistant that didn’t just answer questions; it helped work on a bicycle repair by finding a matching user manual, locating a YouTube tutorial, and even calling a local store to ask about a part, all with minimal human nudging. Such capabilities could soon extend far outside the Google ecosystem. The company has introduced an open standard called Agent-to-Agent, or A2A, which aims to let agents from different companies talk to each other and work together.

The vision is exciting: intelligent software agents that act like digital coworkers, booking your flights, rescheduling meetings, filing expenses, and talking to each other behind the scenes to get things done. But if we’re not careful, we’re going to derail the whole idea before it has a chance to deliver real benefits. As with many tech trends, there’s a risk of hype racing ahead of reality. And when expectations get out of hand, a backlash isn’t far behind.

Let’s start with the term “agent” itself. Right now, it’s being slapped on everything from simple scripts to sophisticated AI workflows. There’s no shared definition, which leaves plenty of room for companies to market basic automation as something much more advanced. That kind of “agentwashing” doesn’t just confuse customers; it invites disappointment. We don’t necessarily need a rigid standard, but we do need clearer expectations about what these systems are supposed to do, how autonomously they operate, and how reliably they perform.

And reliability is the next big challenge. Most of today’s agents are powered by large language models (LLMs), which generate probabilistic responses. These systems are powerful, but they’re also unpredictable. They can make things up, go off track, or fail in subtle ways—especially when they’re asked to complete multistep tasks, pulling in external tools and chaining LLM responses together. A recent example: users of Cursor, a popular AI programming assistant, were told by an automated support agent that they couldn’t use the software on more than one device. There were widespread complaints and reports of users canceling their subscriptions. But it turned out the policy didn’t exist. The AI had invented it.
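One structural defense against this kind of invented answer is to keep the model out of the role of source-of-record: let it route the question, but serve the answer from data the company actually controls. Here is a minimal, hypothetical sketch of that pattern (the names and the classify interface are illustrative, not any vendor’s real API):

```python
# Hypothetical sketch: the LLM only routes the question; the answer the
# user sees comes verbatim from a policy store the company controls.

POLICY_STORE = {
    "device_limit": "You may use the software on multiple devices.",
    "refunds": "Refunds are available within 30 days of purchase.",
}

def answer_policy_question(llm, question: str) -> str:
    # Assumed interface: llm.classify picks one of the given choices
    # (or returns None). It never generates free-form policy text.
    key = llm.classify(question, choices=list(POLICY_STORE))
    if key is None:
        return "I'm not sure. Let me connect you with a human."
    # Because the reply is looked up, not generated, the agent cannot
    # invent a policy that doesn't exist.
    return POLICY_STORE[key]
```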
In enterprise settings, this kind of mistake could create immense damage. We need to stop treating LLMs as standalone products and start building complete systems around them—systems that account for uncertainty, monitor outputs, manage costs, and layer in guardrails for safety and accuracy. These measures can help ensure that the output adheres to the requirements expressed by the user, obeys the company’s policies regarding access to information, respects privacy issues, and so on. Some companies, including AI21 (which I cofounded and which has received funding from Google), are already moving in that direction, wrapping language models in more deliberate, structured architectures. Our latest launch, Maestro, is designed for enterprise reliability, combining LLMs with company data, public information, and other tools to ensure dependable outputs.

Still, even the smartest agent won’t be useful in a vacuum. For the agent model to work, different agents need to cooperate (booking your travel, checking the weather, submitting your expense report) without constant human supervision. That’s where Google’s A2A protocol comes in. It’s meant to be a universal language that lets agents share what they can do and divide up tasks. In principle, it’s a great idea. In practice, A2A still falls short. It defines how agents talk to each other, but not what they actually mean. If one agent says it can provide “wind conditions,” another has to guess whether that’s useful for evaluating weather on a flight route. Without a shared vocabulary or context, coordination becomes brittle. We’ve seen this problem before in distributed computing. Solving it at scale is far from trivial.
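To make the semantic gap concrete: in an A2A-style exchange, an agent advertises a capability by name, but nothing in the protocol pins down what that name means. A hypothetical sketch (illustrative structures, not the actual A2A schema):

```python
# Hypothetical sketch of the semantic gap: both sides speak the same
# protocol, but the capability name underdetermines its meaning.

weather_agent_card = {
    "name": "weather-agent",
    "capabilities": ["wind_conditions"],  # declared, never defined
}

def plan_flight(route, agent_card) -> None:
    if "wind_conditions" in agent_card["capabilities"]:
        # The protocol tells us we *can* ask. It doesn't tell us:
        #   - surface wind at an airport, or winds aloft along `route`?
        #   - what units, what forecast horizon, how fresh is the data?
        # Without shared semantics, the consuming agent is left guessing.
        pass
```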
There’s also the assumption that agents are naturally cooperative. That may hold inside Google or another single company’s ecosystem, but in the real world, agents will represent different vendors, customers, or even competitors. For example, if my travel-planning agent requests price quotes from your airline-booking agent, and your agent is incentivized to favor certain airlines, my agent might not be able to get me the best or least expensive itinerary. Without some way to align incentives through contracts, payments, or game-theoretic mechanisms, expecting seamless collaboration may be wishful thinking.

None of these issues are insurmountable. Shared semantics can be developed. Protocols can evolve. Agents can be taught to negotiate and collaborate in more sophisticated ways. But these problems won’t solve themselves, and if we ignore them, the term “agent” will go the way of other overhyped tech buzzwords. Already, some CIOs are rolling their eyes when they hear it. That’s a warning sign. We don’t want the excitement to paper over the pitfalls, only to let developers and users discover them the hard way and develop a negative perspective on the whole endeavor. That would be a shame. The potential here is real. But we need to match the ambition with thoughtful design, clear definitions, and realistic expectations. If we can do that, agents won’t just be another passing trend; they could become the backbone of how we get things done in the digital world.

Yoav Shoham is a professor emeritus at Stanford University and cofounder of AI21 Labs. His 1993 paper on agent-oriented programming received the AI Journal Classic Paper Award. He is coauthor of Multiagent Systems: Algorithmic, Game-Theoretic, and Logical Foundations, a standard textbook in the field.



NASA Sets Briefings for SpaceX Crew-11 Mission to Space Station

NASA and its partners will discuss the upcoming crew rotation to the International Space Station during a pair of news conferences on Thursday, July 10, from the agency’s Johnson Space Center in Houston.
First, mission leadership will discuss final launch and mission preparations in an overview news conference at 12 p.m. EDT, streamed on the agency’s YouTube channel.
Next, the crew will participate in a news conference at 2 p.m. on NASA’s YouTube channel, followed by individual astronaut interviews at 3 p.m. This is the final media opportunity with Crew-11 before they travel to NASA’s Kennedy Space Center in Florida for launch.
The Crew-11 mission, targeted to launch in late July/early August, will carry NASA astronauts Zena Cardman and Mike Fincke, JAXA (Japan Aerospace Exploration Agency) astronaut Kimiya Yui, and Roscosmos cosmonaut Oleg Platonov to the orbiting laboratory. The crew will launch aboard a SpaceX Dragon spacecraft on the company’s Falcon 9 rocket from Launch Complex 39A.
United States-based media seeking to attend in person must contact the NASA Johnson newsroom no later than 5 p.m. on Monday, July 7, at 281-483-5111 or jsccommu@mail.nasa.gov. A copy of NASA’s media accreditation policy is available online.
Any media interested in participating in the news conferences by phone must contact the Johnson newsroom by 9:45 a.m. the day of the event. Media seeking virtual interviews with the crew must submit requests to the Johnson newsroom by 5 p.m. on Monday, July 7.
Briefing participants are as follows (all times Eastern and subject to change based on real-time operations):
12 p.m.: Mission Overview News Conference

Steve Stich, manager, Commercial Crew Program, NASA Kennedy
Bill Spetch, operations integration manager, International Space Station Program, NASA Johnson
NASA’s Space Operations Mission Directorate representative
Sarah Walker, director, Dragon Mission Management, SpaceX
Mayumi Matsuura, vice president and director general, Human Spaceflight Technology Directorate, JAXA

2 p.m.: Crew News Conference

Zena Cardman, Crew-11 commander, NASA
Mike Fincke, Crew-11 pilot, NASA
Kimiya Yui, Crew-11 mission specialist, JAXA
Oleg Platonov, Crew-11 mission specialist, Roscosmos

3 p.m.: Crew Individual Interview Opportunities

Crew-11 members available for a limited number of interviews

Selected as a NASA astronaut in 2017, Cardman will conduct her first spaceflight. The Williamsburg, Virginia, native holds a bachelor’s degree in biology and a master’s in marine sciences from the University of North Carolina at Chapel Hill. At the time of selection, she was pursuing a doctorate in geosciences. Cardman’s geobiology and geochemical cycling research focused on subsurface environments, from caves to deep-sea sediments. Since completing initial training, Cardman has supported real-time station operations and lunar surface exploration planning. Follow @zenanaut on X and @zenanaut on Instagram.
This will be Fincke’s fourth trip to the space station; he has logged 382 days in space and nine spacewalks during Expedition 9 in 2004, Expedition 18 in 2008, and STS-134 in 2011, the final flight of space shuttle Endeavour. Throughout the past decade, Fincke has applied his expertise to NASA’s Commercial Crew Program, advancing the development and testing of the SpaceX Dragon spacecraft and Boeing Starliner spacecraft toward operational certification. The Emsworth, Pennsylvania, native is a graduate of the United States Air Force Test Pilot School and holds bachelor’s degrees from the Massachusetts Institute of Technology, Cambridge, in both aeronautics and astronautics, as well as Earth, atmospheric and planetary sciences. He also has a master’s degree in aeronautics and astronautics from Stanford University in California. Fincke is a retired U.S. Air Force colonel with more than 2,000 flight hours in over 30 different aircraft. Follow @AstroIronMike on X and Instagram.
Yui, who has logged 142 days in space, will be making his second trip to the space station. After his selection as a JAXA astronaut in 2009, Yui flew as a flight engineer for Expedition 44/45 and became the first Japanese astronaut to capture JAXA’s H-II Transfer Vehicle using the station’s robotic arm. In addition to constructing a new experimental environment aboard Kibo, he conducted a total of 21 experiments for JAXA. In November 2016, Yui was assigned as chief of the JAXA Astronaut Group. He graduated from the School of Science and Engineering at the National Defense Academy of Japan in 1992. He later joined the Air Self-Defense Force at the Japan Defense Agency (now the Ministry of Defense). In 2008, Yui joined the Air Staff Office at the Ministry of Defense as a lieutenant colonel. Follow @astro_kimiya on X.
The Crew-11 mission will also be Platonov’s first spaceflight. Before his selection as a cosmonaut in 2018, Platonov earned a degree in engineering, with a focus on aircraft operations and air traffic management, from the Krasnodar Air Force Academy. He also earned a bachelor’s degree in state and municipal management in 2016 from the Far Eastern Federal University in Vladivostok, Russia. Assigned as a test cosmonaut in 2021, he has experience in piloting aircraft, zero-gravity training, scuba diving, and wilderness survival.
For more information about the mission, visit:
https://www.nasa.gov/commercialcrew
-end-
Claire O’Shea / Joshua Finch
Headquarters, Washington
202-358-1100
claire.a.o’shea@nasa.gov / joshua.a.finch@nasa.gov
Sandra Jones / Joseph Zakrzewski
Johnson Space Center, Houston
281-483-5111
sandra.p.jones@nasa.gov / joseph.a.zakrzewski@nasa.gov



New evidence that some supernovae may be a “double detonation”

In other cases, another member of the system will go on to form a second white dwarf. If gravitational instabilities bring these two objects together, then their collision will create a single object with a much higher mass. This will also restart fusion, leading to an explosion.
We have found evidence for both of these events happening. However, there are some questions about whether they happen often enough to explain the frequency of type Ia supernovae that we see. Both mechanisms require stars of sufficient mass orbiting within a reasonably close distance for either mass transfer or a collision to occur. So, astronomers have been considering other ways of blowing up a white dwarf.
The most promising option appears to be a double detonation. This typically requires the transfer of helium-rich material from a companion, but it can also occur if the white dwarf ends up with some unfused helium left on its surface. Regardless of how it ends up there, the helium can start fusing if enough of it pools up, or simply if its movement causes a sufficiently high local density in one region. However it happens, once fusion starts, the entire surface of the white dwarf will quickly follow, creating detonation number one.
That in turn will create compression in the carbon-oxygen portion of the white dwarf, pushing it past the density needed for that to start fusing. Once again, the initiation of fusion heats and compresses nearby material, creating a chain reaction that triggers widespread fusion in the white dwarf, blowing it to pieces as part of detonation two.

A shell game
The key thing about this is that it allows white dwarfs to explode before they reach a mass sufficient to trigger the fusion of their carbon and oxygen. Instead, the explosion can potentially happen any time enough helium gathers on a white dwarf’s surface. A double-detonation event would also be very difficult to detect, as the two explosions would happen in rapid succession, and the environment in the immediate surroundings of a type Ia supernova is going to be complex and difficult to resolve.



NASA Awards Simulation and Advanced Software Services II Contract

NASA has awarded a contract to MacLean Engineering & Applied Technologies, LLC of Houston to provide simulation and advanced software services to the agency.
The Simulation and Advanced Software Services II (SASS II) contract includes services from Oct. 1, 2025, through Sept. 30, 2030, with a maximum potential value not to exceed $150 million. The contract is a single-award, indefinite-delivery/indefinite-quantity contract with the capability to issue cost-plus-fixed-fee task orders and firm-fixed-price task orders.
Under the five-year SASS II contract, the awardee is tasked to provide simulation and software services for space-based vehicle models and robotic manipulator systems; human biomechanical representations for analysis and development of countermeasures devices; guidance, navigation, and control of space-based vehicles for all flight phases; and space-based vehicle on-board computer systems simulations of flight software systems. Responsibilities also include astronomical object surface interaction simulation of space-based vehicles, graphics support for simulation visualization and engineering analysis, and ground-based and on-board systems to support human-in-the-loop training.
Major subcontractors include Tietronix Software Inc. in Houston and VEDO Systems, LLC, in League City, Texas.
For information about NASA and agency programs, visit:
https://www.nasa.gov/
-end-
Tiernan Doyle
Headquarters, Washington
202-358-1600
tiernan.doyle@nasa.gov
Chelsey Ballarte
Johnson Space Center, Houston
281-483-5111
chelsey.n.ballarte@nasa.gov



Rice could be key to brewing better non-alcoholic beer

Rice enhances flavor profiles for non-alcoholic beer, reduces fermentation time, and may contribute to flavor stability. Credit: Paden Johnson/CC BY-NC-SA

He and his team—including Christian Schubert, a visiting postdoc from the Research Institute for Raw Materials and Beverage Analysis in Berlin—brewed their own non-alcoholic beers, ranging from those made with 100 percent barley malt to ones made with 100 percent rice. They conducted a volatile chemical analysis to identify specific compounds present in the beers and assembled two sensory panels of tasters (one in the US, one in Europe) to assess aromas, flavors, and mouthfeel.
The panelists determined that the rice-brewed beers had less worty flavor, and the chemical analysis revealed why: lower levels of aldehyde compounds. Instead, other sensory attributes emerged, most notably vanilla or buttery notes. “If a brewer wanted a more neutral character, they could use nonaromatic rice,” the authors wrote. Along with brewing beers with 50 percent barley/50 percent rice, this would produce non-alcoholic beers likely to appeal more broadly to consumers.
The panelists also noted that higher rice content resulted in beers with a fatty/creamy mouthfeel—likely because higher rice content was correlated with increased levels of larger alcohol molecules, which are known to contribute to a pleasant mouthfeel. But it didn’t raise the alcohol content above the legal threshold for a non-alcoholic beer.
There were cultural preferences, however. The US panelists didn’t mind worty flavors as much as the European tasters did, which might explain why the former chose beers brewed with 70 percent barley/30 percent rice as the optimal mix. Their European counterparts preferred the opposite ratio (30 percent barley/70 percent rice). The explanation “may lie in the sensory expectations shaped by each region’s brewing traditions,” the authors wrote. Fermentation also occurred more quickly as the rice content increased because of higher levels of glucose and fructose.
The second study focused on testing 74 different rice cultivars to determine their extract yields, an important variable when it comes to an efficient brewing process, since higher yields mean brewers can use less grain, thereby cutting costs. This revealed that cultivars with lower amylose content cracked more easily to release sugars during the mashing process, producing the highest yields. And certain varieties also had lower gelatinization temperatures for greater ease of processing.
International Journal of Food Science, 2025. DOI: 10.1080/10942912.2025.2520907
Journal of the American Society of Brewing Chemists, 2025. DOI: 10.1080/03610470.2025.2499768



Costco’s EV charging move signals major shift

Costco has always been a go-to destination for bulk groceries, electronics and even gas. Now, the retailer is making headlines by bringing fast, reliable EV charging to its parking lots. With electric vehicles becoming more popular and over a million new EVs registered in 2024 alone, the need for convenient charging options has never been greater. Costco EV charging stations are stepping up to meet this demand, offering a seamless way for drivers to power up while they shop.

Costco partners with Electric Era to bring fast EV charging to stores

In a move that’s turning heads, Costco partnered with Electric Era, a startup founded by former SpaceX engineers, to install ultra-fast charging stations at select locations. The North Port, Florida, warehouse was among the first to benefit, with six fast chargers installed in just 54 days, an impressive turnaround in an industry where installations can take months or even years. These chargers deliver up to 200 kW, allowing most EVs to reach an 80% charge in just 20 to 60 minutes. That’s enough time to shop for groceries, grab a slice of pizza and return to a car ready for the road.

How Costco EV charging stations work and what makes them unique

Costco EV charging stations stand out for several reasons. They offer fast charging, which means less waiting and more time for shopping, and their battery-backed system minimizes the need for major grid upgrades, allowing installations to be completed more quickly and efficiently. These stations are also highly reliable, boasting over 98% uptime and more than 90% session reliability, so EV drivers can count on them to keep moving.

The chargers themselves are user-friendly, equipped with both CCS and NACS connectors, 24/7 monitoring, automatic fault detection, over-the-air updates, and even integration with Costco’s loyalty program. Large screens at the stations display promotions and store information, making the experience even more engaging. By strategically placing these chargers at popular warehouse locations, Costco not only offers greater convenience for drivers but also encourages customers to spend more time in-store, benefiting both shoppers and the retailer.

Costco expands EV charging access through Electrify America partnership

Costco isn’t stopping with Electric Era. The retailer has also teamed up with Electrify America to roll out DC fast chargers at select locations in California, Colorado and Florida. These stations deliver up to 350 kW and are compatible with nearly all EV makes and models, making them a practical option for a wide range of drivers.

The future of EV charging at Costco: nationwide expansion on the horizon

With over 500 warehouses across the U.S., Costco has the potential to dramatically expand the nation’s fast-charging infrastructure. If the North Port pilot proves successful, thousands of new charging stalls could pop up nationwide, making EV ownership easier and more appealing.
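A rough sanity check on the charge-time figures quoted above (the battery size and average power are illustrative assumptions; real sessions vary with starting charge and power taper):

```python
# Rough charge-time estimate. Battery size and average power are
# assumptions for illustration, not figures from Costco or Electric Era.

battery_kwh = 75.0          # a common mid-size EV battery pack
start_soc, target_soc = 0.10, 0.80
peak_kw = 200.0             # the chargers' rated output, per the article
avg_kw = 0.6 * peak_kw      # sessions taper, so average well below peak

energy_needed = battery_kwh * (target_soc - start_soc)   # 52.5 kWh
minutes = energy_needed / avg_kw * 60
print(f"{energy_needed:.1f} kWh at ~{avg_kw:.0f} kW average: "
      f"{minutes:.0f} minutes")   # ~26 min, inside the quoted 20-60 range
```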
Kurt’s key takeaways

Costco EV charging stations are more than just a convenience; they’re a glimpse into the future of retail and transportation. By integrating fast, reliable charging with the everyday shopping experience, Costco is helping to drive America’s transition to electric vehicles, one parking lot at a time.

Is Costco quietly becoming one of the most powerful players in America’s EV revolution, or should it be betting on other fuels instead? Let us know by writing to us at Cyberguy.com/Contact.



New Google AI makes robots smarter without the cloud

Google DeepMind has introduced a powerful on-device version of its Gemini Robotics AI. This new system allows robots to complete complex tasks without relying on a cloud connection. Known as Gemini Robotics On-Device, the model brings Gemini’s advanced reasoning and control capabilities directly into physical robots. It is designed for fast, reliable performance in places with poor or no internet connectivity, making it ideal for real-world, latency-sensitive environments.

Smarter robots that work anywhere

Unlike its cloud-connected predecessor, this version runs entirely on the robot itself. It can understand natural language, perform fine motor tasks and generalize from very little data, all without requiring an internet connection. According to Carolina Parada, head of robotics at Google DeepMind, the system is “small and efficient enough” to operate directly onboard. Developers can use the model in situations where connectivity is limited, without sacrificing intelligence or flexibility.

Easy to adapt and train

Gemini Robotics On-Device can be customized with just 50 to 100 demonstrations. The model was first trained using Google’s ALOHA robot, but it has already been adapted to other platforms like Apptronik’s Apollo humanoid and the Franka FR3. For the first time, developers can fine-tune a DeepMind robotics model. Google is offering access through its trusted tester program and has released a full SDK to support experimentation and development.

Local control means more privacy and reliability

Since the artificial intelligence runs directly on the robot, all data stays local. This approach offers better privacy for sensitive applications, such as in healthcare. It also allows robots to continue operating during internet outages or in isolated environments. Google sees this version as a strong fit for remote, security-sensitive, or infrastructure-poor settings. The system delivers faster response times and fewer points of failure, opening up new possibilities for robot deployment in real-world settings.
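To see why running the model onboard changes the reliability story, consider the robot’s control loop. Here is a purely illustrative sketch (hypothetical names, not DeepMind’s actual SDK) in which every step is local, so a dropped connection never stalls the robot:

```python
# Hypothetical sketch of an on-device control loop. Nothing in the loop
# touches the network, so latency is bounded by the robot's own hardware.

class OnDeviceModel:
    """Stands in for a vision-language-action model stored on the robot."""

    def __init__(self, checkpoint_path: str):
        self.checkpoint_path = checkpoint_path  # weights on local disk

    def act(self, observation, instruction: str):
        # Inference runs on the robot's onboard accelerator, not a server.
        return {"joint_targets": [0.0] * 7}  # placeholder action

def control_loop(robot, model: OnDeviceModel, instruction: str) -> None:
    while not robot.task_done():
        obs = robot.read_sensors()            # local sensor I/O
        action = model.act(obs, instruction)  # local inference
        robot.execute(action)                 # keeps working offline
```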
Safety requires developer input

The on-device model does not include built-in semantic safety features. Google recommends that developers build safety systems into their robots using tools like the Gemini Live API and trusted low-level controllers. The company is limiting access to select developers to better study safety risks and real-world applications. While the hybrid model still offers more overall power, this version holds its own for most common use cases and helps push robotics closer to everyday deployment.

Kurt’s key takeaways

The release of Gemini Robotics On-Device marks a turning point. Robots no longer need a constant cloud connection to be smart, adaptive, and useful. With faster performance and stronger privacy, these systems are ready to tackle real-world tasks in places where traditional robots might fail.

Would you be comfortable handing off tasks to a robot that doesn’t need the internet to think? Let us know by writing to us at Cyberguy.com/Contact.
