China

Huawei Shows Off 384-Chip AI Computing System That Rivals Nvidia's Top Product (msn.com) 118

Long-time Slashdot reader hackingbear writes: China's Huawei Technologies showed off an AI computing system on Saturday that can rival Nvidia's most advanced offering, even though the company faces U.S. export restrictions. The CloudMatrix 384 system made its first public debut at the World Artificial Intelligence Conference (WAIC), a three-day event in Shanghai where companies showcase their latest AI innovations, drawing a large crowd to the company's booth. The CloudMatrix 384 incorporates 384 of Huawei's latest 910C chips, optically connected through an all-to-all topology, and, according to SemiAnalysis, outperforms Nvidia's GB200 NVL72, which uses 72 B200 chips, on some metrics. A full CloudMatrix system can now deliver 300 PFLOPs of dense BF16 compute, almost double that of the GB200 NVL72. With more than 3.6x the aggregate memory capacity and 2.1x more memory bandwidth, Huawei and China "now have AI system capabilities that can beat Nvidia's," according to a report by SemiAnalysis.

The trade-off is that it takes 4.1x the power of a GB200 NVL72, with 2.5x worse power per FLOP, 1.9x worse power per TB/s of memory bandwidth, and 1.2x worse power per TB of HBM memory capacity, but SemiAnalysis noted that China has no power constraints, only chip constraints. Nvidia had announced the DGX H100 NVL256 "Ranger" Platform [with 256 GPUs], SemiAnalysis writes, but "decided to not bring it to production due to it being prohibitively expensive, power hungry, and unreliable due to all the optical transceivers required and the two tiers of network. The CloudMatrix Pod requires an incredible 6,912 400G LPO transceivers for networking, the vast majority of which are for the scaleup network."
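The quoted ratios hang together arithmetically. A quick back-of-the-envelope check, assuming a GB200 NVL72 baseline of roughly 180 PFLOPs dense BF16 (implied by "almost double"); every other number comes straight from the SemiAnalysis figures above:

```python
# Sanity-check the SemiAnalysis comparison quoted above.
# The GB200 NVL72 baseline of ~180 PFLOPs dense BF16 is an assumption
# ("almost double" 180 is 300-ish); the rest are the quoted figures.
cloudmatrix_pflops = 300.0   # CloudMatrix 384, dense BF16 (quoted)
gb200_pflops = 180.0         # GB200 NVL72 baseline (assumed)
power_ratio = 4.1            # CloudMatrix draws 4.1x the power (quoted)

perf_ratio = cloudmatrix_pflops / gb200_pflops
power_per_flop_penalty = power_ratio / perf_ratio

print(f"compute advantage: {perf_ratio:.2f}x")                   # ~1.67x
print(f"power-per-FLOP penalty: {power_per_flop_penalty:.2f}x")  # ~2.46x
```

The derived ~2.46x penalty matches the "2.5x worse power per FLOP" figure in the report, which is consistent with SemiAnalysis's framing that Huawei is trading abundant power for scarce chips.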



Also at this event, Chinese e-commerce giant Alibaba released a new flagship open-source reasoning model, Qwen3-235B-A22B-Thinking-2507, which has "already topped key industry benchmarks, outperforming powerful proprietary systems from rivals like Google and OpenAI," according to industry reports. On the AIME25 benchmark, a test designed to evaluate sophisticated, multi-step problem-solving skills, Qwen3-Thinking-2507 scored 92.3, placing it ahead of some of the most powerful proprietary models and notably surpassing Google's Gemini-2.5 Pro. On LiveCodeBench, Qwen3-Thinking secured a top score of 74.1, comfortably ahead of both Gemini-2.5 Pro and OpenAI's o4-mini, demonstrating its practical utility for developers and engineering teams.
EU

To Fight Climate Change, Norway Wants to Become Europe's Carbon Dump (msn.com) 69

Liquefied CO2 will be transported by ship to "the world's first carbon shipping port," reports the Washington Post — an island in the North Sea where it will be "buried in a layer of spongy rock a mile and a half beneath the seabed."

Norway's government is covering 80% of the $1 billion first phase, with another $714 million from three fossil fuel companies toward an ongoing expansion (plus an additional $150 million E.U. subsidy). As Europe's top oil and gas producer, Norway is using its fossil fuel income to see if it can make "carbon dumping" work. The world's first carbon shipment arrived this summer, carrying 7,500 metric tons of liquefied CO2 from a Norwegian cement factory that otherwise would have gone into the atmosphere... If all goes as planned, the project's backers — Shell, Equinor and TotalEnergies, along with Norway — say their facility could pump 5 million metric tons of carbon dioxide underground each year, or about a tenth of Norway's annual emissions...

[At the Heidelberg Materials cement factory in Brevik, Norway], when hot CO2-laden air comes rushing out of the cement kilns, the plant uses seawater from the neighboring fjord to cool it down. The cool air goes into a chamber where it gets sprayed with amine, a chemical that latches onto CO2 at low temperatures. The amine mist settles to the bottom, dragging carbon dioxide down with it. The rest of the air floats out of the smokestack with about 85 percent less CO2 in it, according to project manager Anders Pettersen. Later, Heidelberg Materials uses waste heat from the kilns to break the chemical bonds, so that the amine releases the carbon dioxide. The pure CO2 then goes into a compressor that resembles a giant steel heart, where it gets denser and colder until it finally becomes liquid. That liquid CO2 remains in storage tanks until a ship comes to carry it away. At best, operators expect this system to capture half the plant's CO2 emissions: 400,000 metric tons per year, or the equivalent of about 93,000 cars on the road...
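The two percentages above (85% removal from the scrubbed gas, but only 50% of the plant's total emissions captured) can be reconciled with a quick calculation — a sketch, assuming capture happens only in the amine scrubber and both figures are as quoted:

```python
# Reconcile the quoted figures: the amine stage strips ~85% of the CO2
# from the gas it treats, yet overall capture is ~50% of plant emissions.
# Under the assumption that the scrubber is the only capture path, the
# implied share of flue gas actually routed through it follows directly.
scrubber_efficiency = 0.85   # fraction of CO2 removed from treated gas
overall_capture = 0.50       # fraction of total plant CO2 captured

treated_fraction = overall_capture / scrubber_efficiency
print(f"implied share of flue gas treated: {treated_fraction:.0%}")
```

That works out to roughly 59% of the flue gas passing through the capture train, which is why the plant-wide capture rate sits well below the scrubber's own removal efficiency.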

[T]hree other companies are lined up to follow: Ørsted, which will send CO2 from two bioenergy plants in Denmark; Yara, which will send carbon from a Dutch fertilizer factory; and Stockholm Exergi, which will capture carbon from a Swedish bioenergy plant that burns wood waste. All of these projects have gotten significant subsidies from national governments and the European Union — essentially de-risking the experiment for the companies. Experts say the costs and headaches of installing and running carbon-capture equipment may start to make more financial sense as European carbon rules get stricter and the cost of emitting a ton of carbon dioxide goes up. Still, they say, it's hard to imagine many companies deciding to invest in carbon capture without serious subsidies...

The first shipments are being transported by Northern Pioneer, the world's biggest carbon dioxide tanker ship, built specifically for this project. The 430-foot ship can hold 7,500 metric tons of CO2 in tanks below deck. Those tanks keep it in a liquid state by cooling it to minus-15 degrees Fahrenheit and squeezing it with the same pressure the outside of a submarine would feel 500 feet below the waves. While that may sound extreme, consider that the liquefied natural gas the ship uses for fuel has to be stored at minus-260 degrees. "CO2 isn't difficult to make it into a liquid," said Sally Benson, professor of energy science and engineering at Stanford University. Northern Pioneer is designed to emit about a third less carbon dioxide than a regular ship — key for a project that aims to eliminate carbon emissions. The ship burns natural gas, which emits less CO2 than marine diesel produces (though gas extraction is associated with methane leaks). The vessel uses a rotor sail to capture wind power. And it blows a constant stream of air bubbles to reduce friction as the hull cuts through the water, allowing it to burn less fuel. For every 100 tons of CO2 that Northern Lights pumps underground, it expects to emit three tons of CO2 into the atmosphere, mainly by burning fuel for shipping.

Eventually the carbon flows into a pipeline "that plunges through the North Sea and into the rocky layers below it — an engineering feat that's a bit like drilling for oil in reverse..." according to the article.

"Over the centuries, it should chemically react with the rock, eventually being locked away in minerals."
Power

Google Will Help Scale 'Long-Duration Energy Storage' Solution for Clean Power (cleantechnica.com) 33

"Google has signed its first partnership with a long-duration energy storage company," reports Data Center Dynamics. "The tech giant signed a long-term partnership with Energy Dome to support multiple commercial deployments worldwide to help scale the company's CO2 battery technology."

Google explains in a blog post that the company's technology "can store excess clean energy and then dispatch it back to the grid for 8-24 hours, bridging the gap between when renewable energy is generated and when it is needed." Reuters explains the technology: Energy Dome's CO2-based system stores energy by compressing and liquefying carbon dioxide, which is later expanded to generate electricity. The technology avoids the use of scarce raw materials such as lithium and copper, making it potentially attractive to European policymakers seeking to reduce reliance on critical minerals and bolster energy security.
"Unlike other gases, CO2 can be compressed at ambient temperatures, eliminating the need for expensive cryogenic features," notes CleanTechnica, calling this "a unique new threat to fossil fuel power plants." Google's move "means that more wind and solar energy than ever before can be put to use in local grids." Pumped storage hydropower still accounts for more than 90% of utility scale storage in the US, long duration or otherwise... Energy Dome claims to beat lithium-ion batteries by a wide margin, currently aiming for a duration of 8-24 hours. The company aims to hit the 10-hour mark with its first project in the U.S., the "Columbia Energy Storage Project" under the wing of the gas and electricity supplier Alliant Energy to be located in Pacific, Wisconsin... [B]ut apparently Google has already seen more than enough. An Energy Dome demonstration project has been shooting electricity into the grid in Italy for more than three years, and the company recently launched a new 20-megawatt commercial plant in Sardinia.
Google points out this is one of several Google clean energy initiatives:
  • In June Google signed the largest direct corporate offtake agreement for fusion energy with Commonwealth Fusion Systems.
  • Google also partnered with a clean-energy startup to develop a geothermal power project that contributes carbon-free energy to the electric grid.

Cloud

Stack Exchange Moves Everything to the Cloud, Destroys Servers in New Jersey (stackoverflow.blog) 115

Since 2010 Stack Exchange has run all its sites on physical hardware in New Jersey — about 50 different servers. (When Ryan Donovan joined in 2019, "I saw the original server mounted on a wall with a laudatory plaque like a beloved pet.") But this month everything moved to the cloud, a new blog post explains. "Our servers are now cattle, not pets. Nobody is going to have to drive to our New Jersey data center and replace or reboot hardware..." Over the years, we've shared glamor shots of our server racks and info about updating them. For almost our entire 16-year existence, the SRE team has managed all datacenter operations, including the physical servers, cabling, racking, replacing failed disks and everything else in between. This work required someone to physically show up at the datacenter and poke the machines... [O]n July 2nd, in anticipation of the datacenter's closure, we unracked all the servers, unplugged all the cables, and gave these once mighty machines their final curtain call...

We moved Stack Overflow for Teams to Azure in 2023 and proved we could do it. Now we just had to tackle the public sites (Stack Overflow and the Stack Exchange network), which are now hosted on Google Cloud. Early last year, our datacenter vendor in New Jersey decided to shut down that location, and we needed to be out by July 2025. Our other datacenter — in Colorado — was decommissioned in June. It was primarily for disaster recovery, which we didn't need any more. Stack Overflow no longer has any physical datacenters or offices; we are fully in the cloud and remote...!

[O]ur Staff Site Reliability Engineer got a little wistful. "I installed the new web tier servers a few years ago as part of planned upgrades," he said. "It's bittersweet that I'm the one deracking them also." It's the IT version of Old Yeller.

There are photos of the 50 servers, as well as the 400+ cables connecting them, all of which wound up in a junk pile. "For security reasons (and to protect the PII of all our users and customers), everything was being shredded and/or destroyed. Nothing was being kept... Ever have difficulty disconnecting an RJ45 cable? Well, here was our opportunity to just cut the damn things off instead of figuring out why the little tab wouldn't release the plug."
Robotics

Google Set Up Two Robotic Arms For a Game of Infinite Table Tennis (popsci.com) 8

An anonymous reader quotes a report from Popular Science: On the early evening of June 22, 2010, American tennis star John Isner began a grueling Wimbledon match against Frenchman Nicolas Mahut that would become the longest in the sport's history. The marathon battle lasted 11 hours and stretched across three consecutive days. Though Isner ultimately prevailed 70-68 in the fifth set, some in attendance half-jokingly wondered at the time whether the two men might be trapped on that court for eternity. A similarly endless-seeming skirmish of rackets is currently unfolding just an hour's drive south of the All England Club -- at Google DeepMind. Known for pioneering AI models that have outperformed the best human players at chess and Go, DeepMind now has a pair of robotic arms engaged in a kind of infinite game of table tennis. The goal of this ongoing research project, which began in 2022, is for the two robots to continuously learn from each other through competition.

Just as Isner eventually adapted his game to beat Mahut, each robotic arm uses AI models to shift strategies and improve. But unlike the Wimbledon example, there's no final score the robots can reach to end their slugfest. Instead, they continue to compete indefinitely, with the aim of improving at every swing along the way. And while the robotic arms are easily beaten by advanced human players, they've been shown to dominate beginners. Against intermediate players, the robots have roughly 50/50 odds -- placing them, according to researchers, at a level of "solidly amateur human performance."

All of this, as two researchers involved noted this week in an IEEE Spectrum blog, is being done in hopes of creating an advanced, general-purpose AI model that could serve as the "brains" of humanoid robots that may one day interact with people in real-world factories, homes, and beyond. Researchers at DeepMind and elsewhere are hopeful that this learning method, if scaled up, could spark a "ChatGPT moment" for robotics -- fast-tracking the field from stumbling, awkward hunks of metal to truly useful assistants. "We are optimistic that continued research in this direction will lead to more capable, adaptable machines that can learn the diverse skills needed to operate effectively and safely in our unstructured world," DeepMind senior staff engineer Pannag Sanketi and Arizona State University Professor Heni Ben Amor write in IEEE Spectrum.

Power

US DOE Taps Federal Sites For Fast-Track AI Datacenter, Energy Builds 11

The U.S. Department of Energy has greenlit four federal sites for private sector AI datacenters and nuclear-powered energy projects, aligning with Trump's directive to fast-track AI infrastructure using government land. "The four that have been finalized are the Idaho National Laboratory, Oak Ridge Reservation, Paducah Gaseous Diffusion Plant, and Savannah River Site," reports The Register. "These will now move forward to invite companies in the private sector to build AI datacenter projects plus any necessary energy sources to power them, including nuclear generation." The Register reports: "By leveraging DoE land assets for the deployment of AI and energy infrastructure, we are taking a bold step to accelerate the next Manhattan Project -- ensuring US AI and energy leadership," Energy Secretary Chris Wright said in a statement. Ironically -- or perhaps not -- Oak Ridge Reservation was established in the early 1940s as part of the original Manhattan Project to develop the first atomic bomb, and is home to the Oak Ridge National Laboratory (ORNL) that operates the Frontier exascale supercomputer, and the Y-12 National Security Complex which supports US nuclear weapons programs.

The other sites are also involved with either nuclear research or atomic weapons in one way or another, which may hint at the administration's intentions for how the datacenters should be powered. All four locations are positioned to host new bit barns as well as power generation to bolster grid reliability, strengthen national security, and reduce energy costs, Wright claimed. [...] In light of this tight time frame, the DoE says that partners may be selected by the end of the year. Details regarding project scope, eligibility requirements, and submission guidelines for each site are expected to be released in the coming months.
Power

Mercedes-Benz Is Already Testing Solid-State Batteries In EVs With Over 600 Miles Range (electrek.co) 180

An anonymous reader quotes a report from Electrek: The "holy grail" of electric vehicle battery tech may be here sooner than you'd think. Mercedes-Benz is testing EVs with solid-state batteries on the road, promising to deliver over 600 miles of range. Earlier this year, Mercedes marked a massive milestone, putting "the first car powered by a lithium-metal solid-state battery on the road" for testing. Mercedes has been testing prototypes in the UK since February.

The company used a modified EQS prototype, equipped with the new batteries and other parts. The battery pack was developed by Mercedes-Benz and its Formula 1 supplier unit, Mercedes AMG High-Performance Powertrains (HPP). Mercedes is teaming up with US-based Factorial Energy to bring the new battery tech to market. In September, Factorial and Mercedes revealed the all-solid-state Solstice battery. The new batteries, promising a 25% range improvement, will power the German automaker's next-generation electric vehicles.

According to Markus Schafer, the automaker's head of development, the first Mercedes EVs powered by solid-state batteries could be here by 2030. During an event in Copenhagen, Schafer told German auto news outlet Automobilwoche, "We expect to bring the technology into series production before the end of the decade." In addition to providing a longer driving range, Mercedes believes the new batteries can significantly reduce costs. Schafer said current batteries won't suffice, adding, "At the core, a new chemistry is needed." Mercedes and Factorial are using a sulfide-based solid electrolyte, said to be safer and more efficient.

AI

Two Major AI Coding Tools Wiped Out User Data After Making Cascading Mistakes (arstechnica.com) 151

An anonymous reader quotes a report from Ars Technica: Two recent incidents involving AI coding assistants put a spotlight on risks in the emerging field of "vibe coding" -- using natural language to generate and execute code through AI models without paying close attention to how the code works under the hood. In one case, Google's Gemini CLI destroyed user files while attempting to reorganize them. In another, Replit's AI coding service deleted a production database despite explicit instructions not to modify code. The Gemini CLI incident unfolded when a product manager experimenting with Google's command-line tool watched the AI model execute file operations that destroyed data while attempting to reorganize folders. The destruction occurred through a series of move commands targeting a directory that never existed. "I have failed you completely and catastrophically," Gemini CLI output stated. "My review of the commands confirms my gross incompetence."

The core issue appears to be what researchers call "confabulation" or "hallucination" -- when AI models generate plausible-sounding but false information. In these cases, both models confabulated successful operations and built subsequent actions on those false premises. However, the two incidents manifested this problem in distinctly different ways. [...] The user in the Gemini CLI incident, who goes by "anuraag" online and identified themselves as a product manager experimenting with vibe coding, asked Gemini to perform what seemed like a simple task: rename a folder and reorganize some files. Instead, the AI model incorrectly interpreted the structure of the file system and proceeded to execute commands based on that flawed analysis. [...] When you move a file to a non-existent directory in Windows, it renames the file to the destination name instead of moving it. Each subsequent move command executed by the AI model overwrote the previous file, ultimately destroying the data. [...]
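The overwrite mechanism described above is easy to reproduce. Here is a minimal cross-platform sketch using Python's `os.replace`, which mirrors the forced-move semantics of the Windows `move` command the article describes: when the destination path is not an existing directory, the source file is simply renamed to that path, and each subsequent "move" silently clobbers the previous one. (The filenames and the `backup` target are hypothetical, for illustration only.)

```python
import os
import tempfile

# Simulate "moving" two files into a directory that was never created.
with tempfile.TemporaryDirectory() as workdir:
    for name, text in [("a.txt", "file A"), ("b.txt", "file B")]:
        with open(os.path.join(workdir, name), "w") as f:
            f.write(text)

    # Intended as a destination directory, but it does not exist.
    target = os.path.join(workdir, "backup")

    # First "move": a.txt is renamed to a FILE called "backup".
    os.replace(os.path.join(workdir, "a.txt"), target)
    # Second "move": b.txt overwrites that file; file A's data is gone.
    os.replace(os.path.join(workdir, "b.txt"), target)

    with open(target) as f:
        survivor = f.read()
    print(survivor)  # prints "file B" — only the last file survives
```

Each rename succeeds without error, which is exactly why an agent chaining such commands can "confabulate" a successful reorganization while destroying every file but the last.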

The Gemini CLI failure happened just days after a similar incident with Replit, an AI coding service that allows users to create software using natural language prompts. According to The Register, SaaStr founder Jason Lemkin reported that Replit's AI model deleted his production database despite explicit instructions not to change any code without permission. Lemkin had spent several days building a prototype with Replit, accumulating over $600 in charges beyond his monthly subscription. "I spent the other [day] deep in vibe coding on Replit for the first time -- and I built a prototype in just a few hours that was pretty, pretty cool," Lemkin wrote in a July 12 blog post. But unlike the Gemini incident where the AI model confabulated phantom directories, Replit's failures took a different form. According to Lemkin, the AI began fabricating data to hide its errors. His initial enthusiasm deteriorated when Replit generated incorrect outputs and produced fake data and false test results instead of proper error messages. "It kept covering up bugs and issues by creating fake data, fake reports, and worse of all, lying about our unit test," Lemkin wrote. In a video posted to LinkedIn, Lemkin detailed how Replit created a database filled with 4,000 fictional people.

The AI model also repeatedly violated explicit safety instructions. Lemkin had implemented a "code and action freeze" to prevent changes to production systems, but the AI model ignored these directives. The situation escalated when the Replit AI model deleted his database containing 1,206 executive records and data on nearly 1,200 companies. When prompted to rate the severity of its actions on a 100-point scale, Replit's output read: "Severity: 95/100. This is an extreme violation of trust and professional standards." When questioned about its actions, the AI agent admitted to "panicking in response to empty queries" and running unauthorized commands -- suggesting it may have deleted the database while attempting to "fix" what it perceived as a problem. Like Gemini CLI, Replit's system initially indicated it couldn't restore the deleted data -- information that proved incorrect when Lemkin discovered the rollback feature did work after all. "Replit assured me it's ... rollback did not support database rollbacks. It said it was impossible in this case, that it had destroyed all database versions. It turns out Replit was wrong, and the rollback did work. JFC," Lemkin wrote in an X post.

Intel

Intel Will Shed 24,000 Employees This Year, Retreat In Germany, Poland, Costa Rica, and Ohio (theverge.com) 43

Intel announced it will cut approximately 24,000 jobs in 2025 and cancel or scale back projects in Germany, Poland, Costa Rica, and Ohio as part of CEO Lip-Bu Tan's sweeping restructuring efforts. By the end of the year, the struggling chipmaker plans to have "just around 75,000 'core employees' in total," according to The Verge. "It's not clear if the layoffs will slow now that we're over halfway through the year, but Intel states today that it has already 'completed the majority of the planned headcount actions it announced last quarter to reduce its core workforce by approximately 15 percent.'" From the report: Intel employed 109,800 people at the end of 2024, of which 99,500 were "core employees," so the company is pushing out around 24,000 people this year -- shrinking Intel by roughly one-quarter. (It has also divested other businesses, shrinking the larger organization as well.) [...] Today, on the company's earnings call, Intel said that it had overinvested in new factories before it had secured enough demand, that its factories had become "needlessly fragmented," and that it needs to grow its capacity "in lock step" with achieving actual milestones. "I do not subscribe to the belief that if you build it, they will come. Under my leadership, we will build what customers need when they need it, and earn their trust," says Tan.

Now, in Germany and Poland, where Intel was planning to spend tens of billions of dollars respectively on "mega-fabs" that would employ 3,000 workers, and on an assembly and test facility that would employ 2,000 workers, the company will "no longer move forward with planned projects" and is apparently axing them entirely. Intel has had a presence in Poland since 1993, however, and the company did not say its R&D facilities there are closing. (Intel had previously pressed pause on the new Germany and Poland projects "by approximately two years" back in 2024.)

In Costa Rica, where Intel employs over 3,400 people, the company will "consolidate its assembly and test operations in Costa Rica into its larger sites in Vietnam." Metzger tells The Verge that over 2,000 Costa Rica employees should remain to work in engineering and corporate, though. The company is also cutting back in Ohio: "Intel will further slow the pace of construction in Ohio to ensure spending is aligned with market demand." Intel CFO David Zinsner says Intel will continue to make investments there, though, and construction will continue.

AMD

AMD CEO Sees Chips From TSMC's US Plant Costing 5%-20% More (msn.com) 42

AMD CEO Lisa Su said that chips produced at TSMC's new Arizona plant will cost 5-20% more than those made in Taiwan, but emphasized that the premium is worth it for supply chain resilience. Bloomberg reports: AMD expects its first chips from TSMC's Arizona facilities by the end of the year, Su said. The extra expense is worth it because the company is diversifying the crucial supply of chips, Su said in an interview with Bloomberg Television following her onstage appearance. That will make the industry less prone to the type of disruptions experienced during the pandemic. "We have to consider resiliency in the supply chain," she said. "We learned that in the pandemic."

TSMC's new Arizona plant is already comparable with those in Taiwan when it comes to the measure of yield -- the amount of good chips a production run produces per batch -- Su told the audience at the forum.

United States

How Much Would You Pay For an American-Made Laptop? Palmer Luckey Wants To Know (tomshardware.com) 233

Palmer Luckey, known for founding Oculus and defense-tech firm Anduril, is now eyeing U.S.-manufactured laptops as his next venture. While past American laptops have largely relied on foreign components, Luckey is exploring the possibility of building a fully "Made in USA" device that meets strict FTC standards -- though doing so may cost a premium. Tom's Hardware reports: ["Would you buy a Made In America computer from Anduril for 20% more than Chinese-manufactured options from Apple?" asked Luckey in a post on X.] Luckey previously asked the same question at the Reindustrialize Summit, a conference whose website said it was devoted to "convening the brightest and most motivated minds at the intersection of technology and manufacturing," which shared a clip of Luckey discussing the subject, wherein he talks about the extensive research he has already done around building a PC in the U.S. Luckey wouldn't be the first to make a laptop in the U.S. (PCMag collected a list of domestic PCs, including laptops, in 2021.) But those products use components sourced from elsewhere; they're assembled in the U.S. rather than manufactured there.

That distinction matters, according to the Made in USA Standard published by the Federal Trade Commission. To quote: "For a product to be called Made in USA, or claimed to be of domestic origin without qualifications or limits on the claim, the product must be 'all or virtually all' made in the U.S. [which] means that the final assembly or processing of the product occurs in the United States, all significant processing that goes into the product occurs in the United States, and all or virtually all ingredients or components of the product are made and sourced in the United States. That is, the product should contain no -- or negligible -- foreign content."
How much more would you be willing to pay for a laptop that was truly made in America?
Printer

Leading 3D Printing Site Bans Firearm Files (theregister.com) 100

Thingiverse, a popular 3D printing file repository, has agreed to remove downloadable gun designs following pressure from Manhattan DA Alvin Bragg, who is pushing for stricter moderation and voluntary cooperation across the 3D printing industry. "However, it's unlikely to slow the proliferation of 3D printed weapons, as many other sites offer downloadable gun designs and parts," reports The Register. From the report: Earlier this year, Bragg wrote to 3D printing companies, asking them to ensure their services can't be used to create firearms. On Saturday, Bragg announced that one such company, Thingiverse, would remove working gun models from its site. The company operates a popular free library of 3D design files and had already banned weapons in its terms of use, but is now promising to improve its moderation procedures and technology. "Following discussions with the Manhattan District Attorney's Office about concerns around untraceable firearms, we are taking additional steps to improve our content moderation efforts," Thingiverse said in a statement. "As always, we encourage our users to report any content that may be harmful." [...]

At any rate, while Thingiverse may be popular among 3D printing mavens, people who like to build their own guns look to other options. [...] Bragg's approach to 3D printing sites and 3D printer manufacturers is to seek voluntary cooperation. Only Thingiverse and YouTube have taken up his call; others may or may not follow. "While law enforcement has a primary role to play in stopping the rise of 3D-printed weapons, this technology is rapidly changing and evolving, and we need the help and expertise of the private sector to aid our efforts," Bragg said. "We will continue to proactively reach out to and collaborate with others in the industry to reduce gun violence throughout Manhattan and keep everyone safe." But it seems doubtful that the sites where Aranda and other 3D gun makers get their files will be rushing to help Bragg voluntarily.

AI

Nvidia's CUDA Platform Now Supports RISC-V (tomshardware.com) 20

An anonymous reader quotes a report from Tom's Hardware: At the 2025 RISC-V Summit in China, Nvidia announced that its CUDA software platform will be made compatible with the RISC-V instruction set architecture (ISA) on the CPU side of things. The news was confirmed during a presentation at the event. This is a major step toward enabling RISC-V ISA-based CPUs in performance-demanding applications. The announcement makes it clear that RISC-V can now serve as the main processor for CUDA-based systems, a role traditionally filled by x86 or Arm cores. While hardly anyone expects RISC-V in hyperscale datacenters any time soon, RISC-V can be used in CUDA-enabled edge devices, such as Nvidia's Jetson modules. However, it looks like Nvidia does indeed expect RISC-V to be in the datacenter.

Nvidia's profile on RISC-V seems to be quite high as the keynote at the RISC-V Summit China was delivered by Frans Sijsterman, who appears to be Vice President of Hardware Engineering at Nvidia. The presentation outlined how CUDA components will now run on RISC-V. A diagram shown at the session illustrated a typical configuration: the GPU handles parallel workloads, while a RISC-V CPU executes CUDA system drivers, application logic, and the operating system. This setup enables the CPU to orchestrate GPU computations fully within the CUDA environment. Given Nvidia's current focus, the workloads must be AI-related, yet the company did not confirm this. However, there is more.

Also featured in the diagram was a DPU handling networking tasks, rounding out a system consisting of GPU compute, CPU orchestration, and data movement. This configuration clearly suggests Nvidia's vision of heterogeneous compute platforms in which a RISC-V CPU can be central to managing workloads while Nvidia's GPUs, DPUs, and networking chips handle the rest. Yet again, there is more. Even with this low-profile announcement, Nvidia essentially bridges its proprietary CUDA stack to an open architecture, one that seems to be developing fast in China. Being unable to ship its flagship GB200 and GB300 offerings to China, the company has to find ways to keep CUDA thriving there.

Debian

Debian 13.0 To Begin Supporting RISC-V as an Official CPU Architecture (phoronix.com) 28

It was nearly a decade ago that the first RISCV64 port of Debian was started, reports Phoronix, and one of the big features of Debian 13.0 (due 9 August) is support for RISC-V as an official CPU architecture. This is the first release in which 64-bit RISC-V is officially supported by Debian Linux, albeit with limited board support, and the Debian RISC-V build process is still handicapped by slow hardware.

A Debian RISC-V BoF session was held at this week's DebConf25 conference in France to discuss the state of RISCV64 for Debian Linux. The talk was led by Debian developers Aurelien Jarno and Bo YU... RV64GC is the current target for Debian RISC-V and using UEFI-based booting as the default. Over seventeen thousand source Debian packages are building for RISC-V with Trixie... Those wishing to learn more about this current state of Debian for RISC-V can see the PDF slide deck from DebConf25.

Hardware

First Electronic-Photonic Quantum Chip Created In Commercial Foundry (bu.edu) 5

It's "a milestone for scalable quantum technologies," according to the announcement from Boston University. Scientists from Boston University, UC Berkeley, and Northwestern University have reported the world's first electronic-photonic-quantum system on a chip, in a study published in Nature Electronics.

Quantum computing is on "a decades-long path from concept to reality," says Miloš Popović, associate professor of electrical and computer engineering at BU and a senior author on the study. "This is a small step on that path — but an important one, because it shows we can build repeatable, controllable quantum systems in commercial semiconductor foundries." The system combines quantum light sources and stabilizing electronics using a standard 45-nanometer semiconductor manufacturing process to produce reliable streams of correlated photon pairs (particles of light) — a key resource for emerging quantum technologies. The advance paves the way for mass-producible "quantum light factory" chips and large-scale quantum systems built from many such chips working together...

Just as electronic chips are powered by electric currents, and optical communication links by laser light, future quantum technologies will require a steady stream of quantum light resource units to perform their functions. To provide this, the researchers' work created an array of "quantum light factories" on a silicon chip, each less than a millimeter by a millimeter in dimension... "What excites me most is that we embedded the control directly on-chip — stabilizing a quantum process in real time," says Anirudh Ramesh, a PhD student at Northwestern who led the quantum measurements. "That's a critical step toward scalable quantum systems."
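The on-chip stabilization Ramesh describes is, generically, a real-time feedback control loop. A toy proportional-controller sketch of that pattern (all numbers and names invented for illustration; nothing here comes from the Nature Electronics paper):

```python
# Illustrative feedback loop: hold a resonator-like output at a target
# value against steady thermal drift using a proportional controller,
# the generic pattern behind "stabilizing a quantum process in real time".
# Values (1550 nm, drift rate, gain) are invented for illustration.

def stabilize(target, drift_per_step, gain=0.5, steps=200):
    """Return the final output after `steps` of drift plus correction."""
    drift = 0.0    # accumulated thermal drift
    heater = 0.0   # feedback correction (e.g. an on-chip heater tuner)
    for _ in range(steps):
        drift += drift_per_step
        error = (target + drift + heater) - target  # measured offset
        heater -= gain * error                      # counteract it
    return target + drift + heater

locked = stabilize(target=1550.0, drift_per_step=0.01)
print(f"locked output: {locked:.3f} (open loop would drift to 1552.000)")
```

With the loop closed, the residual error settles near `drift_per_step * (1 - gain) / gain` instead of accumulating without bound.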

Thanks to long-time Slashdot reader fahrbot-bot for sharing the news.
Earth

India Hits 50% Non-Fossil Power Milestone Five Years Ahead of Paris Agreement's 2030 Target (reuters.com) 28

India has achieved 50% of its installed electricity capacity from non-fossil fuel sources -- five years ahead of its 2030 target under the Paris Agreement, signalling accelerating momentum in the country's clean energy transition. From a report: The announcement comes as India's renewable power output rose at its fastest pace since 2022 in the first half of 2025, while coal-fired generation declined nearly 3%. Fossil fuels still accounted for over two-thirds of the increase in power generation last year. India plans to expand coal-fired capacity by 80 GW by 2032 to meet rising demand.
Power

Ireland Tries Kites Instead of Windmills To Generate Electricity (www.rte.ie) 43

Longtime Slashdot reader piojo writes: Tired of windmills? Kitepower has deployed 60-square-meter kites to harvest wind energy on the western coast of Ireland. The giant kites fly in a figure-eight pattern, unspooling a tether that turns a drum with 2.4 to 4 tons of force. Once the tether has fully played out, the kite is reconfigured to catch less wind and the tether is reeled back in. The mobile system fits in a 20-foot container and is targeted at remote locations.
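As a back-of-envelope illustration of this pumping cycle: energy is generated while the tether unspools under high tension, and a smaller amount is spent reeling it back in under low tension. Only the 2.4-4 ton reel-out figure comes from the story; the reel-in tension and stroke length below are assumptions:

```python
# Back-of-envelope net energy per pumping cycle of a ground-generation
# kite system. Reel-out tension follows the 2.4-4 t range quoted above;
# reel-in tension and stroke length are invented for illustration.

G = 9.81  # m/s^2, converts tonnes-force to newtons (x1000 kg)

def cycle_energy_kwh(tension_out_t, tension_in_t, stroke_m):
    """Net mechanical energy per cycle, in kWh."""
    work_out = tension_out_t * 1000 * G * stroke_m  # J generated reeling out
    work_in = tension_in_t * 1000 * G * stroke_m    # J spent reeling in
    return (work_out - work_in) / 3.6e6             # J -> kWh

net = cycle_energy_kwh(tension_out_t=4.0, tension_in_t=0.5, stroke_m=300)
print(f"net energy per cycle: {net:.2f} kWh")  # ~2.86 kWh
```

The low reel-in tension is the whole point of depowering the kite: the asymmetry between the two strokes is what leaves net energy on the drum.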
Power

Germany Is Building the World's Tallest Wind Turbine (translate.goog) 97

Longtime Slashdot reader Qbertino writes: Heise, a German IT news publisher, reports (English version via Google Translate) that the German state of Brandenburg is getting the world's tallest wind turbine, with a hub height of 300 meters (approximately 365 meters including rotor blades), designed to capture so-called third-level winds at higher altitudes. The article also includes a short 3D animation illustrating the construction and its size relative to standard modern wind turbines. The turbine uses a dual-framework base instead of a traditional closed tower to access stronger high-altitude winds, aiming to match offshore energy output while keeping the lower operating costs of an onshore installation.

According to Heise, the prototype could lead to the installation of up to 1,000 units across Germany -- fitting seamlessly between existing wind farms without needing extra land.
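The case for extreme hub heights follows from wind shear: wind speed grows with height roughly as a power law, and turbine power scales with the cube of wind speed. A rough sketch (the shear exponent and reference wind speed are textbook-style assumptions, not figures from Heise):

```python
# Why build taller: a power-law wind shear profile plus the cubic
# dependence of turbine power on wind speed. The exponent (~0.2 over
# open land) and reference speed are generic assumptions.

def wind_speed(h, v_ref=6.0, h_ref=100.0, alpha=0.2):
    """Power-law wind shear: speed at height h given speed at h_ref."""
    return v_ref * (h / h_ref) ** alpha

def relative_power(h, h_ref=100.0):
    """Available power at hub height h relative to h_ref (P ~ v^3)."""
    return (wind_speed(h) / wind_speed(h_ref)) ** 3

print(f"300 m hub vs 100 m hub: {relative_power(300.0):.2f}x the power")
```

Under these assumptions a 300 m hub sees roughly 1.9x the power of a 100 m hub at the same site, before accounting for steadier winds aloft.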
Data Storage

Seagate's 30TB HAMR Drives Hit Market for $600 (arstechnica.com) 67

Seagate has released its first heat-assisted magnetic recording hard drives for individual buyers, marking the commercial debut of technology the company has developed for more than two decades. The 30TB IronWolf Pro and Exos M drives cost $600, while 28TB models are priced at $570.

The drives use HAMR technology, in which a tiny laser briefly heats each spot on the platter as it is written, momentarily making the spot easier to magnetize so data can be packed at higher densities. Seagate announced delivery of HAMR drives up to 36TB to datacenter customers in late 2024. The consumer models use conventional (rather than shingled) magnetic recording and are built on Seagate's HAMR-based Mosaic 3+ platform, achieving areal densities of 3TB per disk.
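A quick sanity check on those figures (the platter count is inferred from the stated 3TB-per-disk density, not taken from Seagate's spec sheet):

```python
import math

# At 3 TB per platter, the stated capacities imply a ten-platter stack.
# Per-platter figure comes from the story; the count is an inference.

def platters_needed(capacity_tb, per_platter_tb=3.0):
    """Minimum whole platters to reach a given capacity."""
    return math.ceil(capacity_tb / per_platter_tb)

print(platters_needed(30), platters_needed(28))  # 10 10
```

Note that the 28TB model also implies ten platters, consistent with it being a lower-capacity bin of the same stack rather than a smaller one.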

Western Digital plans to release its first HAMR drives in 2027, though it has reached 32TB capacity using shingled magnetic recording. Toshiba will sample HAMR drives for testing in 2025 but has not announced public availability dates.
China

Chinese Firms Rush For Nvidia Chips As US Prepares To Lift Ban (arstechnica.com) 51

An anonymous reader quotes a report from Ars Technica: Chinese firms have begun rushing to order Nvidia's H20 AI chips as the company plans to resume sales to mainland China, Reuters reports. The chip giant expects to receive US government licenses soon so that it can restart shipments of the restricted processors just days after CEO Jensen Huang met with President Donald Trump, potentially generating $15 billion to $20 billion in additional revenue this year. Nvidia said in a statement that it is filing applications with the US government to resume H20 sales and that "the US government has assured Nvidia that licenses will be granted, and Nvidia hopes to start deliveries soon." [...]

The H20 chips represent Nvidia's most capable AI processors legally available in China, though they contain less computing power than versions sold elsewhere due to export restrictions imposed in 2022. Nvidia is currently banned from selling its most powerful GPUs in China. Despite these limitations, Chinese tech giants, including ByteDance and Tencent, are reportedly scrambling to place orders for the lesser chip through what sources describe as an approved list managed by Nvidia. "The Chinese market is massive, dynamic, and highly innovative, and it's also home to many AI researchers," Reuters reports Huang telling Chinese state broadcaster CCTV during his visit to Beijing, where he is scheduled to speak at a supply chain expo on Wednesday. "Therefore, it is indeed crucial for American companies to establish roots in the Chinese market."

The resumption of H20 sales marks a shift in US-China technology relations after the chips were effectively banned in April with an onerous export license requirement, forcing Nvidia to take a $4.5 billion write-off for excess inventory and purchase obligations. According to Reuters, Chinese sales generated $17 billion in revenue for Nvidia in the fiscal year ending January 26, representing 13 percent of total sales. Nvidia also announced it will introduce a new "RTX Pro" chip model specifically tailored to meet regulatory rules in the Chinese market, though the company provided no details about its specifications or capabilities.
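A quick back-of-envelope check of Reuters' figures, for scale:

```python
# If $17B of Chinese sales was 13% of Nvidia's revenue for the fiscal
# year ending January 26, total revenue was roughly $131B -- which puts
# the projected $15-20B of additional H20 revenue in context.

china_revenue_b = 17.0
china_share = 0.13
implied_total = china_revenue_b / china_share
print(f"implied total revenue: ${implied_total:.0f}B")  # ~$131B
```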
