
Half of Planned US Data Center Builds Have Been Delayed or Canceled

Despite hundreds of billions of dollars in investment, nearly half of planned U.S. data center projects are being delayed or canceled. "One major reason behind these setbacks is the availability of key electrical components -- such as transformers, switchgear, and batteries -- that are used both at data center sites and outside of them," reports Tom's Hardware. "Meanwhile, grid infrastructure is also stressed by electric vehicles and electrified heating systems." Tom's Hardware reports: Approximately 12 gigawatts (12 GW) of data center capacity is expected to come online in the U.S. in 2026, according to data from market intelligence firm Sightline Climate cited by Bloomberg. Yet only about one-third of that capacity is currently under active construction because of various constraints.

Electrical infrastructure represents less than 10% of total data center cost, but it is as vital as compute hardware. A delay in any single element of the power chain can halt the entire project, which makes transformers, switchgear, and similar devices critical items despite their relatively small share of CapEx. Due to high demand, lead times for high-power transformers have expanded dramatically in the U.S.: delivery typically took 24 to 30 months before 2020, but waiting periods can stretch to as long as five years today, according to Sightline Climate cited by Bloomberg. For AI data centers, this is a catastrophe as their deployment cycles are under 18 months.
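The mismatch between transformer lead times and AI deployment cycles is stark enough to sanity-check with quick arithmetic. A minimal sketch, using only the figures quoted above (the calculation itself is illustrative, not from the source):

```python
# Back-of-the-envelope check of the lead-time mismatch described above.
# Both figures come from the article summary; the math is illustrative.

transformer_lead_time_months = 5 * 12   # worst-case wait for a high-power transformer
ai_deployment_cycle_months = 18         # typical AI data center deployment cycle

# How many AI deployment cycles elapse while one transformer is on order?
cycles_missed = transformer_lead_time_months / ai_deployment_cycle_months
print(f"{cycles_missed:.1f} deployment cycles per transformer delivery")
# prints: 3.3 deployment cycles per transformer delivery
```

In other words, a data center waiting on a worst-case transformer order could see more than three full hardware deployment cycles pass before power is even available.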

To address shortages, companies are turning to global markets. As a result, Canada, Mexico, and South Korea have become the biggest suppliers of high-power transformers to U.S. AI data centers. At the same time, imports of high-power transformers from China surged from fewer than 1,500 units in 2022 to more than 8,000 units in 2025 through October, according to Wood Mackenzie data cited by Bloomberg. The volatility of exports from China does not end with transformers, as the PRC accounts for over 40% of U.S. battery imports, while its share in certain transformer and switchgear categories remains near 30%, according to Bloomberg.


  • by Himmy32 ( 650060 ) on Friday April 03, 2026 @12:12PM (#66075628)
    Timelines of power infrastructure aren't planned in the same manner as private industries who are chasing AI fads and quarterly profits. Unsurprising that lead times for products are more aligned to that traditional case.
    • The bigger factor is that their whole "AI" thing is collapsing before their eyes, because once "AI" is explained, you can either replace "AI" with "magic" or a quantifiable, describable existing technology. And in this particular bubble, it's not a describable existing technology, it's "magic". And while investors might be fooled, the accountants aren't, and management's starting to figure it out (or aren't fooled in the first place but are intentionally misleading investors).

      They're not exactly helping t

      • by Himmy32 ( 650060 ) on Friday April 03, 2026 @02:24PM (#66075840)

        It's not the accountants that are the roadblock. It's the second or third levels in the supply chain that are resistant to rapidly building out additional capacity.

        This is the same story for RAM providers, where additional manufacturing lines have long lead times. Building extra capacity for demonstrated short-term demand that may not last by the time of completion is a large risk. In the meantime, they can already rake in additional profit off that raised demand and limited supply from other competitors that are making the same cost/benefit evaluations.

        • by ctilsie242 ( 4841247 ) on Friday April 03, 2026 @03:28PM (#66075918)

          This can cause problems in the long run, though. For example, suppose someone finds a new technology that is like DRAM except that it doesn't need a refresh signal: something like Optane, but an order of magnitude faster. When you get extreme demand pulls, people start doing something about it, even launching long-term projects, because they know the RAM issue may go away short-term but will hit them in the future. Or DRAM will be minimized and Optane-like memory will be used as "swap," similar to how mainframes had internal and external memory. Sounds hackish, but this would change the tune of computing completely.

          Or... worst case... and this is a scary thing... developers think about optimizing around hardware resources. We used to be able to have office suites that could easily fit in 192k, and full-fledged word processors like Word 3.0 on the Mac that fit comfortably on two 800k disks.

          • Or... worst case... and this is a scary thing... developers think about optimizing around hardware resources. We used to be able to have office suites that could easily fit in 192k, and full-fledged word processors like Word 3.0 on the Mac that fit comfortably on two 800k disks.

            Not being a developer myself, can you explain how this is somehow a bad thing?

            I understand why games with all their massive graphic files have to take up a large footprint in storage, but I don't understand why so many other tools also need to take up tons of space. Your Word 3.0 example is a perfect one. I'm sure 99% of word processing could be handled by Word 3.0 and really, the only reason to keep changing things is to keep a revenue stream alive for Microsoft.

            Of course, I also wonder why Window

            • by kackle ( 910159 )

              Or... worst case... and this is a scary thing... developers think about optimizing around hardware resources.

              Not being a developer myself, can you explain how this is somehow a bad thing?

              I think he was being sarcastic, as in, we should be optimizing around hardware resources.

        • by mjwx ( 966435 )

          It's not the accountants that are the roadblock. It's the second or third levels in the supply chain that are resistant to rapidly building out additional capacity.

          This is the same story for RAM providers, where additional manufacturing lines have long lead times. Building extra capacity for demonstrated short-term demand that may not last by the time of completion is a large risk. In the meantime, they can already rake in additional profit off that raised demand and limited supply from other competitors that are making the same cost/benefit evaluations.

          And why shouldn't they be "resistant"?

          They go out and spend the money to increase capacity and this whole AI fad falls in a heap long before they recoup the investment, the techbros aren't going to pick up the tab. Hell, they were planning to screw them on price from the very beginning.

        • You basically rephrased what I said, but yeah.
    • We need to plan on what to do once AI does not need all the data centers, electricity and cooling.

      What is being built is based on a continual increase in compute needs and does not take into account the inevitable 100x, 1000x , ... reduction in compute cost due to better hardware or algorithms.

  • I love... (Score:5, Funny)

    by uem-Tux ( 682053 ) on Friday April 03, 2026 @12:14PM (#66075630) Homepage
    ...the smell of bubbles popping in the morning.
    • by Tablizer ( 95088 )

      Smells like ... recession.

      (But finally I can buy PC parts.)

      • Might very well be the opposite if all the non-AI companies betting on AI suddenly have to go on a hiring spree to rehire the people they thought they could replace with snake-oil.

        It'd be nice to get back to Biden's full employment again.

        • Re: (Score:3, Insightful)

          by Tablizer ( 95088 )

          Almost nobody actually laid of employees because of AI, that was just an excuse to downsize in slow markets. If sales were growing, the same number of employees could do more work via bots such that they wouldn't actually reduce head-count. The proper business move under gained efficiency in a normal economy is to chase market share, not lay off.

          • There's at least some evidence on some level that the C-suite class actually believes all this bullshit. Hence the mandates forcing people to use AI and giving people bad performance reviews if they don't use it.

            This isn't to say there hasn't also been a lot of redundancies blamed on AI that wouldn't have happened anyway, I've said as much myself, but certainly we've had plenty of cases where the C-suite have assumed that AI can fill in the gaps.

            I think once the AI bubble pops, rehiring skilled workers will

          • by tlhIngan ( 30335 )

            Almost nobody actually laid of employees because of AI, that was just an excuse to downsize in slow markets. If sales were growing, the same number of employees could do more work via bots such that they wouldn't actually reduce head-count. The proper business move under gained efficiency in a normal economy is to chase market share, not lay off.

            Yeah, AI layoffs were layoffs to keep the stock price from going down. Just like RTO mandates.

            What it really means is despite the numbers, the economy is in far wo

          • by cowdung ( 702933 )

            Sure they did.. they laid off tens of thousands of employees because they needed to free funds for building data centers. They aren't cheap. And the spending is totally bonkers. No Google, Amazon, Meta, etc. can afford such an investment without consequences.. so they free up cash by dumping people and cancelling "non-essential" projects.

            Hundreds of thousands have lost jobs at this point to fund this.

            My hope is that in a year or two the fever will end and they will start reallocating the money to people again. O

            • by Tablizer ( 95088 )

              Re: "because they needed to free funds for building data centers."

              Yes, for some co's that's certainly the case. But it's not because "bots took jobs", but rather "bots need funds".

          • by Tablizer ( 95088 )

            Correction: "laid off"

      • (But finally I can buy PC parts.)

        If these are Chinese made PC parts, you'll have to work out a deal with tariff-boy first.

    • Speaking of which, I like a news item I saw this morning that RAM prices are collapsing as a massive deal that OpenAI had made w/ Samsung and Hynix to corner 40% of DRAM production [youtube.com] is not going to materialize

      Can't happen soon enough. Five years ago, I bought this laptop at Costco for $250. Yesterday, looking at the latest Costco catalog, there are no laptops less than $750 (no, the Neo hasn't arrived there, and I didn't notice any Chromebooks either, which also I had bought from Costco around that time

    • "bubbles popping"

      All of the constraints mentioned are supply-side, not demand-side.

      That may obscure a market bubble popping, but is not evidence of it.

  • Good. (Score:3, Insightful)

    by MachineShedFred ( 621896 ) on Friday April 03, 2026 @12:23PM (#66075644) Journal

    Maybe we can stop with the infinite build for endless capacity bullshit.

  • Sounds familiar (Score:5, Informative)

    by CEC-P ( 10248912 ) on Friday April 03, 2026 @12:23PM (#66075646)
    Shoutout to my fellow IT workers who have tried to get electrical utilities to do literally anything. Our small site UPS was reporting voltage out of range and they fixed it after 5 calls, 4 staff technician visits, 2 recorders, and 4 weeks. It was a bad component in the sub-station that was "very old" according to them. Their solution to stopping 129V from coming in overnight was to switch that one off and hope the rest can handle the load.
    Now imagine trying to order several megawatts. Usually, the new construction department is different than maintenance, but it sounds like they're having a similar experience.
    • Doesn't even have to be a plant trying to get more capacity. Corporate utilities look at the cost, X, and what you're willing to reasonably pay, Y, and refuse to build if Y is less than 2-3 times, at a minimum, what X is. Which is a big part of why common infrastructure shouldn't be in corporate hands in the first place, but publicly operated. This is legitimately a serious problem for households in the US since the end of the public push for rural electrification in the 90s. The only thing that's chang

      • Hmm, you would be surprised where Starlink provides coverage to. Probably easier to get Starlink for a remote location than water, sewer or mains power. In which case, you could run a small off-grid home off 10 kWh batteries along with solar and wind, have a well dug (or if you are in a high rainfall area, water catchment+osmosis) and of course gonna need septic, though I suppose you might be able to go with a composting toilet. Never used one, so ask a real prepper for that information.

        • You'd be amazed where terrestrial cellular reaches just between T-Mobile and independent local telcos that doesn't involve directly contributing to an antiamerican terrorist billionaire's wealth. I speak from experience on this (and rural electrification).
    • Usually, the new construction department is different than maintenance, but sounds like they're having a similar experience.

      The lead time of bringing new capacity online is often measured in years. Even with infinite AI money being thrown at the issue, the lead time for many of the necessary generation and transmission components is quite long (last I checked the large transformers needed for large power and substations had an up to 5 year delivery wait).

  • by UnknowingFool ( 672806 ) on Friday April 03, 2026 @12:24PM (#66075650)

    As more and more datacenters were being announced, some skeptics kept asking about how datacenters would be powered and cooled. There was concern that the infrastructure was not adequate. "Trust me bro," always seemed to be the answer. It turns out building megawatt datacenters requires a great deal of meticulous planning. Who knew?

    Something else that has been brought up is that with delays, the hardware in these datacenters might be obsolete by the time they are built. Previous datacenters like Google's were built with hardware that was not cutting edge but was stable and reliable. AI always needs the latest and greatest processors. However, by the time the datacenter is fully built, those processors are no longer the latest and greatest.

    Also there is the next question: "Where did all the money go?" If datacenters are being delayed or canceled, what happened to all the money that was used to start the datacenter projects? Investors might start asking too many questions about what happened to their investments.

    • As more and more datacenters were being announced, some skeptics kept asking about how datacenters would be powered and cooled. There was concern that the infrastructure was not adequate. "Trust me bro," always seem to be the answer. It turns out building megawatt datacenters requires a great deal of meticulous planning. Who knew?

      I completely agree with this. My state was all about banning ICE vehicles and gas stoves and furnaces in about a decade...but had very few plans to handle the terawatt capacity requirements...and this was *before* datacenters got a seat at the table.

      Something else that has been brought up is that with delays, the hardware in these datacenters might be obsolete by the time they are built.

      I'm...not quite sure I agree with this one as much...

      AI always needs the latest and greatest processors.

      This...I think, has some wiggle room. Sure, training new models requires greater amounts of compute power, and as newer models and services develop, there will be a need to increase compute power. However, tha

      • This...I think, has some wiggle room. Sure, training new models requires greater amounts of compute power, and as newer models and services develop, there will be a need to increase compute power. However, that doesn't mean that older models are useless. They may not be front-and-center, but they can still be used in lesser capacities. ChatGPT 3.5 isn't quite as awesome as v5, but if it's what is used to serve up ads in ChatGPT sessions, the hardware is still perfectly fit-for-purpose. Same goes for Google or Microsoft - older boards may not be front-and-center, but they can still do boring, smaller-scope tasks that are still useful.

        Older models do not generate investment, which is the primary source of AI funding. While better models can be developed over time, the cheat code for all models is just to use more powerful hardware. Also, AI companies have ignored logistics as vital. They want the most powerful hardware now. The reality that datacenters take years to build, meaning the hardware they buy now will not be the latest when the datacenter comes online, requires forethought and planning. They would rather cancel the whole cont

    • by dgatwood ( 11270 )

      Something else that has been brought up is that with delays, the hardware in these datacenters might be obsolete by the time they are built. Previous datacenters like Google ones were built with hardware that was not the cutting edge but were stable and reliable. AI always needs the latest and greatest processors. However, by the time the datacenter is fully built, those processors are no longer the latest and greatest.

      Unless the companies are completely incompetent, they aren't having the processors manufactured until they have a plan for bringing the building online, including power delivery.

      Everything else in the data center is pretty much the same no matter what hardware you put in the racks. You still need floors, walls, and a ceiling or roof. You still need places for cables to go between racks (either above or below). The floors still need to be built to handle high static weight loads where the racks are. You

      • Unless the companies are completely incompetent, they aren't having the processors manufactured until they have a plan for bringing the building online, including power delivery.

        Not from what I can see. NVidia is getting tons of orders for processors. Also the RAM shortage is because AI datacenters are buying all available memory and convincing the RAM foundries to make as much high-bandwidth AI server memory as possible. When the bubble bursts, will these companies be left with orders no one wants? For example, Micron has stopped selling consumer memory in order to make HBM3E, which is not consumer RAM sticks. Maybe Micron could sell some of that RAM to non-AI datacenters, but the

        • by dgatwood ( 11270 )

          Unless the companies are completely incompetent, they aren't having the processors manufactured until they have a plan for bringing the building online, including power delivery.

          Not from what I can see. NVidia is getting tons of orders for processors. Also the RAM shortage is because AI datacenters are buying all available memory and convincing the RAM foundries to make as much high bandwidth AI server memory as possible.

          Yeah, but that doesn't mean they're ordering so far ahead that they don't have electrical connection approval for the building. Approval for connecting to the grid should happen before they even break ground for the building, after which it takes anywhere from one to three years before the data center opens.

          It can take a year or more just to get the high-kVA transformers for the building, which are often built on demand when ordered, not built and warehoused ahead of time. And it

          • Yeah, but that doesn't mean they're ordering so far ahead that they don't have electrical connection approval for the building

            You would be surprised. Again Micron is making high bandwidth memory for AI instead of consumer DRAM. They have already announced this.

            Approval for connecting to the grid should happen before they even break ground for the building, after which it takes anywhere from one to three years before the data center opens.

            Again you would be surprised by the lack of logistics for some of these data centers. And no one is saying it does not require that level of planning. What we are saying is that some of these data centers are being built with hopes and dreams as the foundation.

  • RAM (Score:2, Informative)

    by SumDog ( 466607 )
    Please crash and burn harder AI industry bubble. I'm not buying RAM again until it's less than $6/GB. We are currently at 2009 prices for RAM:

    https://battlepenguin.com/tech... [battlepenguin.com]

    I'm really regretting not just maxing out my homelab with RAM when it was all reasonable.
    • Back in March of 2025 I purchased 32GB of DDR4 for $62 for a system I was putting together. I just checked the other day and that same RAM is now $219. Man I got lucky. I had no idea back then RAM prices would spike this much. Like you I'm going to hold off on RAM purchases until this AI insanity bubble bursts. Same goes for all other components (SSD, NVMe, GPU, etc..)
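Running the parent's own figures gives a sense of scale; a quick illustrative sketch (the $6/GB threshold is from the grandparent post):

```python
# Price jump on the 32GB DDR4 kit mentioned above: $62 in March 2025 vs. $219 now.
old_price, new_price, capacity_gb = 62.0, 219.0, 32

increase_pct = (new_price - old_price) / old_price * 100
per_gb = new_price / capacity_gb   # compare against the $6/GB "buy again" threshold above

print(f"+{increase_pct:.0f}%, now ${per_gb:.2f}/GB")
# prints: +253%, now $6.84/GB
```

So the same kit has more than tripled in price and still sits above the $6/GB line at which the grandparent would consider buying again.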

    • It's kind of pissing me off actually. I'm finally to the point where the 32GB I have in my desktop isn't sufficient for what I'm doing on it anymore, and I don't want to spend the entire price of a new mini PC to get it to 64GB.

      So I have to grit my teeth through swapping and frequent reboots.

      Fuck the AI billionaire bros.

      • Out of morbid curiosity, what are you doing that takes up all your ram? I'm at 32gb and find that only under rare circumstances do I even need the 16gb. I only bought the 32gb to future proof, as the mobo itself can handle 128gb.

        • VM workloads. Specifically, I've decided I'm done letting Microsoft destroy my windows install that I need for things, so I have virtualized it and am passing through the GPU, audio device, and a USB hub into the VM using virtualized IO.

          They release some hum-dinger of a patch that screws Windows up, and I restore my automatic snapshot and block the patch from installing.

          There's extra memory overhead of running Debian and the IOMMU setup. And Windows is a fucking hog. And I use a second GPU on a display t

          • Ahh okay, that makes sense. Before I bought my most recent laptop, I only had access to Windows 11 via VM.

            I rarely use it or the laptop booted to Windows. I did a lot of research to ensure I could install Linux on it and everything would be peachy. It is. My Xubuntu setup idles at something like 1.2gb ram used, whereas the Windows/Dell stuff combined was sucking down 8gb!!! of ram. The poor laptop only has 16gb, which was bought in the eyes of a non-Windows user and light gaming every blue moon. I was flabb

    • I'm of the same opinion. In August of 2023 I paid $69.99 for a 32gb (2x16) kit of Corsair ddr4 3600. The most expensive part of that build was my ASUS Dual GeForce RTX 4060 OC Edition 8GB GDDR6 that I paid $299 for and now that exact same card is listed at $479.

      I'm very happy my system is overkill for everything I use it for. The last game I bought was BG3, and it runs beautifully. I do sort of wish I had the 16gb video card so I could play with a local generative AI model but even that would just barely be

  • As reported elsewhere, the long lead times have made some data centers obsolete at startup, or sometimes before construction is complete. For example, Oracle and OpenAI abandoned expansion of a data center in Texas, because the expansion as planned would not be ready for new NVidia GPUs. TFA doesn't mention the tremendous water consumption by data centers, a statistic AI companies strive to hide, and which many water-constrained communities rightly use to oppose data center construction.
  • If you have to wait years for transformers, this creates a huge market for electronic converters for high and medium voltage AC. Their production can be ramped up much faster. The problems in designing such electronic transformers are not much different from HVDC/HVAC converters; it's obviously possible to do it reliably.

    • Not really. I was in a factory in Shenzhen 25 years ago that was able to spit out ~20 5MVA transformers per week with basic equipment and minimal staff. The only components brought in from outside were bolts and insulators. Doubling capacity only required equipment worth roughly the cost of one transformer.

      Solid state solutions are an order of magnitude more complex.
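Annualizing the factory throughput in the grandparent comment gives a surprisingly large figure; illustrative arithmetic only, with the per-week numbers taken from that anecdote:

```python
# Rough annualized output of the transformer factory described above
# (~20 transformers of 5 MVA each per week, per the anecdote).
units_per_week, mva_per_unit, weeks_per_year = 20, 5, 52

annual_mva = units_per_week * mva_per_unit * weeks_per_year
print(f"~{annual_mva:,} MVA per year from one modestly equipped factory")
# prints: ~5,200 MVA per year from one modestly equipped factory
```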

      • Two orders of magnitude less copper. The stacked modules aren't complex enough to make a dent in existing PCB/pick&place/component market.

        Complex to engineer, but the complexity in implementation is miniaturised and mass manufacturable.

  • If a bunch of data centers are being canceled I would hope that ram prices will ease somewhat.

  • This is interesting. Microsoft is currently preparing the site for a large data center on the border of my city and San Jose. In an area that has had power issues for years and is undergoing yet another dry winter. Will it actually get built and come online and, if so, will it be consuming an outsized share of resources such that we peasants revolt when we have brown outs when the summer temps break 100 or we have a cap placed on water usage? Stay tuned.

    • I wouldn't be surprised if we are going to see caps this year in California anyway. The snowpack is quite low this year and it's not like water catchment systems are mandatory for new construction like solar is. They should be, though. It's a lot cheaper to add a cistern and a catchment system than it is to do solar.

  • Thought exactly nobody.
  • Fuck techbro electricity, water, money, DRAM, and GPU leeches. Bubble is finally popping.
