Security Software The Military United States Hardware Technology

Cray Is Building a Supercomputer To Manage the US' Nuclear Stockpile (engadget.com) 65

An anonymous reader quotes a report from Engadget: The U.S. Department of Energy (DOE) and National Nuclear Security Administration (NNSA) have announced they've signed a contract with Cray Computing for the NNSA's first exascale supercomputer, "El Capitan." El Capitan's job will be to perform essential functions for the Stockpile Stewardship Program, which supports U.S. national security missions in ensuring the safety, security and effectiveness of the nation's nuclear stockpile in the absence of underground testing. Developed as part of the second phase of the Collaboration of Oak Ridge, Argonne and Livermore (CORAL-2) procurement, the computer will be used to make critical assessments necessary for addressing evolving threats to national security and other issues such as non-proliferation and nuclear counterterrorism.

El Capitan will have a peak performance of more than 1.5 exaflops -- which is 1.5 quintillion calculations per second. It'll run applications 50 times faster than Lawrence Livermore National Laboratory's (LLNL) Sequoia system and 10 times faster than its Sierra system, which is currently the world's second most powerful supercomputer. It'll be four times more energy efficient than Sierra, too. The $600 million El Capitan is expected to go into production by late 2023.
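For scale, the quoted figures can be sanity-checked against public peak numbers. (The Sierra and Sequoia peaks below are assumed from the public Top500 listings, not from the article.)

```python
EXAFLOP = 1e18                     # 10^18 floating-point operations per second

el_capitan_peak = 1.5 * EXAFLOP    # "1.5 quintillion calculations per second"
sierra_peak = 125e15               # ~125 petaflops Rpeak (assumed from Top500)
sequoia_peak = 20e15               # ~20 petaflops Rpeak (assumed from Top500)

# Peak-to-peak ratios land in the same ballpark as the quoted application
# speedups (10x Sierra, 50x Sequoia), which are measured on real workloads
# rather than theoretical peak:
print(el_capitan_peak / sierra_peak)   # 12.0
print(el_capitan_peak / sequoia_peak)  # 75.0
```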
"NNSA is modernizing the Nuclear Security Enterprise to face 21st century threats," said Lisa E Gordon-Hagerty, DOE undersecretary for nuclear security and NNSA administrator. "El Capitan will allow us to be more responsive, innovative and forward-thinking when it comes to maintaining a nuclear deterrent that is second-to-none in a rapidly-evolving threat environment."
This discussion has been archived. No new comments can be posted.

Comments Filter:
  • by thereddaikon ( 5795246 ) on Wednesday August 14, 2019 @08:11AM (#59085620)

    I wonder if this will be Epyc powered like Frontier? I assume so since Cray has formed a strategic partnership with AMD.

    • I don't understand why nuclear stewardship would require so much (or any) computing power..
      • Development of new weapons. Since the ban on testing.

      • SkyNet requires a lot of processing power.
      • to run war games!

      • Nuclear weapons are constantly degrading. The fissile material is slowly transmuting into "daughter" elements, therefore changing how the bomb will behave. The only way to test this, absent actually blowing them up, is simulation. Lots and lots of simulation.

        • by caveat ( 26803 ) on Wednesday August 14, 2019 @09:27AM (#59085882)

          Pu-239 has a half-life of around 24,000 years - yes, it is decaying, but over the couple of hundred years we're realistically going to have bombs, it's such a small fraction as to be negligible in terms of nuclear physics wrt the actual explosive fission process. The fiddly precise bit is getting the material evenly compressed to a supercritical state; after that it's not sensitive to a bit of impurity. Bomb pits are already alloyed with 3% gallium for stability reasons; if that doesn't affect the behavior, a few hundredths of a percent of decay product isn't going to matter a whit (IIRC Pu-239 decays to also-fissile U-235 anyway).

          The bigger problem is in the rest of the components, in particular the high explosives...they tend not to age well over the course of a few decades; their decay process is known and monitored, then modeled in the computers. There are probably also issues with stuff like the X-ray reflectors and interstage materials decaying, and maybe the lithium deuteride gets stale too, but the fissile material is for practical purposes the only completely stable part of the damn thing.
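The Pu-239 negligibility point above checks out with first-order decay arithmetic. A quick sketch, using the commonly cited 24,110-year half-life and nothing but N(t) = N0 * 2^(-t / T_half):

```python
PU239_HALF_LIFE_YR = 24_110  # commonly cited Pu-239 half-life

def fraction_decayed(years, half_life=PU239_HALF_LIFE_YR):
    """Fraction of the original Pu-239 that has decayed after `years`."""
    return 1.0 - 0.5 ** (years / half_life)

# Over a century of service life, well under 1% of the pit has decayed:
print(f"{fraction_decayed(100):.4%}")   # 0.2871%
```

Even out to a couple of hundred years the decayed fraction stays below one percent (closer to tenths than hundredths of a percent, but still negligible for the fission process).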

          • by Myrrh ( 53301 )

            Yes, but the tritium, which is used as a fusion booster in many designs, has a half-life of 12.3 years. Hence why DOE/NNSA was looking for a site to produce tritium using accelerators or other methods about twenty years ago.

            • Oh, the tritium is absolutely a very limited lifespan component, but it's a known factor that's designed to be easily replenished/replaced...not something you'd need an exaflop-scale system to simulate.

              • by Myrrh ( 53301 )

                I vaguely recall that they wanted, also, to simulate the weapons' performance given various decay states of the tritium (that is, without replenishment). That might or might not require lots of computing power.

                • Off the top of my head, I'd say that would be some (relatively) simple math; x percent of the tritium is decayed to He-3, which is definitely a bad thing since it's a neutron absorber, but also you might be able to just factor that into the math describing the fusion process (https://nuclearweaponarchive.org/Nwfaq/Nfaq4-4.html#Nfaq4.4) and get a good idea of yield reductions or outright fizzle given a certain decay level of the tritium.

                  Or I absolutely could be talking out of my only slightly informed ass an
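That back-of-envelope "simple math" might be sketched like this. (The 12.32-year half-life is the standard figure; real yield modeling would couple the He-3 buildup to the fusion-burn equations in the linked FAQ, which this toy calculation ignores.)

```python
T_HALF_YR = 12.32  # tritium half-life in years

def tritium_remaining(years):
    """Fraction of the original tritium still present after `years`."""
    return 0.5 ** (years / T_HALF_YR)

def he3_fraction(years):
    """Fraction converted to He-3, a neutron absorber that hurts the boost."""
    return 1.0 - tritium_remaining(years)

# One half-life without replenishment and half the boost gas is poison:
print(f"{he3_fraction(T_HALF_YR):.2f}")   # 0.50
```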

                  • by Myrrh ( 53301 )

                    I don't really know either. Physics was never my strong suit.

                  • by boskone ( 234014 )

                    Amongst many other things, the increased neutron flux could also degrade some of the surrounding materials differently, either atomically or physically (embrittlement), amongst thousands of other factors I'm not even considering.

                    It's one of those issues that seems simple on the surface, but is probably pretty intricate once you dig into it.

          • by Strider- ( 39683 )

            It may be stable atomically, but it's not necessarily stable phase-wise. Plutonium has a number of different solid phases depending on various factors and how it's alloyed. These can vary widely in density, a fact that is likely leveraged in weapons design.

            • Absolutely - the 3% gallium very effectively stabilizes the δ phase, which is the ideal allotrope for pit production since it's easily machinable, ductile, fairly stable, and under relatively mild implosion pressure it transitions to α phase which is significantly denser, increasing the implosive efficiency big-time. It does also transition to α at something like 120°C, so there is some concern with inadvertent overtemperature as the weapon ages, but that's more a metallurgic

      • by AHuxley ( 892839 )
        To ensure the existing metals in bunkers all over the world get inspected before they fail.
        Then the US mil can send in contractors to repair and replace parts.
        The stockpile in US bases all over the EU is ready for use for decades more.
        A lot of calculations per device, per decade. The US has a long list of devices to look after.
      • Multiple reasons. Nuclear bombs have been constantly refurbished over the last few decades, and computers are necessary to see if the replacements will work. Conventional explosive triggers have been changed to more stable, "inert" explosives. A substance known as "FOGBANK" was hard to produce, and computer simulations at the time weren't powerful enough to see if a replacement would work. We are also embarking on a redesign and refurbishment process of our nuclear stockpile so that'll require even more computing to validate design changes.

      • Weapons development as others have said but also more and more accurate modeling of existing warheads to know how they would perform in certain cases. These kinds of things used to be done by hand but as you can imagine the more power you can put behind it the higher resolution the model. They also get used for nuclear research which helps with reactor designs and other things related to heavy atoms and radiation.

    • And we were discussing QC yesterday and its "imminent," any-day-now, on-the-horizon deployment, and then we roll out a goddamn classical computer.

  • ... to the Russians again.

    Maybe we'll actually make it to the Moon.

    • I seem to recall reading, in the very late 1990s or early 2000s, ON THIS VERY SITE, about the Russians (or some subset) managing their nuclear weapons using a super state-of-the-art system, known for its reliability and security . . .

      Excel.

      Many Windows systems still ran on DOS.
  • by hughbar ( 579555 ) on Wednesday August 14, 2019 @08:42AM (#59085712) Homepage
    So what do they need all that horsepower for? There's a little one there, oh, and another one over there etc. etc. (see Terry Gilliam's Jabberwocky, stocktaking scene: https://en.wikipedia.org/wiki/... [wikipedia.org]).
    • by ceoyoyo ( 59147 )

      It's to simulate explosions. Typical QA would involve periodically taking a bomb or two and seeing if they still make a satisfying kaboom. Since that's not allowed, you use a giant computer to simulate it instead.

      Since such simulations have been happening for decades, one might wonder why you need the biggest computer in the world to do them. A cynic might suspect that it would be useful for developing improvements, as well as maintenance.
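A zeroth-order sense of why "simulating a kaboom" is expensive: the chain reaction is a branching process, and the interesting physics lives in roughly 10^24 fission events over a few dozen neutron generations. (The numbers below are rough, unclassified textbook figures, not anything from the article.)

```python
def neutron_population(k_eff, generations, n0=1):
    """Expected neutron count after `generations` steps of a branching
    chain where each neutron yields k_eff successors on average."""
    return n0 * k_eff ** generations

# With k_eff ~ 2 it takes only ~80 generations (each ~10 nanoseconds)
# to reach the ~1e24 fissions behind a kiloton-range yield:
print(f"{neutron_population(2.0, 80):.2e}")   # 1.21e+24
```

Real stockpile codes of course track neutron transport, hydrodynamics, and equations of state in 3D rather than a scalar recurrence, which is where the exaflops go.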

    • Cray's bottom line.

      See, in America the way we do socialism is with our military. Nobody'll pay for it otherwise. Then you hope some of it trickles down.
  • Really? (Score:4, Funny)

    by flippy ( 62353 ) on Wednesday August 14, 2019 @08:44AM (#59085724) Homepage
    "El Capitan"? "Sierra"? Are they running MacOS on these supercomputers? Hackintosh Supercomputers?
    • by Shag ( 3737 )

      OS X 10.11, yeah -- but I wouldn't figure Hackintosh. Since the top entries on the Top500 list these days use literally millions of CPU cores, I was thinking rows and rows of impressive-looking racks with CRAY on the doors, but when you open the doors there are shelves and shelves of Mac Minis (late 2014, the top-end model with the 3.0GHz Core i7-4578U dual-core CPU).

    • Re:Really? (Score:4, Interesting)

      by tlhIngan ( 30335 ) <slashdot&worf,net> on Wednesday August 14, 2019 @01:31PM (#59086672)

      "El Capitan"? "Sierra"? Are they running MacOS on these supercomputers? Hackintosh Supercomputers?

      Actually, the early Crays did run MacOS, in a way - they used a Mac to manage them (schedule and load jobs into them, etc.). Basically the Mac provided the UI, because the Cray itself was for pure computation, not for wasting time on an operating system or a monitor or such. So they had a Mac to provide all the user interaction, networking and other things.

      There was a joke that Apple used a Cray to help design the new Macs, while Cray used Macs to drive them. (And all the associated jokes they had like it was so fast, it could run an infinite loop in a few seconds).

    • "El Capitan"? "Sierra"? Are they running MacOS on these supercomputers? Hackintosh Supercomputers?

      Yes, they need the compute power because they are seeing the spinning beach ball.

    • They're actually running Radius Rocketshare [lowendmac.com].

  • by hcs_$reboot ( 1536101 ) on Wednesday August 14, 2019 @08:45AM (#59085730)
    or wouldn't the right and powerful algorithm from the right software be sufficient?
    • I'm wondering too. If this were the 1950s or '60s, sure, but a beefy desktop can handle all this now. Then I read the marketing wank and cost...
      "El Capitan will allow us to be more responsive, innovative and forward-thinking when it comes to maintaining a nuclear deterrent
      that is second-to-none in a rapidly-evolving threat environment." The $600 million El Capitan is expected to go into production by late 2023.

      It's just a boondoggle for the military industrial complex.
  • Supercomputers (Score:4, Interesting)

    by JBMcB ( 73720 ) on Wednesday August 14, 2019 @09:21AM (#59085858)

    I used to really be into supercomputers when I was a kid. All the weird architectures and instruction sets. Custom processors to accelerate specific algorithms. Weird OSes that ran on top of other weird OSes. Crazy ECL hardware cooled with Fluorinert. Walls of blinking LEDs and cooling systems with visible waterfalls.

    Now they are racks of AMD or Intel blades with nVidia GPUs running customized Linux distributions. The only slightly interesting things are whatever tweaks to OpenMP they are using, or maybe the Myrinet or Infiniband backbones.

    Commoditization is great, but it does make things a bit less interesting.

    • The top two right now are running on POWER9 chips from IBM. It's a pretty interesting CPU. https://en.wikichip.org/wiki/i... [wikichip.org]

      • by bodog ( 231448 )

        The P9 will soon have the full attention of Red Hat engineering; that could make the chip even more appealing.

    • Re:Supercomputers (Score:5, Interesting)

      by godrik ( 1287354 ) on Wednesday August 14, 2019 @10:32AM (#59086102)

      I don't agree with you; there have been plenty of interesting things happening in the field. A lot of them are on the software side, but even on the hardware side there's fun stuff.

      Bluegenes are not that old and were pretty interesting.
      We saw the rise (and fall) of XeonPhi.
      The Chinese system is built around their own weird CPUs.
      There is plenty of POWER in the field and serious investigation of using ARM processors.

      On the network side, dragonfly is the new hot thing and that's new and weird. We are also seeing in-network collectives that could change the performance (and therefore design) of many applications.

      The innovations on the storage side are pretty impressive as well. Most recently, burst buffers have been changing the game. And being able to support checkpointing has become a significant problem for the storage system.

      On the software side, there have been lots of problems that needed to be solved. How do we deal with accelerators? How do we deal with heterogeneous processors? How do we build performance-portable applications? How do we account for dark silicon? How do we manage power budgets to maximize performance? How do we make applications easier to program for non-HPC experts? How do we move past MPI+X? How do we deal with reliability issues that arise from running on 100,000s of processing units? How do we make checkpoint/restart transparent to the programmer?
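On the checkpoint/restart point: the idea, stripped of all the HPC machinery, is just "persist enough state to resume." A toy single-process sketch (the file name and step loop are made up for illustration; real systems do this across thousands of nodes, often via burst buffers):

```python
import os
import pickle

CKPT = "state.pkl"  # hypothetical checkpoint file

def run(steps, ckpt=CKPT):
    """Do `steps` units of work, checkpointing after each one so a
    crashed or pre-empted job can resume where it left off."""
    if os.path.exists(ckpt):            # resume from checkpoint
        with open(ckpt, "rb") as f:
            i, total = pickle.load(f)
    else:                               # fresh start
        i, total = 0, 0
    while i < steps:
        total += i                      # stand-in for real work
        i += 1
        with open(ckpt, "wb") as f:     # checkpoint this step
            pickle.dump((i, total), f)
    return total
```

Making exactly this pattern transparent to the programmer (so application code doesn't have to hand-roll it) is the open problem the comment above is pointing at.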

      I think the HPC community has delivered plenty of interesting things.

      I used to really be into supercomputers when I was a kid. All the weird architectures and instruction sets. Custom processors to accelerate specific algorithms. Weird OSes that ran on top of other weird OSes. Crazy ECL hardware cooled with Fluorinert. Walls of blinking LEDs and cooling systems with visible waterfalls.

      Now they are racks of AMD or Intel blades with nVidia GPUs running customized Linux distributions. The only slightly interesting things are whatever tweaks to OpenMP they are using, or maybe the Myrinet or Infiniband backbones.

      Commoditization is great, but it does make things a bit less interesting.

      We're all enjoying a "less interesting" life. What was once something only for the extremely wealthy is now something we find in the trash because something better has come along.

      I'll often comment on how we are living in the future. What was thought nearly impossible only a decade or three ago is now a commodity. Maybe some of this is still priced out of the range of many but it's in the range of a good sized portion of the public.

      The idea of a video phone was thought of something someone would have to

    • by Myrrh ( 53301 )

      I've got no problem with it. Standardization and commoditization saves the taxpayers lots of money.

  • by Anonymous Coward
    with a coke and side of fries
  • ... call it W.O.P.R!!!

  • Cray marketing is clearly a wholly-owned subsidiary of Apple, Inc.

  • We spend money on ultrafast computers, and on the physicists to use them, rather than conducting nuclear tests with all their proliferation, environmental and geopolitical downsides.

    I wish more government programs had this return on investment.

  • Comment removed based on user account deletion

