Supercomputing United Kingdom Hardware Technology

16-Teraflops, £97m Cray To Replace IBM At UK Meteorological Office

Memetic writes: The UK weather forecasting service is replacing its IBM supercomputer with a Cray XC40 containing 17 petabytes of storage and capable of 16 TeraFLOPS. This is Cray's biggest contract outside the U.S. With 480,000 CPUs, it should be 13 times faster than the current system. It will weigh 140 tons. The aim is to enable more accurate modeling of the unstable UK climate, with UK-wide forecasts at a resolution of 1.5km run hourly, rather than every three hours, as currently happens. (Here's a similar system from the U.S.)

16-Teraflops, £97m Cray To Replace IBM At UK Meteorological Office

  • by Anonymous Coward on Wednesday October 29, 2014 @03:48AM (#48258547)

    16 peta not tera FLOPS

    • We need a law against journalists using numbers. It would be less misleading to have them report as:

      "[...]a Cray XC40 containing much storage and capable of a large number of flops."

    • by tibit ( 1762298 )

      Yeah, I thought what the heck? I could probably stuff 16 teraflops worth of compute power in a couple desktop machines, easy.

      • by Haven ( 34895 )

        Less than 20 grand gets you a perfectly tuned 16 teraflop (single precision) "super computer".

    • I was going to say that a well-setup 2U hybrid CPU/GPU server would be capable of more than 8 TFLOPS (double precision). lol
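
      As a rough sanity check of these desktop-scale claims, here is a minimal Python sketch; the per-GPU throughput and price are assumptions about typical 2014-era consumer hardware, not figures from the article or this thread:

          # Back-of-the-envelope: how much 2014-era desktop hardware reaches
          # 16 single-precision teraflops? The per-device numbers below are
          # rough assumptions, not figures from the article.
          GPU_SP_TFLOPS = 5.0   # assumed peak for one high-end 2014 consumer GPU
          GPU_PRICE_GBP = 700   # assumed street price per card

          target_tflops = 16.0
          gpus_needed = target_tflops / GPU_SP_TFLOPS
          print(f"~{gpus_needed:.1f} GPUs, roughly £{gpus_needed * GPU_PRICE_GBP:,.0f} in cards")
          # A handful of consumer GPUs gets to 16 teraflops; 16 *peta*flops is
          # a factor of a million beyond that, which is the point of the thread.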

  • #cuethedeniers #poisoningthewell #climatedenialiscreationism #dontquestionclimatemodels #dontmentionthehiatus

    • I find it ironic that on one side they deny climate change, while on the other, almost the same people build the ark.

  • It will spend its days predicting its own global warming impact
  • by serviscope_minor ( 664417 ) on Wednesday October 29, 2014 @04:03AM (#48258583) Journal

    16 TFlops ain't much to write home about. 480,000 CPUs? What are they? 6502s?

    Turns out it's 16PFlops according to the BBC.

    • Re: (Score:2, Funny)

      by Anonymous Coward

      What, are you suggesting they fucking read the articles they're going to post? Or more absurdly yet, be broadly informed about the general goings on in technology?

      One might even imagine that this headline, the weekly articles about the latest multi-teraflop figures from single GPUs, and some working synapses might have raised a SIGREDFLAG or something.

    • And neither mentions the CPU architecture, but if you go to the product brochure [cray.com] then you learn that they're Intel Xeon E5s (which doesn't narrow it down much). Interesting that they're using E5s and not E7s, but perhaps most of the compute is supposed to be done on the (unnamed, vaguely referenced) accelerators.
      • Interesting that they're using E5s and not E7s

        Probably something to do with yields and availability - buying 480,000 CPUs in one go is going to cause consternation, regardless of who your supplier is :) Getting 480,000 E5s in half the time it would take to get 480,000 E7s means you have less liability on the books for the duration (you have to hold delivered stock and down payments as liabilities), and a better cash flow.

        • by tibit ( 1762298 )

          It might also be that Intel has a bit too much capacity for E5s, and needs to utilize it. Unused semiconductor capacity is costly. Now don't get me wrong: this might simply be a case of more efficient capacity being available. E5s and E7s may be all made on the same equipment, but if said equipment makes E5s at half the cost of E7s, and you can sell them for more than half the cost of E7s, you really have more capacity in terms of what's sensible to use for ROI.

      • E7 is useful for areas where extremely large memory per core is mandatory (some parts of HPC)

        In general, E5 strikes the balance between having adequate amounts of cache and SMP interconnect, compute capability (Haswell E5 is available, E7 is still Ivy Bridge, AVX2 being a big thing there), and per-unit cost (E7 carries a huge premium for its benefits, most of which are generally not needed in HPC of this scale).

        Even in places where you do see E7, it's usually in a special portion of the cluster for big-memory work.

    • by grub ( 11606 )
      Don't tell the UK Meteorological Office this. The trick to getting those TFLOPs on those 480,000 6502 CPUs is that Cray benchmarked them all with nothing but NOP instructions.
      • by tibit ( 1762298 ) on Wednesday October 29, 2014 @09:41AM (#48260045)

        Of course the mention of 6502 was a joke, but let's see how close one could get. Let's say that you could get one FLOP in 1000 cycles on a legacy 6502. With 2MHz clock, we're talking 2kFLOPs per chip. With half a million of them, we get 1GFLOP. That's still 7 orders of magnitude away from where one needs to be... This tells us, indirectly, that the desktop processors we currently have are essentially the realm of 1980s science fiction :)
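
        The arithmetic above, redone as a short Python sketch (the 1000-cycles-per-FLOP figure is the comment's own assumption, and 16 PFLOPS is the corrected target from the BBC article):

            import math

            # Back-of-the-envelope 6502 estimate from the comment above.
            cycles_per_flop = 1000        # assumed cost of one software float op on a 6502
            clock_hz = 2_000_000          # 2 MHz part
            chips = 480_000               # CPU count quoted for the new Cray

            flops_per_chip = clock_hz / cycles_per_flop   # 2 kFLOPS per chip
            total_flops = flops_per_chip * chips          # ~1 GFLOPS for the whole farm
            target_flops = 16e15                          # 16 PFLOPS

            print(f"6502 farm: {total_flops:.2e} FLOPS")
            print(f"shortfall: ~{math.log10(target_flops / total_flops):.1f} orders of magnitude")
            # Roughly 7 orders of magnitude short, matching the estimate above.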

        • by itzly ( 3699663 )
          Exactly. The Cray Y-MP that I was drooling over in 1988 had processors with a 167 MHz clock and 512MB of main memory. Now you can fit a faster CPU in your shirt pocket.
  • PetaFLOPS ffs (Score:2, Informative)

    by Anonymous Coward

    Slashdot is getting worse by the minute.

  • by Greyfox ( 87712 ) on Wednesday October 29, 2014 @04:19AM (#48258631) Homepage Journal
    Those guys are still around? I thought they were all eaten by dinosaurs. How many times have they gone bankrupt now?
    • Re:Cray? (Score:4, Informative)

      by Required Snark ( 1702878 ) on Wednesday October 29, 2014 @04:36AM (#48258667)
      It started out as Tera Computer Company.

      Cray Computer [wikipedia.org]

      Cray Inc. is an American supercomputer manufacturer headquartered in Seattle, Washington. The company's predecessor, Cray Research, Inc. (CRI), was founded in 1972 by computer designer Seymour Cray. Seymour Cray went on to form the spin-off Cray Computer Corporation (CCC), in 1989, which went bankrupt in 1995, while Cray Research was bought by SGI the next year. Cray Inc. formed in 2000 when Tera Computer Company purchased the Cray Research Inc. business from SGI and adopted the name of its acquisition.

      • The current Cray probably only keeps the logo of the original Cray. There was an additional purchase besides Tera that is not listed there.

        • The "original cray?" I'd be surprised if even the power plug is the same type. 1972 is a long time ago, what carryover would you expect? But besides big gangs of commodity CPUs (which are actually not just a beowulf cluster, by the way), they still develop interesting new architectures [wikipedia.org].
            • I thought that architecture from Tera had been cancelled long ago. I never heard of many sales from it. It was interesting, but if you read the description of what it does and think about how a modern GPU works, you'll see you're probably much better off buying COTS GPUs.

            • It lives on [wikipedia.org]. Anyways, I am mainly giving them props for making the effort. To the extent they are "just" making computers with commodity CPUs, it's because that's what works out best, not because they're unimaginative or unambitious.
    • Twice, I think (Cray Laboratories and Cray Computer Corporation) though the name has been passed around a bit.

      The current Cray (which was formerly known as the Tera Computer Company) bought up the remnants of Cray Research in 2000 from Silicon Graphics, who had themselves bought them up in 1996, and appropriated the name.

  • by Attila the Bun ( 952109 ) on Wednesday October 29, 2014 @04:35AM (#48258661)
    I miss the days when supercomputers looked super. This one [cray.com] looks like a row of drinks machines.
  • by ssam ( 2723487 ) on Wednesday October 29, 2014 @04:46AM (#48258677)

    As a British nerd my 2 favourite topics of conversation are the weather and super computers, so this is exciting news.

  • by MoonlessNights ( 3526789 ) on Wednesday October 29, 2014 @04:50AM (#48258697) Homepage Journal

    I was interested in what the change-over actually is, what is driving the performance increase, and how old the existing system is. This information seems to be missing.

    What is included actually sounds a little disappointing:
    13x faster
    12x as many CPUs
    4x mass (3x "heavier")

    I would have thought that there would be either a process win (more transistors per unit area and all that fun) or a technology win (switching to GPUs or other vector processors, for example) but it sounds like they are building something only marginally better per computational resource. I suppose that the biggest win is just in density (12x CPUs in 4x mass is pretty substantial) but I was hoping for a little more detail. Or, given the shift in focus toward power and cooling costs, what impact this change will have on the energy consumption over the old machine.

    Then again, I suppose this isn't a technical publication so the headline is the closest we will get and it is more there to dazzle than explain.
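
    Working those ratios through (a tiny Python sketch using only the figures quoted in this comment, which may themselves be off given the headline error):

        # Ratios implied by the comment's figures: 13x faster, 12x as many
        # CPUs, 4x the mass. These are the commenter's numbers, not anything
        # taken directly from the Met Office announcement.
        speedup, cpu_ratio, mass_ratio = 13.0, 12.0, 4.0

        per_cpu_gain = speedup / cpu_ratio     # ~1.1x faster per CPU
        density_gain = cpu_ratio / mass_ratio  # ~3x more CPUs per tonne
        print(f"per-CPU speedup: {per_cpu_gain:.2f}x, CPU density: {density_gain:.1f}x")
        # i.e. nearly all of the headline speedup comes from scale and packaging
        # rather than each processor getting dramatically faster.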

    • Re: (Score:2, Insightful)

      by Anonymous Coward

      The speed of these large supercomputers is based less on the processors and more on the networking between the nodes. Getting a roughly linear response in performance at 13x the number of processors is a pretty impressive capability.
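
      To illustrate why the interconnect matters, here is a toy strong-scaling model in Python; the communication fractions are invented for illustration and say nothing about how the Met Office system actually behaves:

          # Toy Amdahl-style model: a fixed fraction of each timestep is spent
          # on inter-node communication and does not speed up with more nodes.
          def speedup(n_nodes, comm_fraction):
              """Ideal speedup degraded by a serial/communication fraction."""
              return 1.0 / (comm_fraction + (1.0 - comm_fraction) / n_nodes)

          for frac in (0.001, 0.01, 0.05):
              print(f"comm fraction {frac:.3f}: speedup at 13x nodes = {speedup(13, frac):.2f}x")
          # Even a few percent of time lost to communication visibly eats into
          # the 13x you might hope for, hence all the attention on the network.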

  • There have been times where the forecast for 30 minutes away was wrong. Half a million chips can't be cheap; I thought we were in a time of austerity, and this clearly doesn't benefit the UK economy. Money would have been better spent researching better methods of forecasting rather than trying, and failing, to brute-force weather forecasting.

    • by jcupitt65 ( 68879 ) on Wednesday October 29, 2014 @05:13AM (#48258765)

      UK weather forecasts have become much more accurate over the last few decades as the computers that do the forecasting have become more powerful. This new machine will continue that trend.

      For many years we have verified our forecasts by comparing forecasts of mean sea-level pressure with subsequent model analyses of mean sea-level pressure. These comparisons are made over an area covering the North Atlantic, most of western Europe, and north-eastern parts of North America. From this long-term comparison an average forecast error can be calculated.

      The graph shows how many days into a forecast period this average error is reached compared to a baseline in 1980. This graph shows that a three-day forecast today is more accurate than a one-day forecast in 1980.

      http://www.metoffice.gov.uk/media/image/7/2/capIndPlot-600.jpg [metoffice.gov.uk]
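
      The verification idea described above, reduced to a minimal Python sketch with synthetic data; this is only an illustration of the method, not the Met Office's actual verification suite:

          import numpy as np

          # Compare a forecast field of mean sea-level pressure (MSLP) against
          # the subsequent model analysis and reduce it to one error number.
          rng = np.random.default_rng(0)
          analysis = 1013.0 + rng.normal(0.0, 8.0, size=(60, 90))          # fake MSLP grid, hPa
          forecast = analysis + rng.normal(0.0, 2.0, size=analysis.shape)  # fake 3-day forecast

          rmse = np.sqrt(np.mean((forecast - analysis) ** 2))
          print(f"RMSE vs analysis: {rmse:.2f} hPa")
          # Tracking this average error over decades, and finding the lead time
          # at which it crosses a fixed threshold, gives curves like the one linked.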

      • Hmmm. I wonder if you are just confirming what the parent comment said. The sheer linearity of that graph indeed hints that the improvements have mostly happened by just throwing more and more raw CPU power into the task, without breakthroughs in making the algorithms more accurate or efficient.
        • The hardware is the cheap part - coming up with better algorithms is akin to mathematical breakthroughs these days...

        • Even if it's true that the algorithms are pretty much unchanged, the fact that accuracy gets better when you throw resources at the problem probably means the algorithm is working as intended.

        • by tibit ( 1762298 )

          In the end, you have to do those additions and multiplications; there's nothing to be more efficient about. All those computations run on a grid, and the elements in the grid can approximate effects of various orders (think polynomial orders). Up to a certain point, increasing the order of individual elements decreases the net amount of computation done, since the increase in the number of computations within an element is more than offset by the decrease in the number of elements needed. At a certain point, you

          • by tibit ( 1762298 )

            I should probably say that it speaks to the incredible flexibility and scalability of our grid-based methods that they even can be scaled in such a fashion. Some numerical methods simply don't scale at all, and throwing more computational power at them gives slower-than-linear increases in accuracy or decreases in computation time. For example, good luck with scaling up the grade-school long multiplication, or with single-polynomial approximations that span more than a dozen points...
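
            A toy cost model of the order-versus-element-count trade-off described above, in Python; the error and cost exponents are illustrative assumptions, not numbers from any real weather code:

                # Assume error ~ h**p for element size h and polynomial order p,
                # and per-element cost growing like p**4 (dense local operators).
                def elements_needed(target_err, p):
                    h = target_err ** (1.0 / p)   # element size that reaches the target error
                    return (1.0 / h) ** 3         # elements to tile a unit 3-D domain

                def total_cost(target_err, p):
                    return elements_needed(target_err, p) * p ** 4

                for p in (1, 2, 4, 8, 16):
                    print(f"order {p:2d}: relative cost {total_cost(1e-6, p):.3g}")
                # Cost falls steeply from p=1 to moderate orders, then the p**4
                # per-element term takes over: the "up to a certain point" behaviour.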

            • by itzly ( 3699663 )
            Nature uses a grid-based algorithm to run the weather, so it shouldn't be a surprise that it works.
        • What precisely is wrong with throwing more and more raw CPU power into the task if it produces better results?

          Not everything in the world can be solved by clever software.

    • Eh, there's quite a bit of industry where even small improvements in weather forecasting are extremely valuable.

    • by ihtoit ( 3393327 )

      Cameron shot that one out of the water when he promised the victims of last winter's floods: "MONEY IS NO OBJECT."

      Some of us have long memories.

      • by tibit ( 1762298 )

        And how is improved forecasting going to really help here, when you get past the platitudes? Is the transportation and rescue infrastructure up to par to cope with the evacuations prior to a forecast flooding? I somehow doubt it is. But feel free to prove me wrong, of course.

        • by amck ( 34780 ) on Wednesday October 29, 2014 @12:09PM (#48261595) Homepage

          You do more than rescue. When you know the storm is coming, you prepare ahead of time. With 3-5 days' notice, councils and police cancel overtime. All vehicles are out of the garage/repair shop. Priority goes to getting sandbags in place and clearing all drains and drain covers.

          Then the general public are warned. Fewer events are on, or they are cancelled. Fewer people travel; everyone has been to the shops two days before.

          And away from storms, farmers know 5 days in advance what they're doing; warm, humid weather means preparing for blight, and so on. Less fertilizer and fewer pesticides are wasted.

          People still grumble about the bad weather, but harvests and lives aren't lost.

    • Re: (Score:3, Insightful)

      by Anonymous Coward

      this clearly doesn't benefit the UK economy.

      Oh look, it's another small-minded Little Englander with their "I don't understand it, it doesn't benefit me directly and it costs money, so it must be bad". See also HS2.

      It benefits the UK economy massively. It allows shipping & aircraft companies to make sensible decisions like "Should we have the snowplows on standby tonight?" and "Should we wait in port while that storm passes?". It benefits farmers by giving them more accurate long-range forecasts so t

    • £97 million is a pittance in a £731 billion budget [ukpublicspending.co.uk]. A Eurofighter Typhoon costs £110 million (marginal cost, not factoring in R&D).

    • I think the problem isn't the lack of centralized computing resources but rather the lack of distributed sensors. The UK is quite small. If they had spent the money on blanketing the country with sensors, they could give a much more localized and up-to-date weather forecast. I find that I get the best forecast for rain if I look at the radar map, but it requires quite a bit of time to read the map. I should be able to check on my phone, which has GPS anyway, and determine if it's going to rain, but
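
      A sketch of the phone-side lookup that comment imagines; the endpoint, parameters, and response fields below are entirely hypothetical and exist only to show how little client logic a GPS-plus-radar "will it rain on me" check would need:

          import json
          from urllib.request import urlopen

          # Hypothetical nowcast lookup: send the phone's GPS fix to a rain-radar
          # service and read back expected rainfall for the next hour. The URL and
          # response fields are invented for illustration; no such API is implied
          # by the article.
          def rain_soon(lat: float, lon: float, threshold_mm: float = 0.2) -> bool:
              url = f"https://example.invalid/nowcast?lat={lat}&lon={lon}"
              with urlopen(url) as resp:        # would need a real service here
                  data = json.load(resp)
              return data["rain_next_hour_mm"] >= threshold_mm

          # Usage (would fail against the placeholder URL above):
          # print("Umbrella" if rain_soon(51.5074, -0.1278) else "You're fine")
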
  • According to the BBC broadcasts yesterday, the system is £67 million worth of iron. Good deal if you can play Doom on it.

    Fuck-all good that's going to do, if they can't predict the weather with any accuracy on this planet - how the fuck did NASA do it in the 80's to predict the weather TWO WEEKS in advance for NEPTUNE so they knew where to point the cameras?? I'm pretty fucking sure my current LAPTOP has even more grunt than their entire server farm had...!

    • by tibit ( 1762298 )

      The grid that NASA used for those Neptune weather predictions probably had a cell the size of a large Earth country, or a small Earth continent. Neptune is fucking big.

  • by Anonymous Coward

    To tell which of the UK's three weather conditions (rainy, cloudy, or foggy) it's gonna be?

    • by Geeky ( 90998 ) on Wednesday October 29, 2014 @08:00AM (#48259263)

      You joke, but our weather has been getting less predictable. We had a fairly hot summer overall, but August was fairly wet and dull. September, on the other hand, was the driest on record, and October has mostly been warm. It's forecast to reach 20 degrees in London on Friday - if that was one day later, on the 1st of November, it would be challenging the record for the hottest November day recorded in the UK.

      Monday and Tuesday were warm enough to sit outside on my lunch break, today it's raining and chilly, tomorrow it's back up to 19 degrees apparently.

      • by itzly ( 3699663 )
        The fact that the weather is weird and different than what you're used to doesn't mean it isn't predictable for the next couple of days.
  • Now dat just Cray.

  • Better than 2000's ASCI White [top500.org], but worse than 2002's Earth Simulator [top500.org]. 13 years back to the past!

    Or maybe the actual performance is 16 PetaFLOPs, as the linked article states.

  • by Anonymous Coward

    ...I could predict the weather for the same price.

    Rain, rain and more rain.

  • Seriously, all the useless stats... weighs as much as 11 double-decker buses... I've asked the Met Office on Twitter but they ignore me. How much electricity is this gonna need?
  • by HyperQuantum ( 1032422 ) on Wednesday October 29, 2014 @07:54AM (#48259227) Homepage

    Does it run Linux?

    Not mentioned in TFA, and I haven't seen anyone talk about it yet in the comments here. Or maybe the answer is so obviously 'yes' that nobody even talks about it anymore.

    • by gewalker ( 57809 )

      Yes, it runs linux.

      Cray Linux® Environment (includes SUSE Linux SLES11, HSS and SMW software)
      Extreme Scalability Mode (ESM) and Cluster Compatibility Mode (CCM)

      system specs [cray.com]

  • predicting the weather will be a breeze ....

  • England will be covered in Cray skies. No Sun.
  • by PineHall ( 206441 ) on Wednesday October 29, 2014 @10:05AM (#48260305)
    The current NWS computer is only capable of 0.21 petaflops [blogspot.com]. There is an upgrade to bring it up to 0.8 petaflops. After Sandy (1.5 years ago) Congress gave money for a new computer, but nothing seems to be happening with that money. Sandy's forecast was good not because of the American forecasts but because of the European forecast. I believe American forecasts were wrong in predicting Sandy's direction because America lacks a decent supercomputer for forecasting.
    • How can this be? I was just informed on Slashdot, today, that government is the best system not only for weather forecasting, but for everything. Please tell me this is not true, and restore my faith in the U.S. federal government. I don't want to start questioning things...I don't know where that might end up, and that scares me.
    • And no source for his (Cliff Mass's) claim about performance. As far as I know, the US National Weather Service (NWS) in fact operates multiple clusters; I don't think they have any classic singular "supercomputers," but then again neither does anyone else anymore, not since the original Cray supercomputer heyday.

      The various models are run on several clusters AFAIK. I believe the North American Mesoscale (NAM) and Global Forecast System (GFS) may run on the primary operational cluster, but I was under the impression th

  • It's not just a simple mistake that anyone could have made. If you know anything about computers at all, the error in the title, when you read it, is about as subtle as someone smacking you across the face.

    If Soulskill doesn't know the difference between TFLOP and PFLOP, what is he doing posting articles here?
