Earth | Government | Supercomputing | Hardware

NWS Announces Big Computer Upgrade (161 comments)

riverat1 writes "After being embarrassed when the Europeans did a better job forecasting Sandy than the National Weather Service, Congress allocated $25 million ($23.7 million after sequestration) in the Sandy relief bill for upgrades to forecasting and supercomputer resources. The NWS announced that its main forecasting computer will be upgraded from the current 213 teraflops to 2,600 teraflops by fiscal year 2015, over a twelve-fold increase. The upgrade is expected to improve the horizontal grid resolution by a factor of three, allowing more precise forecasting of local weather features. Some of the allocated funds will also be used to hire contract scientists to improve the forecast model physics and to enhance the collection and assimilation of data."

Comments:
  • by Anonymous Coward on Monday May 20, 2013 @08:54AM (#43772631)

    http://www.ecmwf.int/services/computing/overview/supercomputer_history.html

    Europe: 70 TFLOPS after an upgrade due to be finished by early 2013 (Sandy was in Oct 2012), which they say will make it about 3 times the power of the computer it replaces, i.e. roughly 23 TFLOPS before; a partial upgrade during Sandy brought it to about 50 TFLOPS.

    USA: 213 TFLOPS, to be upgraded to 2,600 TFLOPS.

    So no: the Europeans made the better prediction with roughly 10-20% of the US's current supercomputing power, and about 2% of the proposed supercomputing power. This is just a subsidy to the supercomputer industry (and indirectly to US chip makers) at a time when the PC market is tanking. It has nothing to do with the garbage forecast the US produced; they just used a bad model.
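
    A quick sanity check on those percentages in Python, using only the figures quoted in this thread:

        us_now, us_proposed = 213, 2600     # TFLOPS, per the summary
        ecmwf_old, ecmwf_sandy = 23, 50     # TFLOPS, per the ECMWF history page
        print(f"{ecmwf_old / us_now:.0%}")        # ~11% of current US power
        print(f"{ecmwf_sandy / us_now:.0%}")      # ~23% of current US power
        print(f"{ecmwf_sandy / us_proposed:.1%}") # ~1.9% of the proposed upgrade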

    "Replacement of the second cluster will be completed in early 2013. Each cluster has 768 POWER7-775 servers connected by the IBM Host Fabric Interface (HFI) interconnect."

    "For the first time the processor clock frequency actually decreased, going from 4.7GHz to 3.83GHz, despite this each processor core has a theoretical peak performance 60% greater than that of the POWER6. For ECMWF's applications the system is about three times as powerful as the system it replaced.The first operational forecasts using this system were produced on 24 October 2012."

  • by Chrisq ( 894406 ) on Monday May 20, 2013 @09:02AM (#43772681)
    Also, though I would like to believe that the Europeans have superior algorithms, realistically the hurricane prediction could be a "one-off." We know that weather models can give widely different results based on small variations in starting conditions, assumptions, etc. Unless there is evidence that European forecasts are consistently better, it could just be luck. Given the known chaotic nature of storm systems, it wouldn't surprise me if the "butterfly effect" of the rounding errors from converting C to F were enough to displace a storm by hundreds of miles!
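
    To illustrate the parent's point about sensitivity to initial conditions, here is a minimal Python sketch using the classic Lorenz system (a toy stand-in for a real weather model, not anything NWS or ECMWF actually runs). A perturbation in the tenth decimal place grows until the two runs bear no resemblance to each other:

        import numpy as np

        def lorenz_step(s, dt=0.01, sigma=10.0, rho=28.0, beta=8.0/3.0):
            # One forward-Euler step of the Lorenz equations.
            x, y, z = s
            return s + dt * np.array([sigma * (y - x),
                                      x * (rho - z) - y,
                                      x * y - beta * z])

        a = np.array([1.0, 1.0, 1.0])
        b = a + np.array([1e-10, 0.0, 0.0])    # a tiny "rounding error" in x
        for _ in range(5000):                  # 50 model time units
            a, b = lorenz_step(a), lorenz_step(b)
        print(a)  # the two states are now macroscopically different...
        print(b)  # ...even though they started 1e-10 apart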
  • Re:Precise garbage (Score:1, Interesting)

    by Anonymous Coward on Monday May 20, 2013 @09:03AM (#43772691)

    The Europeans rightfully use Fortran for the numerical simulations, while the US hipster-doofus coderz use C and tons of flying pointers everywhere (essentially just sophisticated GOTOs). This creates code that is far, far less efficient. I wouldn't be surprised if much of their C codebase has been refactored with the use of automated tools several times.

  • by Anonymous Coward on Monday May 20, 2013 @10:35AM (#43773253)

    Posting anon to avoid burning bridges. NCAR has tried to develop better forecast models, but they've laid off experienced US staff to hire foreign H-1B grad students to write their software. I lost my 18+ year position as a software engineer at NCAR while helping to replace the 1980s crap they use to verify the accuracy of their models with modern software, using modern techniques. They have great hardware but very amateur software. I got a "we've lost funding for you" while they were hiring H-1Bs. I was often the only US-born software engineer on many of the projects I worked on at NCAR. The US could have much better forecasts, but the public wants everything on the cheap. The Europeans are doing better because they hire professionals to do development and charge for the output. IMO, American weather science is quickly becoming a joke.

  • by Miamicanes ( 730264 ) on Monday May 20, 2013 @10:45AM (#43773349)

    Supercomputing improvements are nice, but I personally want to see them get the cash to profoundly increase their NEXRAD backhaul (the data lines connecting their radar sites to the outside world).

    Right now, they're HORRIBLY backhaul-constrained. I believe most/all NEXRAD sites only have 256kbps frame relay to upload raw data to NOAA's datacenter for further processing & distribution to end users. As a result, they're forced to throw away data at the radar site to trim it down to size, and send it via UDP with little/no modern forward error correction. That's a major reason why glitches are common. In theory, the full-resolution data is archived to tape on site and CAN be mailed in if some major weather event happens that might merit future study, but the majority of collected data gets archived to tape, then unceremoniously overwritten a few days later. And most of the tapes that DO get sent in sit in storage for weeks or months before finally getting added to their near-line data archive.
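
    Back-of-the-envelope on just how tight 256 kbps is (the per-volume size below is my assumption, not an official NEXRAD figure):

        link_bps = 256_000             # frame-relay uplink, per the parent post
        scan_bytes = 10 * 1024**2      # assume ~10 MB for one trimmed volume scan
        minutes = scan_bytes * 8 / link_bps / 60
        print(f"{minutes:.1f} min")    # ~5.5 min -- essentially the whole
                                       # ~6-minute volume update cycle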

    The low backhaul bandwidth is made worse by the fact that the secondary radar products (level 3 radar, plus the derived products like TVS) get derived on site, and wedged into the SAME bandwidth-constrained data stream. That's part of the reason why level 3 data lags by 6-15 minutes... they send the raw level 2 data, and interleave the previous scan's level 3 data into the bandwidth that's left over. I believe the situation with TDWR sites is even worse... I think THEY actually have a single ISDN line, which is why level 2 data from them isn't available to the public at all.

    As I understand it, they can't use lossless compression for two reasons. First, since they have no error correction for the UDP stream, a glitch would take out a MUCH bigger chunk of data (possibly ruining the remainder of the tilt's data), and adding error-correction overhead would defeat the size savings from the compression. Second, the processors at the site are apparently pretty slow by modern computer standards, so compressing would add significant delay to getting the data out. When you're tracking a tornado running across the countryside at 50-60 mph, 30 seconds matters.
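
    The error-amplification argument is easy to see in miniature (a toy Python sketch; the real NEXRAD wire format is nothing like this). Corrupt one byte mid-stream in a compressed payload and everything after it is effectively lost, while the same glitch in an uncompressed stream costs you a single byte:

        import zlib

        raw = bytes(range(256)) * 4096            # ~1 MB of stand-in "radar" data
        packed = bytearray(zlib.compress(raw))
        packed[len(packed) // 2] ^= 0xFF          # flip one byte mid-stream
        try:
            zlib.decompress(bytes(packed))
        except zlib.error as e:
            print("rest of stream unrecoverable:", e)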

    If NWS had funding to increase their backhaul to at least T-1 speeds, they could also tweak their scan strategies a bit to make them more useful to others. For example, they could do more frequent tilt-1 scans (the lowest level, and the one that usually affects people most directly), and almost immediately upgrade all current NEXRAD sites to 1-minute updates for tilt 1, adding about a minute to a full volume scan but putting the data most immediately useful to end users out much more frequently.

    Going a step further, more bandwidth would open the door to a fairly cheap upgrade to the radar arrays themselves: they could mount a second, fixed-tilt antenna back-to-back with the current one (ideally at 10cm, like the main one, but possibly at 5cm like TDWR if 10cm spectrum isn't available or a second dish of the proper size for 10cm wouldn't fit), and do some moderate hardware and software tweaks that would effectively increase the tilt-1 scan rate to once every 6-10 seconds, because every full rotation of the main antenna would give them a full tilt-1 sweep off the back. That's not quite realtime, but it would be a HUGE improvement over what we have now.

    Unfortunately, NWS has lots of bureaucracy and a slow funding pipeline. I think it's safe to say that the explosion in popularity of personal radar apps, combined with mobile broadband, caught them almost totally by surprise. Ten years ago, very few people outside NWS were calling for large-scale NEXRAD upgrades. Now, with abundant Android and iOS apps and 5 Mbps+ mobile data the norm, demand is surging.

    That said, I hope they DON'T squander a chunk of cash on public datafeed bandwidth instead of upgrading their backhaul. I'd rather see them do the back-end upgrades that only THEY can do, and tell people who want reliable & frequent updates to get their data feed through a private mirror service (like allisonhouse or caprockweather) who can upgrade their own bandwidth as needed, instead of having to put in funding requests years in advance.

  • by Miamicanes ( 730264 ) on Monday May 20, 2013 @02:03PM (#43775107)

    You're mostly right, but you're overlooking the software limits that exist mainly due to the limited bandwidth. If they upgraded the sites to a full T1 and tweaked the software a bit, they could give us new tilt-1 updates every minute, with about 15-60 seconds of radar-to-end-user latency, without major hardware upgrades besides the T1 interface itself.
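
    Rough numbers on why a T1 would be enough for 1-minute tilt-1 updates (the sweep size is my guess, not an official figure):

        t1_bps = 1_544_000           # T1 line rate
        sweep_bytes = 3 * 1024**2    # assume ~3 MB per tilt-1 sweep
        print(f"{sweep_bytes * 8 / t1_bps:.0f} s")  # ~16 s to ship one sweep,
                                                    # well inside a 60 s cycle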

    Compare that to now, where we get only a single tilt-1 scan every 6 minutes, and even that scan might be delayed by another 6-10 minutes on top of that. There are ALREADY several VCP programs that sample tilt 1 every minute... they just can't send that data out, and only use it locally for calculating their derived products, because they don't currently have the dedicated bandwidth.

    Remember, WSR-88D is kind of like an Atari 2600... it has very few limits that are truly "hard" and insurmountable. Rather, they're software-imposed in recognition of other limiting factors like backhaul bandwidth, or are precautionary limits imposed to guarantee that some specific product can always be fully derived and delivered within some specific amount of time, or in a way that won't be destroyed by random errors. Many of them could be substantially improved with even minor hardware upgrades in other areas.

    There are real limits to resolution imposed by scattering, wavelength, and particle size, but from what I've read, the current level 2 scan data is still throwing away about 30-50% of the nominal max resolution, and enormous amounts of theoretical resolution that could be recovered through oversampling. At this point, NWS doesn't even *know* what they could derive offsite from oversampled level 2 data, because they've never had the backhaul resources to even *fantasize* about streaming it in its full oversampled glory, or even archiving it all on site. 20 years ago, the idea of having 64 terabytes of on-site raid storage for Amazon/Google-like raw indiscriminate archiving would have been unthinkable, and never even entered into the equation.
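
    For scale (again back-of-the-envelope, since by the parent's own point nobody has measured a full oversampled stream; the rate below is purely an assumption): 64 TB goes a long way even at generous data rates:

        rate_mb_s = 20              # assume ~20 MB/s of oversampled level 2 data
        capacity_mb = 64 * 1024**2  # 64 TB expressed in MB
        print(f"{capacity_mb / rate_mb_s / 86_400:.0f} days")  # ~39 days of capture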

    The current scan rates are a compromise that tries to balance backhaul against the need to track fast-moving storms like tornadoes. If they mounted a second, fixed-tilt dish back-to-back with the current dish so that every rotation produced a tilt-1 sample, they could alternate the back-facing samples between slow and fast pulse rates (so every other scan would alternately be optimized for range or for resolution), and dedicate the front-facing dish currently in place to sampling the higher tilts (interleaving them to sample lower tilts twice, at both PRF rates). Freed of the need to dedicate at least two full sweeps of each volume scan to tilt 1 (because the back-facing antenna would sample tilt 1 on every rotation), they could possibly slow down the rotation rate and use the time to increase resolution.

    The closest thing I've seen to my idea was a paper someone at NOAA wrote a year or two ago, proposing a compromise between fixed-tilt back-to-back conventional radar and full-blown (and likely cost-prohibitive) fixed 360-degree phased-array radar. Basically, the idea was to build a limited wedge of PAR modules capable of sampling 4 tilts over ~1 degree of horizontal, and mount it on the back side of the existing dish assembly so that it could sample 4 tilts per revolution, giving the equivalent resolution of 4-tilt level 3 TDWR every 12-15 seconds. NOAA would then have a TDWR-resolution, rapidly-updating radar source for tracking fast-moving or rapidly-developing storms off the back, and could slow down the overall rotation to get more detailed ultra-hi-res samples than we have now off the front dish.

    The catch, from what I recall, was that they'd HAVE to decrease the RPM, and use 5.8GHz (like TDWR) for the rear array, because there just isn't enough S-band 10cm spectrum available to simultaneously broadcast 5 pulse beams without creating an interference scenario that would make their current range-folding issues look downright tame. They'd

  • by MasaMuneCyrus ( 779918 ) on Monday May 20, 2013 @03:39PM (#43775915)

    Try this: Why European forecasters saw Sandy’s path first [arstechnica.com]

    The ECMWF, for example, utilizes an IBM system capable of over 600 teraflops that ranks among the most powerful in the world, and it's used specifically for medium-range models. That, fundamentally, is the reason their model frequently outperforms the American one. The US National Weather Service’s modeling center runs a diversity of short-, medium-, and long-term models, all on a much smaller supercomputer. The National Weather Service has to do more with less.
