World's Most Powerful Private Supercomputer Will Hunt Oil and Gas 135

Posted by samzenpus
from the black-gold dept.
Nerval's Lobster writes "French oil conglomerate Total has inaugurated the world's ninth-most-powerful supercomputer, Pangea. Its purpose: seeking out new reservoirs of oil and gas. The supercomputer's total output is 2.3 petaflops, which should place it about ninth on the current TOP500 list, last updated in November. The announcement came as Dell and others prepare to inaugurate a new supercomputer, Stampede, in Texas on March 27. What's noteworthy about Pangea, however, is that it will be the most powerful supercomputer owned and used by private industry; the vast majority of such systems are operated by government agencies and academic institutions. Right now, the most powerful private supercomputer for commercial use is the Hermit supercomputer in Stuttgart; ranked 27th in the world, the 831.4-teraflop machine is a public-private partnership between the University of Stuttgart and hww GmbH. Pangea, which will cost 60 million euros ($77.8 million) over four years, will assist decision-making in the exploration of complex geological areas and increase the efficiency of hydrocarbon production in compliance with safety standards and with respect for the environment, Total said. Pangea will be housed at Total's research center in Pau, in southwestern France."
  • by Anonymous Coward on Monday March 25, 2013 @12:46PM (#43273511)

    I bet cold-pressed humans are a wonderful source of hydrocarbons.

  • by Anonymous Coward on Monday March 25, 2013 @12:48PM (#43273555)

    Quite an impressive system in general terms: 2.3 PF without accelerators says a lot about the size of this machine (48 racks):

    "Pangea is manufactured by SGI, built on the ICE-X platform. In a video, Total said that each blade contains four Xeon processors (most likely the E5-2600, which SGI uses), each with 32 cores and 128 Gbytes of RAM. Each M-Rack contains 72 blades, for a total of 288 processors, 2304 cores, and 9 TB of RAM. An M-Cell contains four M-Racks and 2 C-Racks for 288 blades, 1,152 processors, 9,216 cores, and 32 TB of RAM. In all, 12 M-Cells are used, with 110,592 cores, 442 TB of RAM, and 120 km of fiber-optic cable connecting it all up. Pangea also includes 12 bays, with 600 1-TB drives each, and 4 petabytes of magnetic tape for archiving data."

    A system of this size with accelerators would easily exceed 10 PF, although I am not sure whether the particular workload to be run on this beast would suit any kind of accelerator (does anybody have an idea on that?). Now I have a question: what is TACC going to do with so many Xeon Phi accelerators not delivering the promised performance? Will Intel provide them with the second generation of MICs for free, or will that upgrade cost another big chunk of taxpayers' money?

    X.
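
    The figures in the quoted spec can be cross-checked with a little arithmetic. A quick sketch (per-blade and per-rack numbers are taken from the quote above; the RAM figures don't tally exactly against the per-blade breakdown, so only the core count is checked):

    ```python
    # Cross-check the Pangea core count quoted in the parent comment.
    CORES_PER_BLADE = 32    # 4 Xeons x 8 cores each, per the quote
    BLADES_PER_MRACK = 72
    MRACKS_PER_MCELL = 4
    MCELLS = 12

    blades = BLADES_PER_MRACK * MRACKS_PER_MCELL * MCELLS  # 288 blades per M-Cell x 12
    cores = blades * CORES_PER_BLADE
    print(cores)  # 110592, matching the 110,592 cores in the article
    ```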

  • by Garin (26873) on Monday March 25, 2013 @01:10PM (#43273857)

    Seismic imaging. Imagine solving a wave equation (acoustic, elastic, or worse) over a 3D grid many kilometers on a side with grid spacing on the order of meters. Imagine you're doing it with a strong high-order finite-difference code. Calculate for tens of thousands of timesteps. Now repeat that entire thing thousands of times for a given full survey.

    No matter how much computer you have, it's never nearly enough for seismic imaging.
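
    In rough outline, the kernel described above looks something like this toy sketch (my own illustration, not Total's code): a second-order finite-difference update of the 3D acoustic wave equation on a constant-velocity grid. Production imaging codes use high-order stencils, absorbing boundaries, variable velocity models, and grids thousands of points on a side.

    ```python
    import numpy as np

    # Toy 3D acoustic wave propagation: second order in time and space.
    # Grid size, velocity, and step count are illustrative values only.
    n = 32                  # grid points per axis (real surveys: thousands)
    dx = 10.0               # grid spacing, meters
    c = 3000.0              # constant P-wave velocity, m/s
    dt = 0.5 * dx / (c * np.sqrt(3))  # comfortably inside the 3D CFL limit

    p_prev = np.zeros((n, n, n))
    p = np.zeros((n, n, n))
    p[n // 2, n // 2, n // 2] = 1.0   # impulsive point source at the center

    for _ in range(10):               # real runs: tens of thousands of steps
        # 7-point Laplacian stencil on the interior points
        lap = (-6.0 * p[1:-1, 1:-1, 1:-1]
               + p[2:, 1:-1, 1:-1] + p[:-2, 1:-1, 1:-1]
               + p[1:-1, 2:, 1:-1] + p[1:-1, :-2, 1:-1]
               + p[1:-1, 1:-1, 2:] + p[1:-1, 1:-1, :-2]) / dx**2
        p_next = np.zeros_like(p)
        p_next[1:-1, 1:-1, 1:-1] = (2.0 * p[1:-1, 1:-1, 1:-1]
                                    - p_prev[1:-1, 1:-1, 1:-1]
                                    + (c * dt)**2 * lap)
        p_prev, p = p, p_next
    ```

    Every timestep touches every grid point, and a full survey repeats the whole propagation thousands of times, which is exactly the arithmetic that soaks up petaflops.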

  • by toastar (573882) on Monday March 25, 2013 @01:15PM (#43273929)
    Processing seismic data takes a ton of power; there are well-known techniques we still can't use for lack of compute. The last big advance was RTM (Reverse Time Migration). It was first done on 2D data in the '80s, but didn't become practical on 3D surveys until about 2008-'09, and that improvement in imaging is one of the drivers of subsalt exploration.

    The next big step is FWI (Full Waveform Inversion); we still don't have enough power to run it mainstream yet. The main idea is that the stuff we mute out as noise is actually just data that we can migrate back to its original location.

    The other thing more power helps with is running migrations at higher frequencies. Right now we record at 250 Hz (125 Hz Nyquist) but only process at 60 Hz, mainly because of the price of computer time: doubling to 120 Hz requires four times more computer time, but lets us double our image resolution from 50 meters to 25 meters. Considering some of our target reservoirs are as narrow as 20 feet, this sort of thing is important.
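
    The frequency-vs-resolution numbers above follow from wavelength arithmetic. A sketch, assuming a representative average velocity of 3000 m/s (my own assumption; the 4x cost factor is the commenter's figure):

    ```python
    # Nyquist and resolution arithmetic behind the parent comment.
    sample_rate_hz = 250.0
    nyquist_hz = sample_rate_hz / 2       # 125 Hz, the usable recorded bandwidth

    V_AVG_MPS = 3000.0                    # assumed average velocity, m/s

    def wavelength_m(v_mps, f_hz):
        """Dominant wavelength, a rough proxy for image resolution."""
        return v_mps / f_hz

    print(wavelength_m(V_AVG_MPS, 60.0))   # 50.0 m at a 60 Hz processing cutoff
    print(wavelength_m(V_AVG_MPS, 120.0))  # 25.0 m after doubling to 120 Hz
    ```

    Halving the wavelength halves the resolvable feature size, but the finer grid and timestep needed to carry the higher frequency are what multiply the compute cost.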

"Text processing has made it possible to right-justify any idea, even one which cannot be justified on any other grounds." -- J. Finnegan, USC.
