Supercomputing

Quantum Computer To Launch Next Week 224

judgecorp writes "D-Wave Systems of British Columbia is all set to demonstrate a 16-qubit quantum computer. Simple devices have been built in the lab before, and this is still a prototype, but it is a commercial project that aims to get quantum devices into computer rooms, solving tricky problems such as financial optimization. Most quantum computers have to be isolated from the outside world (look at them and they stop working). This one is an 'adiabatic' quantum computer — which means (in theory, says D-Wave) that it can live with thermal noise and give results without having to be isolated. There's a description of it here — and pretty pictures too."
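For readers wondering what "adiabatic" means in practice, here is a textbook sketch of adiabatic quantum optimization in general, not a description of D-Wave's proprietary hardware: the problem to be solved, say a financial optimization, is encoded in the couplings of an Ising-type problem Hamiltonian, and the machine is slowly swept from an easy-to-prepare starting Hamiltonian toward it,

\[
H(s) = (1-s)\,H_B + s\,H_P, \qquad
H_B = -\sum_i \sigma^x_i, \qquad
H_P = \sum_i h_i\,\sigma^z_i + \sum_{i<j} J_{ij}\,\sigma^z_i \sigma^z_j ,
\]

with \(s\) running from 0 to 1 over the course of the run. The adiabatic theorem says that if the sweep is slow relative to the inverse square of the minimum energy gap, the system stays in its instantaneous ground state and finishes in the ground state of \(H_P\), which is the optimal assignment of the binary variables.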
NASA

Google NASA Partnership Announced 154

eldavojohn writes "Google and NASA announced their partnership today, promising a range of benefits. The director of a NASA site said 'Just a few examples are new sensors and materials from collaborations on bio-info-nano convergence, improved analysis of engineering problems, as well as Earth, life and space science discoveries from supercomputing and data mining, and bringing entrepreneurs into the space program.'" Update 23:51 by SM: As pointed out by many readers, the GoogleNASA site originally linked was completely bogus.
Supercomputing

Steve Chen Making China's Supercomputer Grid 128

nanotrends writes "Steve Chen was the principal designer of the Cray X-MP supercomputer. He recently created multi-teraflop blade-based supercomputers for a Chinese company. He is now building a supercomputer grid across China, and he is working on THIRD-BRAIN, a bio-supercomputer intended as an extension of the human brain. The THIRD-BRAIN project has significant three-year and five-year targets."

DARPA Awards HPC Contracts To IBM, Cray, Not Sun 74

snedecor writes "DARPA has awarded a third round of funding for the next-generation petascale computing system. IBM and Cray roughly split the $494M, while Sun, with little track record, received nothing. This is in spite of Sun's radical proposal for proximity communication."

Purdue Streams a Movie At 7.5Gb/sec 117

the_psilo writes, "My friend just got back from the Supercomputing conference in Tampa, FL, where she and the rest of the Purdue Envision Center rocked the High Performance Computing Bandwidth Challenge by streaming a 2-minute-long, 125-GB movie over a 10-Gb/s link at 7.5 Gb/s. They used 6 Apple Xserve RAIDs connected to 12 clients projecting onto their tiled wall (that's 12 streams in all). Lots of accolades from the people who set up the challenge. More links to articles and reviews can be found at the Envision Center Bandwidth Challenge FAQ page." The two-minute video is a scientific visualization of a cell structure from a bacterium. The Envision Center site hosts a reduced version of the video.
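A rough back-of-the-envelope check (assuming decimal gigabytes and gigabits) shows the claimed figures hang together: at the reported rate the whole file moves in a little over two minutes,

\[
125\ \mathrm{GB} \times 8\ \tfrac{\mathrm{Gb}}{\mathrm{GB}} = 1000\ \mathrm{Gb},
\qquad
\frac{1000\ \mathrm{Gb}}{7.5\ \mathrm{Gb/s}} \approx 133\ \mathrm{s} \approx 2.2\ \mathrm{min},
\]

spread across the 12 parallel streams feeding the tiled wall.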

TOP500 Supercomputer Sites For 2006 108

geaux writes to let us know about the release of the 28th TOP500 List of the world's fastest supercomputers. From the article: "The IBM BlueGene/L system, installed at DOE's Lawrence Livermore National Laboratory, retains the No. 1 spot with a Linpack performance of 280.6 teraflops (trillions of calculations per second, or Tflop/s). The new No. 2 system is Sandia National Laboratories' Cray Red Storm supercomputer, only the second system ever recorded to exceed the 100 Tflop/s mark, with 101.4 Tflop/s... Slipping to No. 3 is the IBM eServer Blue Gene Solution system, installed at IBM's Thomas Watson Research Center, with 91.20 Tflop/s Linpack performance." You need over 6.6 Tflop/s to make it into the top 100.

GPUs To Power Supercomputing's Next Revolution 78

evanwired writes "Revolution is a word that's often thrown around with little thought in high-tech circles, but this one looks real. Wired News has a comprehensive report on computer scientists' efforts to adapt graphics processors for high-performance computing. The goal for these NVIDIA and ATI chips is to tackle non-graphics number crunching for complex scientific calculations. This week, alongside the release of its wicked fast new GeForce 8800, NVIDIA announced the first C-compiler environment for the GPU; Wired reports that ATI is planning to release at least some of its proprietary code to the public domain to spur non-graphics development of its technology. Meanwhile, lab results are showing some amazing comparisons between CPU and GPU performance. Stanford's distributed computing project Folding@Home launched a GPU beta last month that is now publishing data putting donated GPU performance at 20-40 times the efficiency of donated CPU performance."
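To give a flavor of the kind of code these GPU efforts target, below is a minimal plain-C sketch of SAXPY, a textbook data-parallel operation; the file and function names are illustrative only. On a CPU the loop runs serially, whereas a GPU C environment such as the one NVIDIA announced would express the loop body as a kernel executed by thousands of lightweight threads, one element per thread.

/* saxpy.c - illustrative sketch of a data-parallel loop that maps well onto GPUs */
#include <stdio.h>
#include <stdlib.h>

#define N (1 << 20)  /* one million elements */

/* y = a*x + y: every iteration is independent, so a GPU can run them all in parallel */
static void saxpy(int n, float a, const float *x, float *y)
{
    for (int i = 0; i < n; i++)
        y[i] = a * x[i] + y[i];
}

int main(void)
{
    float *x = malloc(N * sizeof *x);
    float *y = malloc(N * sizeof *y);
    if (!x || !y)
        return 1;

    for (int i = 0; i < N; i++) {
        x[i] = 1.0f;
        y[i] = 2.0f;
    }

    saxpy(N, 3.0f, x, y);
    printf("y[0] = %.1f (expected 5.0)\n", y[0]);

    free(x);
    free(y);
    return 0;
}

Workloads such as Folding@Home's molecular dynamics benefit because they consist largely of this sort of independent floating-point arithmetic with little branching.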

Citizen Journalism Expert Jay Rosen Answers Your Questions 42

We posted Jay Rosen's Call for Questions on September 25. Here are his answers, into which he's obviously put plenty of time and thought. This is a "must read" for anyone interested in the growing "citizen journalism" movement, whether as a writer/editor or as an audience member. And please note that Rosen and many others say, over and over, that one of the major shifts in the news media, especially online, is that there is no longer any need to be one or the other instead of both.

Oak Ridge Lab Supercomputer Doubles Performance 89

Anonymous Coward writes "The most powerful supercomputer available for general scientific research in the United States has undergone an upgrade that has doubled its peak performance. The Cray XT3 supercomputer at Tennessee's Oak Ridge National Laboratory can now perform up to 54 trillion calculations per second, up from its previous peak of 25 trillion. 'It is probably the fifth-fastest machine' in the world, said Thomas Zacharia, associate laboratory director. 'It is clearly the fastest open-science machine in the U.S. today.'"

The Future of Computing 184

An anonymous reader writes "Penn State computer science professor Max Fomitchev explains that computing has evolved in a spiral pattern from a centralized model to a distributed model that retains some aspects of centralized computing. Single-task PC operating systems (OSes) evolved into multitasking OSes to make the most of increasing CPU power, and the introduction of the graphical user interface at the same time reduced available CPU performance and fueled demands for even more efficiency. 'The role of CPU performance is definitely waning, and if a radical new technology fails to materialize quickly we will be compelled to write more efficient code for power consumption costs and reasons,' Fomitchev writes. Slow, bloated software entails higher costs in terms of both direct and indirect power consumption, and the author reasons that code optimization will likely involve replacing blade server racks with microblade server racks, where every microblade executes a dedicated task and thus uses less power. The microblades should also far outnumber the original 'macro' blades. Fully isolating software components should make the system more robust, thanks to the possibility of hot-swapping or upgrading components at run time and the elimination of software installation, implementation, and patch conflicts. Whether this happens depends largely on energy costs, which in turn determine how much code optimization pays off."

Swimsuit Design Uses Supercomputing 253

Roland Piquepaille writes "These days, most competitive swimmers wear some type of body suit to reduce high skin-friction drag in the water. And makers of swimwear are already busy working on new models for the 2008 Olympics. According to Textile & Apparel, Speedo is even using a supercomputer to refine its designs: its engineers run the Fluent computational fluid dynamics (CFD) program on an SGI Altix system."

New Top500 List Released at Supercomputing '06 217

Guybrush_T writes "Today the 27th edition of the TOP500 list of the world's fastest supercomputers was released at ISC 2006. IBM BlueGene/L remains the world's fastest computer at 280.6 TFlop/s. No new US systems entered the top 10 this time; the newcomers all come from Europe and Japan. The French cluster at CEA (the French NNSA equivalent) is No. 5 with 42.9 TFlop/s. The Earth Simulator (No. 10) is no longer the largest system in Japan, since the GSIC Center has built a 38.2 TFlop/s cluster that takes 7th place. The German cluster at Juelich is No. 8 with 37.3 TFlop/s. The full list, and the previous 26 lists, are available on the Top500.org site."

Supercomputer Models Sun's Corona Dynamics 105

gihan_ripper writes "Researchers from San Diego are using supercomputers to accurately predict the shape of the Sun's corona, based on magnetic field data from the photosphere. It is hoped that this model will enable us to predict coronal mass ejections (CMEs). When CMEs reach the Earth, they produce geomagnetic storms and can wreak havoc with communications, GPS, and power networks. Within the next decade or so, the researchers hope to be able to predict CME collisions with the Earth and determine their impact."

End of a Scientific Legend? 243

pacopico writes to mention the sorry state of the well-known Los Alamos National Laboratory. Sixty years ago it was at the forefront of the race for the atomic bomb. Nowadays, "smugness can breed complacency, and complacency carelessness. In recent years the laboratory has been in the news not for its successes but its failures." The result is a change of management, which the story goes on to discuss in great detail. It raises the question: can Los Alamos hang on as a prestigious place, or is it too late for the supercomputing powerhouse and weapons lab?

Google's Secretive Data Center 391

valdean wrote in with a New York Times article about Google, which says "On the banks of the windswept Columbia River [in Oregon], Google is working on a secret weapon in its quest to dominate the next generation of Internet computing. But it is hard to keep a secret when it is a computing center as big as two football fields, with twin cooling plants protruding four stories into the sky..." What's the goal of this new complex? Expanding Google's raw computer power. It's one more piece in the Googleplex, the massive global computer network that is estimated to span 25 locations and 450,000 servers.

Windows Compute Cluster Server 2003 Released 230

grammar fascist writes "According to an Information Week article, on Friday Microsoft released Windows Compute Cluster Server 2003." From the article: "The software is Microsoft's first to run parallel HPC applications aimed at users working on complex computations... 'High-performance computing technology holds great potential for expanding opportunities... but until now it has been too expensive and too difficult for many people to use effectively,' said Bob Muglia, senior vice president of [Microsoft's] Server and Tools Business unit, in a statement."

Cluster Interconnect Review 64

deadline writes to tell us that Cluster Monkeys has an interesting review of cluster interconnects. From the article: "An often-asked question from both 'cluster newbies' and experienced cluster users is, 'what kind of interconnects are available?' The question is important for two reasons. First, the price of interconnects can range from as little as $32 per node to as much as $3,500 per node, yet the choice of an interconnect can have a huge impact on the performance and scalability of the codes. Second, many users are not aware of all the possibilities. People new to clusters may not know of the interconnection options and, sometimes, experienced people choose an interconnect and become fixated on it, ignoring all of the alternatives. The interconnect is an important choice, and ultimately the choice depends upon your code, requirements, and budget."
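As an illustration of how interconnects are typically compared, here is a minimal, hypothetical MPI ping-pong microbenchmark (a sketch, not the tests used in the review): two ranks bounce a fixed-size message back and forth, yielding a rough round-trip time and sustained bandwidth figure for the link between their nodes.

/* pingpong.c - illustrative MPI ping-pong between two ranks
 * build: mpicc pingpong.c -o pingpong
 * run:   mpirun -np 2 ./pingpong
 */
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
    const int reps  = 1000;
    const int bytes = 1 << 20;  /* 1 MB message; shrink it to probe latency instead */
    char *buf = malloc(bytes);
    int rank;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    MPI_Barrier(MPI_COMM_WORLD);
    double t0 = MPI_Wtime();
    for (int i = 0; i < reps; i++) {
        if (rank == 0) {
            MPI_Send(buf, bytes, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
            MPI_Recv(buf, bytes, MPI_CHAR, 1, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        } else if (rank == 1) {
            MPI_Recv(buf, bytes, MPI_CHAR, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            MPI_Send(buf, bytes, MPI_CHAR, 0, 0, MPI_COMM_WORLD);
        }
    }
    double elapsed = MPI_Wtime() - t0;

    if (rank == 0) {
        /* each repetition crosses the interconnect twice */
        double gbits = 2.0 * reps * (double)bytes * 8.0 / 1e9;
        printf("avg round trip: %.1f us, throughput: %.2f Gb/s\n",
               1e6 * elapsed / reps, gbits / elapsed);
    }

    free(buf);
    MPI_Finalize();
    return 0;
}

Running the same code over gigabit Ethernet, Myrinet, or InfiniBand makes the price/performance trade-offs discussed in the review concrete.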

Cray Introduces Adaptive Supercomputing 108

David Greene writes "HPCWire has a story about Cray's newly introduced vision of Adaptive Supercomputing. The new system will combine multiple processor architectures to broaden the applicability of HPC systems and reduce the complexity of HPC application development. Cray CTO Steve Scott says, 'The Cray motto is: adapt the system to the application - not the application to the system.'"
