Microsoft Supercomputing Windows Hardware

Fastest-Ever Windows HPC Cluster 216

An anonymous reader links to an eWeek story which says that Microsoft's "fastest-yet homegrown supercomputer, running the U.S. company's new Windows HPC Server 2008, debuted in the top 25 of the world's top 500 fastest supercomputers, as tested and operated by the National Center for Supercomputing Applications. ... Most of the cores were made up of Intel Xeon quad-core chips. Storage for the system was about 6 terabytes," and asks "I wonder how the uptime compares? When machines scale to this size, they tend to quirk out in weird ways."
This discussion has been archived. No new comments can be posted.


Comments Filter:
  • finally (Score:5, Funny)

    by gmack ( 197796 ) <<gmack> <at> <innerfire.net>> on Tuesday June 24, 2008 @10:24AM (#23917611) Homepage Journal

    Enough power to run vista.

    • Re:finally (Score:4, Interesting)

      by Zashi ( 992673 ) on Tuesday June 24, 2008 @10:27AM (#23917671) Homepage Journal

      You've no idea how right you are.

      I got to test Server 2008 before it was released to the public. All our internal applications identified 2008 as "Vista".

      • Re:finally (Score:5, Insightful)

        by nmb3000 ( 741169 ) on Tuesday June 24, 2008 @11:53AM (#23919785) Journal

        I got to test Server 2008 before it was released to the public. All our internal applications identified 2008 as "Vista".

        I have no idea why this is modded Informative.

        Vista uses the NT kernel, version 6.0, build 6000. SP1 puts it up to 6001.
        Server 2008 uses the NT kernel, version 6.0, build 6001.

        Is it any surprise that software built prior to Server 2008's release sees it as Vista?

        In related news, both Ubuntu 8.04 and Fedora 9 report being Linux v2.6.
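
        A minimal sketch of why the mis-detection happens, in Python (an illustration, not the poster's internal tooling): both OSes report NT 6.0, and only an extra field distinguishes workstation from server.

            import sys

            def describe_os():
                # Vista and Server 2008 both report NT 6.0, so a check that
                # stops at major/minor cannot tell them apart.
                v = sys.getwindowsversion()
                if (v.major, v.minor) == (6, 0):
                    # product_type: 1 = workstation (Vista), 3 = server (2008)
                    return "Vista" if v.product_type == 1 else "Server 2008"
                return "NT %d.%d (build %d)" % (v.major, v.minor, v.build)

            print(describe_os())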

        • I'm more curious as to why nobody's noticed that his INTERNAL software incorrectly identifies the OS.

        • more similar (Score:5, Interesting)

          by DrYak ( 748999 ) on Tuesday June 24, 2008 @01:11PM (#23921591) Homepage

          In related news, both Ubuntu 8.04 and Fedora 9 report being Linux v2.6.
          Except that the Linux kernel is just a tiny part of a distribution. In fact, those two distributions don't even share the same version, let alone build. (The Distrowatch pages for Ubuntu [distrowatch.com] and Fedora [distrowatch.com] show the difference in version for most common components.)

          Whereas Server 2008 and Vista share a tad more of their code base.

          And *that* is relevant.

          And it could humorously be alluded to because of the mis-detection by some software.

        • Re: (Score:3, Insightful)

          by mspohr ( 589790 )
          I think the surprise here is that MS is using the same core that's in their very shaky Vista software to run their server software.
          • Re:finally (Score:4, Informative)

            by nmb3000 ( 741169 ) on Tuesday June 24, 2008 @04:35PM (#23924869) Journal

            I think the surprise here is that MS is using the same core that's in their very shaky Vista software to run their server software.

            I realize it's great fun to aimlessly bash Vista around here but I wasn't aware that the NT kernel was generally considered "shaky". In fact, I didn't even think that Vista was widely considered shaky. Bloated? Maybe. Resource intensive? Possibly. Some stupid UI decisions? Most certainly.

            I'm (begrudgingly) running Vista at home (since I have to support it at work) and I haven't had any stability problems. I do curse the UI team for removing features I deem necessary and adding meaningless clutter, but I haven't seen any crashes or stability issues.

    • Re:finally (Score:5, Funny)

      by Sabz5150 ( 1230938 ) on Tuesday June 24, 2008 @10:29AM (#23917727)

      Enough power to run vista.

      But not Crysis :(
    • Re:finally (Score:4, Funny)

      by v1 ( 525388 ) on Tuesday June 24, 2008 @10:31AM (#23917791) Homepage Journal

      mmm that may make a very nice addition to my botnet. Wonder what it has for network bandwidth?

    • Re: (Score:3, Funny)

      by tubapro12 ( 896596 )
      So, what does one do when their cluster BSODs?
    • Re:finally (Score:5, Funny)

      by TRS80NT ( 695421 ) on Tuesday June 24, 2008 @10:50AM (#23918201)
      But you still have to turn off Aero.


    • Re:finally (Score:4, Funny)

      by camperslo ( 704715 ) on Tuesday June 24, 2008 @12:09PM (#23920205)

      If one of these is expected to be networked in normal operation, perhaps it would be reasonable to require that antivirus software be running while doing benchmarks?

    • There's an obvious application [xkcd.com] to run on a Windows cluster.

  • Linux? (Score:4, Funny)

    by Anonymous Coward on Tuesday June 24, 2008 @10:24AM (#23917619)

    But does it run linux?

    • Re: (Score:3, Informative)

      But does it run linux?
      It can, but isn't [top500.org]; however, this one [top500.org] does :)
  • by kwabbles ( 259554 ) on Tuesday June 24, 2008 @10:29AM (#23917729)

    "Your cluster has just finished downloading an update, would you like to reboot now?"

  • by Gazzonyx ( 982402 ) <scott.lovenberg@nOspam.gmail.com> on Tuesday June 24, 2008 @10:29AM (#23917743)
    The Windows Server 2K8 code base must be better than previous versions of Windows. From what I understood, Windows didn't scale for clustering due to problems with file locking (IIRC, the overhead of tracking locks grew quickly enough that performance was marginal past about 4 nodes), unless they're using an iSCSI SNS server that handles the locks over a clustered file system. Still, this is leaps and bounds beyond previous versions of Windows WRT clustering!
    • Define 'clustering' (Score:3, Informative)

      by Junta ( 36770 )

      Clustering in the sense I think you are discussing is the HA-clustering stuff. HPC clustering is a tad different.

      • by Bandman ( 86149 )

        Do you happen to have a good resource for learning about HPC clustering on Windows? I'm not a Windows guy, but I'd be curious how it goes.

        I imagine the base overhead of the OS cuts into each node's computing power, doesn't it?

          For a lot of the fairly typical stuff, I am actually prepared to admit the base OS overhead may not be that different. A lot of HPC clusters are not set up fundamentally differently from a typical, casually configured Linux server. This is mainly because it's just easier to understand and set up this way.

          However, the ones that do implement something highly efficient or sophisticated at the OS level would have a very very hard time achieving analogous results. The petaflop system, for example a) use

    • They aren't running Windows Server 2008. They are running the Windows HPC Server 2008 beta. I don't know the difference, but it's enough that they gave it a new name. It might be Server 2008 highly optimized for HPC applications. Also, since it's a beta, the code base may or may not make it into the final 2008 release.
      • by jd ( 1658 )
        ...is that it is a true HPC clustering environment. They demoed the 2003 cluster edition at SC|05, and frankly I was not impressed. Nor were most other people, it was not a highly-popular stand. That could be because they were demonstrating things like Excel on the Cluster Edition. A clustered spreadsheet?! Oh, and the version of MPI they are using is derived from MPICH. For those who are unfamiliar with clustering and message passing, MPI is pretty horrible at the best of times, and MPICH is a nasty implem
    • Not "clustering" (Score:4, Informative)

      by kiwimate ( 458274 ) on Tuesday June 24, 2008 @11:38AM (#23919397) Journal

      A Windows MSCS cluster is essentially for fail-over/HA purposes. This is for high-performance purposes, and explicitly excludes use as an application or database server. From the FAQs (although this is for 2003):

      Windows Compute Cluster Server is licensed for use with HPC applications. HPC applications solve complex computational problems using several servers as a group, also called a cluster, to solve a single computational problem or a single set of closely related computational problems. Applications that run on a single server are not considered HPC applications. Applications that are distributed across multiple servers may not be considered HPC applications, unless they are working on a set of closely related computational problems.

      You may not use Windows Server 2003 Compute Cluster Edition (CCE) as a general purpose server, database server, e-mail server, print server or file server. In order to allow Windows Compute Cluster Server 2003 to be offered at a lower price, its server roles are restricted to computational use only. For example, if users want to install Microsoft SQL(TM) Server 2005 on a cluster node, they will need to purchase and install a full version of Windows Server 2003 64-bit Standard Edition or Windows Server 2003 64-bit Enterprise Edition on that cluster node. To maintain licensing compliance, Windows CCE takes advantage of a feature in Windows Server Standard to protect these applications from being executed. Please see the Windows Compute Cluster Server 2003 Pricing and Licensing page for more information.

  • by Tibor the Hun ( 143056 ) on Tuesday June 24, 2008 @10:33AM (#23917835)

    And with the easily affordable CALs, up to 11 users will be able to use it at the same time! (Well, 8; 2 CALs will prolly be used by junior admins, and one for "test".)

  • Quirk Out? (Score:4, Funny)

    by ProfessionalCookie ( 673314 ) on Tuesday June 24, 2008 @10:35AM (#23917879) Journal

    When machines scale to this size, they tend to quirk out in weird ways
    Just leave the doctype out and it'll revert to quirks mode. Should work as "intended" even if it does follow the standard.
  • But why?! (Score:2, Funny)

    by mkcmkc ( 197982 )
    In other news, IBM debuts world's fastest punch card reader...
    • Re: (Score:2, Interesting)

      by Kingston ( 1256054 )
      It looks like Microsoft engineers have been working with the NCSA and a beta version of Microsoft HPC Server 2008 as part of a Microsoft marketing push for this software. The marketing PDF is here [microsoft.com]. Microsoft wants to increase its foothold in HPC; it's a growing, high-margin market.
      • Re: (Score:3, Interesting)

        by gmack ( 197796 )

        It's growing, yes, but it's actually a very low-margin market. The whole idea of an HPC cluster is saving money.

        Somehow I doubt it's the margins so much as the fact that Linux dominates it and they are afraid Linux will use that to gain a foothold elsewhere.
         

  • BSOD (Score:3, Funny)

    by suck_burners_rice ( 1258684 ) on Tuesday June 24, 2008 @10:49AM (#23918173)
    Such a powerful cluster should get from power-up to BSOD instantly!
  • Only six teras ? (Score:3, Interesting)

    by billcopc ( 196330 ) <vrillco@yahoo.com> on Tuesday June 24, 2008 @10:57AM (#23918373) Homepage

    So... six terabytes... isn't that horribly small by today's standards? I mean, our small backup server here is 2 terabytes, and it's just a cheap PC with a bunch of SATA drives in it.

    Does that mean my gaming rig and media server, when combined, constitute an "HPC Cluster" worthy of the top 100 ?

    Ghey.

    • by SuperQ ( 431 ) *

      I can only assume that they got that storage number wrong. We had more than 8T of storage for a couple of our small (a few hundred cores) IBM POWER4 clusters in 2005. Normal compute clusters have 1-2G of RAM per core, which means they should have at least 9T of RAM in this cluster of 9k cores.
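
      The parent's arithmetic, spelled out (the GB-per-core figures are the commenter's rule of thumb, not a published spec for this machine):

          cores = 9000                  # the "9k cores" cited above
          for gb_per_core in (1, 2):    # commenter's 1-2 GB/core rule of thumb
              tb = cores * gb_per_core / 1000.0
              print("%d GB/core -> about %.0f TB of RAM" % (gb_per_core, tb))
          # 1 GB/core gives ~9 TB, 2 GB/core gives ~18 TB -- either way, more
          # than the 6 TB the summary quotes as "storage".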

    • So... six terabytes... isn't that horribly small by today's standards?

      Depends what you're doing with it. Picture a bunch of netbooting, diskless nodes doing calculations entirely in RAM; 6TB might be plenty for that setup.

    • by Undead NDR ( 1252916 ) on Tuesday June 24, 2008 @11:17AM (#23918857) Homepage Journal

      So... six terabytes... isn't that horribly small by today's standards?

      Should be enough for everyone.

    • Re: (Score:2, Informative)

      by cstdenis ( 1118589 )

      That is RAM, not disk space.

  • *yawns* (Score:2, Informative)

    by painehope ( 580569 )
    So what? Microsoft has been putting up huge booths at the annual Supercomputing Conference, and even sponsored one, for years now. No one takes them seriously. They even bought a whole lab for some university that I'm too lazy to look up, and from what I heard, it was a complete flop (no pun intended, though that's probably all the performance you can expect on a real-world application).

    Supercomputing is the one area where Linux is the dominant operating system. Period. AIX still plays, but that's about it.

    • by chthon ( 580889 )

      I find it weird, with all the uptake of Linux in HPC, that the University of Antwerp (Belgium) bought a Sun-based HPC cluster some time ago. Probably something to do with PHAs (pointy-haired administrators).

      • I couldn't find anything on what OS it was running, but remember that Sun sells a lot of x86_64 (or x64, whatever) equipment for HPC. And most of it is running Linux. And the "uptake" is nothing new - Linux has been dominant for years, and the "uptake" started sometime around 2000. It's just that most major vendors didn't officially support Linux until later.

        That's not to say that Solaris 10 isn't nice, but it's not free, doesn't have the grip on the HPC market, and OpenSolaris is too fragmented and imma

    • While Linux is dominant, other systems do make it into the list [top500.org]. After Linux comes Mixed, then Unix. The #4 cluster is a Sun cluster created for The University of Texas at Austin.
      • And that cluster (Ranger) is running Linux.


        As for the "Mixed" category, most of those systems are a combination of Linux and another OS. And AIX accounts for 23/25 of the straight UNIX deployments.


        I rest my case.

    • Re: (Score:2, Interesting)

      by labmonkey09 ( 992534 )
      There is a difference between supercomputing and HPC. Up till now Linux has had little to compete with in scaled-out HPC rigs. A lot of that has to do with node pricing and the fact that Sun has been asleep at the wheel (no pun intended - if you know SunOS you should be laughing). However, priced right, this and Solaris are a real competitive threat to Linux. Linux is not a great platform for HPC. The kernel doesn't scale to extreme levels (total throughput pegs early) and Tx latency gets pretty wide at the
      • There is a difference between super computing and HPC.

        Only in semantics and some specialized cases. The terms are nearly identical in usage.

        As for a threat, not really. The only deployments Microsoft has gotten have been by giving away the software and/or hardware. Hell, I'd take a free cluster from MS - and promptly install Fedora on it. And Solaris 10? Maybe somewhat, but given that almost all Sun clusters are sold w/ Linux installed, that's a bit laughable.

        And your statements about the kernel scaling fail to take into account things like Infin

      • How did Novell and IBM manage it on Blue Gene?

        "Linux has dominated the marketplace for high-performance computing [forbes.com],"

        Mark Seager, Lawrence Livermore National Laboratory, Calif
  • "It looks like you're breaking into the top 25 fastest supercomputers. Would you like me to fix that?"

  • I run several Windows Clusters...

    by mpapet ( 761907 ) on Tuesday June 24, 2008 @11:04AM (#23918541) Homepage

    ...and I have a very hard time believing most of the claims of fact in this story.

    "When we deployed Windows on our cluster, which has more than 1,000 nodes, we went from bare metal to running the Linpack benchmark programs in just four hours,"

    Hmmm. And what installer was this? Is it available commercially? How much is the license for the version with this mythical four-hour installer?

    "The performance of Windows HPC Server 2008 has yielded efficiencies that are among the highest we've seen for this class of machine," Pennington said.

    What "class" would that be? I imagine it would explicitly exclude Free clusters.

    One should question whether any institution or research project is using its grant money wisely, given the amount of money required to fulfill Microsoft's licensing requirements.

    Furthermore, if research projects are actually considering wasting their grant dollars on Microsoft licenses, then the outlook for American R&D is grim.

    • "When we deployed Windows on our cluster, which has more than 1,000 nodes, we went from bare metal to running the Linpack benchmark programs in just four hours"

      Four hours! What took them so long?

    • Furthermore, if research projects are actually considering wasting their grant dollars on Microsoft licenses, then the outlook for American R&D is grim.

      As other comments mention, Windows systems simply aren't considered when it comes to HPC. This is the first good Windows HPC publicity I can remember hearing. I would wager that Microsoft donated the software licenses for this cluster gratis.

    • Re: (Score:3, Insightful)

      by saleenS281 ( 859657 )
      So basically you have no facts, but you're writing them off as idiots because they used the MS package. Never mind that they might be saving money in the long run by paying fewer people to administer it because the MS tools get the job done. Or perhaps that they don't have to spend months tweaking things because MS has assigned them resources to do this. Let's just assume they're idiots and are wasting money, because if MS is involved, that MUST be it!!!11
    • by Monoman ( 8745 ) on Tuesday June 24, 2008 @11:30AM (#23919205) Homepage

      I'm no MS fanboy but I think someone should make a few points.

      "I run several Windows Clusters"
      and I have a very hard time believing most of the claims of fact in this story.

      I think you might be confusing Windows clustering with MS Compute Cluster (which appears to be called HPC now). Windows clustering is used to provide fault-tolerant applications: if one node fails, another fires up an instance to replace it. Compute Cluster is for spreading computations across many active nodes; the HPC nodes do some calculations and return the results. I guess like SETI.

      Hmmm. And what installer was this? Is it available commercially? How much is the license for the version with this mythical four-hour installer?

      I think the article said this was all done with HPC 2008 beta. You can find out pricing info here: http://www.microsoft.com/hpc/ [microsoft.com]

      "The performance of Windows HPC Server 2008 has yielded efficiencies that are among the highest we've seen for this class of machine," Pennington said.

      What "class" would that be? I imagine it would explicitly exclude Free clusters.

      PC class, not big iron or whatever you want to call those expensive IBM thingys.

      One should question whether any institution or research project is using its grant money wisely, given the amount of money required to fulfill Microsoft's licensing requirements.

      Furthermore, if research projects are actually considering wasting their grant dollars on Microsoft licenses, then the outlook for American R&D is grim.

      In general I agree. However, I would be surprised if this cost them much at all besides time. They are probably a large enough customer that they get many MS products and services for free. In addition, the publicity for MS makes it worth it to MS to offer tons of incentives. I work at an EDU org and MS pricing is a lot less than retail ... a lot less.

      • by mpapet ( 761907 )

        However, I would be surprised if this cost them much at all besides time. They are probably a large enough customer that they get many MS products and services for free.
        Except it isn't "free." Someone way outside your pay grade signed a contract and might have paid Microsoft (or not, if the customer is a good PR win).

        In addition, the publicity for MS makes it worth it to MS to offer tons of incentives.
        This story is an advertisement disguised as news.

        I work at an EDU org and MS pricing is a lot less than re

        • by Monoman ( 8745 )

          Except it isn't "free." Someone way outside your pay grade signed a contract and might have paid Microsoft

          Agreed.

          This story is an advertisement disguised as news.

          Agreed. You must be new here. :-)

          And a Linux-based cluster is even less. I don't see any motivation to maximize the educational institution's resources in your response. None!

          Now more than ever, I'm concerned about the basic ability of American research institutions to maximize their resources. Sigh...

          I understand your point and frustrations but

    • "The performance of Windows HPC Server 2008 has yielded efficiencies that are among the highest we've seen for this class of machine," Pennington said.

      What "class" would that be?

      Why, the set of Windows clusters of course.
    • And what installer was this? Is it available commercially? How much is the license for the version with this mythical four-hour installer?

      Chances are the majority of nodes are diskless. I bet they did one actual disk install, then an automated setup of config files for each node, and then the system boots with some sort of broadcast or multicast kernel load.

      I really don't know how that site runs, but if I were doing an HPC cluster, that's how I would do it. Four hours seems kind of excessive for something like that.

    • Re: (Score:3, Insightful)

      by jsac ( 71558 )

      What "class" would that be? I imagine it would explicitly exclude Free clusters.

      This cluster has appeared in the last three Top 500 lists. In June and November 2007 it had a performance of 62.68 TFlops with 70% efficiency, running Linux. In June 2008 it had a performance of 68.48 TFlops with 77% efficiency, running Windows HPC Server 2008.

      http://www.top500.org/system/details/8757
      http://www.top500.org/system/ranking/8757
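
      Working backward from those figures (efficiency = Rmax / Rpeak; the listed percentages are rounded, so the implied peaks won't match exactly):

          for label, rmax, eff in [
              ("Linux, Jun/Nov 2007",   62.68, 0.70),   # Rmax in TFlops, efficiency
              ("Windows HPC, Jun 2008", 68.48, 0.77),
          ]:
              print("%s: implied Rpeak ~ %.1f TFlops" % (label, rmax / eff))
          # Both come out near 89 TFlops -- consistent with the same hardware,
          # the difference being what each software stack extracted from it.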

  • ...that the only thing which counts is the 'Total Cost of Ownership'? Do I have to pay for every installed node running Windows, or for every CPU? And how much do I have to pay for every registered copy of Windows and its support service?
  • I mean, it can't accelerate at more than 9.82 m/s², and the article doesn't say a word about the terminal velocity.

    • by CompMD ( 522020 )

      It can accelerate faster than that if you launch it into the sun, which is probably a good place for it. As I understand it, Microsoft is launching their next cluster into Sun, for no other reason than to annoy Jonathan Schwartz.

    • Terminal velocity is probably about 200 mph, as for most heavy objects (like cars), so you can just barely follow it in head-down [wikipedia.org].

      Let's just kick one out of the back of a plane and test it.

  • Okay... (Score:3, Interesting)

    by ledow ( 319597 ) on Tuesday June 24, 2008 @11:17AM (#23918847) Homepage

    But the statistics for top500.org show that over 9,000 processors is way above normal for a supercomputer cluster on that list. In fact, fewer than 5% of the machines in the entire 500 have more than 8,000 processors, with the majority around the 1-4k mark. Oh, and 85% run Linux only, with an amazing 5 (not percent, actual projects) running Microsoft only. So it looks like MS did this through hardware brute force, not some amazing feat of programming. But then, that's true of them all. Although being on the top500 list is "good PR", it doesn't mean that much.

    I wonder what the licensing is like for a 9000-processor Windows Server, though?

    • It was 9,000 cores. According to the summary, those were quad-core chips, so that would be about 2,000+ chips. The top 500 list ranks by number of processors; I don't know whether "processors" means chips or cores.
    • Re: (Score:3, Informative)

      The majority are around the 1-4k mark, but in the top 25 the range is from 6,720 cores to 212,992 cores. Only 2 entries in the top 25 have fewer cores than Microsoft's.

      http://www.top500.org/list/2008/06/100 [top500.org]

      Basically, it's all brute force if you want to get into the top 25.
  • by ettlz ( 639203 ) on Tuesday June 24, 2008 @11:18AM (#23918861) Journal
    Is this a euphemism for "botnet"?
  • by idiot900 ( 166952 ) * on Tuesday June 24, 2008 @11:25AM (#23919059)

    Can someone explain why anyone could possibly want Windows on a scientific computing cluster? What does Windows offer that Linux doesn't?

    Much of my work involves running molecular dynamics simulations. By HPC standards these are tiny calculations (in my case, usually 32 CPUs at a time). All science HPC software I'm aware of is Unix-oriented, and everything runs on Linux. At my institution we have an OS X cluster and we are in the process of purchasing a Linux cluster. We didn't even consider Windows - given the difficulties we've experienced administering Windows on the desktop, a Windows cluster just seems like an expensive exercise in frustration.

    • Re: (Score:3, Interesting)

      Cost is another factor. I don't know how much volume discounts come into play, but licensing 9,000+ cores might cost a great deal if the cluster weren't built by MS themselves. Also, they were able to tweak the OS code and kernel as they saw fit; a Windows HPC customer may not have that flexibility.
  • I bought an external WD hard drive for $200 that was 1 TB. Yay, it's fast, but it isn't going to be doing much with so little storage.
    • I bought an external WD hard drive for $200 that was 1 TB. Yay, it's fast, but it isn't going to be doing much with so little storage.

      That very probably is the total RAM, not disc storage.
  • First, the Top500 [top500.org] list has plenty of value. What most people do not realize (but should) is that it is one data point on the HPC spectrum. If your HPC program does not perform the same or similar matrix operations as HPL [netlib.org], then the ranking is meaningless to you. To some, the list has become a public-relations contest.

    Second, performance is virtually independent of the OS (unless you are using TCP). Most big clusters use InfiniBand and run applications in "user space", bypassing the kernel. The rest of the

  • ... but does it freeze while formatting a floppy?
  • by Cutie Pi ( 588366 ) on Tuesday June 24, 2008 @12:23PM (#23920499)

    While I don't agree that Microsoft Windows HPC Server is the best software to manage a supercomputer, the Linux diehards out there should pay attention to a problem that Microsoft is trying to tackle: accessible supercomputing. See one of their case studies [microsoft.com] as an example.

    The bottom line is, these days pretty much anyone has access to a few TFlops of compute power, but the learning curve for getting something running on these machines is pretty intimidating, especially for non-CS-based disciplines. I've had to take a 1-2 day class, plus futz around with the clunky command-line tools for a few days, on every supercomputer I've used, just to get simple jobs running. In my experience, people learn to game the various batching and queuing systems so that their jobs run faster than everyone else's, further shutting out the newcomers.

    HPC vendors would be wise to focus more attention on the tools and interfaces, so that Joe Researcher can set the number of nodes and go, rather than having to manually edit LoadLeveler text files, send them to the queue, and then come back the next day to find the job failed due to a typo in the startup script.

    On multi-TFLOP systems, not everyone needs 99.5% efficiency with all the implementation details that requires. These days, many people just want their job to run reasonably quickly, with no fuss.

    The same thing happened several years ago with the move to high-level languages like Python and Ruby. Sure, they're slower than C++ and FORTRAN, but for the vast majority of applications you wouldn't know the difference on modern processors. And the turnaround time and user-friendliness of these languages are so much better that using them is a no-brainer.

    Hopefully Microsoft can spur the industry in this direction.
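
    For readers who haven't met the workflow being complained about, here is a sketch of driving a LoadLeveler-style batch system from Python. The directive keywords are LoadLeveler's, but the values, paths, and executable name are made up for illustration; real sites vary.

        import subprocess

        # A LoadLeveler job command file is a shell script whose '# @' lines
        # are scheduler directives, terminated by 'queue'.
        directives = [
            "# @ job_type = parallel",
            "# @ node = 4",
            "# @ tasks_per_node = 8",
            "# @ wall_clock_limit = 01:00:00",
            "# @ output = md_run.out",
            "# @ error = md_run.err",
            "# @ queue",
            "./my_simulation",        # hypothetical executable
        ]
        with open("md_run.ll", "w") as f:
            f.write("\n".join(directives) + "\n")

        # llsubmit is LoadLeveler's submission command; a typo in the file
        # above is exactly what you discover the next morning.
        subprocess.run(["llsubmit", "md_run.ll"], check=True)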

    • by bockelboy ( 824282 ) on Tuesday June 24, 2008 @01:06PM (#23921477)

      From your case study:

      """
      In addition, it is investigating ways to allow users to connect remotely to the cluster. It expects to complete the project and move the cluster into production by March 2009.
      """

      By the time the cluster in the case study allows users to log in remotely, the hardware will have lost at least half of its value.

      While more work is needed to make things user-friendly, you have to remember that the funding is there for CPUs; not many folks are forward-looking enough to realize that researchers really need funding for making stuff easier.

    • by rs232 ( 849320 ) on Tuesday June 24, 2008 @01:14PM (#23921647)
      "Microsoft is trying to tackle: accessible supercomputing"

      Assuming MS was responding to this imagined problem ..

      "The contest showed that supercomputers .. are accessible [supercomputingonline.com] to people interested in pursuing science, simulation or modeling"

      "but the learning curve for getting something running on these machines is pretty intimidating, especially for non-CS based disciplines. I've had to take a 1-2 day class, plus futz around"

      You actually programmed a supercomputer - cool. What type, and where exactly? How does HPC Server differ in this respect from other solutions?

      "the Blue Gene family of supercomputers has been designed to deliver ultrascale performance within a standard programming environment [ibm.com]"

      "Hopefully Microsoft can spur the industry in this direction"

      You mean like continually reinventing Apple, badly .. :)
      • by Cutie Pi ( 588366 ) on Tuesday June 24, 2008 @02:16PM (#23922761)

        Accessibility can mean: 1) able to access, 2) easy to use. When it comes to supercomputers, the former is very much true nowadays, but the latter is not. And it's not just a matter of programming. Pretty much all supercomputers can be programmed with a standard programming environment, say C + MPI + ScaLAPACK libraries. (I think more could be done on that side too, but that is a different story.)

        But the steps required to actually run the programs can be exceedingly difficult. I liken it to the state of desktop Linux about 12 years ago... Yes, it was accessible in that PCs were everywhere and you could grab a free copy of Slackware, but the setup process was mind-numbing. Setting up X was not for the faint-hearted, as it required knowing intimate details about your graphics and display hardware. There were stern warnings that using the wrong modeline values could damage your CRT. Nowadays even my grandmother could install Ubuntu and everything would be automatically detected. That's the progress that I think needs to happen on the supercomputer user-interface side of things.
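
        To make the "C + MPI" environment mentioned above concrete, a minimal message-passing example, here via Python's mpi4py rather than C (an assumption for brevity; the structure is the same): every rank computes a piece and rank 0 combines them.

            from mpi4py import MPI   # assumes an MPI stack plus mpi4py installed

            comm = MPI.COMM_WORLD
            rank = comm.Get_rank()   # this process's id within the job
            size = comm.Get_size()   # total processes started by mpiexec

            partial = float(rank + 1)            # stand-in for real work
            total = comm.reduce(partial, op=MPI.SUM, root=0)

            if rank == 0:
                print("%d ranks, reduced total = %s" % (size, total))

        Launched with something like "mpiexec -n 32 python sum_ranks.py"; the batch-system hurdles described elsewhere in the thread are all about getting that one command onto the right nodes.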

        • by rs232 ( 849320 )
          "I've had to .. futz around with the clunky command-line tools .. on every supercomputer I've used"

          What supercomputers have you used, and in what context? Personally, I have found some kind of scripting language de rigueur for serious computing. What alternative do you recommend? For example, how about:

          "Click here to extract a q-analogue of your hypergeometric orthogonal polynomial set"

          I mean if you don't know what that means, then what difference does it make whether you use a script or a bunch of
  • humph..... (Score:3, Interesting)

    by advocate_one ( 662832 ) on Tuesday June 24, 2008 @12:46PM (#23921035)
    now see how fast the identical hardware runs with Linux on it... bet it goes way faster...
  • http://www.top500.org/system/8757 [top500.org]

    Look at the description. Does it run RH? It exports a Lustre filesystem, and I think Lustre only runs on *nix.

    Does anyone know the real implementation details behind this system? Is it part Linux, part Windows? Was it Linux and now Windows? Did they port Lustre to Windows?

"All the people are so happy now, their heads are caving in. I'm glad they are a snowman with protective rubber skin" -- They Might Be Giants

Working...