Supercomputing Hardware Technology

Asus Releases Desktop-Sized Supercomputer 260

angry tapir writes "Asustek has unveiled its first supercomputer, the desktop computer-sized ESC 1000, which uses Nvidia graphics processors to attain speeds up to 1.1 teraflops. Asus's ESC 1000 comes with a 3.33GHz Intel LGA1366 Xeon W3580 microprocessor designed for servers, along with 960 graphics processing cores from Nvidia inside three Tesla c1060 Computing Processors and one Quadro FX5800."
This discussion has been archived. No new comments can be posted.

  • wow (Score:2, Funny)

    by Anonymous Coward

    and it's much cheaper and more effective than just using multiple multi-core processors. parallel computing is the future. how long before we have three dimensional processors?

    • Technically, graphics cards are four-dimensional already, which is why they can solve certain types of problems with such great speed: their construction happens to fit those problems very well.

      The interesting thing about those problems is that many difficult problems in other formats can be massaged to fit into that format.

  • Hrmm (Score:5, Funny)

    by acehole ( 174372 ) on Tuesday October 27, 2009 @01:17AM (#29881135) Homepage

    How many pets would I have to eat to balance out the carbon footprint of this?

    I've got a six-pack of kittens at the ready.

    • Re:Hrmm (Score:5, Funny)

      by wisty ( 1335733 ) on Tuesday October 27, 2009 @01:20AM (#29881151)

      The PSU is only 1100W. It's not that intensive - three teslas are like three big graphics cards. 2 or 3 kittens would be sufficient, so you've got enough to share.

      Do you have pepper sauce?

      • by syousef ( 465911 ) on Tuesday October 27, 2009 @01:58AM (#29881285) Journal

        The PSU is only 1100W. It's not that intensive - three teslas are like three big graphics cards. 2 or 3 kittens would be sufficient, so you've got enough to share.

        1100W? Can I eat my vacuum cleaner instead? Yummy.

        Do you have pepper sauce?

        Pepper sauce? Pepper sauce?!? Do you have any idea what the carbon footprint of pepper sauce is? My brother ate pepper sauce once. He had to eat a whole zoo full of animals to make up for it! Stay away from the sauce!

      • How many pets would I have to eat to balance out the carbon footprint of this? I've got a six-pack of kittens at the ready.

        Do you have pepper sauce?

        Seconded; pepper sauce goes great with bonsai kittens [google.com], though I don't think these come in six-packs unfortunately.

    • Forget pets, this is going to take a 6 pack of HUMAN babies!
    • by mcgrew ( 92797 ) *

      How many pets would I have to eat to balance out the carbon footprint of this? I've got a six-pack of kittens at the ready.

      Don't eat 'em, that's wasteful. Kittens don't have much meat on 'em. You should huff them instead [wikia.com].

      The orange ones will fuck you up REAL good.

  • by Xin Jing ( 1587107 ) on Tuesday October 27, 2009 @01:21AM (#29881153)

    As a participant in the Milky Way and SETI projects for BOINC, I can say this development is impressive and would be a cruncher's dream come true. It would put supercomputing power in the hands of the everyman and allow applications that rely on distributed computing to take a leap forward.

    • by Anonymous Coward on Tuesday October 27, 2009 @01:27AM (#29881173)
      More importantly can this actually run Crysis 2? Probably not.
    • by Hurricane78 ( 562437 ) <deleted&slashdot,org> on Tuesday October 27, 2009 @02:29AM (#29881391)

      Yeah, as long as that everyman can afford $14,519 for crunching purposes...

      For that price I'd build myself a real virtual reality gaming room.

    • I can't imagine anyone buying such a machine specifically to run SETI@Home or similar projects. If you want/need a machine like this you will have a specific use for it, as I don't think it's that speedy for most games etc. To run your projects on graphics cores you will need special software; this is useless for generic computing. And those distributed projects are set up with the idea of using spare cycles - not to buy hardware specifically for it.

      Now if you still happen to have spare time on the comput

    • As a participant in the Milky Way and SETI projects for BOINC, I can say this development is impressive and would be a cruncher's dream come true. It would put supercomputing power in the hands of the everyman and allow applications that rely on distributed computing to take a leap forward.

      BOINC already supports CUDA & has alpha(?) support for ATI's version.
      There's nothing stopping you from packing a tower with graphics cards and a high end PSU.

    • As a participant in the Milky Way and SETI projects for BOINC, I can say this development is impressive and would be a cruncher's dream come true. It would put supercomputing power in the hands of the everyman and allow applications that rely on distributed computing to take a leap forward.

      Yeah, but unless it's going to offer the surreal experience of porn in 4-D, you're probably not going to get many people biting to spend this "paltry" amount.

      Now, I CAN see the average man "investing" $15K for a new holodeck o'porn...Sad? Yes. True? Damn skippy.

    • by selven ( 1556643 )

      Troll? I hope that was a misclick...

    • As a fellow SETI participant, I also have to appreciate how much faster this would allow us to not find evidence of alien life. Maybe I would use a few hundred cores for one of the other BOINC projects too...
    • Remember that this 1.1TF is single-precision; double-precision is around 240GF. Let's hope they fix this in the next version.

      Also, there are 240 cores per C1060, for 720 cores total of Tesla power. The additional 240 cores come from the Quadro in the system; those cores may occasionally be busy with graphics work and unavailable for computation.
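
The single- vs double-precision gap can be sketched with back-of-envelope arithmetic; the clock rate and flops-per-cycle figures below are rough approximations for a C1060-class card, not official specs:

```python
# Rough peak-throughput estimate for a Tesla C1060-class card.
# The 1.3 GHz clock and flops-per-cycle figures are approximations.
def peak_gflops(units, clock_ghz, flops_per_cycle):
    return units * clock_ghz * flops_per_cycle

sp = peak_gflops(240, 1.3, 2)  # 240 single-precision cores, fused mul-add
dp = peak_gflops(30, 1.3, 2)   # only ~30 double-precision units per card
print(f"single: ~{sp:.0f} GFLOPS/card, double: ~{dp:.0f} GFLOPS/card")
```

Three cards at roughly 78 double-precision GFLOPS each is about where the ~240 GF system figure comes from.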

    • Most people can do distributed computing far more powerfully for less money by getting a more potent graphics card, whether it's from Nvidia or ATI, really.

      Build cost can probably be around $400 that way, with better performance for cheaper. Doesn't Nvidia advertise around 1 or 2 Tflops for their graphics cards, as ATI does?

  • Very impressive, but you could get something very similar last year.
  • http://www.youtube.com/watch?v=7Eb1yih5kNY [youtube.com]

    I remember when that ad came out. I was so pissed. Apple preys on people who have no concept of the scale of computing and this campaign really got under my skin. Now I just laugh at it, but they're still advertising this way, with their comparison charts and graphs touting biggest and best with comparisons to competitors' computing hardware from years past.

    • Yes, this bugged me too. The Apple campaign actually made sense. The definition of 'supercomputer' for US export had not been updated for some years and so the G4 actually was classified as a supercomputer for export purposes. This ad was actually run in response to the fact that the US government was not permitting G4-based machines to be exported to any of the 50 countries that were under arms embargo at the time, and which had previously been able to buy G3-based Macs. The definition, as I recall, wa

  • Index? (Score:3, Funny)

    by Swoopy ( 101558 ) on Tuesday October 27, 2009 @01:32AM (#29881185)

    The real question, of course, is what the "Windows Vista experience index" of this machine is. If it's anywhere below 5.5 it's obviously not worth the bother.

  • Super computer? (Score:5, Informative)

    by MosesJones ( 55544 ) on Tuesday October 27, 2009 @01:32AM (#29881187) Homepage

    Ummm, isn't this just a ridiculously powerful desktop computer rather than a super computer? The current 500th super computer on the top500 list is this machine [top500.org], which has an Rmax of 17 Tflops and an Rpeak of just over 37.6. Now it's impressive that this desktop system has 1/37th of the power of the lowest machine on the super computer list... but does that really make it a super computer? Moore's Law says it will take around 10 years for this desktop box to evolve to the power of that current bottom top500 box. So in other words it's 10 years behind the performance of the current 500th-best super computer.

    If it's because it hits 1 Tflops, then in a few years' time you'll have mobile phone "super computers", as Moore's Law is still moving onwards.

    This is a very, very fast desktop computer suited to certain simulation workloads that are GPU-intensive. Nice box, fast box... but not a real modern super computer.
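
The "10 years behind" figure checks out as rough arithmetic, assuming performance doubles about every two years (one common reading of Moore's Law applied to throughput):

```python
import math

# Doublings needed to get from this box's 1.1 TFLOPS to the ~37.6
# TFLOPS Rpeak of the current #500 machine, at one doubling per ~2 years.
doublings = math.log2(37.6 / 1.1)
years = 2 * doublings
print(f"{doublings:.1f} doublings -> roughly {years:.0f} years")
```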

    • by the_humeister ( 922869 ) on Tuesday October 27, 2009 @01:55AM (#29881275)

      Well, that's easy enough. Just get 38 of these things, hook'em together and MosesJones, you will have #500 on that list!

    • Re: (Score:2, Informative)

      by textstring ( 924171 )

      So in other words its 10 years behind the performance of the current 500th best super computer.

      If the top500 list is really a good indicator, this system would have definitely made the 2004/06 list and maybe the 2004/11. You can basically build a 5 year old top 500 supercomputer today for $15k. It would have been top 10 in 1999/06. So it's 10 years from top 10 supercomputer to a personal, desktop "super"-computer but it'll probably take even less time for today's fastest machines to become affordable.

      Also remember this is your personal supercomputer. It's working on your jobs 24/7. And really, 1/40th

    • by mcgrew ( 92797 ) *

      Indeed; your cell phone is more powerful than a supercomputer [wikipedia.org] now.

      Not a modern supercomputer, of course.

    • by CompMD ( 522020 )

      "Now its impressive that this desktop system has 1/37th of the power of the lowest machine on the super computer list... but does that really make it a super computer?"

      Just imagine a beowulf cluster of them!

  • by HalfFlat ( 121672 ) on Tuesday October 27, 2009 @01:32AM (#29881195)

    The Tesla c1060 [nvidia.com] processor boards sound like a very efficient way of packing in compute power, but unless they're neglecting to mention it, the 4GB of GDDR3 RAM each has on board has no error correction. Given the rates of correctable errors observed e.g. here [toronto.edu], I could never recommend using it for computing simulations that matter. A flipped bit in a floating point number can have a disproportionate effect on the outcome of calculations that rely upon it, and short of running the whole simulation a second or third time, one couldn't be confident that such an error did not occur.

    Large compute-intensive simulations can take weeks, and are used to justify engineering and business decisions that involve the disposition of large amounts of money and other resources — it is important that the computational part of the process can be relied upon.
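
The flipped-bit point is easy to demonstrate; a minimal sketch in pure Python, using `struct` to get at the bit pattern of a 64-bit float:

```python
import struct

# Flip one bit of a double's representation, as an uncorrected memory
# error might, and see how much the value changes depending on the bit.
def flip_bit(x, bit):
    (bits,) = struct.unpack("<Q", struct.pack("<d", x))
    (y,) = struct.unpack("<d", struct.pack("<Q", bits ^ (1 << bit)))
    return y

print(flip_bit(1.0, 0))   # low mantissa bit: 1.0000000000000002
print(flip_bit(1.0, 55))  # an exponent bit: 0.00390625, off by 256x
```

Whether the corruption is negligible or catastrophic depends entirely on which bit flips, which is exactly why one can't just eyeball the results.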

    • by Anonymous Coward on Tuesday October 27, 2009 @01:53AM (#29881261)

      I'm a student at the University of Washington and once talked to a representative from Cray about using GPUs as a cheaper supercomputer, and he told me that they generally have a nontrivial error rate. The issue with using ECC memory is that the GPUs are also liable to errors within their computations, making the ECC RAM pointless. A weird pixel in one frame of a game is no problem, but an error when performing a large simulation creates problems if the algorithm isn't designed to compensate for that noise.

    • by bertok ( 226922 ) on Tuesday October 27, 2009 @02:18AM (#29881353)

      The Tesla c1060 [nvidia.com] processor boards sound like a very efficient way of packing in compute power, but unless they're neglecting to mention it, the 4GB of GDDR3 RAM each has on board has no error correction. Given the rates of correctable errors observed e.g. here [toronto.edu], I could never recommend using it for computing simulations that matter. A flipped bit in a floating point number can have a disproportionate effect on the outcome of calculations that rely upon it, and short of running the whole simulation a second or third time, one couldn't be confident that such an error did not occur.

      Large compute-intensive simulations can take weeks, and are used to justify engineering and business decisions that involve the disposition of large amounts of money and other resources — it is important that the computational part of the process can be relied upon.

      Which is why the upcoming NVIDIA "Fermi" GPU based boards will support 4GB of ECC memory. Also, they'll have about 2 TFLOPS of single-precision power, and you can stack 4 of them in a box = 8 TFLOPS beside your desk.

      I can't wait until the US government starts banning these things because they could be used by terrorists to design nuclear weapons or something. 8)

      • Re: (Score:3, Informative)

        by sjames ( 1099 )

        Keep in mind that TFLOP is not a single benchmark. There's theoretical peak and then there's actual linpack performance. Single precision is rarely good enough for simulations, they all use double. Naturally, all marketing slicks like to talk about single precision theoretical peak because it's a nice big number, but you'll NEVER actually see that, even in a benchmark. If you're very lucky, your actual practical performance will be in the same neighborhood as the linpack double precision benchmark.
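
The single-vs-double point can be seen without any GPU at all; a quick sketch that rounds through IEEE single precision using `struct`:

```python
import struct

def f32(x):
    """Round a double to the nearest IEEE-754 single-precision value."""
    return struct.unpack("<f", struct.pack("<f", x))[0]

# Near 1e8 the spacing between adjacent float32 values is 8.0, so an
# update of +1.0 is rounded away entirely; a double still resolves it.
print(f32(f32(1e8) + 1.0) == 1e8)  # True: the +1 vanished in float32
print((1e8 + 1.0) == 1e8)          # False: double keeps it
```

A simulation that repeatedly adds small increments to large running totals hits exactly this wall in single precision.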

    • A flipped bit in a floating point number can have a disproportionate affect on the outcome of calculations that rely upon it, and short of running the whole simulation a second or third time, one couldn't be confident that such an error did not occur.

      First rule in government spending: why build one when you can have two at twice the price?
      -Contact [imdb.com]

      TFA says something about "US$14,519 over five years" for one box.
      Is that cheap enough to justify buying twice what you need and running the simulation in parallel?

      As an aside, the biggest problem I see is that it 'only' has 24GB of RAM.
      In my uninformed opinion, that doesn't seem nearly enough for supercomputing purposes.
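
The "buy two" idea amounts to redundant execution with a compare step; a minimal sketch, where the simulation is a stand-in random walk rather than any particular code:

```python
import random

# Run the same deterministic simulation twice ("two boxes", same seed)
# and flag divergence, which on identical inputs would point to a
# hardware fault such as an uncorrected memory error.
def simulate(seed, steps=10_000):
    rng = random.Random(seed)
    x = 0.0
    for _ in range(steps):
        x += rng.gauss(0.0, 1.0)
    return x

a = simulate(seed=42)
b = simulate(seed=42)  # the "second machine", same inputs
assert a == b, "runs diverged: suspect a hardware fault"
print("runs agree")
```

This only catches transient errors, of course; a deterministic software bug reproduces identically on both boxes.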

  • Comment removed (Score:3, Informative)

    by account_deleted ( 4530225 ) on Tuesday October 27, 2009 @01:40AM (#29881223)
    Comment removed based on user account deletion
    • Re:Not long ago (Score:4, Interesting)

      by XDirtypunkX ( 1290358 ) on Tuesday October 27, 2009 @02:30AM (#29881395)

      We've had over a teraflop of single precision available to consumers in graphics card form for a few years now; the newly released ATI 5870 actually has more than double that in a single chip. Soon the 5870 x2 (with double the performance again) will be out and you'll be able to have multiple of those in one PC.

    • by Targon ( 17348 )

      Computation power only matters if you actually use it for something useful. The fastest chip out there that isn't being used, or does not generate reliable results is worthless.

  • WIndows 7 not Vista? (Score:2, Interesting)

    by syousef ( 465911 )

    processors to attain speeds up to 1.1 teraflops.

    So you're saying it's fast enough to run Windows 7, but forget Vista?

  • by AHuxley ( 892839 ) on Tuesday October 27, 2009 @02:04AM (#29881307) Journal
    This is:
    http://helmer.sfe.se/ [helmer.sfe.se]
  • I put it to you that any computer that fits on or under a desk is not "super".
  • ...which will be used principally for... typing e-mails and surfing the internet, just like 90+% of other desktop computers... oh yeah, and downloading lots and lots of porn. Way to go, guys! Keep the hits coming!
  • by Gori ( 526248 ) on Tuesday October 27, 2009 @03:35AM (#29881605) Homepage

    Can somebody who has actually worked with such machines enlighten me about their performance on tasks that are not floating-point intensive? Our simulations mainly push many, many objects around, with relatively little or no floating point math in them.

    Do such machines still make sense, or are we better off with a bunch of general-purpose CPUs clustered together? How do they compare to Sun's Niagara CPUs that have umpteen hardware threads in them?

    • Re: (Score:3, Interesting)

      by Anonymous Coward

      I work in military research in the UK; we've been building similar machines to this general spec (Xeon/Nehalem/Nvidia Teslas/loads of RAM) for a year or so now. This type of machine is pretty amazing for running our engineering codes; we've achieved a 30x speed-up in some cases when compared to a regular high-end desktop PC, running a variety of fluid dynamics codes.

      Although it's not a high priority to my management, I personally think the power consumption of the Teslas when compared to regular super compu

      • by Gori ( 526248 )

        Thanks for the comment!

        running a variety of fluid dynamics codes.
         

        This is indeed the key. Our models are Java/semantic web types of things, with many, many threads and inter-agent communication, and almost no math. I guess in that case it would not make too much sense to move to these architectures.

    • I have a simulation I ported to the GTX280, it's all integer. For my simulation, the biggest problem with using the GPU is memory access patterns, I need too much random read/write which doesn't fit real well with their processing model.
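
The access-pattern problem the parent describes can be made concrete with a toy model of memory coalescing: count how many memory segments one warp's loads touch. The 128-byte segment size is illustrative of GPUs of this era, not a spec quote:

```python
# Toy model of GPU memory coalescing: a warp of 32 threads each loads
# one 4-byte word; loads falling in the same 128-byte segment can be
# served by a single memory transaction, scattered loads cannot.
SEGMENT_BYTES = 128
WORD_BYTES = 4

def transactions(indices):
    return len({(i * WORD_BYTES) // SEGMENT_BYTES for i in indices})

sequential = range(32)                   # thread t reads element t
scattered = [t * 64 for t in range(32)]  # thread t reads element 64*t
print(transactions(sequential))  # 1  (fully coalesced)
print(transactions(scattered))   # 32 (one transaction per thread)
```

Random read/write, as in the parent's integer simulation, behaves like the scattered case, so much of the memory bandwidth is wasted.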
    • by sjames ( 1099 )

      No. Actually simulations for engineering, chemistry, and physics are ALL ABOUT floating point. It's Integer performance that is nearly irrelevant.

  • can it run MATLAB? (Score:2, Interesting)

    by nerdyalien ( 1182659 )

    nice to have powerful machines. But what about the programming end ?

    More specifically, can it run MATLAB or Octave and use all the flops for computations ?

    I think its a known fact that most academia use MATLAB/Octave to do model creation/testing...

  • It seems like loading up a motherboard with loads of PCI Express slots coupled with the 5520 chipset just makes for trouble. I actually almost bought the motherboard upon which this computer is based, the P6T WS Professional, but the problem is that it got some fairly mixed reviews as far as stability goes. Tyan has a similar product, the S7025, which lets you use two CPUs. In both cases, people are reporting issues with the boards.

    It's rather unlike ASUS, for sure, as I trust the brand of motherboard. So

    • It's rather unlike ASUS, for sure, as I trust the brand of motherboard.

      I automatically bought ASUS motherboards by choice, and that's usually worked out well, but the last two models I've bought have been nightmares... the M2A/VM and its predecessor model (I forget the model number now). I bought the older model, and was unable to use but a single channel of RAM. The store "upgraded" me to the M2A/VM, and after replacing it twice... the store was unable to get the first two to work dual channel even with t

  • ... to speed up web access on an HTTPS-only web site?

  • But... (Score:2, Funny)

    by akeyes ( 720106 )
    Does it run Linux?
  • Now, not only can you fold the DNA markers, you can also hotbox about 960 WoW accounts all at the same time....
    Groovy, I want one for Christmas.

  • Does anyone have historical comparisons going back to the 70s, e.g., how many teraflops and how much RAM NASA had during Apollo?
    I have this memory of an ad taken out by Boeing in the late 70s, offering their world-class supercomputers to researchers; among the leading-edge attributes was 500 meg of solid state memory.

  • Yawn (Score:3, Insightful)

    by sluke ( 26350 ) on Tuesday October 27, 2009 @08:40AM (#29883055)

    While this sort of machine is useful (I just built one for quantum Monte Carlo calculations 6 months ago), it is hardly news. NVIDIA has been pushing this sort of machine since the launch of the Tesla. In fact, they have had a parts list on their website [nvidia.com] for some time telling exactly what is needed to put together a computer with 4 C1060s. This is not even the first commercial offering of this nature, with companies like appro [appro.com] and microway [microway.com] having had similar products for at least a year (see nvidia [nvidia.com] for a more complete list).

  • For some reason this story is tagged !supercomputer (as well as supercomputer), which seems downright churlish.

    Sure, NVidia's CUDA architecture is quite specialized and has some severe constraints, but OTOH so do any of these modern cluster-type supercomputers. Certain types of application map well onto these architectures and others don't. The CUDA architecture is certainly more constrained in terms of memory access per node and inter-node connectivity than, say, a cluster of Linux nodes.

    OTOH, look at the down

  • !Supercomputer (Score:3, Interesting)

    by Stoutlimb ( 143245 ) on Tuesday October 27, 2009 @02:59PM (#29888175)

    If they can call a custom desktop PC a supercomputer, because it has specs that used to be in the range of supercomputers, then my wristwatch is also a supercomputer.
