Sun Microsystems Hardware

Is Sun's Niagara Server Viagra?

argonaut writes "Ace's Hardware has an in-depth article on Niagara -- Sun's upcoming parallel server processor with eight cores running four threads each. The article discusses the chip's radical architecture and what kind of performance can be expected from it in traditionally thread-heavy server applications like web hosting, databases, and other multi-user workloads. Given the recent cancellation of the UltraSPARC V, this appears to be the new direction for Sun's in-house CPU design efforts. Furthermore, both Intel and IBM are working on other highly parallel processors, and AMD is expected to eventually introduce a dual-core Opteron. So, will more threads prop up Sun's performance?"
This discussion has been archived. No new comments can be posted.


  • Viagra? (Score:5, Interesting)

    by DaHat ( 247651 ) on Monday April 19, 2004 @04:04PM (#8908303)
    Good thing I didn't receive this story in my e-mail box, or it would have been nuked as spam.

    Seriously though, why did the author have to use the term Viagra simply to mean 'performance boost'?
  • by theM_xl ( 760570 ) on Monday April 19, 2004 @04:08PM (#8908352)
    The obvious answer: Sure it will. Assuming the ability to have them will in fact be used by the software running on the thing - which may still take a while.
  • by NerveGas ( 168686 ) on Monday April 19, 2004 @04:16PM (#8908451)

    It's the same thing that's been happening for the last decade. As x86 slowly creeps in on Sun/IBM/Whatever's market, they have to come up with something "bigger".

    Right now, the Opteron, with embedded memory controller and gobs of I/O, has really entered what was previously a niche market that Sun made very nice profits from.

    So, now that that particular cash cow has fallen to the ravages of commodity parts, they're setting their sights even higher. Sun has never been the company to make $5 profit on each of 50 million computers; they'd much rather make $300,000 on each of 1,000 computers.

    steve
  • by markc ( 42459 ) on Monday April 19, 2004 @04:17PM (#8908455) Homepage
    Any advances Sun may have in CPU performance will be greatly outweighed by two major engineering design flaws they've gotten themselves into:

    1. Overall system performance of their partitionable systems (i.e. the ones people will pay a premium for, versus low-end systems where Linux on Intel/AMD is killing them) is severely hampered by their 150MHz (MHz!) backplane. Sun views this as a plus because it allows customers to run boards with different CPU speeds (e.g. a 750MHz board (5x backplane speed) and a 900MHz board (6x backplane speed)). So board-to-board throughput suffers and overall scalability is reduced.

    2. Their desire for greater hardware isolation between domains (down to only a 2- or 4-CPU board with whatever memory happens to be installed on it) severely limits flexibility in workload management between logical servers (domains), as well as in creating and deploying many smaller servers. IBM's LPAR architecture and HP's vPars are kicking Sun's ASS!

  • by joelparker ( 586428 ) <joel@school.net> on Monday April 19, 2004 @04:17PM (#8908460) Homepage
    This Sun Niagara processor looks promising,
    especially for server software that uses threads.

    I'm especially intrigued by how this will
    work with the Java NIO (new I/O) libraries,
    which handle pooled selectable I/O channels.

    Niagara and Java NIO together look like
    a really fast way to do mass serving...
    Can anyone comment on threads sharing I/O?
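The pooled-selectable-channels style mentioned above can be sketched with java.nio's Selector, where one thread (or one per core) multiplexes many non-blocking channels instead of dedicating a thread to each socket. This is only a rough illustration; the class name, port choice, and timeout here are made up for the sketch, not taken from the article:

```java
import java.io.IOException;
import java.net.InetSocketAddress;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;
import java.nio.channels.ServerSocketChannel;
import java.nio.channels.SocketChannel;

public class NioSketch {
    // Opens a selector, registers a non-blocking server channel,
    // triggers one connection, and reports how many channels were ready.
    static int demo() throws IOException {
        Selector selector = Selector.open();
        ServerSocketChannel server = ServerSocketChannel.open();
        server.bind(new InetSocketAddress("127.0.0.1", 0)); // ephemeral port
        server.configureBlocking(false);
        server.register(selector, SelectionKey.OP_ACCEPT);

        int port = ((InetSocketAddress) server.getLocalAddress()).getPort();
        // A client connects, producing a single OP_ACCEPT event on the selector.
        SocketChannel client = SocketChannel.open(new InetSocketAddress("127.0.0.1", port));

        int ready = selector.select(5000); // block (up to 5s) until an event arrives
        for (SelectionKey key : selector.selectedKeys()) {
            if (key.isAcceptable()) {
                SocketChannel accepted = ((ServerSocketChannel) key.channel()).accept();
                if (accepted != null) accepted.close();
            }
        }
        client.close();
        server.close();
        selector.close();
        return ready;
    }

    public static void main(String[] args) throws IOException {
        System.out.println("ready channels: " + demo());
    }
}
```

A selector loop like this keeps hardware threads busy with ready channels rather than parking thousands of threads on blocked reads, which is presumably why NIO and a chip like Niagara look like a good match.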

  • This could be HUGE (Score:5, Interesting)

    by menace3society ( 768451 ) on Monday April 19, 2004 @04:22PM (#8908514)
    If Sun doesn't cancel this one, it could put them back on the map for server & enterprise-class computing. Low power, awesome multi-threading capabilities, and software that could only be described as "bad-ass" (the 3D desktop should be out by then) will give Sun an edge that would take everyone else years to catch up to.

    But that's a big "if."
  • Yes, you're right. Except, of course, that the price-to-performance ratio of the x86 platform remains unmatched; x86-64 has removed some of the limitations that made the platform unsuitable for the high end, and now Intel has been forced to follow. I fear for Sun's long-term future. In the long run, value for money always wins in business. Or so I [afriguru.com] think.
  • by levram2 ( 701042 ) on Monday April 19, 2004 @04:32PM (#8908613)

    The Cell processor follows the same design idea, if not a more radical one. Sony has cross-licensed it to IBM and Toshiba. Toshiba is already planning on using Cell in high-end processing.

    The big question is if bandwidth constraints will choke these massively parallel superscalar processors.

  • by hotchai ( 72816 ) on Monday April 19, 2004 @04:37PM (#8908660)
    What you say is absolutely true, but ...

    1. It is an easier upgrade path for customers. I think Sun learnt that it is easier to sell its customers incremental upgrades than to sell them brand new designs. Remember that the market they sell to (telco, financial) absolutely despises having to test all their mission-critical applications on new, unproven hardware. So while the slow backplane is a performance limitation, many customers may prefer stability to cutting-edge performance.

    2. Wait for the 'Zones' in Solaris 10 ... I've heard it is better than anything IBM & HP have to offer.
  • Re:Memory subsystem? (Score:3, Interesting)

    by iabervon ( 1971 ) on Monday April 19, 2004 @04:38PM (#8908667) Homepage Journal
    Sun doesn't make commodity processors, and they (at least in theory) have much better memory controllers already. Since it's a lot easier to improve memory bandwidth than memory latency, it makes a lot of sense to uberthread their CPU, because they can move a lot of data in a single round-trip. If you have time to get 64 threads to their next cache misses in the time it takes to start getting data, and you can have 64 requests in flight at the same time, you're going to keep the processor 64 times as busy with heavy threading as with a single thread per processor.
  • This actually brings up something I have been thinking about recently: what classifies something as commodity hardware? It's not as if an Opteron box can be had for a tremendously low price; HP's quad-processor Opteron box starts at approximately $20K. I don't really consider that "commodity". Compare that to a quad Xeon box for $26K, and finally to a quad box from Sun for approximately $34K. None of those is what I would consider commodity. So what is commodity pricing?
  • Re:Memory subsystem? (Score:5, Interesting)

    by sheddd ( 592499 ) <jmeadlock.perdidobeachresort@com> on Monday April 19, 2004 @04:44PM (#8908727)
    The reason for this multithreading per core is to reduce the performance penalty while a thread is waiting for data. I think Sun's gone this route based on two assumptions:

    1) Memory latency will be a bigger and bigger bottleneck in systems as CPU frequencies scale.

    2) Technology will not allow memory latency to keep pace with CPU frequency.

    See ace's previous interview [aceshardware.com]

    A snippet:

    Chris Rijk [Ace's Hardware]: Stalled on waiting for data, basically.

    Dr. Marc Tremblay: Yes. In large multiprocessor servers, waiting for data can easily take 500 cycles, especially in multi-GHz processors. So there are two strategies to be able to tolerate this latency or be able to do useful work. One of them is switching threads, so go to another thread and try to do useful work, and when that thread stalls and waits for memory then switch to another thread and so on. So that's kind of the traditional vertical multithreading approach. The other approach is if you truly want to speed up that one thread and want to achieve high single thread performance, well what we do is that we actually, under the hood, launch a hardware thread that while the processor is stalled and therefore not doing any useful work, that hardware thread keeps going down the code as fast as it can and tries to prefetch along the program path. Along the predicted execution path [it] will prefetch all the interesting data and by going along the predicted path [it] will prefetch all the interesting instructions as well.

  • by EatenByAGrue ( 210447 ) on Monday April 19, 2004 @04:48PM (#8908766)
    Sun doesn't have the R&D to keep up in this space. By the time 2006 rolls around, AMD, Intel and IBM will be closing any performance gap with this chip, and their higher volumes will ensure that they blow this out of the water in terms of price/performance. Sun is clinging to an image of itself that no longer works as a business model - hence years of huge losses and layoffs.
  • by markc ( 42459 ) on Monday April 19, 2004 @04:48PM (#8908773) Homepage
    1. That must be why Sunfires are flying out the door...

    2. They HAD to come up with something to counter LPARs, etc... the market shifted and they got caught with their domains down at their ankles... of course, no doubt IBM and HP could (and frankly, maybe already have) come up with something akin to zones/containers as well, ON TOP OF h/w LPARs... the fact remains, IBM and HP offer better h/w flexibility.

  • by dubious9 ( 580994 ) on Monday April 19, 2004 @05:04PM (#8908947) Journal
    I'm not sure about #1, but I always thought Sun had much greater throughput than Intel's. I'm also not sure what you mean by "backplane"; a quick Wikipedia check seems to suggest it's a simple bus with 1-1 pin mapping. Where is this used? Why does it matter? Even mid-range Sun servers have 9.6 GB/sec sustained throughput [sun.com] (Sun Fire Interplane Connect).

    2. As with all things, there are costs and benefits to every feature. I'm sure there are applications that are better suited to greater hardware independence. Still, I'm not sure what you mean here: are you advocating more manageability between CPUs and different domains (which is good for managing several VMs)? With a processor that has eight cores, you'd assume one could put a VM on each core, with each VM having four hardware threads available. How is IBM/HP's offering different?
  • by platypus ( 18156 ) on Monday April 19, 2004 @05:07PM (#8908973) Homepage
    So, with the necessary Solaris installed, your existing Tomcat running on your existing JVM will see all the benefits.

    No, it won't. At least not so simply. It will see the benefits if there are enough concurrent threads running (as you said), and even then only if those threads are not waiting on each other. So it will work when serving many clients at once.
    I have my doubts that this architecture will help with most real-world tasks (even real-world server tasks), even with the blown-out-of-proportion threading Java seems to lead people toward.
    Let's face it: the reason Intel and IBM are not going that far in this direction is not that they couldn't if they wanted to.
    It's that they still have other tricks up their sleeves to ramp up processor performance, and Sun doesn't, or can't afford them.
    For me this is a last, desperate attempt by Sun to prolong its relevance in the processor arena.
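The condition being argued over (that the chip only pays off when there are enough independent threads) can be sketched with plain java.util.concurrent: a workload split into non-interacting chunks scales with thread count, whereas threads that wait on each other would not. The class name and numbers below are illustrative only, not from the thread:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class ThreadScaling {
    // Sum of squares over [0, n), split into `threads` independent chunks.
    // Because the chunks never wait on each other, adding threads (up to the
    // hardware's thread count) adds throughput; the result is identical either way.
    static long parallelSum(int n, int threads) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(threads);
        List<Future<Long>> parts = new ArrayList<>();
        int chunk = n / threads;
        for (int t = 0; t < threads; t++) {
            int lo = t * chunk;
            int hi = (t == threads - 1) ? n : lo + chunk; // last chunk takes the remainder
            parts.add(pool.submit(() -> {
                long s = 0;
                for (int i = lo; i < hi; i++) s += (long) i * i;
                return s;
            }));
        }
        long total = 0;
        for (Future<Long> f : parts) total += f.get();
        pool.shutdown();
        return total;
    }

    public static void main(String[] args) throws Exception {
        System.out.println(parallelSum(1000, 8)); // same answer as a single thread
    }
}
```

The skeptical point stands: if the tasks shared a lock or fed each other's output, most of those 32 hardware threads would sit idle.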

  • by platypus ( 18156 ) on Monday April 19, 2004 @05:21PM (#8909086) Homepage
    It's the same thing that's been happening for the last decade. As x86 slowly creeps in on Sun/IBM/Whatever's market, they have to come up with something "bigger".

    This is not bigger. Taken to the extreme, this is as if Commodore were still in business and trying to sell computers with 2^32 6502 procs.
    Look at the chart in the article to see how desperate Sun is:
    They admit that existing Opterons and Xeons not only kick the ass of their newest existing architecture on a single thread; they also concede that even their nonexistent future proc won't be faster for single-threaded apps.
    Ok, you say, but it is faster for multithreaded apps. The only problem with that is that I bet a recent multiproc Opteron/Xeon will give the future Sun architecture a run for its money.
    And IBM/Intel won't have any problems building multicore procs if they want. They just don't need to at the moment.
    IOW, looking at this chart one might ask why Sun even tries to build processors nowadays.
  • by SmackCrackandPot ( 641205 ) on Monday April 19, 2004 @06:34PM (#8910140)
    If I understand correctly, the Solaris operating system allows the owner to select the number of CPUs they wish to license (it's cheaper for Sun to build a fully configured system and then license the number of CPUs used, rather than send a technician in to change the hardware). Presumably this licensing scheme would be extended to control the number of active cores?
  • by dubious9 ( 580994 ) on Monday April 19, 2004 @06:44PM (#8910253) Journal
    IBM's backplane CAN increase sustained throughput as faster CPUs are installed, for better overall system scalability.

    If I understand the backplane as the CPU-to-CPU bus, then wouldn't a multi-core CPU reduce dependency on the backplane? For applications that require low latency and high throughput, how can you get higher transfer rates than what's available on the CPU itself?

    As for the second point: wouldn't a multicore, multithread, multi-CPU server offer more flexibility for load balancing and on-demand peak handling (e.g., move CPU2 cores 1-3 from mail/fileserver duties to httpd to handle a slashdotting)?

    It seems the differences you're describing are about the overhead of managing multiple physical CPUs, but with this new chip a 4-way could handle what a 16- or 32-way did before. Thus the intra-CPU differences between IBM/HP and Sun are fairly irrelevant. Maybe I'm missing your point.
  • by Anonymous Coward on Monday April 19, 2004 @07:01PM (#8910467)
    They hardly sell single processor systems anyway ... people interested in their types of servers and HPC are not interested in tasks without plenty of parallelism.
  • Yes and no (Score:1, Interesting)

    by Anonymous Coward on Monday April 19, 2004 @07:04PM (#8910496)
    Cell is designed for massive parallelism ... but it is unlikely to be heavily based on multithreading. It's more aimed at stream-type processing, with very predictable memory access patterns.
  • by MisterP ( 156738 ) * on Monday April 19, 2004 @08:04PM (#8911161)
    This is totally true. Scott must have done something to piss Larry off.

    Maybe an Oracle techie/insider can debunk these rumors coming from the sales droids.

    1) 10g was made for linux, all other versions are a "port"

    2) At Oracle you need Larry's signature on the PO if you want to order a Sun box.

    If either of these rumors are true, that's pretty bad news for Sun.
  • by wfmcwalter ( 124904 ) on Monday April 19, 2004 @09:07PM (#8911813) Homepage
    That's true, but until NIO introduced polled I/O, even the best-behaved Java developer had to choose between having rather a lot of threads or having their program crippled by I/O waits. So there's a lot of code out there that does create lots of threads (and it's a handy programming paradigm even now, so it's not going away any time soon).

    As the poster above says, it's only an improvement if you've got lots of threads. An application server is a prime example: it ends up running _lots_ of servlet instances simultaneously, as it's mostly I/O bound (waiting for disks to spin, database servers to respond, XML-messaging backoffice thingies to commune with antique COBOL boxes, etc.). This kind of application will really benefit; other stuff (e.g. graphics, raw calculation) largely won't. But stuff like WebSphere and Tomcat is exactly what folks buy mid/high-end Solaris-SPARC boxes for.

    As to problems on "non-Windows" environments, you'll find fantastic thread handling on AIX, HP-UX, and Solaris, where tens of thousands of extant threads isn't going to bring the machine to its knees. NT is okay; I don't know about the BSDs. Linux _was_ horrible, but I know a bunch of work has gone into threads recently, both in the library and in 2.6. I don't know how much better this has made things.
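The pre-NIO thread-per-connection pattern being described, where each accepted socket gets its own blocking thread, looks roughly like this minimal echo sketch (class and method names are invented for illustration). With thousands of clients this creates thousands of mostly-blocked threads, which is exactly the workload a 32-thread chip is built to soak up:

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.io.PrintWriter;
import java.net.ServerSocket;
import java.net.Socket;

public class ThreadPerConnection {
    // Accepts one connection on a dedicated thread, echoes one line back,
    // and returns what the client heard. A real server would loop on accept(),
    // spawning a new blocking thread per client.
    static String echoOnce() throws IOException, InterruptedException {
        ServerSocket server = new ServerSocket(0); // ephemeral port
        Thread worker = new Thread(() -> {
            try (Socket s = server.accept();
                 BufferedReader in = new BufferedReader(new InputStreamReader(s.getInputStream()));
                 PrintWriter out = new PrintWriter(s.getOutputStream(), true)) {
                out.println(in.readLine()); // the thread blocks on I/O the whole time
            } catch (IOException ignored) {
            }
        });
        worker.start();

        String reply;
        try (Socket c = new Socket("127.0.0.1", server.getLocalPort());
             PrintWriter out = new PrintWriter(c.getOutputStream(), true);
             BufferedReader in = new BufferedReader(new InputStreamReader(c.getInputStream()))) {
            out.println("ping");
            reply = in.readLine();
        }
        worker.join();
        server.close();
        return reply;
    }

    public static void main(String[] args) throws Exception {
        System.out.println(echoOnce());
    }
}
```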
  • An Interesting Plan (Score:3, Interesting)

    by fupeg ( 653970 ) on Monday April 19, 2004 @09:31PM (#8912034)
    It's very interesting to think about who these Niagara-based servers are going to be targeted at. The nifty IOE feature and integrated Ethernet controller seem to guarantee they should be great for telecom purposes. Of course, that's a cursed market that Sun is already king of. Niagara-based servers seem destined to go head-to-head with dual-processor Xeons and Opterons. IT groups building web server farms or clustered databases will have a new option to consider: either go with cheaper, lower-performance Xeons and Opterons running Linux, or with fewer but more expensive Sun Niagaras running Solaris. It's an interesting proposition, and seems like Sun's first real attempt to compete on price/performance. The real x-factor is AMD. If they can really break into the server market, then the Opteron could offer as much performance as the Niagara but at the same (or lower) price as a Xeon.
    It's ironic to see how positions have changed. Intel and AMD are developing multi-core CPUs for use in 4+ way systems, while Sun develops a CPU that is SMP-incompatible. Of course Sun is also working on Rock, hoping it can compete with a Xeon as a single CPU while still scaling to 100-CPU Infernos (or whatever they are going to call them).
  • Sun was ravaged by time. When SPARC began to lose its competitive edge, they would have been forced to get their CPUs from one of their direct competitors in the Unix OS+system market. The processors eating their lunch at the time were DEC's Alpha and IBM's POWER; Intel's chips weren't up to par yet, obviously, nor AMD's. This was when SPARC was still worth something. Now it's hopelessly outdated, and they don't have any IP anyone wants. They can't unload SPARC, and they can't just take a loss, so what do they do with it? Milk it as long as they can, and shake hands with the Devil in order to stay afloat. Which we have seen happen already, so basically, Sun is going to self-immolate soon enough.

    The only way I can see them staying alive is to find a sucker to dump SPARC on and embrace Opteron. If they tie their future to AMD's, then AMD might decide to keep their promises. I do not think it would be very wise to make the same bet on Intel. I'm not sure if Solaris has a future, but I'm pretty sure that if SPARC does, it won't be Sun driving.

  • Re:Sun Sets By 2008 (Score:2, Interesting)

    by Anonymous Coward on Tuesday April 20, 2004 @06:04AM (#8914599)
    I can't comment on the specifics you mention, being a Sun customer/reseller rather than a true insider. However, I am concerned about how the markets and community seem to be down on Sun at present, which could itself be their undoing (i.e. the damage is being caused not by the danger but by the perception of danger).

    Firstly, Sun are absolutely right to keep hold of their processor technology. The market has long since grown out of the feeds-n-speeds contest, and realised that memory latency and I/O throughput matter right through the server room, not just in the top-end systems. To throw away their ability to create a competitive advantage through innovative system architecture would be madness; we've seen how HP have squandered their high-end server business through precisely that tack. The costs of system *and* processor R&D are very high, but they are a key part of what makes Sun different from Dell.

    Secondly, Sun have two major investments which are yet to start paying back: the JES/JDS stack is now at GA and could become a massive source of recurring revenue over 2-3 years; the throughput computing technologies are further off, but again are potentially disruptive. Are IBM and HP quietly working on their own versions? Well, HP aren't, because they haven't got a processor architecture any more.

    Thirdly, Sun won their lawsuit against Microsoft (out of court), though everyone is saying they threw in the towel: they got more money out of Microsoft than any court settlement would have brought, they got an undertaking from Microsoft to support standards, and they got a load of proprietary Microsoft APIs and protocols as a hostage in case Microsoft welches (this was a very clever deal).

    So when will Sun stop losing money? Probably as large corporates and government departments/agencies approach their next Windows refresh and a proportion (it doesn't need to be that big a proportion, either) run with JES/JDS. That refresh cycle comes every 3-4 years, so while we have a few early adopters now, next year could be interesting as the Win2K refreshes start appearing. And Sun has enough cash in the bank (even before another $2bn from Microsoft) that there's no need to sacrifice the strategy.

    D.

"Experience has proved that some people indeed know everything." -- Russell Baker

Working...