Sun Microsystems Hardware

Sneak Peek At Sun's SPARC Server Roadmap

Posted by Soulskill
from the measuring-stick-for-oracle dept.
The folks at The Register have gotten their hands on Sun's confidential roadmap from June, which outlines the company's plans for SPARC product lines. The chart has some basic technical details for the UltraSPARC T-series and the SPARC64 line. The long-anticipated "Rock" line is not mentioned. "We can expect a goosed SPARC64-VII+ chip any day now, which will run at 2.88 GHz and which will be a four-core, eight-threaded chip like its 'Jupiter' predecessor. This Jupiter+ chip is implemented in the same 65 nanometer process as the Jupiter chip was, and it is made by Fujitsu, a company that is in the process of outsourcing its chip manufacturing to Taiwan Semiconductor Manufacturing Corp. ... not only has Sun cut back on the threads with [the 2010 UltraSPARC model, codenamed Rainbow Falls], it has also cut back on the socket count, keeping it at the same four sockets used by the T5440 server. And instead of hitting something close to 2 GHz as it should be able to do as it shifts from a 65 nanometer to a 45 nanometer process in the middle of 2010, Sun is only telling customers that it can boost clock speeds to 1.67 GHz with Rainbow Falls."
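For reference, the core and thread counts in the summary multiply out as follows. Figures are as reported by The Register (the 16-core, 8-thread figure for Rainbow Falls is the one cited later in the discussion); the four-socket total is simple arithmetic on those numbers, not a quoted spec.

```python
# Hardware threads per socket = cores x threads per core
jupiter_plus = 4 * 2     # SPARC64-VII+: four cores, two threads each -> 8 threads
rainbow_falls = 16 * 8   # Rainbow Falls, per the roadmap: 16 cores, 8 threads each -> 128 threads

# A four-socket Rainbow Falls system, like the T5440 it follows:
print(jupiter_plus, rainbow_falls, 4 * rainbow_falls)
```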

Sneak Peak At Sun's SPARC Server Roadmap

  • Overcome by events (Score:2, Insightful)

    by russotto (537200)

    The only things on Sun's roadmap now are signs to the effect of "Road Closed 1000 feet".

    • I guess that's why Oracle are running ads at the moment saying that they plan on spending more on SPARC and Solaris development than Sun ever did...
      • Re: (Score:3, Insightful)

        by timeOday (582209)
        But why? 10 years ago I thought sharing an 8 CPU Sun with a big devel team was a privilege. Now any decent Dell workstation has that. What does SPARC have over Intel? (No vague claims of superior "throughput", please!)
        • by Anonymous Coward on Friday September 11, 2009 @10:58PM (#29395725)

          But why? 10 years ago I thought sharing an 8 CPU Sun with a big devel team was a privilege. Now any decent Dell workstation has that. What does SPARC have over Intel? (No vague claims of superior "throughput", please!)

          It has throughput. Back in 2006, when the first T2000 was released, a dual Xeon could handle 980 req/s from Apache and the T2000 could handle 15,000 req/s:

                          http://www.stdlib.net/~colmmacc/2006/03/23/niagara-vs-ftpheanetie-showdown/
                          http://www.stdlib.net/~colmmacc/2006/03/27/niagara-benchmarks-update/

          At the same time the Xeon used a peak of 2.2 Amps, while the T2000 peaked at 1.2 A. Things have only gotten faster.

          Throw in on-board crypto, and you can do AES-128 at 38.9 Gb/s with a single-socket (eight-core) T5220:

                          http://blogs.sun.com/bmseer/entry/ultra_fast_cryptography_on_the

          A T5440 can do 22,932 MB/s (183,456 Mb/s = 179 Gb/s):

                          http://blogs.sun.com/yenduri/entry/t5440_crypto_performance_numbers

          If you're a site that cares about SSL/TLS, how many x86 machines would you need to buy, maintain, and cool to handle that load? How many F5 load balancers/SSL accelerators would you purchase? According to F5's own data sheet, the 8900 (with dual 850 W P/S) can handle 9.6 Gb/s, and you still have to buy web servers on top of that (more power).

          So the T5120 can do roughly four times the raw encryption rate, uses dual 720 W P/S, and can also do work as a web server. You're also using less rack space.

          Let's also compare to AMD-based systems (which Sun also sells):

                          http://blogs.sun.com/bmseer/entry/web2_0_consolidation_sun_sparc

          Now the Niagara (UltraSPARC-Tx) CPU isn't good for every workload out there, but if yours is highly parallel then it's something you should be looking at.
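As a rough check on the F5 comparison above, here is the back-of-the-envelope division using the quoted figures (numbers taken from the comment as given, not independently verified):

```python
import math

t5440_crypto_gbps = 179.0  # quoted T5440 AES throughput
f5_8900_gbps = 9.6         # F5 8900 bulk-crypto rate, per its data sheet

# F5 units needed just to match one T5440's raw encryption rate
f5_units = math.ceil(t5440_crypto_gbps / f5_8900_gbps)
print(f5_units)
```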

          • by JAlexoi (1085785)
            However, any floating-point operations make the T2000 look like a Pentium 2.
            • So don't buy one if you need floating-point operations.
            • Re: (Score:2, Informative)

              by Anonymous Coward

              That was only true for the first chip. The T2 series has 1 FPU per core.

            • No. The later generations of Niagara procs from the last 2 years or so have 1 FPU per core and perform well under floating point loads. There are still some original T2000 systems out there, but they are specifically for business-oriented apps with mostly integer operations.
          • > Now the Niagara (UltraSPARC-Tx) CPU isn't good for every workload out there, but if yours is highly parallel then it's something you should be looking at.

            If Oracle still charges per core, the Niagara approach of many-core CPUs could be more expensive.

            Looking at the roadmap, they seem to be going to fewer cores, or at least sticking with 8.

            As for power consumption, I wouldn't bet on the Intel x86 always consuming more power than a SPARC for the same performance. They are a scary competitor. They keep introdu
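The per-core licensing worry above can be sketched with Oracle's "processor licenses = physical cores x core factor" rule. The core factors and list price below are illustrative assumptions for the sake of the arithmetic, not quotes from Oracle's actual core factor table or price list:

```python
def oracle_licenses(cores, core_factor):
    """Processor licenses required = physical cores x core factor."""
    return cores * core_factor

PRICE = 47_500  # hypothetical per-processor-license list price (assumption)

# One 8-core Niagara socket at an assumed 0.25 factor vs.
# one 4-core Xeon socket at an assumed 0.5 factor:
niagara_cost = oracle_licenses(8, 0.25) * PRICE
xeon_cost = oracle_licenses(4, 0.5) * PRICE
print(niagara_cost, xeon_cost)
```

Under these assumed factors the 8-core chip licenses no worse than the quad-core; with a higher factor, or with more cores per socket, the many-core approach can indeed cost more.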
            • Re: (Score:3, Informative)

              by joib (70841)


              Can Sun/SPARC keep ahead of them? They might only be ahead in SSL/TLS. And if that becomes a big enough demand, some Taiwanese/Chinese company could start producing cheap PCIe cards to do that.

              Crypto accelerator cards have been available for a long time. Don't know about the price though.


              Or Intel could decide to use some transistors to do it - they have lots of transistors to play with on their chips, it's just a matter of priorities.

              See "Sandy bridge", Intel's next 32nm chip, due Q1 2011, will have extra instruct

            • "If Oracle still charges per core, the Niagara approach of many core CPUs could be more expensive."

              Just run it in a single core Virtual Machine, e.g. VirtualBox [virtualbox.org].

              • by TheLink (130905)
                In my experience databases don't do so well in virtual machines. Unless you use something like Xen.

                And if you run your DB single core, you'd probably do better running it on a powerful single core like i7 rather than a weak T2 core.
                • "In my experience databases don't do so well in virtual machines. Unless you use something like Xen."

                  OK, well that is what I recommended. If I am not mistaken, VirtualBox and Xen are both paravirtualization tools. Maybe I am mistaken, but I said something along the lines of VirtualBox. I cannot imagine why it would matter, actually. Perhaps you can elaborate?

                  Also, I concede it might not be a great idea. I was just throwing it out there as an option.

                  • by jsight (8987)

                    "In my experience databases don't do so well in virtual machines. Unless you use something like Xen."

                    OK, well that is what I recommended. If I am not mistaken, VirtualBox and Xen are both paravirtualization tools. Maybe I am mistaken, but I said something along the lines of VirtualBox. I cannot imagine why it would matter, actually. Perhaps you can elaborate?

                    Also, I concede it might not be a great idea. I was just throwing it out there as an option.

                    Yeah, Virtualbox is NOT paravirtualization.

          • by g00ey (1494205)
            What about Intel's Itanium2 chips?
          • by k8to (9046)

            Now the Niagara (UltraSPARC-Tx) CPU isn't good for every workload out there, but if yours is highly parallel then it's something you should be looking at.

            Highly parallel with *low* cpu needs.

            Niagara is good at dispatch and switching, but not computation.

        • by multiplexo (27356)
          Because it has incredibly superior throughput. I know that there are lots of fanbois out there who are hung up on substitutes for dick size such as clock speed or number of cores, but the throughput on a cheap Sun T2000 kicks the living shit out of anything Dell has in a comparable size and price. If you're running anything that needs to push a lot of bits back and forth throughput is important. Go ahead and try running a NetBackup master server on an Intel box running Linux. It can be done, but the perform
      • by rubycodez (864176)

        No, Oracle said they would spend more "than Sun does now", which is next to nothing for R&D since their sales have tanked.

    • Re: (Score:3, Insightful)

      by onionman (975962)

      Unfortunately, that's looking more true every day. I remember running a network of Sparcs and bragging to my family members about how they (the Sparcs) were sooo much more powerful than the PCs we had in our homes. Seven years later I was replacing all our Sparcs with x86_64 Linux boxes... too bad Sun just couldn't keep up with hardware development. It would be nice if Oracle really did ramp up hardware R&D for Sun, but I can't see those announcements being anything more than reassurances to nervous

    • by reporter (666905)
      In order for a microprocessor to be financially successful, it must enjoy large economies of scale. That Intel can sell essentially the same design (of the x86) in multiple forms to hundreds of millions of customers means that Intel can afford the massive research and development that is necessary to design the typical x86 chip.

      By contrast, though Sun Microsystems often boasted that it has -- actually, had -- the largest microprocessor team after Intel, the team could not design a chip that sold to hun

      • by hemp (36945)

        Believe it or not, back in 1990 I worked on a Sun workstation with an Intel 80386 processor that ran SunOS and *DOS* ( http://en.wikipedia.org/wiki/Sun386i/ [wikipedia.org] ).

      • In the 1990s, Sun could have easily built their company on the unglamorous ARM RISC processor, but Sun management wanted to exhibit the "pride" (and arrogance) of homegrown technology

        And they were right to do so. ARM focused entirely on the embedded market and left companies like Acorn in the cold for workstation chips a few years later. ARM parts were lower power, and maybe cheaper too, but they were much slower than anything else on the market. Sun's mistake was choosing not to compete with ARM; they had low-power SPARCv7 designs, but never pushed them into the mass market. If they'd sold a stack based on the *7 prototypes (low power SPARC+Solaris running in 1MB of

        • by drinkypoo (153816)

          Sun kicked super-low-power microSPARCs out into the market; the market Did Not Want them. They cost too much (surprise!) and weren't as fast as offerings from basically everyone else at the time.

      • by Ed Avis (5917)
        Are you really saying that ARM processors would have outperformed an UltraSPARC III? Which particular ARM chip were you thinking of? Indeed, back in the 1990s were there any 64-bit ARM chips available?
    • by Taco Cowboy (5327)

      For MIPS and Digital, they have hit the end of the road.

      For Sun, the end of the road is near.

      And I am afraid the same goes for the now-fabless AMD.

      One day, not that far in the future, we're going to wake up to the fact that IBM, Intel, and some Taiwanese companies (Nvidia, VIA, TSMC) are the only ones left still making powerful processors for the world.

      And if I am not wrong, IBM may end up not making chips as well.

    • by thogard (43403)

      I want an UltraSPARC IIIi built on a 65 or 45 nm process with modern gigabit Ethernet. It would be far more than I need for my apps. I would like it in a box the size of the Netra X1, running any OS the old ones could run, like Solaris 9. It would be cool if they were $1000 each like the old X1 or V100; I might buy several hundred in that case. Meanwhile I'm buying old X1 systems, putting in SSDs, replacing fans and power supplies, and hoping for the best.

      And I have loads that run faster on the old X1 than on the T1000.

  • by harmonise (1484057) on Friday September 11, 2009 @07:06PM (#29394931)

    The folks at The Register have gotten their hands on Sun's confidential roadmap from June

    If it's confidential, then the Reg shouldn't publish the details. Unless they want to give Sun's competitors a leg up. I'm sure Sun's competitors' marketing teams are happy to have this. [sigh]

    • by Henriok (6762)
      In this business, having a two-month head start is... nothing. I bet HP's and IBM's marketing teams are launching whatever is in the pipe according to the nuances of Sun's roadmap. It must be the reason Tukwila is slipping year after year after year... or not.
    • by icebike (68054) on Friday September 11, 2009 @07:35PM (#29395043)

      Then Sun should, in fact, keep it confidential.

      I'm betting it was leaked to give some assurance to the customer base that there will actually BE a Sun in the future.

      • by eln (21727) on Friday September 11, 2009 @08:50PM (#29395267) Homepage
        It's not necessarily Sun that leaked it. Hardware manufacturers (and software houses, for that matter) routinely show large customers their roadmaps under NDA. It's entirely possible some less than scrupulous employee of one of their big customers leaked it, in violation of their NDA.
        • by mehemiah (971799)
          I think this was a leak like some campaign and White House leaks are. It's possible that they unofficially released this on purpose to gauge the public response, as opposed to an Apple leak, which is simply a ploy to plug leaks. Then finally there are leaks which, like predictions and interpretations of Lost, are simply guesses that someone got right, because when enough people are guessing about the same thing, someone statistically has to get it right (like the iPhone). I still call BS on the people
      • by hairyfeet (841228) <.bassbeast1968. .at. .gmail.com.> on Friday September 11, 2009 @11:59PM (#29396029) Journal

        Which anybody who actually thought about it for more than 2 seconds would know: Oracle will be keeping SPARC and Solaris around for a LOONG time. I mean, let's be honest here: like Bill and Steve, old Larry may be a bit of a bastard, but he is a bastard who knows how to get his ROI. As other posters have pointed out, Solaris + SPARC equals high throughput in specialized tasks (like, say, an Oracle database), and, more importantly to Larry, he now controls the whole smash, from the OS down to the hardware.

        With Linux it wasn't like he could call up Linus and demand he rewrite the kernel to give Oracle maximum throughput, but with Solaris and SPARC he can have the direction of the entire thing shaped by HIS design, towards making it the fastest platform for Oracle possible. And of course, by owning the whole thing he will make many an admin and PHB happy, as there is only one company to call if things go wrong and none of this "it is the other guy's stuff!" blame-passing.

        So I doubt VERY seriously that you'll see anything like killing Solaris or SPARC. More likely Larry will make damned sure that future development is tailored to Oracle, making Solaris+SPARC+Oracle the preferred platform for anyone running Oracle, and making Larry a whole hell of a lot more money in the process. It just makes good business sense.

        • by drinkypoo (153816)

          Which anybody who actually thought about it for more than 2 seconds would know that Oracle would be keeping SPARC and Solaris around for a LOONG time.

          I don't agree. They might TRY to, but it looks very much like SPARC is out of steam. POWER is beating it like a pinata.

          Sun couldn't keep SPARC on top no matter how they tried. Oracle pledges to spend more money, but it's not clear where the money will come from or that it will do any good.

    • by Freaky Spook (811861) on Friday September 11, 2009 @07:50PM (#29395103)

      Sun has been providing these details to its partners at the Sun Partner Advantage Summits; I got this info last month.

      Plus, Sun partners just have to contact their Sun sales managers and ask for a roadmap session (under signed NDA).

      The Register is just publishing what is already pretty common knowledge among most people working with Sun/SPARC hardware, so it won't give their competitors a huge advantage at all. The fact that Sun is already revealing this stuff to its wide partner network means that development is well and truly in its final stages, and if their competitors are finding this out through The Register, then they really are not doing their jobs properly.

      • by NoYob (1630681)
        ...if their competitors are finding this out through The Register, then they really are not doing their jobs properly.

        I had this image of Intel, Motorola, AMD (I can't think of more) sending really hot Russian women to go and seduce the Sun engineers, getting them to divulge everything. After I stopped my 007-ish fantasy, I realized all they'd have to do is send in a pretty lady and have her just say "Hi" and those engineers would divulge everything; they are geeks, after all.

        • Re: (Score:1, Funny)

          by Anonymous Coward

          Idiot. Sun has a lot of female geeks, too.

  • Peak??? (Score:5, Informative)

    by hguorbray (967940) on Friday September 11, 2009 @07:29PM (#29395031)
    I know this is a dark age for literacy, but s/b peek -ya know? like PEEK and POKE???

    I'm just sayin'
    • by Cambo67 (932815)
      They may be referring to an unexpected hill that the cartographers missed.... :)
    • by TheLink (130905)
      Maybe they meant it's a sneaky little peak before the long steep downhill in the roadmap.

      "Peak" SPARC just like "peak oil".
  • And it will cost 50x the cost of cheap PCs?
    Surely, as Google does it, it's better to have 20 cheap PCs, each running E5400s or anything that's less than $100/CPU with $50 motherboards.

    Everything eventually fails; it's not worth spending 50x more for something that lasts 2x longer when replacement costs are super cheap and the replacements are going to be even faster.

    • by Yvan256 (722131) on Friday September 11, 2009 @07:56PM (#29395117) Homepage Journal

      Google goes for the lowest watts per unit of processing; the actual hardware cost is probably negligible compared to the cost of years of electricity for powering the systems and cooling the surroundings.

    • Re: (Score:1, Insightful)

      by Anonymous Coward

      And it will cost 50x the cost of cheap PCs?
      Surely, as Google does it, it's better to have 20 cheap PCs, each running E5400s or anything that's less than $100/CPU with $50 motherboards.

      Everything eventually fails; it's not worth spending 50x more for something that lasts 2x longer when replacement costs are super cheap and the replacements are going to be even faster.

      Google? How many tens or maybe hundreds of millions of dollars has Google spent developing the software that can run on piece-of-shit boxes?

      It sure as hell is relevant to be able to buy one box that simple non-redundant apps can run on when the alternative is trying to pay massive amounts to develop fault-tolerant and redundant custom apps that can run on two or three cheap boxes.

      Because unless you can run your software on lots and lots of boxes like Google does, it's cheaper to throw high-end hardware at

      • Google? How many tens or maybe hundreds of millions of dollars has Google spent developing the software that can run on piece-of-shit boxes?

        Not many, but then Google has a problem that is naturally parallel with very few data dependencies. Not all of us are so fortunate.

        • "Not many": do you know that? Because you posted that like you actually do know, as opposed to it being pure speculation.

          My guess (just an opinion) is that it would absolutely require a large investment in custom software and manpower to create the infrastructure that google has created. And it makes sense for them and is probably worth it.
    • by leenks (906881)

      Clusters of machines aren't good for all problems, and Google doesn't use their huge clusters for everything... Some problems can only really be solved on a larger machine.

      Hardware costs aren't the be-all and end-all any more either; one of the biggest costs is electricity (both for the machines and for cooling). It may well be that it is cheaper in the long run to have fewer of these than a huge cluster.

      Anyway, to me it is quite clear why Oracle want Sparc and Solaris - have a good look at the Oracle produ

  • by Anonymous Coward

    How'd they get this roadmap? More than likely from someone inside Oracle. Now, when Oracle gets Sun and the SPARC chips are better than this, Oracle will get the credit for "saving Sun".

    Or am I too cynical?

  • by sunderland56 (621843) on Friday September 11, 2009 @08:22PM (#29395185)
    Sun had a 486i workstation roadmap, too. They never built a single one. Marketing dreams on a PowerPoint slide don't mean squat.
    • by fm6 (162816)

      Sun employees are not allowed to use Powerpoint.

  • Can anybody give real life examples where the CPU multi-threading brings anything?

    And please only real life examples: no theory, no official PR - I know them well myself.

    • Re: (Score:3, Informative)

      Can anybody give real life examples where the CPU multi-threading brings anything?

      Multi-threading per core helps with video encoding. I saw benchmarks just today at http://www.anandtech.com/weblog/showpost.aspx?i=642 [anandtech.com] showing the results of the same processors run against the same tasks with and without HT enabled. How many thousand more examples do you need to see?

    • by goofy183 (451746)

      How about any sort of web-server-type task? I do development on web-based portal software that is highly threaded. Each thread doesn't do a huge amount of work, but there are a lot of them (multiple threads per web server request), so a machine that can run 128 threads (though each is fairly slow) easily outperforms a machine with much faster CPUs but only 4 or 8 of them.

      Generally, web-server-type loads do better on hardware/clusters that can handle lots of threads, even if the individual threads aren't all that fast.

    • If there were no multithreading, your IE/Firefox would be frozen until it completely loaded any webpage.
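The latency-hiding effect this thread keeps coming back to (many threads covering for each other's waits) can be mimicked in miniature with software threads blocking on I/O-like delays. A toy sketch, not a benchmark:

```python
import time
from concurrent.futures import ThreadPoolExecutor

def fake_request(_):
    time.sleep(0.1)  # stand-in for a blocking wait (network, disk, memory stall)
    return 1

start = time.perf_counter()
with ThreadPoolExecutor(max_workers=8) as pool:
    completed = sum(pool.map(fake_request, range(8)))
elapsed = time.perf_counter() - start

# Eight 0.1 s waits overlap, so wall time stays near 0.1 s rather than 0.8 s.
print(completed, round(elapsed, 2))
```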
  • Sun, if this is the best you can do -- 4 cores, 8 threads, arriving at 45nm just as everyone else is getting to 32nm -- just give it up now instead of asking us to watch a slow, agonizing, death.
    • by Daniel Phillips (238627) on Friday September 11, 2009 @09:47PM (#29395473)

      Sun, if this is the best you can do -- 4 cores, 8 threads, arriving at 45nm just as everyone else is getting to 32nm

      Sun's performance as a chip vendor is far better than your performance as a Slashdot troll. According to Sun's roadmap, a 16-core, 8-threads-per-core processor (128 threads, just to be clear) at 40 nanometers arrives in 2010. That would be four sockets per blade, 48 blades per chassis, for a respectable 768 multithreaded processors per chassis. As Sun says, it comes down to the TPC-C numbers. I'm no Sun fanboi, far from it, but I could be convinced by the right performance/heat ratio.

    • They're saying it's 128 threads per chip, not eight, and at 40 nm. Are you illiterate?
    • by HuguesT (84078)

      Sorry if this is redundant, but the title is not very clear: we are talking about 8 threads *per core*, not 8 threads total like the Intel i7.
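To make the terminology concrete, total hardware threads are cores times threads per core; the counts below are from the era's spec sheets as commonly cited, so treat them as illustrative:

```python
i7_threads = 4 * 2  # Intel Core i7 (Nehalem): 4 cores, 2-way SMT -> 8 threads total
t2_threads = 8 * 8  # UltraSPARC T2: 8 cores, 8 threads per core -> 64 threads total
print(i7_threads, t2_threads)
```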

  • The Ultra 27 was released with 2 PCIe x16 slots .... and it wasn't until we'd bought the damn things that we found out you can't put two FX-5600s in there- the case was designed to prevent it.

    What's that got to do with their SPARC roadmap? Next x86 box we buy will be intel reference design. It's cheaper.

    (not to mention there are bugs with the XVR-300 and the FX series of cards where you can't turn on 3 heads- it's 2 or 4 only)

  • I'm missing indications of better virtualisation features, like the ones I'm used to on IBM gear. These days all the high-end installations I see are running tens to hundreds of virtual machines on a single server. It looks to me like virtualisation at this scale is not even on the roadmap.

    Markus

    • by Lally Singh (3427)

      Solaris Zones not doing it for you?

        • Solaris Zones, as I understand them, isolate applications from each other, but all are running within/on top of the same Solaris instance. As soon as you want to run different OS levels for different apps or environments, you are out of luck.

        For example a new OS maintenance level is usually tested for a while in a test environment before being applied in production. Zones don't help here.

        Often we have also incompatible prerequisite requirements of different apps (3rd party apps are terrible in this res

        • by asaul (98023)

          The T-series (sun4v platform, actually) have LDoms, which are very similar to LPARs but a bit more simplistic in their implementation. You can virtualise storage and networking in a control domain (i.e., like a VIO server) and create domains out of the available threads and memory on the box. So with this you can run individual OSes in each LDom. It even now has live migration, where you can migrate a running LDom between two machines (akin to VMware VMotion or the LPAR equivalent, the name of which esca

        • Zones separate userspace sections from each other and change the kernel interfaces for each one. It's almost equivalent to full virtualization, though you can only present a Linux or a Solaris ABI to userspace.
          • I never used Solaris Zones, and never used the similar AIX 'workload partitions' either. But I'm aware of what they do and how they work.

            In my experience, if you need isolation, going the whole way and using a separate VM/LPAR with its own OS is the better solution. This is why I miss this tech on Sun's high-end servers and don't understand why they don't even seem to plan to catch up in the future.

            Markus

            • Hardware virtualization is supported, but not as well integrated. Apparently lack of customer interest. Good enough is the enemy of perfect.
              • Yes, good enough is the enemy of perfect everywhere. And I suppose if you know your hammer well, you tend to see problems as nails.

                In my area there is not much Sun high-end left (1-2 E20k) and plenty of fat IBM boxes (>50), which might explain part of the lack of customer interest...

                Markus

                • ROFL
                  By the way, my "area" of computing potential is the measly computer labs at school, but thanks for thinking I'm higher up than I am.
