
Sun Kills Rock CPU, Says NYT Report

Posted by timothy
from the what-we-meant-was dept.
BBCWatcher writes "Despite Oracle CEO Larry Ellison's recent statement that his company will continue Sun's hardware business, it won't be with Sun processors (and associated engineering jobs). The New York Times reports that Sun has canceled its long-delayed Rock processor, the next generation SPARC CPU. Instead, the Times says Sun/Oracle will have to rely on Fujitsu for SPARCs (and Intel otherwise). Unfortunately Fujitsu is decreasing its R&D budget and is unprofitable at present. Sun's cancellation of Rock comes just after Intel announced yet another delay for Tukwila, the next generation Itanium, now pushed to 2010. HP is the sole major Itanium vendor. Primary beneficiaries of this CPU turmoil: IBM and Intel's Nehalem X86 CPU business."
  • by Anonymous Coward on Tuesday June 16, 2009 @08:08AM (#28346621)

    Yuck.

    Some days I hate this industry.

  • by bluesatin (1350681) on Tuesday June 16, 2009 @08:22AM (#28346723)
    Zombies? On my Slashdot? It's more likely than you think.
  • Um, Opteron? (Score:5, Insightful)

    by tjstork (137384) <todd.bandrowsky@gm a i l.com> on Tuesday June 16, 2009 @08:27AM (#28346773) Homepage Journal

    Not that I am an AMD fanboy, but my dual Opteron PC just ordered me to remind you all that AMD will also benefit from this choice. Indeed, Sun already uses AMD Opteron parts for some of its servers....

  • by GreenTech11 (1471589) on Tuesday June 16, 2009 @08:39AM (#28346887)
    To summarise the /. summary article, all computing hardware companies are going bankrupt, with the exception of Intel, who are delaying projects as well.

    How I love this industry

  • by downix (84795) on Tuesday June 16, 2009 @08:46AM (#28346943) Homepage

    Do people want more viruses? A virus targets a particular architecture and API, and if everything runs on a single combination you wind up with a perfect storm for virus spreading: a monoculture. Witness the Irish Potato Famine.

    I say we need more diversity of architectures, OSes, platforms and APIs to prevent a pandemic of computer malware. I still laugh at the memory of watching Conficker trying desperately to install itself on my SPARC Kubuntu machine.

  • by TheRaven64 (641858) on Tuesday June 16, 2009 @08:54AM (#28347013) Journal

    Rock is an amazing chip on paper. It runs an extra fetch/decode portion of the pipeline a few cycles ahead of execution, so that data is already being loaded into the cache before it is needed.

    If this technology doesn't work, however, Rock is a pretty unimpressive chip, and there is no evidence that it actually does work (for example, it doesn't predict across computed jumps, which account for a lot of cache misses in current chips). Even if it does work, Rock looked like it would perform best on the kind of workloads where the T2 does well, but probably not as well as the T2. Of the SPARC64 series, Rock, and the T2 and its successors, Rock is by far the weakest: the SPARC64 does well on traditional workloads, the T2 on heavily parallel workloads. Between those two, Sun already has processors for pretty much any market it wants to be in; Rock just doesn't fit commercially. Note that, contrary to the summary, there is no indication that they are killing off the Niagara line. They aren't exiting the CPU business, just killing one failed experiment. It's not the first time, and probably not the last, that Sun has killed off an almost-finished CPU because there was no market for it.
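Rock's run-ahead "scout" is a hardware feature, but the effect it aims for, warming the cache some distance ahead of the consuming code, can be loosely sketched in software with GCC's `__builtin_prefetch`. This is only a rough analogue; the function name and prefetch distance below are illustrative, not anything from Rock:

```c
#include <stddef.h>

/* Sum a large array, issuing a software prefetch a fixed distance
 * ahead of the current element. Rock's hardware scout did something
 * loosely similar ahead of the whole pipeline; here GCC's
 * __builtin_prefetch just hints the cache. */
long sum_with_prefetch(const long *data, size_t n)
{
    const size_t ahead = 64; /* illustrative prefetch distance, in elements */
    long total = 0;
    for (size_t i = 0; i < n; i++) {
        if (i + ahead < n)
            __builtin_prefetch(&data[i + ahead], 0, 1); /* read, low temporal locality */
        total += data[i];
    }
    return total;
}
```

Note the limitation the comment points out: a prefetcher like this only helps when the next address is predictable; it cannot see across computed jumps, which is where many real cache misses come from.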

  • by paulsnx2 (453081) on Tuesday June 16, 2009 @09:08AM (#28347159)

    Rock was Sun's effort to develop a processor with high single-thread performance. Single-thread performance doesn't help the database performance of Sun's new Oracle Overlords; what databases need is high multi-thread performance.

    The Niagara line ( http://en.wikipedia.org/wiki/UltraSPARC_T1 [wikipedia.org] ) provides the proper architecture for improving database performance, and this effort by Sun has the added benefit of actually producing shipping products (Unlike Rock).

    At this time, Oracle/Sun has NOT announced the killing off of further Niagara development.

  • by downix (84795) on Tuesday June 16, 2009 @09:12AM (#28347195) Homepage

    Scale. x86 cannot scale up anywhere near as far as SPARC (or even MIPS, for that matter) can. You realize that the cheapest SPARC can handle more threads per cycle than a dual quad-core Xeon, and do it while using less electricity, right? As for the big-iron chips, they handle databases on a scale that dwarfs the address range of x86, relying on more registers than even exist in the x86 architecture.

  • by Anonymous Coward on Tuesday June 16, 2009 @09:15AM (#28347229)
    Some people believe that for a truly stable and robust database infrastructure, enterprise grade, you cannot use anything other than SPARC/Solaris and Oracle. I don't necessarily believe this, but if it is good enough for Microsoft then it is good enough for me and my infrastructure.
  • by mzito (5482) on Tuesday June 16, 2009 @09:32AM (#28347391) Homepage

    By your own example, though, clearly the current level of diversity hasn't helped mitigate the spread of malware, since Conficker was able to install itself on many, many PCs.

    And if we decided we needed more diversity, how much more? I can't see 10 major OSes making a difference; perhaps it would take 50 with wide distribution.

    So now businesses, software developers, hardware manufacturers and tech-support organizations have to support 50 different operating systems? Where's the ROI on that? How would we hire enough people trained on that many different configurations?

    Certainly we all want better computer security, but improving security by increasing IT complexity is like permanently banning travel between countries for fear someone might bring in a disease. It solves the problem, but damages everyone in every other way.

  • by reporter (666905) on Tuesday June 16, 2009 @09:37AM (#28347437) Homepage
    Oracle will discard the entire hardware division (of Sun), not just the processor departments.

    Unlike Sun (which will no longer build processors), Fujitsu does build processors and the servers that incorporate them. Building the processors gives Fujitsu engineers intimate knowledge of how the chips work and enables the engineers to optimize the processors' connection to the rest of the server ecosystem. Lacking this ability, Sun engineers will not be able to build servers that match the capabilities of Fujitsu's computers.

    The logical conclusion is that Oracle will jettison the entire hardware division. That is not surprising: Oracle was mainly interested in Sun's software products (e.g., the operating system) and Sun's customer lists. Larry Ellison was willing to overpay for Sun (buying the hardware division in the process) simply because he and Scott McNealy are good friends.

    Note that Sun once boasted that it employed about 1000 (?) microprocessor engineers. Sun claimed that it had the largest processor team outside of Intel. Apparently, quantity does not necessarily lead to quality.

  • by John Betonschaar (178617) on Tuesday June 16, 2009 @09:45AM (#28347543)

    Discounting x86 for big-iron server systems because it would otherwise attract viruses (much like the potato famine) is ridiculous. I think you're paranoid.

  • by Anonymous Coward on Tuesday June 16, 2009 @09:46AM (#28347553)

    This is a bummer. I have 10 5220s in a datacenter whose space has cost more over time than the servers themselves. If I could get these down to 5 boxes with 16 cores each, life would be great. BTW, the T2 processor rocks when you compile apps using the -fast flags; one of the fastest 2U boxes ever.

  • by John Betonschaar (178617) on Tuesday June 16, 2009 @09:56AM (#28347687)

    Yes, and all those threads you get have access to crappy FPUs and horrible memory bandwidth.

    It's true that you can slap a lot more SPARC CPUs into a single machine than you can with x86, but since you're actually going to need all those CPUs just to match an off-the-shelf dual quad-core Opteron system on most tasks, the end result is that you're still spending much, much more money and probably drawing more power too. For tasks that cannot be parallelized or executed concurrently, SPARC is rubbish in every aspect imaginable.

    I work at one of those companies that got lured into standardizing on SPARC hardware years ago, and now we're kind of stuck with it because we have all those systems in the field, with customers. A while ago we investigated upgrading to newer SPARC hardware (the M3000) and leased a test system to assess its performance. For computationally intensive (FPU) tasks running 8 threads, the ~$11,000 SPARC64 IV with 8 cores / 16 hardware threads was about as fast as a $400 Core 2 Duo laptop. I'm not kidding....

    So unless you want to run an enterprise database that has to handle thousands of requests a second, SPARC has zero added value. Whether you'd even need a SPARC system for high-load, high-availability server tasks, I don't know; I'd guess a Power6 server or a rack of Opterons or Xeons wouldn't do much worse.

  • by afabbro (33948) on Tuesday June 16, 2009 @10:01AM (#28347733) Homepage

    What you say is often, but not exclusively, true. The main reasons people buy SPARC:

    • The CoolThreads servers are genuinely different than others. Radically low power consumption and a bajillion threads. That doesn't mean they're good for everything, but in the app space they're marketed for, they're exceptional.
    • If I have millions of lines of code written for Solaris on SPARC, I might want to run SPARC. Sun has a large presence in many markets and compatibility (left over from the days when x86 was nowhere near SPARC) is important.
    • Above a certain level, x86 can't compete. You can say "yet" if you want. Sun, IBM, etc.'s high-end gear is the closest you can get to a mainframe, in terms of RAIDed memory (one bad chip doesn't bring down the system), hot-swapping CPUs, hardware partitioning, etc. There are a lot of people in love with clustered x86 boxes, but they do not scale as well. A single box with 32 CPUs will perform better than 16 boxes with 2 CPUs, every single time. The 16x2 might be cheaper, but there are a lot of apps that don't run as well that way. To take a very common example, Oracle RAC scales about as well as anything on "wide and small commodity," but Oracle certainly runs better on a 32-CPU box rather than 16x2.

    I agree that in many cases, proprietary kit is overpriced and unnecessary. Which is why it's on the decline...
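The SMP-versus-cluster point above can be made concrete with Amdahl's law. The parallel fractions below are made-up illustrative values, with the cluster's extra coordination overhead folded into a lower fraction:

```c
/* Amdahl's law: overall speedup on n processors when a fraction p of
 * the work parallelizes perfectly. */
double amdahl(double p, int n)
{
    return 1.0 / ((1.0 - p) + p / n);
}

/* One 32-way SMP, assuming 95% of the work parallelizes:
 *   amdahl(0.95, 32)  ->  about 12.5x
 * A 16x2 cluster whose messaging cost is modeled as the parallel
 * fraction dropping to 85%:
 *   amdahl(0.85, 32)  ->  about 5.7x
 * Both fractions are assumptions for illustration only. */
```

The same core count delivers very different speedups once any serial or communication cost creeps in, which is why the single big box wins on apps that don't partition cleanly.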

  • by segedunum (883035) on Tuesday June 16, 2009 @10:12AM (#28347865)
    As soon as a group of people got into Sun, looked at the cost of maintaining and pumping R&D into their hardware, looked at the relative performance of SPARC versus competitors using x86, and ultimately looked at the bottom line objectively without being stupidly protectionist, the next step was always going to be shutting down Sun's production of Rock and SPARC and moving to Fujitsu as a supplier to save money. Even that probably won't be enough, though, as I'm not sure Fujitsu will be able to keep SPARC viable on its own. The writing has been on the wall for SPARC for the past ten years, with two, possibly three, options:

    1. Catch up to x86 platforms in terms of raw performance as most SPARC systems have tended to overlap with workloads x86 systems have taken over. Papering over cracks by promoting 'CoolThreads' and parallel processing as a way around this performance gap was never going to work. I can remember almost ten years ago working somewhere where a person discovered that their Athlon 1.4GHz desktop system had several times the performance of their UltraSPARC III server and could complete tasks several times sooner. Cue lots of panic as UltraSPARC was justified because it was 'enterprise' reliable.

    2. Accept the inevitable and throw the towel in.

    3. The third way: Do what IBM has done with Power and push it into a high-end and high premium niche. This is difficult because IBM itself can only cover Power by selling mainframe packages and a whole bunch of add-ons to make it pay. Sun have had difficulty with this because their hardware division has always relied on hardware sales themselves.

    Option 2 has clearly become the only way out once Sun's difficulties resulted in a takeover and as poor as Oracle might be at some things they are extremely successful at judging bottom lines.
  • by segedunum (883035) on Tuesday June 16, 2009 @10:21AM (#28347969)

    You realize that the cheapest SPARC can handle more threads per cycle than a dual-quad Xeon, and do it while using less electricity, right?

    The problem is that a Xeon will complete each thread [task] in far less time than the SPARC and move on to the next one, and the workloads most organisations have depend entirely on completing ever-larger single tasks in the shortest time possible.
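The throughput-versus-latency trade-off being argued over here can be sketched with a toy model. The thread counts and per-thread speeds below are hypothetical, not measurements of real T2 or Xeon parts:

```c
/* Toy model: a chip with many slow hardware threads versus a chip
 * with a few fast cores. Aggregate throughput can favor the former
 * while single-task latency favors the latter. */
struct chip {
    int threads;          /* hardware threads                       */
    double units_per_sec; /* work each thread completes per second  */
};

/* Total work completed per second across all threads. */
double throughput(struct chip c)
{
    return c.threads * c.units_per_sec;
}

/* Time to finish one non-parallelizable task of the given size. */
double latency(struct chip c, double task_units)
{
    return task_units / c.units_per_sec;
}

/* Hypothetical numbers: 64 threads at 1 unit/s gives 64 units/s of
 * aggregate throughput but takes 10 s for a 10-unit task; 8 cores
 * at 4 units/s gives only 32 units/s, yet finishes it in 2.5 s. */
```

Whether the many-thread or the fast-core chip wins depends entirely on whether the workload is thousands of independent requests or one big task at a time, which is the crux of the disagreement in this thread.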

  • by segedunum (883035) on Tuesday June 16, 2009 @10:33AM (#28348125)
    Feel free to mod the parent up more, because that, sadly, is a true reflection of the way things have been for most of the past ten years - not just now. I worked somewhere eight years ago where someone realised, while troubleshooting some Python and Zope performance issues, that a desktop 1.4GHz Athlon had several times the performance of an expensive UltraSPARC III. It was justified because it was an 'enterprise' piece of kit and no one wanted to believe they had wasted their money on something so expensive.

    To get close to an off-the-shelf AMD or Intel system performance-wise your SPARC systems need to be running hell-for-leather at 100%, drawing maximum power permanently. The Xeon or Opteron systems will be able to scale up and down far more comfortably, so when comparing these systems you are never comparing apples with apples because the performance is just not comparable. Unless you have thousands of *completely independent* requests to handle per second then a SPARC system is useless to you and the writing has been on the wall on that for the past ten years.
  • heat? (Score:2, Insightful)

    by Anonymous Coward on Tuesday June 16, 2009 @10:36AM (#28348163)

    Well there's IBM. And they don't seem to be slowing down:

    POWER 6 [wikipedia.org]

    POWER 7 [wikipedia.org]

    also:

    http://www.theregister.co.uk/2008/07/11/ibm_power7_ncsa/ [theregister.co.uk]

    POWER 7 sounds like crazy town...

    The one thing I like about the Niagara-based CPUs (UltraSPARC-Tx) is that they're fairly low wattage for the work that they can do. These 4 and 5 GHz chips from IBM seem like they're going to be dumping heat like mad.

    Unless you're doing HPC, and are willing to go into water-based cooling in your DC, it seems excessive to some extent.

    Anyone have experience with POWER and how it compares with SPARC? It seems that there's a product split within SPARC, but everyone else (IBM, Intel, AMD) seems to have a one-size-fits-all kind of thing.

  • by Macka (9388) on Tuesday June 16, 2009 @10:36AM (#28348179)

    Yeah, but the thing is that 32-CPU systems are incredibly niche. I've been involved in projects that delivered a number of systems of that size over the years and I can count on one hand the times they've been used as single 32-CPU systems. In virtually all cases they were hard partitioned down to 4, 8 and sometimes 16 cpu systems. And x86 is walking all over that market now. Next year when the Nehalem-EX chips ship, you'll get your 32 cores on a standard 4 socket server with twice as many threads. It just shoves the high end systems more and more into a tight corner. RAIDed memory is great, but that alone is not worth the premium that proprietary solutions are burdened with.


  • by fm6 (162816) on Tuesday June 16, 2009 @11:17AM (#28348807) Homepage Journal

    Oracle will discard the entire hardware division (of Sun), not just the processor departments.

    Right. Spend $5 billion on a company and then shut down 90% of it.

  • by drinkypoo (153816) <martin.espinoza@gmail.com> on Tuesday June 16, 2009 @11:48AM (#28349391) Homepage Journal

    The Pentium 4 is the canonical example of a chip made with bad guesses. The P4 team were told to make it fast at all costs. They missed the market because they didn't notice that people were starting to care about power consumption.

    There are a number of problems with your analysis, not least that the Pentium III is faster clock-for-clock than the Pentium IV at almost all workloads; its failing was that it did not scale, but it begat the Pentium M and to some degree, the Core architecture. Sun has been wildly flailing its arms about trying to come up with an architecture worth carrying into the future. So far, no dice. This is just more of the same. Totally canning two architectures ought to be the end of Sun's attempts to make new SPARC processors; for the love of all that is holy, leave it to Fujitsu. Help them, if you must. It can only set them back so far...

  • by Anonymous Coward on Tuesday June 16, 2009 @03:27PM (#28353033)

    There are a number of problems with your analysis, not least that the Pentium III is faster clock-for-clock than the Pentium IV at almost all workloads; its failing was that it did not scale, but it begat the Pentium M and to some degree, the Core architecture.

    How is that a problem with his analysis?

    The Pentium 4 was designed to achieve high performance by having really high clocks to compensate for its poor per-cycle efficiency. It hit 3.8 GHz in late 2004, on a 90nm process. 4.5 years later, on 45nm, we still don't have any current processor design which clocks that fast (outside of overclocking, but then again P4 still overclocks higher than any current production processor -- IIRC people have gotten them over 8 GHz on liquid nitrogen).

    The P3 basic design could have scaled if Intel was trying to, which they weren't. It did scale when they turned back to it. The reason why they went back was that (as the GP post said) P4 turned out to not scale as well as intended.

    The P4 achieved high frequencies by using very fast (=high power) logic transistors and lots of pipeline stages. Lots of stages adds a lot more transistors, not just for the extra latches but also due to complicating many parts of the processor (for example, more stages requires the OoO engine to track more in-flight instructions). But the P4's architects didn't anticipate a severe problem which cropped up after the P4 hit the market. P4 was originally targeted at the 180nm process node, where there were only hints of the problem that would dominate the entire semiconductor industry at 130nm, 90nm, and 65nm: leakage current. As transistor gate insulators got thinner, it proved impossible to prevent significant leakage current; where once a CMOS logic gate would only draw power when it switched, half or more of the power of an operating CPU became constant leakage current through the gates. (Intel has finally solved this at the 45nm node with their HKMG process, which changes the materials used for gates and gate insulators.)

    It turns out that high power transistors have worse leakage than transistors tuned for low power, and of course the P4 design had lots of them. Intel was forced to limit the P4's potential for clock scaling just to keep power down to relatively reasonable levels. (In absolute terms, P4 was still very hot -- it would have been a total dog if they didn't push it to the edge of what was possible to cool in a desktop computer.)

    The P4 was intended to scale to 10 GHz and beyond during its lifetime. And it possibly could have -- in an alternate universe where the leakage current problem didn't exist.
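The dynamic-versus-leakage argument above can be summarized in a first-order power model: the switching term scales with C·V²·f, while leakage is a constant drain no matter how the chip is clocked. All constants in any example calculation would be made-up illustrative values:

```c
/* Switching (dynamic) power: proportional to capacitance, the square
 * of supply voltage, and clock frequency. Units are arbitrary. */
double dynamic_power(double cap, double volts, double freq)
{
    return cap * volts * volts * freq;
}

/* Total power: dynamic plus leakage, which flows constantly whether
 * or not transistors switch. Past 90nm the leakage term grew large,
 * which is the ceiling described above for the P4's clock scaling. */
double total_power(double cap, double volts, double freq, double leak)
{
    return dynamic_power(cap, volts, freq) + leak;
}
```

In this model, raising the clock only grows the dynamic term, but a design built from fast, leaky transistors starts with a big fixed leakage cost, so it hits the thermal ceiling at a lower frequency than its pipeline would otherwise allow.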
