
Sun Kills Rock CPU, Says NYT Report

BBCWatcher writes "Despite Oracle CEO Larry Ellison's recent statement that his company will continue Sun's hardware business, it won't be with Sun processors (and the associated engineering jobs). The New York Times reports that Sun has canceled its long-delayed Rock processor, the next-generation SPARC CPU. Instead, the Times says, Sun/Oracle will have to rely on Fujitsu for SPARCs (and on Intel otherwise). Unfortunately, Fujitsu is cutting its R&D budget and is currently unprofitable. Sun's cancellation of Rock comes just after Intel announced yet another delay for Tukwila, the next-generation Itanium, now pushed to 2010. HP is the sole major Itanium vendor. Primary beneficiaries of this CPU turmoil: IBM and Intel's Nehalem x86 CPU business."
Comments Filter:
  • More likely reason (Score:5, Interesting)

    by downix ( 84795 ) on Tuesday June 16, 2009 @09:28AM (#28346783) Homepage

It is more likely that Sun compared Rock to Fujitsu's new SPARC CPU and realized it could not compete on price/performance. Frankly, looking at the two, Sun made the wise move: it killed off the weaker chip and will likely push forward the SPARC64 VIIfx, which is further along in development and will be ready sooner.

  • by seeker_1us ( 1203072 ) on Tuesday June 16, 2009 @09:30AM (#28346811)
According to the CNET article, Tukwila has been pushed to 2010, and it's going to be 65 nm instead of 45 nm. Since Intel has already demonstrated 32 nm chips, [engadget.com] that means Tukwila will already be at least two process generations behind when it's released. No new chip designs from Sun, and Fujitsu is cutting its R&D budget. Sounds like this market is falling behind.
  • by mzito ( 5482 ) on Tuesday June 16, 2009 @09:36AM (#28346855) Homepage

Mostly, it just benefits Intel and AMD. Sun loses their high-end chip, which theoretically hurts their high-end offerings, but their high-end servers are a rapidly declining piece of their revenue. I've long thought that Sun should drop SPARC entirely, except for supporting legacy customers. The Niagara chip is an interesting concept, but most people today just want Intel/AMD chips in their servers.

  • by the donner party ( 1000036 ) on Tuesday June 16, 2009 @09:37AM (#28346869)
    The Fujitsu SPARC64 VIIfx [theinquirer.net] does look interesting, but does anyone know when it is actually supposed to be released?
  • Re:Um, Opteron? (Score:2, Interesting)

    by vil3nr0b ( 930195 ) on Tuesday June 16, 2009 @10:35AM (#28347417)
There is no price/performance contest when you compare AMD's six-core Phenom processors against the competition. You could build a whole system around the DDR3/i7 platform, but that is unaffordable in large clusters. BTW, I am an AMD fanboy, especially after upgrading a cluster to the new Phenom chips. They worked perfectly with our existing DDR2, and we saved a fortune by upgrading only the CPUs for roughly a 15 percent performance increase. This only helps AMD.
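    To make that price/performance point concrete, here is a back-of-the-envelope comparison in C. Every figure in it is a hypothetical placeholder (not an actual invoice or benchmark); plug in real quotes and measured speedups to reproduce the reasoning.

        #include <stdio.h>

        /* Back-of-the-envelope price/performance for two cluster upgrade
         * paths. All figures are hypothetical placeholders; substitute
         * real quotes and measured speedups. */
        int main(void)
        {
            const int nodes = 64;                    /* assumed cluster size */

            /* Path A: drop-in Phenom CPUs, keep the DDR2 boards and RAM. */
            const double cpu_only_cost = 200.0;      /* assumed $ per node   */
            const double cpu_only_speedup = 1.15;    /* ~15% faster          */

            /* Path B: full i7/DDR3 platform swap (board + RAM + CPU). */
            const double platform_cost = 1200.0;     /* assumed $ per node   */
            const double platform_speedup = 1.40;    /* assumed gain         */

            printf("CPU-only: %.2e speedup per dollar\n",
                   (cpu_only_speedup - 1.0) / (nodes * cpu_only_cost));
            printf("Platform: %.2e speedup per dollar\n",
                   (platform_speedup - 1.0) / (nodes * platform_cost));
            return 0;
        }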
  • by peppepz ( 1311345 ) on Tuesday June 16, 2009 @10:49AM (#28347585)
    In fiscal year 2008, Sun sold $4,532 million of SPARC servers and only $707 million of x64 servers (source [sun.com]); that is, SPARC was roughly 86% of their server hardware revenue.
    I don’t think it would have been wise for them to kill their biggest-selling product.
  • by TheRaven64 ( 641858 ) on Tuesday June 16, 2009 @11:10AM (#28347847) Journal

    I was at a talk by a former Intel chief architect a while ago that explained this. It takes an absolute minimum of about five years to get a new CPU to market. When you start, you have to make guesses about the kind of workload people will be running, their power and financial budgets, and the process technology that will be available to you for producing it. Once you've made these guesses, you can generally come up with a chip that meets the requirements.

    The Pentium 4 is the canonical example of a chip built on bad guesses. The P4 team was told to make it fast at any cost. They missed the market because they didn't notice that people were starting to care about power consumption, and few people wanted a 120 W CPU, especially not in the data centre, where the margins are high but power and cooling are expensive. They also made some bad guesses about process technology, assuming the process guys would fix the leakage problem so they could ramp clock speeds up to 10GHz. They came up with a design that scaled to 10GHz, but it needed a process technology that still doesn't quite exist to produce it at those speeds.
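    As a rough illustration of why the 10GHz plan failed, consider the standard first-order CMOS power model (a textbook approximation, not Intel's internal numbers):

        $P_{\text{total}} \approx \alpha C V_{dd}^2 f + V_{dd} I_{\text{leak}}$

    Switching power grows linearly with clock frequency $f$, but reaching higher $f$ generally meant raising $V_{dd}$ (a quadratic penalty) and lowering transistor thresholds, which increases $I_{\text{leak}}$ roughly exponentially. That is the leakage problem the process engineers never fixed.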

    I suspect something similar happened with Sun. First, they made some bad guesses about how well the thread scout would work. It's a nice idea on paper, but it doesn't seem to perform well in practice: Rock would beat other approaches on highly deterministic, CPU-bound workloads with lots of threads, while in the real world, highly parallel workloads tend to be I/O-bound or have less predictable control flow (a software analogue of the scout idea is sketched at the end of this comment).

    The T2 goes in completely the opposite direction. It contains a set of very simple cores that omit most of the complex logic found in other processors and instead just have a lot of execution engines. If you have a workload with a lot of I/O-bound threads, the T2 gives insanely good performance, both per watt and per dollar. Sun began designing this family of chips right at the peak of the dot-com boom, and they are perfectly suited to web-serving workloads (they also do well on a lot of database workloads, which is one of the reasons Oracle is interested in them).
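    A crude latency-hiding model shows why (a standard barrel-processor approximation, with made-up example numbers): if each thread is stalled on memory or I/O a fraction $s$ of the time, a core interleaving $k$ hardware threads keeps its pipelines busy about $\min(1,\, k(1-s))$ of the time. For a web-serving thread stalled, say, 90% of the time ($s = 0.9$), eight threads per core give roughly $8 \times 0.1 = 0.8$, i.e. ~80% utilization, where a single-threaded core would manage ~10%.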

    One of the things Sun does very well is recycle technology. There are a lot of half-dead projects at Sun that are not commercially exploited but have fed ideas into their other products. Even though Rock is dead, I wouldn't be surprised to see some of its ideas appear in the T3 or T4. The hardware scout is only useful on a few workloads, but it's relatively easy to implement on something like the T2, so we may see it reappear in a future design.
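    For anyone unfamiliar with the scout idea, here is a minimal software analogue in C (illustrative only; Rock's scout ran speculatively in hardware past cache-missing loads rather than as explicit prefetches, and the function name and SCOUT_AHEAD distance below are made up):

        #include <stddef.h>

        /* Software sketch of "scout"-style prefetching: run ahead of the
         * main computation and warm the cache before the real work
         * arrives. The index[] access pattern is the point: if it is
         * predictable, scouting wins; if the next address depends on
         * data you haven't loaded yet, the scout stalls too. */
        double sum_with_scout(const double *data, const size_t *index, size_t n)
        {
            enum { SCOUT_AHEAD = 16 };   /* how far the scout runs ahead */
            double sum = 0.0;

            for (size_t i = 0; i < n; i++) {
                /* The "scout": touch a future element's cache line. */
                if (i + SCOUT_AHEAD < n)
                    __builtin_prefetch(&data[index[i + SCOUT_AHEAD]], 0, 1);

                sum += data[index[i]];   /* the "real" work */
            }
            return sum;
        }

    (__builtin_prefetch is the GCC/Clang prefetch intrinsic; on hardware without it, the scout idea degrades to a plain loop.)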

  • The logical conclusion is that Oracle will jettison the entire hardware division.

    I don't think that'll happen. I think Larry wants you to buy Oracle (the database) running on Oracle (the OS) on Oracle (the hardware) and support contracts for the entire stack. There's a lot of PHB love for being able to call one phone number for anything that breaks because the same company is responsible for every component. IBM currently offers this, and now Oracle can, too.

  • by idontgno ( 624372 ) on Tuesday June 16, 2009 @12:01PM (#28348513) Journal

    I don't think that'll happen. I think Larry wants you to buy Oracle (the database) running on Oracle (the OS) on Oracle (the hardware) and support contracts for the entire stack. There's a lot of PHB love for being able to call one phone number for anything that breaks because the same company is responsible for every component. IBM currently offers this, and now Oracle can, too.

    True. But none of the above requires Oracle to manufacture one screw, chip, or board of hardware. OEM servers from Fujitsu (or Dell, or anyone they can trust and wangle a good price out of), slap on some Oracle name plates, et voilà, the complete Oracle stack. Shoot, go nuts and do careful integration engineering so that the software is well-tuned and thoroughly optimized for the selected hardware. Subcontract HW and OS support out to the OEM vendor. Put them on-site with your Oracle weasels and make 'em wear Oracle name badges. Who'd know the difference?

    It was inevitable. Sun has relaxed and turned its back to Oracle, and the long knives are slipping out of the sheaths.

  • by RubberDuckie ( 53329 ) on Tuesday June 16, 2009 @01:27PM (#28350123)

    There's a lot to be said for backward compatibility. I recently migrated a very old database from a Solaris 2.6 system to Solaris 10. I didn't have to search for back-leveled software; the application just worked. Granted, this isn't something I need to do every day, but it's an invaluable feature when you're supporting enterprise applications that just refuse to die.

  • by lewiscr ( 3314 ) on Tuesday June 16, 2009 @02:23PM (#28351109) Homepage

    Several years ago, I had the opposite problem with a real-world OLTP load. I replaced a 5-year-old quad 450 MHz SPARC II machine with a dual 2.4 GHz Opteron. The Opterons had 3x the total MHz, 4x the RAM, more PCI bandwidth, and faster disks. They were half the price of the SPARC replacements, so I was not allowed to evaluate the SPARC options. I guesstimated that the new SPARC option would have been 2x faster and would have handled 4x the transactions of the 5-year-old machines.

    The Opterons were slightly faster but did not handle load spikes nearly as well. Had I been allowed to buy more of the 5-year-old hardware used, I probably would have been better off sticking with it. With hindsight, including all the architecture conversion problems and software upgrade issues I hit, the old-but-tested hardware would have been a big win. (Note: I could scale my database horizontally very easily, so old machines were still useful machines.)

    For a database server, I highly recommend evaluating a SPARC-based machine next to any x86-based machine. They cost more up front, but I found them to be cheaper in the long run.

  • by Anonymous Coward on Tuesday June 16, 2009 @06:22PM (#28354571)

    Funny thing is, I run an US II and find it's faster than a Pentium III of almost 5x the Mhz.

    No you don't.

    INT:
    http://www.spec.org/osg/cpu2000/results/res2000q3/cpu2000-20000810-00176.html [spec.org]
    http://www.spec.org/osg/cpu2000/results/res2000q4/cpu2000-20001129-00408.html [spec.org]

    FP:
    http://www.spec.org/osg/cpu2000/results/res2000q3/cpu2000-20000810-00177.html [spec.org]
    http://www.spec.org/osg/cpu2000/results/res2000q4/cpu2000-20001121-00355.html [spec.org]

    Summary of those numbers:
    UltraSPARC II 480 MHz scores 234 SPECint2000, 291 SPECfp2000
    Pentium III 1000 MHz scores 462 SPECint2000, 340 SPECfp2000

    So, at just over 2x the clock rate, the P3 is ~2x faster at int and 1.16x faster at FP. That makes your claim that a US II can beat a P3 clocked 5x higher completely absurd; question SPEC's methodology all you want, but you're completely out of touch if you are that far away from them.

    (Of course, 'downix' is not anyone you'd want to consider an authority on anything, much less a legitimate challenge to SPEC's ability to design a proper benchmark. You're a pathological liar who has been faking expertise for years... 'Eddas' ring a bell?)

    But you weren't done! You went on to stick your foot even further in your mouth:

    You classify this as the platform, I spot it for what I and a lot of others recognize as a weakness of the US III. The III was Sun's P4, a high-priced pretty poor CPU.

    A little more searching on the SPEC website reveals that (just to pick one example pair of scores) a Sun Blade 150 with a 650 MHz US IIi scores 246/276 int/fp, and a Sun Blade 1500 with a 1.062 GHz US IIIi scores 589/884. That is roughly 2.4x the integer and 3.2x the FP score at only 1.63x the clock, which is hardly a P4-style flop.

    And in another message you claimed:

    I've found the SPARC FPU to perform better than the Core 2's, a lot more reliably if nothing else. In addition, the SPARC's threads complete in less cycles, enabling a slower per-thread CPU to keep up.

    Bull. Complete bull.

    Let's consider SPEC CPU2006 this time, only because the old CPU2000 benchmark probably hasn't ever been run on a Core 2.

    It so happens that CPU2006 is normalized to an UltraSPARC 2 running at 296 MHz (the Ultra Enterprise 2). In other words, that processor by definition scores 1.0 on both the integer and FP tests.
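    For reference, a SPEC CPU score is the geometric mean, across the suite, of each benchmark's reference-time-to-measured-time ratio:

        $\text{ratio}_i = \dfrac{t_{\text{ref},i}}{t_i}, \qquad \text{score} = \left( \prod_{i=1}^{n} \text{ratio}_i \right)^{1/n}$

    so the reference machine scores exactly 1.0 on every benchmark, and hence 1.0 overall, by construction.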

    The 2.66 GHz Core 2 Duo E6700 scores 20.0 int, 16.9 fp.

    Even if you have the fastest US II ever made (the 650 MHz US IIe+), there is simply no way your US II's FP performance could even *touch* a Core 2's. You're in the realm of completely ludicrous claims, here.

    You can't even save yourself with a weak back-off to per-cycle efficiency: notice how the C2's clock is less than 10x as much as the US II, yet its performance is much more than 10x higher?

    Nate, why do you so often feel the urge to lie about things like this?
