Sun Kills Rock CPU, Says NYT Report
BBCWatcher writes "Despite Oracle CEO Larry Ellison's recent statement that his company will continue Sun's hardware business, it won't be with Sun processors (and associated engineering jobs). The New York Times reports that Sun has canceled its long-delayed Rock processor, the next generation SPARC CPU. Instead, the Times says Sun/Oracle will have to rely on Fujitsu for SPARCs (and Intel otherwise). Unfortunately Fujitsu is decreasing its R&D budget and is unprofitable at present. Sun's cancellation of Rock comes just after Intel announced yet another delay for Tukwila, the next generation Itanium, now pushed to 2010. HP is the sole major Itanium vendor. Primary beneficiaries of this CPU turmoil: IBM and Intel's Nehalem X86 CPU business."
another one bites the dust! x86 uber alles! (Score:2, Insightful)
Yuck.
Some days I hate this industry.
Re:That's just dynamite! (Score:2, Insightful)
Um, Opteron? (Score:5, Insightful)
Not that I am an AMD fanboy, but my dual Opteron PC just ordered me to remind you all that AMD will also benefit from this choice. Indeed, Sun already uses AMD Opteron parts in some of its servers....
To summarise the article: (Score:3, Insightful)
How I love this industry
Re:It doesn't really benefit IBM (Score:2, Insightful)
People want more viruses? A virus targets a particular architecture and API, so if you consolidate everything onto a single one, you wind up with a perfect storm for virus spreading. Witness the Irish Potato Famine.
I say we need more diversity of architectures, OSes, platforms and APIs to prevent a pandemic of computer malware. I still laugh at the memory of watching Conficker try desperately to install itself on my SPARC Kubuntu machine.
Re:More likely reason (Score:5, Insightful)
The Rock is an amazing chip on paper. It runs an extra fetch/decode portion of the pipeline a few cycles ahead, so that data is already being pulled into the cache before the main pipeline needs it.
There is no evidence that this technology actually works, however (for example, it doesn't predict across computed jumps, which account for a lot of cache misses in current chips), and without it Rock is a pretty unimpressive chip. Even if it does work, Rock looked like it would perform best on the kinds of workloads where the T2 does well, but probably not as well as the T2. Of the SPARC64 series, Rock, and the T2 and its successors, Rock is by far the weakest: the SPARC64 does well on traditional workloads, the T2 on heavily parallel ones. Between those two, Sun already has processors for pretty much any market they want to be in; Rock just doesn't fit commercially. Note that, contrary to the summary, there is no indication that they are killing off the Niagara line. They aren't exiting the CPU business, just killing one failed experiment. It's not the first time Sun has killed off an almost-finished CPU because there was no market for it, and probably not the last.
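Rock's run-ahead "scout" does in hardware roughly what software prefetching does by hand: issue loads far enough ahead of use that the data is already in cache when the real instruction needs it. A minimal sketch in C (the function name and prefetch distance are illustrative, not anything from Sun's design):

```c
#include <stddef.h>

/* Sum a large array, issuing a prefetch a fixed distance ahead so the
 * data is (hopefully) resident in cache by the time the add executes.
 * This is the software analogue of Rock's run-ahead scout pipeline. */
long sum_with_prefetch(const long *a, size_t n)
{
    const size_t dist = 16; /* illustrative prefetch distance, in elements */
    long total = 0;
    for (size_t i = 0; i < n; i++) {
        if (i + dist < n)
            __builtin_prefetch(&a[i + dist], 0 /* read */, 1 /* low locality */);
        total += a[i];
    }
    return total;
}
```

The hardware version needs no recompilation, but, as noted above, it can only run ahead where the instruction stream is predictable; a computed jump stalls the scout just as it would stall this loop if the next address weren't known.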
The summary is misleading.... (Score:5, Insightful)
Rock was Sun's effort to develop a processor with high single-thread performance. Single-thread performance doesn't help the database performance that matters to Sun's new Oracle overlords; what databases need is high multi-thread performance.
The Niagara line ( http://en.wikipedia.org/wiki/UltraSPARC_T1 [wikipedia.org] ) provides the proper architecture for improving database performance, and this effort by Sun has the added benefit of actually producing shipping products (Unlike Rock).
At this time, Oracle/Sun has NOT announced the killing off of further Niagara development.
Re:What are these architectures good for... (Score:5, Insightful)
Scale. x86 cannot scale up anywhere near as far as SPARC (or even MIPS, for that matter) can. You realize that the cheapest SPARC can handle more threads per cycle than a dual quad-core Xeon, and do it while using less electricity, right? As for the big-iron chips, they handle databases on a scale that dwarfs the address range of x86, relying on more registers than even exist in the x86 architecture.
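The thread-count claim is easy to sanity-check on the back of an envelope. The figures below assume a 2007-era UltraSPARC T2 against a dual quad-core Xeon of the same period (which had no SMT); treat them as illustrative:

```c
/* Hardware threads exposed to the OS, per box. */
enum {
    T2_THREADS   = 8 * 8,     /* UltraSPARC T2: 8 cores x 8 threads/core = 64 */
    XEON_THREADS = 2 * 4 * 1  /* dual quad-core Xeon, no SMT that era   = 8  */
};
```

Whether 64 slow threads beat 8 fast ones is, of course, exactly the workload question argued over in the rest of this thread.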
Re:What are these architectures good for... (Score:1, Insightful)
Re:It doesn't really benefit IBM (Score:4, Insightful)
By your own example, though, clearly the current level of diversity hasn't mitigated the spread of malware, since Conficker was able to install itself on many, many PCs.
Then, if we decided we needed more diversity, how much more? I can't see ten major OSes making a difference; perhaps fifty with wide distribution would.
So now businesses, software developers, hardware manufacturers, and tech-support organizations have to support 50 different operating systems? Where's the ROI on that? How would we hire enough people trained on that many different configurations?
Certainly, we all want better computer security, but improving security by increasing IT complexity is like permanently banning travel between countries for fear someone might bring a disease in. It solves the problem, but it damages everyone in every other way.
Oracle will jettison the entire hardware division. (Score:3, Insightful)
Unlike Sun (which will no longer build processors), Fujitsu does build processors and the servers that incorporate them. Building the processors gives Fujitsu engineers intimate knowledge of how the chips work and enables the engineers to optimize the processors' connection to the rest of the server ecosystem. Lacking this ability, Sun engineers will not be able to build servers that match the capabilities of Fujitsu's computers.
The logical conclusion is that Oracle will jettison the entire hardware division. That is not surprising. Oracle was mainly interested in Sun's software products (e.g., the operating system) and Sun's customer lists. Larry Ellison was willing to overpay for Sun (buying the hardware division in the process) simply because he and Scott McNealy are good friends.
Note that Sun once boasted that it employed about 1000 (?) microprocessor engineers. Sun claimed that it had the largest processor team outside of Intel. Apparently, quantity does not necessarily lead to quality.
Re:It doesn't really benefit IBM (Score:4, Insightful)
Discounting x86 for big-iron server systems because it would otherwise attract viruses (much like the potato famine) is ridiculous. I think you're paranoid.
Re:That's just dynamite! (Score:1, Insightful)
This is a bummer. I have ten 5220s in a datacenter where the space has cost more over time than the servers; if I could get these down to five 16-core boxes, life would be great. BTW, the T2 proc rocks when you compile apps using the -fast flags. One of the fastest 2U boxes ever.
Re:What are these architectures good for... (Score:5, Insightful)
Yes, but all those threads you get have access to crappy FPUs and horrible memory bandwidth.
It's true that you can slap a lot more SPARC CPUs into a single machine than you can with x86, but since you're actually going to need all those CPUs just to match an off-the-shelf dual quad-core Opteron system on most tasks, the end result is that you're still spending much, much more money and probably drawing more power too. For tasks that cannot be parallelized or executed concurrently, SPARC is rubbish in every aspect imaginable.
I work at one of those companies that got lured into standardizing on SPARC hardware years ago, and now we're kind of stuck with it because we have all those systems in the field, with customers. A while ago we investigated upgrading to newer SPARC hardware (M3000) and leased a test system to assess its performance. For computationally intensive (FPU) tasks running 8 threads, the ~$11,000 SPARC64 IV with 8 cores / 16 hardware threads was about as fast as a $400 Core 2 Duo laptop. I'm not kidding....
So unless you want to run an enterprise database that has to handle thousands of requests a second, SPARC has zero added value. And even if you really do need a system for high-load, high-availability server tasks, I'd guess a Power6 server or a rack of Opterons or Xeons wouldn't do much worse.
Re:What are these architectures good for... (Score:5, Insightful)
What you say is often, but not exclusively, true. The main reason people buy SPARC:
I agree that in many cases, proprietary kit is overpriced and unnecessary. Which is why it's on the decline...
This Was Always Going to Happen (Score:5, Insightful)
1. Catch up to x86 platforms in terms of raw performance, as most SPARC systems have tended to overlap with workloads x86 systems have taken over. Papering over the cracks by promoting 'CoolThreads' and parallel processing as a way around this performance gap was never going to work. I remember, almost ten years ago, working somewhere where a person discovered that their 1.4GHz Athlon desktop had several times the performance of their UltraSPARC III server and could complete tasks several times sooner. Cue lots of panic, as the UltraSPARC had been justified as 'enterprise' reliable.
2. Accept the inevitable and throw the towel in.
3. The third way: do what IBM has done with Power and push it into a high-end, high-premium niche. This is difficult because even IBM can only make Power pay by selling mainframe packages and a whole bunch of add-ons. Sun has had difficulty with this because its hardware division has always relied on hardware sales alone.
Option 2 clearly became the only way out once Sun's difficulties resulted in a takeover, and as poor as Oracle might be at some things, they are extremely good at judging bottom lines.
Re:What are these architectures good for... (Score:3, Insightful)
The problem is that a Xeon will complete each thread [task] in far less time than the SPARC and be on to the next one, and the workloads most organisations have depend entirely on completing ever-larger single tasks in the shortest time possible.
Re:What are these architectures good for... (Score:5, Insightful)
To get close to an off-the-shelf AMD or Intel system performance-wise, your SPARC systems need to be running hell-for-leather at 100%, drawing maximum power permanently. The Xeon or Opteron systems will scale up and down far more comfortably, so you are never comparing apples with apples because the performance just isn't comparable. Unless you have thousands of *completely independent* requests to handle per second, a SPARC system is useless to you, and the writing has been on the wall there for the past ten years.
heat? (Score:2, Insightful)
Well there's IBM. And they don't seem to be slowing down:
POWER 6 [wikipedia.org]
POWER 7 [wikipedia.org]
also:
http://www.theregister.co.uk/2008/07/11/ibm_power7_ncsa/ [theregister.co.uk]
POWER 7 sounds like crazy town...
The one thing I like about the Niagara-based CPUs (UltraSPARC-Tx) is that they're fairly low wattage for the work that they can do. These 4 and 5 GHz chips from IBM seem like they're going to be dumping heat like mad.
Unless you're doing HPC, and are willing to go into water-based cooling in your DC, it seems excessive to some extent.
Anyone have experience with POWER and how it compares with SPARC? There seems to be a product split in SPARC, but everyone else (IBM, Intel, AMD) seems to have a one-size-fits-all kind of thing.
Re:What are these architectures good for... (Score:5, Insightful)
Yeah, but the thing is that 32-CPU systems are incredibly niche. I've been involved in projects that delivered a number of systems of that size over the years and I can count on one hand the times they've been used as single 32-CPU systems. In virtually all cases they were hard partitioned down to 4, 8 and sometimes 16 cpu systems. And x86 is walking all over that market now. Next year when the Nehalem-EX chips ship, you'll get your 32 cores on a standard 4 socket server with twice as many threads. It just shoves the high end systems more and more into a tight corner. RAIDed memory is great, but that alone is not worth the premium that proprietary solutions are burdened with.
Re:Oracle will jettison the entire hardware divisi (Score:4, Insightful)
Right. Spend $5 billion on a company and then shut down 90% of it.
Re:More likely reason (Score:2, Insightful)
The Pentium 4 is the canonical example of a chip made with bad guesses. The P4 team was told to make it fast at all costs. They missed the market because they didn't notice that people were starting to care about power consumption.
There are a number of problems with your analysis, not least that the Pentium III is faster clock-for-clock than the Pentium IV at almost all workloads; its failing was that it did not scale, but it begat the Pentium M and to some degree, the Core architecture. Sun has been wildly flailing its arms about trying to come up with an architecture worth carrying into the future. So far, no dice. This is just more of the same. Totally canning two architectures ought to be the end of Sun's attempts to make new SPARC processors; for the love of all that is holy, leave it to Fujitsu. Help them, if you must. It can only set them back so far...
Re:More likely reason (Score:2, Insightful)
There are a number of problems with your analysis, not least that the Pentium III is faster clock-for-clock than the Pentium IV at almost all workloads; its failing was that it did not scale, but it begat the Pentium M and to some degree, the Core architecture.
How is that a problem with his analysis?
The Pentium 4 was designed to achieve high performance by having really high clocks to compensate for its poor per-cycle efficiency. It hit 3.8 GHz in late 2004, on a 90nm process. 4.5 years later, on 45nm, we still don't have any current processor design which clocks that fast (outside of overclocking, but then again P4 still overclocks higher than any current production processor -- IIRC people have gotten them over 8 GHz on liquid nitrogen).
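The trade-off being described is the classic first-order performance equation: work done per unit time is roughly IPC times clock frequency. A toy comparison in C, with IPC figures that are completely made up to illustrate the point (real numbers vary enormously by workload):

```c
/* First-order performance model: instructions retired per nanosecond
 * is IPC multiplied by clock in GHz.  The IPC values below are
 * invented purely to illustrate the NetBurst trade-off. */
static double perf(double ipc, double ghz)
{
    return ipc * ghz;
}

/* NetBurst: long pipeline, low IPC, very high clock. */
static double p4_perf(void)   { return perf(0.8, 3.8); }

/* P3-descended core: higher IPC, lower clock. */
static double core_perf(void) { return perf(1.5, 2.4); }
```

With these (invented) numbers, the lower-clocked design wins despite running 1.4 GHz slower, which is the bet the P4's designers lost once clock scaling stalled.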
The P3's basic design could have scaled if Intel had tried, which it wasn't doing at the time. It did scale when they turned back to it. The reason they went back was that (as the GP post said) the P4 turned out not to scale as well as intended.
The P4 achieved high frequencies by using very fast (=high power) logic transistors and lots of pipeline stages. Lots of stages adds a lot more transistors, not just for the extra latches but also due to complicating many parts of the processor (for example, more stages requires the OoO engine to track more in-flight instructions). But the P4's architects didn't anticipate a severe problem which cropped up after the P4 hit the market. P4 was originally targeted at the 180nm process node, where there were only hints of the problem that would dominate the entire semiconductor industry at 130nm, 90nm, and 65nm: leakage current. As transistor gate insulators got thinner, it proved impossible to prevent significant leakage current; where once a CMOS logic gate would only draw power when it switched, half or more of the power of an operating CPU became constant leakage current through the gates. (Intel has finally solved this at the 45nm node with their HKMG process, which changes the materials used for gates and gate insulators.)
It turns out that high power transistors have worse leakage than transistors tuned for low power, and of course the P4 design had lots of them. Intel was forced to limit the P4's potential for clock scaling just to keep power down to relatively reasonable levels. (In absolute terms, P4 was still very hot -- it would have been a total dog if they didn't push it to the edge of what was possible to cool in a desktop computer.)
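The dynamic-vs-leakage split described above follows the standard first-order CMOS power model: P = alpha*C*V^2*f for switching, plus V*I_leak for leakage. A sketch with entirely invented constants (the point is how the leakage fraction explodes between process generations, not the absolute watts):

```c
/* First-order CMOS power model: switching power plus leakage power.
 * All parameter values used with this are illustrative only. */
struct power {
    double dynamic_w; /* alpha * C * V^2 * f */
    double leakage_w; /* V * I_leak          */
};

static struct power cpu_power(double alpha, double cap_f, double vdd,
                              double freq_hz, double i_leak_a)
{
    struct power p;
    p.dynamic_w = alpha * cap_f * vdd * vdd * freq_hz;
    p.leakage_w = vdd * i_leak_a;
    return p;
}

static double leakage_fraction(struct power p)
{
    return p.leakage_w / (p.dynamic_w + p.leakage_w);
}
```

Plugging in a small leakage current for a 180nm-style part and a large one for a 90nm-style part reproduces the qualitative story: leakage goes from a rounding error to half or more of total power, with no clock increase to show for it.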
The P4 was intended to scale to 10 GHz and beyond during its lifetime. And it possibly could have -- in an alternate universe where the leakage current problem didn't exist.