Sun Kills Rock CPU, Says NYT Report
BBCWatcher writes "Despite Oracle CEO Larry Ellison's recent statement that his company will continue Sun's hardware business, it won't be with Sun processors (and the associated engineering jobs). The New York Times reports that Sun has canceled its long-delayed Rock processor, the next-generation SPARC CPU. Instead, the Times says, Sun/Oracle will have to rely on Fujitsu for SPARCs (and Intel otherwise). Unfortunately, Fujitsu is cutting its R&D budget and is unprofitable at present. Sun's cancellation of Rock comes just after Intel announced yet another delay for Tukwila, the next-generation Itanium, now pushed to 2010. HP is the sole major Itanium vendor. The primary beneficiaries of this CPU turmoil: IBM and Intel's Nehalem x86 CPU business."
another one bites the dust! x86 uber alles! (Score:2, Insightful)
Yuck.
Some days I hate this industry.
RPS (Score:5, Funny)
Sun Kills Rock CPU, Says NYT Report
Sun has instead moved on to develop the superior Paper CPU while critics argue about the hypothetical "Scissors CPU" that competitors may be secretly developing.
Re: (Score:3, Informative)
You forgot [wikipedia.org] the low-cost, low-power Lizard CPU (being developed by the designers of ARM CPUs) and the highly logical Spock CPU (from AMD, of course).
Re: (Score:3, Funny)
Actually, the real problem is that Rock CPU faced off with Guts CPU, Bomb CPU, Fire CPU, and Ice CPU, but hasn't been able to handle Cut CPU.
Re: (Score:3, Funny)
Re:RPS (Score:5, Funny)
critics argue about the hypothetical "Scissors CPU" that competitors may be secretly developing
I've seen the supposed specs for the scissors cpu, and I can attest that rock would have absolutely crushed it.
Re: (Score:2)
I think you'll find that any office supply company can provide scissors which beat Sun's CPUs.
Re: (Score:2, Insightful)
Re: (Score:2)
Actually it was crap (Score:5, Funny)
Oracle will jettison the entire hardware division. (Score:3, Insightful)
Unlike Sun (which will no longer build processors), Fujitsu does build processors and the servers that incorporate them. Building the processors gives Fujitsu engineers intimate knowledge of how the chips work and enables them to optimize the processors' connection to the rest of the server ecosystem. Lacking this ability, Sun engineers will not be able to build servers that match the capabilities of Fujitsu's.
Re:Oracle will jettison the entire hardware divisi (Score:4, Interesting)
The logical conclusion is that Oracle will jettison the entire hardware division.
I don't think that'll happen. I think Larry wants you to buy Oracle (the database) running on Oracle (the OS) on Oracle (the hardware) and support contracts for the entire stack. There's a lot of PHB love for being able to call one phone number for anything that breaks because the same company is responsible for every component. IBM currently offers this, and now Oracle can, too.
Re: (Score:3, Interesting)
I don't think that'll happen. I think Larry wants you to buy Oracle (the database) running on Oracle (the OS) on Oracle (the hardware) and support contracts for the entire stack. There's a lot of PHB love for being able to call one phone number for anything that breaks because the same company is responsible for every component. IBM currently offers this, and now Oracle can, too.
True. But none of the above requires Oracle to manufacture one screw, chip, or board of hardware. OEM servers from Fujitsu (or Dell) would do.
Re:Oracle will jettison the entire hardware divisi (Score:2)
No one in their right mind buys something they don't want just because their friend is the salesperson. Much less pays extra for it!
Larry is way smarter than that, and I suspect he's looking at the chance to go from a database company to a whole-line vendor, just like IBM was back in the mainframe days.
I'll happily believe IBM would have laid off everyone, starting with the hardware folks. I'll bet they're cursing having missed the chance to buy Sun.
--dave
Re:Oracle will jettison the entire hardware divisi (Score:4, Insightful)
Right. Spend $5 billion for a company and then shut down 90% of it.
Re: (Score:2)
Re: (Score:2)
128-core SMP enterprise hardware is not a competitive low margin business: 1-4 core small servers are. For the latter market, Sun sells 1U AMD and Intel boxes (;-))
--dave
Re: (Score:2)
If you get rid of the processor business (which most need to do) to make sure you don't have to pay for all of that R&D (and possibly fabs), then you have a much harder time differentiating yourself from the competition. Additionally, Intel is mak
Re: (Score:2)
90% of annual revenues does not equal 90% of value to Oracle.
Maybe not, but it's worth something. The other 10% is worth pretty close to nothing to Oracle. Yeah, it's "high margin" (when it makes a profit at all), but in dollar terms it's hardly worth Oracle's time, never mind $5 billion in cash.
Re: (Score:2)
They are going to find a way to make money with the hardware because you can't just get rid of it without seriously pissing off customers that also might/already purchase Oracle DB or applications. But you can bet they are going to be smart about the process.
Re: (Score:2)
Apple has a history of making ASICs. In fact, the first Intel Macs were the first Apple machines with no Apple ASICs. The last example was the PHB chip on G5 systems: Apple-designed, IBM-fabbed. They currently sell chips for iPod authentication, fabbed in Taiwan. The rumors are that they are designing the SoC for a new line of small devices. Apple is very much in the hardware design game, and positions continue to show up on job listings.
Sun kills rock! (Score:2)
Re: (Score:2)
Uggggh.
Re: (Score:2)
Sun kills Rock? (Score:4, Funny)
Wait, so if sun kills rock, sun burns paper, and sun melts scissors... SUN IS INVINCIBLE!
Re: (Score:2)
Just as paper covers rock, Burns covers sun!
Um, Opteron? (Score:5, Insightful)
Not that I am an AMD fanboy, but my dual-Opteron PC just ordered me to remind you all that AMD will also benefit from this choice. Indeed, Sun already uses AMD Opteron parts for some of its servers....
Re: (Score:2)
Maybe with the loss of some of the hardware Sun produced, Oracle will purchase AMD.
Re: (Score:2, Interesting)
Re: (Score:2)
Re: (Score:2)
They also use Intel (in fact, IMO they seem to like their Intel partnership more, probably because AMD is struggling these days). So I don't see how this would benefit AMD alone...
Re: (Score:2)
True. But Sun also make Nehalem servers. And lately Nehalem has been getting a lot more interest.
More likely reason (Score:5, Interesting)
It is more likely that Sun compared Rock to Fujitsu's new SPARC CPU and realized that it could not compete on price/performance. Frankly, looking at the two, Sun made the wise move: it killed off the weaker chip and will likely push forward the SPARC64 VIIIfx, which is further along in development and will be ready sooner.
Re: (Score:3, Interesting)
Re:More likely reason (Score:5, Insightful)
The Rock is an amazing chip on paper. It runs an extra fetch/decode portion of the pipeline a few cycles ahead of the main instruction stream, so that data is already being loaded into the cache before the pipeline actually needs it.
If this technology doesn't work, however, Rock is a pretty unimpressive chip, and there is no evidence that it does actually work (for example, it doesn't predict across computed jumps, which account for a lot of cache misses in current chips). Even if it does work, Rock looked like it would perform best on the kind of workloads where the T2 does well, but probably not as well as the T2. Of the SPARC64 series, Rock, and the T2 and its successors, Rock is by far the weakest. The SPARC64 does well on traditional workloads, the T2 on heavily parallel workloads. Between the two, Sun already has processors for pretty much any market they want to be in; Rock just doesn't fit commercially. Note that, contrary to the summary's implication, there is no indication that they are killing off the Niagara line: they aren't exiting the CPU business, just killing one failed experiment. Not the first, and probably not the last, time Sun has killed off an almost-finished CPU because there was no market for it.
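To make the computed-jump point concrete, here is a minimal C sketch (mine, not from any Sun material): a bytecode-style dispatch loop. A run-ahead scout can prefetch the straight-line data loads, but the branch target depends on data loaded at runtime, so the instruction-cache miss at the callee cannot be hidden.

    /* Illustrative only: the kind of control flow a run-ahead scout
     * has trouble with.  The call target isn't known until the load
     * of handler[ops[i]] completes, so the scout cannot run ahead
     * into the callee and warm the instruction cache. */
    typedef void (*op_fn)(void);

    void dispatch_loop(const unsigned char *ops, int n, const op_fn *handler)
    {
        for (int i = 0; i < n; i++)
            handler[ops[i]]();   /* computed jump: target depends on data */
    }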
Re: (Score:2)
Re:More likely reason (Score:5, Interesting)
I was at a talk by a former Intel chief architect a while ago that explained this. It takes an absolute minimum of about five years to get a new CPU to market. When you start, you have to make guesses about the kind of workload people will be running, their power and financial budgets, and the process technology that will be available to you for producing it. Once you've made these guesses, you can generally come up with a chip that meets the requirements.
The Pentium 4 is the canonical example of a chip made with bad guesses. The P4 team were told to make it fast at all costs. They missed the market, because they didn't notice that people were starting to care about power consumption, and few people wanted a 120W CPU - especially not in the data centre, where the margins are high but power and cooling are expensive. They also made some bad guesses about process technology, thinking that the process guys would fix the leakage problem so they could ramp the clock speeds up to 10GHz. They came up with a design that scaled up to 10GHz, but it needed a process technology that still doesn't quite exist to produce it at those speeds.
I suspect something similar happened with Sun. They made some bad guesses about how well the thread scout would work. It's a nice idea on paper, but it doesn't seem to perform well. The result is that Rock would perform better than other approaches on highly deterministic, CPU-bound workloads with lots of threads, while in the real world highly parallel workloads tend to be I/O-bound or have less predictable code flow.
The T2 goes in completely the opposite direction. It contains a set of very simple cores. They omit most of the complex logic found in other processors, and instead just have a lot of execution engines. If you have a workload that contains a lot of I/O-bound threads, then the T2 gives insanely good performance (both per Watt and per dollar). Sun began designing this family of chips right at the peak of the .com boom, and they are perfectly suited to web-serving workloads (they also do well on a lot of database workloads, which is one of the reasons Oracle is interested in them).
One of the things Sun does very well is recycle technology. There are a lot of half-dead projects at Sun that are not commercially exploited, but have fed ideas into their other products. Even though Rock is dead, I wouldn't be surprised to see some of their ideas appear in the T3 or T4. The hardware scout is only useful on a few workloads, but it's relatively easy to implement on something like the T2, so we may see it reappear in a future design.
Re: (Score:2, Insightful)
The Pentium 4 is the canonical example of a chip made with bad guesses. The P4 team were told to make it fast at all speed. They missed the market, because they didn't notice that people were starting to care about power consumption
There are a number of problems with your analysis, not least that the Pentium III is faster clock-for-clock than the Pentium IV at almost all workloads; its failing was that it did not scale, but it begat the Pentium M and, to some degree, the Core architecture. Sun has been wildly flailing its arms trying to come up with an architecture worth carrying into the future. So far, no dice. This is just more of the same. Totally canning two architectures ought to be the end of Sun's attempts to make new SPARCs.
Comment removed (Score:4, Interesting)
Re: (Score:2)
There are a number of problems with your analysis, not least that the Pentium III is faster clock-for-clock than the Pentium IV at almost all workloads;
It is, but IIRC Intel thought at the time that they would be able to push the P4 to crazy clock speeds, which would more than make up for the lower performance per clock.
Unfortunately they didn't get the clock speeds they had hoped for, and the high clock speeds they did get required very high power consumption.
Re: (Score:2, Insightful)
There are a number of problems with your analysis, not least that the Pentium III is faster clock-for-clock than the Pentium IV at almost all workloads; its failing was that it did not scale, but it begat the Pentium M and to some degree, the Core architecture.
How is that a problem with his analysis?
The Pentium 4 was designed to achieve high performance by having really high clocks to compensate for its poor per-cycle efficiency. It hit 3.8 GHz in late 2004, on a 90nm process. 4.5 years later, on 45nm, we still don't have any current processor design which clocks that fast (outside of overclocking, but then again P4 still overclocks higher than any current production processor -- IIRC people have gotten them over 8 GHz on liquid nitrogen).
The P3 basic design continued on in the Pentium M and, to some degree, the Core architecture.
Re: (Score:2)
There are a number of problems with your analysis, not least that the Pentium III is faster clock-for-clock than the Pentium IV at almost all workloads; its failing was that it did not scale
Performance = Instructions Per Cycle * Cycles Per Second. Yes, the P4 had lower IPC than the P3, but it did in fact scale very well with frequency: both in the sense that its highly pipelined design allowed for higher clock frequencies, and in that IPC didn't drop off as fast with increased frequency as it did on the P3.
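To put some illustrative (entirely made-up) numbers on that formula: a P3-style core at 1.4 GHz with an IPC of 1.0 retires 1.4 billion instructions per second, while a P4-style core at 3.8 GHz with an IPC of only 0.6 retires about 2.3 billion. Lower efficiency per clock, higher performance overall; the design only loses if the clock scaling stalls, which is exactly what happened.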
Re:More likely reason (Score:5, Informative)
I decided to post anon as I worked at Sun during the tail end of Cheetah and the beginning of Rock.
Rock (aka Turd Rock from the Sun) was not the first turd from Sun. The last one was USIII (Cheetah). What happened there is that it got delayed, and by then the L2 cache it had been designed for was not sufficiently larger than the competition's (I think the original idea was 1 or 2 MB configs), so the option was added to support really big L2 caches. One of the pie-in-the-sky ideas early on was putting the L2 tags on the die for speed, so by then there was no room for more tags. If I recall correctly, you ended up with a 512-byte L2 cache line size if you had 8MB of L2 cache. Plus, since the designers had addressed the problem of waiting around for a cache line to fill by giving it a special-purpose wide, fast bus, there was not much sectoring: either no sectors or only two, I cannot remember (by the USIIIi all this broken L2 cache design had been rectified, so I am fuzzy on what was when). So say there were two. What would happen on a cache miss is that the 256-byte sector that was needed would fill; when it was done, the instruction stream would continue (no amount of reordering would prevent a pipeline stall while filling 256 bytes), and then the other sector would start filling. Now imagine that cache miss was for data. How often do you look at data structures that are 512 bytes big (the common random-access case)? It turns out 64 bytes is a good real-world figure that is ideal 95% of the time. Just think about how much memory bandwidth and time was being wasted. Now imagine that cache miss was for an instruction. 512 bytes is 128 SPARC instructions, and in 95% of code there is a branch within 16 instructions.
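A hypothetical C sketch (not Sun code) of why the huge line size hurts the common random-access case: a pointer-chasing loop touches only a handful of bytes per node, but every miss drags in the whole line.

    /* Hypothetical illustration: walking a linked list touches only
     * the 'next' pointer and key of each node (~16 bytes), but with
     * a 512-byte L2 line every miss fills 512 bytes, and the
     * pipeline stalls while a 256-byte sector streams in. */
    #include <stddef.h>

    struct node {
        struct node *next;
        long         key;
        char         payload[112];  /* rest of a typical small object */
    };

    long sum_keys(const struct node *n)
    {
        long sum = 0;
        for (; n != NULL; n = n->next)   /* each hop is likely a miss */
            sum += n->key;
        return sum;
    }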
So you might think: how can something like this happen? The reason is that the hardware people were their own kingdom, and the UltraSPARC people a fiefdom within it. They (#1) did not think like software engineers and came up with pie-in-the-sky ideas (like that L2 cache) which led to delays (another thing they could have done is made L1 caches that were physically tagged, but that is okay, Sun engineers had been dealing with page coloring for years already), and (#2) did not simulate early enough. When they did run simulations, they had everything already worked out on paper for up to 2MB of L2, and things were good. Then they just did tweaks and did not run simulations again until much too late. The simulations showed that for almost all cases USIII was slower with an 8MB L2 cache than with 2MB. Think about that.
Rock was more of the same. In fact, the simulation was done even later. The pie-in-the-sky idea was the leapfrogging prefetcher (they called it a hardware scout). When they ran simulations after doing a bunch of work on it, they saw that, given the way typical code branches, it was not all that good for the added memory bandwidth consumption. So they added a few tweaks, but it was hopeless. So they needed something else to make the chip worthwhile: transactional memory. Did they do it à la PPC et al., with reservations on cache line boundaries? No, they came up with a scheme with two new instructions and a status register. You did a chkpt instruction with a PC-relative fail address to jump to in case something was not guaranteed to be atomic. At the end you did the commit instruction. If something got in the way before everything got out of the write buffer, you would arrive at the fail address, where you could check the CPS register for info, and nothing was committed. Can anyone else see how difficult this would be to get right? They were hardware guys, and they did not see how hard a problem it was? In fact, the implementation they had had conditions like: if an interrupt occurred, or if you did a divide instruction, you would end up at the fail address (yes, even if the other core on the die did it). My hunch is that the complexity of this transactional memory scheme is what delayed Rock for more than 2 years.
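For anyone who hasn't seen this style of interface, here is a rough C sketch of the programming model described above. The chkpt()/commit() names and semantics are my invention for illustration only; Rock's real interface was two new instructions plus the CPS status register, not a C API.

    /* Sketch only: chkpt()/commit() are hypothetical stand-ins for
     * Rock's chkpt/commit instructions.  chkpt() behaves like setjmp:
     * it returns nonzero when the transaction starts and zero when
     * execution lands back at the fail address after an abort. */
    extern int  chkpt(void);
    extern void commit(void);

    int txn_increment(volatile long *counter)
    {
        for (int tries = 0; tries < 8; tries++) {
            if (chkpt()) {
                *counter += 1;   /* speculative until commit */
                commit();        /* drain the write buffer atomically */
                return 0;
            }
            /* Aborted.  On Rock this could happen not just on real
             * conflicts but on an interrupt or even a divide
             * instruction, so a non-transactional fallback path
             * (e.g. grabbing a plain lock) is mandatory. */
        }
        return -1;               /* caller falls back to a lock */
    }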
Another example was Jaguar, the USIV. For that one they decided that they could have less frequent pipeline stalls i
Wow, there's not much left then. (Score:5, Interesting)
Re:Wow, there's not much left then. (Score:5, Informative)
Well there's IBM. And they don't seem to be slowing down:
POWER 6 [wikipedia.org]
POWER 7 [wikipedia.org]
also:
http://www.theregister.co.uk/2008/07/11/ibm_power7_ncsa/ [theregister.co.uk]
POWER 7 sounds like crazy town...
heat? (Score:2, Insightful)
Well there's IBM. And they don't seem to be slowing down:
POWER 6 [wikipedia.org]
POWER 7 [wikipedia.org]
also:
http://www.theregister.co.uk/2008/07/11/ibm_power7_ncsa/ [theregister.co.uk]
POWER 7 sounds like crazy town...
The one thing I like about the Niagara-based CPUs (UltraSPARC-Tx) is that they're fairly low wattage for the work that they can do. These 4 and 5 GHz chips from IBM seem like they're going to be dumping heat like mad.
Unless you're doing HPC, and are willing to go into water-based cooling in your DC, it seems excessive to some extent.
Anyone have experience with POWER and how it differs from SPARC? It seems that there's a product split in SPARC, but everyone else (IBM, Intel, AMD) seems to have a one-s
Re: (Score:2)
It doesn't really benefit IBM (Score:3, Interesting)
Mostly, it just benefits Intel and AMD. Sun loses its high-end chip, which theoretically hurts its high-end offerings, but its high-end servers are a rapidly declining piece of its revenue. I've thought that Sun should drop SPARC entirely, except for supporting legacy customers. The Niagara chip is an interesting concept, but most people today just want Intel/AMD chips in their servers.
Re: (Score:2, Insightful)
People want more viruses? A virus is targeted at an architecture and API; if you consolidate everything onto a single combination, you wind up with a perfect storm for virus spreading. Witness the Irish Potato Famine.
I say we need more diversity of architectures, OS's, platforms and API's to prevent a Pandemic of computer malware. I still laugh at the memory of witnessing conficker trying desperately to install itself on my SPARC Kubuntu machine.
Re:It doesn't really benefit IBM (Score:4, Insightful)
By your own example, though, clearly the current level of diversity hasn't helped mitigate the spread of malware, since conficker was able to install on many many PCs.
Then, if we decided we needed more diversity, how many more? I can't see having 10 major OSes making a difference, perhaps 50 with wide distribution.
So now, businesses, software developers, hardware manufacturers, tech support organizations have to support 50 different operating systems? Where's the ROI on that? How will we hire enough people who are trained on that many different configurations?
Certainly, we all want better computer security, but improving security by increasing IT complexity is like permanently banning travel between countries for fear someone might bring in a disease. It solves the problem, but damages everyone in every other way.
Re:It doesn't really benefit IBM (Score:4, Insightful)
Discounting x86 for big-iron server systems because it would otherwise attract viruses (much like the potato famine) is ridiculous. I think you're paranoid.
Re:It doesn't really benefit IBM (Score:4, Interesting)
I don’t think it would have been wise for them to kill their biggest-selling product.
Re: (Score:2)
Sun's hardware is hardly Oracle's biggest-selling product.
And, remember, Ellison explained the purchase of Sun entirely in terms of Sun's software (Java and Solaris), making no reference to its hardware.
Re: (Score:2)
Re: (Score:2)
Rock is the high-clock-speed chip, while the Ultra VII and future variants are the high-end chips, which are absolutely necessary for things like the transaction processing loads of an eBay, much less a bank or large retailer.
--dave
To summarise the article: (Score:3, Insightful)
How I love this industry
I'm sorry, what? (Score:2, Funny)
Intel announced yet another delay for Tukwila, the next generation Itanium
Please tell me that's not an actual product name. (apologies [slashdot.org])
MySQL (Score:2)
The summary is misleading.... (Score:5, Insightful)
Rock was Sun's effort to develop a processor with high single-thread performance. Single-thread performance doesn't help the database performance of Sun's new Oracle overlords. What databases need is high multi-thread performance.
The Niagara line ( http://en.wikipedia.org/wiki/UltraSPARC_T1 [wikipedia.org] ) provides the proper architecture for improving database performance, and this effort by Sun has the added benefit of actually producing shipping products (unlike Rock).
At this time, Oracle/Sun has NOT announced the killing off of further Niagara development.
The whole *article* is misleading.... (Score:3, Informative)
The article reads a lot like FUD written by Microsoft about particularly threatening Linux advances.
I just benchmarked a huge Oracle configuration on T5240/T5440, M5000s and M9000s, and it really made my little heart beat fonder (;-))
--dave
This Was Always Going to Happen (Score:5, Insightful)
1. Catch up to x86 platforms in terms of raw performance, since most SPARC systems have tended to overlap with workloads x86 systems have taken over. Papering over the cracks by promoting 'CoolThreads' and parallel processing as a way around this performance gap was never going to work. I can remember, almost ten years ago, working somewhere where a person discovered that their 1.4GHz Athlon desktop system had several times the performance of their UltraSPARC III server and could complete tasks several times sooner. Cue lots of panic, as the UltraSPARC had been justified because it was 'enterprise' reliable.
2. Accept the inevitable and throw in the towel.
3. The third way: do what IBM has done with Power and push it into a high-end, high-premium niche. This is difficult because even IBM can only cover Power by selling mainframe packages and a whole bunch of add-ons to make it pay. Sun has had difficulty with this because its hardware division has always relied on hardware sales alone.
Option 2 has clearly become the only way out once Sun's difficulties resulted in a takeover, and as poor as Oracle might be at some things, they are extremely good at judging bottom lines.
Re: (Score:2)
Actually, Sun's been doing 3) for years, designing chips to work with a big, fast backplane (ex-Cray, at one time), which is what Fujitsu has specialized in.
The Rock is their high-clock-speed box, not their big database box.
--dave
Re: (Score:2)
I'm guessing yours is not 400MHz 4MB non-mirrored e-cache model, but I digress...
The USII was arguably the right processor at the right time, the best to come out of Sun. Its competitors were the Power3 (which came out the next year), Alpha 21264, PA8500, and R12000. Nothing from Intel or AMD at the time came close. The Power3 was the closest in performance, better in some respects, but AIX was very unpleasant, and the cache tended to be better on the USII for the same price. The Alpha was getting dated, but you could get ones
Perhaps Fujitsu's SPARC line (Score:2)
R.I.P. SPARC (Score:2)
You will be missed.
I don't think that's right-- (Score:2)
I don't think Sun kills rock--I think sun burns paper, paper covers rock and rock blots out sun..
Resignation (Score:2)
Re:Very Interesting... (Score:5, Funny)
You may want to check your internet connection, I think your post has ended up in an alternate-universe Slashdot. How's the economy over there?
Re: (Score:2)
How's the economy over there?
Terrible, but I'm hopeful President Oprah will be able to turn things around.
IBM bought Sun? (Score:5, Funny)
Oracle is gonna be pissed.
Re:Very Interesting... (Score:5, Funny)
Re:What are these architectures good for... (Score:5, Insightful)
Scale. x86 cannot scale up anywhere near as far as SPARC (or even MIPS, for that matter) can. You realize that the cheapest SPARC can handle more threads per cycle than a dual-quad Xeon, and do it while using less electricity, right? As for the big-iron chips, they handle databases on a scale that dwarfs the address range of x86, relying on more registers than even exist in the x86 architecture.
Re:What are these architectures good for... (Score:5, Insightful)
Yes, and all those threads you get have access to crappy FPUs and horrible memory bandwidth.
It's true that you can slap a lot more Sparc CPUs into a single machine than you can with x86, but since you're actually going to need all those CPUs to match even an off-the-shelf dual quad-core Opteron system for most tasks, the end result is that you're still spending much, much more money and probably sucking more power too. For tasks that cannot be parallelized or executed concurrently, Sparc is rubbish in every aspect imaginable.
I work at one of those companies that got lured into standardizing on Sparc hardware years ago, and now we're kind of stuck on it because we have all those systems in the field, with customers. A while ago we investigated upgrading to newer Sparc hardware (M3000) and we leased a test system to assess its performance. For computationally intensive (FPU) tasks running 8 threads, the ~$11,000 Sparc64 IV with 8 cores / 16 hardware threads was about as fast as a $400 Core 2 Duo laptop. I'm not kidding....
So unless you want to run an enterprise database that has to handle thousands of requests a second, Sparc has zero added value. Whether you really need a Sparc system for high-load, high-availability server tasks, I don't know; I'd guess a Power6 server or a rack of Opterons or Xeons wouldn't do much worse.
Re:What are these architectures good for... (Score:5, Insightful)
To get close to an off-the-shelf AMD or Intel system performance-wise, your SPARC systems need to be running hell-for-leather at 100%, drawing maximum power permanently. The Xeon or Opteron systems will be able to scale up and down far more comfortably, so when comparing these systems you are never comparing apples with apples, because the performance is just not comparable. Unless you have thousands of *completely independent* requests to handle per second, a SPARC system is useless to you, and the writing has been on the wall about that for the past ten years.
Re:What are these architectures good for... (Score:5, Interesting)
Several years ago, I had the opposite problem with a real-world OLTP load. I replaced a 5-year-old quad SparcII 450MHz machine with a dual Opteron 2.4GHz. The Opterons had 3x the total MHz, 4x the RAM, more PCI bandwidth, and faster disks. They were half the price of the Sparc replacements, so I was not allowed to evaluate the Sparc options. I guesstimated that the new Sparc option would have been 2x faster and handled 4x the transactions compared to the 5-year-old machines.
The Opterons were slightly faster, but did not handle load spikes nearly as well. Had I been allowed to purchase the 5-year-old hardware used, I probably would have been better off sticking with it. With hindsight, including all the architecture conversion problems and software upgrade issues I had, the old-but-tested hardware would have been a big win. (Note: I had the ability to scale my database horizontally very easily, so old machines were still useful machines.)
For a database server, I highly recommend that a Sparc-based machine be evaluated next to any x86-based machine. They cost more upfront, but I found them to be cheaper in the long run.
Re: (Score:2)
What OS did the Opteron run? I guess Sun puts a lot of effort into tweaking Solaris for optimal performance in server tasks, so a desktop Linux install on x86 with a low-latency kernel might in fact fall apart under high load, and comparing the two setups like that might be misleading. There are lots of ways to configure a Linux kernel that really sucks under high load, even on the fastest of hardware. Try an early 2.6 kernel with the old VM, and you can slow a brand-new machine to a crawl by simply spawning a lot
Re: (Score:2)
The Sparcs had Solaris 8 64-bit, the Intels had RedHat Enterprise 4.2 64-bit, both running MySQL. I think the problem was partly that MySQL had been better optimized for 64-bit Solaris, but not so much for the Intel, mostly because x86-64 was new enough that it hadn't had time to be optimized.
It was a database server. I needed very little CPU and a lot of IO. The Sun machines are designed to do IO. Even their X86 machines do some nice things for heavy IO loads. For my comparison, the 5yo Sparc machine
Re: (Score:2)
I'm really sorry but that just doesn't match my experience of reality.
The stuff I run on Linux and Solaris is about as FPU-bound as it gets: it does non-linear regression of sets of very complex, multi-dimensional model functions. Inside the fitting loop it's all linear arithmetic (using LAPACK/BLAS on x86 and sunperf on Sun), lots of large matrix/vector operations, etc. I'd estimate the ratio of FPU code vs. control logic + system calls at around 90%-10%. Since the model functions are separable over t
Re: (Score:2)
The M3000 has DDR2-533 ECC RAM; can you see now that you were not compute-bound (unless your dataset was less than 4MB or so)? Also, it uses Fujitsu SPARC64 processors, which typically have worse FP performance than the UltraSparc IV+ even years later. The other thing is: were you doing 32-bit or 64-bit FP? Using 32-bit FP plus the SIMD features on AMD/Intel can be a lot faster than 32-bit FP on the Sparc. Sparc FP went through a few revisions over the years. What you have now is 32 double-precision registers, the firs
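To illustrate the 32-bit FP point (my example, nothing to do with the grandparent's actual code): a single-precision loop like the one below is something x86 compilers of that era could auto-vectorize with SSE, processing four floats per instruction.

    /* Illustration only: single-precision SAXPY.  With SSE a compiler
     * can process 4 floats per instruction, which is one reason
     * 32-bit FP on x86 could pull well ahead here. */
    void saxpy(int n, float a, float *restrict y, const float *restrict x)
    {
        for (int i = 0; i < n; i++)
            y[i] = a * x[i] + y[i];
    }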
Re: (Score:3, Insightful)
The problem is that a Xeon will complete each thread [task] in far less time than the SPARC and be on to the next one, and the workloads that most organisations have depend entirely on completing ever-larger single tasks in the shortest time possible.
Re: (Score:2)
Re: (Score:2)
You realize that the cheapest SPARC can handle more threads per cycle than a dual-quad Xeon, and do it while using less electricity, right?
Err, no. The cheapest SPARC is probably still some UltraSPARC IV+ thingy, and those are absolutely hopeless (and single-core). The T2 might have a chance on a few workloads, but Intel will very soon have 8-core, 16-thread CPUs out with twice the clock rate and much better per-thread IPC than the T2.
As for the big-iron chips, they handle databases on a scale that dwarfs the address range of x86,
Nehalem supports 44 physical address bits. I'll bet you that there are no NUMA or SMP SPARCs out there with 16TB of memory. Indeed, the T2 is limited to 40 address bits, or 1TB.
relying on more registers than even exists in the x86 architecture.
The number of regist
Re: (Score:3, Informative)
What keeps this SPARC space alive?
Solaris.
Sun has maintained backward compatibility for applications for decades. You rarely encounter "oops, you need libc.2.0, but that is not supported on the newer kernels." Also, the command-line system administration tools (especially for troubleshooting) are comprehensive (dtrace, truss, ptree, prstat, psrset, ...).
Re: (Score:3, Interesting)
There's a lot to be said for backward compatibility. I recently migrated a very old database off of a Solaris 2.6 system and moved it to Solaris 10. I didn't have to search for back leveled software, the application just worked. Granted, this isn't something I need to do every day, but it's an invaluable feature to have when you're dealing with trying to support enterprise applications that just refuse to die.
Re: (Score:2)
You're not seriously attempting to take a back-handed swipe at Linux by bigging up Solaris's command-line admin tools, are you? I mean, have you actually looked at a recent release of RedHat or SuSE? You must be barking mad. Linux has a very rich and comprehensive set of command-line tools at its disposal. Your few examples are easily matched:
dtrace = systemtap
truss = strace
ptree = pstree
prstat, prset = taskset
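For what it's worth, day-to-day usage maps over just as directly: truss -p <pid> on Solaris and strace -p <pid> on Linux both attach to a running process and print its system calls as they happen.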
Re: (Score:2)
whoops, I'm getting sleepy.
prstat = ps
Re: (Score:2, Funny)
I totally agree with you!
Now excuse me while I go pitch a Windows ME + celery + mySQL solution to eBay and give them the 'real' facts.
Re:What are these architectures good for... (Score:5, Insightful)
What you say is often, but not exclusively, true. The main reason people buy SPARC:
I agree that in many cases, proprietary kit is overpriced and unnecessary. Which is why it's on the decline...
Re:What are these architectures good for... (Score:5, Insightful)
Yeah, but the thing is that 32-CPU systems are incredibly niche. I've been involved in projects that delivered a number of systems of that size over the years, and I can count on one hand the times they've been used as single 32-CPU systems. In virtually all cases they were hard-partitioned down to 4-, 8- and sometimes 16-CPU systems. And x86 is walking all over that market now. Next year, when the Nehalem-EX chips ship, you'll get your 32 cores on a standard 4-socket server, with twice as many threads. It just shoves the high-end systems more and more into a tight corner. RAIDed memory is great, but that alone is not worth the premium that proprietary solutions are burdened with.
Re: (Score:3, Informative)
Re: (Score:2)
I think 2010 will be a big turning point for x86, well specifically Intel's x86 anyway. There will be a convergence of technologies: Intel's new QuickPath Interconnect enabling high speed links between Nehalem-EX chips and IO Hubs on the motherboard, with affordable 10GigE ethernet, 8Gb/s FibreChannel and 6Gb/s for SATA. You'll probably still be pinning your big database tables in memory though as your expectations will be higher ;-)
Re: (Score:2)
Not joking, but betting that your business parallelizes wonderfully, so you can break up your transaction processing over a Beowulf cluster. Alas, large transaction processing tends to require something like a POWER or SPARC, to get 128 cores with a common locking architecture working on driving a large database.
This is the traditional IBM/SUN/H-P space, and fits, in rough order of difficulty, large manufacturing, large on-line retail, medium and higher regular retail, banks and telcos. Not my personal
Re: (Score:2)
Not joking, but betting that your business parallelizes wonderfully, so you can break up your transaction processing over a Beowulf cluster.
Nope. Only ever done one of those. It ran Fluent (on RedHat) and was for the aerodynamics group of one of the Formula One racing teams. The bulk of my experience comes from working for one of the big vendors initially, and then self-employed as a consultant in banking, telcos, pharmaceuticals and health. Some of the data-crunching software typically deployed in these areas: Ab Initio, Nucleus, Octopus and Oracle RAC.
Alas, large transaction processing tends to require something like a POWER or SPARC, to get 128 cores with a common locking architecture working on driving a large database.
Traditionally, yes. But times are changing as the top Intel Xeon and AMD chips are fi
Re: (Score:3, Informative)
It's closer to the other way around; ARM is the most widely used 32-bit architecture, and accounts for more than 75% of all 32-bit processors sold.
Really, the entire world has been forced onto the ARM monoculture (except perhaps for a few x86s at the high end).
Re: (Score:2)
Though the high-end ARMs may give it a good run for the money. The ARM family is getting too complex in my opinion, with multiple instruction sets. Despite the RISC heritage, it's no longer an architecture for the keep-it-simple crowd.
Re: (Score:2)
In the middle, Atom looks to be in a very good place, as Pentium M on PC104 (and its PCI brethren) has been for a while. PPC is being squeezed out of that arena, especially with the Motorola-to-Freescale and Emerson sales. IBM does not serve the likes of Emerson and GE well.
Linux not Microsoft. (Score:2)
Linux+x86 is Solaris+SPARC's main competitor. Not Microsoft Windows+x86.
The stuff people would want to run on SPARC machines can usually be run on Linux+x86 with decent performance (and often better price/performance).
And if they really wanted they could also do Solaris+x86. So Sun's also responsible for that...
If people like vmware manage to provide _seamless_ high availability features that ar
Re: (Score:2)
copy cats (Score:2)
Don't worry: in 6-8 years Intel will copy all of that, just like they copied SMT (called hyperthreading by Intel) from Sun.
Re: (Score:2)
If you mean SMT as in multiprocessing, Sun copied that from other folks. If you actually mean HTT, then Intel came up with that years before the T1 from Sun, though the T1 uses more of a CMT idea. HTT was pretty bad in the P4, but returns in Nehalem. It works differently than the Sun Niagara approach (it only issues so many instructions per clock, from two threads, to fill in pipeline bubbles), but with the pipeline much wider than on the P4 it is much better and fairer. That said, if both of your threads