Sun Kills Rock CPU, Says NYT Report

BBCWatcher writes "Despite Oracle CEO Larry Ellison's recent statement that his company will continue Sun's hardware business, it won't be with Sun processors (and associated engineering jobs). The New York Times reports that Sun has canceled its long-delayed Rock processor, the next-generation SPARC CPU. Instead, the Times says Sun/Oracle will have to rely on Fujitsu for SPARCs (and Intel otherwise). Unfortunately Fujitsu is decreasing its R&D budget and is unprofitable at present. Sun's cancellation of Rock comes just after Intel announced yet another delay for Tukwila, the next-generation Itanium, now pushed to 2010. HP is the sole major Itanium vendor. Primary beneficiaries of this CPU turmoil: IBM and Intel's Nehalem x86 CPU business."
  • by Funk_dat69 ( 215898 ) on Tuesday June 16, 2009 @09:54AM (#28347015)

    Well there's IBM. And they don't seem to be slowing down:

    POWER 6 [wikipedia.org]

    POWER 7 [wikipedia.org]

    also:

    http://www.theregister.co.uk/2008/07/11/ibm_power7_ncsa/ [theregister.co.uk]

    POWER 7 sounds like crazy town...

  • Re:RPS (Score:3, Informative)

    by Ender_Stonebender ( 60900 ) on Tuesday June 16, 2009 @10:09AM (#28347175) Homepage Journal

    You forgot [wikipedia.org] the low-cost, low-power Lizard CPU (being developed by the designers of ARM CPUs) and the highly logical Spock CPU (from AMD, of course).

  • by Macka ( 9388 ) on Tuesday June 16, 2009 @10:37AM (#28347431)

    What keeps this SPARC space alive?

    Same as with all proprietary high-end solutions: customer ignorance. The customer goes to the vendors and says: "Here's my shopping list of business requirements. Please bid a solution that meets those needs." The vendor salesman (after wiping the drool from his/her chin) comes back with an Enterprise Class solution using proprietary high-end kit, at the highest price the salesman thinks he/she can get away with and still beat the competition to the bid. The whole thing is wrapped up in smoke and mirrors to make the customer feel valued and special, with the assurance that they're getting best in class, and topped off with a generous dollop of FUD dissing every other vendor's solution. Things like: "The x86 space is too aggressive and its 3-year turnover cycle is bad for your business. Use our systems, which have a 5-year life cycle, and get a better return on your investment." Or here's another one: "Our chips are built with advanced RAS features. They're self-healing and crash less often than x86." Oh, and let's not forget that to buy one of their enterprise solutions, you usually also have to buy their proprietary enterprise OS and pay their enterprise software license fees at their inflated enterprise prices.

    Perhaps you think I'm joking!

  • by davecb ( 6526 ) * <davecb@spamcop.net> on Tuesday June 16, 2009 @11:05AM (#28347775) Homepage Journal

    The article reads a lot like FUD written by Microsoft about particularly threatening Linux advances.
    I just benchmarked a huge Oracle configuration on T5240/T5440, M5000s and M9000s, and it really made my little heart beat fonder (;-))

    --dave

  • by isj ( 453011 ) on Tuesday June 16, 2009 @11:08AM (#28347821) Homepage

    What keeps this SPARC space alive?

    Solaris.
    Sun has maintained backward compatibility for applications for decades. You rarely encounter an "oops, you need libc.2.0, but that is not supported on the newer kernels" moment. Also, the command-line system administration tools (especially for troubleshooting) are comprehensive (dtrace, truss, ptree, prstat, psrset, ...).

  • Re:So, basically,... (Score:3, Informative)

    by akadruid ( 606405 ) <slashdot@NosPam.thedruid.co.uk> on Tuesday June 16, 2009 @11:48AM (#28348321) Homepage

    It's closer to the other way around; ARM is the most widely used 32-bit architecture, and accounts for more than 75% of all 32-bit processors sold.

    Really, the entire world has been forced onto the ARM monoculture (except perhaps for a few x86s at the high end).

  • by Anonymous Coward on Tuesday June 16, 2009 @12:08PM (#28348627)

    As a grad student studying computer architecture, I found Sun's Rock processor to be one of the most exciting new architectures of the past few years.

    Scout threads offer a lot of potential performance for single-threaded applications. A T2 can provide great throughput for a database, but the latency of individual requests is relatively high because of the very simple core architecture. Rock offers the possibility of lower-latency requests, although this comes at the cost of using more power.
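
    To make "latency-bound" concrete, here is a toy C sketch (my own example, not anything from Sun) of the kind of single-threaded pointer-chasing code a hardware scout is supposed to help: nearly every load of n->next misses the cache, and a simple in-order core just sits and waits.

        /* Toy example: a linked-list walk dominated by cache-miss latency.
           A scout thread could run ahead under the miss and prefetch the
           next few nodes while the main thread is stalled. */
        #include <stddef.h>

        struct node {
            struct node *next;
            long payload;
        };

        long sum_list(const struct node *n)
        {
            long total = 0;
            while (n != NULL) {
                total += n->payload;   /* trivial ALU work...            */
                n = n->next;           /* ...dominated by memory latency */
            }
            return total;
        }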

    Rock also includes support for transactional memory, which has been a hot topic in research for many years. T2 is great for applications that are highly parallel, but if you don't know how to write parallel programs, all those threads are wasted. Transactional memory provides a simpler paradigm for writing parallel applications than traditional lock-based approaches (there is a rough sketch of the programming model at the end of this comment).

    The fact that Rock includes both of these features made it very exciting and interesting. I think it's unfortunate and disappointing that Rock is getting killed before we get to see what it can really do. The first Itanium chip was terrible, but Itanium II was much better, and actually does a good job in a specific niche. The first Rock might not be perfect, but it represents a significant departure from previous designs, and I think it deserves a chance to prove itself and find its niche.
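
    As a rough illustration of the programming model only (this uses GCC's experimental software TM extension, -fgnu-tm/libitm, not Rock's hardware TM): you just wrap the critical update in a transaction, instead of working out which locks to take and in what order.

        /* Build with: gcc -fgnu-tm transfer.c
           Software TM sketch of the programming model; Rock would execute a
           region like this speculatively in hardware. */
        #include <stdio.h>

        static long accounts[2] = { 100, 100 };

        static void transfer(int from, int to, long amount)
        {
            __transaction_atomic {     /* atomic with respect to other transactions */
                accounts[from] -= amount;
                accounts[to]   += amount;
            }
        }

        int main(void)
        {
            transfer(0, 1, 25);
            printf("%ld %ld\n", accounts[0], accounts[1]);
            return 0;
        }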

  • Read this ... (Score:1, Informative)

    by Anonymous Coward on Tuesday June 16, 2009 @01:35PM (#28350261)

    Rock, Sun's third-generation chip-multithreading processor, contains 16 high-performance cores, each of which can support two software threads. Rock uses a novel checkpoint-based architecture to support automatic hardware scouting under a load miss, speculative out-of-order retirement of instructions, and aggressive dynamic hardware parallelization of a sequential instruction stream. It is also the first processor to support transactional memory in hardware.

    http://ieeexplore.ieee.org/xpls/abs_all.jsp?isnumber=4812126&arnumber=4812132

  • by afidel ( 530433 ) on Tuesday June 16, 2009 @02:45PM (#28351455)
    Not to mention that everyone selling 4-way and larger x64 servers offers RAIDed memory if you want it. My biggest gripe with x64 systems is the lack of sufficient I/O offloading. Heavy compute workloads are fairly easily handled by the CPU and memory subsystems, but when it comes to moving big piles of data to and from the network and storage they kind of suck. We get fairly good performance by pinning our big database tables in memory and by using TOE cards (which are poorly supported) for networking. There is some hope on the horizon, with many 10-gigabit Ethernet adapters being CNAs with a high degree of offloading, but it's one area where I think the x64 market needs to mature a bit more.
  • by Anonymous Coward on Tuesday June 16, 2009 @02:52PM (#28351573)

    I decided to post anon as I worked at Sun during the tail end of Cheetah and the beginning of Rock.

    Rock (aka Turd Rock from the Sun) was not the first turd from Sun. The last one was USIII (Cheetah). What happened there is that it got delayed, and by then the L2 cache it had been designed for was no longer sufficiently larger than the competition's (I think the original idea was 1 or 2 MB configs), so the option was added to support really big L2 caches. One of the pie-in-the-sky ideas early on was putting the L2 tags on the die for speed, so by then there was no room for more tags. You ended up with a 512-byte L2 cache line size, if I recall correctly, with 8 MB of L2 cache.

    Plus, since the designers had addressed the problem of waiting around for a cache line to fill by giving it a special-purpose wide, fast bus, they did not have much sectoring. There were either no sectors or only two, I cannot remember (by USIIIi all this broken L2 cache design was rectified, so I am fuzzy on what was when). So say there were two. What would happen on a cache miss is that the 256-byte sector that was needed would fill; when it was done, the instruction stream would continue (no amount of reordering will hide a pipeline stall while 256 bytes fill), and then the other sector would start filling.

    Now imagine that cache miss was for data. How often do you look at data structures that are 512 bytes big (the common random-access case)? Turns out 64 bytes is a good real-world figure that is ideal 95% of the time. Just think about how much memory bandwidth and time is being wasted. Now imagine that cache miss was for an instruction. 512 bytes is 128 SPARC instructions (4 bytes each), and in 95% of code there is a branch well before that.
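
    Putting rough numbers on it (my own back-of-the-envelope arithmetic, not anything from a Sun spec sheet): with the on-die tag array fixed at roughly the size needed for the original 2 MB design point, line size has to scale with capacity, and every miss drags in far more than the ~64 bytes you typically use.

        /* Back-of-the-envelope: 8 MB / 512-byte lines = 16K tags; the same
           16K tags at 2 MB implies 128-byte lines, so quadrupling the cache
           quadrupled the line size.  Filling 512 bytes when ~64 are useful
           wastes roughly 8x the bandwidth per miss. */
        #include <stdio.h>

        int main(void)
        {
            long tags     = (8L << 20) / 512;   /* 16384 tag entries          */
            long line_2mb = (2L << 20) / tags;  /* 128 bytes                  */
            long useful   = 64;                 /* typical useful data / miss */

            printf("tags: %ld, implied 2 MB line: %ld bytes\n", tags, line_2mb);
            printf("bandwidth wasted per 512-byte fill: ~%ldx\n", 512 / useful);
            return 0;
        }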

    So you might wonder how something like this can happen. The reason is that the hardware people were their own kingdom, and the US (UltraSPARC) people a fiefdom within it. They (1) did not think like software engineers and came up with pie-in-the-sky ideas (like that L2 cache) which led to delays (another thing they could have done is make the L1 caches physically tagged, but that is OK, Sun engineers had been dealing with coloring for years already), and (2) did not simulate early enough. When they did run simulations, they had everything already worked out on paper for up to 2 MB of L2, and things were good. Then they just did tweaks and did not run simulations again until much too late. The simulations showed that for almost all cases USIII was slower with an 8 MB L2 cache than with 2 MB. Think about that.

    Rock was more of the same. In fact the simulation was done even later. The pie-in-the-sky idea was the leapfrogging prefetcher (they called it a hardware scout). When they ran simulations after doing a bunch of work on it, they saw that, given the way typical code branches, it was not all that good relative to the extra memory bandwidth it consumed. They added a few tweaks, but it was hopeless. So they needed something else to make the chip worthwhile: transactional memory.

    Did they do it a la PPC et al., with reservations on cache-line boundaries? No, they came up with a scheme with two new instructions and a status register. You did a chkpt instruction with a PC-relative fail address to jump to in case something was not guaranteed to be atomic. At the end you did the commit instruction. If something got in the way before everything made it out of the write buffer, you would arrive at the fail address, where you could check the cps register for information, and nothing was committed. Can anyone else see how difficult this would be to get right? They were hardware guys, and they did not see how hard a problem it was. In fact, the implementation they had had conditions like: if an interrupt occurred, or if you did a divide instruction, you would end up at the fail address (yes, even if it was the other core on the die that did it). My hunch is that the complexities of this transactional memory scheme are what delayed Rock by more than two years.
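
    For the curious, here is roughly what using it looked like, as a C-ish sketch. The rock_chkpt()/rock_commit()/rock_read_cps() names are made-up wrappers standing in for the chkpt and commit instructions and the cps register; they are not a real Rock intrinsic API, and this is not runnable on real silicon as-is.

        /* Sketch of the checkpoint/commit usage pattern described above.
           The three rock_* functions are hypothetical stand-ins for the
           hardware instructions. */
        #include <stdbool.h>

        extern bool rock_chkpt(void);       /* chkpt: returns false when control
                                               arrives at the fail address      */
        extern void rock_commit(void);      /* commit: drain the write buffer   */
        extern unsigned long rock_read_cps(void); /* why the transaction aborted */

        struct node { struct node *next; };

        void atomic_push(struct node **head, struct node *n)
        {
            for (;;) {
                if (rock_chkpt()) {
                    n->next = *head;        /* speculative until commit */
                    *head = n;
                    rock_commit();
                    return;
                }
                /* Fail-address path: an interrupt, a divide, or contention
                   aborted us.  Inspect cps, then retry (a real program needs
                   a lock-based fallback to guarantee progress). */
                (void)rock_read_cps();
            }
        }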

    Another example was Jaguar USIV. For that one they decided that they could have less frequent pipeline stalls i
