
Sun Moves Into Commodity Silicon

Samrobb writes "According to Sun Microsystems CEO Jonathan Schwartz, Sun has decided to release its UltraSPARC T2 processor under the GPL. Schwartz writes, 'We're announcing the fastest microprocessor we've ever shipped this week — delivering 89.6 GHz of parallel computing power on a single chip — running standard Java applications and open source OS's. Simultaneously, we've said we're entering the commodity marketplace, and opening the chip up to our competition... To add fuel to the fire, the blueprints for our UltraSPARC T2... the core design files and test suites, will be available to the open source community, via its most popular license: the GPL.'" Sun is still working on getting these released; early materials are up on OpenSPARC.net.
  • by Anonymous Coward on Tuesday August 07, 2007 @06:15PM (#20148681)
    Go look at the CPU cycles per watt that the UltraSPARC T1 delivers.

    Now, figure the UltraSPARC T2 is better than that.
  • Power consumption? (Score:5, Interesting)

    by Toffins ( 1069136 ) on Tuesday August 07, 2007 @06:15PM (#20148691)
    I can't wait for somebody to design a new generation of desktop PCs that have lower power consumption than previous generations without sacrificing performance or graphics. Anybody know how much power typical UltraSPARC-based desktop PCs consume compared to Intel- or AMD-based desktop PCs?
  • Which GPL? (Score:5, Interesting)

    by junglee_iitk ( 651040 ) on Tuesday August 07, 2007 @06:16PM (#20148695)
    Not that it matters... just interested, but does anybody know if it is released under GNU GPL 2 or 3?
  • I'm thinking China. (Score:5, Interesting)

    by khasim ( 1285 ) <brandioch.conner@gmail.com> on Tuesday August 07, 2007 @06:24PM (#20148847)
    Depending upon how the patents (are there patents?) are handled. China has been researching its own chip designs for years. This could be a huge push for Sun if China abandoned trying to re-invent the wheel and just started cranking out UltraSPARCs.

    Not to mention that Windows won't run on them, but Linux will.

    And China would have a home source of chips for their IT industry and would not have to import Intel or AMD.
  • Various options. (Score:4, Interesting)

    by jd ( 1658 ) <imipak@ y a hoo.com> on Tuesday August 07, 2007 @06:32PM (#20148965) Homepage Journal
    One would be to build a simulator that is accurate at the level of silicon, so that you can cross-compile and run binaries for this CPU on a non-native architecture. Another would be to look at some specific module within the core and re-use the code within an OpenCores project. A third would be to reverse this - take OpenCores code (or write your own) and generate a module that would work within the T2 and would provide functionality the developers might want. A fourth would be to produce a specialized version of the chip (rad-hardened, for example) without paying license costs. And so on.
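    To make the first option concrete, here is a minimal sketch in C of the fetch-decode-execute loop at the heart of any instruction-level simulator. The opcode set, instruction layout and register count are invented for illustration; a real T2 simulator would implement the actual SPARC ISA (and, for silicon-level accuracy, the pipeline and caches as well).

        #include <stdint.h>
        #include <stdio.h>

        enum { OP_HALT, OP_LOADI, OP_ADD, OP_MUL };   /* hypothetical opcodes */

        typedef struct { uint8_t op, dst, src1, src2; int32_t imm; } Insn;

        /* Execute instructions until OP_HALT; the result is returned in r0. */
        static int32_t run(const Insn *prog) {
            int32_t reg[16] = {0};
            for (const Insn *pc = prog; ; ++pc) {
                switch (pc->op) {
                case OP_HALT:  return reg[0];
                case OP_LOADI: reg[pc->dst] = pc->imm;                       break;
                case OP_ADD:   reg[pc->dst] = reg[pc->src1] + reg[pc->src2]; break;
                case OP_MUL:   reg[pc->dst] = reg[pc->src1] * reg[pc->src2]; break;
                default:       return -1;   /* unknown opcode */
                }
            }
        }

        int main(void) {
            Insn prog[] = {
                { OP_LOADI, 1, 0, 0, 6 },   /* r1 = 6 */
                { OP_LOADI, 2, 0, 0, 7 },   /* r2 = 7 */
                { OP_MUL,   0, 1, 2, 0 },   /* r0 = r1 * r2 */
                { OP_HALT,  0, 0, 0, 0 },
            };
            printf("%d\n", run(prog));      /* prints 42 */
            return 0;
        }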
  • Sparc co-processor? (Score:1, Interesting)

    by Anonymous Coward on Tuesday August 07, 2007 @06:38PM (#20149045)
    What workloads does SPARC excel on? Are there any gains from running one as an add-in on PCIe, and could an existing VM solution be hacked to take advantage of it?

  • by Ungrounded Lightning ( 62228 ) on Tuesday August 07, 2007 @06:43PM (#20149107) Journal
    Not that it matters... [is it] GNU GPL 2 or 3?

    It actually matters a lot because Sun probably owns a lot of patents.


    Too true.

    If I've got this right: Under GPL3 anybody with foundry access could make the chip or a derivative, with no more patent issues than Sun itself would have. But under GPL2 they might have to enter separate license agreements to actually implement it.

    = = = =

    Presuming this release does make the chip open to anybody absent further licensing, it will be interesting to see how it affects Sun's future.

    On one hand it means any company that wants to could build the chip and sell it in competition with Sun (which has borne the development costs on the SPARC series - but recouped much of them already).

    On the other hand, they have a number of advantages: Already up and fabbing, deep understanding of the chip, etc.

    Further, one big source of resistance to adoption of their chips is the concern over what happens if Sun abandons the line, stops developing it, goes belly-up, or closes the design up again. With a perpetual license for others to build this chip and make improvements on it, that's no longer an issue. Even if Sun went belly-up and left them with no other sources, a big enough company with a product based on this chip could commission the fabrication of its own chips, rather than twisting in the wind for lack of supplies. So such a company can design this chip into its product line and buy it from Sun without betting the company on a possibly weak supplier.

    Let's see Intel or AMD compete with that. B-)
  • Re:Various options. (Score:5, Interesting)

    by mdmkolbe ( 944892 ) on Tuesday August 07, 2007 @06:49PM (#20149199)
    I do high performance numerical computation research, and something like this would help a lot.

    As part of my research I have to hand-tweak and tune the innermost loops of our algorithms. Unfortunately, the performance of modern processors behaves so counter-intuitively when pushing the floating-point units to the max that it is basically impossible to guess whether a given change will speed up or slow down the computation. Being able to know *exactly* what is in the CPU would help greatly with this.
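    As a minimal sketch of the kind of hand tweak being described (function names invented): both routines below compute the same dot product, but the second breaks the floating-point dependency chain with two independent accumulators. Whether that helps or hurts depends on FPU latency and issue width, which is exactly the sort of question an open core design would let you answer by reading the RTL instead of guessing.

        #include <stddef.h>

        /* Naive version: each add must wait for the previous one to finish. */
        double dot_naive(const double *a, const double *b, size_t n) {
            double s = 0.0;
            for (size_t i = 0; i < n; ++i)
                s += a[i] * b[i];
            return s;
        }

        /* Two accumulators: independent chains can overlap in the FPU. */
        double dot_unrolled(const double *a, const double *b, size_t n) {
            double s0 = 0.0, s1 = 0.0;
            size_t i = 0;
            for (; i + 1 < n; i += 2) {
                s0 += a[i]     * b[i];
                s1 += a[i + 1] * b[i + 1];
            }
            if (i < n)
                s0 += a[i] * b[i];          /* odd-length tail */
            return s0 + s1;
        }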
  • by LWATCDR ( 28044 ) on Tuesday August 07, 2007 @06:57PM (#20149311) Homepage Journal
    Get a Core2Duo or one of the new low-power AMDs. Then just find a modern video card that is roughly the speed of a last-generation card.
    If you want super high performance and super low power ... Not going to happen. They will always have the option to pump up the speed by pumping up the watts.
    Top of the line will have high power draw.
    You have low-power options that are pretty dang fast. The trade-off is just up to you.
  • by MBCook ( 132727 ) <foobarsoft@foobarsoft.com> on Tuesday August 07, 2007 @06:59PM (#20149349) Homepage

    I see a 1 GHz T1 doing quite well compared to a 2.4 GHz Opteron and a 3 GHz Xeon. Things have improved on the Intel front, but the T2 should do quite well for the workloads it is designed for. Not only does it have more threads (and I think a better memory controller), but now it has one FPU per core instead of one per chip. That means 8x as many FPUs. That was the real weak point, and now it has been addressed.

    I can't wait to see benchmarks of this chip. It is far more interesting than "the same chip from 3 years ago, now 0.3 GHz faster" or "now with one more micro-op fuser and a 2% better branch decoder."

  • by AvitarX ( 172628 ) <me&brandywinehundred,org> on Tuesday August 07, 2007 @07:05PM (#20149419) Journal
    I am actually hoping that AMD or Intel decide that there is useful technology they can use in their own chips.

    Especially AMD, who needs whatever they can get at the moment. It is really far-fetched, but possible that we'll see AMD respond with a GPL chip that uses the parts of Sun's tech they find useful. If they can get ahead of Intel for another generation or two it could be worth it to them.
  • by fm6 ( 162816 ) on Tuesday August 07, 2007 @07:08PM (#20149453) Homepage Journal
    I work at Sun (documenting x86 systems, as it happens) and I think you're really oversimplifying our business strategy. Just because we're doing x86 doesn't mean we're abandoning SPARC. Indeed, I see a lot of work going on with SPARC-based products. You might consider this a bad idea. (For obvious reasons, I can't possibly comment.) But it's the current business plan, and as long as that's the case, SPARC is not abandonware.
  • Re:Various options. (Score:5, Interesting)

    by jd ( 1658 ) <imipak@ y a hoo.com> on Tuesday August 07, 2007 @07:26PM (#20149693) Homepage Journal
    Generally, you have a library of routines tuned to different ranges of conditions, optimized by actually running them at different settings. ATLAS does this, for example, as do a number of other optimized libraries. However, you're absolutely right that modern cores are very sensitive to a range of conditions. Lookup/interpolation units are obviously not going to respond in a fixed interval; it will depend on which point you hit. Does the FPU have enough internal memory to avoid swapping in and out of core during calculations? If you re-order operations, can you squeeze better performance out of the L1 and L2 caches? Is a composite instruction faster or slower than executing the individual opcodes that would produce the same result?

    I don't know of anyone who has gone to the gate level to tune software - I've never found it necessary to go beyond a high-level definition of the processor, the sizes/speeds of the caches, the lanes between the segments, the length of each pipeline segment and other such information that can be basically listed. However, such information will not reveal unintended features (distinguished from bugs by being useful) and won't expose every possible shortcut.

    HPC is fun, though I agree that modern processors are counter-intuitive. They can do some seriously weird things at times, which is why CPUburn is such an interesting program. If only the developers still maintained it. :( A CPU that can self-destruct performing legal, documented operations is a buggy CPU. That goes for any other hardware, too.
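    One concrete instance of the cache question above is loop blocking. A sketch, assuming a hypothetical 512x512 matrix multiply: tiles of the operands are reused while they are resident in L1/L2. The block size here is a guess; libraries like ATLAS pick it empirically per machine.

        #include <stddef.h>

        enum { N = 512, BLK = 64 };   /* problem and block size: illustrative values */

        /* C must be zero-initialized by the caller. */
        void matmul_blocked(const double A[N][N], const double B[N][N],
                            double C[N][N]) {
            for (size_t ii = 0; ii < N; ii += BLK)
                for (size_t kk = 0; kk < N; kk += BLK)
                    for (size_t jj = 0; jj < N; jj += BLK)
                        /* work within one tile that fits in cache */
                        for (size_t i = ii; i < ii + BLK; ++i)
                            for (size_t k = kk; k < kk + BLK; ++k) {
                                double aik = A[i][k];
                                for (size_t j = jj; j < jj + BLK; ++j)
                                    C[i][j] += aik * B[k][j];
                            }
        }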

  • Re:GPL and chips (Score:2, Interesting)

    by mrand ( 147739 ) on Tuesday August 07, 2007 @09:52PM (#20151095)

    As for FPGAs... You can get a few ARM7 cores onto a single FPGA that costs less than $10 and those prices are dropping. I have no idea how complex an OpenSPARC is, but I assume it is something equivalent to an ARM9 or so and will fit in a $10-or-so FPGA.

    The hurdles are not technological, but political. Sure, people want free-as-in-beer cores, but they don't want GPL cores that force them to release their design.
    Just go look at the technical specs of the thing:

    http://www.sun.com/aboutsun/pr/2007-08/sunflash.20070807.1.xml [sun.com]

    With specs like that, the OpenSPARC T1 processor will not fit in any FPGA in existence right now, or in the next few years.
    So the hurdle is indeed technical.

          Marc
  • by porkchop_d_clown ( 39923 ) <mwheinz@nOSpAm.me.com> on Tuesday August 07, 2007 @10:13PM (#20151277)
    The posters here seem to be complaining that this is worthless because individuals can't make their own processor chips.

    That's not the point. Here's the point:

    1. Sun's processors are a niche market. People don't use them because they're harder to use than cheap commodity processors from Intel. Why are they harder to use? Because not enough people use them to create the kind of economic ecosystem that drives down the price of using the processors.

    2. All over Asia are chip factories that make low-end embedded devices, RAM chips, and so on. Factories that are owned by companies that don't have the cash on hand to do the R&D to design their own processors to compete with Intel.

    3. By GPL'ing their chip designs, Sun lets all those Asian factories produce chips that perform like Intels but cost even less. This gives people an extra incentive to switch away from Intel and to create the very economic ecosystem the processor needs.

    4. Next, Sun releases enhanced versions of the chip that aren't GPL'ed. Chip consumers can now choose from fast commodity processors or more expensive deluxe models - that are still code compatible.

    And Sun can repeat steps #3 and #4 as often as they like, feeding their previous generation designs to the GPL audience as their newest designs hit the market.
  • by mrchaotica ( 681592 ) * on Tuesday August 07, 2007 @11:46PM (#20152003)

    The T1 excels at large-scale parallel integer operations. It had up to 8 cores and 32 hardware threads per chip. The biggest drawback was that there was one shared, anemic FPU per chip, so if even a relatively small amount of your workload was floating point, performance took a serious dive.

    Hmm... that makes me want a dual-CPU system with one T1 and one Cell. Imagine if they were both HyperTransport-compatible...

  • Re:Various options. (Score:5, Interesting)

    by jd ( 1658 ) <imipak@ y a hoo.com> on Wednesday August 08, 2007 @12:18AM (#20152273) Homepage Journal
    "Should" and "Is" are often quite different. For example, no programmer of the 8086 would be caught dead using the instruction to roll left or right a given number of times. Nononono. It was far, far faster to have one operation for each roll. Division and multiplication on the 8087 was so slow that people even tried developing workarounds in software to get better performance! Multiplying by an integer amount was generally stupid - you were often much better off loading the value into two stack locations then adding repeatedly.

    CISC eventually collapsed precisely because of this. RISC was faster - far faster - without the composite instructions. Hybrids, like the Pentium series, have since developed, where the underlying architecture is RISC and the composite instructions are emulated by being split into much simpler ones. So far, so good, so what? You still have a translation layer. You still have that decomposition. That's not free, you know. It takes time.

    So why do this at all, and not have a pure RISC system? Well, many CPU manufacturers asked the same question. And decided to do exactly that. Have a pure RISC architecture. They generally do the same amount of real work with a fifth of the clockspeed of a CISC/RISC hybrid - so they run cooler and you can pack more into less space.

    Why don't Intel and AMD do this? Oh, they'd love to! The Itanic proved many things, though, one of which is that the 8086-style CISC layer has to remain. The customers have too much legacy software now. Not only are consumers locked into Intel's architecture, so is Intel! There's nothing they can do to escape, unless they make a chip that has some cores on the old design and some on a new one. But who is going to buy a processor that costs more and does less (for now)? Nobody. Thank you.

    This should be the lesson that companies learn from the IT industry (but won't): Too much lock-in locks the company in as well, making necessary changes and corrections impossible. Given enough time and enough failures to change, the company will destroy itself.
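    The multiply workaround mentioned above, sketched in C rather than 8086 assembly (function names invented): a constant multiply replaced by shifts and adds. On an 8086 this was a big win; on a modern core the compiler applies or skips the transformation based on its own cost model.

        #include <stdint.h>

        uint32_t mul10_hw(uint32_t x) { return x * 10; }               /* one hardware multiply */
        uint32_t mul10_sr(uint32_t x) { return (x << 3) + (x << 1); }  /* 8x + 2x = 10x */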

  • Re:Sweet (Score:3, Interesting)

    by rbanffy ( 584143 ) on Wednesday August 08, 2007 @12:27AM (#20152345) Homepage Journal
    I wouldn't say that. On proper hardware, it's fast enough.

    What I would point out is that x86 processors are incredibly crude, crufty and rather antiquated, retaining, even in the 64-bit implementations, features that were used in the lowly 8088/8086. In fact, there was a time when a selling point of the processors was that 8080/8085 assembly code assembled and ran correctly on the 16-bit hardware. I would not be surprised if lots of CP/M software got their first PC-DOS versions through little more than a straight recompile (or reassembly).

    It's a shame we are still using it instead of the much nicer and more modern architectures that came after it.

    You know... there is more to processors than Intel and AMD.
  • by howlingmadhowie ( 943150 ) on Wednesday August 08, 2007 @03:52AM (#20153419)
    well, sun sells everything from workstations upwards. they tend to use their own chips, their own connectors, their own file systems and their own operating systems, all of which are now open-source and so can be freely implemented by the "competition". by open-sourcing their intellectual property (what a wonderful oxymoron), sun is doing correctly what microsoft did wrongly in china. in china, microsoft gave windows away for free. the result? market domination, but a mono-culture where computing goes from a growth market to a replacement market.

    by allowing and encouraging competition and progress, sun is keeping computing a growth market for a long, long time. sun just has to have the intellectual clout to keep their head-start (i can give you the source code, but do you know what to do with it?). it's an interesting, very honest business strategy, and the free-software licenses used will keep it honest.
  • Re:Various options. (Score:5, Interesting)

    by kestasjk ( 933987 ) on Wednesday August 08, 2007 @04:18AM (#20153553) Homepage

    You still have a translation layer. You still have that decomposition. That's not free, you know. It takes time.
    It doesn't really take time, it just takes a longer pipeline and more space on the chip. Micro-ops from one instruction can get executed while instructions that are coming up get broken into micro-ops.

    The main reason this is actually slower is the ordering of instructions. Intel chips have out-of-order execution that lets them run micro-ops from instructions in a different order, one that makes things faster and makes more use of all the parts of the processor.

    If a compiler could do this instead of the processor, by ordering the micro-ops itself, Intel wouldn't need die space for out-of-order execution. The space could be used for more cache or to squeeze more cores in.
    Also the compiler would be able to do better optimization because it has the bigger picture of what's coming up, and it has more time to do the optimization because it doesn't do it on the fly (a hand-scheduled sketch follows this comment).

    They generally do the same amount of real work with a fifth of the clockspeed of a CISC/RISC hybrid - so they run cooler and you can pack more into less space.
    That's a pretty wild exaggeration. (UltraSPARC sure isn't 5x faster than Core 2 Duo, and PPC wasn't 5x faster either, despite what Apple marketing used to want you to think).
    Intel makes excellent processors even if they do have to do CISC-RISC translation, and they still beat any competing RISC processor hands down (except in specialized applications like supercomputers or Sun benchmarks). This isn't because CISC is better than RISC; it's just that the difference isn't nearly as large as you make out, and Intel has a massive R&D budget that offsets any performance decrease and then some.

    If Intel really felt it was necessary to move to a new processor, they would. They talked MS into using Itanium for high-end apps, so I'm sure they could push a transition if they wanted.
    They could include a Rosetta-style software translator for old x86 binaries, and perhaps include an x86 translator on-die (like Itanium 1 did). The reason they don't is that it wouldn't give such a large boost, and would be relatively expensive, when they can get larger speed boosts for less by going for smaller processes and optimizing micro-ops.

    It wouldn't be as big a transition as you make out, and it wouldn't give as big a performance increase either. It would be better if they had gone with RISC, but not that much better.
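    A hand-scheduled sketch of the reordering described above (names invented, and only illustrative, since a C compiler is free to reschedule this itself): the load is started before independent arithmetic so its latency is hidden, which is what an out-of-order core does in hardware and what an EPIC-style compiler does at compile time.

        /* Start the load early, overlap independent work, consume the value last. */
        double scheduled(const double *p, double x, double y) {
            double loaded = p[0];       /* load issues first */
            double t = x * y + x - y;   /* independent work overlaps the load latency */
            return t + loaded;          /* loaded value consumed last */
        }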
