Sun Microsystems Hardware

Sun Turns to Lasers to Speed Up Computer Chips

alphadogg writes to mention that Sun is attempting to move from the typical design of multiple small chips back to a unified single-wafer design. "The company is announcing today a $44 million contract from the Pentagon to explore replacing the wires between computer chips with laser beams. The technology, part of a field of computer science known as silicon photonics, would eradicate the most daunting bottleneck facing today's supercomputer designers: moving information rapidly to solve problems that require hundreds or thousands of processors."
This discussion has been archived. No new comments can be posted.

  • Great idea! (Score:5, Funny)

    by peipas ( 809350 ) on Monday March 24, 2008 @01:47PM (#22847304)
    I assume these systems will be water-cooled so the miniaturized sharks have somewhere to swim.
    • by UdoKeir ( 239957 ) on Monday March 24, 2008 @01:50PM (#22847366)
      To quote Scott McNealy:

      You know, I have one simple request. And that is to have SPARCS with frickin' laser beams attached to their heads!
      • by ndevice ( 304743 )
        Watch for Sun to buy out, or merge with, Analog Devices soon. If they get their lasers going, they could put them on those Analog Devices DSP parts too.
      • and some lazer beans, real loud...
    • by aarku ( 151823 )
      Yarrrrrrrrrrrrrrrrrr I'm sick of shark/laser jokes. No offense personally intended, just to the whole meme.
    • Cypress Semiconductor has already figured this out with their tech. Check out Silicon Light Machines: T. J. Rodgers picked up a former Cypress Semiconductor alumnus in that acquisition, and all Sun needs to do is work with CY.
  • I wonder if the time saved transmitting information via light is offset by the conversion time needed to translate that back into electric signals. On a single board, the distance travelled is on the order of decimeters. On a chip, micrometers. Are the time savings *that* significant? Even between peripherals, the time saved seems negligible.
    • I am not an expert in electricity by any means, but I have a fundamental understanding of it (or so I think). Energy is energy. With no resistance (don't overlook this point), light traveling via laser or via electrons flowing over a wire, the speed would be the same. Now, in reality, there IS resistance... there is always a "friction" or resistance (ohms) when energy is passing over a wire. In a vacuum, a laser will move as fast as energy can possibly travel. At least on paper.
      • Re: (Score:3, Insightful)

        by isomeme ( 177414 )
        Electrons in a superconductor (a material with zero resistance) do not travel at the speed of light.
        • by Gabest ( 852807 )
          I guess both of you meant the electromagnetic wave; the electrons themselves do not move very fast.
        • by bartosek ( 250249 ) on Monday March 24, 2008 @02:36PM (#22848236)
          In fact electrons in your typical electrical wire don't move anywhere near the speed of light.

          http://www.eskimo.com/~billb/miscon/speed.html [eskimo.com]
          • The electricity-water analogy saves the day again: If you have a pipe filled with water you could push on one side of the pipe, and at the other end of the pipe a person would very quickly notice the increase in pressure, as water would start flowing out of his side. This doesn't mean that a molecule of water in the pipe would move far at all.

            (It's actually to do with electric fields, which do travel at the speed of light, but the water analogy works well)
        • Re: (Score:3, Interesting)

          by imgod2u ( 812837 )
          No, but it depends on whether or not the receiver is current-steered or voltage-steered. If it's voltage steered then it's the propagation of the electric field that carries the signal. In which case, it can be near the speed of light.

          Also, future chip-to-chip interconnects seem to be moving towards transmission lines rather than treating circuit paths like bulk interconnects. Wave-pipelining the signal will mean that data transfer rates will not be hindered by the time it takes a voltage swing from tran
      • by ChrisA90278 ( 905188 ) on Monday March 24, 2008 @03:41PM (#22849100)
        When you look at a wire, or a printed trace on a PCB, it is not the resistance that limits how fast you can send a signal. It is inductance and capacitance that act like a low-pass filter. We don't care how fast electrons travel in the wire; what we care about is how fast we can change the voltage in the wire. We send data by changing voltages, not by sending electrons.
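        A rough illustration of that point (the component values here are assumed, not from the article): it's the RC time constant of the line, not electron speed, that sets how fast the voltage can swing.

            # 10-90% rise time of an RC-limited line: t_rise ~= 2.2 * R * C
            R = 50        # ohms: assumed driver + trace resistance
            C = 10e-12    # farads: assumed trace + receiver capacitance
            t_rise = 2.2 * R * C
            print(f"rise time: {t_rise * 1e9:.2f} ns")  # ~1.10 ns, the real speed limit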
        • Re: (Score:3, Interesting)

          by TheRaven64 ( 641858 )
          And when you look at a PCB, it's not just the speed of the signal that determines the time it takes, it's also the distance it travels. Wires on a PCB can only cross by being at different heights (expensive) so it is common to route signals indirectly, which increases their distance quite a lot. When you have 64 wires coming from your RAM chips, and needing to get to your CPU, this sort of thing adds up quickly. Beams of light, in contrast, can cross without interfering with each other.
      • Resistance isn't the problem. It's a few cm of copper.

        The problem is inductance and crosstalk causing interference.
        One solution is to shield every wire in a bus, but it's not really practical. ;)
      • by dwye ( 1127395 )
        > In a vacuum, a laser will move as fast as energy can possibly travel. At least on paper.

        Of course, the light will be going through air, where the speed of light is only 99% of c

        Anyway, the speed of conduction (i.e., signal propagation, as opposed to that of the actual electrons) in copper wire is about 1/3 c, and in a coax about 95% of c.

        For the distances involved, the difference in speed of signal propagation is not that important. OTOH, light gates are supposed to be capable of faster switching than si
      • Very good. The other major issues are interference (e.g. capacitive interaction between lines etc) and the sheer bandwidth -- you can modulate your carrier with plenty of other frequencies (referred to as wavelength in the optical domain and frequency in the audio/radio domain -- go figure).
    • I think the photons strike a really small sort of solar panel where the burst of light turns instantly into a burst of electricity. So there's no digital translation by a chip necessary. Of course you lose a lot of power converting it like that cuz let's say the solar sensor is 50% energy efficient, well you have to use 2x the electricity in the first place to get the desired 1x electricity at the end. So these chips are gonna be fast but they'll suck up energy faster than me eating 50% Walgreens Cocoa P
      • Re: (Score:2, Interesting)

        You're on the right track, but you're not quite there. Solar panels are more or less arrays of photodiodes. AFAIK most fiber systems use PIN photodiodes to convert the light intensity over a specific band of wavelengths in a fiber to electrical current. Note that I said current, not voltage. Typically a transimpedance amplifier and some kind of comparator circuit is then used to measure the intensity of the signal. The PIN diodes can convert very small quantities of light to very small currents, and tra
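        To put rough numbers on that chain (all values are assumed for illustration, not from the article): the diode's responsivity converts optical power to current, and the transimpedance amplifier's feedback resistor converts that current to a voltage the comparator can judge.

            # Photodiode + transimpedance amplifier, with assumed typical values
            P_optical = 10e-6    # watts of light reaching the diode (assumed)
            responsivity = 0.9   # A/W, plausible for an InGaAs PIN diode (assumed)
            R_feedback = 10e3    # ohms, TIA feedback resistor (assumed)
            I_photo = responsivity * P_optical   # 9 uA of photocurrent
            V_out = I_photo * R_feedback         # 90 mV at the comparator input
            print(f"{I_photo * 1e6:.1f} uA -> {V_out * 1e3:.0f} mV")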
    • by JustinOpinion ( 1246824 ) on Monday March 24, 2008 @02:52PM (#22848486)
      The article doesn't make it clear whether using optical communications is intended to reduce latency or increase bandwidth.

      With respect to latency: the electrical signals travel at ~30% the speed of light, whereas the optical signals travel at ~70% the speed of light (it depends on refractive index, etc.). Over the distances we're talking about (as you said, mm to dm), that's only a fraction of a nanosecond of delay savings [google.com] (see the back-of-envelope at the end of this comment). This is on the order of a modern computer's switching time [google.com]. All this complexity to get rid of one or two processor cycles of latency?

      I suspect instead they are looking to increase bandwidth. An optical fiber can carry very high data rates. Moreover a single physical fiber can carry multiple simultaneous channels (e.g. different wavelengths of light). So the intention may instead be to create high-bandwidth links between various processors. Using on-chip lasers can make the entire assembly smaller and faster than the equivalent for electrical wires.

      Really what they want, I think, is to implement the same kind of high-speed optical switching we use for transcontinental fiber-optics into a single computer or computer cluster. If you can put all the switching and multiplexing components directly onto the silicon chips, then you can have the best of both worlds: well-established silicon microchips that interface directly into well-understood high-speed optical switching systems.
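      As promised above, a minimal back-of-envelope for the latency point (the 10 cm link length is my assumption, not from the article):

          # Latency saved by going optical over an assumed 10 cm chip-to-chip link
          c = 3e8                        # m/s, speed of light in vacuum
          d = 0.10                       # meters, assumed link length
          t_electrical = d / (0.30 * c)  # signal at ~30% of c on a PCB trace
          t_optical = d / (0.70 * c)     # signal at ~70% of c in a waveguide
          print(f"saved: {(t_electrical - t_optical) * 1e9:.2f} ns")  # ~0.63 ns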
      • Not to make bigger chips, but to solve the interconnect problem when you use a lot of small chips in a big package.

        Although, even on-chip, at 1 cm^2 and above, optical conversion might be able to beat the reactance+buffering on a channel that crosses the whole chip, especially when a single physical channel might be able to carry 64 logical channels.

        It's not a new idea, it's just one that needs to be revisited from time to time, to see if the optical tech is up to the job yet.
      • Re: (Score:2, Insightful)

        by Anonymous Coward
        This idea is absolutely correct. It is all about bandwidth. If you have several chips on the same board and want to send data between them, you either use board traces, or you build a custom package, but either way you have to use metal and you hit a wall. Even if you cover the entire surface of your chip in solder bumps you will never get as much bandwidth as you would like.

        Think about where the bottlenecks are in your computer... memory and IO. You want a faster supercomputer, well you need more processor
      • Cross-chip, it's latency. Timing signals take too long to cross the chip.

        Inter-chip, it's probably a little of both. Bandwidth in some cases, and timing for complex circuits in other cases.
      • Since the article quotes a bandwidth, "billions of bits of data a second," rather than a latency, I think it's fairly obvious that Sun is attempting to increase bandwidth between sections of the processor.
    • Re: (Score:3, Insightful)

      by arjay-tea ( 471877 )
      The big advantage is not so much transit time as parallelization. Many frequencies of light can share the same medium without interfering with each other. Imagine many processors and memory chips streaming data to each other simultaneously, over the same backplane.
    • by rbanffy ( 584143 ) on Monday March 24, 2008 @03:03PM (#22848646) Homepage Journal
      I don't think it's about the time it takes to transfer a single bit, but the amount of bits that can be transmitted at once with light rather than wires. If we can talk line-of-sight transmission between boards, it's easy to line up an array of about a million emitters with an array of a million detectors and send back and forth the same amount of data that would otherwise need a couple thousand wires (taking translation times into account).
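      To make the parallelism argument concrete (the per-channel rates below are invented for illustration): the win comes from channel count, not from any one beam being faster.

          # Aggregate bandwidth: a million beams vs. a couple thousand wires
          emitters = 1_000_000   # free-space optical channels (the figure above)
          beam_rate_bps = 1e9    # assume 1 Gb/s per beam (hypothetical)
          wires = 2_000          # electrical traces for comparison
          wire_rate_bps = 1e10   # assume 10 Gb/s per trace (hypothetical)
          print(f"optical:    {emitters * beam_rate_bps / 1e12:,.0f} Tb/s")
          print(f"electrical: {wires * wire_rate_bps / 1e12:,.0f} Tb/s")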

      Sun is a very entertaining company to watch. Even when their gizmos never end up in products, they are always cool.
    • Re: (Score:3, Interesting)

      by mikael ( 484 )
      There are several major issues:

      The first is the size of the packaging of the chip: the actual silicon might occupy only a quarter of the whole unit. All that extra space is just used to manage the 500+ copper connections between the silicon and the rest of the circuit board. [intel.com]

      The second problem is that as the clock speed of these connections becomes faster, synchronisation becomes a problem. While CPUs are running at GHz frequencies, the system bus is still running in the hundreds of
    • Then again, I'd take the photoelectric effect over heat any day.
  • Commentary on this? (Score:2, Interesting)

    by Anonymous Coward
    Commentary on this, from an actual EE, not the pretend ones on Slashdot (you know who you are)?

    Sounds sweet, but is it expensive in terms of energy/time/money? Does EMI become less of a problem on circuit boards? Will this make designers' lives easier?
    • by ergo98 ( 9391 )

      Commentary on this, from an actual EE, not the pretend ones on Slashdot (you know who you are)?

      Just look up any of the countless other "use light instead of wires" stories that have been widely reported over the past decade(s). I'm not saying it's not going to happen — I'm sure at some point it will — but barring additional information, preferably actual accomplishments, this is just more of the same.
      • Re: (Score:2, Funny)

        by ergo98 ( 9391 )
        To get you started, here's a search for you [google.ca]. It looks like IBM is only promising a 100-fold performance increase, but Sun got the contract (despite the possibly inaccurate story, it doesn't sound like they actually figured out anything thus far, besides "how to get some government loot") by promising a 1000x increase.

        Hey DARPA — I'll give you a 1,000,000x improvement! Email and I'll tell you where to send the cash.
    • Sounds sweet, but is it expensive in terms of energy/time/money?

      The article claims it will reduce energy usage. It's much faster, so it saves time. And because time is money, it also saves money. I'm going to make a wild guess that it'll be more expensive to manufacture, because wires and solder are very, very easy to put down.

      Does EMI become less of a problem on circuit boards?

      Yes, because you're no longer trying to send lots of high-frequency signals through arrays of tiny antennas.

      Will this make designers' lives easier?

      That would probably depend on what they're designing.

    • Technically possible, financially infeasible.
      What else is new?

      It's been so long since Sun was relevant, and this story changes nothing.
  • Why not... (Score:2, Interesting)

    by weaponx86 ( 1112757 )
    If the "lasers" require an electrical signal to be generated, isn't this just adding a step? Also you need an optical sensor somewhere which converts the light back into an electrical signal, no? Sounds like building a tunnel where there is already a bridge.
    • by sdpuppy ( 898535 )
      In that case the light could be used:

      to connect parts in the chip that are furthest away

      or

      some of the computing / logic is performed in the light domain before it is translated back to the electron domain.

      • by sdpuppy ( 898535 )
          Also, light behaves in a non-linear fashion, which opens up possibilities for speeding up certain types of calculations (logs, etc.)
    • A really high bridge (Score:5, Informative)

      by Pinky's Brain ( 1158667 ) on Monday March 24, 2008 @02:21PM (#22847956)
      On chip they are pumping the signal over traces with mm-range lengths and um-range widths; off chip it's over traces with dm-range lengths and mm-range widths. Timing and power consumption are hard enough problems on chip; off chip they become much harder ... not to mention that most of the power consumed either goes into EM or gets coupled into other signals.

      Serial connections help with the timing, but do diddly for power and noise. That's where optical comes in.
    • Re:Why not... (Score:4, Insightful)

      by JustinOpinion ( 1246824 ) on Monday March 24, 2008 @02:37PM (#22848252)
      To use the beloved transportation analogy: it's like moving your cargo off of trucks and onto a high-speed train. Yes it takes time to move cargo, but it's worth it if the time savings of the high-speed train are big enough (for long enough distances, the savings can be significant).

      In this case, there may be a delay associated with signal processing, but if the optical transmission is sufficiently faster than an equivalent electrical one, then it's worth it. Considering that electrical signals themselves need to undergo various kinds of switching and processing anyway (data written or read from a bus), I don't know that converting to laser signals will add much of a delay.
  • by fahrbot-bot ( 874524 ) on Monday March 24, 2008 @01:56PM (#22847480)
    From TFA: Each chip would be able to communicate directly with every other chip via a beam of laser that could carry billions of bits of data a second.

    Do not look at chip with remaining good eye.

  • I wonder what will happen to their investments if someone shakes the table, or knocks the computer on its side, or even if there is an earthquake.

    What happens when the computer gets dusty, or mold starts to grow on one of the lenses?

    How will dust be solved? Water? Bugs (of the insect variety)?

    • Re: (Score:3, Funny)

      by Belial6 ( 794905 )
      Don't worry, someone will ask it a question that is a paradox before then, and the whole thing will destroy itself with sparks and slowed audio.
    • Re: (Score:3, Informative)

      by Kadin2048 ( 468275 )
      I don't know if this is a serious question or not, but one assumes that the lasers will operate in completely sealed environments (e.g. inside an IC package) or over optical fibers if they need to traverse free space. I think the intra-package situation is probably more common; you could communicate from one core to another on the same die using a laser rather than a wired interconnect and hopefully have fewer interference/RF/capacitance issues to deal with. This also makes sense given what I know about mo
      • Free space would be quite a pain. You'd need to collimate the beam, worry about acceptance angles, mode field diameters, etc.
    • by vertinox ( 846076 ) on Monday March 24, 2008 @04:03PM (#22849340)
      How will dust be solved?

      Why don't you crack open your 3.5" hard disk drive and find out why dust doesn't bother those sensitive platters? ;)
      • I agree that dust will not be a problem, as the pathways through which the light signal would travel would probably be sealed in some way and I can't even begin to guess why the GP was concerned about a computer being knocked on its side. However, I would imagine that since the pickups for a hard drive are magnetic, dust would not make much of a difference. Now I don't know how big the gap is between a head and the platter, so I guess if this was close enough dust could scratch a platter? But our CD driv
        • The head is close enough to the platter that it would hit a piece of dust. In fact, the head is *so* close to the platter, that it would hit a fingerprint on the surface. It floats on a cushion of air created by the high speed of the spinning platter.

          Scratching the surface renders that part of the surface unusable, but also creates pieces of shrapnel which cause more problems.

          I think it's absolutely incredible that hard drives work at all.

  • Will these be in the visible or infrared range? Will the laser beams terminate or leak outside the unpackaged chip? I ask because engineers are constantly looking at decapped chips or doing various types of testing under the microscope of live circuitry. I'd hate to get hit by a laser beam through a microscope.
  • Sorry, hadda be said. :)
  • by Anonymous Coward
    I didn't read TFA, but I did read the headline...
    So you are telling me that the star at the center of our solar system (Sol, or as some people call it, "the Sun") is somehow changing its rate of rotation/turning to track lasers, and the side effect of this turning is to increase the production speed of inedible chips made out of computers?
    No wonder I don't read TFA... the headline is just plain silly.
    • Re: (Score:1, Flamebait)

      You twat. Stop trying to be a pedantic prick. It says "Sun", the shortened name of a company called Sun Microsystems that's typically used in conversation by a large number of people who don't have shit for brains. Let's not forget the logo displayed to the side of the article summary.

      You might not be such a dumb fuck if the title said "The sun".
      • Pot. Kettle. Black.

        I agree with the mods on GP (for once). It was an attempt at humor and was properly labelled as such.

      • If it said "The Sun", I would have been worried about the British tabloid... :)
  • Comment removed based on user account deletion
    • You weren't the only one to confuse Sol with the computer company.

      My first thought was "The sun is lasing? Cool!"

      My second thought was "space sharks! Way Cool!!!"
  • Remember the article not long ago about micro transmitters/receivers on a chip?

    Considering no special connections are needed for wireless, unlike light, which would likely need fiber or line of sight, chips equipped with that mini wireless tech would, in theory, only need to be powered and placed in proximity to each other.

    Not as sexy as SPARCs with friggin' lasers, but certainly a plus from a computer design perspective.
    • by imgod2u ( 812837 )
      Even a directed wireless transmitter through a waveguide only manages to send a fraction of its signal power over to the receiver. There's also the problem that it's much more susceptible to interference; it drains a lot of power because RF signals are not easy to generate at high speeds; and the extra logic required, plus the fact that the bandwidth is just nowhere near what traditional wired links are capable of, might not make it all that attractive.
        Even a directed wireless transmitter through a waveguide only manages to send a fraction of its signal power over to the receiver. There's also the problem that it's much more susceptible to interference; it drains a lot of power because RF signals are not easy to generate at high speeds; and the extra logic required, plus the fact that the bandwidth is just nowhere near what traditional wired links are capable of, might not make it all that attractive.

        Exactly. Hence the reason 802.x wireless is much slower than its wired counterpart, or why fiber optics are used for high-speed networking over great distances (like between North America and Europe), as opposed to satellites.

      • Agreed on all your points, although I don't think getting 100% of the signal power to the receiver is an issue. And maybe wired bandwidth is greater, but if you only need 1 gigabit, who cares if fiber can do terabit speeds?

        Sun's research is aimed at supercomputers... getting 1024 processors to all talk to each other. Simultaneously. That's a lot of cross connections, and some heavy duty switching gear. But as long as any two processors can switch to the same frequency, they could communicate. Meaning 512 pro
  • by florescent_beige ( 608235 ) on Monday March 24, 2008 @02:49PM (#22848416) Journal

    "This is a high-risk program," said Ron Ho, a researcher at Sun Laboratories who is one of the leaders of the effort. "We expect a 50 percent chance of failure, but if we win we can have as much as a thousand times increase in performance."

    Whenever anyone says there is a 50% chance of something happening they really mean "I have no idea. No idea at all. I'm guessing."

    In probability theory, "p" has a specific meaning which is roughly stated as "the ratio of the total number of positive outcomes to the total number of possible outcomes in a population". So for the number of 50% to be right, it must be known that if this research was repeated a million times, 500,000 times there would be success and 500,000 times there would be failure. But this makes no sense because the thing being measured is not a stochastic property. It is simply an unknown thing.

    What is probably vaguely intended when a number like this is given is that if you took all the things in the history of the world that "felt" like this in the beginning, half of them will have worked out and half will have not.

    How on earth could any mortal human know that?

    But it gets even more complicated. One cannot state a probability like this without stating how confident one is in the estimate of the number. So really a person should say the probability of success of this endeavor is between 45% and 55%, and this estimate will be correct 19 times out of 20.
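    To see how demanding that "45% to 55%, 19 times out of 20" claim really is, here's the standard sample-size arithmetic (my own illustration, using the usual normal approximation):

        # Projects needed to pin a 95% confidence interval to +/-5% around p = 0.5
        from math import ceil
        z = 1.96            # 95% confidence, i.e. "19 times out of 20"
        p = 0.5             # the estimate being defended
        half_width = 0.05   # +/-5 percentage points
        n = ceil((z / half_width) ** 2 * p * (1 - p))
        print(n)            # 385: you'd need ~385 comparable past projects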

    With that as background here is what I humbly suggest 50% really means: it means "I have no idea how to quantify the error of this estimate. It doesn't matter what the estimate is because the error band could possibly stretch between 0% and 100%. So I'll split the difference and call it 50%". But that is wrong, the statement should be "I estimate the probability of success to be between 0% and 100%".

    But nobody does that because it makes them look stupid.

    So whenever anyone says there is a 50% chance, or a 50/50 probability of something happening, they might as well talk in made-up Klingon words, the information content of their statement will be equivalent.

    • Re: (Score:3, Insightful)

      Absolutely. Personally, I do the same thing: if someone asks me about the likelihood of something happening about which I have no clue, I tell them flat out "50/50. Here, let me flip a coin." I expect the same thing to have happened here as well.

      Now, someone please mod me redundant. Executive summaries should be discouraged wherever possible.
    • by QuantumFTL ( 197300 ) on Monday March 24, 2008 @04:08PM (#22849390)

      In probability theory, "p" has a specific meaning which is roughly stated as "the ratio of the total number of positive outcomes to the total number of possible outcomes in a population". So for the number of 50% to be right, it must be known that if this research was repeated a million times, 500,000 times there would be success and 500,000 times there would be failure. But this makes no sense because the thing being measured is not a stochastic property. It is simply an unknown thing.
      This is true, if by "probability theory" you mean "Frequentism [wikipedia.org]". Frequentism is nice, for those cases where you are dealing with nice, neat ensembles. For a lot of real world situations which require probabilistic reasoning, there are no ensembles, only unique events which require prediction. For that, we often use Bayesian Probability [wikipedia.org].

      Take the assertion "I'd say there's a 10% chance that there was once life on Mars." Well, from a Frequentist point of view, that's complete bullshit. Either we will find evidence of life, or we won't - either the probability is 100% or 0%. There's only one Mars out there.

      In order to deal with this limitation, Bayesian Probability Theory was born. In it, probabilities reflect degrees of belief, rather than frequencies of occurrence. Despite meaning something quite different, Bayesian probabilities still obey the laws of probability (they sum/integrate to one, etc.), thus making them mathematically compatible (and thus leading to confusion by those that don't study probability theory carefully). Of course there are issues with paradoxes and the fact that prior distributions must be assumed rather than empirically gathered, but that does not prevent it from being very useful for spam filtering [wikipedia.org], machine vision [visionbib.com] and adaptive software [norvig.com].
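      For the curious, the mechanical core of a Bayesian degree-of-belief update looks like this (the numbers are invented purely for illustration):

          # Bayes' rule: revise a prior belief given one piece of evidence
          prior = 0.5            # initial degree of belief in success
          p_e_given_s = 0.8      # P(evidence | success), assumed
          p_e_given_f = 0.3      # P(evidence | failure), assumed
          posterior = (p_e_given_s * prior) / (
              p_e_given_s * prior + p_e_given_f * (1 - prior))
          print(f"{posterior:.2f}")  # 0.73: the evidence shifts belief upward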

      As someone who professionally uses statistics to model the future performance of a very large number of high-budget projects at a major U.S. defense contractor, I can assure you that his statement was much more in line with the Bayesian interpretation of probability than the Frequentist view you implicitly assume.

      Sorry for the rant, I just get very annoyed when people assume that Frequentism is all there is to statistics - Frequentism is just the beginning.

      But it gets even more complicated. One cannot state a probability like this without stating how confident one is in the estimate of the number.
      Of course! But where did the confidence interval come from, and how much confidence do we have in it? It's important to provide a meta-confidence score, so that we know how much to trust it! That too, however, should be suspect - indeed even moreso because it is a more complex quantity to measure! So a meta-2 confidence score is in order, for any serious statistician... But why stop there?!

      With that as background here is what I humbly suggest 50% really means: it means "I have no idea how to quantify the error of this estimate. It doesn't matter what the estimate is because the error band could possibly stretch between 0% and 100%. So I'll split the difference and call it 50%".
      So, if someone does not give an error bound on an estimate, we should assume that the error is maximal?

      So whenever anyone says there is a 50% chance, or a 50/50 probability of something happening, they might as well talk in made-up Klingon words, the information content of their statement will be equivalent.
      Or, it's entirely possible that that 50% number is somewhat accurate, because they know something about the subject that you do not.
      • Just my luck huh, here I go looking all smart then some uber Bayesian has to come along and spoil my party.

        Anyway, with little expectation of anything good coming from this (for my ego I mean), here's why I don't usually think in Bayesian terms. Correct me if I'm wrong which I probably am.

        While I have heard Bayesians talk about probability not meaning the same thing as "normal", I've never seen any Bayes p which means anything other than a relative likelihood that I'm familiar with. If there is a bag

        • Just my luck huh, here I go looking all smart then some uber Bayesian has to come along and spoil my party.

          I'm hardly a Bayesian in spirit, but it's useful enough when treated properly. I'm actually much more likely to say "Bayesian statistics is absolute bollocks - which just so happens to work very reliably in many cases". This is due to the well known paradoxes with priors, and issues associated with the certainty of beliefs (which you referenced). I prefer Dempster Shafer evidence combination when

      • My mod points expired recently, so could someone mod this up? I do machine learning and computer vision with Bayesian statistics, and the above poster is spot-on. The GP sounds like a frequentist trying to regain control over statistical vocabulary.

        FWIW, the frequentists can keep "confidence interval". We don't want to sully our theoretically sound vocabulary with its filthy connotations. :p But "probability" is something we'll lay uncompromising claim to, however much detractors say that subjective probabi
        • This is so fascinating. I had no idea there could possibly be two-way advocacy about this. If I had known, I would have worn my asbestos underwear :)

          Honest question, Bayesian-wise, how could/would one interpret the 50% number in the article? Is there an intuitive interpretation? Is it quantitative or qualitative?
        • I think there are a lot of people who are not really taught Bayesian statistics, and so they are limited to thinking of probability solely in terms of frequentist terminology.

          To be fair, many things about Bayesian statistics are odd, and possibly even unsound (yay prior distributions we just made up!) The confidence interval thing can get a bit ridiculous, but I prefer Dempster Shafer theory for the precise reason that I emphatically DO NOT want to treat all evidence with equal weight.

          Interestingly enough
  • by renoX ( 11677 ) on Monday March 24, 2008 @02:51PM (#22848468)
    If I understood correctly, this is not about single-wafer design but exactly the opposite: regaining the speed of 'single wafer design' with multiple chips by using optical communications between chips to increase the inter-chip bandwidth (normally intra-chip bandwidth is much higher than inter-chip bandwidth, so this is a bottleneck).

  • by spage ( 73271 ) <spage&skierpage,com> on Monday March 24, 2008 @03:25PM (#22848912)

    Why, why, why do people submit second-hand links to Slashdot?

    The byline of the Seattle Times story is "John Markoff New York Times". 5 seconds with Google's site:nytimes.com reveals the original story [nytimes.com] with better explanation and more quotes from Sun personnel.

    • by dwye ( 1127395 )
      > Why, why, why do people submit second-hand links to Slashdot?

      Because the NY Times used to require registration to read their articles?

      Of course, the article still makes some bonehead errors. They do not cut wafers of identical chips apart to eliminate the few failures in a circuit, but because we want a hundred CPU chips more than we want a single four-inch processor with about 100x4 or x8 cores. You do not need that many processors to do your own taxes (unless .Net is far more wast

  • Interesting, so what they want to do is to be able to create larger multi-chip packages where the chips are connected to each other optically rather than with the traditional wire-bonds on a SiP. I'm honestly not seeing the advantage here in terms of speed. A single LVDS pair across a chip pad and wire-bond can already carry "tens of billions of bits per second" of bandwidth. Many can be put in parallel. I can see this being an advantage if they've discovered some ultra-efficient electro-optical convers
  • Intel's [betanews.com] already been working on this for a few years. For Sun's sake, they'd better hope that Intel didn't file for a patent on this already, otherwise this could get messy.
  • Instead of goofing around with connections, why not build a chip occupying the entire 300mm wafer? Any local manufacturing problem would disable just one specific core out of the hundreds of cores on the wafer-chip. Isn't it done already? Cell, AMD tri-core, old Celerons... Even the memory could be on the wafer, or at worst, one wafer for the cores and one for the memory, vertically stacked with through-silicon vias.
    • by IdeaMan ( 216340 )
      Cooling?
      Oh and how long are those vias? Will you be trying to get heat to flow through the memory wafer?
      • by robi5 ( 1261542 )
        This needs one large cooler instead of hundreds of smaller ones. You can do something useful with the concentrated heat, for example provide hot water. Better than letting it go useless. But I think a good tradeoff would be to lower the frequency an order of magnitude, and use the massive parallelism - hundreds or thousands of cores on a die. Better, make it fully three-dimensional for a massive explosion of processing units. The brain is large and is 3D and still does not get really hot.
    • Because we don't have the fabrication technology to expose a whole wafer at once. Since we're essentially shining light on the surface, the wider we make the beam, the softer the features get, especially towards the edges (because the light hits the edge at a different angle).
      There's a sweet spot of size-vs-yields. Trying to make bigger chips requires multiple exposures for the same die, and getting the exposures to line up properly is extremely tedious.

      That's why it's easier to make lots of small chips
      • by robi5 ( 1261542 )
        Maybe several hundred connection lines between any two "cores" or memory banks would do, in which case the edge traces to align with can be orders of magnitude wider, making the alignment possible.
  • I'm not very well versed in chip design, as I only took one class a few years ago. Could someone please confirm or disprove the following hypothesis?

    Assumption: The energy dissipated in a chip generates heat, which could be avoided by the use of lasers, resulting in lower heat generation and energy consumption.

    I'm fully aware that my speculative hypothesis may be completely unfounded, especially given that not much heat should be dissipated when electricity flows through a superconductor. If someone w
    • Yes, that's true, but that's not the focus of the article. The article is about replacing electrical lines on the PCBs. The biggest bottleneck in a PC is the front side bus. This is the connection b/w the memory, the HDDs, and the CPU. If you could switch these types of connections from electrical to optical then you could relieve the communication bottleneck b/w the chips. The next step would be faster RAM and then faster HDDs, next a faster CPU, then a faster bus, and the circle continues.
  • The area of photonics is largely related to physics and electrical engineering, not so much to computer science, which deals with information processing and computations. As someone who works in the area of silicon photonics, I find this some pretty exciting news.
  • by saccade.com ( 771661 ) on Monday March 24, 2008 @05:13PM (#22850124) Homepage Journal
    It was quite the smoking crater [wikipedia.org] last time around. Maybe technology has improved since then...
  • I have to wonder, if Sun is pursuing Defense contracts, does Sun know where its business is headed? Usually companies do the Defense contracts when they are small, need money, and don't really have a product yet. Since Sun made $740 million last year, you'd think they could afford to spend $40 million on this (probably over several years), and then they'd get to keep all the knowledge to themselves (including their R&D direction). So I can only assume that either Sun thinks this has too small a chan
    • by dwye ( 1127395 )
      > Usually companies do the Defense contracts when they are small, need money, and don't really have a product yet.

      Like AT&T, the entire aircraft industry, IBM, etc.?

      This is NOT just an SBIR grant.

      > Maybe it's just because I'm not in the server space, but it's unclear to me why exactly I would buy a Sun machine.

      Yes, I would not recommend them for cheap laptops, or to give to your grandmother to handle her email needs. That would be a bit of overkill. OTOH, if you have a problem where

  • "The company is announcing today a $44 million contract from the Pentagon to explore replacing the wires between computer chips with laser beams.

    I hope this means that servers with the new chips will not actually cost 2-4x as much as an equivalent Dell server. IMHO, Sun needs to do something about the cost of their servers. I try to only use them when required because of their cost and I'm told the inflated price is due to the low yields of the SPARCs.
