Sun Microsystems Hardware

Sun Turns to Lasers to Speed Up Computer Chips

alphadogg writes to mention that Sun is attempting to move from the typical design of multiple small chips back to a unified single-wafer design. "The company is announcing today a $44 million contract from the Pentagon to explore replacing the wires between computer chips with laser beams. The technology, part of a field of computer science known as silicon photonics, would eradicate the most daunting bottleneck facing today's supercomputer designers: moving information rapidly to solve problems that require hundreds or thousands of processors."


  • I wonder if the time saved transmitting information via light is offset by the conversion time needed to translate it back into electrical signals. On a single board, the distance travelled is on the order of decimeters; on a chip, micrometers. Are the time savings *that* significant? Even between peripherals, the time saved seems negligible.
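    A minimal back-of-envelope sketch of that trade-off (in Python), with every number assumed for illustration: the trace velocity factor, the effective speed along a guided optical path, and a guessed per-hop conversion cost.

        C = 3.0e8  # speed of light in vacuum, m/s

        def propagation_delay(distance_m, velocity_factor):
            # Time for a signal to cover distance_m at velocity_factor * c.
            return distance_m / (velocity_factor * C)

        distance = 0.1  # a 10 cm, board-scale run

        electrical = propagation_delay(distance, 0.5)   # ~0.5c is a common figure for PCB traces
        optical    = propagation_delay(distance, 0.7)   # guided light, assumed ~0.7c
        conversion = 2 * 50e-12                         # assume 50 ps per electro-optical hop, both ends

        print(f"electrical trace      : {electrical * 1e12:.0f} ps")
        print(f"optical path          : {optical * 1e12:.0f} ps")
        print(f"optical + conversions : {(optical + conversion) * 1e12:.0f} ps")

    With these made-up numbers the pure latency win over a board-scale hop is modest once conversion is charged, which is why much of the discussion below turns on bandwidth density rather than single-bit latency.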
  • Commentary on this? (Score:2, Interesting)

    by Anonymous Coward on Monday March 24, 2008 @01:50PM (#22847364)
    Commentary on this, from an actual EE, not the pretend ones on Slashdot (you know who you are)?

    Sounds sweet, but is it expensive in terms of energy/time/money? Does EMI become less of a problem on circuit boards? Will this make designers' lives easier?
  • Why not... (Score:2, Interesting)

    by weaponx86 ( 1112757 ) on Monday March 24, 2008 @01:52PM (#22847408)
    If the "lasers" require an electrical signal to be generated, isn't this just adding a step? Also you need an optical sensor somewhere which converts the light back into an electrical signal, no? Sounds like building a tunnel where there is already a bridge.
  • by florescent_beige ( 608235 ) on Monday March 24, 2008 @02:49PM (#22848416) Journal

    "This is a high-risk program," said Ron Ho, a researcher at Sun Laboratories who is one of the leaders of the effort. "We expect a 50 percent chance of failure, but if we win we can have as much as a thousand times increase in performance."

    Whenever anyone says there is a 50% chance of something happening they really mean "I have no idea. No idea at all. I'm guessing."

    In probability theory, "p" has a specific meaning, roughly "the ratio of the number of positive outcomes to the total number of possible outcomes in a population". So for the figure of 50% to be right, it would have to be known that if this research were repeated a million times, it would succeed 500,000 times and fail 500,000 times. But that makes no sense, because the thing being measured is not a stochastic property. It is simply an unknown thing.

    What is probably vaguely intended when a number like this is given is that if you took all the things in the history of the world that "felt" like this at the beginning, half of them will have worked out and half will not have.

    How on earth could any mortal human know that?

    But it gets even more complicated. One cannot state a probability like this without stating how confident one is in the estimate of the number. So really a person should say that the probability of success of this endeavor is between 45% and 55%, and that this estimate will be correct 19 times out of 20 (a sketch of what such a claim actually requires follows this comment).

    With that as background, here is what I humbly suggest 50% really means: "I have no idea how to quantify the error of this estimate. It doesn't matter what the estimate is, because the error band could stretch from 0% to 100%. So I'll split the difference and call it 50%". But that is wrong; the statement should be "I estimate the probability of success to be between 0% and 100%".

    But nobody does that because it makes them look stupid.

    So whenever anyone says there is a 50% chance, or a 50/50 probability of something happening, they might as well talk in made-up Klingon words; the information content of their statement will be equivalent.
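    A small sketch (in Python) of what the parent's "between 45% and 55%, correct 19 times out of 20" would actually demand, using the ordinary normal-approximation confidence interval for a proportion; the sample sizes below are invented purely to show the scale of evidence needed.

        import math

        def proportion_ci(successes, trials, z=1.96):
            # 95% normal-approximation confidence interval for a proportion.
            p_hat = successes / trials
            half_width = z * math.sqrt(p_hat * (1 - p_hat) / trials)
            return max(0.0, p_hat - half_width), min(1.0, p_hat + half_width)

        # To justify "between 45% and 55%" you would need on the order of
        # 400 comparable past projects, half of which succeeded:
        print(proportion_ci(200, 400))   # roughly (0.45, 0.55)

        # With only a handful of comparable cases, the band is nearly useless:
        print(proportion_ci(2, 4))       # roughly (0.01, 0.99)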

  • by rbanffy ( 584143 ) on Monday March 24, 2008 @03:03PM (#22848646) Homepage Journal
    I don't think it's about the time it takes to transfer a single bit, but the number of bits that can be transmitted at once with light rather than wires. If we're talking line-of-sight transmission between boards, it's easy to line up an array of about a million emitters with an array of a million detectors and move as much data back and forth as a couple of thousand wires could, even taking translation times into account (some rough numbers follow this comment).

    Sun is a very entertaining company to watch. Even when their gizmos never end up in products, they are always cool.
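    A rough aggregate-bandwidth comparison (in Python) in the spirit of the parent; the lane counts and per-lane rates are assumptions for illustration, not figures from Sun.

        def aggregate_bw_gbps(lanes, per_lane_gbps):
            # Total throughput of a parallel link, in Gb/s.
            return lanes * per_lane_gbps

        electrical = aggregate_bw_gbps(lanes=2_000, per_lane_gbps=5)        # a wide copper bus
        optical    = aggregate_bw_gbps(lanes=1_000_000, per_lane_gbps=10)   # dense emitter/detector array

        print(f"electrical bus : {electrical / 1e3:,.0f} Tb/s")
        print(f"optical array  : {optical / 1e3:,.0f} Tb/s")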
  • by imgod2u ( 812837 ) on Monday March 24, 2008 @03:19PM (#22848828) Homepage
    No, but it depends on whether the receiver is current-steered or voltage-steered. If it's voltage-steered, then it's the propagation of the electric field that carries the signal, in which case it can be near the speed of light.

    Also, future chip-to-chip interconnects seem to be moving towards transmission lines rather than treating circuit paths like bulk interconnects. Wave-pipelining the signal means that data transfer rates are not hindered by the time it takes a voltage swing to travel from transmitter to receiver. Latency is still a problem, but I imagine the electro-optical conversion process already adds plenty of that.
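    A tiny sketch (in Python) of the wave-pipelining point, with the line delay and bit time assumed: throughput is set by how often bits are launched, while latency stays at the flight time.

        line_delay_ps = 500.0   # one-way flight time of the link (assumed)
        bit_time_ps   = 100.0   # a new bit launched every 100 ps, i.e. 10 Gb/s per lane

        throughput_gbps = 1e3 / bit_time_ps          # 1 / bit time, converted to Gb/s
        bits_in_flight  = line_delay_ps / bit_time_ps

        print(f"per-lane throughput : {throughput_gbps:.0f} Gb/s")
        print(f"bits in flight      : {bits_in_flight:.0f}")
        print(f"latency (unchanged) : {line_delay_ps:.0f} ps")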
  • by TheRaven64 ( 641858 ) on Monday March 24, 2008 @03:59PM (#22849304) Journal
    And when you look at a PCB, it's not just the speed of the signal that determines the time it takes, it's also the distance it travels. Wires on a PCB can only cross by being at different heights (expensive), so it is common to route signals indirectly, which increases their distance quite a lot. When you have 64 wires coming from your RAM chips that need to get to your CPU, this sort of thing adds up quickly. Beams of light, in contrast, can cross without interfering with each other.
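    A toy illustration (in Python) of that detour effect, with made-up chip positions and an assumed routing-overhead factor for layer changes and escape routing.

        import math

        def euclidean(a, b):
            # Straight-line (line-of-sight) distance.
            return math.hypot(b[0] - a[0], b[1] - a[1])

        def manhattan(a, b):
            # Axis-aligned routing distance, typical of PCB traces.
            return abs(b[0] - a[0]) + abs(b[1] - a[1])

        ram = (0.0, 0.0)
        cpu = (6.0, 4.0)    # positions in cm (assumed)
        detour = 1.5        # assumed extra length from vias, escape routing, length matching

        print(f"line-of-sight path : {euclidean(ram, cpu):.1f} cm")
        print(f"routed trace       : {manhattan(ram, cpu) * detour:.1f} cm  (times 64 data lines)")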
  • by QuantumFTL ( 197300 ) on Monday March 24, 2008 @04:08PM (#22849390)

    In probability theory, "p" has a specific meaning which is roughly stated as "the ratio of the total number of positive outcomes to the total number of possible outcomes in a population". So for the number of 50% to be right, it must be known that if this research was repeated a million times, 500,000 times there would be success and 500,000 times there would be failure. But this makes no sense because the thing being measured is not a stochastic property. It is simply an unknown thing.
    This is true, if by "probability theory" you mean "Frequentism [wikipedia.org]". Frequentism is nice, for those cases where you are dealing with nice, neat ensembles. For a lot of real world situations which require probabilistic reasoning, there are no ensembles, only unique events which require prediction. For that, we often use Bayesian Probability [wikipedia.org].

    Take the assertion "I'd say there's a 10% chance that there was once life on Mars." Well, from a Frequentist point of view, that's complete bullshit. Either we will find evidence of life, or we won't - either the probability is 100% or 0%. There's only one Mars out there.

    In order to deal with this limitation, Bayesian Probability Theory was born. In it, probabilities reflect degrees of belief rather than frequencies of occurrence. Despite meaning something quite different, Bayesian probabilities still obey the laws of probability (they sum/integrate to one, etc.), making them mathematically compatible (and thus a source of confusion for those who don't study probability theory carefully). Of course there are issues with paradoxes and the fact that prior distributions must be assumed rather than empirically gathered, but that does not prevent it from being very useful for spam filtering [wikipedia.org], machine vision [visionbib.com] and adaptive software [norvig.com] (a toy belief update follows this comment).

    As someone who professionally uses statistics to model the future performance of a very large number of high-budget projects at a major U.S. defense contractor, I can assure you that his statement was much more in line with the Bayesian interpretation of probability than the Frequentist view you implicitly assume.

    Sorry for the rant, I just get very annoyed when people assume that Frequentism is all there is to statistics - Frequentism is just the beginning.

    But it gets even more complicated. One cannot state a probability like this without stating how confident one is in the estimate of the number.
    Of course! But where did the confidence interval come from, and how much confidence do we have in it? It's important to provide a meta-confidence score, so that we know how much to trust it! That too, however, should be suspect - indeed even more so, because it is a more complex quantity to measure! So a meta-2 confidence score is in order for any serious statistician... But why stop there?!

    With that as background here is what I humbly suggest 50% really means: it means "I have no idea how to quantify the error of this estimate. It doesn't matter what the estimate is because the error band could possibly stretch between 0% and 100%. So I'll split the difference and call it 50%".
    So, if someone does not give an error bound on an estimate, we should assume that the error is maximal?

    So whenever anyone says there is a 50% chance, or a 50/50 probability of something happening, they might as well talk in made-up Klingon words, the information content of their statement will be equivalent.
    Or, it's entirely possible that that 50% number is somewhat accurate, because they know something about the subject that you do not.
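    A minimal sketch (in Python) of the Bayesian reading of "50/50" as a degree of belief that moves with evidence; the prior and the likelihoods here are invented for illustration.

        def bayes_update(prior, likelihood_if_true, likelihood_if_false):
            # Posterior P(success | evidence) via Bayes' rule.
            numerator = prior * likelihood_if_true
            return numerator / (numerator + (1 - prior) * likelihood_if_false)

        belief = 0.5   # "50/50" taken as a prior degree of belief, not a long-run frequency

        # Suppose an early prototype milestone is passed, and assume passing it is
        # three times as likely if the approach ultimately works than if it doesn't.
        belief = bayes_update(belief, likelihood_if_true=0.9, likelihood_if_false=0.3)
        print(f"belief after one good milestone: {belief:.2f}")   # 0.75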
  • by mikael ( 484 ) on Monday March 24, 2008 @04:21PM (#22849526)
    There are several major issues:

    The first is the size of the chip's packaging - the actual silicon might occupy only a quarter of the area of the whole unit. All that extra space is just used to manage the 500+ copper connections between the silicon and the rest of the circuit board. [intel.com]

    The second problem is that as the clock speed of these connections becomes faster, synchronisation becomes a problem. While CPUs are running at GHz frequencies, the system bus is still running at hundreds of MHz.

    If the chip could connect to the circuit board through optical connections, then all this could be simplified. You would eliminate the need for all the copper connections while simultaneously raising the external clock speed.
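    Some rough arithmetic (in Python) behind the pin-count point; the target bandwidth, per-pin rate and per-channel optical rate are all assumptions for illustration.

        target_bw_gbps  = 1_000    # 1 Tb/s of off-chip bandwidth (assumed target)

        copper_pin_gbps = 0.8      # per-pin rate on a bus running at a few hundred MHz
        optical_ch_gbps = 10.0     # one modulated optical channel (assumed)

        print(f"copper pins needed      : {target_bw_gbps / copper_pin_gbps:.0f}")
        print(f"optical channels needed : {target_bw_gbps / optical_ch_gbps:.0f}")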
  • by warmflatsprite ( 1255236 ) on Monday March 24, 2008 @04:53PM (#22849896)
    You're on the right track, but you're not quite there. Solar panels are more or less arrays of photodiodes. AFAIK most fiber systems use PIN photodiodes to convert the light intensity over a specific band of wavelengths in a fiber into electrical current. Note that I said current, not voltage. Typically a transimpedance amplifier and some kind of comparator circuit is then used to measure the intensity of the signal. PIN diodes can convert very small quantities of light into very small currents, and transimpedance amplifiers can deal with very small currents as well. Generally the limiting factor for low-light-intensity systems like this is the "dark current" of the diode you're using. If the current generated by your light source is within the noise of the dark current, you won't be able to detect any change in the system. Fiber systems operate at light intensities that generate currents well above this dark current, and they do so without a high power demand.

    Power issues can arise from speed issues, though. Since photodiodes need a fairly large surface area to generate enough current from light signals, the PN (or PIN) junction acts like a capacitor. Capacitors act like low-pass filters, and this limits the switching frequency of the signal you can transmit, which effectively limits the data rate of the system. If you make the surface area smaller, you'll need to increase the intensity and focus of your light beam to make up for the change. This could give high-speed systems high bus power requirements and higher manufacturing costs.
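    A short sketch (in Python) of that capacitance argument, treating the photodiode's junction capacitance and the transimpedance feedback resistance as a simple RC low-pass filter and computing its corner frequency; the component values are assumed for illustration.

        import math

        def cutoff_hz(resistance_ohm, capacitance_f):
            # -3 dB corner of a first-order RC low-pass: f_c = 1 / (2 * pi * R * C).
            return 1.0 / (2.0 * math.pi * resistance_ohm * capacitance_f)

        r_feedback = 500.0   # transimpedance feedback resistance, ohms (assumed)

        for label, c_junction in [("large-area diode, 2 pF  ", 2e-12),
                                  ("small-area diode, 0.2 pF", 0.2e-12)]:
            print(f"{label}: f_c ~ {cutoff_hz(r_feedback, c_junction) / 1e6:,.0f} MHz")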
  • by TruthfulLiar ( 927336 ) on Monday March 24, 2008 @06:49PM (#22850968) Homepage
    I have to wonder, if Sun is pursuing Defense contracts, does Sun know where its business is headed? Usually companies take Defense contracts when they are small, need money, and don't really have a product yet. Since Sun made $740 million last year, you'd think they could afford to spend $40 million on this (probably over several years), and then they'd get to keep all the knowledge to themselves (including their R&D direction). So I can only assume that either Sun thinks this has too small a chance of success to invest in, or they can't think of any ideas for the future and are using government money to explore lots of ideas in the hope that one of them keeps the company afloat.

    Maybe it's just because I'm not in the server space, but it's unclear to me why exactly I would buy a Sun machine. I used to know--they were fast and had a nice version of Unix--but now Solaris is free and I'm not even sure if Sun makes their own chips any more.
