Intel Devises Chip Speed Breakthrough

Chad Wood writes "According to the New York Times (free reg. req.), Intel has demonstrated a research breakthrough, making silicon chips that can switch light like electricity. The article explains: ''This opens up whole new areas for Intel,' said Mario Paniccia, an Intel physicist who started the previously secret Intel research program to explore the possibility of using standard semiconductor parts to build optical networks. 'We're trying to siliconize photonics.' The invention demonstrates for the first time, Intel researchers said, that ultrahigh-speed fiber-optic equipment can be produced at personal computer industry prices. As the cost of communicating between computers and chips falls, fundamentally new kinds of computers not limited by physical distance should become a reality, experts say.'"
  • Google link (KW) (Score:5, Informative)

    by jaxdahl ( 227487 ) on Wednesday February 11, 2004 @08:15PM (#8254673)
    No registration required [nytimes.com]
  • NYTimes Reg (Score:-1, Informative)

    by slashdot2004 ( 750446 ) on Wednesday February 11, 2004 @08:16PM (#8254684) Homepage Journal
    User: slashdot
    Password: slashdot


    More [bugmenot.com]
  • NYT not necessary (Score:4, Informative)

    by djupedal ( 584558 ) on Wednesday February 11, 2004 @08:19PM (#8254702)
    SAN JOSE, California (AP) -- In an advance that could inexpensively speed up corporate data centers and eventually personal computers, researchers used everyday silicon to build a device that converts data into light beams.

    Light-based communications has until now largely been the realm of large telecom companies and long-haul fiber-optic networks because of the expense of the exotic materials required to harness photons, the basic building block of light.

    Now, researchers at Intel Corp. say their results with silicon promise to reduce the cost of photonics by introducing a well-known substance that's more readily available.

    In the study, published in Thursday's journal Nature, the Intel researchers reported encoding 1 billion bits of data per second, 50 times faster than previous silicon experiments. They said they could achieve rates of up to 10 billion bits per second within months.

    "This is a significant step toward building optical devices that move data around inside a computer at the speed of light," said Pat Gelsinger, Intel's chief technology officer.

    Intel believes the finding could have profound implications for the links between servers in corporate data centers. Eventually, the technology could find its way into personal computers and even consumer electronics.

    "It is the kind of breakthrough that ripples across an industry over time, enabling other new devices and applications," Gelsinger said. "It could help make the Internet run faster, build much faster high-performance computers and enable high bandwidth applications like ultra-high-definition displays or vision recognition systems."

    Unlike electrons that flow through copper connections common today, the photons in light are not susceptible to data-slowing interference and can travel farther.

    The Intel researchers built a device called a modulator, which switches light into patterns that translate into the ones and zeros of the digital world.

    A light beam was split into two as it passed through the silicon, which has tiny transistor-like devices that alter light. When the beams are recombined and exit the silicon, the light goes on and off at a frequency of 1 gigahertz, or a billion times a second.

    Infrared light is used because it can pass through silicon.

    "Just as Superman's X-ray vision allows him to see through walls, if you had infrared vision, you could see through silicon," said Mario Paniccia, a study author and director of Intel's silicon photonics research. "This makes it possible to route light in silicon, and it is the same wavelength typically used for optical communications."

    The researchers expect to be able to increase the frequency to 10 gigahertz, making the technology commercially viable, said Victor Krutul, senior manager of Intel's silicon photonics technology strategy.

    "This implies that the economies of scale that we have seen for the electronics industry could one day apply to the photonics industry," Graham T. Reed, a professor of optoelectronics at the University of Surrey's Advanced Technology Institute, said in a commentary that accompanied the research paper.
  • by Anonymous Coward on Wednesday February 11, 2004 @08:22PM (#8254734)
    ... not a chip you can 'overclock'. Basically, it is a way to send LOTS of data over a fiber line. They use an example of picking any seat in a stadium and having a dynamic TV show you that seat based on the angle at which you sit relative to the TV. So unless the data is pre-processed, this is NOT a new CPU.

    "The device Intel has built is the prototype of a high-speed silicon optical modulator that the company has now pushed above two billion bits per second at a lab near its headquarters in Santa Clara, Calif. The modulator makes it possible to switch off and on a tiny laser beam and direct it into an ultrathin glass fiber. Although the technical report in Nature focuses on the modulator, which is only one component of a networking system, Intel plans on demonstrating a working system transmitting a movie in high-definition television over a five-mile coil of fiberoptic cable next week at its annual Intel Developer Forum in San Francisco."

  • Re:Still binary.. (Score:3, Informative)

    by Edmund Blackadder ( 559735 ) on Wednesday February 11, 2004 @08:23PM (#8254742)
    Why do you think there will be size and speed gains?

    The complexity of most logical and arithmetic operations that have to be performed on a bit increases exponentially with the number of possible states in the bit.

  • by Orthogonal Jones ( 633685 ) on Wednesday February 11, 2004 @08:26PM (#8254769)

    Disclaimer: I am a Ph.D. in fiber optic physics

    This is a 2 Gb/s modulator, whereas III-V semiconductor modulators above 40 Gb/s are commercially available.

    A modulator by itself is nothing new, and not the whole story. You need optical waveguides with bending radii much smaller than currently available for routing, and optical logic gates which are an even worse problem.

    The article doesn't describe the technology -- is it electroabsorption? Mach-Zehnder?

    Nevertheless, a small and fast silicon modulator has obvious commercial value, even if it isn't the greatest thing since sliced bread.

  • Re:Still binary.. (Score:5, Informative)

    by HeX314 ( 570571 ) on Wednesday February 11, 2004 @08:30PM (#8254825) Homepage
    The difficulty with mastering tri-state and quad-state computers (as opposed to bi-state or binary) comes with the gates used. How would one perform an inverse operation when there are two other choices from which to choose? Instead of AND, OR, and NOT (not to mention combinations such as XOR, NOR, NAND, etc.), you would have at least 8 gates (if I recall correctly; I worked on something similar to this during the summer) doing things such as shifting, reversing, "inverting," and such. The different permutations of these make it even more confusing.

    In addition to this, you would need to find a medium capable of carrying a tri-state signal (electrons are not best suited for this). In fact, given that we sometimes have a tough time even distinguishing on from off, I would personally suggest we leave it at binary for the time being.

    I know it's a long post, but most of it is necessary.
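
    As a rough sketch of what "more kinds of inverter" means in practice: below are the three standard single-input ternary inverters from the ternary-logic literature, with the states encoded 0/1/2 purely for illustration (a minimal sketch, not taken from the parent's summer project).

    # The three standard single-input ternary "inverters" (illustrative encoding: 0=low, 1=mid, 2=high).

    def sti(x):
        """Simple ternary inverter: 0 -> 2, 1 -> 1, 2 -> 0."""
        return 2 - x

    def pti(x):
        """Positive ternary inverter: anything below high maps to high."""
        return 2 if x < 2 else 0

    def nti(x):
        """Negative ternary inverter: anything above low maps to low."""
        return 0 if x > 0 else 2

    print(" x  STI PTI NTI")
    for x in (0, 1, 2):
        print(f" {x}   {sti(x)}   {pti(x)}   {nti(x)}")

    Already the single-input case needs three gates where binary needs one NOT; the two-input gates multiply the same way.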
  • by RandBlade ( 749321 ) on Wednesday February 11, 2004 @08:32PM (#8254842)
    Fluorescent and LED lights do generate heat, just not to the same order of magnitude as incandescent lights. It's significantly less, which I specifically mentioned in the post! However, there is still some heat generated. If you place a lot of LED lights together, though, they can generate enough heat to become significant.
  • by egomaniac ( 105476 ) on Wednesday February 11, 2004 @08:32PM (#8254844) Homepage
    Fluorescent and LED lights do not get hot.

    Sure they do. They are far more efficient than incandescent bulbs, so they produce significantly less heat per lumen, but a very bright fluorescent or LED light can get quite hot.

    In fact, high-brightness LEDs like the Luxeon Star have to be mounted on heat sinks to keep them from burning up.
  • Re:Still binary.. (Score:5, Informative)

    by fredrikj ( 629833 ) on Wednesday February 11, 2004 @08:33PM (#8254851) Homepage
    More info about base 3 computing here [americanscientist.org].
  • by DarkOx ( 621550 ) on Wednesday February 11, 2004 @08:33PM (#8254856) Journal
    Temperature is really not the problem. The problem is stabilization. Different gates "stabilize" (produce consistent high or low output) at different rates. Gates are strung together into circuits on the chip, and those circuits then take a certain amount of time to stabilize. This is critical because the output of one circuit will be the input to another, whether on the same IC or interfacing with something else.

    The reason you can overclock is that in most cases the ICs in computers, the CPU in particular, are underclocked to begin with: the clock cycle is longer than the stabilization time when the chip is cool. However, the current running through the traces and the switches meets some resistance, and part of it is dissipated as heat. When silicon-electric gates heat up they respond more slowly and the stabilization time becomes longer, so the clock cycle must be longer if you want correct output. This is why, if you take special measures to keep the chip cooler, you can often run it faster.

    Fiber optics are not perfect and can heat up too, and the smaller you make them the more that problem is likely to be exacerbated. The question I can't answer for you is whether that is a problem at all. Silicon-optic gates may not vary in stabilization time the way their electric counterparts do. They may, and then the same rules apply; or they could have some optimal temperature where a cold chip does not work as well as a warm one; or they might work perfectly up to a certain failure point.
    I would love some answers from an engineer who is working with this stuff.
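
    A toy sketch of the argument above, assuming made-up settling times (real critical-path numbers vary wildly by design):

    def max_clock_hz(t_settle_ns, margin=1.1):
        """Highest safe clock for a given critical-path settling time, with a timing margin."""
        return 1.0 / (t_settle_ns * 1e-9 * margin)

    cool_ns, hot_ns = 0.40, 0.48   # assumed settling times for a cool vs. a hot chip
    print(f"cool chip: ~{max_clock_hz(cool_ns) / 1e9:.2f} GHz")
    print(f"hot chip:  ~{max_clock_hz(hot_ns) / 1e9:.2f} GHz")
    # Cooling shortens the settling time, which is why a cooler chip can be clocked faster.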
  • by NanoGator ( 522640 ) on Wednesday February 11, 2004 @08:39PM (#8254909) Homepage Journal
    "Fluorescent and LED lights do not get hot."

    This is not true. They do get hot, just not as hot. They don't require as much energy to generate light.

    With that said, the question really can only be answered after we know about the design of the chip. If all the light emitting aspects of the chip can be run at full intensity without ever being turned off, and the chip can survive that, then the answer is yes, you can overclock it to the max without it burning out. Will the chips work that way? Well I don't know. We are talking about very small components.

    His question was quite valid.
  • by Pharmboy ( 216950 ) on Wednesday February 11, 2004 @08:40PM (#8254915) Journal
    Probably the same person who modded yours informative. You are incorrect regarding fluorescents. I can't speak to diodes, but I have known them to get quite hot (such as in a rectifier), so I have doubts about that as well.

    Fluorescents DO get hot, as do the ballasts (see post below). I just got done in the lab measuring different ballast systems that use high frequency to energize high-output fluorescent lamps. Current-generation systems are twice as efficient as older systems by using HF, but they are still hot as hell. A 100 watt fluorescent lamp, driven with only 65 watts of power (typical CPU power) at high frequency, produces an ambient temperature of over 100F at 6 cm away. The surface temperature is over 212F (100C).

    So yes, fluorescents DO get hot. They just produce a lot more light per BTU of waste heat, but they are still hot.

    Another problem: fluorescents are plasma devices, similar to neon signs. This means they operate in a partial vacuum (1% of atmosphere), with the generated electric fields causing an outer electron of the mercury atom to fly off toward the positive end of the lamp and strike the phosphor coating of the lamp. This reduces the energy in the electron, which is then captured by any mercury atom with an electron missing, and thus a positive charge. This is not a practical solution inside an integrated circuit. And this isn't even counting the other problems I mentioned in the other post, such as ballasting.
  • by jaoswald ( 63789 ) on Wednesday February 11, 2004 @08:45PM (#8254952) Homepage
    Can you guys all shut up about Pentium and clockspeed for crying out loud?

    This is about optical networking using silicon as the semiconductor. Not about a CPU.

    Everyone who doesn't understand what an optical modulator is can go post on the latest SCO story. That is all.
  • by mamba-mamba ( 445365 ) on Wednesday February 11, 2004 @08:45PM (#8254956)
    Right. The article implies that they found a way to make modulators that doesn't involve any fancy process steps or exotic substrates. This could open the door to modulators built-in to processors or chipsets, instead of relying on expensive, power-hungry external modulators.

    It's a bit like when they figured out how to build serializers in CMOS. Suddenly there are serializers everywhere that don't need a separate physical layer device. This is almost like the next step.

    Also, this could mean that things like optical fibre-channel and possibly 10 gigabit ethernet will be cheaper. Who knows.

    Interesting!

    MM
    --
  • Not really (Score:5, Informative)

    by Sycraft-fu ( 314770 ) on Wednesday February 11, 2004 @08:47PM (#8254973)
    Problem is, to have three or four states you need more complex circuitry. Binary is simple and works well. A bit is a gate, a transistor. It's on or it's off, 1 or 0. Well, if I want to represent four states, how do I do that? I guess I need to do it by voltage or amperage level. Means I need a more complicated circuit.

    Give you something of a parallel in another digital field:

    Digital CD audio is stored as 16 bits per sample, 44,100 samples per second. Well, that means that to convert the digital data to analogue, which is what sound waves are, you need to change the output voltage 44,100 times per second, and do it to a resolution of 65,536 different levels. Originally, D/A converters tried to do just that, and failed rather miserably. It was just all hell to build a circuit that could do a good job of controlling voltage that accurately, that quickly, in that fashion.

    The answer, it turns out, came from computers and high-current variable-speed electric motors. Motors of that type are controlled using what is known as pulse-width modulation. Their power source is either all the way on or all the way off, binary in other words, and it pulses at a high rate of speed. The faster you want the motor to go, the more 'on' pulses you supply. Works great: you have a simple design that provides a fine level of speed control. The only downside is that the motor whines at the frequency of the pulses.

    Now this was applied to audio as well. What you do is convert the PCM data on the CD to a much higher frequency 1-bit stream. That then controls the analogue voltage. It ends up working great, so well in fact that Sony has a system called Direct Stream Digital that just stores the 1-bit data directly. This type of converter is called a Delta-Sigma D/A converter and is basically the only kind used any more. You may see consumer CD equipment, especially older stuff (Sony Discmans did it a lot), occasionally advertised as "1-bit D/A".

    Binary systems are just simpler to implement in electronics, hence we use them. It is at higher levels that data starts being represented with multiple states.
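
    A minimal sketch of the 1-bit idea described above: a first-order delta-sigma (error-feedback) loop that turns multi-level samples into a two-level stream whose average tracks the input. The oversampling ratio and the test signal are arbitrary illustrative values, not Sony's actual parameters.

    import math

    def delta_sigma_1bit(samples, oversample=16):
        """First-order delta-sigma (error feedback): multi-level input -> +/-1 stream."""
        bits, acc = [], 0.0
        for s in samples:
            for _ in range(oversample):           # hold each sample for OSR clock ticks
                acc += s                          # integrate the input
                out = 1.0 if acc >= 0 else -1.0   # 1-bit quantizer
                bits.append(out)
                acc -= out                        # feed the decision back
        return bits

    # A slow sine wave in [-1, 1]; the density of +1s in the output tracks the waveform.
    pcm = [math.sin(2 * math.pi * n / 64) for n in range(64)]
    stream = delta_sigma_1bit(pcm)
    print(sum(1 for b in stream if b > 0), "of", len(stream), "bits are high")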
  • Thermal expansion? (Score:3, Informative)

    by Absurd Being ( 632190 ) on Wednesday February 11, 2004 @09:01PM (#8255068) Journal
    Heat will probably be a problem. Since you're dealing with photonic crystals, a small heat-related change in size (a few angstroms) will change the optical properties of the device dramatically. But light doesn't heat up materials quite as dramatically as rapidly switching MOSFETs, and you don't get wasteful tunneling currents at small sizes either, so you can make a better device. However, you CAN'T actually overclock: you'll mess up the optical properties of the device severely if you switch to different frequencies (turning a diffraction pattern that indicates an OR into an AND, for instance).
  • by qedigital ( 545151 ) on Wednesday February 11, 2004 @09:01PM (#8255073) Homepage

    It is a common misconception that electrons move quickly through conductors. This, however, is not the case. When an electric field is applied to a conductor (e.g. from a battery), the randomly moving electrons in the material gain a small drift velocity. In copper (a relatively good conductor), this drift velocity is on the order of 10^-5 m/s to 10^-4 m/s (much less than c = 3E8 m/s). The reason that conductors work the way they do is that the information is carried by the electric field rather than the individual electrons. A good analogy here is to think of a tube filled with ball bearings. Stuff one more bearing in the tube at one end and one pops out of the other "instantaneously". While the inserted bearing didn't travel the distance, it did have an effect at the end of the tube.

    Another common error is raised by the parent post. Transmission rate and bandwidth are completely different concepts. The transmission rate refers to the number of bits of information that can be transmitted down a pipe without loss (i.e. the capacity). Bandwidth, on the other hand, is a frequency-domain concept and refers instead to the range of frequencies that the pipe can support. While a system with greater bandwidth usually has greater capacity, equating the two is a gross generalization.
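
    To put a number on the drift-velocity claim, a back-of-the-envelope calculation using v = I / (n*e*A) with the usual free-electron density for copper; the 1 A current and 1 mm^2 cross-section are arbitrary illustrative choices.

    # Electron drift velocity in copper: v = I / (n * e * A)
    n = 8.5e28       # free electrons per cubic metre in copper (approximate)
    e = 1.602e-19    # elementary charge, coulombs
    I = 1.0          # assumed current, amps
    A = 1.0e-6       # assumed cross-section, square metres (1 mm^2)

    v = I / (n * e * A)
    print(f"drift velocity ~ {v:.1e} m/s")   # ~7e-5 m/s, versus c = 3e8 m/s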

  • by pla ( 258480 ) on Wednesday February 11, 2004 @09:07PM (#8255108) Journal
    and it seems you have that special sauce investors are looking for down perfectly.

    Pah... Save a few bucks and just use the Dilbert mission statement generator [dilbert.com]

    Customize the list of nouns, and you can even make it sound relevant to your own business.

    And, for reference, I did actually use that to come up with an "Objective" line for my SO's resume (though as a warning, she works in a field where the resume counted as a formality - she could have used "I want you to pay me to scratch my ass all day" as her objective, and still gotten the job).
  • Re:Still binary.. (Score:2, Informative)

    by cubic6 ( 650758 ) <tom@losthalHORSEo.org minus herbivore> on Wednesday February 11, 2004 @09:21PM (#8255179) Homepage
    If I remember correctly, the optimum base for data size is base e (approx 2.7). I guess that base 3 would be the best we could achieve. Can anybody who knows more about information theory back me up?
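
    The usual argument is radix economy: the hardware cost of representing a number N in base b scales roughly as b*log_b(N), i.e. proportional to b/ln(b), which is minimized at b = e. A quick numerical check (illustrative only):

    import math

    # Radix economy: cost of representing numbers in base b scales as b / ln(b).
    for b in (2, 3, 4, 10, math.e):
        print(f"base {b}: relative cost {b / math.log(b):.3f}")
    # base 3 (2.731) narrowly beats base 2 (2.885); the true minimum is at b = e (2.718).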
  • by Crypto Gnome ( 651401 ) on Wednesday February 11, 2004 @09:30PM (#8255238) Homepage Journal
    After reading the article, it turns out that *all* this hoo-ha is about the fact that Intel has worked out how to do telecommunications-level optical switching (read: LED-LASER-RAPID-BLINKING) on a chip built using "normal" chip fabrication techniques.

    This is in no way about "faster CPUs"; it's ALL about "now we can fabricate telecomms equipment using standard CPU techniques, so it'll be cheaper and therefore easier to put into devices".

    So you're not likely to be getting significantly faster PCs from this technology, though it *does* make more likely the chance of (one day) having a direct gigabit fiber port on your PDA (or digital camera/other-small-electronics-device).
  • by sfp2322 ( 25637 ) on Wednesday February 11, 2004 @09:32PM (#8255253)
    The Nature paper [cornell.edu]
  • by DeeKayWon ( 155842 ) on Wednesday February 11, 2004 @09:34PM (#8255268)
    This is not really a reply to the parent. This is meant to help explain why silicon is so tough to make optoelectronics with.

    The electrons in materials have many different energies - in metals, the possible energies are so tightly spaced that you have what looks like a single continuous band of energy levels. With semiconductors, you have two effectively continuous bands with an energy gap between them. For silicon, for example, the gap is 1.1eV. The higher energy band is called the conduction band (CB) while the lower is called the valence band (VB).

    When an electron in the CB falls into the VB (direct recombination), it loses energy which is emitted in the form of heat (phonons, aka lattice vibrations) or light (a photon). Electrons in the CB prefer to hang around in the lowest energy states of the CB, so that's where they usually fall from. The unoccupied states of the VB tend to be the highest energy states in that band, so that's where electrons fall to.

    Now, the problem: momentum conservation. An electron can only directly fall from the CB to the VB and emit a photon if momentum is conserved, and photon momentum is negligible compared to that of the electron. So the momenta of the source and destination states must be pretty close, and for there to be an appreciable amount of direct recombination, the momenta of the CB's lowest-energy states must correspond to the VB's highest energy states, and this happens in direct bandgap semiconductors.

    Si, unfortunately, is an indirect bandgap semiconductor. The preferred source and destination states don't line up on an energy-momentum diagram.

    Now, that doesn't mean it's impossible to get light out of silicon, just more difficult. You need what are called recombination centres, which are defects which the electrons can get trapped in (emitting phonons in the process and changing momentum) and from there drop to the VB (indirect recombination). For example, Al-doped SiC can be used to make blue LEDs, but their efficiency is measured in fractions of a percent.

    III-V semiconductors are made of elements in the III and V groups in the periodic table, GaAs being the most well-known. They tend to be direct bandgap semiconductors, and so they are far more conducive to direct recombination and are easier to make optoelectronics out of.
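
    As a quick sanity check on the numbers above: a band-to-band photon has wavelength lambda = h*c/E_g, roughly 1240 nm*eV divided by the gap, so silicon's 1.1 eV puts any emission in the infrared, which also fits the article's point that infrared light passes through silicon. (The GaAs gap below is the commonly quoted value, included only for comparison.)

    # Photon wavelength from bandgap energy: lambda = h * c / E_g
    h = 6.626e-34     # Planck constant, J*s
    c = 2.998e8       # speed of light, m/s
    eV = 1.602e-19    # joules per electron-volt

    for name, gap_eV in [("Si", 1.1), ("GaAs", 1.42)]:
        wavelength_nm = h * c / (gap_eV * eV) * 1e9
        print(f"{name}: E_g = {gap_eV} eV -> lambda ~ {wavelength_nm:.0f} nm")
    # Si: ~1130 nm (infrared); GaAs: ~870 nm (near-infrared).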
  • by Ungrounded Lightning ( 62228 ) on Wednesday February 11, 2004 @09:40PM (#8255310) Journal
    Not even LEDs are 100% efficient. However, for an optical system, the heat production is related to the duty cycle of the lamps, rather than the switching speed, so the heat production should remain constant regardless of clock speed.

    That's true of the heat production in the guts of the lamp itself (at a given light intensity). But there are other factors.

    On the one hand, this means you don't need to improve cooling to overclock. On the other, it means that you can't improve the overclock level with improved cooling.

    Most of the heat loss in a circuit comes from the I-squared-R losses of the currents needed to charge and discharge the stray capacitance of the wiring (even the tiny traces on the ICs) and the space charge of the devices.

    In particular, if the wire has any significant length, you need to run that current through a series resistance (at least at the driving end) matching the impedance of the wire, in order to produce a nice waveshape at the far end and prevent "ringing" as the signal bounces back and forth (which would degrade the waveshape at the inputs to far-end gates and make the signal both more sensitive to noise AND more generative of noise to interfere with its neighbors).

    With CMOS you only pull power (except leakage power) when you CHANGE the state of a signal. But when you do, you have to charge, or discharge, the signal wiring through that matched resistance. The impedance of the wiring doesn't change a lot with technology and speed. So with a given length of wire, you have a given amount of energy dropped every time you switch it. Switch it twice as fast, and you generate twice as many pulses of heat.

    New generations of semiconductors fight this in three ways:
    - Shrink the components (so they have less stray capacitance to charge and discharge).
    - Shorten the signal runs by making the components smaller so they can be closer together (reducing the stray capacitance of the lines). (But this doesn't help for signals that HAVE to cross the chip, or leave it.)
    - Lower the power supply voltage (so you don't have to swing it as far. Current goes up with the voltage, heat loss with the square of the current.) (For signals that leave the chip this may be harder to do than for signals that stay on it, due to external interference.)

    For switching a light-emitting device you still have to charge and discharge the capacitance of the device itself and the wiring to it. Switch it faster and IT doesn't heat up much more. But the driver circuit does.

    By putting a light modulator on the chip, Intel's new technology wins in two ways:
    - You don't have to rapidly switch the power to the laser (which involves switching a LOT of current through an impedance-matching resistor).
    - You don't have to run a microwave-speed signal through a long resistive wire, which degrades its waveshape and also produces still more losses.
    Instead you switch a low-power, short-range, on-chip wire to a low-capacitance active region on the on-chip modulator. Switching losses are relatively small, comparable to those of a gate-to-gate internal signal in the same chip.
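
    A back-of-the-envelope illustration of the "switch it twice as fast, get twice the heat" point: dynamic CMOS power goes roughly as P = alpha * C * V^2 * f, linear in clock frequency and quadratic in supply voltage. All numbers below are made up for illustration.

    def dynamic_power(c_switched, vdd, freq, activity=0.1):
        """Approximate CMOS dynamic power: P = activity * C * V^2 * f."""
        return activity * c_switched * vdd**2 * freq

    C = 1e-9   # assumed total switched capacitance, 1 nF
    for vdd, freq in [(1.5, 1e9), (1.5, 2e9), (1.2, 2e9)]:
        print(f"Vdd={vdd} V, f={freq / 1e9:.0f} GHz -> ~{dynamic_power(C, vdd, freq):.2f} W")
    # Doubling f doubles the power; dropping Vdd from 1.5 V to 1.2 V claws back about a third.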
  • by taniwha ( 70410 ) on Wednesday February 11, 2004 @09:46PM (#8255356) Homepage Journal
    'tertiary' logic and 'tristate' have different meanings. Tristate is simply a way of making a gate not drive a wire - so that some other gate can without 'bus fights' - there are no gates there that can sense that the wire is not being driven.

    In fact the signal on such a wire will tend to hang around at about the level it was last driven for quite a while (the wire is a cap) until it discharges or some other gate drives it.

    In fact internal wires that are genuinely tristate are considered evil in most chip designs: a floating signal will tend to turn on both the transistors in the gate(s) being driven, causing current to flow where it shouldn't (one should be on or the other, not both), and chips with internal floating nodes can get into horrible lockup states which cause thermal runaway and chip death. Normally if you are using tristate circuits you have a resistor to pull the wire to a known value when not in use, a weak 'keeper' transistor, a protocol which makes sure that someone is always driving them, or a combination. (PCI is a great example where all the bus clients know who's driving each wire at any time, and when wires are released they are first driven to a safe keeper voltage and then released so a weak resistor can hold them.)

  • Re:damn universe.. (Score:4, Informative)

    by Tailhook ( 98486 ) on Wednesday February 11, 2004 @10:25PM (#8255562)
    My Quake game is limited by physical distance. It takes 100ms to go across the country and back. Latency is the killer here.

    Rough, napkin quality calculations here...

    m = miles to server = 2000 (round figure for "across the country")
    c = miles covered by light in 1 sec (roughly 186,000)

    2m/c = ~21ms round trip time

    100ms - 21ms = ~79ms lost mostly to switching hardware; given that (in my experience) a simple ICMP ping will usually show very similar results, we probably can't attribute it to server processing time.

    So, as you can see, there is plenty of room for improvement. Faster/less switching between you and them means less latency. If you have 1/50 second latency, events are reported to you in the time it takes a good CRT to refresh twice.

    Light is fast.
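
    The same napkin math, written out so the assumptions are explicit (2,000 miles one way, light in a vacuum; light in real fiber is about a third slower, which only strengthens the point that switching gear dominates):

    # Napkin check of the round-trip numbers above.
    miles_one_way = 2000               # "across the country", assumed
    c_miles_per_s = 186_282            # speed of light in vacuum, miles per second
    measured_ping_ms = 100

    light_rtt_ms = 2 * miles_one_way / c_miles_per_s * 1000
    print(f"speed-of-light round trip: ~{light_rtt_ms:.0f} ms")
    print(f"left for routers and switches: ~{measured_ping_ms - light_rtt_ms:.0f} ms")
    # In fiber the propagation share is closer to 32 ms; most of the 100 ms still
    # goes to switching hardware.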
  • by Hal-9001 ( 43188 ) on Thursday February 12, 2004 @01:40AM (#8255703) Homepage Journal
    The article doesn't describe the technology -- is it electroabsorption? Mach-Zehnder?
    Thanks to my university's online subscription, I was able to read the actual Nature article. The device is a phase modulator, and it actually uses the free-carrier plasma dispersion effect (not a classical electro-optic field effect like the Pockels effect) to modulate the refractive index of silicon. They achieve this effect using a MOS capacitor instead of carrier injection or depletion in a p-i-n device. By doing so, they've boosted the modulation speed from 20 Mbps to 1 Gbps. To convert the phase modulation to amplitude modulation, they fabricate the device in one arm of a waveguide Mach-Zehnder. Admittedly, it's not a great advance in overall bitrate, but it is a significant step forward for silicon as a photonic material.
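
    For anyone unfamiliar with the device named above: a Mach-Zehnder interferometer splits the light, phase-shifts one arm (here via the carrier-induced index change), and recombines the arms, so the output intensity goes as cos^2 of half the phase difference. A rough sketch of the ideal transfer function (values illustrative):

    import math

    def mach_zehnder_output(phase_shift_rad, input_power=1.0):
        """Ideal Mach-Zehnder interferometer: I_out = I_in * cos^2(delta_phi / 2)."""
        return input_power * math.cos(phase_shift_rad / 2) ** 2

    for dphi in (0.0, math.pi / 2, math.pi):
        print(f"delta_phi = {dphi:.2f} rad -> I_out = {mach_zehnder_output(dphi):.2f}")
    # delta_phi = 0 gives full transmission (a '1'); delta_phi = pi gives extinction (a '0'),
    # so driving the phase shifter with data toggles the light on and off.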
  • by casehardened ( 700814 ) on Thursday February 12, 2004 @02:23AM (#8255904)
    The neat thing about silicon-on-insulator photonics is the bending radius. Since the index contrast is so high (3.5 vs 1.5), bends with radii under 50 microns are easily achievable. This makes high levels of integration possible. Thus, you can have modulators, wavelength filters, etc. all on the same chip. Now your CPU can talk to your RAM at 16 Gbit/s with, say, 8-wavelength multiplexing.
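
    The arithmetic behind that last sentence, taking 2 Gbit/s per wavelength (the rate reported elsewhere in the thread) as an illustrative per-channel figure:

    # Aggregate link rate with wavelength-division multiplexing (illustrative numbers).
    per_channel_gbps = 2    # per-wavelength modulator rate, per the Intel demo
    wavelengths = 8         # channels multiplexed onto one waveguide
    print(f"aggregate: {per_channel_gbps * wavelengths} Gbit/s")   # 16 Gbit/s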
  • Re:Google link (KW) (Score:4, Informative)

    by Bull999999 ( 652264 ) on Thursday February 12, 2004 @02:38AM (#8255961) Journal
    What educational edge? U.S. students have some of the worst scores in math and science among first-world countries.

    I've had some experience with the Asian school system, and the reasons why Asian students may outscore U.S. students with less money spent on education are:

    1. They allocate more resources toward math and science than other subjects. I'd say their math level is close to two years ahead of U.S. students.

    2. Teachers can administer corporal punishment. Students respect (or fear) teachers more in general, which means fewer disruptive students ruining the learning experience for the rest of the class.

    3. They eat their lunches in their classroom and clean up after themselves, including the hallways and restrooms. Less money spent on staff.

    4. Asian students study longer on average than U.S. students.

    As for the "defunding", education is not the only program getting reduced funding, and the cuts in education have been targeted at subjects like arts and music while preserving math and science related subjects.
  • Re:Still binary.. (Score:3, Informative)

    by MechaStreisand ( 585905 ) on Thursday February 12, 2004 @03:00AM (#8256039)
    Minor nitpick: b doesn't approach Pi, it approaches e. Otherwise, that looks like the formula I've seen in an article on this subject.
  • We HAVE that (Score:3, Informative)

    by Sycraft-fu ( 314770 ) on Thursday February 12, 2004 @03:23AM (#8256111)
    It is fairly uncommon to find transmissions over long distances that are just simple on-off pulses. Even modems don't do that, and haven't for a long time. It turned out that 300 bps is about the max you can do with simple on-off signaling, so faster modems use more complex modulations that have multiple different tones and amplitude levels.

    At the newest and most abstract level we see DWDM fibre transmissions. These take multiple signals at different frequencies of light (the individual transmissions, which are usually more than simple on/off) and multiplex the signals over a single fibre.

    None of that bears any relation to processing on silicon chips.
  • Re:Still binary.. (Score:2, Informative)

    by jrobertray ( 86711 ) on Thursday February 12, 2004 @04:27AM (#8256304) Homepage
    Perl golf time...

    perl -pe 's:(\d{8})\s*:chr oct"0b$1":ge'

    Feed it the string of binary on STDIN.

    Is there a shorter translator?
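
    Not shorter, but for anyone who doesn't read Perl, a rough Python equivalent of the one-liner above (pull each run of eight binary digits from stdin and decode it to a character; whitespace between bytes is ignored):

    import re
    import sys

    # Read stdin, find each run of exactly 8 binary digits, and decode it to a character.
    text = sys.stdin.read()
    print("".join(chr(int(byte, 2)) for byte in re.findall(r"[01]{8}", text)), end="")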
  • Afloat you say? (Score:5, Informative)

    by DrSkwid ( 118965 ) on Thursday February 12, 2004 @06:11AM (#8256556) Journal
    what is keeping America afloat?

    is a good question

    The 8.2% third quarter growth was purchased on credit: the $374 billion budget deficit that was the largest in the country's history. All indications are that next year's deficit will be even larger, exceeding half a trillion dollars.

    Any idiot with a handful of credit cards charged to the next generation's children can gin up the short-term illusion of prosperity. Until, that is, the bills come due.

    George W. Bush inherited a $127 billion fiscal surplus but ran through all of that and more in his first year. He has turned a $5.6 trillion 10-year forecast surplus into a $3+ trillion forecast loss, an almost unimaginable reversal of $9 trillion in only three years.

    The result of this almost psychotic profligacy, according to the Congressional Budget Office, will be a national debt of $14 trillion in 10 years. Interest payments alone will approach a trillion dollars a year and will exceed spending for all discretionary federal programs combined.

    http://www.commondreams.org/views04/0105-08.htm [commondreams.org]
  • Re:Afloat you say? (Score:2, Informative)

    by AbsalomDaak ( 711187 ) on Thursday February 12, 2004 @12:10PM (#8258569)
    "George W. Bush inherited a $127 billion fiscal surplus but ran through all of that and more in his first year. He has turned a $5.6 trillion 10 year forecast surplus into a $3+ trillion forecast loss-an almost unimaginable reversal of $9 trillion in only three years."

    That isn't true; there hasn't been any "surplus". When Clinton got out we were in DEBT and we still are, just worse. In fact, the government has been running in debt since at least the 1920s. (Note: these figures are based on fiscal years, not when the President was inaugurated, from Official Current US Debt [treas.gov].)

    The Debt when Reagan got in:
    12/31/1980 $930,210,000,000.00

    The Debt after Reagan's first term:
    12/31/1984 $1,662,966,000,000.00 (+0.7 trillion)

    The Debt when Bush I got in:
    09/30/1988 $2,602,337,712,041.16 (+1.6 trillion for Reagan in eight years)

    When Bush I left and Clinton got in:
    09/30/1992 $4,064,620,655,521.66 (+1.4 trillion for Bush I in four years)

    The Debt after Clinton's first term:
    09/30/1996 $5,224,810,939,135.73 (+1.1 trillion)

    The Debt when Clinton left office:
    09/30/2000 $5,674,178,209,886.86 (+1.6 trillion for Clinton in 8 years)

    The Current debt:
    02/10/2004 $7,012,102,110,400.63 (+1.3 trillion for Bush II in 3 years)

    All of the last few presidents have been steadily increasing the debt by huge margins; it's nothing new (unfortunately).
    See Government Debt [att.net] or Official Current US Debt [treas.gov]

  • Re:Afloat you say? (Score:4, Informative)

    by Atryn ( 528846 ) on Thursday February 12, 2004 @01:20PM (#8259373) Homepage
    Isn't true, there hasn't been any "surplus". When Clinton got out we were in DEBT and we still are, just worse.
    This is a very common misunderstanding and the language must be very clear. Many Americans (unfortunately) do not understand [geocities.com] the difference between deficit/surplus and debt. The "deficit" is the amount by which federal spending exceeds federal income in the current year budget. The debt, OTOH, is what the U.S. owes its creditors. See also here [treas.gov]

    The relationship is that the deficit is the amount by which the federal debt will grow in a given year. To complicate matters, the Congressional Budget Office forecasts the "projected deficit/surplus", often for the next 5, 10 or 20 years. These "projections" depend on a host of variables but are generally based on current tax policies, projected tax revenues (hence projected employment, spending, etc. are factors) and projected expense changes (bills already passed with spending that kicks in in the future, etc.). These CBO reports are valuable for showing what may or may not need to be fixed or changed, but they should never be considered accurate, as all of the variables change (often significantly) each year (especially the tax code lately).

    There was a forecasted [cbo.gov] "surplus" at the end of Clinton's term. This did not mean that we would be out of debt (a $179B surplus cannot pay off $5 trillion in debt). However, it did mean that we should be able to begin to pay off the debt, thereby reducing future interest payments (which yields a higher forecasted surplus).

    Since most Americans do not understand this, and most cannot comprehend what $7 trillion really is, they tend to ignore the issue. But if we do not start paying down the debt, we will run into major problems. If the world stops buying US Treasury notes, we will have to find some other way to get the money to pay for our deficit spending.

    I'm sure the above has a few mistakes; this topic is fairly confusing and controversial. Several of the above items are also interpreted differently by some folks. See Also Here [harvardmag.com]

    Flame away.
  • Re:Afloat you say? (Score:2, Informative)

    by AbsalomDaak ( 711187 ) on Thursday February 12, 2004 @02:15PM (#8259898)
    Except that there has not been a single year in the last 20 that the debt has gone down. In other words each year there has been a deficit.
    (Information source: Historical Debt [treas.gov])

    2002-2003 $555B deficit
    2001-2002 $421B deficit
    2000-2001 $133B deficit
    1999-2000 $ 18B deficit
    1998-1999 $130B deficit
    1997-1998 $113B deficit
    1996-1997 $189B deficit
    1995-1996 $251B deficit
    1994-1995 $281B deficit
    1993-1994 $281B deficit
    1992-1993 $347B deficit
    1991-1992 $399B deficit
    1990-1991 $432B deficit
    1989-1990 $376B deficit
    1988-1989 $255B deficit
    1987-1988 $252B deficit
    1986-1987 $225B deficit
    1985-1986 $180B deficit *note fiscal year end changed from Dec 31, to Sep 30
    1984-1985 $283B deficit
    1983-1984 $252B deficit


    Where are the surpluses?

    There is not a single year the debt has gone down. In fact, the last actual surplus was a $581M surplus in the 1959-1960 year.
    12/31/1959 290,797,771,717.63
    12/30/1960 290,216,815,241.68
    1959-1960 ---- 580,956,475.95 surplus
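
    The arithmetic in these posts is just year-over-year differences of the Treasury figures; a trivial check using the two 1959/1960 numbers copied from above:

    # Deficit (or surplus) as the year-over-year change in total debt.
    debt_1959 = 290_797_771_717.63   # 12/31/1959, from the figures quoted above
    debt_1960 = 290_216_815_241.68   # 12/30/1960

    change = debt_1960 - debt_1959
    label = "surplus" if change < 0 else "deficit"
    print(f"1959-1960: ${abs(change):,.2f} {label}")   # ~$581M surplus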

"I've seen it. It's rubbish." -- Marvin the Paranoid Android

Working...