HP Hardware

Could HP Beat Moore's Law?

Posted by CmdrTaco
from the i'll-believe-it-when-it-boots dept.
John H. Doe writes "A new type of nano-scale architecture developed in the research labs of Hewlett-Packard could beat Moore's Law and advance the progress of microprocessor development three generations in one hit. The new architecture uses a design technique that will enable chip makers to pack eight times as many transistors as is currently possible on a standard 45nm field programmable gate array (FPGA) chip."
This discussion has been archived. No new comments can be posted.

Could HP Beat Moore's Law?

Comments Filter:
  • by chriss (26574) * <chriss@memomo.net> on Wednesday January 17, 2007 @09:54AM (#17646180) Homepage

    Since the wiring in an FPGA is not fixed, they have to integrate more flexible ways of routing. According to TFA this takes up 80% to 90% of the silicon, leading to a much worse ratio of wiring to transistors dedicated to logic processing compared to "normal" chips. HP is developing something they call "field programmable nanowire interconnect (FPNI)", which consumes a lot less space. So they are not beating Moore's law; they are improving chip space use in FPGAs to approach what today's dies with fixed routing achieve.

    And even if you are desperately seeking more efficient FPGAs, you'd have to be patient. TFA mentions that they are targeting a 25-fold increase in packing density compared to today's 45nm chips by 2020. That's thirteen years, which in Moore's-law steps means about eight 18-month periods, each doubling density. My math may be flawed, but shouldn't that mean that by then we'd have 2^8 = 256 times the density in the normal process as we have today?
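    Working that arithmetic through (a quick sketch; it assumes, as the parent comment does, one density doubling every 18 months):

```python
# Sanity-check the parent's math: thirteen years of Moore's-law doublings
# (one per 18 months, per the common reading) versus HP's targeted 25x gain.
years = 2020 - 2007                  # TFA's target date minus the present
periods = years * 12 / 18            # number of 18-month doubling periods
density_factor = 2 ** int(periods)   # density gain after whole doublings
print(round(periods, 2), density_factor)  # 8.67 256
```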

    • So, who uses FPGAs in a big way? Whom is this likely to affect?
      • by quarrel (194077) on Wednesday January 17, 2007 @10:03AM (#17646310)
        Xilinx is the world's largest producer of FPGAs.

        Their biggest customer? Cisco. (by far)

        The big iron routing guys use heaps in high end devices.

        --Q
      • by TheRaven64 (641858) on Wednesday January 17, 2007 @10:05AM (#17646340) Journal
        Anyone who wants a low-volume run of custom chips. For runs up to a few thousand, FPGAs are cheaper than ASICs (and have the advantage of being firmware-upgradable). If you don't need latest-process speed or power efficiency then FPGAs are likely to be good enough. Take a look here [xilinx.com] for some of the people who use them.
      • See http://wiki.duskglow.com/tiki-index.php?page=Open-Graphics [duskglow.com].
        The development board is going to use an FPGA, because a custom chip design would be too expensive. For later, they plan to produce it as an ASIC to improve the price/performance ratio. With better FPGAs, they could stick to the FPGA for the end-user version, which would help to reduce investment costs.
        Quote about the ASIC design:

        RTL for the ASIC will be released under a dual license (GPL and proprietary) There will be a time-delay on some parts (to

      • Re: (Score:2, Insightful)

        by zeldor (180716)
        there are lots of uses for FPGAs in radar processing, image recognition, you can even do small
        floating point kernels REALLY fast on FPGAs if done correctly.
        granted on most of them you have to know verilog or vhdl to use them, but there are a couple
        companies that have fully functional C/Fortran programming environments that take it all
        the way down onto an FPGA. using those, general codes can run faster on FPGAs.
        plus they are really low power. a room full of general computers running a teraflop
        takes large amo
        • granted on most of them you have to know verilog or vhdl to use them

          JHDL [jhdl.org] is my favorite alternative to these languages. Rather than embedding the behavior in the language itself (which I personally think is the source of most confusion and poor HDL design) JHDL provides you with Java APIs that can be used to construct the circuit.

          It works surprisingly well, in part because circuit design is more object oriented to begin with. Just like in good OOP design, you want your circuits to be simple, black-box desig

      • by Manchot (847225)
        FPGAs are sweet. In addition to what the sibling posts have said, FPGAs are great for prototyping, because programs running on them can be implemented so quickly and easily. Finite state machines are a cinch on FPGAs, which makes them perfect for embedded systems. Plus, when programming them, there's the added benefit that you don't have to worry about the complexity of an actual processor or microcontroller: no stacks, no instruction sets, no interrupts, etc. Obviously it comes with a trade-off of processi
    • I entirely agree. Moore's law is about general ICs, not FPGAs.

      But that does not mean this is insignificant. FPGAs are extremely useful in many applications, but cost and transistor count hold it back for a lot of applications. An increase in transistor density by 3 orders of magnitude is significant enough that it could make FPGAs a viable option for a lot more people.

      Too bad the article made no mention of the effect on cost ;)
    • So they are not beating Moore's law, they improve chip space use in FPGAs to become similar to what today's dies with fixed routing achieve

      Agreed, but if the article were titled "HP Enables Increase in FPGA Logic Density", it would have never made it to a slashdot headline.
    • Roughly, with other advancements... multi-cores, etc., we should keep pace with Moore's law. It is a rather stupid suggestion. Every time one of these stories comes along they always suggest they are beating Moore's law, when really they just keep pace.

      New Wammy Co. method for silicon fab... this is going to double the speed of our computers and crush Moore's Law! It should be on the market about 18 months from now!
  • Why a law (Score:5, Insightful)

    by gravesb (967413) on Wednesday January 17, 2007 @09:55AM (#17646202) Homepage
    I never understood why it was called a law. It was an incredibly accurate prediction, but there was nothing holding it there. I would think that any dramatic increase in technology would lead to a jump larger than Moore's law.
    • Re: (Score:2, Informative)

      by fitten (521191)
      It's a prediction and actually a self-fulfilling one, to some degree. In fact, it's as much, or more, about economics than technology. If you look, the original wording even states "cost". Upgrade too fast and you'll go broke because people won't upgrade with you that fast (they'll start skipping 'generations' in their upgrades).
      • by Vreejack (68778)
        Natural laws are based on observation. Moore's law has held to this observation (with some slight tinkering) for decades, now.

        Natural laws are not only useful for their predictive feature but for the fact that existence cries out for an explanation. The parent refers to a popular one: that Moore's law is a self-fulfilling prophecy because of social interactions. That probably makes it less reliable than g=-9.8ms^-2, but everyone seems to know that and knows how much faith to put in it.
    • by Colonel Angus (752172) on Wednesday January 17, 2007 @10:02AM (#17646280)
      Sounds better than Moore's Prediction?
    • by kabocox (199019)
      I never understood why it was called a law. It was an incredibly accurate prediction, but there was nothing holding it there. I would think that any dramatic increase in technology would lead to a jump larger than Moore's law.

      Shh, it's just a trend. It could have been wrong, or we could have hit a physical limit. One day we will. I like to think of Moore's law as more a goal post of the electronics industry. They have to double every 12-18 months because of Moore's law. Could this mindset actually work in oth
    • by Junior J. Junior III (192702) on Wednesday January 17, 2007 @10:45AM (#17646990) Homepage
      I'm waiting for the /. article in which it's announced that some school board has declared that Moore's "Law" is really only a Theory, and should be taught alongside "intelligent design" courses which demonstrate how highly specialized researchers and engineers colloquially known as "gods of tech" design and build denser integrated circuit chips using computer-assisted methodologies. These things don't manifest out of the ether, and they don't evolve themselves, people.
    • by mwvdlee (775178)
      It's called a law because of the way it is formulated.

      If it were written as "processing speed could increase two-fold about every 18 months in the foreseeable future", it would've been called a prediction. Since it is written in an unambiguous way, leaving no margin of interpretation, it's called a law.
    • by AndersOSU (873247)
      Because it's better than evolution?
    • by gr8_phk (621180)
      The original Moore's law defined the relationships between a number of parameters - dimensions and such - of a transistor. From this, one could easily "shrink" a transistor and get everything right. As such, it really was a sort of physical law. I don't recall how the doubling every 18 months came about (I think it was just an observation based on real data), but it was enabled by Moore's law and the two things have been used interchangeably ever since. Something like that anyway...
    • I never understood why it was called a law.

      Because Moore's law is like Moore's love: Hard and fast, and doubles every 18 months.

    • Just like the speed limit is a law. When the speed limit is 65, it doesn't mean that I can't go faster than 65. It just means that if I do, there may be consequences. Moores Law is kind of like that. It has become a self fulfilling prophecy. It has become the defacto standard for that industry. If you advance at less than that rate, you're in trouble. It seems pretty clear to me that 40 years ago, he set a realistic goal, which today everybody has accepted as the standard, just like we have accepted
  • Moore's Law (Score:2, Informative)

    by shirizaki (994008)
    http://en.wikipedia.org/wiki/Moore's_law [wikipedia.org]


    The number of transistors on an integrated circuit for minimum component cost doubles every 24 months.
  • by awing0 (545366) <adam@bad t e c h.org> on Wednesday January 17, 2007 @09:59AM (#17646244) Homepage Journal
    HP has research labs? Honestly, I thought they were an ink company. Damn, and I was getting quite used to mocking their "Invent" logo.

    • They need these chips to enforce bogus DMCA restrictions on their ink cartridges... oops, I meant make smarter ink carts that can keep track of how much ink is left.
    • I know you're joking, but back in the day, HP Labs division used to be awesome, where everyone at HP wanted to work, just like PARC used to be for Xerox. I'm glad to know that something still exists there, although at this point it's like the convulsive twitches of a cat that just got hit by a car.
    • by StikyPad (445176)
      Well they did spin off their hardware unit as Agilent [agilent.com] a few years back, which in turn spun off their semiconductor unit as Avago Technologies [avagotech.com], so yeah.. it's surprising they're doing semiconductor research, since the point of spinning off those units was to allow them to focus more exclusively on selling PCs and printers.
  • Moore's "Law" is actually a prediction that's been remarkably accurate.

    I think, though, that what's happening here is that employing the technology is causing positive feedback loops in the design and development of the technology, which is accelerating the improvement of the technology.

    It's only going to get faster from here. Human consciousness executing on "silicon" by 2030.

    Welcome to the singularity.
    • It has to be here by 2038...


      For the non-UNIX geeks, that's when 32-bit UNIX time runs out: the equivalent of Y2K, except much worse.
    • by TheWoozle (984500)
      Oh, really? Human brain activity is non-deterministic and sometimes unreliable. Exactly how does this translate to any kind of logic-based, deterministic system?

      The 'singularity' is a particularly foolish pipe dream.
      • by xtal (49134)
        Exactly how does this translate to any kind of logic-based, deterministic system?

        You can't connect a non deterministic system to a logic-based one? What happens when I program a computer?

        The brain is a pile of connected goo. Extremely well connected goo, but connected goo that we will either model the underlying principles of, or connected goo we will just clone verbatim in silicon. Resistance is futile.

        • by TheWoozle (984500)
          No. When you understand why we can't perfectly predict the weather, you will understand why we will never be able to replicate brain function.

          Resistance is not necessary, the "singularity" is a fairy tale.
          • Re: (Score:2, Interesting)

            by Stefanwulf (1032430)

            I understand why we can't predict the weather.
            I understand why we can't _predict_ brain function.

            I don't understand why that means we can't build a new brain that will simply remain equally unpredictable.
            Just because a system is chaotic doesn't make it impossible to construct.

          • by Bassman59 (519820)

            Resistance is not necessary, the "singularity" is a fairy tale.

            I think that Ray Kurzweil should go back to building keyboard synthesizers and stop yammering on about "singularities" and other such nonsense. It's embarrassing, really.

          • by nasch (598556)
            When you understand why we can't perfectly predict the weather, you will understand why we will never be able to replicate brain function.
            You're implying the human brain is a chaotic system. Is that true? I've never heard it described that way before. Secondly, you're saying that since we cannot predict the outcome of a chaotic process, that means we cannot produce a system that works in the same way as an example chaotic system? How do you figure?
      • by spun (1352)
        Prove that human brain activity is non-deterministic. I have a good friend who is getting his PhD in neuroscience, and from conversations I've had with him, I'd say it's pretty damn deterministic. Human consciousness does not exist outside the laws of nature. It is not a special type of process, unlike any other. It is as amenable to simulation as any other process in the universe, and like any other process, it can be modelled to any arbitrary level of verisimilitude by throwing more computational power at
        • by TheWoozle (984500)
          Wow, what a lot of arrogant presumption in a single post! I never said anything about humans being special, my particular view of human consciousness, etc.

          I will kindly ask you to leave emotional knee-jerk reactions out of a perfectly reasonable discussion. Thank you.

          Have a pleasant day. :-)
          • by spun (1352)
            It's perfectly reasonable to call someone's ideas a fairy tale and a pipe dream based on a completely flawed understanding of the physical world and human consciousness? You have a very different definition of 'reasonable' than most. If not for the reasons I assumed, why DO you have such a strong emotional reaction to the idea of reproducing human consciousness in another medium? Barring a decent explanation from you, I'm going to stick with my assumptions, arrogant though they may be. I've talked with many
        • by operagost (62405)

          One word for you: Heisenberg.

          Have a nice day! Hope I didn't determine your mental state!
          • by spun (1352)
            Jebus, man! How does Heisenberg enter into human consciousness? Can you show a quantum mechanical mechanism? Some people like Penrose think there has to be, but most people in the field (from conversations I've had, I could be wrong) think there are NO quantum mechanical processes that play a part in consciousness.

            However, even if there are such processes, they can be modelled. Ever hear of quantum computing?
            • by Decaff (42676)
              Jebus, man! How does Heisenberg enter into human consciousness? Can you show a quantum mechanical mechanism? Some people like Penrose think there has to be, but most people in the field (from conversations I've had, I could be wrong) think there are NO quantum mechanical processes that play a part in consciousness.

              Because every single electrochemical and chemical reaction is subject to quantum mechanics, and so to Heisenberg's uncertainty principle.

              However, even if there are such processes, they can be modelle
              • by spun (1352)

                Because every single electrochemical and chemical reaction is subject to quantum mechanics, and so to Heisenberg's uncertainty principle.

                Do you even know what this means? The wave function is deterministic. Only the collapse is not. Is it determined that the sun will rise tomorrow? By any sane definition of the word, yes. The original poster was trying to make the strong argument that modeling human consciousness is impossible. Not only is that not provable, you can't even prove that we are not now running

            • by crgrace (220738)
              I'm a bit confused by your reasoning, spun. A few posts ago you were saying that Quantum Computing would someday enable us to model the Quantum state of each brain molecule, then you're saying NO quantum mechanical processes play a part in consciousness. First off, the complexity of quantum entanglement grows geometrically with the number of quanta in question. It is analogous to calculating the quantum probability density functions (PDFs) of electrons in an element. It's easy to do for Hydrogen, harder f
              • by spun (1352)
                I'm saying there are probably no quantum states to model, but if there are, quantum computing may let us model them. I thought Penrose was one of the ones arguing that we couldn't simulate the brain due to quantum effects? Didn't he originally present the argument you used?

                And actually, you may be interested to learn that, yes, though we are an unknown distance away from understanding consciousness, our understanding of the brain and various components of consciousness is growing by amazing leaps and bounds
        • by Decaff (42676)
          Prove that human brain activity is non-deterministic. I have a good friend who is getting his PhD in neuroscience, and from conversations I've had with him, I'd say it's pretty damn deterministic. Human consciousness does not exist outside the laws of nature. It is not a special type of process, unlike any other.

          Perhaps you could then, given the current understanding of neuroscience, explain qualia. Not so easy, is it?

          The point is not that human consciousness exists outside the laws of nature, just that our
          • by spun (1352)
            The concept of qualia is an interesting one, and the question of determinism in cognition is, as yet, unanswered. My friend thinks consciousness is a deterministic process, a view shared by many in his field. But we don't know, and we may never know.

            We could develop a simulation of human consciousness that had no internal experience at all, but presented the appearance of having one. People would download themselves and their friends would ask the download, "So how's it feel?" The simulation would answer in so
            • by Decaff (42676)
              As I understand it, determinism in quantum mechanics is a settled issue. The quantum wave function is deterministic. The collapse of said function is not. You may not be able to tell, instant to instant, where a particle is, but you know exactly how the quantum wave function will evolve.

              The problem is that quantum mechanics is incomplete, so any conclusions we draw from it are extremely suspect.

              In any case, indeterminism only applies at quantum scales. At macro scales, including the level at which neurons e
              • by spun (1352)
                The collapse doesn't produce randomness. If it did, then the wave function itself would not be deterministic, but it is.
                • by Decaff (42676)
                  The collapse doesn't produce randomness. If it did, then the wave function itself would not be deterministic, but it is.

                  Of course the collapse produces randomness. The great mystery of quantum mechanics is how the deterministic wave function collapses randomly.

                  If you don't believe the collapse produces randomness, perhaps you could point me to an equation that describes how we can predict where the result of a collapse (such as the precise location of a diffracted electron) will appear.
                  • by spun (1352)
                    Okay, that's a good point. But the major characteristic of randomness is that it is, well, random. So any source of randomness will do in our simulation. We don't have to simulate anything quantum at all. We just add randomness, after all, the collapse of the quantum wave function can't be adding information to the model. If it is non-deterministic, if it is random, it can't be adding anything coherent to consciousness. We can quantify how much randomness the effect is adding and add precisely that much ran
                    • by Decaff (42676)
                      That is a good reply. Unfortunately, quantum mechanics involves far more than simple randomness. What I am about to say may sound weird and even mystical, but it is solid physics. We really haven't the faintest idea what consciousness is, or what its role is in terms of quantum mechanics. We are trying to describe the production of consciousness involving what, at the lowest level, are quantum mechanical systems - this is a confusing mess. There is even good evidence that quantum mechanical systems are co
      • by vertinox (846076)
        Oh, really? Human brain activity is non-deterministic and sometimes unreliable. Exactly how does this translate to any kind of logic-based, deterministic system?

        The mind is made of chemicals reactions and organic material which is in turn made of energy and atoms.

        Atoms and energy must adhere to the laws of physics so they are deterministic.

        Otherwise either you have to assume that the brain does not have to follow the laws of physics since we live in an illogical universe, or that the mind is not because of the
        • by Decaff (42676)
          Atoms and energy must adhere to the laws of physics so they are deterministic.

          Woah! Hold it there. Who says that the laws of physics are deterministic? This has been a matter of heated (and unresolved) debate regarding atoms and subatomic particles for close to a century. You simply can't claim this.
          • by spun (1352)
            the quantum wave function is deterministic. If you know the state of the wave, you know EXACTLY how the wave function evolves. You don't know how it will collapse, but that doesn't matter, because no matter how it collapses, the evolution of the wave function itself is deterministic. Whether reality works that way or not is certainly open to debate.

            In any case, it doesn't matter. If consciousness is non-deterministic, we add in a random factor to our simulations. After all, non-deterministic means not deter
  • Math says: yes. (Score:5, Informative)

    by Just Some Guy (3352) <kirk+slashdot@strauser.com> on Wednesday January 17, 2007 @10:03AM (#17646312) Homepage Journal
    The mean value theorem shows that if the average rate is x, and the instantaneous rate ever goes below x, then it must necessarily also be above x sometimes. Put another way, progress will sometimes be faster than other times.
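
    A toy numeric illustration of that point (the per-period growth figures below are made up): if some periods come in below the average rate, others must come in above it.

```python
# Made-up per-period density multipliers; the long-run average is their
# geometric mean, so any below-average period forces an above-average one.
rates = [1.8, 2.2, 1.5, 2.5]
product = rates[0] * rates[1] * rates[2] * rates[3]
avg = product ** 0.25                # geometric mean, roughly 1.96
below = any(r < avg for r in rates)  # some periods slower than average...
above = any(r > avg for r in rates)  # ...so some must be faster
print(round(avg, 2), below, above)   # 1.96 True True
```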
  • 6 to 1 (Score:3, Insightful)

    by guysmilee (720583) on Wednesday January 17, 2007 @10:05AM (#17646330)

    As a rule of thumb I was told ... an FPGA normally uses 6 gates to 1 gate used by a custom ASIC chip ... so a 5 million gate chip would require an FPGA with 30 million gates ...

    This may have changed over the years ... but I'd like to know how this announcement changes this heuristic ...

    • As a rule of thumb I was told ... an FPGA normally uses 6 gates to 1 gate used by a custom ASIC chip ... so a 5 million gate chip would require an FPGA with 30 million gates ...

      Pardon me if I'm speaking out of turn, but don't you mean transistors, not gates? In theory, the gate count should remain the same between the two, with most differences being accounted for by designing gates out of different gates. (e.g. Using NAND to create all other gates.) Or are you referring to a formula for translating the FPGA

      • Re: (Score:2, Informative)

        by guysmilee (720583)

        Not true ... because of timing requirements ... if one gate is used it may rule out using others because of how the gates are connected ... i.e. picking one gate and one route may not allow certain gates to be connected ... so the 6 to 1 ratio refers to "wasted gates" ... I believe. This is because all gates are not all directly connected to each other ...

        If this new technology allows more routes ... i believe you will get less gate waste ...

        I am just a software dev ... so i could be wrong though ... bu

        • i.e. picking one gate and 1 route may not allow certain gates to be connected ... so the 6 to 1 ratio refers to "wasted gates" ... I believe.

          I see what you mean. Generally, FPGA devs are always talking about reworking your design to eliminate as many wasted gates as possible. (The ISE tools help with this, IIRC.) Xilinx claims that their compilers are smart enough to rework your design automatically for a high rate of utilization.

          Of course, proper utilization is partly a function of which FPGA you use. Mos

        • i.e. picking one gate and 1 route may not allow certain gates to be connected ... so the 6 to 1 ratio refers to "wasted gates" ... I believe. This is because all gates are not all directly connected to each other ...

          FPGAs use lookup tables to simulate gates: See here for a description of a basic Configurable Logic Block [wikipedia.org]

          If this new technology allows more routes ... i believe you will get less gate waste ...

          This is true. However, it is more important than simply wasting gates. Performance of an FPGA is
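
          For reference, the lookup-table trick is easy to sketch: an n-input LUT is just a 2^n-entry truth table addressed by the input bits, so one LUT configuration can stand in for any n-input gate. A minimal Python illustration (my own sketch, not any vendor's actual architecture):

```python
# A software model of an FPGA lookup table (LUT): the "configuration" is
# the truth table itself, and evaluating the LUT is a table lookup.
def make_lut(truth_table):
    """Return an n-input logic function backed by a 2^n-entry truth table."""
    def lut(*bits):
        index = 0
        for b in bits:                  # pack the input bits into an index
            index = (index << 1) | b
        return truth_table[index]
    return lut

# Configure a 3-input LUT as a NAND of all three inputs:
nand3 = make_lut([1, 1, 1, 1, 1, 1, 1, 0])  # only input 1,1,1 yields 0
print(nand3(1, 1, 1), nand3(1, 0, 1))       # 0 1
```

The same `make_lut` call with a different table would give XOR, majority, or any other 3-input function, which is the sense in which a LUT "simulates" gates.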

          • Re: (Score:2, Interesting)

            by imgod2u (812837)
            This is only true of some FPGAs. Xilinx, in particular, uses look-up tables to simulate logic (along with dedicated flip-flops). Actel, however, has a fine-grain architecture that uses basically a matrix of configurable (solid-state, flash-based) 3-input, 1-output tiles that very much resemble gates. Upon configuration (done once), a high voltage (higher than normal core or IO voltage) is applied and fuses the interconnects in these tiles to behave like the particular gate/flip-flop it's supposed to be h
        • Re: (Score:2, Informative)

          by imgod2u (812837)
          No. Contrary to popular belief, ASICs don't utilize all of the gates they have either. There are limitations (even more so) in ASIC-land where you only have so many metal layers on top of your silicon to route your interconnects. Granted, a human being laying it out by hand is much better than an auto-router, but there will still be waste. The same is true of an FPGA and the general rule is that you never utilize more than 70-80% of your available logic resources. This way, there is some flexibility th
  • 2008 (Score:5, Insightful)

    by mastershake_phd (1050150) on Wednesday January 17, 2007 @10:05AM (#17646332) Homepage
    HP Engineers Defy Moore's Law, New Nano-Chip Prototype in 2008

    They haven't even made a chip yet.
    • They're planning on breaking the inverse Moore's law, which states that:

      "If a tech company announces a big breakthrough, which they claim will be available to consumers in 18-24 months, then the probability of the breakthrough becoming vaporware will approach 1."
  • by creimer (824291) on Wednesday January 17, 2007 @10:14AM (#17646462) Homepage
    Maybe HP should focus on beating the illegal wiretapping case before they break another law? They're not Microsoft, you know.
  • What? What? (Score:5, Insightful)

    by Mike1024 (184871) on Wednesday January 17, 2007 @10:18AM (#17646518)
    OK, the actual paper's here [iop.org] (full text freely available).

    As far as I can tell this has nothing to do with standard processors and everything to do with FPGAs.

    It seems what they propose is: Instead of the FPGA configuration bits being done with gates on the silicon wafer, why not perform configuration by configuring the metal-to-metal interconnects? After all, if the metal layers are thick compared to the interconnects between them, you can blow connections you don't need like blowing a fuse. By removing the FPGA configuration bits from the silicon wafer, they can save a lot of space, leading to higher speeds and lower costs.

    They have a clever way of arranging such a system, which should be easy to fabricate.

    What Moore's law is supposed to have to do with this I don't know.

    Michael
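
    The blow-the-connections-you-don't-need idea can be modeled as a toy crossbar (my own illustration of the general fuse-style configuration concept; HP's actual FPNI scheme is far more involved):

```python
# Toy model of fuse-style configuration: start with every crosspoint in
# a crossbar connected, then permanently remove the junctions the design
# doesn't need, like blowing a fuse.
class Crossbar:
    def __init__(self, rows, cols):
        # all metal-to-metal junctions start out connected
        self.connected = {(r, c) for r in range(rows) for c in range(cols)}

    def blow(self, r, c):
        """Permanently disconnect one junction."""
        self.connected.discard((r, c))

    def routes(self, r):
        """Columns still reachable from row r after configuration."""
        return sorted(c for (rr, c) in self.connected if rr == r)

xbar = Crossbar(2, 3)
xbar.blow(0, 1)         # this design doesn't need row 0 -> column 1
print(xbar.routes(0))   # [0, 2]
print(xbar.routes(1))   # [0, 1, 2]
```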
    • As far as I can tell this has nothing to do with standard processors and everything to do with FPGAs.

      It seems what they propose is: Instead of the FPGA configuration bits being done with gates on the silicon wafer, why not perform configuration by configuring the metal-to-metal interconnects? After all, if the metal layers are thick compared to the interconnects between them, you can blow connections you don't need like blowing a fuse. By removing the FPGA configuration bits from the silicon wafer, they

    • What Moore's law is supposed to have to do with this I don't know.

      Hey, at least they correctly identified Moore's Law as having something to do with the number of transistors on a chip, and not CPU clock speed or some other factor which contributes to performance but was never spoken to by Moore himself.

      • by Mike1024 (184871)
        True that - I was putting together the obligatory Moore's law graph for a presentation I was doing and there are about 50 million different interpretations. Is it about channel lengths? Or gate area? What about DRAM cell pitch? Transistor density? Transistors per chip? Transistors per dollar? Clock speed? Clock speed per dollar? Calculation speed per dollar? Does reducing or increasing die area count? Can Moore's law be extrapolated to include relays and vacuum tubes? Should it be? Is Moore's law a doubling
  • It's not like Moore's Law is a law of physics (like the speed of light). It's more like an observation.
  • Of course (Score:4, Funny)

    by Billosaur (927319) * <wgrother@OOOopto ... inus threevowels> on Wednesday January 17, 2007 @10:18AM (#17646532) Journal

    If they wait for it in a dark alleyway with a lead pipe and stay very, very quiet...

  • by Stevecrox (962208)
    I thought FPGAs were a common microcontroller that *could* be altered to run as a microprocessor. You can configure FPGAs to run as a micro-controller, and you can get microprocessors to act like a microcontroller, but they are not the same thing. Most FPGAs run at far lower clock frequencies and far lower transistor densities when compared to your desktop CPU. This isn't because one is better than the other; it's because they are designed for different purposes. Getting more transistors on a chip is great f
    • by AKAImBatman (238306) * <.akaimbatman. .at. .gmail.com.> on Wednesday January 17, 2007 @11:14AM (#17647394) Homepage Journal
      The largest FPGA I have been taught about (and gotten to use) had 22,000 transistors on it, I thought your average CPU was supposed to have billions.

      You are seriously behind the times, my friend. Xilinx's smallest offerings provide ~20,000 gates, while their largest pack millions of gates onto a chip of over 1.1 billion transistors [sda-asia.com].

      22K transistors is solidly inside CPLD territory these days. :)
    • by greenrom (576281) on Wednesday January 17, 2007 @11:15AM (#17647406)
      FPGAs are not microcontrollers. They are programmable logic devices. You can use an FPGA to implement a microcontroller, a microprocessor, or any other logic device.

      You probably wouldn't be able to put the latest Xeon processor on an FPGA, but to say that they are far slower and smaller than modern processors is inaccurate. There are plenty of FPGAs that can handle signals in excess of 1GHz, and a 22,000 transistor FPGA is a VERY small FPGA.

      Many custom chips, including custom processors, are first developed and tested on FPGAs before they become ASICs. In fact, you can give your FPGA design files to an IBM or a TI, and they'll gladly turn it into an ASIC for you -- for a fee. Often, FPGAs are used in designs without ever going to an ASIC. Generally, the only reason you build an ASIC is that the per-chip cost is much cheaper. Heat and performance are usually secondary considerations. There is, however, a big up-front cost to doing an ASIC, so for low-volume parts or designs that might need to be upgraded or fixed later, FPGAs are generally the better option.

      There's also a middle ground -- so called "hard copy" FPGAs. This is when you give your design files to Xilinx or Altera with a big check, and they sell you special FPGAs that are guaranteed to work with your design (but not necessarily other designs). In exchange, you get the chips a lot cheaper and they can also disable parts of the chip your design doesn't use to reduce power consumption. The FPGA manufacturers benefit by being able to sell chips that would otherwise be defective but are suitable for certain designs.
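      The FPGA-versus-ASIC cost tradeoff described above is easy to sketch as a break-even calculation; the dollar figures below are purely hypothetical placeholders, not real NRE or unit prices:

```python
def break_even_units(asic_nre, asic_unit_cost, fpga_unit_cost):
    """Volume at which total ASIC cost (NRE + units) drops below total FPGA cost.

    Illustrative only: real NRE, unit prices, and yield vary enormously.
    """
    if fpga_unit_cost <= asic_unit_cost:
        raise ValueError("ASIC never pays off if its unit cost is not lower")
    # Solve asic_nre + n * asic_unit_cost == n * fpga_unit_cost for n
    return asic_nre / (fpga_unit_cost - asic_unit_cost)

# Hypothetical: $500k up-front ASIC cost, $5/chip ASIC vs $50/chip FPGA
units = break_even_units(500_000, 5, 50)
print(f"ASIC pays off above ~{units:,.0f} units")
```

      Below the break-even volume the FPGA wins on total cost even at a much higher per-chip price, which is the parent's point about low-volume and upgradeable designs.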
      • For those of you who didn't quite follow greenrom's excellent (but rather technical) post, he's basically saying that doing a task in hardware is faster than doing it in software. What FPGAs allow you to do is to create nearly any form of hardware device just by uploading a new design. While you can use this ability to create a new CPU, it's likely to be much slower than a regular CPU. Thus FPGAs are more useful for hardware like network routers, graphics chip research, codecs, and other highly specialized
  • One of the reasons that Moore's law has so accurately predicted the continual doubling of storage and speed is that it offers companies an excellent guideline for product roll-out. It's a self-fulfilling prophecy. Customers expect computers to get more-bigger-better-faster at that rate, so companies have a production target. That provides a much more stable product ecosystem than one that is marked by a punctuated equilibrium of sudden large advances followed by unpredictable periods of status quo.
    • by khallow (566160)
      Shared expectations aren't "marketing hype," especially expectations that are proven out over more than four decades. I think of it more as a development model that does what you say it does.
  • As if HP's shit didn't run hot enough as it is.

    They really need to focus on better cooling before they go anywhere. Damned laptops overheat daily because of the crap cooling systems in them.
  • by virtigex (323685) on Wednesday January 17, 2007 @11:11AM (#17647348)
    Those scoundrels at HP are doing it again. They probably managed to do this by tapping Moore's phone line or something.
  • The Open Graphics Project http://lists.duskglow.com/mailman/listinfo/open-graphics [duskglow.com] is an attempt to make an open-source-hardware graphics card, so that we don't have to wrestle with companies like nVidia (ok, Intel) or ATI (ok, AMD) to get decent open-source drivers.

    The OGP cards use FPGAs, which is the technology that HP's work (hopefully) innovates. I wonder if this advance will make OGP's cards much more competitive with nVidia/ATI cards? Heck, maybe HP would even consider showcasing its technolog

  • The expected technology leap is given as a difference from this trend:

    International Technology Roadmap for Semiconductors [itrs.net]

    You can read more about it at the ITRS website. [itrs.net]

    A quick scan of the website reveals this interesting image [itrs.net]. The observant will note that, given the current news, progress is already ahead of their curve.

  • Soon we will have even faster, smaller prototype use graphics calculators with horrible user interfaces! SWEET.
  • by hotdiggitydawg (881316) on Wednesday January 17, 2007 @11:48AM (#17648028)

    A number type of nano-scale architecture developed...
    Mi scusi? No habla Engrish... Seriously Taco, got editing skills? The whole summary is a direct cut-and-paste of the first paragraph of TFA, grammatical errors and all. Perhaps "A number of types of nano-scale architectures developed..." would've made more sense.
  • Suuuurre... HP "invented" a breakthrough in new chip design that will launch them 3 generations ahead. We all know they have just been studying that chip from a certain android that came back from the future. Soon they will announce AI, then SkyNet will launch and begin to take over, and then we will have a nuclear holocaust and will be fighting the very machines we invented!! Then the earth will be crawling with robots that have thick Austrian accents and like to shoot people. Destroy the chip now before i
  • Else they are slower than Moore's Law in 18-month doublings.
  • I must be hungry, because when I saw that this story was tagged with "mooreslaw", I thought, "Mmm, that sounds good. Is that anything like coleslaw?".

Lo! Men have become the tool of their tools. -- Henry David Thoreau
