Hardware

The Year 2004 in Microprocessors (94 comments)

DeanMan writes "From spintronics to clockless CPUs, 2004 was a year of progress and research in the microprocessor industry. As a way to transition into the new year, this article offers a month-by-month look at the highlights of the 2004 microprocessor timeline."
  • by Anonymous Coward
    I can't believe you'd post this story and the year isn't even out yet.
  • Clockless CPUs (Score:2, Interesting)

    by koreaman ( 835838 )
    How does that work? Someone enlighten me, please.
    • Re:Clockless CPUs (Score:2, Informative)

      by Wiser87 ( 742455 )
      There's an interesting article about it here [geek.com].
    • Re:Clockless CPUs (Score:5, Informative)

      by Spitfire75 ( 800119 ) on Friday December 31, 2004 @05:12PM (#11230490)
      From TFA: http://www.geek.com/news/geeknews/2004Nov/bch20041104027700.htm

      Asynchronous processors are capable of allowing each of their units to run independent of a global synchronizing clock, saving the power consumption--not to mention the design life cycle--of a complicated and usually power-hungry clock route scheme. The clock is increasingly the source of a large amount of power consumption, because of both the increasingly long relative wire length and the buffers (extra gates) required to repeat the signals in high-clock-speed devices. Obviously, the elegance of this low-power design comes at a cost, in fact a barrier cost to high-volume manufacturers. First of all, there is a great reliability issue for high-speed devices. No clock means potential race conditions and other performance/functional conflicts.
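
      For a rough sense of why the clock network weighs so heavily in the power budget, here is a back-of-the-envelope dynamic-power sketch. All capacitance, voltage, and activity numbers below are invented for illustration, not measurements of any real chip:

      ```python
      # Classic CMOS dynamic power: P = alpha * C * V^2 * f.
      # The clock network toggles every single cycle (alpha = 1), while a
      # typical logic node switches in only a fraction of cycles, so the
      # clock tree's share of total power is outsized. Illustrative numbers.
      V = 1.2               # supply voltage, volts (assumed)
      F = 3.0e9             # clock frequency, Hz (assumed)

      CLOCK_CAP = 2.0e-9    # clock tree + buffer capacitance, farads (guess)
      LOGIC_CAP = 20e-9     # total switched logic capacitance, farads (guess)
      CLOCK_ALPHA = 1.0     # the clock toggles every cycle
      LOGIC_ALPHA = 0.1     # assume ~10% of logic nodes switch per cycle

      def dyn_power(alpha, cap, volts, freq):
          """Dynamic switching power in watts."""
          return alpha * cap * volts ** 2 * freq

      p_clock = dyn_power(CLOCK_ALPHA, CLOCK_CAP, V, F)
      p_logic = dyn_power(LOGIC_ALPHA, LOGIC_CAP, V, F)
      share = p_clock / (p_clock + p_logic)
      print(f"clock tree: {p_clock:.1f} W, logic: {p_logic:.1f} W, "
            f"clock share: {share:.0%}")
      ```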
    • by tloh ( 451585 ) on Friday December 31, 2004 @05:15PM (#11230516)
      It implements the refinement of a 10-year-old technology [biol.rug.nl] invented in Belgium.
    • Re:Clockless CPUs (Score:2, Informative)

      Asynchronous CPUs: different parts of the CPU run at their own speed, and the results are co-ordinated in other ways (i.e. without lockstep execution across the whole chip).

      This is much more complicated to design and to mass-produce, but the power savings may make it worthwhile.
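
      A toy illustration of "co-ordinated in other ways": asynchronous units typically exchange data through request/acknowledge handshakes rather than a shared clock edge. The sketch below mimics that with threads and a one-deep queue; it is purely conceptual, since real asynchronous hardware uses handshake signals and Muller C-elements, not software queues:

      ```python
      # Two "units" run at their own (random) speeds with no shared clock.
      # A one-deep channel models a request/acknowledge handshake: put()
      # blocks until the previous token has been taken, i.e. acknowledged.
      import queue
      import random
      import threading
      import time

      channel = queue.Queue(maxsize=1)

      def fetch_unit(n):
          for token in range(n):
              time.sleep(random.uniform(0.01, 0.05))  # unit-local delay
              channel.put(token)                      # request: offer data

      def execute_unit(n):
          for _ in range(n):
              token = channel.get()                   # wait for data, not a clock
              time.sleep(random.uniform(0.01, 0.05))
              print(f"executed token {token}")

      N = 5
      producer = threading.Thread(target=fetch_unit, args=(N,))
      consumer = threading.Thread(target=execute_unit, args=(N,))
      producer.start()
      consumer.start()
      producer.join()
      consumer.join()
      ```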
    • The very first CPUs were designed without a clock. However, the introduction of a clock made motherboard design far simpler, at the expense of flexibility on the chip itself. Now that technology has progressed as far as it has, the original clockless CPU design is more viable.

      I for one am glad to be rid of our GHz overlords.

  • FPGAs? (Score:3, Interesting)

    by paithuk ( 766069 ) on Friday December 31, 2004 @05:07PM (#11230463) Homepage
    No mention of FPGAs?
    • Re:FPGAs? (Score:2, Interesting)

      by stratjakt ( 596332 )
      FPGAs have been around since the '70s; what's new about them?
      • FPGAs have been around since the '70s; what's new about them?

        AFAIK it's the late '80s... sources? SPLDs do not count, naturally.

        Regarding the original poster: if you have been following recent developments in FPGAs, you may have noticed that they moved from niche logic replacement to mainstream SoC during the last two years.
  • So.... (Score:2, Insightful)

    IBM is a news source now, eh?
    • Given my initial reaction, which was along the same lines as yours, I looked down at the author's biography at the bottom:

      Kane Scarlett is a technology journalist/analyst with 20 years in the business, working for such publishers as National Geographic, Population Reference Bureau, Miller Freeman, and International Data Group and managing and editing for such journals as JavaWorld, LinuxWorld, DV Magazine, NC World, and of course, developerWorks.

      Seems legit? Obviously he is carried on the IBM website, w
      • Re:So.... (Score:2, Insightful)

        True. Google shows a lot of other articles by him, in other places. But, while I like IBM, linking to stories like this sets a precedent I don't like. Most companies will flat-out lie to make themselves look better.

        And... Kane? Did anyone else look at his name and picture and think of Command & Conquer?
  • by paithuk ( 766069 ) on Friday December 31, 2004 @05:19PM (#11230536) Homepage
    Intel's plan all along has been to reduce the size of their pipeline stages in order to increase the possible clock rate. However, with the halt of the 4GHz processor and their newfound interest in multicore chips, it'll be interesting to see how they compare with FPGAs in the coming years, since each offers what the other is looking for: Intel wants to be parallel, and FPGAs want to be sequentially quicker. The difference is that Intel has been researching how to be quicker for far longer than FPGAs have existed, so the guys at Xilinx shouldn't have too much difficulty following Moore's Law, whereas Intel may have more difficulty expanding into multiple cores since their chips are already huge. Who will win out in the end? Will Intel start snatching up companies like Celoxica and Xilinx in the coming years?
    • Intel's plan all along has been to reduce the size of their pipeline stages in order to increase the possible clock rate.
      Unless this is a new plan, the word reduce should be increase.
      • No, actually it should be reduce... each stage is smaller --> more overall stages. Witness Prescott: it has ~10 more pipeline stages than Northwood did, and each one does less (and hence can be clocked faster).
        Intel is now moving away from this since the performance gains just aren't there and the power consumption is getting terrible (as noted in the clockless posts... you must distribute the clock to all those stages, among other power-sucking things).
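
        To make the parent's trade-off concrete, here is a toy pipelining model: total logic work is fixed, each extra stage shortens the per-stage logic delay, but every stage pays a fixed latch-and-skew overhead. The stage counts roughly match Northwood (~20) and Prescott (~31); the delay numbers are invented for illustration:

        ```python
        # Toy model: clock period = (total logic delay / stages) + per-stage
        # overhead (latch setup + clock skew). More stages -> higher frequency,
        # but overhead eats a growing share of each cycle. Invented numbers.
        TOTAL_LOGIC_NS = 10.0   # total combinational work, ns (assumed)
        OVERHEAD_NS = 0.15      # latch + skew overhead per stage, ns (assumed)

        for stages in (10, 20, 31):   # Northwood ~20 stages, Prescott ~31
            period_ns = TOTAL_LOGIC_NS / stages + OVERHEAD_NS
            freq_ghz = 1.0 / period_ns
            overhead_share = OVERHEAD_NS / period_ns
            print(f"{stages:2d} stages: {freq_ghz:4.2f} GHz, "
                  f"overhead is {overhead_share:.0%} of each cycle")
        ```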
    • >...it'll be interesting to see how they'll compare with FPGAs in the upcoming years since both offer what the other is looking for.

      How so? FPGAs operate at the digital logic level and CPUs operate at the machine language level. Either you have an FPGA emulating a CPU, in which case the basic inefficiency of an FPGA kills you, or you recompile every bit of software into its digital logic description to be used on some sort of uber-FPGA, in which case architectural and compiler problems kill you. The
  • Summary (Score:3, Funny)

    by af_robot ( 553885 ) on Friday December 31, 2004 @05:20PM (#11230541)
    Quick summary for slashdot's readers:
    1. AMD rocks (generally good, at least while their CPU prices are lower than Intel's and we can overclock cheap Athlons to save some $$$)

    2. Intel sucks (Pentium IV = really bad, Pentium M = good but pricey, but we still hate Intel today, because they are evil)

    3. IBM rocks (good boys, cause they support Linux and can beat SCO's ass)

    4. There are some other companies in the world, but we don't give a shit until Linux can run on their processors.
    • Re:Summary (Score:3, Insightful)

      by paithuk ( 766069 )
      Intel doesn't suck. AMD wouldn't be here today if it wasn't for Intel, and probably neither would the machine you're sitting at right now. Intel has some amazing guys working for them and has hosted brilliant minds in the past. Linux may be open source and beautiful in some respects, but it hasn't done for operating systems what Intel has done for processors. Come on, be serious, pal.
      • AMD wouldn't be here today if it wasn't for Intel

        By the same token, Intel probably wouldn't be here in their current form if it weren't for AMD and the other X86 clone manufacturers. The PC industry would be reluctant to stick with a CPU architecture that is only available from a single source. (In fact, Intel originally licensed AMD to produce 8086s because IBM insisted on having a second source as a condition for choosing the CPU for its new PCs.)

      • Actually, that was supposed to be a joke :)
        Intel doesn't suck, but they have made so many bad and stupid marketing decisions recently that Slashdot's community doesn't like Intel too much.
        Do you remember the "Pentium IV will make the Internet faster" marketing campaign? Or the gigahertz soap opera war? The CPU ratio lock? And so on.

        The amazing guys at Intel can do amazing things, but bad marketing can easily kill them.
        • Do you remember the "Pentium IV will make the Internet faster" marketing campaign?

          That's right up there with the 'Al Gore claims he invented the Internet' urban folklore, and of similar discussion value.

          If you've ever tried to connect to an even nominally multimedia-rich web site on a 386 or 486 box that has a broadband connection, you know what I mean.
      • AMD wouldn't be here today if it wasn't for Intel, and probably neither would the machine you're sitting at right now.

        Intel never having existed would certainly change history, but not necessarily for the better.

        If Intel had never existed, we might all be using DEC Alphas as our desktops/workstations/servers, which would be a good thing in many ways.

        I also think AMD (or some similar company) would exist without Intel, and would be cloning processors like the Alpha instead of x86.

      • paithuk wrote:

        Intel doesn't suck.

        Have you ever talked to anyone who has worked for Intel? You probably wouldn't be saying this if you'd ever worked there yourself (phrases like "meat-grinder" and "big brother" are pretty common from people who have).

        It's amazing that Intel has held on to its lead for as long as it has, considering how poorly it treats its employees.

        My impression is that the best people in the business are now working for AMD, because no one with any self-respect wants to work fo

  • by nizo ( 81281 ) * on Friday December 31, 2004 @05:20PM (#11230542) Homepage Journal
    The space savings in clockless CPUs are worth it, plus you don't have to keep winding them up all the time.
    • The space savings in clockless CPUs are worth it, plus you don't have to keep winding them up all the time.

      But HOW am I supposed to check the time with a clockless CPU?!

      They must first create some kind of clock coprocessor before releasing clockless CPUs to the market!
    • It was picking up the new 40lb+ self-winding computers to shake them daily that was getting to me.

      On the plus side, you should see how big my shoulders are!
    • From a pin-count point of view, some of the 'clockless' micros from Microchip are pretty cool. They even have a PIC or two in a four-pin package now.

      The parts have a regular synchronous clock inside, and are termed 'clockless' because they run on an internal RC oscillator and thus have no external 'clock' pins, so they aren't what is being referred to here. But from a hardware hacker's point of view, that's what first came to mind. Who'd have thought the processor could one day masquerade as a bridge rectifier?
      • If I'm not mistaken, the AVR MCUs don't require any external clock either.

        A "clockless design" really means a circuit in which you don't drive a clock signal down the pipeline, also known as an "asynchronous design".

        Tom
        • Yes, I know. But I thought some topic drift would be good to introduce, and why not drift off to something cool like 4-pin embedded controllers?

          This is supposed to be about microprocessors in 2004, and by volume, most of them are still either 4- or 8-bit parts, in the real world we live in (a world which differs from the fantasy world of people who've mastered the Phillips screwdriver and 'built their own computer from scratch', mind you).
  • Whoever read that article line by line and clicked all the links wins the Geek of the Year award for 2004.
  • by slashdot_nobody_nowh ( 845201 ) on Friday December 31, 2004 @05:42PM (#11230661)
    One should remember that clockless design poses two huge difficulties:

    1) verification (both logical and timing);
    2) in-chip noise.

    Clocking allows oscillations created by generating edges to fade out before the sampling edge. In clockless designs signals change whenever they want in a sense, so sampling may occur while the noise (parasitic oscillations) is still high, and wrong values will be stored or used.
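
    To picture the parent's point: after a switching edge, a signal rings before settling. A clocked design samples at the period boundary, after the ringing has decayed; sample too early and the value can land on the wrong side of the logic threshold. A minimal damped-ringing sketch, with all constants in arbitrary illustrative units:

    ```python
    # Damped ringing after a 0 -> 1 transition:
    #   v(t) = 1 - exp(-t / tau) * cos(omega * t)
    # Sampling before the ringing decays can read the wrong logic level.
    import math

    TAU = 1.0        # decay time constant (arbitrary units)
    OMEGA = 8.0      # ringing frequency, rad per unit time (arbitrary)
    THRESHOLD = 0.5  # logic threshold between 0 and 1

    def signal(t):
        return 1.0 - math.exp(-t / TAU) * math.cos(OMEGA * t)

    for t in (0.05, 0.4, 0.8, 3.0):
        v = signal(t)
        level = 1 if v > THRESHOLD else 0
        note = "ok" if level == 1 else "WRONG: sampled mid-ring"
        print(f"t={t:4.2f}: v={v:+.2f} -> sampled as {level} ({note})")
    ```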
    • In clockless designs signals change whenever they want in a sense

      Huh? Free Will for silicon?

      Not very likely. Stuff will be designed to work. Just because timing issues become different doesn't mean they cease to be a concern. Also, the 'noisiest' time in a lot of saturated logic circuits is the big 'thump' when the synchronous clock changes state.

      • Look, I tried to be short in my post.

        When I say signals change more or less whenever they want, I mean there is no generating clock to relate their changes to, and thus no way to avoid the interference created by signal changes.

        At modern frequencies, interconnects in chips behave to a large extent like transmission lines.

        I worked as a VLSI designer and STA (Static Timing Analysis) methodology engineer, so I hope I know what I'm talking about.
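
        A common rule of thumb for the transmission-line point: a wire needs transmission-line treatment once its one-way propagation delay exceeds roughly one sixth of the signal's rise time. A quick sketch using typical textbook values (the velocity and rise times are assumptions, not chip-specific data):

        ```python
        # Rule of thumb: treat a wire as a transmission line when its one-way
        # flight time exceeds ~1/6 of the signal rise time. As edges get
        # faster, even very short wires cross that threshold.
        SIGNAL_VELOCITY = 1.5e8  # ~half the speed of light in dielectric, m/s

        def needs_tline_model(length_m, rise_time_s):
            flight_time = length_m / SIGNAL_VELOCITY
            return flight_time > rise_time_s / 6.0

        for rise_time, label in ((10e-9, "10 ns edge (old)"),
                                 (50e-12, "50 ps edge (modern)")):
            for length_m in (0.002, 0.02):  # 2 mm and 2 cm wires
                verdict = ("transmission line" if
                           needs_tline_model(length_m, rise_time)
                           else "lumped wire")
                print(f"{label}, {length_m * 1000:4.0f} mm: {verdict}")
        ```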
    • Wow, I have not heard such a well-crafted but wholly incorrect, highly rated response on Slashdot in, oh, what, a day or two.

      I suggest you learn about clockless VLSI (AVLSI) before commenting on it. Check out the websites of the groups at Caltech and Manchester.

      I think you will find that asynchronous VLSI is actually *easier* to verify, less liable to problems with noise and power, more scalable, higher performance, and more flexible than today's dominant VLSI technologies.
  • Just think how much trouble a clockless CPU would have saved leading up to Y2K.
  • First post of MMV : ), according to Spanish time ----- Truman Burbank
  • This is an informative source of links about what happened in the last year, but the author almost sounds surprised that so much has happened in one year. I have always been of the belief that the rate of innovation has been increasing on a yearly basis. As impressed as I am with some of the things that have been developed this last year, I am not terribly surprised that this much has happened.
  • by AtariDatacenter ( 31657 ) on Friday December 31, 2004 @06:30PM (#11230893)
    Remember IBM's microprocessor history that was posted to Slashdot a week or so back? I have to say, this one is far more even-handed toward the competition. Quite a lot of mentions of SPARC, for the first time.

    But this one line cracks me up...

    American Technology Research predicts that Sun® and IBM® are well positioned to capture the 64-bit desktop market since both use the Opteron processor as an integral part of upcoming product lines and both have initiated flexible CPU roadmaps.

    Sun? IBM? Capture the desktop market? My, these folks at American Technology Research must be geniuses! Or is that genusi?

    FWIW, Sun has been doing 64-bit computing for quite some time now with the 64-bit SPARC chips it has been putting out for ages. But Sun Microsystems and IBM, masters of the 64-bit desktop? Oh boy.
    • Yer right. Heck, Sun and IBM's 64-bit boxes don't even run Windoze. And everybody knows Windoze is the ONLY future desktop...

      (actually, Microsoft and others are kinda banking on the general purpose 'desktop' ceasing to exist before long)
      • Yer right. Heck, Sun and IBM's 64-bit boxes don't even run Windoze. And everybody knows Windoze is the ONLY future desktop.

        Well, there is Linux. But IBM's OS/2 department called. They want their mindshare back.

        actually, Microsoft and others are kinda banking on the general purpose 'desktop' ceasing to exist before long

        I really am starting to see the case for a walled garden in PCs. My mother-in-law's computer, with AOL no less, became so clogged with adware that I don't see how she used it. Or worse,
    • But Sun Microsystems and IBM, masters of the 64 bit desktop? Oh boy.

      It's funny that an IBM story would mention Sun in the same positive breath as IBM, as Sun is IBM's biggest competitor in high-end computing. You'd think they'd mention another company instead.

      Actually, I found this line to be far more of an IBM advertisement:

      IBM has a great month: IBM releases POWER5-based eServers p5-520 and p5-550 and xSeries servers based on Intel Xeon processors; it showcases Open Blade Server initiative by introd

  • by AtariDatacenter ( 31657 ) on Friday December 31, 2004 @06:33PM (#11230904)
    "IBM debuts Cell processors, designed to be used in workstations, Sony PlayStations gaming consoles, and in Toshiba televisions. Programming the processor is said to be relatively easy."

    How much did they have to couch it? "Relatively easy?" "Said to be..." ?

    Translation for the technical crowd:
    "Programming a cell processor is hard."
  • by zymano ( 581466 ) on Friday December 31, 2004 @06:54PM (#11231005)
    A hybrid might ease us into all-optical chips.

    Why is everyone dropping this field? Quantum is way off in the distance, and so is spintronics.

    Using optical buses would reduce wiring complexity too.
  • 2004 timeline? (Score:1, Interesting)

    by Anonymous Coward
    2004 seems to continue for several more years.

    "November

    Plastic electronics start to be considered for more uses, and Infineon demonstrates a new technique in which two chips are sandwiched together and interconnect among hundreds of surface contact pads.

    ARM plans a design center in India. By 2008, China will knock Japan out of the top spot as consumer of chips.

    AMD sees a bright future, and signs a second fabrication partner to start in 2006."
  • by raptor21 ( 47540 ) on Saturday January 01, 2005 @01:13AM (#11232375)
    The article claims that Sun is outsourcing Niagara, a 65 nm part, to Fujitsu. This is absolutely false. Niagara is to debut in 2005-2006 on 90 nm technology according to Sun, not in 2007.

    http://blogs.sun.com/roller/page/jonathan/20040910

    Since the chip is already in Sun's labs, how can it be 65 nm? No fab, to my knowledge, is ready for 65 nm yet.

    http://aceshardware.com/read.jsp?id=65000293

    Also, Sun never claimed to outsource all chip manufacturing to Fujitsu. The article is based on blurbs from unreliable sources, for example geek.net.

    This is the second IBM article to claim that Sun is outsourcing all chip design and manufacturing to Fujitsu. Is this some sort of FUD IBM is trying to spread?
  • by MtViewGuy ( 197597 ) on Saturday January 01, 2005 @01:37AM (#11232426)
    And that is the real story of desktop computer technology in 2004.

    It's no longer how fast you can crank up the CPU speed; it's how fast the rest of the system runs. Look at what we have now on desktop machines:

    1. The development of faster motherboard interconnects with improved chipsets and things like HyperTransport and its competitors.

    2. The wide availability of PC3200 (DDR-400) DDR-SDRAM system RAM, with even faster RAM coming over the next 18-24 months.

    3. The development of AGP 8x and new PCI Express connections for graphics cards, with 3-D processing ability that would have been the domain of ultra-expensive workstations only a few years ago.

    4. The development of ATA-100/133 IDE, Serial ATA and soon Serial ATA-II IDE, and UltraSCSI 160/320 interfaces and 10,000+ RPM drives with 8 to 16 MB on-drive memory caches for very fast hard disk access. Even optical disk drives are benefiting from these faster interfaces.

    5. The very wide availability of 100Base-T Ethernet connections on most motherboards, plus some motherboards now sport 1000Base-T Gigabit Ethernet connections.

    6. The near-universal availability of USB 2.0 connections and increasing use of IEEE-1394 connections to external devices, which makes it possible to use external disk drives for backups and to connect digital camcorders.

    All of these developments have resulted in vastly faster computers in terms of overall speed even if you don't have the fastest CPU installed on the motherboard.
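
    The parent's conclusion is essentially Amdahl's law: overall speedup = 1 / ((1 - p) + p / s), where p is the fraction of time spent in the improved component and s is that component's speedup. A small illustration with an invented workload split:

    ```python
    # Amdahl's law: speeding up the component a workload actually waits on
    # beats speeding up the component it doesn't. Fractions are invented.
    def amdahl(p, s):
        """Overall speedup when fraction p of time is accelerated by s."""
        return 1.0 / ((1.0 - p) + p / s)

    IO_FRACTION = 0.6  # assume 60% of time waiting on disk/bus/RAM

    print(f"2x faster CPU:          {amdahl(1 - IO_FRACTION, 2.0):.2f}x overall")
    print(f"2x faster disk/bus/RAM: {amdahl(IO_FRACTION, 2.0):.2f}x overall")
    print(f"everything 2x faster:   {amdahl(1.0, 2.0):.2f}x overall")
    ```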
    • I disagree that these faster buses are so useful:
      - Using an ever-faster IDE/ATA bus yields little improvement when disk access time is not reduced at the same rate.
      - I remember a review showing that going from AGP 2x to PCI Express gives very little improvement (except for low-end solutions with shared-memory systems).

      Increasing bus bandwidth alone is nearly meaningless if the bus isn't the bottleneck in the first place!

      • Using an ever-faster IDE/ATA bus yields little improvement when disk access time is not reduced at the same rate.

        Why do you think they're putting larger memory caches in hard drives and speeding up spindles to 10,000 RPM on some Serial ATA drives? The 10K spindle speed will probably do much to make hard drive access faster, especially now with the faster interfaces available.

        I remember a review showing that going from AGP 2x to PCI Express gives very little improvement (except for low-end solutions with
        • Very few people have a 10K RPM disk, as they are very expensive and they're not going down in price. And speed and cache size within the disk have not really increased over the years for reasonably priced disks: we've been at 7200 RPM and 8 MB of cache for a long time; the only thing that changes is capacity (access time is also reduced year after year, even at a constant RPM, but the improvement is quite slow).

          As for the video benchmark, it was a benchmark made of games, of course, and it showed that the gains were really
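
          For context on the spindle-speed discussion above: average rotational latency is the time for half a revolution, so it falls directly with RPM. A quick calculation (seek time, typically several more milliseconds, is not modeled):

          ```python
          # Average rotational latency = half a revolution = 30 / RPM seconds.
          for rpm in (5400, 7200, 10000, 15000):
              latency_ms = 30.0 / rpm * 1000.0
              print(f"{rpm:5d} RPM: {latency_ms:.2f} ms average rotational latency")
          ```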
  • Asynchronous CPUs have been around since at least the mid-'60s. The GE-600/Honeywell 6000 series had several separate parts of the CPU that ran independently. IIRC, up to 5 instructions could be executed at the same time in different parts of the CPU.

"The following is not for the weak of heart or Fundamentalists." -- Dave Barry

Working...