Intel Hardware Technology

Intel's 4004 Microprocessor Turns 40 126

Posted by timothy
from the will-never-amount-to-anything dept.
harrymcc writes "On November 15th 1971, Intel introduced the 4004 — the first single-chip microprocessor. Its offspring, needless to say, went on to change the world. But first, Intel tried using the 4004 in a bunch of products that were interesting but often unsuccessful — like a pinball machine, an electronic vote-counting machine, and Wang's first word processor. Technologizer's Benj Edwards is celebrating the anniversary with an illustrated look back at this landmark chip." Here's another nostalgic look back at V3.co.uk, and one at The Inquirer. And an anonymous reader points out another at ExtremeTech, from which comes this snippet: "Designed by the fantastically-forenamed Federico Faggin, Ted Hoff, and Stanley Mazor, the 4004 was a 4-bit, 16-pin microprocessor that operated at a mighty 740KHz — and at roughly eight clock cycles per instruction cycle (fetch, decode, execute), that means the chip was capable of executing up to 92,600 instructions per second. We can’t find the original list price, but one source indicates that it cost around $5 to manufacture, or $26 in today’s money."
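The quoted throughput figure is easy to sanity-check. A quick sketch (the 10.8 µs nominal instruction cycle is the commonly cited datasheet figure, included here for comparison; it is where the 92,600 number comes from):

```python
# Back-of-the-envelope check of the quoted 4004 throughput.
clock_hz = 740_000          # 740 kHz clock
clocks_per_instruction = 8  # "roughly eight clock cycles per instruction cycle"

ips = clock_hz / clocks_per_instruction
print(f"{ips:,.0f} instructions per second")  # 92,500

# The oft-quoted 92,600 figure instead derives from the 4004's
# nominal 10.8 microsecond instruction cycle time:
print(f"{1 / 10.8e-6:,.0f} instructions per second")  # ~92,593
```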
  • by Anonymous Coward on Tuesday November 15, 2011 @10:22AM (#38059966)

    Oh wait, that was something else...

    • by necro81 (917438) on Tuesday November 15, 2011 @11:03AM (#38060544) Journal
      Dude, even if you are too tone deaf to notice the difference, I totally sample my music at 740 kHz - it's the only way to go in order to get clean sound. 16b / 44.1 kHz is for the poor soiled masses, and 24b / 96 kHz is for studio jerk-offs.
      • by BobNET (119675)

        740kHz? Sounds great if you're making eMPty3's for your iPod.

        I use at least 1.48MHz, 256-bit. And even then I can tell it's digital. It just doesn't have the warmth and rich sound of vinyl or that mix tape I made in 1993.

• 1.48 MHz, 256-bit? Wow! I can't believe you can hear anything through the distortion and aliasing at that rate!

          I sample with at least 2.4GHz and 2048-bit accuracy. And maybe, just maybe, I can get a basic approximation to what I get from my 1928 Victrola. Now, maybe that lack of sound quality is because I'm only using Monster cables for my monitors right now. The good news is that I'm on a waiting list to sell a kidney so I can get these [avland.co.uk] - a bargain at $21K for 3 meters! But I have to believe that the samp

          • Steve Martin would just tell you to do googolphonic sampling using a moon-rock needle.

      • by tlhIngan (30335) <slashdot@wor[ ]et ['f.n' in gap]> on Tuesday November 15, 2011 @12:27PM (#38061744)

        Dude, even if you are too tone deaf to notice the difference, I totally sample my music at 740 kHz - it's the only way to go in order to get clean sound. 16b / 44.1 kHz is for the poor soiled masses, and 24b / 96 kHz is for studio jerk-offs.

There's actually a sane reason for 96kHz sampling - filtering.

        Before any ADC, you need to put in an analog filter (anti-aliasing filter - basically it ensures that the signal going to the ADC is bandlimited below the Nyquist frequency).

        The problem with filters is that it's very hard to get the desired characteristics (flat response in frequency band, narrow transition band) without doing a ton of work (lots of parts, etc).

Using 44kHz or 48kHz sampling, it means if you want to capture to 20kHz, your filter must "brickwall" in the 2-4kHz region between 20kHz and 22/24kHz. This is extremely hard to do, and the end result is you usually get rolloff around 16kHz or so. Or the frequency response gets wilder and definitely not flat.

Using 96kHz or even 192kHz means the anti-aliasing filter can be much gentler and designed with more suitable characteristics. When you can have rolloff start at 20kHz and extend all the way out to 48/96kHz, you can make some very nice filters indeed - flat from 20Hz-20kHz, very little phase distortion, etc.

        The other end benefits as well - the antialiasing filter on the DAC side can be a lot nicer as well.

        And of course, the more bandwidth you have to play with, it means the filters are also much cheaper and simpler (and that also means less distortion).
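The parent's point can be made concrete with the standard Butterworth order-estimation formula. A minimal sketch (the 1 dB passband ripple and 60 dB stopband attenuation targets are illustrative assumptions, not figures from the comment):

```python
import math

def butterworth_order(f_pass, f_stop, ripple_db=1.0, atten_db=60.0):
    """Minimum Butterworth low-pass order that stays within
    ripple_db up to f_pass and is down by atten_db at f_stop
    (here, the Nyquist frequency)."""
    num = (10 ** (atten_db / 10) - 1) / (10 ** (ripple_db / 10) - 1)
    return math.ceil(math.log10(num) / (2 * math.log10(f_stop / f_pass)))

# Brickwall between 20 kHz and Nyquist at 44.1 kHz sampling:
print(butterworth_order(20_000, 22_050))  # 78th order

# Gentle rolloff between 20 kHz and Nyquist at 96 kHz sampling:
print(butterworth_order(20_000, 48_000))  # 9th order
```

An order-78 analog filter is wildly impractical, while an order-9 one is routine, which is exactly the "much gentler filter" argument above.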

        • by mcgrew (92797) *

          Using 44kHz or 48kHz sampling, it means if you want to capture to 20kHz, your filter must "brickwall" in the 2-4kHz region between 20kHz and 22/24kHz. This is extremely hard to do

          Huh? A simple bandpass filter does the trick. Higher sampling rate, higher frequency filter. You realize that there are radio frequency filters, which are far higher frequencies than sound?

The higher your sampling rate, the less aliasing you get. A 44.1 kHz sample rate has only three data points to describe a 15 kHz wave's shape. That

          • by bitrex (859228)

            Huh? A simple bandpass filter does the trick. Higher sampling rate, higher frequency filter. You realize that there are radio frequency filters, which are far higher frequencies than sound?

            The maximum achievable dynamic range of an ADC is determined by the bit depth of the converter. In practice, this dynamic range is limited by the steepness of the roll-off of the anti-aliasing filter - you can have a 16 bit converter, which implies 16*6 = 96 dB of dynamic range, but if your low pass filter only rolls off 12 dB before you hit the Nyquist frequency, your dynamic range will be limited to 12 dB. The OP is working under the assumption that the "final" sample rate, as indicated by the informat
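The "16*6 = 96 dB" figure is the usual rule of thumb; the slightly more precise textbook value for a full-scale sine wave is 6.02*N + 1.76 dB. A quick sketch comparing the two:

```python
import math

def dynamic_range_db(bits):
    # Rule of thumb used in the parent comment: ~6 dB per bit
    return 6 * bits

def sine_sqnr_db(bits):
    # Quantization SNR for a full-scale sine: 6.02*N + 1.76 dB
    return 20 * math.log10(2 ** bits) + 10 * math.log10(1.5)

print(dynamic_range_db(16))        # 96
print(round(sine_sqnr_db(16), 1))  # 98.1
```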

            • by bitrex (859228)
              Just noticed that you didn't specify what type of 15 kHz wave we're talking about, I assumed sine. Obviously a 44.1 kHz rate can't reproduce any arbitrary wave shape at 15 kHz, because by Fourier theory that arbitrary wave shape could contain frequency components well above Nyquist - even a 4 times greater sampling rate probably wouldn't be enough to reproduce a nice-looking square wave at 15 kHz, as you'd only be getting the fundamental and the first square wave Fourier term (3rd harmonic.)
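The harmonic argument is easy to check numerically. A square wave contains only odd harmonics (15, 45, 75 kHz, ... for a 15 kHz fundamental); a small sketch counting which of them a given sample rate can represent (note that at 4x 44.1 kHz the 5th harmonic actually fits under Nyquist as well):

```python
def representable_harmonics(f0, fs, max_harmonic=19):
    """Odd harmonics of a square wave at f0 that fall below fs/2."""
    nyquist = fs / 2
    return [n for n in range(1, max_harmonic + 1, 2) if n * f0 < nyquist]

print(representable_harmonics(15_000, 44_100))   # [1] - fundamental only
print(representable_harmonics(15_000, 176_400))  # [1, 3, 5]
```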
            • by mcgrew (92797) *

              The maximum achievable dynamic range of an ADC is determined by the bit depth of the converter.

              I completely forgot about dynamic range. Yes, the greater the bit depth the greater the range, and the bit depth does indeed have to match all the way through the system; a chain is only as good as its weakest link. But the dynamic range of CDs, greater than that of vinyl, isn't ever used that I've seen. In fact, the CD I made from Boston's first LP has more dynamics than the CD I bought (the band itself complaine

        • by bitrex (859228)
Almost all modern audio ADCs use oversampling to relax the requirements on the input analog anti-aliasing filter, regardless of what the final sample rate is going to be. The digital anti-aliasing filter still has to roll off before the Nyquist frequency of the final sample rate, but since it's much easier to construct good filters with sharp stop bands in the digital realm, I'd think what sonic advantages you get from brickwalling between 20 kHz and 24 kHz vs. 44 to 48 kHz would be debatable.
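The filter-then-downsample structure described above can be sketched with a deliberately crude example (real converters use much sharper digital filters than a boxcar average, but the shape of the computation is the same):

```python
def decimate_by_4(samples):
    """Crude 4x decimator: boxcar (moving-average) low-pass,
    then keep every 4th sample. The low-pass must run before
    the downsampling step, or the discarded content aliases."""
    filtered = [
        sum(samples[i:i + 4]) / 4
        for i in range(0, len(samples) - 3)
    ]
    return filtered[::4]

# A 4x-oversampled "signal": a constant plus alternating noise
# that the boxcar averages away before the rate is reduced.
oversampled = [1.0 + (0.5 if i % 2 else -0.5) for i in range(32)]
print(decimate_by_4(oversampled))  # every output sample is 1.0
```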
        • In the early days of CDs, a brick-wall analog filter was relatively cheap compared to doubling the bitrate of the laser... of course, that's all turned on its head now.

  • Cue Kurzweil... (Score:2, Redundant)

    by JoeMerchant (803320)

    If we've come this far in 40 years, where will we be in 40 more?

    • by ledow (319597)

      Nearly 70 and doing everything I can to avoid a computer for my entire retirement?

      • Re:Cue Kurzweil... (Score:5, Insightful)

        by JoeMerchant (803320) on Tuesday November 15, 2011 @10:38AM (#38060192)

        Nearly 70 and doing everything I can to avoid a computer for my entire retirement?

You missed the Kurzweil reference: if medical progress keeps pace, 70 will be young.

        I think the half-way mark 1991 makes an interesting reference point: in 1991, my desktop PC at work cost 2 months salary, it was a 16MHz 386 with a 640x480 resolution 15" color monitor. My desktop PC at work today cost about 3 days pay and is a 2+GHz i5 with two 1920x1080 24" flat panels.

        • by Pope (17780)

          Good thing you got that raise!

          • Good thing you got that raise!

            More like, my boss in 1991 believed in the value of cutting edge hardware (more than the value of paying people anything above minimum market rate...) Today I work for a startup that supplies us from the lower end of the Dell catalog. Not that I'm complaining, the hardware is no longer the limiting factor, although I could use a third screen....

      • by mcgrew (92797) *

No, you'll be avoiding some tech we don't have yet that will be indispensable to modern life.

        My dad is one of those guys, he's 80. "I lived eighty years without a computer and cell phone and I don't need one now." My maternal grandfather was the same way about indoor plumbing! Even after my uncle installed a bathroom in Grandpa's house, Grandpa still used the outhouse.

        I'm hopeful I haven't inherited the "get off my lawn" gene. At least I don't have a nice yard so far (and don't give a damn if someone w

    • Predicting that the new ten billion core processors will usher in the year of Linux on the desktop and IPv6 finally overtaking IPv4.

    • Hmm probably 80 years.

    • by Kjella (173770)

      Well, at this rate the last die shrink will be in the 2020s as we're down to single atoms, no more magic computers that get faster every year. The rest is a bit like that flying car, even 40 years ago that 4004 would have been a helluva human coprocessor for math but progress on any real cyborg functionality has never materialized. And if it does, it'll have almost nothing to do with CPU development as such.

    • Laughing at pictures with people who are using the huge bulky iPhones. Or... Huddled around the fire trying to keep warm.

    • by Dunbal (464142) *
      Dead
      • http://singinst.org/overview/whatisthesingularity/ [singinst.org]

        The question is, will the "smarter than human intelligence" see any reason to improve human lifespan?

        Visions of the powercells in the Matrix just popped to mind....

    • by Yvan256 (722131)

      We'll be in 2051.

    • Re:Cue Kurzweil... (Score:4, Insightful)

      by TheTyrannyOfForcedRe (1186313) on Tuesday November 15, 2011 @01:16PM (#38062432)

      If we've come this far in 40 years, where will we be in 40 more?

CMOS process shrinks will probably poop out around 2020. Intel claims to have things figured out until 8nm. When the CMOS process shrinks cease there will be no more massive numbers of "free" transistors every year. Intel and others will likely start playing with gallium arsenide and other stuff to try to squeeze more performance out of stagnated process sizes. Once those tricks are played out it could very well be the end until radical new alternative technology is developed.

      • Although it's nice to call it CMOS, and indeed both N and P channel devices are used, all the fastest silicon processors use N channel devices for the logic path. CMOS runs about 1/4 the speed on NMOS.

        My understanding of gallium arsenide MOS (and I could easily be wrong) is that its speed advantage for logic started running out at about the 0.35 micron (350 nm) node, which is where Vitesse gave up and very nearly went out of business. The future might not be silicon, but there's little change of it being Ga

• Egads, I must be more careful. 1/4 the speed of NMOS

          little chance of it being GaAs.

          Sorry

        • by Anonymous Coward

          Although it's nice to call it CMOS, and indeed both N and P channel devices are used, all the fastest silicon processors use N channel devices for the logic path. CMOS runs about 1/4 the speed on NMOS.

          Not any more. Intel's 45nm and 32nm hi-K gate dielectric / metal gate processes delivered two interesting things: a dramatic reduction in leakage current, and PMOS transistors nearly as strong as the NMOS. As a result, everything Intel's designed starting with the Nehalem generation (the first Core i7) uses pure CMOS logic.

          As I understand it, dynamic logic based mostly on NMOS transistors (like fast domino logic) is still somewhat faster than complementary in Intel HK/MG, but insufficiently so to make up

        • My understanding of gallium arsenide MOS (and I could easily be wrong) is that its speed advantage for logic started running out at about the 0.35 micron (350 nm) node, which is where Vitesse gave up and very nearly went out of business. The future might not be silicon, but there's little change of it being GaAs.

          Intel has recently been talking about using GaAs on future processes. Everything old is new again.

          http://nextbigfuture.com/2011/06/intel-talks-about-8-nanometer-nodes-for.html [nextbigfuture.com]

    • by mcgrew (92797) *

      If we've come this far in 40 years, where will we be in 40 more?

      I'll be dead. At least I hope I'll be dead, I have no wish to live to be a hundred. Like my grandmother said at age 95, "I don't know why everybody wants to live to be a hundred. It ain't no fun bein' old."

      But man, look at how far things have progressed. Forty years ago the pot was only 1/4th as potent (and 1/20th the price); there were no flat screen TVs, no microwave ovens, no VCRs, self-opening doors were brand new, there were no cell phones

      • I was four in 1971... you could still get lead in your gasoline, you could dream of being an astronaut when you grew up, you could imagine surviving an all out thermonuclear war, jet travel was still expensive for most people, and nobody had the faintest idea what caused ulcers...

        I mentioned Kurzweil mostly because he's pushing the idea that, due to exponential progress, people will live forever, real soon now. He is kinda quiet about how much money (power, whatever) you're going to need to do it. Persona

    • by sgt scrub (869860)

      Reminiscing on the old chip my pacemaker used to use.

  • by Anonymous Coward

    Wait, would that be Word Wang? The sequel to the off season hit Number Wang?

  • by Megane (129182) on Tuesday November 15, 2011 @10:41AM (#38060228) Homepage

    http://www.4004.com/ [4004.com]

    In particular, that fully-functional 4004 mock-up someone made by using 1G TTL chips on a large circuit board is absolutely awesome.

    • Even more interesting is Faggin's own website where he explains some of the details of his relationship with Intel. This man literally changed Intel into a microprocessor company!! Without his development of the 4004 Intel would have probably died on the vine as a memory manufacturer. http://www.intel4004.com/ [intel4004.com]
  • by Nerdfest (867930) on Tuesday November 15, 2011 @11:02AM (#38060524)
    I still have a Radio Shack EC-4004 programmable calculator floating around that uses one of these. Fun little calculator for its time.
  • Interesting typo* (Score:2, Interesting)

    by hipp5 (1635263)

    From the technologizer article:

as Intel churned out more powerful chips throughout the rest of the 1970s - the predecessors of the ones inside every current Windows PC and Mac.

    Really? I was pretty sure my computer has an AMD inside.

    *Well, not really a typo but more of a poorly considered sentence.

    • by Anonymous Coward on Tuesday November 15, 2011 @11:13AM (#38060672)

      It's not poorly worded. The history of AMD is poorly understood (by you, not them).

    • Re:Interesting typo* (Score:5, Informative)

      by Waffle Iron (339739) on Tuesday November 15, 2011 @11:28AM (#38060834)

      There *is* an unbroken chain of compatibility from the latest AMD processors back to the 8008, which was Intel's first 8-bit microprocessor (the design of which was actually started before the 4004 design, IIRC). So they were indeed "predecessors".

      Not to mention that AMD got its start in the PC business by being an officially licensed 2nd source for Intel's 8086 chips.

      • Re: (Score:3, Informative)

        by tlhIngan (30335)

        There *is* an unbroken chain of compatibility from the latest AMD processors back to the 8008

        Actually, AMD processors are not 100% compatible. There are differences in behavior.

        For example, everyone knows an x86 resets at FFFF:0000. But an AMD processor will throw an exception if somewhere along the line, it doesn't encounter a branch and ends up wrapping to 0000:0000. An Intel processor doesn't generate the exception. (This is because way back when, instead of putting ROM at the end of memory, designers co

      • There is an unbroken chain of compatibility from the latest AMD processors back to the 8008...

        Of course, no discussion about Intel vs AMD compatibility would be complete without some jerk* remarking that with x86-64, Intel is actually cloning AMD's 64-bit extensions to x86. One of the more ironic twists in that race, to be sure.

        * Today's jerk being played by DragonHawk

  • by stuffduff (681819) on Tuesday November 15, 2011 @11:17AM (#38060718) Journal
    Still available, although I believe they are made in Malaysia. The whole chip-set was not very expensive.
  • by Anonymous Coward

It mostly failed because it was put in a 16-pin package, meaning that all the addresses and data had to be shuffled out and in through a narrow bus. This is a slow process. That also means you had to surround the chip with a lot of decoders and latches and buffers to hold the memory and I/O addresses and shuffle the data in and out. The 8008 had the same downside. You needed like 30 chips around the CPU chip just for the very basics of generating an actual memory address and data bus.
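The narrow-bus shuffling described above can be sketched: the 4004's 12-bit program address went out over the 4-bit bus as three nibbles in successive clock cycles, which is why external latches had to reassemble it on every memory access. A rough illustration (the low-nibble-first ordering here is an assumption for the sketch):

```python
def split_address(addr, width_bits=12, bus_bits=4):
    """Split an address into bus-width nibbles, low nibble first,
    as a multiplexed-bus CPU would clock them out."""
    nibbles = []
    for _ in range(width_bits // bus_bits):
        nibbles.append(addr & (2 ** bus_bits - 1))
        addr >>= bus_bits
    return nibbles

def reassemble(nibbles, bus_bits=4):
    """What the external latches had to do to rebuild the address."""
    addr = 0
    for i, n in enumerate(nibbles):
        addr |= n << (i * bus_bits)
    return addr

a1, a2, a3 = split_address(0xABC)
print(hex(a1), hex(a2), hex(a3))      # 0xc 0xb 0xa
print(hex(reassemble([a1, a2, a3])))  # 0xabc
```

Three clock cycles spent just emitting the address, before any data moves, is a large part of why the part was slow and needed so much support logic.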

    • But, you missed the business model. They already had (and sold) decoders and latches and buffers that "digital designers" were using for other purposes. This was the one chip to rule them all, one chip to find them, one chip to bring them all and in the darkness require them (thus driving sales...)

    • by ChatHuant (801522) on Tuesday November 15, 2011 @08:38PM (#38068872)

      8008, 6800, and 8086

      Eh? While there were a few designs using 8008 and 6800, I don't think any of them was successful; high volume commercially available PCs used Z80s (the TRS-80, the Sinclair ZX-80 and Spectrum, the MSX machines) or 6502s (Apple II, Atari, Commodore). The successor of the 6800, the excellent 6809 was used in the TRS-80 Color Computer; years later, when IBM launched their PC, they used the reduced data bus version of the 8086, that is the 8088.

  • Could it handle at least 16-bit numbers?

    • Wrong question. I've seen 32-bit PCs handle arbitrary precision (with some appropriate library of course), not only 32 bits. Okay, there is a limit of the size of available RAM ...

    • by imsabbel (611519)

      You are thinking the wrong way.

      It was used in calculators.
4 bits are enough to encode 0-9. The rest was done in software (using arbitrary precision math, although for very limited values of "arbitrary", given past constraints...)

      • by c++0xFF (1758032)

Correct. It's probably better to describe the 4004 as BCD (Binary Coded Decimal) rather than as "four bit." Storing a number larger than 9 requires eight bits: the first four store the first digit, and the second four store the second digit. The bit patterns 0xA through 0xF were actually special patterns used for various things (like marking negative numbers).

        Since the original purpose of the 4004 was a calculator, this system makes a lot of sense. It might not be the most efficient use of bits (an eig
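The packed-BCD scheme described above is easy to demonstrate: each 4-bit nibble holds one decimal digit, and multi-digit arithmetic is done a nibble at a time with a decimal-adjust step (the 4004's instruction set included a DAA instruction for this). A sketch, assuming unsigned packed BCD:

```python
def bcd_add(a, b, digits=4):
    """Add two packed-BCD numbers nibble by nibble, applying the
    decimal adjust (+6 when a digit sum exceeds 9) that a DAA-style
    instruction performed in hardware."""
    result, carry = 0, 0
    for i in range(digits):
        nibble = ((a >> (4 * i)) & 0xF) + ((b >> (4 * i)) & 0xF) + carry
        if nibble > 9:
            nibble = (nibble + 6) & 0xF  # decimal adjust
            carry = 1
        else:
            carry = 0
        result |= nibble << (4 * i)
    return result

# 0x0158 is BCD for 158, 0x0267 is BCD for 267; 158 + 267 = 425
print(hex(bcd_add(0x0158, 0x0267)))  # 0x425
```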

      • by satuon (1822492)

        I meant native support for numbers. If you wrote an assembler program for this kind of processor, what would be the biggest type you can use to store integers, pointers, etc.? On 8-bit processors it was 16 bit numbers I think, which is restricting but still reasonable. If you can access at most 8 bits for pointers and native integers however, I can't imagine how this would work.

    • There are encoding schemes that allow arbitrary precision on any sized processor. The Atari 800 series running on 8 bit 6502s used a BCD encoding, it is a little slower, but fractional results are more predictable than binary representations. You can implement something like that to an arbitrary number of digits... as many bits as you have memory for (and time to wait for processing).

  • by surfdaddy (930829) on Tuesday November 15, 2011 @12:56PM (#38062180)
    ...that we landed on the MOON before the invention of microprocessors! Now that's scary.
    • by Anonymous Coward

The R&D needed for the moon landing actually unleashed a whole lot of new stuff which shaped the tech scene for years to come.

    • ...that we landed on the MOON before the invention of microprocessors! Now that's scary.

      Mostly scary if you're the guy in the rocket capsule steering it by hand. Kind of comforting when you think about the accuracy of enemy ICBM targeting capabilities.

    • by gmhowell (26755)

      ...that we landed on the MOON before the invention of microprocessors! Now that's scary.

      Correction: they faked the moon landings before the invention of microprocessors. That's why it's so obvious. If they had today's CGI, we would be far less likely to know the truth [badastronomy.com].

  • I clicked on the link and only saw a 4004 ;)

  • by Pence128 (1389345)
    The 4004 is too expensive, but I was thinking about getting an 8008 and a bunch of ttl to make a little computer... maybe next April.
    • Why not get a Raspberry Pi and be done? If you want to play at making computer systems, I'd recommend getting into the Altera / Nios design software... I'm working on a triple core system right now with each core customized (by our team) to a particular task...

• They mention photolithography. It was still being used in the mid 80's at Texas Instruments. I went from micrographics to photolithography around '84. You shoot rubyliths with room-sized cameras, then stack the negatives, positives, or both, on top of each other on a reducer. It was all .001 tolerance work.
