
CPU DB: Looking At 40 Years of Processor Improvements 113

CowboyRobot writes "Stanford's CPU DB project is like an open IMDb for microprocessors. Processors have come a long way from the Intel 4004 in 1971, with its clock speed of 740 kHz, and CPU DB shows the details of where and when the gains have occurred. More importantly, by looking at hundreds of processors over decades, researchers are able to separate the effect of technology scaling from improvements in, say, software. The public is encouraged to contribute to the project."
This discussion has been archived. No new comments can be posted.

  • An even longer way (Score:5, Interesting)

    by hendrikboom ( 1001110 ) on Saturday April 07, 2012 @11:44AM (#39606799)

    Processors have come an even longer way since the days when main memory was on a magnetic drum, and the machine had to wait for the drum to revolve before it could fetch the next instruction. That was the first machine I used.

  • by realityimpaired ( 1668397 ) on Saturday April 07, 2012 @01:52PM (#39607561)

    That was in 1962, if I remember correctly. I think the machine was about 5 years old. The next year the university got an IBM 1620, with alphanumeric I/O and 20,000 digits of actual core memory. Change was relentlessly fast in those days, too. The big difference is that every few years we got qualitative, not just quantitative, change.

    We do still get qualitative change in computing today, just that for *most* of what people actually do with computers, they're fast enough that the human is the limiting factor. For anything where human input isn't a factor (think large number crunching operations), there is still a noticeable difference from generation to generation.

    Case in point... I do a fairly large amount of video encoding (DVD rips, and other stuff). I use 64-bit software with a 64-bit operating system. I recently upgraded from a first-generation i7 to a second-generation i5. I did go from 4GB to 16GB of RAM, but actual memory usage during the transcode operation has remained stable at around 1.2GB (there's no swapping on either system), and the type of memory used is the same (speed and bus).

    That said, the transcode operation from the original MPEG-2 DVD rip to H.264 has gone from about 20 minutes for a 42-minute TV episode to 6 minutes for the same 42-minute episode, all else being equal. The difference: I went from a quad-core/HT i7 (8 threads at 1.6GHz) to an overclocked quad-core i5 (4 threads at 4.7GHz) -- from a one-generation-old top-end processor to a current-generation midrange processor -- and saw a *huge* improvement in performance for a number-crunching-heavy operation. Now, I am pushing less than one and a half times the raw cycles per second (8 x 1.6 = 12.8 vs 4 x 4.7 = 18.8), but there is more than a threefold improvement in real-world performance. This is down to improvements in the architecture of the processor, and how it handles the operations.
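    A quick back-of-the-envelope check of the numbers above (a sketch using the poster's own figures; "threads x clock" is only a crude proxy for peak throughput, since it ignores IPC, turbo, SIMD width, and how well the encoder scales across threads):

```python
# Rough comparison of the two machines described above.
old_threads, old_ghz = 8, 1.6   # quad-core/HT i7
new_threads, new_ghz = 4, 4.7   # overclocked quad-core i5

old_proxy = old_threads * old_ghz          # 12.8 "GHz-threads"
new_proxy = new_threads * new_ghz          # 18.8 "GHz-threads"
raw_ratio = new_proxy / old_proxy          # ~1.47x more raw cycles

observed = 20 / 6                          # ~3.33x (20 min -> 6 min)

# Whatever speedup the raw cycle count can't explain must come from
# doing more useful work per cycle (microarchitecture, SIMD, etc.).
per_cycle_gain = observed / raw_ratio      # ~2.27x
print(round(raw_ratio, 2), round(observed, 2), round(per_cycle_gain, 2))
```

    In other words, by this crude estimate, well over half of the speedup comes from the newer core doing more per clock, not from the higher clock itself.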

    That being said, my Facebook page doesn't load any faster than it did with the i7 (or on my celeron-based laptop for that matter), and my ability to type is still the limiting factor in how quickly I can use a word processor. If you're not doing heavy number crunching, there is almost no reason to upgrade your computer today (power consumption is an argument that can be made, but the difference is rarely enough to make up for the cost of buying a computer).

  • by unixisc ( 2429386 ) on Saturday April 07, 2012 @02:17PM (#39607737)

    Well, CPUs started off mainly as CISC, and after it was realized that not all modes of operation are really needed when higher-level languages are used, they migrated toward RISC. In RISC, as parallelism concepts kept gaining mileage, designers tried dumping more of the work onto the compiler in the form of VLIW and EPIC architectures, but the ROI was simply not there, as Itanic showed. The tragedy of the Itanic's introduction was that it saw to the demise of far superior and well-established CPUs, such as PA-RISC and Alpha; yet in terms of market acceptance, the only OSs that embraced it were HP-UX, FreeBSD and Debian Linux.

    Also, once concepts like multithreading and parallelism - long present in Unixes from Solaris to Dynix - started taking hold in NT-based OSs like XP and beyond, it turned out that even better than VLIW was multiprocessing, i.e. dumping more cores at the problem. Even that solution shows diminishing returns after 4 CPUs - you can keep throwing cores at it, but the performance improvements will be minimal. The ideal solution is to have as RISC-like a CPU as possible, put 4 cores of it in a package, and one is off to the races.
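    The diminishing-returns claim is roughly what Amdahl's law predicts once any serial fraction is present (a sketch; the 10% serial fraction below is an assumed illustration, not a measured figure):

```python
# Amdahl's law: speedup(n) = 1 / (s + (1 - s) / n)
# where s is the serial (non-parallelizable) fraction of the work.
def amdahl_speedup(n_cores, serial_fraction=0.10):
    # serial_fraction=0.10 is an assumed value for illustration only.
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / n_cores)

for n in (1, 2, 4, 8, 16):
    print(n, round(amdahl_speedup(n), 2))
# Going from 1 to 4 cores roughly triples throughput, while going
# from 4 to 16 adds comparatively little -- the serial 10% dominates.
```

    Whether the knee really sits at 4 cores depends on the workload; a highly parallel job like video encoding has a much smaller serial fraction than a typical desktop task.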

    Right now, x86 still has to support 32-bit modes, but once that's no longer needed, x64 will be a purely RISC CPU. At that point, performance improvements will undergo a quantum leap. Of course, for general-purpose usage, today's processors are more than adequate, so what might happen instead is that it becomes an opportunity to provide the same performance with lower power consumption.

  • Re:Not Mel (Score:5, Interesting)

    by hendrikboom ( 1001110 ) on Saturday April 07, 2012 @04:34PM (#39608565)

    I really like strongly typed, garbage-collected, secure languages that compile down to machine code. I used the excellent and fast Algol 68 compiler long ago on CDC Cyber computers, and now I use Modula 3 on Linux, when I have a choice. They compile down to machine code for efficiency and give access to fine-grained control of data -- you can talk about bytes and integers and such just as in C, but they still manage to take care of safe memory allocation and freeing.

    Modula 3 is a more modern language design, though I have a subjective preference for the more compact and freer-style Algol 68 syntax. Modula 3 has a clean modular structure which is completely separate from its object-oriented features. You're not required to force everything into object types. You can if it fits your problem, but you can still use traditional procedural style if that's what you need.

    And Modula 3 functions well as a systems programming language. It has explicit mechanisms to break language security in specific, identified parts of a program when that's necessary. It almost never is.

    And, by the way, to avoid potential confusion, Modula 3 was *not* designed by Niklaus Wirth. Modula and Modula 2 were, but Modula 3 is a different language from either.

    -- hendrik

  • by cheekyboy ( 598084 ) on Saturday April 07, 2012 @08:09PM (#39609651) Homepage Journal

    It doesn't matter how good any language is; it's the available frameworks, tools, and libraries that make it useful.

    I.e., C by itself is simple and takes lots of coding to make something useful (if you don't use any libs at all).

    Strongly typed langs can be a bitch if you are editing in vi/vim, as they don't 'know about the language' to help you out automatically, beyond syntax colors. Today's CPUs are so fast that the IDE should help you program and act as if types are free: the IDE can auto-determine types and fix them for you. Otherwise, if you have to spend 20% of the byte space of your language defining types and pre-casting EVERYTHING, then it's not an efficient, smart, human-friendly language.

    It's kind of funny that in assembly language you only really have 3 types of ints, plus floats and pointers, and are free to interpret values to your imagination's content.

    If you have to reinvent everything because it's not there, then you're wasting your time.
