
10 Years of Intel Processors Compared

jjslash points to TechSpot's interesting look back at the evolution of Intel CPUs since the original Core 2 Duo E6600 and Core 2 Quad processors were introduced. The test pits the eight-year-old CPUs against their successors in the Nehalem, Sandy Bridge and Haswell families, including today's Celeron and Pentium parts, which fare comparatively well. A great reference just days before Intel's new Skylake processor debuts.
This discussion has been archived. No new comments can be posted.


Comments Filter:
  • by Anonymous Coward

    I wish they also made benchmarks that only use a common base instruction set (SSE2/3), because most of the newer processors' superiority probably comes from ISA extensions.
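
(A rough way to approximate such an ISA-normalized test, sketched here as an assumption rather than anything from the article: build the same kernel once pinned to a Core 2-era baseline and once with newer extensions allowed, then time both on the same machine. The saxpy-style kernel and build flags below are illustrative only.)

```c
/* saxpy.c -- illustrative kernel, not from the article.
 * Build once pinned to the Core 2-era baseline and once allowing newer
 * extensions, then compare runtimes on the same machine, e.g.:
 *   gcc -O3 -march=core2  saxpy.c -o saxpy_baseline
 *   gcc -O3 -march=native saxpy.c -o saxpy_native
 */
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define N (1 << 24)

int main(void) {
    float *x = malloc(N * sizeof *x);
    float *y = malloc(N * sizeof *y);
    if (!x || !y) return 1;
    for (long i = 0; i < N; i++) { x[i] = 1.0f; y[i] = 2.0f; }

    clock_t t0 = clock();
    for (int r = 0; r < 100; r++)            /* repeat so the timing is measurable */
        for (long i = 0; i < N; i++)
            y[i] = 2.5f * x[i] + y[i];       /* compiler vectorizes this per -march */
    clock_t t1 = clock();

    printf("%.3f s (checksum %f)\n",
           (double)(t1 - t0) / CLOCKS_PER_SEC, (double)y[0]);
    free(x);
    free(y);
    return 0;
}
```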

    • by goarilla ( 908067 ) on Saturday August 01, 2015 @05:20AM (#50228695)
      I wish they compared early PIII Katmai or Coppermine chips to the Core 2 Duo E6600. Because what the pessimist in me sees isn't a cherry-picked 11x increase in one benchmark, but overall core performance stagnation.
      • by Jeremi ( 14640 )

        Because what the pessimist in me sees isn't a cherry-picked 11x increase in one benchmark, but overall core performance stagnation.

        Well, you can't say you weren't warned; there have been about a zillion articles along the lines of "everybody better learn how to multithread, because we've hit the wall on single-core performance and the only way to make use of extra transistors now is to add more cores".
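
(For readers who missed those articles, a minimal sketch of the "add more cores" advice using C11 <threads.h>; the partial-sum workload and the thread count are illustrative assumptions, not anything from the article.)

```c
/* threads_demo.c -- splitting an independent workload across threads.
 * Build (glibc 2.28+ or musl): gcc -O2 -std=c11 -pthread threads_demo.c -o threads_demo
 */
#include <stdio.h>
#include <threads.h>

#define NTHREADS 4
#define N 100000000L

struct slice { long begin, end; double sum; };

static int worker(void *arg) {
    struct slice *s = arg;
    double acc = 0.0;
    for (long i = s->begin; i < s->end; i++)
        acc += 1.0 / (double)(i + 1);        /* each slice is independent */
    s->sum = acc;
    return 0;
}

int main(void) {
    thrd_t tid[NTHREADS];
    struct slice part[NTHREADS];

    for (int t = 0; t < NTHREADS; t++) {     /* carve the range into slices */
        part[t].begin = t * (N / NTHREADS);
        part[t].end   = (t + 1) * (N / NTHREADS);
        part[t].sum   = 0.0;
        thrd_create(&tid[t], worker, &part[t]);
    }

    double total = 0.0;
    for (int t = 0; t < NTHREADS; t++) {     /* join and combine the results */
        thrd_join(tid[t], NULL);
        total += part[t].sum;
    }
    printf("harmonic sum: %f\n", total);
    return 0;
}
```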

    • Re: (Score:1, Funny)

      by Anonymous Coward

      because most of the newer processors' superiority probably comes from ISA extensions.

      Unlikely since most newer computers don't even support PCI.

      • Unlikely since most newer computers don't even support PCI.

        Obviously you were just trying to be clever, but this is a lot of bollocks. Even many machines which don't have any PCI bus connectors have a PCI bus...

    • by Anonymous Coward

      Why would you want to run a benchmark that deliberately cripples your processor by not using new features?
      That's like benchmarking your CPU while running at half the clock speed.

      • Re: (Score:2, Insightful)

        by Anonymous Coward

        Because a lot of programs don't use those features; they're compiled to run on a wide variety of hardware.

        • Re: (Score:2, Insightful)

          by Anonymous Coward

          And how many of those programs are compute-intensive? And how many of the compute-intensive ones don't use libraries for some or all of the intensive parts? A lot of software that is CPU-bound got the message years ago that there are newer technologies, and that it's worth having more than one code path to take advantage of newer CPUs or GPUs, or using a library that does that for you.
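
(A minimal sketch of what "more than one code path" can look like, assuming GCC/Clang and their __builtin_cpu_supports builtin; the sum functions are illustrative placeholders, and the "AVX2" path here just reuses the scalar loop so the example stays self-contained.)

```c
/* dispatch.c -- illustrative runtime dispatch between a baseline path and
 * a newer-ISA path.  Build: gcc -O2 dispatch.c -o dispatch
 */
#include <stddef.h>
#include <stdio.h>

typedef float (*sum_fn)(const float *, size_t);

/* Baseline path: plain C, safe on any x86-64 (SSE2 is part of the base ISA). */
static float sum_baseline(const float *v, size_t n) {
    float s = 0.0f;
    for (size_t i = 0; i < n; i++)
        s += v[i];
    return s;
}

/* "AVX2" path: in real code this would live in a translation unit compiled
 * with -mavx2 (or use function multi-versioning); reusing the scalar loop
 * keeps the sketch self-contained. */
static float sum_avx2(const float *v, size_t n) {
    return sum_baseline(v, n);
}

/* Decide once, at runtime, which path the CPU can actually execute. */
static sum_fn pick_sum(void) {
    if (__builtin_cpu_supports("avx2"))      /* GCC/Clang builtin */
        return sum_avx2;
    return sum_baseline;
}

int main(void) {
    float v[8] = {1, 2, 3, 4, 5, 6, 7, 8};
    sum_fn sum = pick_sum();
    printf("sum = %f (avx2 path: %s)\n", (double)sum(v, 8),
           __builtin_cpu_supports("avx2") ? "yes" : "no");
    return 0;
}
```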

    • Re: (Score:2, Insightful)

      by Anonymous Coward

      It wouldn't be a meaningful test then, would it? Hey let's race these cars, but since the old ones don't have turbos, let's disable the turbos in the new ones too.

    • by Anonymous Coward

      They got better because they improved the ISA. That's why you want a new one. Taking that out would be basically saying "let's take away all the new features that were added and see if it's as good." The answer wouldn't be very interesting.

    • by jellomizer ( 103300 ) on Saturday August 01, 2015 @07:30AM (#50228893)

      Benchmarks are already hard to use for comparing computing systems. Design trade-offs are made all the time. As the nature of the software these systems run changes over time, processor designs change to meet those needs. With more software taking advantage of the GPU, there may be less effort put into making the CPU handle floating point faster, so designers can focus more on making integer math faster, or on better threading...
      2005 compared to 2015...
      2005 - Desktop computing was king! Every user needed a desktop/laptop computer for basic computing needs.
      2015 - The desktop is for business. Mobile devices (smartphones/tablets) handle basic computing needs; the desktop is reserved for more serious work.

      2005 - The beginning of the buzzword "Web 2.0", or the acceptance of JavaScript in browsers. Shortly before that, most pages had nearly no JavaScript; where it appeared, it was more of a toy, at best doing data validation in a form. CSS features were also used in a very basic way, and browsers still had problems following the standards.
      2015 - "Web 2.0" is so ingrained that we don't call it that anymore. Browsers have more or less settled down and started following the open standards, and JavaScript powers a good portion of a page's display; in the N-tier type of environment it has become a top-level user interface tier. Even with all the Slashdot upgrade hate, most of us barely remember clicking the Reply link, having to load a new page to enter our text, and then having the page reload when we were done.

      2005 - 32-bit was still the norm. Most software was 32-bit, and you still needed compatibility for 16-bit apps.
      2015 - 64-bit is finally here. There is legacy support for 32-bit, but 16-bit is finally out.

      These changes in how computing is used over time mean processor design has to reweigh the trade-offs it chose in previous generations and move things around. Overall things are getting faster, but any one feature may not see an improvement, or it may even regress.

  • Boring (Score:4, Insightful)

    by mcfedr ( 1081629 ) on Saturday August 01, 2015 @05:35AM (#50228713)
    I was hoping to see an article discussing the changes in architecture and how the improvements were made, not just regurgitating lists of benchmarks.
  • by Anonymous Coward

    They change the order of the processors in the graphs so it's hard to tell one from another, and they divided the page up into a bunch of tiny little pieces. Why?

    That's
    like
    putting
    each
    word
    on
    separate
    lines.

    Are they mentally challenged?

    • While we are trying to improve the article...

      Where is the 3770? I couldn't be bothered to type all the names-but-not-numbers into Google to see if one of them was the 3770.

      Just because Intel screwed up their naming conventions doesn't mean everyone else has to further such marketing-driven value-removal.

  • by Anonymous Coward

    increase in single-thread performance:
    1994 to 2004: 100x
    2005 to 2015: 3x

    bla.. bla... GPUs...

  • You would think by now Intel would have fixed the design flaws in the memory management unit (MMU)...
  • by redmid17 ( 1217076 ) on Saturday August 01, 2015 @06:56AM (#50228845)
    "So we compared 8 years of Intel processors....."
    sigh
    • More than that, the graphs aren't clearly labeled, or in some cases they aren't even unit-labeled!

      That kind of amateur bullshit is like nails down a chalkboard.

      • More than that, the graphs aren't clearly labeled, or in some cases they aren't even unit-labeled!

        That kind of amateur bullshit is like nails down a chalkboard.

        Be grateful for what you get around here. At least it's better than that lame-ass HP smartwatch article [slashdot.org] that we got a week ago. This one at least tells you what products were tested!

        Anyway, most of the graphs seem to have some kind of legend. They're just not very clear or consistent. Sometimes the unit is in the proper legend in the top right and sometimes it's in the subtitle of the graph. If it's a synthetic benchmark it won't have a unit. It's just a "score." Usually higher is better.

    • Well, it's obviously a typo. The headline should have read "010 Years of Intel Processors Compared".
  • by MrL0G1C ( 867445 ) on Saturday August 01, 2015 @08:38AM (#50229091) Journal

    What strikes me the most is that today's processors are barely any faster than the 2011 processors.

    4 years and only a small speed increase in real performance - 4% for games!!!!! FOUR PERCENT over 4 years. Time to ditch silicon and to start using materials that support higher clock speeds.

    • How much of that is the fault of the processor vs. the fault of the games, which really haven't gotten any more visually stunning thanks to coding primarily for the lowest common denominator, consoles?

  • I haven't purchased a new desktop CPU in at least that long. I know we have great new stuff out there but I just haven't seen anything come in for some time that justifies the cost when my existing stuff still works for what I do.
  • by Anonymous Coward

    I wish they had included Xeons, even though they're considered "server" processors rather than desktop.

    I was running an i5-2500k overclocked on my mobo, and was looking at upgrading to an i7-3770k to get virtualization and hyperthreading, but it would have cost me well over $300.

    Instead, I found a deal on a Xeon E3-1245v2 for $219 and I'm very happy with it. Runs at 77W instead of 95W too.

  • A much better comparison would have been to run all the CPUs at the same clock frequency so that IPC gains could be spotted immediately. Also, I've never understood the point of all-in-one benchmarks like PCMark, which measure everything and nothing: PCs with wildly different CPU/GPU/RAM configurations end up with very similar results.
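
(For anyone who wants to eyeball IPC-style gains from published numbers anyway, a back-of-the-envelope sketch: divide each score by the clock it ran at. The scores and clocks below are made-up placeholders, not figures from the article, and the approach ignores turbo and memory effects.)

```c
/* per_ghz.c -- rough clock-normalized comparison of benchmark scores. */
#include <stdio.h>

struct result { const char *cpu; double score; double ghz; };

int main(void) {
    /* Placeholder numbers for illustration only -- not from the article. */
    struct result r[] = {
        { "old quad core (example)", 100.0, 2.4 },
        { "new quad core (example)", 180.0, 3.5 },
    };
    for (int i = 0; i < 2; i++)
        printf("%-26s %6.1f pts  %4.2f GHz  %6.1f pts/GHz\n",
               r[i].cpu, r[i].score, r[i].ghz, r[i].score / r[i].ghz);
    /* pts/GHz is only a rough per-clock (IPC-like) figure: it assumes the
     * workload is CPU-bound and single-threaded, and ignores turbo and
     * memory differences between platforms. */
    return 0;
}
```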
  • Deliberately bottlenecking the hell out of the older Dual cores with a GTX 980 and 4GB RAM?
