Intel Hardware

Intel Harpertown (Penryn) Quad CPUs Benchmarked

unts writes "The Intel Developer Forum is currently running in San Francisco, and Intel is showing off the upcoming Harpertown processors based on the Penryn core. HEXUS got hands-on with a test system and ran some performance tests: 'Harpertown is a better quad-core processor than Clovertown: it's as simple as that. More L2 cache will gobble up larger application data-sets and a higher FSB, on select models, will ensure that per-CPU bandwidth is less of a concern.'"

  • Yarr. (Score:2, Interesting)

    by ackthpt ( 218170 ) *

    Throw more core and L2 cache at it. It be having a familiar ring, like when it was all about CPU speed.

    I typed Harpertown into Google and I be finding a lot about Intel's processor. I wonder what the folk of Harpertowns (whar ever they be) and other towns feel about their town names be crowded out on Google searches by a bit of silicon. Yarr.

    • Those towns should do something worthwhile, then maybe they'd be like Broadway [google.com].
    • Re: (Score:1, Redundant)

      by shawnce ( 146129 )
      This is the "tick", to be followed by next year's "tock"... the same basic core in a new process (45nm, which allows more room for cache, etc.) followed by next year's new core that leverages the smaller process (45nm). In other words, it has a "familiar ring" because it is essentially the same core. In 2008 (likely late) we will get an on-board memory controller... which doesn't have such a "familiar ring".

      http://www.intel.com/technology/magazine/computing/cadence-1006.htm [intel.com]
    • Re: (Score:2, Troll)

      by MBCook ( 132727 )

      There is a big difference. While L2 almost always helps (especially with the cores being so much faster than the memory bus), Intel's current designs end up in bus contention if you try to use them too much. While the Opterons have their own memory controllers, all four cores on this chip have to go through the Northbridge to get to memory, so they have to share those two channels.

      It used to be even worse. When Intel was pulling the dual-dual core thing, to access one core was wicked quick, to access the othe

      • AMD started this (somehow) with the Athlon MP processor (counterpart to the Athlon XP). The multiprocessor chipset for the Athlon MP had an FSB to each of the processors.
        To this day, Intel is still sharing an FSB between all the processors in a multiprocessor system (to be fair, it should be noted that the 667MHz FSB was not really crowded by two Pentium 4 cores).
    • by coopaq ( 601975 )
      "...I be finding a lot about Intel's processor."

      It's actually "processArr" all day, you scurvy scallywag.
  • by Anonymous Coward on Wednesday September 19, 2007 @01:23PM (#20670787)
    I am stunned.
  • by downix ( 84795 ) on Wednesday September 19, 2007 @01:27PM (#20670841) Homepage
    While invariably the comparisons will be made between this and AMD, let us not forget that Intel is getting stiff competition from left field as well. The arrival of the SPARC Niagara II processor is about to make the realm of high-end computing a lot more competitive than it has been in years. I, for one, can't wait to see a real head-to-head-to-head: AMD and Intel quads vs. the 8-core monstrosity that is SPARC.
    • by shawnce ( 146129 ) on Wednesday September 19, 2007 @01:38PM (#20670987) Homepage
      Don't ignore IBM's POWER processors (POWER7 is in the works) and of course Cell.
      • Re: (Score:3, Informative)

        by MBCook ( 132727 )
        POWER is one thing (but I'm not sure how common they are for non-ultra-high-end servers), but the Cell is not a server chip by any means. It would fall flat on its face. The SPEs would sit nearly idle, and the one general-purpose core would be swamped. Cell would only be useful for batch numerical processing work of specific kinds (I'm guessing accounting would be bad; 3D rendering would work very well).
        • ...just about the time Intel Terascale makes it superfluous.
        • by ivormi ( 1106139 )
          Actually, 3D rendering with Cell turned out to be pretty mediocre. One of the big reasons that Sony turned to Nvidia for the graphics processor for the PS3 was that Sony wasn't able to figure out a way to make the Cell really fly in terms of graphics compared to current chipsets from Nvidia and ATI/AMD. Remember that originally, the PS3 was supposed to have 4 Cells and a basic rasterizer. It was actually fairly late that Sony decided to turn to one of the big two to generate
          • by MBCook ( 132727 )
            I've heard that. I was referring to non-real-time 3D rendering. Basically, what Pixar does with their render farm. The Cell can be quite good at that, as well as video processing and other such effects. It's a SIMD beast, but it will fall flat if you ask it to do DB work.
    • by Z00L00K ( 682162 )
      More cores will provide more fun. Maybe a look at the Tilera 64-core TILE64 processor [xtreview.com] will do?
    • by vlad_petric ( 94134 ) on Wednesday September 19, 2007 @02:00PM (#20671253) Homepage
      The Niagara processor is a great idea for (web, db) server workloads, where you have a lot of inherent parallelism and very poor cache behavior. A while back, the Piranha research project figured out that for such types of applications, it's better to have many "wimpy", in-order cores than a few "beefy", out-of-order execution ones. Niagara is doing exactly this. However, outside this application realm Niagara doesn't do that well.

      Bottom line: the Niagara microarchitecture is meant for a particular niche.

      • You can currently get a Core 2 Quad in a reasonably priced desktop system. $1500 will easily get you a system with a C2Q, and $2000 will get you a nice one with enough memory and disk to make a quad core useful. The cheapest I can find a Niagara for is $10,000. That's for the version 1 Niagara, with 4 cores and 8GB of memory. It goes up very quickly from there.

        OK, well, that's a whole different price class. Even if you spec the Intel box with a similar amount of RAM, it is still under $3,000.

        So even if it was generally fas
    • Niagara is a poorly performing processor that just takes advantage of the fact that on a server, most threads are stuck in disk or net access all the time.

      You're being snowed by SUN.

      Even SUN realizes their days of hardware are nearing an end. They changed their stock symbol from SUNW (SUN Workstations) to JAVA (a software project).
      • SUN just can't make a power-saving box if their lives depended on it. And at this point, their company's life does!

        Now servers need to be low power and low cooling cost. Very few servers need the big power that SPARC offers now. At this point in time they all want x86 hardware. It's cheap, powerful, and cost-effective. Just what the NOCs need to cut down on cooling costs, and just what the global warming folks want to cut down on greenhouse gases.

        The problem for SUN comes from the fact that they lost groun
        • by downix ( 84795 )
          Pardon? The UltraSPARC T1 consumes the same power as a Core 2 Duo (70W). Also, if you don't want a Sun SPARC, roll your own: the source for the T1 is GPL'd. I've made my own SPARC CPUs in an FPGA before; the fact that it's a standard makes it rather easy.
        • Ahhh, reprising Intergraph's strategy, only with the added cost of maintaining Open Solaris. It worked so well for them.
    • The fact that they improved the floating point from the old T1 should make it very appealing to high-end computing that requires a lot of double-precision math and the like.
  • by drspliff ( 652992 ) on Wednesday September 19, 2007 @01:31PM (#20670885)
    The article is extremely thin on the promised "benchmark" and looks like a fairly standard press release.

    Information in real CPU benchmarks: http://www.cpubenchmark.net/ [cpubenchmark.net]
    Information in the press release "benchmark": about:blank

    Give me graphs, comparisons with other models in the same series and other CPUs, information about power draw and heat, etc. Not adverts and details I can find out anyway and don't really care about.
    • Benchmarks were run on 32-bit WinXP with SP1??

      I'm sure the numbers would have come back different if they had been able to utilize the 16GB of memory (32-bit Windows can't address it all). Still pretty impressive to see 8 45nm cores running in one box.

    • by Fallen Kell ( 165468 ) on Wednesday September 19, 2007 @02:00PM (#20671251)
      Even more than that, give me graphs and benchmarks that actually verify what your conclusions are, or at least prove why you think things are the way they are. For all but one of the graphs, HEXUS said that they expect different results due to limitations of the OS. Huh? Wait, this is the first time you have seen the chip, and yet because it benchmarked poorly, you state that it is due to the OS? How do you know? Did you put on a different OS to prove that? How do we know the values in Cinemark will be in the 20k range if a different OS was used, when it only did 17k? How do we know floating-point results were compromised by the OS? How do we know Pov-Ray will increase as well? The only benchmark that showed the CPU as being faster than the previous CPU was the SiSoftware Sandra processor arithmetic test, and even there only by 3.9% on integer and 14% on floating point.
    • Re: (Score:2, Informative)

      by fitten ( 521191 )
      Unfortunately, for some reason, Slashdot is a day or so behind on this news... it was presented at IDF (Intel Developer Forum) yesterday along with a host of other things.

      Visit some of the standard sites (AnandTech, Hardware info, TechReport, etc.) for various reviews. Here's some to get started on:
      link [anandtech.com]
      link [anandtech.com]
      link [anandtech.com]
      link [anandtech.com]
      link [techreport.com]
      link [hardware.info]

      Quote from a poster at another site that I found interesting: What's really sad is that more people have benchmarked Harpertown than Barcelona, and yet one of these chips has "launched",
  • Because of the article [er, review...] I decided to check around for quad-core 775s. Found the 2.4GHz equivalent of what I have already [except mine is a dual] for like $316 or so [CAD]. Not bad. Then I realized, wtf do I need that for? Even with all the build jobs I do, rendering music (go LilyPond!) and whatnot, the CPU already sits idle most of the time. If the chip was $150 I'd be more willing to shell out for it on a whim just for kicks. But $316 plus tax is around $360 or so. That's nearly a car paym
    • by Luyseyal ( 3154 )
      Save the world? [worldcommunitygrid.org]

      I'm on Team Slashdot, FTW,
      -l
      • Requires Windows ... why would I buy a quad-core processor and then choose to run Windows?

        Odd
        • by Luyseyal ( 3154 )

          World Community Grid does not require Windows, though I admit their website is a little confusing in that regard. If you run debian, "apt-get install boinc-client boinc-manager". Then, set it up with the BOINC instructions on the WCG website [worldcommunitygrid.org].

          I'm running it on a dual-opteron amd64 debian box. You don't even have to run it in 32bit mode.

          Cheers,
          -l
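
          For reference, a minimal sketch of that Debian setup, assuming the boinccmd tool that ships with boinc-client; the account key below is a placeholder you get from your World Community Grid profile page:

              # Install the BOINC client and manager (package names from the post above)
              sudo apt-get install boinc-client boinc-manager
              # Attach the client to World Community Grid; YOUR_ACCOUNT_KEY is a placeholder
              boinccmd --project_attach http://www.worldcommunitygrid.org/ YOUR_ACCOUNT_KEY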

    • Re:cool I guess... (Score:4, Insightful)

      by LWATCDR ( 28044 ) on Wednesday September 19, 2007 @02:11PM (#20671367) Homepage Journal
      "Damn it, I want my fast, multi-core, and CHEAP processors already ;-)"
      Pick up an AMD 3800 X2 or 4400 X2. Last time I checked, they were the cost of a good meal.
      People, these CPUs are still bloody fast for what most people use a PC for. Just about the only place a home user would ever notice the difference is in video transcoding or super-high-end gaming.
      Get an X2, more RAM, and a better video card for your best bang for the buck.
      • As I mentioned, I have the E6600 [dual-core 2.4GHz, 4MB L2]. It's a good CPU ... a little too good. :-)

        Which is why I can't justify buying a Q6600 even though the nerd in me wants a quad at the desk (again ... ).
        • Yeah, my desktop is an E6600, and I got a Q6600 for my server, and to be honest, I was half tempted to swap the two, as either is overkill for my needs... but the Q6600 is a better server choice, as I sit idle mostly on my desktop... and my server can grow (using VMware Server with a handful of VMs now, instead of multiple slower servers)...

          Honestly, 4 cores is pretty good for a small server... I bumped up what I could for server-side options, maxed the HTTP compression level, etc... because there's pl
          • by LWATCDR ( 28044 )
            Yeah, I bet they are. I am about to retire a database server at my office that has finally gotten too slow. It is running Postgres 7 and is supporting about 30 clients. It gets hit pretty hard doing transactions and is now getting painfully slow. I probably could tune it a bit more, but why bother? It is way overdue to be migrated to a new version of Linux and Postgres. Oh, and the box it was running on? An old PII 450 with an IDE drive and 250 megabytes of RAM.
            The computer and hard drive have got to be close to
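
            When that migration happens, a minimal sketch of the classic dump-and-restore path (the host and database names are placeholders, and it assumes the database has already been created on the new server):

                # Dump from the old Postgres 7 box...
                pg_dump -h oldbox mydb > mydb.sql
                # ...and restore into the new Postgres server
                psql -h newbox -d mydb -f mydb.sql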
            • Should run pretty sweet on that... only suggestion, if the budget permits, would be to use 147GB SAS 15,000RPM drives... that should give you HD speed at least closer to being able to keep up with the rest of a modern server... Part of my upgrades included a 2TB RAID 5 (750GB x 4) on another system, so I couldn't spring for the faster drives for the new server with the space I needed... The Q6600 is definitely a sweet spot for price/performance though.
      • by nateb ( 59324 )
        If you're going to spend a couple hundred bucks anyway, consider moving your temporary files, binaries, and data to separate hard drives on separate buses.

        People wonder why my machine is loud, I wonder why they wait for programs to run while they're watching movies and folding proteins and copying files and torrenting and browsing the web and ... well, you get the picture.
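
        For illustration, a minimal sketch of that split, assuming a second and third drive already partitioned and formatted (the device names and mount points are placeholders):

            # Temporary files on their own spindle/bus
            sudo mount /dev/sdb1 /tmp
            # Bulk data on a third drive
            sudo mount /dev/sdc1 /data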

    • "Damn it, I want my fast, multi-core, and CHEAP processors already ;-)"

      For what you use it for most, you really want something just as fast that consumes about 10W or less at the plug. Better for you, the environment, your electricity bill, and your peace of mind when, in 5 years' time, some component gets destroyed by heat because you weren't there when a fan died.

      It's only a matter of time before someone like Via builds it anyway. And the CEO of Intel who builds it can look like a god for about 5 years until the mark
  • by Null Nihils ( 965047 ) on Wednesday September 19, 2007 @01:46PM (#20671087) Journal
    http://www.tgdaily.com/content/view/33929/135/ [tgdaily.com]
    This article goes into some of the juicy technical details about Penryn/Nehalem and covers a lot of ground about what Intel had to show at the IDF.

    The article is also relevant to this discussion [slashdot.org], "End of Moore's Law in 10-15 years?". FTA:

    Otellini provided an overview of the history of the insulating layer which, in modern CPUs, is only five molecular layers of silicon dioxide (SiO2) thick. He explained that as far back as 15 years ago, Intel's engineers saw this layer as problematic. The continued scaling of the insulating layer could not continue forever. And, as we found out later in the day with Dr. Gordon Moore's keynote, five molecular layers is about the lowest you can go in practice. It's a form of wall, and Intel was right up against it.
    • Just make a regular die, deposit more silicon on top, and put your second processor on top. Interconnect with through-silicon vias. Repeat. Now we're scaling in 3 dimensions and Moore's Law is safe for 50 years or more.

      no worries.

        • Except for one problem: how the heck do you get the heat out? I.e., I can see this working for exactly 2 layers -- a front and back interconnected through the insulating layer. IF -- and this is a big one for me [as someone who understands thermodynamics in the macro world, not the micro world] -- a penetration (circuit connection) through the insulating layer doesn't just give one side of a chip a heat path that will basically just burn through the 2nd layer on the other side... Thoughts?
        • Is not to put it in.

        • by Ajehals ( 947354 )
          I just thought I would say that the two previous posts are nothing short of artistic, the emotion, tone and energy in each one is perfect, as you read them you can visualise the two people in the discussion, there is apathy, over-confidence, apathy, contempt and a bright spark of intelligence. I shall call these two posts "The Dreamer and the Engineer" Series. I suggest the parent and GP get together and license their use as ornamental wall hangings.

          (Strange post I know - but seriously those two posts in
      • by AuMatar ( 183847 )
        We already scale in 3 dimensions. Processors have had multiple layers of circuits for decades.
        • by slew ( 2918 )
          Processors have had multiple layers of interconnect for decades.

          Transistors, however, have generally been on one layer since the advent of the planar integrated circuit. Although there have been some advances in putting passive components -- capacitors and floating gates (for DRAM and flash, respectively) -- on top of active transistors, or orienting transistors themselves vertically instead of planar, a general 3D circuit is very much a future technology that's only presently being researched.

          As a hack, people ha
  • by EconolineCrush ( 659729 ) on Wednesday September 19, 2007 @01:48PM (#20671107)
    A much more in-depth review is available at The Tech Report: http://techreport.com/articles.x/13224 [techreport.com]
  • Saw this in the Firehose yesterday and voted it down because there's nothing worth looking at in these benchmarks. The test systems aren't even comparable. I'm looking forward to a complete review of this platform.
  • by Joe The Dragon ( 967727 ) on Wednesday September 19, 2007 @02:31PM (#20671605)
    Where is the new chipset with DDR2/3 ECC RAM? The high power and heat cost of the current FB-DIMMs looks bad next to the ECC RAM in the AMD systems.
    • It's rumoured that Intel will release a new server platform/chipset which will use DDR2 memory later this year. Do a Google search for "San Clemente chipset" or "Cranberry Lake platform". It's supposedly going to be an entry-level server platform which uses DDR2-667 registered memory.

    • Re: (Score:1, Informative)

      by Anonymous Coward
      Nehalem, the next CPU, uses DDR3 RDIMMs with ECC.
  • Eight cores (Score:4, Funny)

    by Matt Perry ( 793115 ) <[moc.oohay] [ta] [45ttam.yrrep]> on Wednesday September 19, 2007 @02:42PM (#20671755)
    I'm waiting on Intel to take me to Funkytown.
  • by Emetophobe ( 878584 ) on Wednesday September 19, 2007 @03:52PM (#20672567)
    Here's an old image which shows Intel's current roadmap: http://img366.imageshack.us/img366/5313/1775largelongtermroadmap7fs.png [imageshack.us]

    Basically, Intel releases a new architecture every 2 years, and in between they release a die shrink/derivative.

    Penryn is mainly just a die shrink of Merom (codename for the laptop version of the Core 2). Merom was a 65nm chip and Penryn is a 45nm chip using the same architecture. Next they will release a new architecture at 45nm (codename Nehalem), then they will release a die shrink of Nehalem at 32nm, and so on and so forth...

    Here's a quick rundown:

    2006: Core 2 architecture released at 65nm
    2007: Die shrink of the Core 2 architecture from 65nm to 45nm
    2008: New architecture (code name Nehalem) released at 45nm
    2009: Die shrink of the Nehalem architecture from 45nm to 32nm
    2010: New architecture (code name Sandy Bridge, formerly known as Gesher) released at 32nm
    2011: Die shrink of the Sandy Bridge architecture from 32nm to 22nm

    • The only issue I see with this is that the ITRS has a roadmap for die processes down to 16nm. That means by 2013 Intel will be at 16nm, if they can keep that kind of pace up, and will then either hit a dead end because they can't shrink any more, or the ITRS extends the roadmap to include smaller processes because we can make transistors smaller, or they avert it altogether and find a new way to make processors with means other than transistors.

      The other issue is that reconfigurable and soft processing (the OpenSPARC T1 is just
  • He blames the poor performance of the Harpertown on the fact that it's running 32-bit Windows XP. But that can't affect any of the tests that were run (all of them easily fit in less than 1GB of memory). The real reason is that Harpertown is running at a lower clock frequency. Penryn is only a minor core update, so it won't run much faster than Conroe clock for clock. The real advantage of Penryn is the 45nm high-k + metal gate process, which gives lower power consumption and allows faster clock speeds.
  • by Phatmanotoo ( 719777 ) on Wednesday September 19, 2007 @06:14PM (#20674479)

    Just look at the STREAM benchmark numbers and you'll see clearly that AMD has been way ahead of Intel when it comes to RAM bandwidth. I just benchmarked a dual-quad-Xeon myself (Dell 2900) and I could not believe the poor results I got. One app running in the system can get up to around 3,500 MB/s. Put just two tasks running together (taskset'ed to different chips), and they will each get around 2,600 MB/s. From there on, total aggregate bandwidth tops out at 5,200 MB/s and stays there, no matter how many simultaneous tasks you run (it will of course degrade if you run more than eight tasks, but you get the point).

    Dual-socket Opteron machines from two years ago can get to 15,000 MB/s aggregate, easily.

    So, I'd really like to know if Intel is planning to improve things in this department.
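
    For anyone who wants to reproduce that kind of measurement, a minimal sketch, assuming a stream binary built from the public STREAM source (the core numbers passed to taskset are placeholders; check your own system's core-to-socket layout first):

        # Build STREAM from the public stream.c (bump the array size in the
        # source so the working set is much larger than the caches)
        gcc -O2 stream.c -o stream

        # One instance alone, pinned to a core on the first chip
        taskset -c 0 ./stream

        # Two instances together, pinned to cores on different physical chips
        taskset -c 0 ./stream & taskset -c 4 ./stream & wait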

  • Intel CPUs have to have a lot of L2 cache to make up for the fact that they are still using a decades-old shared-bus architecture, where the memory controller is on the northbridge and memory transfers have to go through the FSB. AMD's overall motherboard architecture, having a direct line from each core to RAM separate from the FSB and having that on-die memory controller, is light-years ahead. The fact that the Athlon 64 CPUs, the architecture of which has remained relatively unchanged for the last 4 year
  • More credible benchmarks from AnandTech: http://www.anandtech.com/IT/showdoc.aspx?i=3099&p=1 [anandtech.com] Harpertown is the clear leader in performance. Barcelona leads the performance/watt benchmarks.

"An idealist is one who, on noticing that a rose smells better than a cabbage, concludes that it will also make better soup." - H.L. Mencken

Working...