Intel's Penryn Benchmarked

Steve Kerrison writes "Intel's keen to show off its upcoming 45nm Penryn Core 2 CPU. HEXUS had some hands-on time with the new processor to get an idea of how well it will perform once it's released: 'Intel's new 45nm Penryn core adds more than just a clock and FSB hike, so much so that even a dual-core Penryn is able to beat out a quad-core QX6800 under certain circumstances.'"
This discussion has been archived. No new comments can be posted.

  • But the combination of trying to find how to easily get to the real article while also fighting "Intellitext" ads proved too much for me. I am a weak weak man.
    • by Tom Womack ( 8005 ) <tom@womack.net> on Wednesday April 18, 2007 @09:17AM (#18781881) Homepage
      Firefox NoScript is the answer to all this kind of stupidity; I think it's worth using Firefox for NoScript alone.

      Also, remind your hosts file that intellitxt.com is a synonym for 127.0.0.1

      Yes, this is depriving hexus of advertising revenue. If they want advertising revenue, they should produce adverts which do not deeply infuriate their readers. Intelligently-targeted intellitxt might actually be usable, but to have every occurrence of 'computer' hyperlinked to Dell's store is of no use to anybody.
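      (Concretely, that means a hosts entry along the lines of "127.0.0.1 www.intellitxt.com" -- adjust the hostname to whichever domain the ad script actually loads from.)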
      • by TheThiefMaster ( 992038 ) on Wednesday April 18, 2007 @10:11AM (#18782709)
        Or just use the AdBlock Plus extension, which blocks all the advert scripts without disabling any of the other scripts on the site.

        It also has the advantage of blocking gif image ads on other sites that NoScript misses.
      • Re: (Score:1, Insightful)

        by Anonymous Coward
        Not just the infuriating ads... look at the frigging article content.

        It's utterly clueless. The interpretations are irresponsible guesswork and often flat-out wrong. Wish I had the time right now to list the major factual errors there -- fortunately they outright glare at any (even remotely informed) reader. Outside the mistakes, this "review" boils down to just a mix of buzzword bingo and PR handout paraphrasing, the most gaping holes in the author's comprehension carpeted over with tiring "hip and c
      • by Ant P. ( 974313 )
        I would use the hosts file thing but then my webserver logs get filled with urlspam and 404 errors. I really wish there was an IP range reserved for blackhole routing stuff.
        • You could always use one of the class "E" reserved addresses. Most IP stacks will return 'invalid argument' or similar to a process trying to connect to one of these addresses, so there won't be a huge timeout. You could also tell your webserver not to listen on 127.0.0.1.
    • by pkulak ( 815640 )
      Really? I didn't have any problems at all [privoxy.org].
  • Quick summary (Score:5, Informative)

    by Anonymous Coward on Wednesday April 18, 2007 @08:58AM (#18781595)
    If your app benefits from SSE4 optimizations, the gains compared to the current Core 2 can be giganormous (DivX encoder: +85% at equal clock). Otherwise, expect a per clock advantage of about 10%.
    • by plasmacutter ( 901737 ) on Wednesday April 18, 2007 @09:05AM (#18781713)
      aren't most app developers still working their way into SSE3? (for instance, mplayer only mentions SSE2 in configuration and initialization, and from what I remember even Mac OS X on Intel doesn't fully utilize SSE3)

      what's the point of even trying for SSE3 or even SSE4 when they'll just plunk down SSE5 within the next 6 months..
      • Re: (Score:3, Insightful)

        by Kjella ( 173770 )
        Simple:
        Sale in generation n: Xn%

        Market share of instruction set introduced in last generation: X1%
        Market share of instruction set introduced two generations ago: X1%+X2%
        Market share of instruction set introduced three generations ago: X1%+X2%+X3%

        Sure you can go for SSE3 today... or wait for SSE5 which will come in 6 months + several years to get actual market share.
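        For a rough sense of the numbers: if each generation were, say, 25% of the installed base, an instruction set introduced three generations back would already be in roughly 75% of machines, while a brand-new one starts near zero.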
      • by LWATCDR ( 28044 )
        If you are running simulations or other custom heavy number-crunching applications, then yes, you will take the time to recompile your code for SSE4. Not everyone runs off-the-shelf software.

        • Re: (Score:3, Informative)

          by TheSunborn ( 68004 )
          And you would of course first need to add automatic SSE4 support to your compiler.
          • by init100 ( 915886 )

            I guess that you never heard of the Intel C/C++/Fortran Compiler? They will surely include such support almost in sync with their processor releases.

          • by LWATCDR ( 28044 )
            As a few people have already pointed out. Intel probably already supports their own extensions. Also a lot of places that develop these types of custom applications will take the time to hand optimize critical path code.
          • Any compiler that can automatically use SSE3 and SSSE3 is probably good enough that adding SSE4 shouldn't be too hard. However, taking full advantage of the SSEx instructions will continue to require use of a well-written math library. Too many of those operations are too specialized, and the programmer would have to write the code in a certain style for the compiler to recognize the opportunity for using one of the more obscure instructions.
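            To make that concrete, a minimal sketch (plain C, illustrative only) of the loop shape an auto-vectorizer can actually handle: flat arrays, no aliasing, no data-dependent control flow. Anything messier is usually where the hand-tuned math library comes in.

            #include <stddef.h>

            /* With something like gcc -O3 -msse4.1 (or icc), a loop in this shape maps
             * straightforwardly onto packed SSE multiply/add instructions. */
            void saxpy(float *restrict y, const float *restrict x, float a, size_t n)
            {
                for (size_t i = 0; i < n; i++)
                    y[i] = a * x[i] + y[i];
            }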
      • by shawnce ( 146129 )
        1) SSE4 adds instructions that makes auto-vectorization by a compiler easier for a larger set of code.
        2) High performance code is often written to pick, at runtime, the implementation that works best on the processor it is running on (one tuned for SSE3, one tuned for SSE4, etc.).
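        A minimal sketch of that kind of runtime dispatch, assuming GCC/Clang on x86 and its __builtin_cpu_supports() feature check; the "tuned" variants here are illustrative stand-ins, not real SSE code:

        #include <stddef.h>
        #include <stdio.h>

        /* Plain C fallback; real code would have genuinely tuned SSE3/SSE4.1 versions. */
        static float dot_generic(const float *a, const float *b, size_t n)
        {
            float s = 0.0f;
            for (size_t i = 0; i < n; i++)
                s += a[i] * b[i];
            return s;
        }
        static float dot_sse3(const float *a, const float *b, size_t n)  { return dot_generic(a, b, n); }
        static float dot_sse41(const float *a, const float *b, size_t n) { return dot_generic(a, b, n); }

        /* Function pointer chosen once at startup, based on what the CPU reports. */
        static float (*dot)(const float *, const float *, size_t) = dot_generic;

        static void pick_dot_impl(void)
        {
            __builtin_cpu_init();                      /* GCC/Clang builtin: queries CPUID */
            if (__builtin_cpu_supports("sse4.1"))
                dot = dot_sse41;
            else if (__builtin_cpu_supports("sse3"))
                dot = dot_sse3;
        }

        int main(void)
        {
            float a[4] = {1, 2, 3, 4}, b[4] = {4, 3, 2, 1};
            pick_dot_impl();
            printf("dot = %f\n", dot(a, b, 4));
            return 0;
        }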
      • by init100 ( 915886 )

        Why is that a problem? It isn't like SSE2 is going away just because SSE4 is coming through the door. And it isn't necessarily the case that SSEx+1 is "better" than SSEx, since the different SSE instruction sets may have slightly different target applications. One project may mostly benefit from SSE2, while another may mostly benefit from SSE4. Some projects may not receive any benefit from certain SSE revisions, and thus wouldn't care to implement them.

    • Re: (Score:3, Insightful)

      by Trelane ( 16124 )

      expect a per clock advantage of about 10%.
      If my calculation takes 9 days instead of 10, I'd call that a win.

      The other question is power consumption.

      • The other question is power consumption.
        I think (could be wrong) a smaller chip inherently uses less power.
      • Penryn is supposed to be lower-power at higher clock speeds than the current C2D for the same core count.

        10% more IPC + 10% higher clocks + same (or possibly smaller) power budget = a formerly 10 days job becomes a 8 days job without any CPU-specific tuning or extra cooling/power costs. Sounds like a good deal to me - even more so considering that I am still using a 3GHz Northwood as my primary PC.
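        (Roughly: 1.10 × 1.10 ≈ 1.21× the throughput, so a 10-day run drops to about 10 / 1.21 ≈ 8.3 days, before counting any SSE4-specific gains.)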
    • by jimicus ( 737525 )
      If your app benefits from SSE4 optimizations, the gains compared to the current Core 2 can be giganormous (DivX encoder: +85% at equal clock). Otherwise, expect a per clock advantage of about 10%.

      Particularly bloody awkward, then, that hardly anyone codes anything for general-purpose PC usage in assembler any more, and compilers don't get updated and optimised to take full advantage of new instruction sets that quickly.
  • by segedunum ( 883035 ) on Wednesday April 18, 2007 @09:02AM (#18781665)
    I wonder what "under certain circumstances" means, because I couldn't gather it from the article. Also, they simply looked at some systems pre-configured by Intel. Not great.
    • Re: (Score:1, Insightful)

      this reminds me of the core2 duo upgrade to the macbook (and MBP) lines claiming leaps and bounds greater performance from the cpu... when in reality that performance is because the C2D series have double the cash of the CD series (at least on mac platforms).
      • Re: (Score:1, Funny)

        by Anonymous Coward
        I find that double the cash also improves my girlfriend's performance.
      • by fitten ( 521191 )
        So... was the performance better or not? Why would it matter how that performance was gained? Is not 20% faster, well... 20% faster?

        Damn! This computer is 20% faster because of X and not Y! I think I'll throw it away! Those bastards!
        • the point is they're overcharging for that arguably negligible increase in cpu power. they could have easily just upgraded the cache on the core duo series and left it at that.

          it's comparable to requiring you to buy leather in order to get the 4-door variant of a family sedan.
          • by maxume ( 22995 )
            Being able to charge people what they are willing to pay (rather than their cost) gives them good reason to build new fabs, which generally pushes the bottom of the market down faster than charging cost would. The notion that it isn't possible for it to be win-win is silly (but people get technology based on their willingness to pay for it rather than its availability... boo freaken hoo).
      • Re: (Score:3, Insightful)

        by tomstdenis ( 446163 )
        That's bullshit, the core2 is a different design, not just a larger cache compared to the core. First, the core is a pentium M, which iirc has 2 pipes for ALU, one of which does load/store as well. The core2 has 3 pipes for ALU and dedicated pipes for load/store.

        The core2 is faster because fundamentally the IPC of the core is a lot higher on average. The larger cache does help but the benefits decay exponentially. So from the 1MB and 2MB parts to the 4MB part the benefits are not as high as you'd thin
    • by Ngarrang ( 1023425 ) on Wednesday April 18, 2007 @09:15AM (#18781863) Journal
      As with all computing benchmarks, YMMV.

      There are applications where CPU speed is a marginal component of the speed. Some apps require large memory to run correctly, or fast disk access, or fast graphics access.

      Will this new processor benefit the tasks that 95% of us do each day, like e-mail, web browsing, word processing and slashdot posting? More speed will certainly allow me to open more windows at once, along with an increase in RAM. The performance should be a boon for the gaming and science communities, though. Optimize your app for this processor and watch the simulation fly! Is there anything in most OSes that could benefit from these advanced optimizations?

      I wish we could see faster advances in the performance of memory and drive access to match all of this CPU wizardry. With the growing presence of solid-state disk drives, I wonder if we will see a new SATA/SAS version that can support the rates a RAM drive is truly capable of.
      • Of course this new processor won't help you read faster, think faster or type faster; and e-mail, web browsing, word processing and slashdot posting have been constrained by human fingers and brains rather than by inadequate computation since the Pentium. If you're using gnome, put a System Monitor and a Frequency Monitor in your panel and see just how rare it is that the Frequency increases above bare-minimum or the load average bar leaves the very bottom of the window.

        You can already open, on a five-year
        • by dbIII ( 701233 )
          No - a 1% increase in speed on a job one guy in the office is running would save him more than four hours of run time per node. Some completely CPU bound stuff still takes weeks with 8 CPUs at 2GHz. We need more power!

          As for home computers - digital video manipulation needs as much cpu power as it can get and is becoming popular.

      • Well AMD has already solved that problem; AMD's DSDC is not designed just for using 2 CPUs at once. The DSDC is designed for connecting RAM and other system resources as closely to the CPU as possible, without any latency. With AMD's setup the RAM connects through the DSDC, plus AMD is shortening the distance between CPUs on DSDC for decreased latency. Oh, and AMD is expecting 12.4GBps bandwidth for system memory. If you want data access, go AMD. You don't have SSE4, but you really don't have any apps that have SSE4 use, b
    • by dave420 ( 699308 )
      You didn't read the article very well, then! There was a graph that showed exactly what happened when the dual-core beat the quad-core, and they explained exactly what they were doing to get those results, and exactly how they achieved them. SSE4 is the answer, coupled with DivX's codec being re-written to utilise SSE4. Encoding to DivX is what beats the pants off the quad-core, which is rather interesting, as the quad-core chips are frequently sold as the heart of video-editing.
  • by bcmm ( 768152 ) on Wednesday April 18, 2007 @09:06AM (#18781735)
    According to Wikipedia [wikipedia.org], Penryn is intended as a laptop processor.

    Does it seem odd to anyone else for Intel to launch a new instruction set on a laptop CPU? Are portables that dominant these days?
    • doesn't matter... (Score:5, Insightful)

      by plasmacutter ( 901737 ) on Wednesday April 18, 2007 @09:11AM (#18781801)
      They finally applied some common sense, and are actually pursuing their performance per watt optimization path.

      by engineering their chips for portables first, they can integrate the same chips into desktops and get the same kind of power conservation in desktop units.

      additionally, by investing their r&d straight into laptop chips they don't end up having to spend extra later to re-engineer the chip for portables.

      IMHO this is the first smart move from a lumbering corporate giant I've seen since Toyota shipped compacts to the US in the mid '70s.
    • Yes, the portable market has been growing faster and I believe has been larger than the desktop market for quite a while.

      Go into Best Buy, CompUSA, the Apple Store or any other major retailer. You will see a lot more floor space given to laptops than to desktops.
    • by 644bd346996 ( 1012333 ) on Wednesday April 18, 2007 @09:17AM (#18781887)
      Yes, laptops are dominant, or more precisely, power efficiency now matters. That's why Intel threw away the NetBurst/P4 architecture and developed Core from the Pentium M architecture. Laptops are more profitable, and people are starting to care about noise and power consumption in desktops and HTPCs as well.

      This seems to be a new pattern for Intel. The Core processors were all mobile oriented, and the Core 2 introduced desktop processors, too. The mobile processors are now being treated as the flagship products. And for good reason, too. Intel seems to be the best when it comes to laptop chips.
      • Intel's switch from NetBurst/P4 to the Pentium M architecture has more to do with performance than noise and power consumption. P4s maxed out at 4GHz due largely to heat constraints, and when benchmarks started making it obvious that the Pentium M outperformed it at half the clock rate, it became apparent that the Pentium M was a superior architecture with room to grow, and thus emerged the Core 2 Duo.
        • Power consumption and fan noise are the symptoms of an inefficient processor. The P4 was abandoned not because it couldn't perform, but because it put out so much heat that the cooling solutions necessary to get the P4 past 4Ghz were too expensive and/or loud for the majority of the market. Essentially, the P4 architecture could only be clocked up after a die shrink. The P-M architecture was the only alternative intel had, and it was not at all obvious at first that it would be able to clock up significantl
    • No, but the people who BUY portables are that dominant. It's the same logic that MS used to get Win95 into business - it runs on people's home machines OK, and Mr PHB wants his work machine to be equally cool, so it is Declared that they'll switch over.
    • by Kjella ( 173770 ) on Wednesday April 18, 2007 @09:26AM (#18782015) Homepage
      Does it seem odd to anyone else for Intel to launch a new instruction set on a laptop CPU? Are portables that dominant these days?

      Over the desktop? Not really, given how much the market has grown in recent years. The price of a laptop is not far above that of a comparable PC, and hard disk space and GPU power are enough that most people don't have to compromise.

      There are a few reasons to have desktops:
      1. Large monitors
      2. Large diskspace
      3. Better graphics cards
      4. You want to tinker with it, upgrade etc.

      But if you're not really falling into any of these four, there's not really much of a reason to go with a desktop, unless you know it'll be fixed in one location 90% of the time. Many people don't have a dedicated "computer area", they sit down at a suitable desk, use it then afterwards pack it away. Many people want to take it places, school, work, friends, cabin, road trips, whatever. Most people want that over the three 5 1/4" bays (DVD-burner and ???), four 3 1/2" bays (2-500GB disk + ???), 7 PCIe expansion slots (GFX card + ???) and all the other empty space they get in a desktop.
      • additionally, if all you want is large diskspace you can always buy a detachable set of firewire drives (single disks on up through raid arrays).

        I have a desktop and a portable atm, but my next hardware upgrade (a long time in the future) will probably have such a configuration, especially since they upped the resolution on the 17" macbook pro to 1680x1050.
      • There are a few reasons to have desktops:
        1. Large monitors
        2. Large diskspace
        3. Better graphics cards
        4. You want to tinker with it, upgrade etc.


        In the first two cases, there's even less benefit; you can plug a laptop into a monitor and external hard disk when at your desk, and do without them when you're travelling.

        Personally my next computer purchase will be a laptop because portability is a more compelling benefit than a speed increase I'm unlikely to notice in everyday use.
      • There are a few reasons to have desktops:
        1. Large monitors
        2. Large diskspace
        3. Better graphics cards
        4. You want to tinker with it, upgrade etc.


        1. You can do this whether you have a laptop or not.
        2. External drives, and 200GB internals, allow laptops to equal desktops
        3. Some laptops allow you to swap, and there are many good laptops with high-end video ability
        4. OK, you got me, but you don't have the majority of buyers and I think that's why laptops and semi-laptops will pull away with the market
      • by ponos ( 122721 )

        But if you're not really falling into any of these four, there's not really much of a reason to go with a desktop, unless you know it'll be fixed in one location 90% of the time. Many people don't have a dedicated "computer area", they sit down at a suitable desk, use it then afterwards pack it away. Many people want to take it places, school, work, friends, cabin, road trips, whatever.

        First, I'd like to add another reason: for the same performance, desktops are way cheaper. Then, even though your rea

        • by Baki ( 72515 )
          I bought a "desktop replacement" laptop with docking station 2 years ago. It was intended to be on my desk 99% of the time. I use it with an external monitor and keyboard.

          Yes, desktops cost less for the same performance, maybe 60 or 70% of the price of the laptop. However:
          • the laptop is absolutely quiet. I work in a silent environment and cannot stand the noise from even 'silent' fans. silencing a desktop does add quite some extra cost.
          • it uses less power, so you earn some of the extra cost back over the
      • by julesh ( 229690 )
        There are a few reasons to have desktops:
        1. Large monitors
        2. Large diskspace
        3. Better graphics cards
        4. You want to tinker with it, upgrade etc.


        5. You want the performance of a desktop hard disk, which generally is substantially faster than an equivalent laptop one
        6. You want multiple hard disks, because you need a RAID array (either for speed, reliability or both)
        7. You want to perform an application with it that requires add-on hardware that isn't supported by the laptop, and the hardware you want to use isn'
        • 1, 2, 8, 10 aren't valid.

          My 12" Powerbook is routinely hooked up to my 23" HDTV. Yes, it can play 720p videos just fine.

          I have a firewire external drive, so my laptop has 200GB of possible storage.

          Multi monitors? My powerbook will span both displays at the same time.

          10) Laptops with poor cooling overheat. Usually they run Windows, because Windows has inconsistent power and fan controls. Laptops set up to run their fans properly don't overheat unless under load for long periods. As in a server, but you
          • multi-monitors *can* mean more than 2, you realize. My current tower has 2 19-inch Samsungs, and I'll be adding a 24-inch Dell very soon. 3 monitors (19/24/19 setup) is something my macbook pro certainly can't do, and I've yet to see a laptop that can...

            As for external drives, not many laptops have eSATA, or even FireWire 800, so if you need performance on the disks (10k RPM drives really do need a SATA or FW800 connection).

            Now, I have a tower and a laptop. The tower is at my desk, my laptop comes with me. I could I s'p
            • by mabinogi ( 74033 )
              If the only users that use desktops are the ones that want 3 or more monitors and 10k RPM drives with Firewire 800 support, then that pretty much proves the original point - that it's no surprise that Intel is focusing on the notebook chips first.

              It's funny how often Slashdotters get so carried away with showing us all how special and ub3r l33t they are that they completely lose sight of the point that's being made.
              • Look, if a person is using 2 monitors, has several external disks, and doesn't have the need for portability, they may as well get a desktop, was my point. I'm an extreme example, sure, but desktops are cheaper for $$/performance, and the added space they take up is irrelevant if you already have a pile of disks and another monitor...
        1. Put in 4x1GB sticks for about $300.

        Running Linux, this means you can cache all of /usr/bin and much of /lib and /usr/lib in RAM before you fire up your desktop. Then all your apps start up straight out of RAM, and pretty much all other disk access as well (see my similar comment [slashdot.org]).

        The wet dream of inexpensive, high-density RAM is now a reality. But none of us suspected that the limiting factor in improving desktop performance would be, of all things, finding an inexpensive desktop motherboard that t
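        For illustration, a minimal sketch (Linux/POSIX assumed, error handling kept crude) of hinting the kernel to pull a directory's files into the page cache ahead of time, roughly what the parent describes for /usr/bin and the libs:

        #define _POSIX_C_SOURCE 200112L
        #include <dirent.h>
        #include <fcntl.h>
        #include <stdio.h>
        #include <sys/stat.h>
        #include <unistd.h>

        /* Best effort: ask the kernel to schedule readahead for every regular
         * file in 'dir' (POSIX_FADV_WILLNEED is only a hint). */
        static void warm_dir(const char *dir)
        {
            DIR *d = opendir(dir);
            if (!d) return;
            struct dirent *e;
            struct stat st;
            char path[4096];
            while ((e = readdir(d)) != NULL) {
                snprintf(path, sizeof path, "%s/%s", dir, e->d_name);
                if (stat(path, &st) != 0 || !S_ISREG(st.st_mode))
                    continue;                 /* skip subdirectories and the like */
                int fd = open(path, O_RDONLY);
                if (fd < 0) continue;
                posix_fadvise(fd, 0, 0, POSIX_FADV_WILLNEED);
                close(fd);
            }
            closedir(d);
        }

        int main(void)
        {
            warm_dir("/usr/bin");
            warm_dir("/lib");
            warm_dir("/usr/lib");
            return 0;
        }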

    • by julesh ( 229690 )
      According to Wikipedia, Penryn is intended as a laptop processor.

      Does it seem odd to anyone else for Intel to launch a new instruction set on a laptop CPU? Are portables that dominant these days?


      The Wikipedia article is misleading. The chips that were tested were Yorkfield and Wolfdale, which are listed under the Desktops section. Penryn is the generic name for the entire 45nm Core 2 family, as I understand it.
    • Re: (Score:3, Informative)

      by Dwindlehop ( 62388 )
      Wikipedia is wrong.

      Disclaimer: I am an employee of Intel, but I do not speak for Intel. This post reflects my opinions and not those of Intel Corporation.
  • Poor AMD (Score:3, Insightful)

    by xBOISEx ( 1089557 ) on Wednesday April 18, 2007 @09:09AM (#18781777) Homepage
    I feel bad for AMD, it seems like they're really taking a thrashing this round. This surge in processor technology is just the kind of thing I like to see though. Now to actually harness all that power...
    • The technologies are staggered between the two companies, so every other year the other company leaps ahead. It's not that AMD is behind, it's that their last release is behind Intel's new release. The same thing happened to Intel when AMD leaped ahead.

      I'm just thankful AMD can compete. If they were still making crappy intel clones, we'd be paying quite a bit more for hardware.
    • Re: (Score:2, Interesting)

      by RockoTDF ( 1042780 )
      In a few years when everyone starts hitting the RAM ceiling for 32 bit CPUs, 64 bit will have to take off. Right now AMD has the lead on consumer-priced 64 bit processors, as well as the patent on the x86_64 architecture, which they have licensed to Intel. It is entirely possible that with the next mass jump (like to the Pentium 12 years ago) a completely new architecture altogether will take over, although people love their legacy apps so much that x86_64 still has a good shot at it. But as we have seen
      • Re: (Score:3, Interesting)

        by tomstdenis ( 446163 )
        There aren't a lot of applications which can truly take advantage of 64-bit integer registers. In fact, bignum math is about the only one that really comes to mind.

        What does matter is the address space. It isn't even the memory [as in physical memory], but virtual address space. As more and more mapped memory is used by applications like databases, it is nice to be able to just logically access it via a mmap.

        For example, you can mmap a 10GB file to memory, then poke at it like you would a C array, even though you may only have 512MB in the system. That's something you just can't do in a 32-bit process even if you had the memory.
        • For example, you can mmap a 10GB file to memory, then poke at it like you would a C array, even though you may only have 512MB in the system. That's something you just can't do in a 32-bit process even if you had the memory.

          Sure about that? Intel processors have had, for a long time, features to window very large address spaces into the 32-bit addressable region. "Windowing is a pain in the ass" you say? Well, I say that doubling the size of every pointer from 4 bytes to 8 bytes is more of a pain in the

          • Re: (Score:3, Informative)

            by tomstdenis ( 446163 )
            Um, ia32 could never address more than 4GB per process (and often it was 2GB). Even though with PAE you could put segments anywhere in a 36-bit address space. Most modern C compilers have no idea about "far pointers" [think back to the 16-bit days] so you're still stuck to at most 32-bits of address.

            As for pointers being twice the size, yeah that's a pain. You can code around that if you know you'll be indexing something smaller than 4GB in size (hint: x86_64 can still efficiently use 32-bit registers).
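            For illustration, a minimal POSIX sketch (64-bit build assumed) of the grandparent's mmap point: map a file far larger than RAM and poke at it like a C array, with pages faulted in from disk on demand -- something a 32-bit process simply lacks the address space for.

            #include <fcntl.h>
            #include <stdio.h>
            #include <sys/mman.h>
            #include <sys/stat.h>
            #include <unistd.h>

            int main(int argc, char **argv)
            {
                if (argc < 2) return 1;
                int fd = open(argv[1], O_RDONLY);
                if (fd < 0) { perror("open"); return 1; }

                struct stat st;
                if (fstat(fd, &st) != 0) { perror("fstat"); return 1; }

                /* In a 64-bit process this works even for a 10GB file on a 512MB box. */
                const unsigned char *p = mmap(NULL, st.st_size, PROT_READ, MAP_PRIVATE, fd, 0);
                if (p == MAP_FAILED) { perror("mmap"); return 1; }

                printf("first byte %u, last byte %u\n",
                       (unsigned)p[0], (unsigned)p[st.st_size - 1]);

                munmap((void *)p, st.st_size);
                close(fd);
                return 0;
            }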
            • by faragon ( 789704 )
              AFAIK, the 36-bit address space is not intended for a *single* process, but for the OS; i.e., the OS can manage the 36-bit address range, but with 32-bit processes.

              P.S. I lived through the 16-bit era, and I hated it absolutely, especially those weird "far calls" (seg:off, i.e. (seg << 4) + off). I hated, and still hate, programming the 8086/V20/V30 in a multi-segment approach (the tiny (64K) / small (64/64K) models were OK, but compact (64/XKB) and huge (X/YKB) were terrible to deal with, being hard to get a nice code design without
              • I'll raise you my two penn'orth: Would sir like a (48-bit, I think) flat memory model in the AMD64 architecture?
                • by faragon ( 789704 )
                  As soon as you can address all your physical memory with a single register, I'll give the OK; because the opposite, i.e., multiplexing registers to address more memory, is both *slow* and painful.
            • After a bit of investigation, I found that it is possible to do something like bank switching (AWE [wikipedia.org]), again a weird thing like the EMS and XMS bank-switched memory back in the day. It could be useful for spare/temporary data, but it's outside the scope of the compiler, i.e., it requires explicit memory handling.
        • by init100 ( 915886 )

          There aren't a lot of applications which can truly take advantage of 64-bit integer registers. In fact, bignum math is about the only one that really comes to mind.

          You forget the extra 16 general purpose registers that AMD64 introduced, as well as the extra eight SSE registers. Doing more operations on registers is always nice, though the extra registers also take a longer time to save and restore in a context switch.

          • Whoops, yeah, of course. Well, the extra SSE registers make a flat register-file FPU possible (e.g. -mfpmath=sse), which is nice as it avoids the stack dancing that is required with the x87 stack.

            I meant in terms of it being "64-bit" though.
      • I thought Intel had the lead on consumer-priced 64-bit chips with the Core 2.
      • In a few years when everyone starts hitting the RAM ceiling for 32 bit CPUs, 64 bit will have to take off.

        I find that statement laughable. There are other ways to utilize more than 4 gigabytes of address space without having to move to full-blown 64-bit addressing. Doesn't anybody remember the days of DOS? Under 16-bit real mode, the CPU could only directly address 64 kilobytes! Yet this was not a fundamental problem. Segmented addressing expands the range to a full megabyte. And doesn't anyone recollec

        • Re: (Score:1, Interesting)

          by Anonymous Coward
          "I find that statement laughable."

          Ditto to your statement. Segments, 'extended memory', 'EMS', etc. were necessary, but a bad idea. We all got a lot happier when everything became 32-bit. Let's not make the same bad mistakes over and over again.
      • by julesh ( 229690 )
        In a few years when everyone starts hitting the RAM ceiling for 32 bit CPUs, 64 bit will have to take off.

        Current average memory on a new PC = 1GB.
        Memory ceiling for 32 bit CPUs with PAE = 16GB.
        Number of doublings required to hit ceiling = 4
        Length of doubling per Moore's Law = 18 months
        Total = 6 years

        I think we're farther off that limit than you think.

        Right now AMD has the lead on consumer priced 64 bit processors

        I'm not sure where you get that idea from. My current Celeron D was cheaper than any AMD machi
        • The Celeron is not a 64 bit processor, nor is the core 2 duo, etc. They have Intel64 virtualization, but are not true 64 bit CPUs. I was a kid at Xmas '95 when we got our first Pentium 1, so that's my frame of reference as to when the switch occurred, and I remember everyone else getting computers like mine around that time as well. The next jump will probably not occur for a while. There is an article out there somewhere (I think it was on here a few months ago) on why Linux will have an advantage when th
          • Scratch that; the virtualization tech is apparently just the term for using the two at the same time. But I still stand by my statement that AMD owning the patents for x86_64 will still help, since Intel is licensed to use their architecture.
      • as well as the patent on the x86_64 architecture which they have licensed to Intel
        Correct me if I'm wrong, but I'm pretty sure x86-64 is an open standard that AMD doesn't charge royalties to use (if they even own patents on it, which again I don't think they do).
  • by GonzoTech ( 613147 ) on Wednesday April 18, 2007 @09:16AM (#18781877)
    The Penryn core is just the first. Wait for the Teller core to come out. Its sleight-of-hand techniques trick you into thinking it has actually outperformed other chips.
    • I wonder who all will get this? Probably the best comedian/magicians of all time!
    • by snp-7-3 ( 777049 )
      Sounds like BULLSHIT to me! :-D
    • The Penryn core is just the first. Wait for the Teller core to come out. Its sleight-of-hand techniques trick you into thinking it has actually outperformed other chips.


      It's important to note that Intel's chipset for the Teller core isn't going to have audio support. OEMs will need to add additional audio output hardware if you want to hear anything from a Teller-based system.
  • ... and so is Hexus Webserver, apparently.
  • by Tom Womack ( 8005 ) <tom@womack.net> on Wednesday April 18, 2007 @09:23AM (#18781957) Homepage
    www.anandtech.com has a presumably very similar review (since these are lists of benchmarks which the journalists observed being run by Intel on Intel-provided systems), and enough bandwidth that you can actually get through to it.

    It's a little annoying that these chips require different voltage regulators from the ones on current motherboards, since the chipsets are the same and changing the motherboard adds £80, some hours of fuss and an inordinate number of screws to what should be a trivial CPU upgrade, whilst bare motherboards, and even motherboard+CPU pairs, don't seem to sell well on ebay.

  • The real question is... Will it blend?
  • Great! However... (Score:3, Insightful)

    by Bullfish ( 858648 ) on Wednesday April 18, 2007 @10:05AM (#18782633)
    All we need now is software that will take advantage of all these cores.
  • Re: (Score:2, Interesting)

    by u0berdev ( 1038434 )
    It's faster, yes. But I can't wait to see how much less power it uses. The main benefit I see from Intel moving to 45nm should be getting speeds at or above the Core 2's while using less power. As everyone continues on the path to 'greener' tech, this will be one of the biggest selling factors for the Penryn family.

    And let's not forget that when this comes out in '08, the Core2's will get even cheaper! Heck I'm still excited about the next price drop for the Core2's this 22nd ( http://www.anandtech.com/cpuchipsets/inte [anandtech.com]
    • Well, the article pointed out that TDP (thermal design power) ratings are likely to remain the same. They paraphrase Intel as saying they used the reduction in process size to pack in more transistors instead of pursuing the power-savings route.
  • Enhanced Dynamic Acceleration Technology is supposed to be for the Santa Rosa platform, but I have trouble believing they would limit it to just a mobile platform. Would desktops not benefit from this? Intel only mentions it's for mobile Core 2 processors... although Dynamic Acceleration is present in all the chips now, just not as "enhanced".
  • Kinda Pointless (Score:4, Insightful)

    by walmartshopper67 ( 943351 ) <jtp0142.rit@edu> on Wednesday April 18, 2007 @11:19AM (#18783639)
    Well, I RTFA, and it was pointless. FTFA: "and we're absolutely adamant that the benchmarks were chosen to show the two Penryn-based CPUs off in the best possible light." -and- "Further, it's not an apple-to-apple comparison as both 45nm processors were clocked in at 3.33GHz and the QX6800 at 2.93GHz. Our requests for clock and FSB parity were politely ignored." ...I appreciate the disclosure that it was in fact ruled by Intel and your requests were ignored, but with that, why did you do it then? If the whole thing is skewed by the manufacturer, you've just become part of their advertising campaign. Intel set it up; they weren't gonna set themselves up to fail. Besides, isn't benchmarking supposed to at least resemble a scientific-like process? If you were going to benchmark your own machines for whatever reason, would you set it up like this?
    • by tknd ( 979052 )
      Because if they didn't do it they wouldn't get on slashdot!
    • One could argue that a report on a misleading comparison is useful merely for pointing out the mischievous techniques.
      This can teach us a thing or two about the ways some manufacturers try to capture another part of your mind.
  • by The Real Nem ( 793299 ) on Wednesday April 18, 2007 @12:29PM (#18784897) Homepage

    Where's the multi-page version you insensitive clod?

    I kept searching for a multi-page option but I couldn't find one. After years of being conditioned to read articles over 12 pages or so, this layout just freaks me out. I couldn't find the combobox that let me jump to the conclusion. The page seemed way too long and daunting for me to process. And I kept expecting next links that never came!

    Take me back to the good old days where you could read a 12 page article and actually feel like you accomplished something.

  • is the comment:
    The real question is, we suppose, how well will AMD's Barcelona perform in comparison. We now know Penryn's potential, but AMD keeps us guessing.
    Despite the fact that Barcelona is supposed to be shipping at the end of Q2 and Penryn is not due until Q4, Intel seems to be showing stable, polished systems running their 45nm product whereas AMD seems to be holding back early Barcelona silicon.
    WHY IS THIS?

    I'd rather just believe that it's done by little elves running around.
