Intel Lindenhurst Xeon DP Platform Discussion

Steve from Hexus writes "Hexus.net has an article looking at Intel's latest Xeon platform: Lindenhurst, discussing the Paxville dual-core processor, E7520 core-logic, where it could go right for Intel, and where it could all go wrong." From the article: "If you're I/O bound by your threads in any way, you can hit problems (all threads touch the MCH, then there's a 266MiB/sec bus link to the I/O processors to cross, then the data hits disks or network hardware). If you're memory subsystem bound in any way, especially on a majority of compute threads, performance is likely gone. There's just too much resource sharing for it to all conceivably work well, especially compared to Opteron. I can foresee many a scenario where dual-core Opteron will give Paxville Xeon DP a beating."
  • by schon ( 31600 ) on Thursday November 03, 2005 @11:46AM (#13941211)
    there's a 266MiB/sec bus link

    Wow - that's a *LOT* of Tommy Lee Joneses and Will Smiths!
    • MiB clearly stands for something else here... I don't think the real MiBs would ever take the bus!
    • Re:Men in Black? (Score:5, Informative)

      by swillden ( 191260 ) <shawn-ds@willden.org> on Thursday November 03, 2005 @01:06PM (#13942030) Journal

      Wow - that's a *LOT* of Tommy Lee Joneses and Will Smiths!

      :-)

      Looking past the joke, for anyone who may be wondering why that 'i' is there, they're just being accurate. "MiB" is the abbreviation for "mebibyte", which is 2^20 bytes. The more "common" notation, "MB", is the abbreviation for "megabyte", which is 10^6 bytes.

      The terms "gibibyte", "mebibyte", "kibibyte", etc. were defined in 1998 by the IEC to disambiguate "megabyte", etc. The "giga", "mega", "kilo" prefixes from the SI units have always referred to powers of 10. With the advent of computers, it became convenient to use them to refer to powers of two that are close to powers of 10. So, "kilo" was used to mean 1024, "mega" was used to mean 1048576 and "giga" was used for 1073741824. The context was generally sufficient to disambiguate those usages from the standard powers-of-ten usages. Basically, everyone figured that if you were talking about computers, the prefixes referred to powers of two.

      But there are plenty of computer-related contexts where the prefixes have their traditional meanings. Hard disk drive storage sizes, for example, are measured with powers of 10 by drive manufacturers, but file systems generally use binary prefixes. This is why your 80GB drive shows up as only 74.5GB "formatted". It's not that lots of space is wasted by the formatting; the issue is that 80*10^9/2^30 = 74.5. The two measurements are using different units. Data rates are also traditionally specified in powers of 10. RAM sizes are powers of two.

      So, to disambiguate the prefixes while not disturbing the traditional meanings, the IEC coined a new set of binary prefixes, along with corresponding abbreviations. The new prefixes all end in "bi", for "binary".
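
      As a quick illustration of the arithmetic (a minimal Python sketch; the "80GB" drive is just the example above, not any particular product):

      ```python
      # MB vs MiB: same byte count, different units.
      def si_and_iec(size_bytes: int) -> str:
          gb = size_bytes / 10**9    # gigabytes: SI, powers of ten
          gib = size_bytes / 2**30   # gibibytes: IEC, powers of two
          return f"{gb:.1f} GB = {gib:.1f} GiB"

      # An "80GB" drive as the manufacturer labels it (80 * 10^9 bytes):
      print(si_and_iec(80 * 10**9))   # -> 80.0 GB = 74.5 GiB
      ```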

      • Mebi-bytes... does that mean they are sometimes bytes and sometimes not? Maybe-bytes.
      • The more "common" notation, "MB", is the abbreviation for "megabyte", which is 10^6 bytes.

        Also known as "marketing megabytes" in the storage and networking industries, because they let a bigger-looking number represent the same number of bytes.

      • "MiB" is the abbreviation for "mebibyte", which is 2^20 bytes

        hehehe, maybebytes... I'll stick with megabytes, TYVM.

        So, to disambiguate the prefixes while not disturbing the traditional meanings, the IEC coined a new set of binary prefixes

        Too bad they didn't get much community buy-in.

      • Right, except that nobody but pedantic dweebs uses the terms because the current usage has been ingrained in computer culture and the IEC failed to take into account that there is no reasonable way to actually pronounce "words" like gibibyte.
        • except that nobody but pedantic dweebs uses the terms

          I resemble that remark.

          there is no reasonable way to actually pronounce "words" like gibibyte.

          Huh? Try GI' BI BYTE. The i's are short. Works fine. Actually it's close enough in sound to gigabyte that people who don't know the term understand what I mean (though not the 10^9 vs 2^30 distinction, of course), but it's just distinct enough that those who know the term hear the difference. And the shorthand "gibs" works just as well as "gigs", too.

    • Because timings and MHz are ALWAYS 10^x, never 2^3x.
      And this data rate is obviously 266MHz*8bit or something comparable.
  • Lindenhurst? They ARE running out of names. I spent a couple months in Lindenhurst, Illinois when I was about 1. It's a sprawl-barf located just outside the doors of Six Flags.
  • by waspleg ( 316038 )
    anyone else read this as a sublimedirectory.com tag line?

  • gooooo Intel! (Score:5, Interesting)

    by tomstdenis ( 446163 ) <tomstdenis@gma[ ]com ['il.' in gap]> on Thursday November 03, 2005 @11:56AM (#13941308) Homepage
    cost 3 times as much as the 820D ... it's a copy of the 820D ... see where I'm going with this?

    The dual-core Intels may cost half as much as the dual-core Athlon64s, but they still suck twice as bad. What you save in initial purchase cost you lose in electricity bills and time doing work.

    The fact they're STILL making Netburst-based processors just sickens me. Give it up already and go P6 or something new. I mean, if they put half the money they put into the Netburst into the P6 designs of late, they'd already have a 2.5GHz P6 core that would give AMD a run for their money.

    I think the cat's out of the bag for the most part. And it's not like you're gonna sell a lot of dual-core based Dells to grandma so she can write emails.

    Times like this make me feel proud I'm an AMD whore :-)

    Tom
    • Re:gooooo Intel! (Score:3, Insightful)

      by IAmTheDave ( 746256 )
      The fact they're STILL making Netburst-based processors just sickens me. Give it up already and go P6 or something new. I mean, if they put half the money they put into the Netburst into the P6 designs of late, they'd already have a 2.5GHz P6 core that would give AMD a run for their money.

      Agreed. What ever happened to Intel leading the pack? Their processors are bloated, slow, and quite unfortunately behind the curve.
      • Re:gooooo Intel! (Score:3, Insightful)

        by tomstdenis ( 446163 )
        They invested too heavily in the MHz myth of the Netburst. To turn around and say "whoops, we're wrong" is hard. That and they have partners that ALSO invested in it.

        What does Dell use? "Dell uses Intel Pentium four processors (cue P4 sound theme)" ...

        It's probably not easy to say "Dell uses Intel P6 processors because the P4 sucks ass, we're sorry, we lied all this time." There is also a huge cultural gap between the engineers and marketers/VPs. I'm sure if any of the engineers escaped and bought an
        • Re:gooooo Intel! (Score:4, Insightful)

          by cowbutt ( 21077 ) on Thursday November 03, 2005 @12:34PM (#13941700) Journal
          They invested too heavily in the MHz myth of the Netburst. To turn around and say "whoops, we're wrong" is hard. That and they have partners that ALSO invested in it.

          Saying 'MHz myth of the Netburst' is a bit harsh. There was a time when it made sense - if it allowed Intel to sell processors that performed faster than AMD's and retailed for similar prices, who cared about the clock speed required to do this? Heck, this was pretty much DEC's strategy for the Alpha - design an architecture that's easily scalable to ever-faster clock speeds, and ramp up the performance by aggressively increasing the clock speed.

          But it was short-sighted of Intel to over-invest in such a strategy without any guarantees about power consumption, consequent heat output, or the growing importance of those issues to its customers.

          In the long run, though, this won't kill Intel, and they'll be back. I'd also expect them to learn from the experience, the same way that after the infamous Pentium FP bug, every processor has had field-upgradeable microcode to (hopefully) eliminate the chance that they'll need to perform a recall of that size - and expense - ever again.

          • The Netburst design, as you mention, was certainly not thought out for the long run. If you told me when the Athlon Slot A came out that a 2.4GHz part would be available that had two cores... I'd laugh my ass off.

            So would most anyone else.

            Yet AMD stuck with the basic design and kept improving the process. Because underneath it all ... the Athlon is actually a very good general purpose processor. It features [what seemed like at the time] a lot of redundant computing power that is quite easy to take advant
          • Re:gooooo Intel! (Score:4, Informative)

            by imroy ( 755 ) <imroykun@gmail.com> on Thursday November 03, 2005 @01:13PM (#13942090) Homepage Journal
            Heck, this was pretty much DEC's strategy for the Alpha - design an architecture that's easily scalable to ever-faster clock speeds, and ramp up the performance by aggressively increasing the clock speed.

            Except the Alpha was a RISC processor (and a pretty clean one at that), so its short pipelines didn't lose as much performance to branch mispredictions as the P4/Netburst does. IIRC, both the P4 and Athlon CPUs had to get up to around 1.4-1.5GHz before they beat the performance of the 800MHz 21264, the last and fastest Alpha produced. *sigh*


            • Well actually DEC/Compaq/HP cranked the Alpha handle a little further than that. You can still buy Alpha servers with up to 64 1.3GHz 21364 (EV7z) chips in them.

              Had Compaq stuck to the product roadmap instead of snuggling up to Intel over Itanium, the EV79 would have been out in 2004, shrunk to 0.13 microns + SOI, and available at speeds of 1.6GHz and 1.7GHz.

              God only knows what an EV8 on today's fabrication technology would have been capable of. What a total waste of ingenuity. And all thanks to a bunch o
              • That's right, the Alpha did eventually get over 1GHz. Thanks for the correction, it was early in the morning when I posted. And well said about the idiots at Compaq/HP. The Itanium is a pretty big failure from what I can tell. Intel and HP sunk billions of dollars into it and what do they have to show for it? A big, expensive, and hot processor that really only performs well on scientific number-crunching applications. I don't see it lasting much longer. The Alpha sure would have been an awesome processor

                • Unfortunately I'm a consultant who's specialised in Alpha and Tru64/TruClusters since their inception. I'm still getting regular work in this field, but it's on the decline now and I've been breaking out into HP-UX (PA-RISC/Itanium) and Linux to make up the slack. It would actually be better for me if Itanium and HP-UX succeeded rather than failed, so I'm keeping my fingers crossed that Intel don't screw up completely. HP-UX (though I don't like it as much as Tru64) is especially dependent on Itanium. If
            • Amen. I love my 333 MHz 21164. And by 333 MHz, I don't mean the crappy Intel 329 MHz clocks, but the quality DEC clocks that really run at 333.33333333334 MHz
            • AMD uses the DEC Alpha bus architecture on all the latest Athlon chips. And it works very well.
    • I just put together a Xeon based server. It was a rare case where a Xeon solution met my needs better than an Opteron based solution.

      My company is _very_ sensitive to power consumption. So, I picked a very new motherboard from Tyan, and a Xeon that supported Enhanced Speed Step. I figured that I'd install cpudyn, like I did with all of our AMD boxes, and save a few bucks on electricity.

      So, cpudyn doesn't work... because Speedstep isn't supported by Tyan's BIOS. I email Tyan, and I find out two things:

      * Tyan
      • Perhaps, in the future, you should check to see that the features you wish to use are supported on the platform/hardware you will be using :)

      • Better though is that the D series can only clock down to 2.8GHz, whereas the AMD64s can go down to around 1GHz [depending on your part]. Clocking from 3.2GHz to 2.8GHz doesn't save you that much power [maybe 10W at most ...].

        My AMD X2 is sitting here running Linux and is clocked when idle to 1GHz ... at 32C with a copper heatsink. The processor draws around 20-30W when idle, compared to the Intel processors which draw nearly double that at idle.

        In no way is a Netburst based processor a wise decision over the offerings of AMD.

        Tom
          My AMD X2 is sitting here running Linux and is clocked when idle to 1GHz ... at 32C with a copper heatsink.

          When you say "with a copper heatsink", you're implying "without a CPU fan", right?

          That's what amazes me about my AMD64. I use "fancontrol" to adjust the CPU fan speed in order to regulate the temperature and keep the machine as quiet as possible, and if I'm not working the processor the temperature sits at 89F with the fan turned off. And it's not like my case is some sort of wind tunnel, either;

          • When you say "with a copper heatsink", you're implying "without a CPU fan", right?

            No, sorry, I meant it's a huge honking copper heatsink with a low-RPM fan on it. The noisiest thing in my box is the case fan, which is a huge 80mm running at like 2500RPM. I opened a 3.5" slot in the front and the airflow through the case is fairly nice. Keeps the entire case relatively cool.

            Tom
        • Just a clarification:

          The power improvements in the Pentium D at idle using Enhanced Speedstep are not the trifle you seem to think they are.

          The reduction in core speed (12.5%) also comes with a reduction in voltage (1.4V -> 1.2V).

          Just do the math. Power is related to frequency times the voltage squared, so the two effects multiply rather than add:

          ( 2.8 / 3.2 ) * ( 1.2^2 / 1.4^2 ) ~ 0.64

          So, you get roughly a 36% decrease in power usage at idle.
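
          (For anyone checking that figure: a minimal Python sketch of the same calculation, under the usual assumption that dynamic power scales with f * V^2 and ignoring leakage. The frequencies and voltages are the ones quoted above.)

          ```python
          # Back-of-envelope SpeedStep saving, assuming P ~ f * V^2.
          f_full, f_idle = 3.2, 2.8   # GHz
          v_full, v_idle = 1.4, 1.2   # volts

          ratio = (f_idle / f_full) * (v_idle / v_full) ** 2
          print(f"idle power = {ratio:.0%} of full power")  # ~64%
          print(f"reduction  = {1 - ratio:.0%}")            # ~36%
          ```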
      • How exactly would the Xeon have met your needs better? I've not seen any such situations on servers for quite a while; AMD seems to have them beat in every area.
    • Re:gooooo Intel! (Score:3, Insightful)

      by Jeff DeMaagd ( 2015 )
      "The fact they're STILL making Netburst based processors just sickens me."

      So?

      The reason that Intel is still making Netburst processors is because chip development is a lot slower than the "speed of internet". Figure two to three years from concept to production. AMD took that long or longer to put out their A64 line. This is why Intel can't make large architecture shifts in a month.
      • Ok granted. But why did they bother with the 6xx and 8xx lines?

        If they put that money into porting the P6 first to 64-bit then next to dual-core they'd be behind AMD ... *BUT* would have a quality product.

        It sucks being second to the party but it sucks more being second AND spending a lot of money along the way. You think the 6xx and 8xx lines were free? Hell no. And now they're stuck trying to offload them. I have an 820 processor and I know for a fact it's shit [I bought it to run benchmarks on]. I
    • The fact they're STILL making Netburst based processors just sickens me.

      It's not like they haven't tried. It's just that they outsourced that particular project, and things didn't work out [theregister.co.uk]. More work for Portland, looks like...
    • Smart ass answer: Google for the words Merom, Conroe, or Woodcrest
      Somewhat useful answer: Wait for the second half of 2006 - your wish will be granted.
    • Well, Intel truly is the hardware equivalent of Microsoft. They'll only change when forced by the market from the tried-n-true formulas that made them zillions. Problem (for Intel) is chip product design cycles are very long and (very) expensive to turn around. I'm astounded that the Opteron systems from Sun/HP/IBM have not absolutely buried the Xeon market, but Dell still sells boatloads of slow, cheap (hot) Intel servers. You folks in IT, WAKE UP! Intel's in the server dungeon till at least 2009.
    • I mean if they put half the money they put into the netburst into the P6 designs of late they'd already have a 2.5Ghz P6 core that would give AMD a run for their money.

      They did and they have, and they sell shedloads of them. It's called the Pentium M, dual-core 65 nm versions of which will be available next quarter. The currently-available single-core Dothan version performs pretty awesomely [tomshardware.com], matching FX-55 and P4 EE even at gaming, and all at less than a third the CPU power consumption of the Pentium 4.

      • Which is all well and good. I'll wait till I can buy P6-based desktops again [e.g. new cores] before I applaud their efforts. If their dual-core 64-bit processor costs $900 next year ... they will have missed the mark unless it's a really fast core.

        My main reason for wanting a P6 based desktop is mostly just to test out "yet another architecture" but if they can also beat the new AMD64s [e.g. the 0.09um parts] in terms of watts per MIPS that would be impressive and useful.

        Tom
    • Seems like they'll be making more NetBurst CPU's with the Presler 65 nm core.
  • Terrible naming (Score:2, Insightful)

    by Killjoy_NL ( 719667 )
    Lindenhurst, Paxville.

    Who takes these names seriously these days?

    Pentium, Athlon, those are good names, just keep on following this pattern.
  • by pointbeing ( 701902 ) on Thursday November 03, 2005 @11:58AM (#13941330)
    Got a plain old dual processor 1GHz box that with video and hard drive upgrades is still competent. It does everything I need it to do, although processor- or memory-intensive processes are getting a bit sluggish. Rendering video takes a little time, but that's more because the application I use renders in a single thread - but I can play games and render video at the same time ;-)

    I still believe if you could remove all the latency from I/O subsystems in a modern PC you'd have more processor than you could use by a longshot - IMO high-end PCs just wait for data faster than older machines, and a lot of the performance boost you see with a new machine is simply masking latency in other subsystems.

    PCI-X and improved memory bandwidth will solve some of these problems, but it's a bandaid at best. I do tend to chuckle at people buying the newest/fastest peripheral, not understanding that a lot of the time the peripheral will talk faster than the nine(?) year-old PCI bus that's feeding it.

    When troubleshooting performance issues the component that's working at 100% capacity is *always* the bottleneck - and with most home and business users, that bottleneck is almost never the CPU itself.
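
    (A rough sketch of that troubleshooting rule in Python, using the third-party psutil library. The 90% thresholds are arbitrary illustrations, and disk busy_time is only reported on Linux.)

    ```python
    # Sample CPU and disk utilization for one second and report
    # whichever component looks pegged. Illustrative only.
    import psutil

    disk_before = psutil.disk_io_counters()
    cpu = psutil.cpu_percent(interval=1.0)      # % busy over the window
    disk_after = psutil.disk_io_counters()

    # busy_time is in milliseconds; over a 1s window it maps to a fraction.
    disk_busy = (disk_after.busy_time - disk_before.busy_time) / 1000.0

    if cpu > 90:
        print(f"CPU looks like the bottleneck ({cpu:.0f}% busy)")
    elif disk_busy > 0.9:
        print(f"disk looks like the bottleneck ({disk_busy:.0%} busy)")
    else:
        print(f"nothing pegged: cpu={cpu:.0f}%, disk={disk_busy:.0%}")
    ```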
    • although processor- or memory-intensive processes are getting a bit sluggish

      Sorry for being pernickety, but unless your install is getting a bit old and crotchety, or you have a dying hard drive, I fail to see why it would be any slower now than when it was new...

      On a wider point, I still use a laptop with a 366MHz processor (G3) and upgraded RAM for many everyday tasks, resorting to my 1.853GHz Athlon workstation for photo editing and other media work as well as file storage.

      At what point do we
      • Sorry for being pernickety, but unless your install is getting a bit old and crotchety, oryou have a dying hard drive, I fail to see why it would be any slower now than when it was new...

        Simple. Bloatware ;-)

        You're correct, though. If I ran the same OS and applications I used when I built the machine it'd run like a rocket.

      • Hey, my primary laptop is a P-III mobile at 600MHz that I bought for 100€. Slammed an extra 256Meg into it that I had lying around (total is now 512Meg; alas, it didn't take the 512Meg module that I had lying around, otherwise I would have had more). This setup runs WinXP like a charm (with the W2k theme, of course). I recently exchanged the hard disk from 6Gig to 80Gigs, which cost me 117€. So for 217€ I have a laptop that runs *everything* I need and cost not much at all. Yeah, photo manipulati
        • One thing I doubt a bit is DVD viewing on a P-III, especially the lower-clocked ones. I tried VLC on a P-III 800MHz and that's almost impossible.

          I remember first playing DVDs on a PII clocked at 366MHz IIRC. It's not very CPU intensive at all.

          If you can't do it with your 800MHz PIII, the software is bloated, or the cheap videochip is offloading lots of video processing to the CPU.

          First make sure you've got the latest drivers for your video chipset, and that all possible hardware acceleration options are tu

      • While we're adding stories of old systems, here's mine. My first motherboard and CPU when I left home was a K6-2 266MHz (with the 66MHz bus). For a while it served as the family server and I've recently resurrected it as a file server. It has a RAID 1 mirror using two oldish hard drives (80G+60G), 192M of ram, a 100Base-T network card, and runs Debian GNU/Linux. It serves my home directory from the RAID-1 volume via NFS (after a few drive crashes over the years, I want my data safe), a Cyrus IMAP store for
        • Am I the only one who feels old? When I left home for college in 1991, the computer that my parents gave me was an already-ancient Apple //c (the pseudo-portable of its day because it had a handle). I actually used a graphical word processor with it to write papers along with its serial mouse.

          The fall of my second year, I purchased a Mac IIsi with 9 megs of ram and a 40 meg hard disk for about $1200 (a real steal through the campus store). It ran a 68030 at 20 megahertz, with no math coprocessor. It w

  • Pointless? (Score:4, Insightful)

    by plumby ( 179557 ) on Thursday November 03, 2005 @12:01PM (#13941366)
    I'll admit that I'm no great expert on the details of multi-core, hyper-threaded CPU design, but from what's in the article, isn't the memory access bottleneck a rather fatal, and obvious, flaw in the whole design? Unless I'm missing something, I'm really struggling to see how this got off the drawing board. What is its point if the only applications that can ever take advantage of it are the very few that rarely need to access main memory?
    • Re:Pointless? (Score:4, Informative)

      by mprinkey ( 1434 ) on Thursday November 03, 2005 @12:22PM (#13941571)
      I've thought the same. I have racks of single core 3.0 GHz Xeons that strain the memory bus to the limit. Adding more cores to that mix is a waste. So, the new cluster is dual-core AMDs. The Intel architecture is generally good for the codes that we run, but I couldn't justify not buying AMDs. Price, thermal footprint, and performance all went that way.

      Protip to Intel: Stop trying to feed your users this crap.
      • We've been building out a lot of systems for our (web) apps. They've ALL been based on Xeon processors despite the fact that the dual-core Opteron is clearly the way to go. The catch has always been availability of a solid dual socket board (we try to get as much raw power in a 1U case as we can so dual chips are kind of company culture around here).

        If you've been using the Opteron, and it sounds like in production, I'd love to hear some details about good/compatible/stable hardware. I really, really, real
          If you've been using the Opteron, and it sounds like in production, I'd love to hear some details about good/compatible/stable hardware. I really, really, really don't want the next system I purchase to be another hot, slow Xeon.

          It's amazing to me how many people on /. claim to be building "servers" for their "companies". Time to wake up and smell the market. It's not all that much cheaper on the front end to build your own server (in small quantities) and it's certainly not as reliable and won't have
          • You speak with absolutely NO idea of how, what, or why I do what I do. I'm glad you can afford this [sun.com] and if I had the final say we'd be running enterprise level hardware all around.

            But guess fucking what? Thats not the way it works for a lot of us in the *gasp* real world.

            As far as your tidbit goes, I agree 100%. Frankly I think you're just being an asshole: A) bragging about your leet warez, B) just another blow-hard who likes to try to cut people down, who has neither the attention nor capability to grasp th
            • Thats not the way it works for a lot of us in the *gasp* real world.

              I wasn't aware that my company operated in some sort of imaginary fairy universe.

              And if you can't afford to spend $800 on a server, you're doing something a.) very wrong or b.) that doesn't actually require a real server.
              • Maybe you need to talk to your finance department about those prices. $800 for a "real" server is a fairy tale. Or maybe you're just a basement dweller using refurbished Dells to run P2P out of your mom's closet.

                Here's what it looks like in the real world:

                Xeon processor: $299.95 (you're buying at least two, even if you used this [sun.com] vendor).
                1 GB low-end ECC memory (2 to 4 sticks depending on the load the server will be under): $124 (that [sun.com] vendor doesn't use low-end, I just added a 512 kit to a 350 for > $200,

                  Maybe you need to talk to your finance department about those prices. $800 for a "real" server is a fairy tale. Or maybe you're just a basement dweller using refurbished Dells to run P2P out of your mom's closet.

                  OK... so a SunFire X2100 isn't a real server? And I have no idea where you're coming from on software licensing.

                  Look up in the thread. This is about how it's moronic to build a real server out of parts when so many servers with actual support from a real company are available for similar or poss
                  • With options? Ya, it's a real server with a price tag to match. As far as software, maybe you've never done an IT budget, but you can't spec prices on one without the other. Otherwise you've got no budget, and no approval.

                    Anyway, this is your tangent. If you can hit that funny back button a couple of times you'll see I was asking someone else a legitimate question before you decided to drop your tidbits.
                • Back in the pre-dot.bubble days we wasted oodles of money on "real" servers. 350's, 250's and a couple of Spark 5 workstations

                  I went back and read your comment again. "Spark" workstations? That explains it all. You have no idea what you're talking about.

                  Have a nice day.
    • Re:Pointless? (Score:3, Informative)

      by tomstdenis ( 446163 )
      They wanted to get their Netburst cores into the DP world as quickly as possible.

      Where AMD uses the HT bus for their 754 and 939/940 parts, Intel was still using the good ole 64-bit FSB of yesteryear.

      Most of what Intel does nowadays in the processor world is entirely market driven. The Netburst is a good example. High clock rate, low efficiency processor. Sounds good on paper but works poorly in practice. The EM64T extensions are another example. A lot of code on the P4 in 32-bit mode takes roughly the
    • Re:Pointless? (Score:3, Insightful)

      by magarity ( 164372 )
      isn't the memory access bottleneck a rather fatal, and obvious, flaw in the whole design? Unless I'm missing something

      What you're missing is that Intel's PC CPU business is all about the CPU. The chipset and all that other tedious little stuff is just there only because it has to be for the CPU to function. Their entire focus is CPU, CPU, CPU. Look how fast it runs through clock cycles! Look how many cores and pseudo-cores (HT) it has! They've been doing this for ages. Recall the first genera
    • The article gets the point of Hyperthreading... backwards.

      Yes, the memory interface gets congested, so the processor takes a stall. But instead of just leaving the ALU idle, it has another thread in reserve to schedule on it, thus improving the utilization of the ALU subsystem.

      And THAT'S the point of this "Hyperthreading" thang...

      The rest? Well, if the local L1/L2 cache isn't big enough, you are going to suffer. Yes, a bigger pipe to memory would help, but you are STILL several times slower than you could
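
      (To make the stall-hiding idea concrete, here's a toy Python simulation: one ALU fed by one or two instruction streams that periodically stall on memory. The stall probability and latency are made-up numbers; real SMT scheduling is far more involved.)

      ```python
      # Toy SMT model: each cycle, issue one op from the first
      # non-stalled thread; an op may trigger a multi-cycle memory stall.
      import random

      def alu_utilization(num_threads, cycles=100_000,
                          stall_prob=0.2, stall_len=10):
          random.seed(0)
          wake = [0] * num_threads        # cycle at which each thread wakes
          busy = 0
          for cycle in range(cycles):
              for t in range(num_threads):
                  if wake[t] <= cycle:    # thread t is ready to issue
                      busy += 1
                      if random.random() < stall_prob:
                          wake[t] = cycle + stall_len   # memory miss
                      break               # ALU used this cycle
          return busy / cycles

      print(f"1 thread : {alu_utilization(1):.0%} ALU utilization")   # ~36%
      print(f"2 threads: {alu_utilization(2):.0%} ALU utilization")   # much higher
      ```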
  • by faqmaster ( 172770 ) <jones.tm@NOSPAM.gmail.com> on Thursday November 03, 2005 @12:13PM (#13941479) Homepage Journal
    Yep, I'd say that if both her input and her output are busy, she's DP.*

    *See, kids? This is why you should avoid too much pr0n, it just totally warps your mind.
  • by Anonymous Coward
    Where it could go right: It's not all doom and gloom, though. Think of a scenario where compute threads rarely touch system memory, doing most of their work on the CPU with small working sets, and you've got yourself something that Xeon should do well at. While those compute threads would have to be HyperThreading-friendly to have HT be a performance win, Intel has spent good time making sure HT gets focus from application developers. If you read the last benchmark results for the dual-core Xeons here [blogspot.com] you c
    • I'm typing this on a system with two 3.6GHz Xeons with Hyperthreading enabled. The system uses two 300GB Ultra-320 SCSI disks set up in a mirror and has 4GB RAM installed. When I run Netscape the performance is about as good as on my other system, which is a 2.4GHz P4 with 1GB RAM and one SATA drive. However, the dual Xeon system runs my DBMS queries blazingly faster, __much__ faster than the P4-based system. Many DBMSes work like Apache and "fork" a new server process for each client, so when 12 process e
      • Your Xeon system with the SCSI disks is hugely faster doing DBMS than the system with the SATA drive in large part (probably larger than the other reasons you've listed, although those do matter) because DBMSs tend to throw a heck of a lot of disk I/O commands at the disk subsystem all at once. The SCSI disks and their controller are simply better able to handle the barrage. I'll bet that a test with the drive subsystems reversed shows that while the Xeons are still faster, the P4 is only somewhat behind, n
    • There is no system task in existence that will not interface with memory somewhere along the line. AMD's shifting of the memory controller to the CPU was incredibly astute - memory is one of the most used components in any system, and one of the components most accessed by the CPU. We've all seen the huge benefits AMD CPUs have reaped as a result of this move and the restructuring of the low-level I/O buses, especially compared to Intel's paltry "more megahurts!!!1111oneoneone lollerskates" approach.
  • by tayhimself ( 791184 ) on Thursday November 03, 2005 @12:35PM (#13941719)
    GamePC has real benchmarks showing the Paxville Xeons getting blown away by Opterons. http://www.gamepc.com/labs/view_content.asp?id=paxville&page=1&cookie_test=1 [gamepc.com]

    The Hexus article is just a summary of their results along with several inaccuracies.

    If you're I/O bound by your threads in any way, you can hit problems (all threads touch the MCH, then there's a 266MiB/sec bus link to the I/O processors to cross, then the data hits disks or network hardware). If you're memory subsystem bound in any way, especially on a majority of compute threads, performance is likely gone.
    This is misleading. First off, the MCH is a 6.4 GB/s link, so I don't understand how it could bottleneck I/O even if you're compute bound. The 266 MB/s I/O bus is for legacy peripherals (USB/serial/SATA). Considering SATA-I (what the ICH5R supports) is 150 MB/s per channel, and USB 2.0 is 480 Mb/s, I can't see how this is a big problem. If you want fast (SCSI/FibreChannel/SATA-II HW RAID) disks and network, there are PCI-X 64-bit and PCIe x4, x8 slots that you can have your important I/O subsystem hanging off of.
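
    (A quick sanity check of those figures in Python; the numbers are the ones quoted in this comment, with USB taken as USB 2.0's 480 Mbit/s.)

    ```python
    # How the quoted link speeds stack up, in bytes/sec.
    mch_link   = 6.4e9       # MCH link
    legacy_hub = 266e6       # MCH -> legacy I/O hub
    sata_1     = 150e6       # SATA-I, per channel
    usb_2      = 480e6 / 8   # 480 Mbit/s -> 60 MB/s

    print(f"legacy hub vs one SATA-I channel: {legacy_hub / sata_1:.1f}x")   # ~1.8x
    print(f"legacy hub vs USB 2.0:            {legacy_hub / usb_2:.1f}x")    # ~4.4x
    print(f"MCH link vs legacy hub:           {mch_link / legacy_hub:.0f}x") # ~24x
    ```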

    Here is a link to the Intel datasheets for the chipsets, which shows 3 x8 PCIe interfaces for the 7520 and 1 for the 7320. http://www.intel.com/products/chipsets/E7520_E7320/ [intel.com]

    All that being said, the CPU itself is a dog.

  • Are there real-world benchmarks of these things? All I see is a lot of Intel-bashing - which does not excite me much.
