
Linux SMP Round-Up

Dual Minds writes "LinuxHardware.org is at it again, and this time they cover three of the finest boards on the market. This review covers three dual processor Xeon boards, and they are the only site that does Linux hardware reviews on a regular basis. Here's a peak: "First thing is that all E7505-based boards are basically the same on the surface due to the basic features of the chipset. They all have dual processor support, support for dual channel DDR, and support for PCI-X up to 133MHz (to name a few). Once a manufacturer gets their hands on the board though, features can be added or it can simply be left as is." Very in depth and some sweet hardware."
This discussion has been archived. No new comments can be posted.


Comments Filter:
  • by LightningBolt! ( 664763 ) <lightningboltlig ... m ['aho' in gap]> on Thursday April 10, 2003 @05:59PM (#5706205) Homepage
    I didn't even realize there was one TV channel that featured Dance Dance Revolution, never mind two! Sweet!

    I'll go read the article now.

  • Sort of on topic... (Score:2, Interesting)

    by Suicide ( 45320 )
    Since these types of motherboards are aimed at people rolling their own servers, as opposed to buying prebuilt ones:

    How many people actually build a server from the ground up, and why, other than price, is it advantageous to do so instead of buying a complete box? Price shaving shouldn't be a huge concern for a server, since so many other factors figure in more.
    • Beowulf Cluster
    • Uhm... Some of us just want an SMP workstation, for the extra punch. I'm not going to pay a premium for a server-class machine when I can get a motherboard and a case and assemble it myself (or have it assembled by a small shop).
      I myself have a Dual AMD Athlon MP 2400+ with a Tyan Tiger board. Works fine, really... It's just a bit, uhm, loud...
      • Yup, SMP just rocks, and can really extend workstation lifetime, which is why I built the system I did. My dual P2-450 machine is still going strong, and actually "feels" faster than the single 933 P3 I have on my desk at work (Slackware on both).
        • SMP is the way to go for futureproofing.

          Back in the 20th century, I built a dual Celery 400 box with an Abit BP6 (must be the best MB ever for bang/buck). I think I built the thing in mid 2000 for around 500 bucks. I'm still using it as my primary workstation.

          That's 3 years for 500 bucks. Unreal, computing wise. For most things, it's still better than most of my Uniproc machines, though my uniproc AMD 2000+ is now making an impression on me.

          I'm looking to upgrade, not because the machine usually feel
      • I have two dual processor machines ...

        My original: Dual Pentium Tyan Tomcat IID board, purchased in Oct 1996 - still running strong with a pair of P-166MHz chips (cause I can't find a pair of 200MHz *non* MMX P-200s). [Running Slack, for those that care, which does mail/web/routing/firewall for me.]

        My current: Dual Athlon Tyan Tiger board running a pair of 1500MP Athlon chips. (now if I can only get the cash to upgrade the CPUs :)

        Why dual CPU? Because I play with graphics and code and occasionally want
    • I think for a corporation, support is a larger factor than anything.

      A good support plan can save lots of money, and frankly, having someone in house build large servers gets expensive after awhile. That's why Dell does so well :). Good support.
    • I have never ever bought a system. I have always (since the '80s) built systems myself. Some of the advantages are as follows:

      More bang for your buck - you get superior parts compared to a run-of-the-mill system

      Choice - there are A LOT of good parts to choose from

      Get what you want - since you're picking and choosing, you can get features you really want and not get features you don't want.

      Cheaper - the systems I've built have been comparable to ones sold by Dell, etc., but at a fraction of the cost

    • Support Issues (Score:4, Interesting)

      by peatbakke ( 52079 ) <peat@noSpAM.peat.org> on Thursday April 10, 2003 @06:50PM (#5706522) Homepage
      Support is an argument for and against buying prebuilt systems ...

      If you're colocating a server, having a pre-built machine with a tight support contract is pretty crucial. For example, Dell offers a 24/7, 2 hour on-site support guarantee for servers almost anywhere in the continental United States. That's pretty darned handy if your servers are spread around.

      On the other hand, if you're able to service the machine yourself within a reasonable time frame, I think it's always better to build your own servers because you have:

      - Intimate knowledge of every hardware component in the box. You researched every piece, right? Lots of manufacturers put in weird devices and what-not, and you can never really be sure of what's under the hood when you buy from someone else.

      - Spare components on hand. If you're spending the cash on some nice servers, having an extra hard drive, DIMMs, and a network card on hand is pretty invaluable.

      - Better upgrade path. Feel free to swap out a motherboard, processor, or SCSI system. No worries about proprietary motherboard or case standards. .. there are other issues than support, of course, but this is just my two cents. :)
    • I think the issue of whether to self-build or buy premade comes down to leveraging one's areas of expertise. If you or your staff can build your own servers, you get the brand names on the inside.

      If you buy premade computers, you get the brand name on the outside, and service and support and an easier way to figure out your IT budget.

      If you can roll your own, your costs CAN be lower, in-house service and support CAN be better, faster and cheaper. For my money, computer science is a lot more fun and the r

    • by Junta ( 36770 ) on Thursday April 10, 2003 @07:31PM (#5706799)
      For a *business*, building a server is almost always the wrong path. When buying a prebuilt system, that support and QA is vitally important. Even in popular combinations, the amount of testing in a home-brew system is nil. Even if the IT staff *knows* what they are doing, the staff can be shuffled around, quit, whatever, and leave the business in a difficult situation. Even if the staff is static, dealing with a defective, warrantied part is occasionally difficult, as the hardware company may try to blame other parts in your system or the software being run before offering repair or exchange, whereas Dell, Hpaq, IBM, and the like will bend over backwards to kiss the asses of business customers and really have no one else to blame if the whole package comes from them. As the complexity of a system increases, the more vital it becomes to have a vendor ready to stand by the product as a whole, as the added complexity gives individual hardware vendors more things to blame. Servers are certainly a significant step up in complexity, with multiple processors and multiple mass storage busses and devices.

      Plus, there are just some things you cannot do when you roll your own system that server vendors provide, *particularly* in the rack environment. Blades are great for racks, but you certainly can't build your own. The health monitoring and management software with servers from the big names is very nice and not possible in your home-built system. I know IBM 1U servers nowadays come with built-in KVM-like functionality where you just have a plug from one 1U server to the next and one to the previous server, and all the systems in the chain understand that if they receive a certain key sequence on the keyboard, they switch to the appropriate system. KVMs for racks full of servers are typically a nightmare for cable management, so this is a nice resolution...

      Now for home use, home built is pretty much fine. Slight downtime while you fight it out with the vendors is no big deal. The savings and intimate knowledge of your system has more value (unless you are going to fire yourself...) than it does in a business where the extra cost is negligible compared to the budget, and where the guy who builds it may be gone next week. And the bonuses don't matter as much in a standalone system as it does in the middle of a lot of other racks.
      • Just a minor point, but you can indeed build your own rack mounted systems. I was seriously considering buying a rackmount case (don't remember the three brands I was looking at, as it was about 18 months ago, and I've slept since then) and building a system to colocate at the ISP I was working at.

        True, building a rack system is more difficult, as there are some *serious* space limitations, but it can definitely be done if you do your research and take the time to buy the right parts. (one MB I saw actual
        • I know you can build your own rack system, but you can *not* build your own blade. 14 systems in 7U is very nice.
          • You almost can, actually. Perhaps not 14 systems in 7U, but I'm pretty sure you can find used/refurb/surplus CompactPCI chassis/backplane/power supply units that support multiple processor cards. Probably not worth the time/money to build your own, though.
    • In the company I work for it is difficult to build your own server. I could do it, but time and hardware support are factors. We run mostly Dell PowerEdges in a 2U rack mount; Linux runs great on these. I get them from the refurb site with full warranty and no OS. At home is another story, compiling Gentoo on a dual P-Pro 200 as I write...
  • by Billly Gates ( 198444 ) on Thursday April 10, 2003 @06:01PM (#5706227) Journal
    If you're throwing around enough money to afford dual Xeons, then hyperthreading support should be included.

    More information about it is here [aceshardware.com], and you can have two virtual CPUs per processor. In theory you can have the performance of 4 CPUs with a dual processor setup.

    For databases and ERP this could be a very nice and cheaper alternative to expensive IBM and Sun boxes.

    My question is: does Linux currently support hyperthreading? If not, then it may be wise to put off the purchase, or buy dual Athlon MPs, which are a lot cheaper and offer similar benefits.

    • Yes.

      May I suggest at least taking a peek at Google [google.com] before asking silly questions?
    • yes, linux 'supports' hyperthreading - that took no changes at all, since the logical CPUs just show up as more CPU targets. 2.5 kernels, and (I think) some of the 2.4 scheduler patchsets, also have some special tuning to avoid some of the worse behaviors hyperthreading can cause (when processes hop back and forth between physical cores, or end up overcrowded on one virtual CPU).

      So linux support for HT is pretty good :-)
      • In fact, a lot of work recently has gone into NUMA support (links here [slashdot.org] and here [kerneltraffic.org]). HT can be seen as a form of NUMA. Because of the effects of a CPU's cache, some processors are better suited than others for handling a particular thread. Like you mentioned, moving between logical processors on the same physical processor is much better than moving between physical processors. So, the scheduler must have some clues as to the nature of the CPU's it's working with.

        As you can see, not only does Linux sup
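The logical-to-physical mapping discussed in this thread can be sketched by parsing /proc/cpuinfo. This is a minimal, hypothetical illustration (not kernel code), assuming the "processor" and "physical id" fields that HT-enabled kernels expose:

```python
# Sketch: map logical CPUs to physical packages from /proc/cpuinfo text.
# The field names ("processor", "physical id") are those shown by
# HT-enabled kernels; the parser and sample are purely illustrative.

def cpu_topology(cpuinfo_text):
    """Return {physical_id: [logical_cpu, ...]} from /proc/cpuinfo text."""
    topology = {}
    logical = None
    for line in cpuinfo_text.splitlines():
        if ":" not in line:
            continue
        key, _, value = line.partition(":")
        key, value = key.strip(), value.strip()
        if key == "processor":
            logical = int(value)          # start of a new logical CPU stanza
        elif key == "physical id" and logical is not None:
            topology.setdefault(int(value), []).append(logical)
    return topology

# A dual-Xeon box with HT shows four logical CPUs on two packages:
SAMPLE = """\
processor : 0
physical id : 0
processor : 1
physical id : 0
processor : 2
physical id : 1
processor : 3
physical id : 1
"""
print(cpu_topology(SAMPLE))  # {0: [0, 1], 1: [2, 3]}
```

On a real system you would read the text from /proc/cpuinfo instead of the sample string.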

    • by mindstrm ( 20013 ) on Thursday April 10, 2003 @06:23PM (#5706340)
      4 cpus for the price of 2? No.. that's not what hyperthreading is about.

      At least, not from what I've gleaned from all the documentation out there.

      Hyperthreading is about optimizing the pipelining features of the processor... whereas normally, if the processor knows that 2 instructions are independent of each other, it can run whatever stages of them it has room for in the pipeline, concurrently. Normally, prediction and whatnot have to be done, and this is only somewhat effective.

      By forcing the OS to treat it as 2 processors, it now has a clue as to which instructions are definitely unrelated, as the higher layer OS has already decided they go to separate processors.

      So Hyperthreading is really using 2 virtual processors to better use up the resources of a single processor.. so for some operations it may yield near double the performance, but overall, there is no way this is going to give you the same boost as the equivalent number of processors will.

      Yes, linux currently supports hyperthreading. You will see that 4 processors show up on a dual processor xeon system.

      • I'm definately still suffering from the flu. I just re-read that and it's got way more than my average number of mistakes.

      • As a matter of fact, I believe hyperthreading involves two processor cores.

        I've heard about up to 30% improvement in performance, if you're CPU-bound AND highly concurrent. (I am, so I'm looking forward to benchmarking one of these babies that one customer bought)
        • You would be wrong. The grandparent poster had it exactly correct.

          The number of pipelines in a HT CPU is exactly the same as a non-HT processor. The instruction decode, prefetch, etc. is modified, but the bulk of the pipeline is the same. This is merely a way to extract extra parallelism from the code with hardware. By having two completely separate threads of program execution running in "parallel", you cut down on interdependencies between them to near zero. This is, of course, assuming a fully r

    • Yes, It Does (Score:5, Informative)

      by peatbakke ( 52079 ) <peat@noSpAM.peat.org> on Thursday April 10, 2003 @06:30PM (#5706385) Homepage
      Linux does support hyperthreading. 2.4.20 recognizes four processors on my dual Xeon servers, without any tweaks. I think it's pretty nice -- I'd say there's between a 5% and 25% pickup in performance, depending on what you're using it for (generic vs. optimized integer code).

      According to a geek.com article [geek.com], Linux was actually the first operating system to officially support hyperthreading, and that was in late 2001.
      • Wow, official support!

        Linux needs to "officially support" something that's transparent to the OS, since it overrides BIOS settings.
        • Re:Yes, It Does (Score:3, Informative)

          by peatbakke ( 52079 )
          Hyperthreading is not fully transparent to the OS. The scheduler needs to be aware of the processor's capabilities to take advantage of it. It's not a very difficult situation to adapt to, but it's not transparent.

          And yes, it was official, because it was rubber stamped by Intel.

      • In particular, there is still only one cache per CPU. Maybe 2.5.x knows the difference, but I don't think 2.4.x does yet. Swapping needs to know hyperthreaded CPUs share their cache, so you don't unnecessarily migrate a process from one CPU to another and lose the cache commonality. Consider a dual Xeon system, each Xeon having two hyperthreaded CPUs. Two tasks, A and B, each having two threads. Better to have both A threads on the same Xeon, ditto for B, so they share the cache.
        • Yup, I'm aware that it's not actually several CPUs on the same die. I'm pretty certain that 2.5 is aware of the difference in the scheduler: Ingo's queueing system is still per-physical CPU, with internal hints for hyperthreading processors. 2.4 just goes along with the idea that they're all distinct processors.
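The per-physical-CPU hinting described above can be illustrated with a toy placement policy: keep each task's threads on sibling logical CPUs of one package so they share that package's cache. The `place_tasks` helper below is a hypothetical sketch, not the actual 2.5 scheduler logic:

```python
# Sketch: cache-aware placement for an HT topology. Threads of the same
# task are assigned to sibling logical CPUs on a single physical package
# so they share that package's cache. Hypothetical helper, not kernel code.

def place_tasks(tasks, siblings):
    """tasks: {task_name: n_threads}.
    siblings: one list of sibling logical-CPU ids per physical package.
    Returns {task_name: [logical_cpu, ...]}, one package per task."""
    placement = {}
    packages = iter(siblings)
    for name, n_threads in tasks.items():
        cpus = next(packages)  # dedicate the next package to this task
        placement[name] = [cpus[i % len(cpus)] for i in range(n_threads)]
    return placement

# Dual Xeon with HT: package 0 = CPUs [0, 1], package 1 = CPUs [2, 3].
# Both threads of task A share package 0's cache; ditto for B on package 1.
print(place_tasks({"A": 2, "B": 2}, [[0, 1], [2, 3]]))
# {'A': [0, 1], 'B': [2, 3]}
```

The sketch assumes no more tasks than packages; a real scheduler obviously has to handle overcommit and migration as well.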
    • I wouldn't know, I won't buy Intel, but tell me something: is the Athlon XP chip also an MP chip?

      Kernel says:
      Intel MultiProcessor Specification v1.4 Virtual Wire compatibility mode.

    • As others have already said, Linux does support HT. Does anyone have experience with both HT and "real" SMP machines for desktops? I mean, people constantly rave about how nice SMP is for desktop work, with low latency etc. How does a single HT processor compare to a real SMP box in the interactivity department?
    • I don't know about the overall systems, but the Xeons meant for dual processor machines cost about as much as the equivalent Athlon MP chip.

      I think any multi-processor x86 OS can use the hyperthreading feature, but some performance gain can be had by optimizing a scheduler for it that is HT-aware in a way that makes best use of it.

      From what I've heard, the _maximum_ improvement you can theoretically get is 30%, typical improvement is 10%, which is pretty good as I think the HT-specific section of the die
    • Support for hyperthreading is in the latest kernels for certain. However, the performance increase you get, if any, depends on the workload.

      I had occasion to install a beowulf style cluster a while back, and performance was worse with hyperthreading on than off. What seemed to be happening was that two jobs dispatched to a single node ran on the same CPU, leaving one idle.

      We may have got better performance if we had configured the dispatcher to schedule 4 jobs per node, but didn't have the time to tes
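The failure mode described above - two jobs landing on HT siblings while a whole package sits idle - suggests dispatching to distinct physical packages before doubling up on siblings. `assign_jobs` and its sibling-list input are hypothetical illustrations, not the actual cluster dispatcher:

```python
# Sketch: dispatch jobs to distinct physical packages first, and only
# double up on HT siblings once every package has work. Hypothetical
# policy code, not a real batch scheduler.

def assign_jobs(n_jobs, siblings):
    """siblings: one list of logical-CPU ids per physical package.
    Returns the logical CPU chosen for each job, packages first."""
    order = []
    depth = max(len(s) for s in siblings)
    for level in range(depth):       # level 0: first sibling of each package
        for pkg in siblings:
            if level < len(pkg):
                order.append(pkg[level])
    # Cycle through the preference order if there are more jobs than CPUs.
    return [order[i % len(order)] for i in range(n_jobs)]

# Dual-Xeon node with HT: 2 jobs land on different packages, not siblings.
print(assign_jobs(2, [[0, 1], [2, 3]]))  # [0, 2]
print(assign_jobs(4, [[0, 1], [2, 3]]))  # [0, 2, 1, 3]
```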

  • FreeBSD 5.0? (Score:5, Interesting)

    by cpeterso ( 19082 ) on Thursday April 10, 2003 @06:02PM (#5706231) Homepage

    I would like to see a comparison of Linux 2.4, Linux 2.5, FreeBSD 4.8, and FreeBSD 5.0 on the same hardware. FreeBSD fanatics like to toot their horns, but where are the benchmark results?

    btw, LinuxHardware.org is nearly slashdotted, so their Linux server knowledge must not be so great after all.. ;-)
  • by dWhisper ( 318846 ) on Thursday April 10, 2003 @06:05PM (#5706247) Homepage Journal
    An actual comment on the story...

    When reading through the review, I noticed that they only list standard benchmarks, and then a kernel compile benchmark. They never list the actual distribution of Linux used for testing the system. In my experience, the actual performance of a system is dependent on that. I know I had a system that just dragged running Mandrake, but loved Debian to no end. I'm not sure if it's just the kernel base of the system, but most of the actual distributions have some sort of performance optimization (I think) for the overall system performance. I mean, kernel compilation time is great, but what I'm more curious about is the day-to-day operation.

    I guess I've just read too many reviews over the years that focused on benchmark numbers and didn't give any information about performance under everyday use. If this is something geared for Linux, I'd be more curious about numbers like Networking performance, data-access numbers and things like that.

    My other curious question is how accurately UT2k3 and Quake 3 show the power of a dual processor Xeon system. Quake 3 supports MP systems, but it has never been shown to make much difference except in large server environments. They give us video benchmarks, and for Quake in particular, a framerate limit was hit long before these processors and chipsets, somewhere well past the point of overkill.

    I guess I'm just being nit-picky, but I think a Linux Review for a system should concentrate on strengths, and not benchmarks that would be similar on a Windows system made to run games.
  • by Anonymous Coward on Thursday April 10, 2003 @06:05PM (#5706248)
    I just dropped $5000 for an engagement ring this afternoon, and now everyplace I look I see things where I could have spent that money.

    Before this, someone pointed me to Dell Financial Services' page of good deals (and no OS tax!) on lease-return laptops [dfsdirectsales.com]. After that, a friend of mine called to tell me that a Ford dealership nearby is selling a 2002 convertible Mustang GT for below invoice with 0% financing over 4 years. And don't get me started on what I could do with a Fry's or a Best Buy right now... Oh, the agony of being such a consumer whore...

    It'll be a kick-ass ring, though. I highly recommend browsing this thread [slashdot.org] before making decisions on engagement rings -- good info even if, like me, you want to go with a diamond regardless of the fact that you're getting ripped off.

    (posting anonymously to avoid my girlfriend seeing this post a la Murphy's Law).

    • Dude, I feel your pain.

      In shopping around, I too was thinking "Man, I could buy a righteous iMac and a bunch of wireless gear.".

      So I made an epic journey to The Diamond District [47th-street.com] and had enough left over to buy some righteous gear.

      OBtopic: Has anyone done any SMP speed comparisons of various distros (they all patch their kernels with tons of various patches)? I'd also be interested in seeing if all these patches make any difference compared to Linus' default kernel.

    • Why on earth would you drop $5k on an engagement ring?

      Ever read statistics on divorce? Most couples cite financial problems as the beginning of the end.

      Besides, who needs someone that vain to deal with for a lifetime? :)
    • Come on -- I too spent +$5K on the engagement ring a little over a year ago. Now you're seeing all you could have gotten with the same amount?

      Would Dell, Ford, or Fry's do you proper? Do they swallow?

      I just finished my taxes today. First time in a decade and I owe and owe big time. $5,704 to be exact -- talk about getting fucked (!)
    • $5k for a single ring system? Instead of getting married, you could have gotten dual girlfriends instead. More bang for less money.
    • 5000$ on an engagement ring??? Wow.... I've never in my whole life spent more than 3000$ on a computer so I'm not going to spend so much ever on an engagement ring.
      If my girl wants a 5000$ engagement ring, she has two choices: help me pay it, or go see elsewhere. Love is not about money... If she's not happy to get *you* along with a budget engagement ring (let's say a 1000$ engagement ring), then she doesn't love *you* but your money...
      Oh, and don't worry... My girl knows how I feel. So don't call
    • $5K for a ring? Damn!

      I spent way less than that, but I was only 23 when I got engaged. Lower income = lower expectations regarding ring price.

  • What would give me the most responsive user experience:

    a single CPU system at 3GHz, or an SMP system with two 1.6GHz chips?

    This assumes the same chip family.

    I normally run X, KDE 3.1, Apache (small home www site + PHP + MySQL), and sometimes I run a little Tux Racer.

    • 'Most responsive user experience'? Switch to SCSI. The major bottleneck in any PC is the crappy disk access. I get better app start times on my 400MHz U2W SCSI system (80MB/sec max) than my Athlon 1400 with ATA-133. The SCSI theoretical speed limit might be lower (in the example above), but real-world performance favors SCSI.

      Go get an Adaptec 29160 and a 36GB 10K Cheetah drive for your / and /usr partitions. Put /home on your IDE drive. Get the best of both worlds. When you recover from the investment you
      • This is a serious question, not a flame. Are you running Linux (and, if so, which kernel) or are you running Windows (and, if so, which version)?

        I ask because on MY system, disk access is very slow in Windows XP but very snappy in Linux. In both cases, I have DMA enabled and so I am not quite sure what is going on.
        • I'm running Gentoo Linux 1.4rc1. Everything is built from scratch with optimizations so it's as fast as can be on both machines. I'm running kernel 2.4.20 on both machines.

          I should note that the SCSI performance boost is still huge in Windows, but less profound than in Linux due to the way Windows aligns frequently used files on the disk.

          As for your performance issues, try updating the drivers for your chipset (Intel INF and Intel Application Accelerator / VIA Hyperion 4-in-1) to make sure you're getting
        • I had the same problem: had to download the W2K drivers from my mobo manufacturer to get the speed to an acceptable level. (No, I don't run XP...) I actually had this problem on three systems: two with an ASUS motherboard and one with a Tyan motherboard.
          Once the drivers are there, everything works fine. Heck, in one case, the primary HD switched from PIO mode to Ultra-DMA mode, so nuff said...
      • Better yet, use SSA or FibreChannel. Far faster.
        • Is it for workstation use? We're not talking about arrays of disks here, just replacing your IDE drive with a SCSI drive. AFAIK the limiting factor in the disk subsystem is the disk itself.

          In a single-disk system it makes (almost) no difference if you use Ultra2Wide, Ultra160, or Ultra320 busses, because the most you're going to get out of the disk is about 60MB/sec. I can't see how SSA or FC would help at all unless you had enough disks RAIDed to hose the bus.

          The same is true for IDE busses. UltraATA-66
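The single-disk argument above can be put into numbers: aggregate throughput is capped by whichever is smaller, the disks' combined media rate or the bus rate. A toy calculation with illustrative figures (the helper name is made up):

```python
# Sketch of the point above: with one disk, the media rate, not the bus,
# is the bottleneck, so a faster SCSI bus buys nothing until enough
# RAIDed disks saturate it. All numbers here are illustrative.

def effective_rate(n_disks, disk_mb_s, bus_mb_s):
    """Aggregate throughput of n identical disks on one bus, in MB/s."""
    return min(n_disks * disk_mb_s, bus_mb_s)

print(effective_rate(1, 60, 80))   # one disk on Ultra2Wide: 60 (disk-bound)
print(effective_rate(1, 60, 320))  # one disk on Ultra320: still 60
print(effective_rate(6, 60, 320))  # six-disk RAID: 320 (now bus-bound)
```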
    • I trade in my hardware on a regular basis due to the R&D I do for the company I work for. Recently I turned in my dual AMD 1800 MP and have a stop gap single P4 2.4.

      The systems use the same hardware other than the motherboard and CPU. They include RAIDed U320 SCSI Cheetah's, GF4 TI4400, etc.

      With only 32bit/33Mhz PCI, the P4 can't keep up with the RAID, so obviously disk performance is much worse. I expected this.

      The strange thing to me, was how much worse X "feels" than on the slower but dual CPU
    • My dual PIII 733's give me a nice responsive GUI when I am compiling software or doing other CPU intensive jobs (provided that I am only using one CPU and not 'make -j 2' or similar). One CPU does the grunt work while the other ensures that my DVD/oggs/other remain smooth.
    • That depends a bit. I would guess the single CPU system; that way, you can probably buy more RAM and a faster hard disk with the price difference.
  • The Sun Dilemma (Score:5, Insightful)

    by Gothmolly ( 148874 ) on Thursday April 10, 2003 @06:12PM (#5706290)
    If you need hardware like this, then you need Support. That's what attracts people to Sun (and now Dell, for instance). And if you need support, you'll take whatever board your System Integrator uses in their boxes.
    To wit:
    If you need this, you'll buy it from someone.
    If you buy it from someone, you have no choice of HW.
    Thus, this review is useless.
    • Except, perhaps, for those who are "system integrators", or the curious (yes, we still exist.)
    • And a few of us out there who build our own systems hate buying crap and watching it break (especially since if you build it, getting repairs on parts is a pain in the ass). So we buy the high-end stuff less often. I do not need support, I need hardware that isn't crap.

      -- Bob

    • Rant Mode (Score:3, Insightful)

      by Bios_Hakr ( 68586 )
      Ok, don't think I'm going off on you, cuz I'm not:

      I am so tired of people telling me what I need as opposed to what I want. You know the type. "You don't NEED a SUV, just buy a minivan." "You don't NEED a 500w power supply, 350w is more than enough." "You don't NEED dual procs, a single, faster, proc is more economical."

      I have some requirements about my home PC. One of those is that I should never like the machine I use at work more than the machine I use at home. I like the snappiness of dual procs
    • " If you need hardware like this, then you need Support."

      Correct.

      "And if you need support, you'll take whatever board your System Integrator uses in their boxes."

      Wrong. You choose your vendor based on what they put into their box. Being the customer, you also get to provide input as to what they put in their box in the future.
  • I have a dual P4 machine at work that I'm going to be installing Linux on soon to use as a mail server. IIRC, it's an Intel-branded board, though. But the performance I see here looks nice.

    As for myself, I have a dual proc machine, but it isn't good for much (SS10).

    And I wonder how Linux would run on one of these [apple.com]. Anyone? Anyone? :-D
    • And I wonder how Linux would run on one of these [apple.com]. Anyone? Anyone? :-D

      It's a dual 1.4GHz configuration on a non-segmented 133MHz bus. Until compilers are better at using the G4's unique instructions, for general purpose software you'd be better off with a single 2GHz P4. Even with hand-crafted assembly, you'll still be better off with a dual 1.8GHz Xeon: you'll save a few bucks and have a much, much, much faster bus. And for even money, you can probably go for a quad 2.2GHz Xeon configuration.

  • Here's a peak

    Here's another one. [wgbh.org]
  • for a new computer. I am debating whether to shell out the extra cash for a dual CPU system. How much will 2 CPUs extend the usable life of my computer? Any comments?
    • by Anonymous Coward
      Most of the time dual CPUs are a waste of money.

      What makes the difference is how much RAM you have and how well tuned your OS is.

      For instance, for years ftp.cdrom.com was run on a single PP200 with 1 gig of RAM - something like 3600 simultaneous FTP connections were being served from it!!

      Now let's see: you can build a server using an Nforce2 board with dual channel RAM - say 1GB (2x 512MB) and an Athlon XP 2500 (Barton core). This setup would be ideal - you can get it in microATX format with everything on bo
      • I have used this case to build a system and would warn potential buyers to check their components very carefully; this is a very small case and you will need to make sure things will fit. I had problems with my CDROM drive: the cable bundle coming out of the power supply fouls the audio and power cables. You have about 8cm of space for heatsink and fan on the processor. I have had to under-clock the processor to compensate for the poor cooling. Make sure your motherboard has the power connector on the righ
    • I'm quite happy with both of my SMP boxes. The older one has dual Pentium Pros with 1MB of on-die cache (per CPU) and 128MB of EDO RAM. Thanks to the throughput of the onboard Adaptec 2940 SCSI, the end-user experience running mozilla and mplayer is similar to my Dad's brand-new 2.4 GHz P4, single processor.

      My newer workstation has a Gigabyte mobo with dual P3 Coppermines at 1 GHz, 1 Gb PC133 SDRAM, and two 80 GB IBM Deskstar drives. (among others) I built it specifically for linux about a year ago, and sa
  • Hmm... they didn't do an Intel board with that chipset, which would have been interesting; they're really good boards.
  • A quick comment on the toss-away statement in the article that 2.4.20 supports 7505 based systems out of the box.

    Be Careful(TM).

    The AGP3 stuff requires a patch to stock Marcello/Linus kernels for the 7505 chipset.

    I had trouble getting an AGP4x card to work on a Supermicro X5DAL-G board (baby brother to the reviewed X5DA8 board; but at ATX size instead of EATX and able to support unregistered memory) without applying this patch [iu.edu]. Once patched, it works fine.

    I'm not sure if 7505 support has made it into Ma
  • by questionlp ( 58365 ) on Thursday April 10, 2003 @07:03PM (#5706617) Homepage
    Blockquoth the article:
    First thing is that all E7505-based boards are basically the same on the surface due to the basic features of the chipset. They all have dual processor support, support for dual channel DDR, and support for PCI-X up to 133MHz (to name a few). Once a manufacturer gets their hands on the board though, features can be added or it can simply be left as is.
    There are some boards out there that don't match the template found in the three boards reviewed. Tyan has a board, the Tiger i7505 [tyan.com] to be exact, that does not include PCI-X slots but rather has the normal complement of 5 PCI slots.

    The PCI-X controller used in almost all of the E750x workstation/server boards is really expensive and adds to the complexity of the board layout and design. It seems that Tyan decided to forgo that chip in order to keep the cost of the board down while making up for it by adding Serial ATA (but no FireWire like its larger Thunder i7505 brother).

    One board that I would like to have seen reviewed is the Supermicro X5DAL [freebsd.org] (with or without Serial ATA RAID) as it does include PCI-X slots, but it is also a standard ATX-sized motherboard. It only has four memory slots, so that may have changed some of the memory timings and possibly have improved some of the scores by a small amount.

    On a side note, FreeBSD 4.8-RELEASE users will also benefit from the newly added support for HyperThreading found in all P4-based Xeons and the 3.06GHz P4. More info can be had here [freebsd.org]. I'm not sure if that feature is also available in 5.0-CURRENT (I would think it would be MFC'd).

  • ...and are probably the best price/performance on the market at the moment. You really pay a serious premium for Intel hardware. Just because it costs >3x as much doesn't mean it's >3x better...

    I have two Tyan S2460's with dual 1200mhz Thunderbirds in each, rock solid in W2K and Linux, and excellent performers. They were also very cheap to build.

    Maybe someone should do a review of budget Linux SMP setups...
    • Alas, I've seen no Athlon boards with PCI-X. And the only dual-memory-channel boards seem to be single-processor. Not that those things are necessary...

      I wonder if the soon-to-come Opteron is why the board makers have been ignoring the Athlon MP in the last few months.

      • There's a reason I said budget Linux SMP :-)

        PCI-X is $$$, not to mention there aren't exactly a lot of cards for it yet. Not exactly for the budget-minded Linux SMP'er.

        I am unaware of any dual-memory-channel SMP chipsets at the moment...
  • Here [cox.net] is the summary of a dual Xeon system I am thinking about building. It has links to more information about each part, and where cheapest to buy them. I have done a lot of research into this since last weekend, and am still not sure if I am going to do it or not.

    For $1300, you too can build a kick-ass system like this. Follow the links.
    • Don't buy a case badge. Your Xeon boxes will have them.
    • You are killing off much of your advantage by buying a dual system with 32bit PCI.

      Unless you have absolutely no disk access, get something else with 64bit PCI for a good SCSI setup in the future.

      If you are that tight for cash, get a dual AMD with real 64bit PCI. Don't get the MP chipset, those boards have only slightly better PCI and top out at much lower CPU speeds, get a MPX chipset with full 64bit PCI and better CPU support.

      Don't get a crippled Xeon simply for bragging rights. You'll be cheating you
  • I used the search on their page and didn't find one article about AMD.

  • I'm really happy with my Celeron 466s on a BP-6, but lately roll-your-own SMP has been taking a turn for the corporate. :( Where are the dual Duron roundups?! Quantity over quality is the only way to go.
