
Linux SMP Round-Up

Dual Minds writes "LinuxHardware.org is at it again, and this time they cover three of the finest boards on the market. This review covers three dual-processor Xeon boards, and they're the only site that does Linux hardware reviews on a regular basis. Here's a peek: "First thing is that all E7505-based boards are basically the same on the surface due to the basic features of the chipset. They all have dual processor support, support for dual channel DDR, and support for PCI-X up to 133MHz (to name a few). Once a manufacturer gets their hands on the board though, features can be added or it can simply be left as is." Very in-depth and some sweet hardware."
  • by dWhisper ( 318846 ) on Thursday April 10, 2003 @07:05PM (#5706247) Homepage Journal
    An actual comment on the story...

    When reading through the review, I noticed that they only list standard benchmarks, and then a kernel compile benchmark. They never list the actual distribution of Linux used for testing the system. In my experience, the actual performance of a system is dependent on that. I know I had a system that just dragged running Mandrake, but loved Debian to no end. I'm not sure if it's just the kernel base of the system, but most distributions apply some sort of performance optimization (I think) to the overall system. I mean, kernel compilation time is great, but what I'm more curious about is day-to-day operation.

    I guess I've just read too many reviews over the years that focused on benchmark numbers and didn't give any information about performance under everyday use. If this is something geared toward Linux, I'd be more curious about numbers like networking performance, data-access rates, and things like that.

    My other question is how accurately UT2k3 and Quake 3 show the power of a dual-processor Xeon system. Quake 3 supports MP systems, but it has never been shown to make much difference except in large server environments. They give us video benchmarks, and for Quake in particular, there's a framerate ceiling that was hit long before these processors and chipsets, somewhere well past overkill.

    I guess I'm just being nit-picky, but I think a Linux review of a system should concentrate on its strengths, and not on benchmarks that would be similar on a Windows system built to run games.
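The kernel-compile benchmark mentioned above is easy to reproduce at home; here is a minimal sketch (the job-count rule and the make target are illustrative, not taken from the review):

```shell
#!/bin/sh
# Sketch: derive a parallel 'make' job count from the CPU count the
# kernel reports, as you would when timing a kernel compile on SMP.
# The make invocation itself is only echoed here, not run.
NCPU=$(grep -c '^processor' /proc/cpuinfo)
JOBS=$((NCPU + 1))          # CPUs + 1 is a common rule of thumb
echo "detected $NCPU cpu(s); benchmark would be: time make -j$JOBS bzImage"
```

On a dual-Xeon board with HyperThreading, /proc/cpuinfo may report four logical processors, so the wall-clock compile time also reflects how well the scheduler spreads the jobs.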
  • The Sun Dilemma (Score:5, Insightful)

    by Gothmolly ( 148874 ) on Thursday April 10, 2003 @07:12PM (#5706290)
    If you need hardware like this, then you need Support. That's what attracts people to Sun (and now Dell, for instance). And if you need support, you'll take whatever board your System Integrator uses in their boxes.
    To wit:
    If you need this, you'll buy it from someone.
    If you buy it from someone, you have no choice of HW.
    Thus, this review is useless.
  • by thesadjester ( 87558 ) on Thursday April 10, 2003 @07:22PM (#5706335)
    I think for a corporation, support is a larger factor than anything.

    A good support plan can save lots of money, and frankly, having someone in house build large servers gets expensive after a while. That's why Dell does so well :). Good support.
  • by spoonist ( 32012 ) on Thursday April 10, 2003 @07:31PM (#5706393) Journal

    I have never ever bought a system. I have always (since the '80s) built systems myself. Some of the advantages are as follows:

    More bang for your buck - you get parts superior to those in a run-of-the-mill system

    Choice - there are A LOT of good parts to choose from

    Get what you want - since you're picking and choosing, you can get features you really want and not get features you don't want.

    Cheaper - the systems I've built have been comparable to ones sold by Dell, etc., but at a fraction of the cost

    Cheaper - I can scavenge / salvage old parts from old systems for new systems. Video card still decent? Use it! Network card still state-of-the-art? Use it! Hard drive still going strong? Use it!

    No floppy drive - :-) I haven't used stupid floppies in YEARS. Only relatively recently have systems made floppy drives optional.

    Quiet - I'm able to build quiet / silent systems because I can pick my parts

    Intimacy - NO, not THAT kind! Since I built the system, I am intimately familiar with it. I know what to try/fix if something goes wrong.

    Linux/OpenBSD - since I'm picking parts, I can ensure that they'll work out-of-the-box with my OSes of choice

    No Microsoft Tax - I have been 100% Microsoft-free for, geez, like 8 years now... (see Cheaper)

    Others - I'm sure there are other reasons, but those are the ones I can think of off the top of my sleep-deprived head

    Sure, there are lots of downsides to building your own (support, warranty, whatever), but I've found that the reasons above more than outweigh the downsides.

  • by Anonymous Coward on Thursday April 10, 2003 @07:48PM (#5706506)
    Most of the time dual CPUs are a waste of money.

    What makes the difference is how much RAM you have and how well tuned your OS is.

    For instance, for years ftp.cdrom.com was run on a single PP200 with 1 gig of RAM - something like 3600 simultaneous FTP connections were being served from it!!

    Now let's see: you can build a server using an nForce2 board with dual-channel RAM - say 1GB (2x 512MB) - and an Athlon XP 2500+ (Barton core). This setup would be ideal - you can get it in microATX format with everything on board. This means you could actually fit two machines in a 1RU case :)
    Oh, and IDE hard drives with 8MB of cache on board are now cheap and offer great performance. Or you could use a case like the H340 from AOpen - have two servers, one as a hot backup :)
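Whether one box can hold thousands of connections, as in the ftp.cdrom.com example above, depends largely on kernel limits like the number of open file descriptors. A hedged sketch of where to look on Linux (the commented values are purely illustrative):

```shell
#!/bin/sh
# Sketch: inspect the limits that cap how many simultaneous FTP
# connections one machine can serve. Raising them needs root, so
# this only reads the current values.
cat /proc/sys/fs/file-max      # system-wide open-file limit
ulimit -n                      # per-process file-descriptor limit
# To raise them one might run (illustrative numbers, as root):
#   sysctl -w fs.file-max=65536
#   ulimit -n 8192
```

With limits like these tuned, RAM for the buffer cache does far more for a static-file server than a second CPU would.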
  • by Junta ( 36770 ) on Thursday April 10, 2003 @08:31PM (#5706799)
    For a *business*, building a server is almost always the wrong path. When buying a prebuilt system, that support and QA is vitally important. Even in popular combinations, the amount of testing in a home-brew system is nil. Even if the IT staff *knows* what they are doing, the staff can be shuffled around, quit, whatever, and leave the business in a difficult situation. Even if the staff is static, dealing with a defective, warrantied part is occasionally difficult, as the hardware company may try to blame other parts in your system or the software being run before offering repair or exchange, whereas Dell, Hpaq, IBM, and the like will bend over backwards to kiss the asses of business customers and really have no one else to blame if the whole package comes from them. As the complexity of a system increases, the more vital it becomes to have a vendor ready to stand by the product as a whole, as the added complexity gives individual hardware vendors more things to blame. Servers are certainly a significant step up in complexity, with multiple processors and multiple mass-storage busses and devices.

    Plus, there are just some things you cannot do when you roll your own system that server vendors provide, *particularly* in the rack environment. Blades are great for racks, but you certainly can't build your own. The health monitoring and management software that comes with servers from the big names is very nice and not possible in your home system. I know IBM 1U servers nowadays come with built-in KVM-like functionality, where you just run a plug from one 1U server to the next and one to the previous server, and all the systems in the chain understand that if they receive a certain key sequence on the keyboard, they switch to the appropriate system. KVMs for racks full of servers are typically a nightmare for cable management, so this is a nice solution...

    Now for home use, home-built is pretty much fine. Slight downtime while you fight it out with the vendors is no big deal. The savings and intimate knowledge of your system have more value (unless you are going to fire yourself...) than they do in a business, where the extra cost is negligible compared to the budget, and where the guy who builds it may be gone next week. And the bonuses don't matter as much in a standalone system as they do in the middle of a lot of other racks.
  • Rant Mode (Score:3, Insightful)

    by Bios_Hakr ( 68586 ) <xptical@gmEEEail.com minus threevowels> on Thursday April 10, 2003 @10:59PM (#5707523)
    Ok, don't think I'm going off on you, cuz I'm not:

    I am so tired of people telling me what I need as opposed to what I want. You know the type. "You don't NEED an SUV, just buy a minivan." "You don't NEED a 500W power supply, 350W is more than enough." "You don't NEED dual procs, a single, faster proc is more economical."

    I have some requirements for my home PC. One of those is that I should never like the machine I use at work more than the machine I use at home. I like the snappiness of dual procs. I like the ability to play a game while I rip a DVD. I like it when Gentoo slams through an emerge.

    If someone has the money to pick up a mobo, dual Xeons, and an assload of RAM, either be happy for them or shut the hell up.
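For what it's worth, getting Gentoo to slam through an emerge on a dual-proc box is mostly a matter of the MAKEOPTS setting in /etc/make.conf. A sketch with illustrative values (tune CFLAGS to your own CPU):

```shell
# /etc/make.conf fragment (Gentoo) - illustrative values only
MAKEOPTS="-j3"                       # jobs = CPUs + 1 on a dual-proc box
CFLAGS="-O2 -march=pentium4 -pipe"   # the Xeon shares the P4 core
CXXFLAGS="${CFLAGS}"
```

Portage passes MAKEOPTS to every package build, so both processors stay busy without any per-package fiddling.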

  • by Sloppy ( 14984 ) on Friday April 11, 2003 @01:57AM (#5708413) Homepage Journal
    Alas, I've seen no Athlon boards with PCI-X. And the only dual-memory-channel boards seem to be single-processor. Not that those things are necessary...

    I wonder if the soon-to-come Opteron is why the board makers have been ignoring the Athlon MP in the last few months.
