The Ultimate Linux Box 2001
savaget points to this Linux Journal article which covers building a superior personal computer for general usage. See if you agree with the choices that Rick Moen, Daryll Strauss and Eric Raymond made in building their dream box.
Cheap Linux box (Score:4, Interesting)
My budget doesn't allow ultimate boxen... I'd be more interested in seeing information on ultra-cheap (but still decent and reliable) systems. An older guide [ls.net] exists, but it hasn't been updated in a long time.
Re:Cheap Linux box (Score:3, Funny)
Re:Cheap Linux box (Score:2)
That 20-year-old keyboard probably cost a lot more than one of Fry's $12 specials. It's a steel-cased, tactile-feedback (clickety click) IBM PC model that probably cost about $100 when new. But if you do a lot of writing like Eric Raymond does, the moderate additional cost is undoubtedly worth it.
Re:Cheap Linux box (Score:2)
Re:Cheap Linux box (Score:3, Interesting)
My approach to the ultimate Linux machine is quite simple: I buy a new machine every two years, but keep my 21" monitor across upgrades (and my keyboard now!). Backups are handled by buying a new disk every 9 months (capacity has doubled by then; I just mirror everything and then throw out the smallest disk on my machine: 160GB now). If I ever hear swapping, I upgrade immediately (512MB now).
This obviously isn't the most economical solution, but it is the most efficient if I assign a $/hour number to my time.
Re:Cheap Linux box (Score:2, Funny)
Pr0n and MP3's. What else is there to do with 160GB?
Re:Cheap Linux box (Score:4, Informative)
God Box [arstechnica.com]
Hot Rod [arstechnica.com]
Budget box [arstechnica.com]
SCSI: why? (Score:4, Insightful)
I realize the SCSI disks, especially the close-to-"SCSI 3" ones mentioned in the article, would decrease disk latency, but is it really that much different from 7200 or even 10000 RPM ATA100/ATA133 drives? And unless you have onboard SCSI, you have to go through the already busy PCI bus. As far as I'm concerned, it's not worth the price difference.
Re:SCSI: why? (Score:3)
It's not just that; even with the new ATA100 and ATA133 systems, IDE still slows down your system, especially when it chokes, even under Linux. It's also just a lot more likely to choke. At least, that's been my experience, and it's not like I've been using crappy hardware.
Re:SCSI: why? (Score:2)
Re:SCSI: why? (Score:2)
Re:SCSI: why? (Score:2)
Whether or not it's worth getting SCSI comes down to whether you have the money to spend. If you're on a tight budget, SCSI is probably out.
IOW, I think budget requirements dictate whether or not it's worth getting SCSI. Given a $1000 budget, SCSI is almost certainly out; given a $2500 budget, SCSI becomes a good option. It would be interesting to see machines configured for different price targets (e.g., like Sharky Extreme's guides, only for a Linux machine).
Re:SCSI: why? (Score:5, Informative)
1) For random access to a CD-ROM, SCSI kicks IDE's butt (on my system). When ripping a CD, especially with overlapping sector reads, my 24x SCSI CD-RW kicks my 52x IDE CD-ROM's butt.
2) I get better performance from my IDE CD-ROM when using Linux's SCSI emulation. That was quite a surprise.
3) SCSI drives typically have 5 year warranties, whereas IDE drives typically have 3 year warranties.
4) My IBM Ultrastar (SCSI hard drive) is much quieter and cooler than my IDE Maxtors and IDE IBM Deskstar. However, new IDE drives may have caught up.
5) You have to be really careful with IDE drives in order to get good performance. For instance, I've seen an IDE drive unable to sustain more than 2MB/sec when attached to the middle of an IDE cable, but sustain 6MB/sec when attached at the end (these speeds are for writes, not reads). With SCSI, once it works (which can be a pain if you skimp on cabling and termination), it goes *fast* and is *robust*.
6) Processor overhead: transferring data between my SCSI devices requires far less CPU help than transferring data between my IDE devices (I believe I have all the right DMA stuff configured for my IDE devices -- it helps, but doesn't make things as nice as with SCSI). The implication is that writing CDs on a SCSI system is more robust than on an IDE system. I've never had a buffer underrun, even when writing CDs while the system had a sustained load over 2.0.
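For anyone chasing the "right DMA stuff" mentioned in the list above, the usual tool is hdparm. A rough sketch, run as root; the device name /dev/hda is an assumption, and the safe flag set varies by kernel, chipset and drive:

```shell
# Show the drive's current settings (look for using_dma and io_support)
hdparm /dev/hda

# Enable DMA and 32-bit I/O transfers on the first IDE drive
hdparm -d1 -c1 /dev/hda

# Benchmark sustained buffered reads; compare the figure before and after
hdparm -t /dev/hda
```

On flaky chipsets `-d1` can hang the bus, so it's worth testing before adding it to a boot script.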
And once you use SCSI, you can be a SCSI snob! You almost have to be, in order to justify the price for a home machine (unless you work at home like I do). SCSI really is the right way to do things. However, SCSI is doing its best to kill itself off. In that way, SCSI versus IDE is a lot like OS/2 versus Windows.
-Paul Komarek
No.. (Score:2)
If you look at single-drive systems, IDE can arguably be just as good as SCSI. Certainly, it's an order of magnitude cheaper.
If you go to multiple-disk systems, right away you start seeing performance increases with SCSI and drastic decreases with IDE. Anyone who says otherwise hasn't tried it.
Now, I don't know how well IDE performs under something like the 3ware IDE RAID controllers... that may be a different story (separate channel for each drive, etc.), but that's not that common.
Remember, part of this guy's goal was future-proofing his system. SCSI is more expandable in the long run; he can add a new drive later. SCSI also has longer cable range, so external high-speed devices are not out of the question.
Most people building a new 'killer box' will probably opt for some big, fast, cheap IDE drives rather than the SCSI setup, given the huge price difference.
Re:SCSI: why? why not firewire? (Score:2)
Re:SCSI: why? (Score:5, Interesting)
IDE is *only* good in a single-drive / single-controller situation, and even then (according to most drive manufacturers' websites) you can only push maybe 35MB/sec, so the so-called controller latency is NOT an issue. Agreed, IDE will perform the same on a single-drive system, but as soon as you add another drive onto that channel you've potentially halved the performance of both drives. You could add another controller, but that really starts getting ridiculous (I've got one system with over 300 drives connected to it; I'd like to see an IDE system keep up with that).
There are also quite a few things in the SCSI protocol that you are overlooking. Command Tag Queueing is a very big one: I can send multiple commands down the SCSI chain and the drive can re-order them to streamline where it's going to be getting data off the platters (enabling this gives a significant performance boost on our arrays). Add to that the fact that IDE is completely and totally CPU-driven: really push your CPU and you'll either have to give up CPU cycles to your app or give up performance on your drive.
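A toy illustration of the queueing idea (not real SCSI, just the principle): given requests for scattered block numbers, a tagged-queueing drive is free to service them in position order to minimize head movement. The block numbers below are made up.

```shell
# Pending requests, in arrival order:
printf '%s\n' 910 14 388 202 655

# The same requests, re-ordered by block position the way a
# command-queueing drive would service them:
printf '%s\n' 910 14 388 202 655 | sort -n
```

A plain IDE drive of this era has to take the requests one at a time, in arrival order, seeking back and forth across the platter.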
Could you please provide a link to Google's use of IDE drives for all their storage? I can't seem to find a page saying that their Linux boxes are all running on IDE only.
http://www.acc.umu.se/~sagge/scsi_ide/#comparis
http://www.dell.com/downloads/global/vectors/at
http://www.adaptec.com/worldwide/product/marked
http://www4.tomshardware.com/storage/01q1/01012
Google drives are IDE (Score:4, Informative)
Do a Google search on google cluster ide. The third result [intel.com] is an Intel customer profile on Google:
I like two IDE drives (one per channel), plus SCSI for the CD-RW and/or DVD.
Re:SCSI: why? (Score:3, Informative)
Basically, the Escalade cards make a bunch of cheap IDE drives look like a big SCSI drive. What could be better? You get the intelligence of SCSI and the protection of a RAID, at the price of IDE. With just a few IDE drives, you get scalding performance that more than beats the most expensive SCSI drives.
Sadly, 3ware has decided to get out of the controller card business. I've bought a couple of cards that I'm going to keep until I need to build some more file servers; they say that they are going to keep selling the cards until December, but only until then.
thad
Re:SCSI: why? (Score:2)
Re:SCSI: why? (Score:2)
With an IDE RAID card, like 3ware's Escalade controllers. Great if you need a lot of storage, and want something that performs better than software RAID (which is *terrible* on 4 IDE disks) but cheaper than SCSI.
Re:SCSI: why? (Score:2)
You also agree with my statement that with just one drive, the performance of the controller is not an issue either.
But you might want to think outside the box sometimes; most of the machines I run on have 2x 1.6GB/sec XIO buses supporting 6 cards each. Even my smaller Sun/SGI boxes have a PCI bus that does a sustained 200MB/sec.
So to out-push the bus using drives that have a sustained rate of 35MB/sec, I'd need 6 IDE drives and 6 IDE controllers at a minimum. And if I'm not doing one constant read or write, but multiple transactions, I'm going to need even more drives & controllers to fill up my bus.
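That drives-per-bus figure can be sanity-checked in one line of shell arithmetic (the 200MB/sec bus and 35MB/sec per-drive numbers are the ones quoted in this thread):

```shell
BUS=200     # sustained bus bandwidth, MB/sec
DRIVE=35    # sustained per-drive throughput, MB/sec

# Ceiling division: drives (each on its own controller) needed to
# saturate the bus with one big sequential read or write
echo $(( (BUS + DRIVE - 1) / DRIVE ))
```

That prints 6, matching the claim above; random, transactional I/O delivers far less than 35MB/sec per drive, so the real number is higher.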
SCSI controllers are not really any more expensive than IDE controllers these days (Price Watch has SCSI-3 for $24).
Correction: the limiting factor for IDE storage is the physical space to fit drives in the case. That single server with 300 drives goes out to my 46-terabyte EMC 8730 frame running SCSI over switched fabric, and I can push the XIO bus to a full 1.6GB/sec, so bus speed is NOT a limiting factor.
http://www.sgi.com/origin/2000/numa_tech.html
http://www.sgi.com/Products/PDF/1150.pdf
http://www.sun.com/servers/midrange/e4500/detai
http://www.sun.com/servers/workgroup/220r/featu
Re:SCSI: why? (Score:2)
Normally you have this type of configuration in an HA environment where one system can mount the drives of another; it can see them, it just doesn't mount them until the other node is down, often via a "shoot the other node in the head" procedure (I don't get to use that phrase enough): just turning off the other system's power when a failure is noticed.
The other time you would use this is when you have a clustered filesystem, e.g. Veritas VCS, SGI's CXFS, Linux's GFS, etc., so everybody gets to agree on who's talking to which inodes when. This does add some overhead, but if you need to share data in real time and need something a bit lower-level than NFS, this is where you might look.
You do not want to mount the same drive on two different systems at the same time if they are not running some type of cluster management software (either HA or filesystem). The filesystem *will* get corrupted.
Sorry, I don't have any links, but if you do some searches on CXFS, VCS and GFS they'll probably give you all the info you need.
Re:SCSI: why? (Score:2)
2) Cable length... okay. You win that one for sure.
3) Again, not sure what you mean. You aren't supposed to hot-plug SCSI devices unless the ports are specifically built for it.
Great box - for a Millionaire like Raymond (Score:3, Insightful)
Okay, Raymond isn't a millionaire any more, either. But he does have corporate backing, which is a hell of a lot more than I've got. When I feel like dropping 15 large on a personal computer, I think I'll go for an OS a bit more upscale than Linux. Solaris, maybe.
Anyway, "dream" is the key word in the title of the article. No real Linux users (mostly college students, AFAIK) can afford a PC like ESR has designed. And I'm not sure what they'll accomplish by "dreaming" about the "ultimate" Linux box when the whole point of Linux is to be able to use whatever old, junk hardware you can scrounge.
Re:Great box - for a Millionaire like Raymond (Score:3, Interesting)
In case you are wondering why he is building two, it clearly states that one is for him, and the other is for Linus Torvalds!
Lucky dog.
Re:Great box - for a Millionaire like Raymond (Score:3, Funny)
the whole point of Linux? Erm... (Score:4, Insightful)
Maybe that's YOUR whole point in using Linux, but it sure as hell ain't mine! If that's the way you feel, you'd be better off getting some nice DOS 3.11 disks somewhere.
Re:Great box - for a Millionaire like Raymond (Score:2, Insightful)
Linux is far better than Solaris for a desktop.
I concur (Score:2)
I work with Solaris (7.0, 8.0, etc.), FreeBSD, and GNU/Linux (debian, Mandrake 7.2, and DEC Alpha Red Hat) every day and can firmly attest that both FreeBSD and GNU/Linux are far, far nicer systems on the software side than Solaris for anything one would want to do on a desktop system, and for most things one would want to do on a server.
Fifteen large would probably mean for me a 50" plasma screen, a $4k Pioneer DVD authoring system, and recycling my existing hardware (since I've just blown the $15k budget on the other two items), but then that's me.
Plus Raymond doesn't even know what he's talking about (Score:5, Informative)
Check these examples out:-
- "Do get a pure PCI-bus machine (not a hybrid PCI/ISA design, you sacrifice about 10% of peak performance with those)."
This is pure humbug - you do not get 10% greater performance by buying a motherboard that has no ISA slots (like those Asus KT boards), because the fact is that even if they have no ISA slots, they still have an ISA bus built into the southbridge to support legacy stuff like the printer/parallel port, the serial port(s) and the PS/2 mouse and keyboard ports. Now as far as the USB ports are concerned, I'm not sure whether they use the ISA bus or the PCI bus.
- "For the power supply, the three of us easily agreed on a vendor: PC Power & Cooling"
Bloody typical. Yet the reality is that the PC Power & Cooling mob are just 'badge engineers' - they re-sell other manufacturers' products with their own brand markings & inflated prices.
For example their full tower case [pcpowercooling.com] is just a California PC full tower case with a custom bezel on the front [pcpowercooling.com].
Now as far as their power supplies are concerned: I remember when they used to sell a 'Silencer' model 275 watt power supply. In fact, all it was was a generic 300 watt power supply, de-rated down to 275 watts so it was understressed enough to cope with a retroactively fitted low-speed 'silencer' fan.
As far as power supplies are concerned, I recommend the Enermax 350 watt EG365P-VE(FC) or 450 watt EG465P-VE(FC) power supplies [enermax.com.tw]. They have a push/pull dual fan design (an 80mm exhaust fan at the back & a 92mm intake fan at the bottom), which means the fans can run at a much slower (therefore quieter) speed without losing any cooling performance. The power supply comes with a standard 3-pin motherboard sensor connector cable, so you can plug it into a spare motherboard fan header, which means you can see what revs one of the power supply fans is running at in the PC monitor applet in your system tray (& it can warn you with an alarm if it fails). Also, the power supply comes with a thermostat on a connector which can be attached to the heatsink, or against the CPU core itself if it's an exposed flip-chip type core (as long as it has no heat spreader like the AMD K6 series has); this controls the fan underneath the power supply, which only runs when necessary. Consequently these power supplies are so bloody quiet you sometimes think they're not running.
- They also recommend the Thunder K7 (S2462) Motherboard [tyan.com], which is a huge waste of money, as you can buy a very similar motherboard made by the same manufacturer at a much cheaper price (the Tiger MP (S2460) Motherboard [tyan.com]). Also, the 'Tiger' has a standard ATX connector rather than the proprietary connector the 'Thunder' has, which means you can use normal ATX power supplies rather than the inflated-price proprietary power supply the 'Thunder' requires.
- Also, even though this is supposed to be an 'Ultimate Linux Box', they fail to mention that both IDE floppy drives (if you are using the IDE bus) & SCSI floppy drives (if you are using a SCSI bus) are available. Even better, one can get the LS-120 variety, which is compatible with both 120MB 'SuperFloppies' & standard 1.44MB floppies.
- They spend 4 paragraphs talking about 'Noise Control and Heat Dissipation' without really saying anything, when all they really needed to say was that it's best to use bigger fans at slower speeds - such as 12 volt 120mm fans running at 7 volts (positive hooked up to the 12 volt line while the negative is hooked up to the 5 volt line, leaving 7 volts across the fan). The quietest fans (all other things being equal) by brand are the Papst Simtec-bearing fans, the Sanyo Denki fans & the L1A1 versions of the Panaflo fans.
- They recommend a pretty well generic (though above average) Antec case, but this is supposed to be an ultimate Linux box.
Therefore I recommend the Addtronics [addtronics.com] 'Server Cases' (their full tower cases) - the 7890 & the 7896. They are great cases, with great cooling options, filtered intakes, butterfly doors & a slide-out 'mainboard & I/O backplane tray'. Supermicro sell their own badge-engineered version of this full tower case.
Other good full tower cases are the all-alloy ones made by Lian Li [lian-li.com], such as the Lian Li PC-70 aluminium full tower computer case [dansdata.com] & the Lian Li PC-76 server case [dansdata.com].
If a mid-tower case is more your style, both Lian Li & Coolermaster make great alloy ones, which are great for LAN parties. In this regard I recommend the Lian Li PC-60 computer case [dansdata.com] & the Coolermaster ATC-201SX [coolermaster.com]. Both cases are unbeatable as mid-towers - they have everything. I personally think a mid-tower case must have four 5.25-inch drive bays, so you can have both a CD burner & a DVD drive, plus 2 HDDs in removable pull-out caddies.
For an ultimate box it should have the all-alloy (better heat dissipation) twin-fan caddies that, again, are made by Lian Li [lian-li.com]. The 3 best models appear to be the RH-620 [lian-li.com], the RH-600 [lian-li.com], & the RH-29 [lian-li.com].
For the motherboard, I'd recommend one with the SiS 735 chipset [sis.com]. Preferably it would have an AGP Pro slot and 6 PCI slots, one shared with an ISA slot at the bottom. It would have BOTH 2 DDR slots & 2 normal SDRAM slots. It would have an integrated RJ45 network connector above the 2 rear USB ports, plus integrated 'hardware' 5.1 sound (IWill has brought out a couple of boards of late with integrated 'hardware' 5.1 sound; they have the 3 standard female jack ports under the MIDI 'D' plug at the back, plus the extra connections hook up via a ribbon cable & a slot backplane cover). The board would also have integrated SCSI & FireWire, like some of the MSI Pro or Turbo or whatever boards have, plus an extra IDE controller (Promise, Highpoint, etc.) so there's the potential for 8 drives (HDD, CD, DVD, LS-120, ORB, etc.) rather than the standard 4. The extra IDE controller would also have RAID 0, 1 & 1+0 options (most have this built in, though it's sometimes disabled). All the integrated stuff must be capable of being disabled, either via jumpers or in the BIOS.
Twin Athlon XP/MP CPUs would be the go. (The XPs work fine in SMP setups; they just are not certified/supported for such configurations - that's the main difference between the XP & MP: the MPs are certified/supported for SMP use.)
That's enough raving for now.
Re:Great box - for a Millionaire like Raymond (Score:2, Interesting)
I run Linux and Windows on these computers. Because that's what I need for my software needs.
I don't see why I would run Solaris on PC equipment - if I need a Solaris box, I should probably get some cheap UE220R or UE250 for that (try dotcom sales for cheap sun hardware). Same with other OS'es - if I want a MacOS, I'll get a Mac. If I want OpenVMS, I'll get an Alpha. And I don't want HP/UX, so I'm not going to get one.
What's most expensive? Storage subsystems. Disk space is just plain expensive. Even with IDE disks, a rackmount 4U computer case with 7 IDE drives in removable bays and an IDE RAID controller costs about 2700 USD. And that's the absolute cheapest way to get about 240 visible, redundant, fast, reliable gigabytes of storage.
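Those two figures (both the poster's numbers, not mine) put the cost of cheap redundant storage in perspective:

```shell
# $2700 for ~240 usable, redundant gigabytes of IDE RAID storage
awk 'BEGIN { printf "%.2f dollars per GB\n", 2700 / 240 }'
```

That works out to a little over eleven dollars per redundant gigabyte, before counting the network to reach it.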
Next most expensive thing? Networking. A 100BaseT switched LAN just doesn't cut it - it's gotten slow. Of course I could just get enough disk space for ALL computers, but that's expensive too, so I use a lot of network storage, which puts real strain on the network... let alone trying to do anything serious over it.
Of course everyone has different needs. Don't ever even think of AV work as a hobby... digital video and audio equipment (the pro-grade stuff) costs an arm and a leg, and make-do equipment has serious performance bottlenecks.
All in all, a decent new computer would cost me about 10-20k USD. However, if I just wanted to play the latest, greatest 3D FPS games, the dream setup would be a lot cheaper, coming to perhaps 3-4k USD - and that's what most people consider the "expensive computer needs" category. That's partly because I already have about 15k USD in my AV rack; a "dream" gaming station should, IMHO, include good quality audio hardware, which probably costs much more than the computer.
Now, I know that I may not be a "Real Linux User", having used Linux only from kernel 0.98[some letter], only using it for work and hobbies, not having written more than three kernel drivers (subcontracting, and for custom hardware that most of you have never seen and never will). But, for me, the ULB is much more than it is for ESR, as I care about my storage subsystem's reliability and speed more than most. It's usually the worst bottleneck, and there's never enough of it.
And yes, two of my Linux boxen are indeed old junk that wouldn't run newest WinME/2k/XP with any speed that we could talk about. They're external network connection gateways and thus don't need to be fast.
linux box (Score:2, Informative)
you can pick up some bargains thanks to the current recession.
kinda sad, actually...
Ars Technica (Score:4, Informative)
SCSI Optical drives? (Score:3, Informative)
Re:SCSI Optical drives? (Score:2)
Re:SCSI Optical drives? (Score:2, Insightful)
> wouldn't the new plextor 24x be way faster even if it is ATAPI?
First of all, you can't realistically burn media at 24x; current media technology doesn't support it. The 24x drives only READ at that incredible spin rate, so don't get stuck on the spin rate. The focus here is bus type.
To illustrate the point: with the buffering capability and resulting sustained throughput of even a mediocre SCSI flavor, you can read directly from a SCSI ROM and write directly to a SCSI writable (CD-R, RW, etc.).
The numbers are looking better for ATA, but it's still not there.
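For what it's worth, the SCSI-to-SCSI burn described above is usually driven with cdrecord on Linux. A rough sketch; the dev= triple and the image filename are placeholders, and a real run needs root and your own bus,target,lun values:

```shell
# List SCSI devices so you can find the burner's bus,target,lun triple
cdrecord -scanbus

# Burn an ISO image at a conservative speed (dev=0,4,0 is a placeholder)
cdrecord -v speed=8 dev=0,4,0 image.iso
```

On an ATAPI burner of this era you'd first enable the kernel's ide-scsi emulation so the drive shows up in -scanbus at all.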
--
Allen Gray
why no RAID? (Score:4, Insightful)
I know RAID is overkill for most workstations, but so are a DDS drive and separate home and system drives. If you want fault tolerance (the stated reason for two drives), having one system drive and one home drive with no RAID means you spend your money only to become twice as vulnerable to downtime from drive failures.
If you want to avoid downtime, especially if money is no object, get a RAID controller and have a single filesystem mirrored over two physical disks. Not only will it be more reliable, it will be faster too.
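Even without a hardware controller, the raidtools of the day give you the same two-disk mirror in the kernel. A sketch, assuming the disks are partitioned as /dev/sda1 and /dev/sdb1 (both names are placeholders):

```shell
# /etc/raidtab -- describe a RAID 1 mirror of two partitions
cat > /etc/raidtab <<'EOF'
raiddev /dev/md0
    raid-level            1
    nr-raid-disks         2
    persistent-superblock 1
    chunk-size            4
    device                /dev/sda1
    raid-disk             0
    device                /dev/sdb1
    raid-disk             1
EOF

# Build the array, put a filesystem on it, and it mounts like one disk
mkraid /dev/md0
mke2fs /dev/md0
```

Reads can be serviced by either half of the mirror, which is where the speed-up the parent mentions comes from.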
Re:why no RAID? (Score:2)
I use 2 drives, but one holds temporary backups until they can be burned onto CDROM.
If I wanted a high end station, that one they designed (with a few mods) would be good. However, that is not the case-- I am interested in budget hardware for my home PCs.
and at 15 grand... I could buy an entry level RS/6000 Workstation for less than that...
Re:why no RAID? (Score:2)
Re:why no RAID? (Score:4, Insightful)
Re:why no RAID? (Score:2, Insightful)
it's the fundamental reason that, despite our money-is-no-object premise, we're not going to relatively exotic technologies like liquid-cooled overclocking or RAID disk arrays for a performance boost. Sure, they may initially look attractive; but overclocked chips and banks of disk drives require massive cooling with lots of moving parts, and it's not good to be trying to do creative work like programming with anything that sounds quite so much like an idling jet engine sitting beside one's desk.
Which is fair enough if you are talking about a seperate drive enclosure with 12 drives in it. But 1 RAID capable SCSI controller and 2 drives mirrored isn't going to be any hotter or louder than 1 non RAID capable SCSI controller and 2 drives with no mirroring.
So my original point still stands - getting 2 drives without RAID gives you no benefit over a single drive or 2 drives with RAID.
Re:why no RAID? (Score:2)
Agreed. And even with the system he built, there is no reason not to use Linux software RAID. With RAID 1 (mirroring) he would get half the data storage, but he's using big enough disks. He didn't say if he was using the 18GB version or the 36GB version, but even 18GB is enough for a nice Linux system.
I recently built a system with Linux software RAID 1. I used IDE drives because I like the price/performance. The system boots from one disk, but I have a boot floppy ready to go if that is the one that dies. This was so easy to do, and it works so well, that I am surprised he didn't try it. (Of course he still can, if he changes his mind.)
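A mirror like the one described above is easy to keep an eye on from the shell. A sketch using the era's raidtools; /dev/md0 and /dev/sdb1 are assumed device names:

```shell
# Watch array health: a [UU] status means both halves of the mirror are alive
cat /proc/mdstat

# After replacing a dead disk, hot-add the new partition and let it resync
raidhotadd /dev/md0 /dev/sdb1
```

The resync progress also shows up in /proc/mdstat, so you can tell when the mirror is redundant again.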
steveha
Re:why no RAID? (Score:3, Interesting)
You either do RAID 1 mirroring or RAID 0 striping (with 2 ATA drives), or RAID 0+1 mirroring-plus-striping (this takes 4 drives). Full striping plus parity checking is RAID 5, but (someone correct me if I'm wrong) this isn't available for inexpensive ATA disk arrays. It would be nice if it were, but it would be slower than using a couple of SCSI disks and taking regular backup images of them. (What's best for backup is for yet another discussion.)
RAID 5 can be had for SCSI disks, at impressive prices, at which point you're better off with Gb Ethernet or Fibre Channel NAS or SAN storage. To do RAID 5 right, you need (some multiple of) at least 9 disks (8 for data, 1 for parity, with data and parity stripes randomly assigned across the array). The RAID 5 stuff gets rather complicated and expensive (have you priced SAN storage lately? I have, and it runs to 5 or 6 figures to just get started).
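The capacity math for the 9-disk layout described above is simple: one disk's worth of space goes to parity, however the stripes are distributed. The disk size below is an assumption (18GB, one of the drive sizes mentioned elsewhere in this thread):

```shell
N=9    # disks in the RAID 5 array (8 data + 1 parity's worth)
S=18   # GB per disk -- an assumed size, not from the parent post

# Usable capacity: (N - 1) disks' worth of space
echo "$(( (N - 1) * S ))GB usable"
```

That's 144GB usable out of 162GB of raw disk, which is why RAID 5 is the economical choice once arrays get large.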
I like their approach for a high-end Linux machine for personal use. I'm using something similar as I write this (Tekram SCSI adapter with two 10K RPM Quantum 9GB non-mirrored disks). They're right to focus on I/O speed as more important than CPU power. Net bandwidth is the real limiter.
In this, they're just following what was learned long ago on mainframes: tune the I/O subsystem first because that's where you find large delays, then make sure you have enough memory (since Virtual Storage impacts Real/Expanded Storage, which impacts Auxiliary Storage - back to I/O), then tune CPU allocation and capacity last. It's well known that when you finally run out of CPU power (having tuned in this order) it's time for short-term triage (favoring "loved ones" at the expense of discretionary workloads) followed by an inevitable configuration upgrade. This is how it's done, folks.
Re:why no RAID? (Score:2)
The 3ware cards support RAID5. I'd think twice about betting the farm on IDE RAID, but it is supported.
In this, they're just following what was learned long ago on mainframes:
A problem here is that what we know about mainframes could well be wrong for desktop systems. Many services (e.g. http, nntp, email) are IO intensive, while many desktop uses (e.g. compiling software, gaming) are CPU intensive.
Some of it is silly (Score:2, Informative)
But most of it seems to be dead on. The thing I really disagree with is the statement that "SCSI CD-ROMs are a generic item" - a crappy CD-ROM is a crappy CD-ROM no matter what interface it uses. At this point, the only brand of CD-ROM I'm even willing to buy any more is Plextor; even my 40X Toshiba Ultra-SCSI sucks horribly. There are tons of discs it won't read (or will require retries on) that seem to work everywhere else. I find myself using my Plextor CD burner as a CD-ROM all the time, in spite of the fact that I have a separate CD-ROM specifically to avoid racking up unnecessary runtime hours on the burner.
My next CD-ROM will be Plextor's highest-speed CD-ROM drive. They extract CDDA faster than anyone else's drives, read more media, and are just plain faster. My second choice is still Toshiba, but I'm less enamored of them than I once was. As a side note, both Toshiba's and Plextor's drives can be jumpered to use 512-byte blocks for use on legacy Unix workstations, which can be a nice feature. While I don't actually have any of those systems any more, if someone offers me a SPARCstation 10 (or better) for cheap enough, I'll probably buy it, and I'll want a fast CD-ROM.
updating an old project (Score:3, Informative)
The old guide is at
http://www.double-barrel.be/linux_web/clone_hw_
Keyboards (Score:2)
I type around 90 WPM if I'm on the Model M. On anything else, I'm lucky to get 50. The others just don't give your brain the necessary feedback, when the key is down, to realize it's OK to push the next key.
Also important is the fact that on some of the really, really cheap newer keyboards, the keys don't all trigger at the same point in their downward stroke. Since I type fast enough that I actually (subconsciously, mind you) "overlap" my keystrokes - that is, one key is going down milliseconds behind the next - I have seen some really bad keyboards that will actually reorder my keystrokes: even though I pushed key B after key A, key B shows up first. Needless to say, this causes some inaccuracies.
Re:Keyboards (Agreed.) (Score:2)
What's interesting is that there are two very different schools of thought on this. I have friends who absolutely love the Model M's and wouldn't dream of typing on anything else. I have other friends who prefer more silent keyboards. (The Model M does tend to keep SO's awake when you're typing late at night, but those of us who use it understand that we have to make certain sacrifices to use the keyboard of the gods. ;)
Re:Keyboards (Agreed.) (Score:2)
Re:Keyboards (Score:2)
Model M's are pretty close, but don't quite reach the perfect feel of the keyboard that came with the original IBM XT. The XT model, however, won't plug into modern AT/PS2 keyboard ports, due to an incompatibility in the controller chips (?).
I thought all was lost until I found a huge monster of a keyboard - apparently off an original IBM AT. This thing weighs even more than the XT keyboard and has that perfect mechanical feel to it. A couple of keys are in strange places: only 10 F-keys, in two columns down the left side; the Caps Lock is stuffed directly under the right Shift key; and the inverted-T arrow keys are missing. But once you get used to it, it's a great keyboard. =]
Don't expect to find them easily though... This one I recovered from a back-room cabinet in the local high school where I work, sitting on top of a stack of about 25 eight-inch floppy disks from an IBM Displaywriter (no I'm NOT making this up).
noise vs. performance is dead-on (Score:3, Informative)
I'm not sure I agree with the eventual decision to go with PC Power & Cooling--they are occasionally ridiculously overpriced and some of their "quiet" is really just achieved by underpowering the fans--some of the Antec PSs will perform just as well. Also, anyone know if PCPC's power supplies are like their cases (i.e. just CalPC [calpc.com] cases relabeled and marked up)?
Also, I've heard arguments that a large case is not necessarily a boon for good case cooling w/ low noise: large cases require more fans to move the air effectively within - it's not the fact that there's lots of space in a case that makes for cooling, it's moving the air over and away from the components. Seems like a mid-tower (given the low-to-moderate drive bay requirements) with low-rpm 120mm intake and exhaust fans might have been better.
Re:noise vs. performance is dead-on (Score:2)
Check out PCP's full-tower and compare to CalPC's. When I was looking into getting a case 3 years ago, I spoke w/ PCP and CalPC on the phone--no one was willing to give me a definitive yes/no answer to who made the PCP case, but the answer was definitely more like "it's not our policy to discuss that" rather than positively confirming or disconfirming. You might even do a search on the arstechnica case forum--this has come up before as I recall.
As far as PCP PS's go, as I said in my original post, I was just questioning based on what I knew of their cases. I did a lot of poking around a couple of years ago, and the various case/cooling forums (Ars, Anand, HardOCP), while they have their PCP fans, seemed on the whole to rate them as overpriced for what you get--but ultimately that's hearsay. For instance, the last time I was in the market for a 300W PS (two years ago), PCP claimed a great dB rating on their best PS, but the fan was rated at a pretty conservative CFM. It wasn't as though PCP had come up with some revolutionary sound-dampening mechanism that no other PS manufacturer had. At best it was temperature-regulated (but so are a number of PS's) plus stepped down (which would obviously lower the noise, but also the cooling ability). I'll probably be looking at getting a new 400W soon; if you're really happy with your PCP, I'll note that in my anecdotal reference store.
Got it all wrong re: flat panels... (Score:5, Insightful)
"Today's flatscreens also have a really coarse dot pitch with sharp square pixels. As far as I'm concerned, that puts them out of the running for the ULB. I do a lot of writing and, not infrequently, my own typesetting; I want to be able to preview two pages of Postscript at actual size and have the fonts look good."
I'm sorry, but has this guy ever seen a high-end flat panel? I personally own an SGI 1600SW [sgi.com], and not only do you not see the pixels, but you can also preview two Postscript pages side-by-side with its 1600x1024 widescreen aspect ratio. Of course, SGI stopped selling it (*sigh*). But there are other excellent flat panels out there, like the Samsung line that lets you run a TV signal in and do picture-in-picture. I've seen the Samsung ones up close, and they have wonderful image quality. Apple also makes some excellent flat panels (does anyone know whether there is an adapter to run them on PCs yet?)
All I'm saying is that while there are still plenty of reasons to run CRTs, in a "cost-is-no-object" type of article, you should at least consider the high-end flat panels.
P.S. I've seen the dual 1600SW setup, and it is STILL, to this day, the only monitor setup that ever made me speechless with its absolute beauty.
Re:Got it all wrong re: flat panels... (Score:2)
Re:Got it all wrong re: flat panels... (Score:2)
I have one. well, 2, actually [slashdot.org]
so of course I agree that a dual 1600sw setup is a very fun system to have. I'm even thinking of bringing a pair into work to replace the pair of sony tubes they gave me to use.
and dot pitch? what dot pitch? the 1600sw is finer than any crt, in terms of dot spacing, that I've ever seen.
for solid colors and crisp text, the contrast from a pure digital video card + this monitor is pretty unbelievable.
Re:Got it all wrong re: flat panels... (Score:2)
Re:Got it all wrong re: flat panels... (Score:2)
AFAIK, there are 2 adapters to let you go from a DVI to ADC connector:
I also think Eric is mistaken about high-end flat panels, at least as far as finding a two-page screen with excellent dot pitch/image quality. He's still right that there really isn't anything in the 2x1k resolution range that's remotely practical, but 1600x1024 has been available for some time now. Samsung has the 24" 240T and Sun announced a 24" at Siggraph that should be available RSN.
Some of the more valid reasons for not going with lcd that Eric didn't mention: some lcd's have issues displaying fast moving images (e.g. first-person shooter 3d games, dvds) and accurate/consistent color reproduction at all angles (even the Apple Cinema Display has some quibbles in that arena). A lot of the GeForce cards that sport DVI-out also don't really support 1600x1024/1200 via DVI (the Hercules GeForce2 Ultra and GeForce3 being notable exceptions). Not sure about the new Radeons. And getting above 1600x1200 via DVI isn't possible right now AFAIK. I think that's at least one reason why the super hi-res $15k IBM flat-panel needs that custom vid card.
Check out the coffeehaus wiki [coffeehaus.com] for more on getting wide screen lcd support on pc's.
Clueless about CRTs also (Score:2, Insightful)
The "resolution" of a CRT is given by (view size)/(dot pitch). Any more pixels than that are literally wasted because the screen can't resolve one from another. The Mitsubishi 21" CRT he suggests has a viewable diagonal of 20.3 inches and a dot pitch of 0.24 mm, which works out to:
20.3 in * (4 width / 5 diag) * 25.4 mm/in / 0.24 mm/dot = about 1719 dots across
And he suggests running this at 2048 pixels wide? Sure, memory is cheap, but bus bandwidth is teeny on PCs. Display what your monitor will do and no more. Also, if you back off the resolution a bit you could bump it up to 85Hz.
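The arithmetic above can be sketched in a few lines (assuming a 4:3 tube, so width is 4/5 of the diagonal; dot-pitch measurement conventions vary by manufacturer, so treat the result as approximate):

```python
# Back-of-the-envelope check of the CRT resolution argument above.
def max_resolvable_pixels(view_diag_in, dot_pitch_mm, aspect_w=4, aspect_h=3):
    """Horizontal dot count a CRT can physically resolve."""
    diag_units = (aspect_w**2 + aspect_h**2) ** 0.5  # 5 for a 4:3 tube
    width_mm = view_diag_in * (aspect_w / diag_units) * 25.4
    return width_mm / dot_pitch_mm

# 20.3" viewable, 0.24 mm dot pitch (the Mitsubishi 21" above):
print(round(max_resolvable_pixels(20.3, 0.24)))  # 1719
```

So a 2048-wide mode really is asking for more pixels than the shadow mask can separate.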
On an LCD a pixel is a pixel, and they're sooooo crisp compared to a CRT. They say the pixels are blocky, the rest of us call that clarity. Awesome clarity compared to a CRT.
I hate crap like this because these guys are supposed to be authorities, but they're spoiled brats whose hardware visions are 5 years out of date. Sure I'd like to use SCSI for everything, but get real. Looked at HD prices lately?
And apparently these guys haven't used a Contour keyboard (I don't have a link offhand). I've put my hands on one, and you meld with these babies: no stretching for keys or shifting your hands around, it's just BAM!
SGI isn't the pinnacle anymore (Score:2)
Actually, no... (Score:2)
I just followed your link and found the FAQ [apple.com] for the Apple flat panel monitors. According to the FAQ, the monitors can only run with a Power Mac G4 or G4 Cube.
It is incredibly disappointing to see a company come up with great technology and then not devise some sort of adapter for the majority of computers (PCs and older Macintoshes). I am honestly surprised that Apple wouldn't sell some sort of Apple-to-DVI/USB adapter. Guess I'll have to stick with the PC digital monitors. What a shame. :\
Re:Got it all wrong re: flat panels... (Score:3, Informative)
my xf86config file [grateful.net]
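For anyone curious what such a config looks like, a widescreen LCD mode in XFree86 4.x is set up roughly like this (a sketch only--the Modeline timing numbers below are illustrative placeholders, not real 1600SW values; generate correct ones with the `gtf` utility, e.g. `gtf 1600 1024 60`):

```
Section "Monitor"
    Identifier  "flatpanel"
    HorizSync   30-70
    VertRefresh 50-75
    # Illustrative timings only -- compute real ones with: gtf 1600 1024 60
    Modeline "1600x1024" 103.1 1600 1680 1848 2096 1024 1025 1028 1060
EndSection

Section "Screen"
    Identifier   "screen0"
    Monitor      "flatpanel"
    DefaultDepth 24
    SubSection "Display"
        Depth 24
        Modes "1600x1024"
    EndSubSection
EndSection
```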
There are ways to do IDE right (Score:2, Insightful)
Well-designed controllers like the Escalades provide out-of-order execution, scatter-gather, etc. at the controller level, and offer a fully switched bus for all data. The 7000 series also has 64-bit PCI support (and actually utilizes it).
Forget the HPT36x and 37x controllers, as well as most Promise controllers: all the smarts are in the driver software, and they suck performance-wise. High-end Adaptec controllers appear to be OK, but they were pricey compared to the 3ware controllers last time I looked.
One controller with one or two drives may be faster with SCSI, but dollar for dollar, 3ware and IDE walk all over them (particularly with database servers, where you want a few spindles to minimize blocking seek activity).
Re:There are ways to do IDE right (Score:3, Informative)
Ultimate Network Connection? (Score:2, Insightful)
Re:Ultimate Network Connection? (Score:2, Insightful)
Idle? (Score:2)
my PC is never idle! [distributed.net]
Old style keyboards (Score:2)
but good luck finding any.
I can imagine this article causing more religious arguments than almost anything else recently
Mostly right, but a few nitpicks: (Score:5, Informative)
2) We've had IBM Ultrastar SCSI drives break down within weeks on our server at work (emphasis on drives, plural). Granted, this is under a severely punishing workload, but Seagates have been more reliable. Under saner workloads the IBM drives are probably fine.
3) SB Live! series cards are bad news on Athlon systems (as ESR found out), especially if you have other heavy DMA I/O tasks on the PCI bus. They've fixed this with the Audigy, but it doesn't have Linux support yet (AFAIK?). The Turtle Beach Santa Cruz is supported; that's what I replaced my Live! X-Gamer with. Now my AccessDTV HDTV PCI card doesn't cause BSODs (Win2000 SP2). Recommended.
4) Modem? Got cable modem. Don't need no steenkin' POTS modem
5) Microsoft Intellimouse Optical. Scratch off the name if you must, but they're GREAT!
6) Word is that the Tyan Thunder motherboard likes Corsair memory best. Dunno why, the board's just picky.
7) An ultimate system should have Sony's 24" widescreen FD Trinitron. Wish I had $2K to spare to buy one. 1080i HDTV would look great on it.
8) Get a tube of Arctic Silver II thermal compound for the CPU heatsinks. Yes, it matters.
For a cheaper config: substitute a Tyan Tiger MP motherboard, PCP&C 400W Silencer (no need for an oddball power connector), IDE drives, and an Ethernet card (Intel or Linksys, I have one of each in my Linux server). Note that faster Athlon MPs are supposed to be announced next week (Tuesday?).
For a way cheaper config: as above, but with a VIA KT266A uniprocessor motherboard (I have a Shuttle mb inbound; newegg.com was out of the Epox 8KHA+ boards that were my first choice) and Athlon XP CPU.
I'm a PCP&C fan too. Antec's no slouch either, but my Silencer 400W keeps the 5V and 3.3V rails hooked up to my 1.4GHz Athlon Thunderbird within 1% of perfect, which is pretty impressive. Dead-on commentary on the P4. It pained me to spec a P4 for a new engineer because Dell refuses to sell Athlons and stopped selling P3 desktops.
Re:Mostly right, but a few nitpicks: (Score:3, Informative)
No complaints on the quality, but the Silencer 400 is not as quiet as I'd hoped. Also, the airflow is suboptimal: one fan, with a grate on the front, rather than the bottom. I'm thinking Enermax or Antec, next time, with two temperature-controlled fans.
for sound, only spdif will do (Score:4, Informative)
for about $25 (really), you can get a CMedia 8738 chipset card that outputs real, unresampled 44.1k spdif, suitable for piping into an outboard DAC (digital-to-analog converter).
go to ebay and pick up a used audio alchemy DAC ($100 or so) and the 8738 card and you'll be 99% of what a pro audio card setup should be.
and never, never choose 'soundblaster' for audio quality.
DVD-ROM - ethically imperative? (Score:4, Funny)
A waste of money (Score:3, Informative)
MoBo: ABIT KT7A-RAID
PROC: 1200MHz Athlon
MEMORY: 1GB (high density, cheapo stuff)
STORAGE: 2 IBM 60GXP 20GB IDE drives in RAID 1 (mirrored) configuration.
GRAPHICS: ATI XPERT 2000 (32 MB)
CASE: Antec Premium line case w/ 300W PS.
ETC: Sony floppy drive and Creative CDROM drive.
NETWORK: 3Com 3C905 10/100 card.
I know this machine isn't as fast as the ULB, but it's a heck of a lot cheaper, and I would rather have 5 of the above machine ($700 * 5 = $3500, only $100 cheaper than the ULB w/o the "extras") than one ULB. I might even decide to make a Beowulf cluster out of them.
As I've heard other Slashdotters mention many times before, it's not the performance of your hardware, but the performance of your hardware per dollar that matters.
P.S. I would like to know what Tom (from Tom's Hardware Guide) would consider the Ultimate Linux box.
X-Box (Score:2)
Haven't these guys heard of pricewatch? (Score:3, Informative)
frightening. It is simply ridiculous to pay that much for a desktop.
Besides, most things in the world (computers fall under the umbrella) are priced on a logarithmic scale, meaning that past a certain point a drastically increased price gives mediocre returns, and vice versa below a certain earlier point. I always like to build a machine that has its cost efficiency at a maximum, sitting at a very healthy point on the curve. Buying a GeForce3 card, for instance, is ludicrous. A GeForce2 MX 400 (Abit Siluro, for instance) with 64 megs of RAM is 69 bucks. Excuse me? That is cheap as dirt.
It's always very satisfying, also, to get a slightly cheaper machine like this and have it perform within 10% of a machine 5x as expensive.
-mateusz-
I'm going to go practice my violin more now
Re:Haven't these guys heard of pricewatch? (Score:2)
Pizza and beer on me guys, Accounting bought it!
forget the 21" monitor (Score:3, Informative)
Having 2 screens, if you've never worked that way, is wonderful. One screen for preview and one for tools has saved much wear and tear on my fingers switching consoles, windows, and desktops. Plus, two good 19" screens are about the same price as a 22": $1,000. Lots of money, yes, but the screen is one part that you can't incrementally upgrade. Plus you can always buy one now and save up for the next one.
Importance of CPU (Score:2, Informative)
While the statement has some truth, it uses a bad rationale.
How long do you usually wait for a modern PC during a "typical job"? 500ms? At most 2s, right?
But |top| typically gives you one value every 5s or so, and only averages. If |top| showed you the peak CPU usage during the last interval, you would see that during the times you wait for the PC, the CPU almost always hits 100% load at some point.
Which means that part of the time, you indeed wait for the CPU during typical usage. (Often that's only milliseconds, but with Mozilla it can be 1s.)
If you are interested, I suggest you use a CPU-load graph tool in your GNOME/KDE/WindowMaker panel, set the interval really low (like 10ms) and make the "contrast" high (black background, bright foreground). This will show you almost every CPU peak and thus show you when you are really waiting for the CPU (even if it's just milliseconds).
If you say milliseconds don't matter, then you don't need a top-notch PC for "typical job[...]s under Linux".
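The high-frequency sampling idea above boils down to differencing two CPU counter samples. A minimal sketch (the field layout assumed here is the Linux /proc/stat "cpu" line, where idle time is the fourth field; a real sampler would read /proc/stat in a loop with a ~10ms sleep):

```python
# Compute the CPU busy fraction between two successive counter samples,
# e.g. two "cpu" lines from /proc/stat: [user, nice, system, idle, ...].
def busy_fraction(prev, curr):
    """Fraction of elapsed ticks the CPU was non-idle between samples."""
    d_total = sum(curr) - sum(prev)
    d_idle = curr[3] - prev[3]
    return 1.0 - d_idle / d_total if d_total else 0.0

# Synthetic samples: 100 ticks elapsed, 25 of them idle -> 75% busy.
prev = [100, 0, 50, 400]
curr = [160, 0, 65, 425]
print(busy_fraction(prev, curr))  # 0.75
```

Sampled every 10ms instead of every 5s, this is exactly what exposes the 100% peaks that |top|'s averages smear away.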
RAID is absolute must for performance system (Score:2, Interesting)
Never go for a non-redundant disk subsystem. Disks crash. Go bad. Die.
Also, I need some space to live with. About 200 gigs minimum for my next setup.
So, for the ULB it would be a dual-channel U160 RAID controller (64-bit PCI, please), with 14 36GB 10k rpm hot-swap disks configured as a 6-disk RAID5 with a hotspare on each channel, mirrored over the channels, yielding 180GB. Takes one SCSI tower case. Performance and redundancy. And even for the ULB, going for 14 72GB disks would be pretty expensive.
Also, to get the best out of CD-ROM/R/RW, get a Plextor UltraPlex 40 and a PlexWriter. Absolutely forget all other CD-ROM/R/RW manufacturers.
Then for DVD, get the Pioneer DVD-R(G)/DVD-RW drive. Note that you still need the Plextors to get the best CD-ROM/R/RW drives available.
Hook all four into an external case, and put the DDS-4 drive there, too.
Now, put the computer case and the drive case away from your table, put the removable-media case on your table, and be done with it.
I can't afford that setup. So, I'm going for IDE drives and 3ware Escalade IDE RAID controller. Cheaper, and gives me about 240 gigs with seven 80GB drives.
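The capacity figure quoted for the SCSI setup above checks out under standard RAID5 parity overhead (one disk's worth of parity per array); a quick sketch:

```python
# Usable capacity of a RAID5 array: one disk's worth is lost to parity.
def raid5_usable(n_disks, disk_gb):
    """Usable GB of an n-disk RAID5 array of equal-sized disks."""
    return (n_disks - 1) * disk_gb

# 6-disk RAID5 of 36GB disks per channel; mirroring the two channels'
# arrays keeps usable space at one array's worth.
print(raid5_usable(6, 36))  # 180 GB, matching the figure above
```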
It may be "Ultimate", but... (Score:2, Insightful)
I understand the point of having the *ultimate* (rather than just "good") machine, and I realise that kernel compile speed isn't the most wonderful of metrics, but it does drive home the point that the more you pay, the less of a performance advantage you get. There's a price / performance sweet spot, and it's certainly not at the ultra high end.
The only thing I'd add would be a DVD drive - perhaps another $AUS170 for a cheapie Pioneer IDE model.
No drool (Score:2, Funny)
ULB2002: water cooling? (Score:2, Interesting)
But! You can now buy off-the-shelf parts (here, for example [dangerden.com]) that all work together and can just be bolted together. You can build sealed systems, removing the risk of spills if you move the machine and meaning you don't have to top the system up to compensate for evaporation. You can get dinky little 120mm radiators that fit inside the case, so the entire system can be self-contained. And if the system is well built, the risk of a joint bursting and soaking your motherboard is a lot less than the risk of your HSF falling off and frying your Athlon [tomshardware.com].
Balanced against the remaining risks, you get cooling performance superior to a fan-based system, and a hell of a lot quieter. And the disadvantages of watercooling will only shrink as the parts become more and more commoditized.
Useless article, sorry (Score:4, Insightful)
You could build the ULB yourself from scratch. But unless you're either a very experienced hardware hacker or seriously interested enough in having a learning experience to accept possibly trashing some expensive parts, maybe you shouldn't. I wouldn't.
Way to encourage the hacker ethic! Yeah! Let's all run out and pay someone to do stuff for us, because everyone knows work is hard. With hardware prices as low as they are, it's a perfect time for people to "hack" their own hardware and build a powerful machine on a budget even a college student can afford. That would make an interesting article, but this one is simply, to use a phrase ESR seems to enjoy, an exercise in mental masturbation.
This reminds me... (Score:3, Funny)
* mosr.net is a Mac OS Rumors parody.
At least he's self-aware (Score:3, Informative)
"Eric S. Raymond is a wandering anthropologist and troublemaking philosopher who happened to be in the right place at the right time and has been wondering whether he should regret it ever since."
Those of us who remember when he stole the Jargon File from the community and sold it as his own think, "Why yes. Yes he should."
--Blair
Re:Cheap linux box. (Score:3, Informative)
Re:Cheap linux box. (Score:2)
Re:Cheap linux box. (Score:2)
The double-banked SDRAM has a lower latency and otherwise the same speed as DDR. Plus, the S2688 would take twice as much RAM - 6GB instead of 3GB.
Plus, the SDRAM is a lot cheaper...
Re:Cheap linux box. (Score:2)
Maybe 4 months ago, but if you check out crucial.com now they both run about the same price.
Re:Cheap linux box. (Score:2)
256 MB, DDR PC2100 CL=2.5 Registered ECC 2.5V 32Meg x 72) - $40.49
Crucial.com memory compatible with the Serverworks II HE chipset:
128 MB, SDRAM, PC133 CL=2 Registered ECC 7.5ns 3.3V 16Meg x 72 $26.99
256MB SDRAM, PC133 CL=2 Registered ECC 7.5ns 3.3V 32Meg x 72 $40.49
512MB SDRAM, PC133 CL=3 Registered ECC 7.5ns 3.3V 64Meg x 72 $74.69
1024MB SDRAM, PC133 CL=3 Registered ECC 7.5ns 3.3V 128Meg x 72 $200.69
So, if you go with all four sockets you have a maximum of 1Gig with the K7.
Okay, Crucial.com is promoting DDR by matching SDRAM's prices--but they don't produce anything bigger than 256MB DDR modules.
Other manufacturers supply compatible 1GB DIMMs, but as Tyan's site points out:
Some modules on the list above contain stacked DRAM parts (36 chips per module). These parts have thermal limitations in some chassis configurations. It is advised to verify that your chassis configuration has adequate airflow to support stacked (36 chip) parts.
And the drive requirements of all those chips means Tyan has to recommend only using 3 of the DIMM sockets with them.
Kingston Registered PC133 RAM - best pricewatch.com price - is $142.
The 1G Registered DDR PC2100 - best price on Pricewatch.com - is $488. Ouch!
Re:Cheap linux box. (Score:2)
Okay, you can use all 4 (not recommended?) slots on the Thunder K7 board and populate with 2 Gig.
Or for about the same price get 4 gig of RAM on the HEsl. Slightly faster RAM used in banks. And still have slots free.
Re:Cheap linux box. (Score:2)
The ServerWorks HE chipset is damn nice though... But for a desktop it does have a few bugs. If you're considering it for a desktop system it would be a good idea to do a little research on it (groups.google.com, etc).
Re:Cheap linux box. (Score:2)
I should also add that I agree with you about the Tualatins and the P4s--but the thing I find really irritating is that the P4 RDRAM chipsets allow using two banks of RDRAM at a time, while the new 845 chipset allows only one SDRAM bank. Doubly castrated... it's pretty obvious they're *still* trying to push Rambus--whether you like it or not, they aren't going to give another solution that really works.
It would be interesting to see what banked DDR RAM would be capable of with the Athlon too...
Re:network card? (Score:2)
Re:network card? (Score:3, Funny)
PIN number = Personal Identification Number number
ATM machine = Automatic Teller Machine machine.
Re:SCSI too expensive (Score:3, Insightful)
Re:SCSI too expensive (Score:2)
Yes, he will.. (Score:2)
It provides a separate channel directly from the card for each drive, and handles the IDE work itself instead of offloading it to the CPU.
The article you talk about compares standard SCSI setups with standard IDE setups. This is completely, totally different.
Re:JOE SATRIANI! (Score:2)
Re:Whaddya mean 486/33-sadly-out-of-date? (Score:2)
Re:my idea of a perfect linux box? (Score:2, Insightful)
Linux can definitely be used as a multimedia OS. The only things it still can't do as well are video editing (don't start with Broadcast 2000) and professional audio editing.
Linux is definitely a multimedia OS, unlike BeOS, which I have to say, is dead.