Hardware

Run LinuxPPC In A Spare Drive Bay

Knobby was one of several people to point out a really neat piece of hardware. He writes: "Total Impact just announced (a few days ago) their 'briQ'. It's a PPC G3 or G4 machine measuring 5.74 x 1.625 x 8.9 inches with a single 64-bit 66MHz PCI slot, integrated 10/100Mbit networking, a 40GB HDD, and it ships with LinuxPPC. The press release on the page doesn't mention it, but the announcement I received mentioned a starting price of ~$2500. Note: these are the same folks making the quad G3 and G4 processor PCI cards mentioned in an earlier article." I've long wanted a computer in which the processor / motherboard / memory were as easily removed and replaced as a hard drive; this sounds quite close to that ideal.
This discussion has been archived. No new comments can be posted.

  • by SlashGeek ( 192010 ) <petebibbyjr@@@gmail...com> on Wednesday January 24, 2001 @08:05AM (#484281)
    It looks like the SparcPlug was made back in '96 by a company named Ross, now defunct. Here [byte.com] is an article from Byte [byte.com] with a review of the product shortly after its release. It appears that a company named DataMaster International [dm-int.com] still sells them here [dm-int.com]. No price is listed, but they do take quotes.


    "Everything that can be invented has been invented."

  • I've long wanted a computer in which the processor / motherboard / memory were as easily removed and replaced as a hard drive; this sounds quite close to that ideal.

    You're ready to upgrade to an S-100 bus machine, then.


  • I am glad you brought this topic up. I fully agree with the fellow who mentioned price and form factor.


    Ross Technologies, an Austin-based hardware manufacturer who went belly-up a couple years ago, found out the hard way that there just wasn't a compelling reason to have a computer-in-a-computer. They're more expensive, less expandable, less powerful, and add to workplace noise. Since you're already limited to connecting to it via ethernet from the host computer, why not get a cheaper rackmounted solution like that sub-$1k Sun box mentioned last week? If Ross Technologies couldn't get this product to fly with SunOS, there's no way this thing will sell. LinuxPPC has a fraction of the apps supported that SunOS does. And don't get confused by the word 'linux' prepended to the name. Just because it's linux doesn't mean everything works. Try running icecast and watch your CPU utilization hit 95% after a couple hours...



    Seth
  • What, you never read my posts?

    More on topic - I find the idea cool, but realistically, I can get commodity 1U systems for less if all I want is slim computing power, and if I really need something tiny, I know where to get PC104.

  • I received mentioned a starting price of ~$2500.
    Should say:
    I received mentioned a startling price of ~$2500.

    Then their quad powerpc board is "$4500, quad g4/400's are ~$6500."

  • Whoa - this thing must be fast. If I read their specs correctly they are using the IBM CPC710 100+ chip. This means that they have dual independent PCI busses, one 33MHz and one 66MHz. If they have hung all the slow stuff off the 33MHz bus and left the 66MHz on the connector, then they've got quite a bit of connectivity. Anyway - I am happy that it is not a Mac. Makes tech support a lot easier.
  • The only reason I submitted this article was to counter the "PPC is nice tech, but where can I get it other than Apple" comments. The point here is not the size or shape of this board. The point is that someone other than Apple is shipping a PPC machine. I realize there are a lot of good, small x86 boards out there.

    In all honesty, I don't think a $2500 box with a single G4 processor, a 10/100 adapter, and a 40GB HDD is worth the cash. Especially not when I can get a dual G4 box from Apple for considerably less. Hell, even the Cube is cheaper, and it ships with a DVD

  • I know what I'd use one for. I'd snake an ethernet cable out the back and use it as a Linux box within my Windows box, without needing to deal with the space that an extra machine would take up.

    Then I'd run an X server on my Windows box so that I didn't need two monitors.

    I'd like it.. ;)
  • It's far more prudent for light duty uses like that to buy cheap PC104 hardware and run a regular x86 freenix on it.
  • I have a passive backplane 386 processor card somewhere in all my old junk. It has integrated I/O and drive controllers. I used to use it plugged into a two-slot backplane. I believe I ran Slackware (probably kernel 1.2.13...) on it for a time. It was a while ago...
  • The other big question that I have is:

    Will it run Darwin [darwin.org]? If it can, then would it be possible to run Aqua on top of that?

    Let's face it, at its core, MacOS X is another Unix distro (albeit not Linux)

    That would bring a whole new meaning to the term Mac-in-the-box.
    --

  • Get off their backs. Those guys do one hell of a good job! They support most of the PPC Linux development projects! They donate hardware to people like BenH. They host the official PPC Linux Reference Project for all PPC Linux distros to base their distro off of (a common set of underlying components like glibc, etc...). They do a great job. There are others out there that make PPC Linux variants. YellowDog Linux [yellowdoglinux.com] is one of them. They make good stuff too. They just hired Apple's former Linux Technology Manager, Kevyn Shortell, to help them get ready for the 2.0 release of YDL. Back to LPPC, you can't knock them for what they do. Sure they may not have the fanciest name in the world. They were the first true Linux for PowerPC machines (we really can't call Mk a true Linux environment as much as I enjoy it on my old Macs). Since they were the first, I can fully understand why they would want to incorporate "PPC" into their distro name. LinuxPPC just makes sense. Linux for PPC. Until other competitors came along that also made PPC distros, it just made sense. You can't really knock them for their name or what they do.

    --

  • For those who care, we have a brief wrap up of most Linux distros available for PPC at GNUpples [nofuncharlie.com]. Enjoy.
  • "This is highly unlikely to result in all sorts of people going out and buying these sorts of machines; it's just not economical unless there's a compelling need that justifies paying a couple grand for a pretty small server."

    The point is that they are available. The $2500 is, in all likelihood, the introductory price based on "how many people want these things?!"... Once a few are sold, I'm sure the price will drop considerably.

    In time little grass-hoppah.... in time....

    Another important thing to note is these machines probably have better heat dissipation than larger machines [I'm basing this on the idea that 1) heat dissipation would have to be improved to even offer them, and 2) if they are smaller, they are easier to cool, because moving air can be directed at and away from them with less energy (ie: the diff. between a cpu fan and a case fan)]. In the least, they would be useful for applications where heat is a problem. I'm sure big biz will be buying into them so us little guys can reap the price drop in a year or two.

    ... Still interesting to know we should be waiting for it.

  • The points you miss are that a) there were, once upon a time, as many different measuring systems as there were civilizations (could you imagine scientific exchange in the Roman Empire?) and b) the "human factors" involved come down to familiarity.

    Human factors? How 'bout these factors:

    -The English volume system was originally base 2, and vestiges of it still exist: gal = 128 oz, qt = 64 oz, pt = 16 oz, c = 8 oz. Most of the other measurements in the series (drams, etc.) have been forgotten. (While you're at it, consider imperial volumes and dry volumes, both of which break the system as well...)
    -Distances: inches, feet, yards, furlongs, fathoms, rods, miles, nautical miles (?!), etc, etc, etc. Or you can do everything in meters and kilometers and no one will get too confused.
    -Weights: Three words: troy and avoirdupois. Why?
    -Temperature: This is a particular embarrassment -- I have heard (or at least Cecil Adams claims) that Fahrenheit calibrated the bottom end of his scale for the convenience of a weather-tracking friend (I want to say Ole Roemer) so that his logbooks would never have to deal with negative numbers (at least as long as he stayed in Denmark).

    You tell me. The benefit of the metric system is that it makes consistent understanding of measurements possible. A kilo is a kilo, no matter what you're weighing. The only reason people in the US have not converted is because the government tried to split the difference back in the seventies and only wound up confusing people. But it's a lot easier than what we have.

    /Brian
  • well right now HP puts 8 half-height drives in a 2U rack configuration, so if you used a 42U rack you could conceivably run 168 computers in 1 rack. That's a lot of computer in a 2-metre rack, plus it could double as a whole-house heating solution :-)
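    A quick back-of-the-envelope check of the arithmetic above (the shelf and rack figures are as stated in the comment):

```python
# Sanity-checking the rack-density arithmetic: 8 half-height units per
# 2U shelf, in a standard 42U rack.
units_per_shelf = 8
shelf_height_u = 2
rack_height_u = 42

shelves = rack_height_u // shelf_height_u    # 21 shelves fit
total_units = shelves * units_per_shelf      # 168 machines per rack

print(shelves, total_units)  # → 21 168
```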
  • but you missed the one thing it has that you can't get from any standard desktop market machine: 64-bit 66MHz PCI.

    Before Apple put AGP on their towers, they included one 64bit, 66MHz PCI slot and stuck a video card in it. With the introduction of 2x and 4x AGP on Macs, it's gone away.

    -jon

  • And the idea of a cpu being on a slot really isn't a bad idea (I think pci would be too slow personally). But why oh why would you need a completely different computer in a drive bay (that's assuming you have any available)?

    This is a variant on a design concept dating from the "Big Board".

    For those not familiar with it: The Big Board was a CP/M-era machine. In those days your basic PC was a desktop box the size of a mini-tower, with a front panel full of blinky lights and switches, a pair of external 8" floppy drives, an 8080 or Z80 processor, up to 64K of RAM, and an alphanumeric dumb terminal or teletype for a console. Brand names like "Altair" or "Imsai", and maybe you assembled it yourself.

    As complex-function chips improved a company had a great idea for a cheap process controller: They built a computer-on-a-board. It had a Z80, 64K (the max) of RAM, RAM-window alphanumeric video generator, two parallel ports (one for the keyboard, one for machine control), a serial port, a boot/monitor ROM, a floppy controller, and all supporting circuitry. But that's not all:

    The board was exactly the same form factor as the electronics card on the floppy disk, right down to the hole placement and power connector. You just bolted it on top of the drive's board (with longer screws and standoff bushings), powered it with a two-drive power supply, stuffed in a floppy, and you had a machine controller. Plug in a monitor, a keyboard, and/or a network connection if appropriate for your application. Program it with the inexpensive CP/M development tools.

    Of course what ACTUALLY happened is that the hobbyists got hold of it and used it as a small, cheap, powerful CP/M machine for home-computer use. (A little later Xerox licensed the design and built it into a monitor cabinet, to make a CP/M machine the form factor of a monitor as their entry into the PC business.)

    But the basic idea remained valid. As drives shrank (physically) and processors advanced to X80s you continued to see strap-onto-the-drive single-board computers ("SBC"s) for industrial process automation.

    This looks like a variant on the idea: Put it in the slot next to the actual drive on a multi-drive bay (or put two drive mounts into your industrial machine), add power and some interface cables, and you're in business. No one-off engineering to automate your industrial machine, so your engineers only have to design the machine itself. The programming environment is the same as the desktops, so you can use off-the-shelf development tools.

    You don't have to reinvent the whole wheel. Just tweak the trim for the new model year. B-)
  • Interesting... how feasible would this be? I have numerical programs that run for weeks up to over a month; could I stick a couple of these "boxes-in-a-box" in and just let them hum along?
  • The SPARCPlug didn't do in Ross, the UltraSPARC did.

    Ross's meat and potatoes was their infamously fast CPU modules for SPARC 10 and 20 systems. The RT625, in a single CPU model, is easily twice as fast as Sun's highest end MBUS module for the 20. Plus, they made single-wide modules with two CPUs each, allowing you to jam up to four CPUs into a pizza box. Their HyperStations were actually decent machines, equivalent to their Sun counterparts in just about every respect. The only problems anyone ever had were that you needed an OS patch to run certain CPUs in certain configurations. Otherwise, they were solid.

    Of course, they knew they made products that were as good or better than Sun's. They had to pull some engineering tricks to do this (6+ device MCMs, massive caches, etc). As a result, Ross charged significantly more than Sun did for cloned hardware. Ross rode on this success until the UltraSPARC came out. Out of the blue, Sun revamped their architecture almost completely. And they did so at a great price: An Ultra 1/170 was under $5k in 1997, modestly decked out. In the same year, I was quoted $9000 for a SparcPlug with a 200MHz RT620, similarly loaded. The choice here is quite obvious.

    The SPARCPlug didn't help. The things were notoriously unreliable. The three whiny fans inside the 5 1/4" full-height enclosure didn't properly cool them, leading to an eventual heat death. They had a shoddy sheet-metal frame, which often had mis-tapped threads and would bend the CPU board if you looked at it wrong. To top it off, they had only one SBUS slot and four RAM sockets. All this to get a Sun in the same box as your PC? You might as well put an Ultra 5 in a closet somewhere, since it was intended to be accessed through XDMCP.

    They did have that nifty little blue LED tho...
  • ...who has a spare drive bay to put this thing in? ;p

    ----

  • ...need to get with it and get that PPC support out the door.. :-)
  • Now I have to admit there are good reasons for making small computers. And the idea of a cpu being on a slot really isn't a bad idea (I think pci would be too slow personally). But why oh why would you need a completely different computer in a drive bay (that's assuming you have any available)? The only practical reason I could possibly see for this is having a lot of servers in one box. Otherwise I can make room for another full computer :)
  • Operating System: LinuxPPC - other distributions supported

    Anyone know which other distros are supported?

    ----------------------------
  • Could be a good mp3/vorbis player, small, much power and no fans. Would be good as nat router,too. ;)
  • And I thought the Apple cube was expensive....
  • Now Apple needs to make a version of the Titanium Powerbook [apple.com] with a removable "linuxPPC" drive :).
  • according to the press release:

    "The briQ also allows the flexibility to run any PowerPC based Linux distribution available."

    i assume that means yellowdog, etc :>
  • No matter what API you're using (SMP/threads or Beowulf/PVM) these are most likely best used for SIMD (single-instruction, multiple-data) kinds of problems (of which SETI is one). Communication between boards will be a major performance bottleneck, since they all share the same bus. Since they do have local RAM (and not just cache), you load the card's RAM with one set of code and four sets of data. Do that for all the cards you have. Now wait, and get your answers back off the local RAM. Did you use threads or processes? Threads and it's closer to SMP, processes and it's closer to PVM or Beowulf.

    But will it outperform a comparable Beowulf cluster? If it is compute-constrained, then the PCI cards will do better, especially as the problem scales, because the PCI cards share hardware costs for disks, network cards, fast bus, large RAM, etc. If it is disk or network limited, though, the Beowulf will eventually win out. The PCI cards will do well on a price/performance basis while the problem is small, because they will still be sharing hardware. But once the PCI bus fills up, those processors will start waiting on the bus. The bigger the problem gets, the more the processors wait. The Beowulf cluster, on the other hand, can distribute all that hardware - instead of one 100Mbps network card, it may have dozens (you start worrying more about what your ethernet switch's backplane looks like).

    So these cards are best for compute-intensive simulation-style stuff (image filters would also scream - mostly - FFTs require lots of communication). Simulated wind tunnels or weather phenomena, finite-element analysis, etc. Note though, that these cards have their own slower PCI bus, including support for an add-on card (!), so conceivably you could get a lot of server oomph by giving every four processors their own network card. But you better make sure your data (i.e., your web site) can fit in the local RAM, or you'll bog down in bus contention again.
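    The scatter/compute/gather pattern described above can be sketched in a few lines (a toy illustration using Python's multiprocessing, not anything specific to these boards; the compute kernel is a made-up stand-in):

```python
# Toy sketch of the pattern above: give each "card" its own chunk of data,
# let it compute independently out of local memory, then collect the answers.
# No inter-worker communication -- the problem shape these boards favor.
from multiprocessing import Pool

def kernel(chunk):
    # Stand-in for a compute-bound task (image filter, finite-element step...)
    return sum(x * x for x in chunk)

if __name__ == "__main__":
    data = list(range(16))
    chunks = [data[i::4] for i in range(4)]   # one chunk per "card"
    with Pool(4) as pool:
        partials = pool.map(kernel, chunks)   # scatter, compute, gather
    print(sum(partials))                      # same answer as the serial sum: 1240
```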
  • It would be great for a Beowulf cluster if only it were cheaper.

    It's fast, it takes up minimal space. You could fit hundreds on a few racks if you could just figure out how to cool it.

  • This doesn't sound like an easily replaced MB to me, it sounds like one of those integrated units that you can't twiddle with.

    But the idea of an easily replaced MB is kinda silly anyhow, it is the hub into which everything else plugs, having it removable would require making something else the hub. Then it would be hard to replace... and you'd get slowdown due to increased wire length.
  • Considering the lovely condition of state power problems here in California, this could be a great solution if implemented.
    Since these machines are small, and are more or less full-featured computers, this could make building a server farm a lot cheaper and less power hungry, since one could have each of these set up as its own server, feeding off of a single rackmount case's power supply, instead of each server having its own oversized power supply, as seen in SO many installations.
    And at this size and power usage, it would also cut down on AC costs dramatically, as you can now fit several dozen computers in a space where you could possibly fit only maybe 10, therefore reducing the necessary cooling costs.

    --warning--beowulf comment follows---
    Now, where can i get a beowulf cluster of these? ;-)
  • I don't know about putting it in a spare drive bay, but I do think it would be a nice replacement for a car stereo. It already has a display built into the front of it; just add some more controls and a sound card and you're ready to go.
    interesting...
  • Take one of those old CD-ROM towers and make a cluster. Seriously, tho, get one of those 7-bay towers, load it up, sticky-tape an 8-port switch in there, hook the uplink to a jack on the back and you've got 7 fairly powerful machines in the space of a mid-size tower case. Takes up less space than 7 1U rack systems.
  • by BJH ( 11355 )
    I notice that the home page says that the 64-bit 66MHz PCI connector is "custom". Now, I don't know about you, but every time I see the word "custom" in relation to a connector, it always seems to mean something along the lines of "Proprietary form factor which will only take the expansion options produced by us, which will cost you an arm and a leg and disappear off the market approximately three seconds after we stop production of the main board. Haw haw, sucker!"

    I'd have preferred it if they could have just made it into a double drive bay item that allows you to use full-size PCI cards in the extra space.

  • I mean...OrangeMicro made an Intel-box-on-a-PCI for quite a while (OK, minus the 10/100 and the 40GB HD). Apple even distributed a machine that had that sort of card in it. They just jammed the HD and ethernet onto it too...and did it with G3/4 chips... The graphic here [totalimpact.com] says 'Patent Pending'...what would they have to offer so 'insanely great' that they would patent this? Stuffing LinuxPPC into a lunch box? I mean...I'd love to have one...I just don't see patentability...does anyone else? (not intended as a troll)

    Galego

  • One neat use I'd like to see would be in labs for O/S classes. Recompile your kernel, download it to your other box and run it. If it crashes, no problem -- it won't interfere with your development/monitoring box. And it might make it possible to do a "remote" crash analysis for debugging.


  • I had no idea that those things cost $9k!! A friend of mine gave me one for free before the company went out of business. Bought a 21" monitor off him for a couple hundred as well.


    Thanks for the info!



    Seth
  • If x PPC CPUs generate y amount of heat, then y is directly proportional to x; as such, having more PPC CPUs would generate more heat. Fitting these processors into a smaller space (less potential surface area and less air volume) means you're going to end up with heat flow problems. 10 G4 Macs are going to radiate generated heat better than 10 of these little boxes stacked on top of one another, due to the fact that the G4 towers have the physical capacity to circulate more air.
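    A rough geometric sketch of that stacking argument (the briQ dimensions come from the story; the ten-unit face-to-face stack is an assumption for illustration):

```python
# How much radiating surface disappears when small boxes are stacked.
# Dimensions are the briQ's from the story (5.74 x 1.625 x 8.9 inches);
# "stacked" assumes ten units piled face-to-face with no air gap.
def surface_area(w, h, d):
    return 2 * (w * h + w * d + h * d)

W, H, D = 5.74, 1.625, 8.9
n = 10

separate = n * surface_area(W, H, D)   # ten isolated boxes: ~1497.5 sq in
stacked = surface_area(W, n * H, D)    # one tall column:    ~578.0 sq in

print(round(separate / stacked, 1))    # → 2.6
```

    So stacking ten units loses roughly two-thirds of the radiating surface, which is the geometric side of the "towers circulate more air" point.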
  • Stuffing LinuxPPC into a lunch box?

    I can just see it now... Get an old metal lunchbox, stick this in the main compartment with a small LCD screen on top, and shoehorn a keyboard w/ touchpad into the underside of the lid. Add some ports on the bottom of the lunchbox and a power supply (somewhere...) Voila! You've got a LinuxPPC lunchbox.

    Gives a whole new meaning to swapping lunches...

    Brett

  • Yay, finally some news on more people making PPC boxes! I can now rest. Lots of people have been pointing out that a couple years ago you could shell out 9k bucks for a non-UltraSPARC box that could fit inside a PC, or that Cubes and G4 towers are cheaper. Who the fuck cares. The cool thing here is that more people than just Apple are selling PPC boxes. All the stories in this category that get posted end up bashed because everyone points out Apple, and then when Apple actually does something they get bashed. Oh well.
  • I would have agreed with you completely until I saw the G4-based Titanium PowerBook. Obviously there are ways to manage the heat other than with big fans.
  • Or, if you've got any PCI-based Sun workstation (Ultra-5 and up) and you want/need a PC as well, you can get one of these puppies: http://www.sun.com/desktop/products/sunpci/ [sun.com] Populated with a Celeron-600 and (iirc) 128MB ram, they run $495
  • Maybe you should read up on your PPC Linux history. LinuxPPC was out long before the developer reference releases. Yes, that's right, the misconception about misconceptions that you're flaunting is incorrect. The early history of PPC Linuxes is pretty easy to follow. 1) MkLinux, made by Apple and since turned over to the community. Runs on older NuBus machines thanks to the Mach kernel. Also runs on newer models. Doesn't run the latest kernels. Requires Mach to function at all. Therefore can not be called Linux as we classify it. 2) LinuxPPC. Came out when Mk was still at DR 2.1 (or maybe a wee bit earlier). Ran _real_ kernels. This was back in the Fall of '97. Check your history before you try to sway the tide of misconceptions with more misconceptions.

    --

  • But then again, you also have to take into account that those 10 G4 Macs also have 10 300W power supplies, each generating its own amount of heat, whereas one or two 300W supplies could take care of those 10 G4 drive bays. And you also have several other components making heat, and wasting more electricity...
    And getting back to the point of my original post, you also have to remember: even if it is more heat concentrated in that one cabinet with a couple dozen of these drive bay computers, it's still gonna be cheaper to cool that one cabinet than it is to cool off an entire large room of huge computers.
  • That's incorrect. This Knowledge Base article [apple.com], for which you'd require a free AppleCare login (why, I don't know), is entitled Power Macintosh G3 (Blue and White): PCI Expansion Slot Specifications and clearly states that they include:
    • 1 - 66 MHz PCI Bus with one PCI Slot which accepts a 66 MHz 32-bit PCI card.
    • 1 - 33 MHz PCI Bus with three PCI Slots which accept 33 MHz 32-bit or 64-bit cards.
    That's not the same as one 66MHz PCI bus that accepts 64-bit cards.

    -Daniel

    PS it did sound plausible and interesting though

  • But why oh why would you need a completely different computer in a drive bay

    Damn it, man! did it never occur to you that they might just need companionship. Have you not an ounce of compassion in your body?
    8oP

  • Who needs a Mac in a drive bay when you can stuff a SPARC in there! I remember seeing a "SPARCPlug" by Ross Technologies a few years ago. Wonder what happened to them....
  • by Phaid ( 938 ) on Wednesday January 24, 2001 @07:28AM (#484331) Homepage
    I've long wanted a computer in which the processor / motherboard / memory were as easily removed and replaced as a hard drive; this sounds quite close to that ideal.

    Also sounds like quite an expensive solution to an already-solved problem. There are a number of manufacturers of passive-backplane systems that provide just that level of convenience. Basically, the passive backplane consists of a long board with something like 6 PCI and 6 ISA slots. This backplane installs in the case in the same position as a traditional motherboard. The CPU/RAM/Chipset "motherboard" is actually just a big PCI card that does bus mastering, and all your other peripherals sit in the slots. You can even get split backplanes, where more than one "motherboard" can coexist in the same case.

    Nice thing about this design is that if any card fails, including the "motherboard", you yank it out and replace it - the backplane itself is so simple it basically never fails. And ventilation is usually better, since all your hot components are in the middle of the case rather than on the bottom or side -- a lot of these cases have a row of big 120mm fans across the entire front, so everything is well ventilated.

    Most of the ones you'll see out there are fairly large (a little bigger than an old-style AT case), but I've even seen and used passive-backplane minitowers. The nice thing about these is that the form factor allows for a lot more room for slots in the case and therefore more peripherals.
  • At $500, these would likely fly out like hotcakes. (And probably sell at a loss, unfortunately.)

    At $1000, they would be a pretty good value.

    At $2500, Californians care about the power consumption this week, although if things stabilize in a month, they may not care so much.

    For the rest of us, such pricing is daunting unless there's a really compelling application that needs the exact form factor provided.

    This is highly unlikely to result in all sorts of people going out and buying these sorts of machines; it's just not economical unless there's a compelling need that justifies paying a couple grand for a pretty small server.

  • Would be to wipe the stinky Linux and install OpenBSD PowerPC [openbsd.org]
  • by sql*kitten ( 1359 ) on Wednesday January 24, 2001 @07:38AM (#484334)
    I've long wanted a computer in which the processor / motherboard / memory were as easily removed and replaced as a hard drive; this sounds quite close to that ideal.

    Like a SPARCplug [phoenix.net] ?

  • Yes, this is a PowerPC board, but for ~$500 you can get a 3.5" form factor x86 with dual ether and real video. I got one for dev at work from Advantech [advantech.com]. The entire thing runs off of a 5V supply. Get a laptop hard drive (44-pin IDE) and you have an entire 5V system that will take a standard power plug from your power supply. Emjembedded [emjembedded.com] also has similar stuff.
  • The problem isn't necessarily how directed the air is, it's the amount of surface area you have available to spread the heat dissipation across. You aren't going to be able to slap a big honking heatsink on this thing (but with the PPC you probably don't have to), so you aren't going to have as much surface area to work with.
  • Indeed - after all, who's selling podules now?!

    (a rhetorical question [atomwide.co.uk], of course!)

  • Suppose your workhorse unit is an Intel box, but you've also got to make sure your code also runs on a Sparc or PPC. Buy a couple of these guys and plug 'em in! We bought several similar Sparc "bricks" about five years ago for a similar purpose.

    Small, tidy, you don't need another monitor/keyboard.

    Plus imagine the Beowulf possibilities (smirk)
  • G4s with a nice big box, power supply and what not go for $1700 [apple.com]. Go figure.

    Bad Apple, Bad! Why don't you name your freaking gifs so people who don't surf with images can navigate your site? You gotta wonder how blind people navigate trash like that. Hate that site.

  • Does not reporting the death of the DC count as an error?
  • SUSE has support for PPC....
  • I agree, if you want to play with LinuxPPC (although Debian for the PPC is much better), go ahead and buy a Mac, maybe even a $700 old iMac.

    but you missed the one thing it has that you can't get from any standard desktop market machine: 64-bit 66MHz PCI. That stuff costs REAL money; hence the price is actually great. Even if you could retrofit a G4 tower with extra logic for that bus (and I don't think anyone sane can), it would probably end up costing more than $2500 in total.

    -Daniel

  • <OT>
    Damn, that's gotta be the lowest UID I've ever seen around here on /.
    damn.
    You're old school ;-)
    </OT>
    But getting back to the topic, the added benefit that this also provides, versus a completely passive backplane solution, is that while you can dedicate a rack case to a backplane with a few of the controller cards, with these, you can also put a little server in the spare drive bays of your other servers, and anywhere it might fit, since it isn't requiring its own case like a backplane solution.
  • I've long wanted a computer in which the processor / motherboard / memory were as easily removed and replaced as a hard drive; this sounds quite close to that ideal.

    Well no, this is an integrated unit, like the "nailed" router that's providing me DSL right now. This is an embedded platform. It can sit behind a security panel and provide the processor power to do voice recognition -- that sort of thing.

    On the iMac, it's just as easy to replace the motherboard as the hard drive, because the rate-determining step is opening the case. :)
  • actually there are 3 computers at my house... so loneliness isn't a big problem. And this weekend I might end up with another stray computer.

    What if my computer were to get an attitude, having to put up with a little computer sucking on its power and trying to tell it what to do...
  • I'd [while1.org] rather have [phoenix.net] a [dm-int.com] SparcPlug [computer-design.com]!
  • If each developer had their own server (albeit, in their own workstation), they could offload processes to it (have it compile while you're busy fragging), or just use it as a server to test things on. You could also give novice admins their own server to learn with.

    The main thing is, the marketroids have already figured out who the audience is, and have figured out they can make money. It's up to us to come up with new/novel uses. Like, an overly expensive MP3 player for your car.
  • I don't think they actually intended it to be put in the drive bay of another computer. It's just a convenient size if you're using it as an embedded system or part of a server cluster. You could probably build a makeshift rack for it by tearing apart a full tower.
