Hardware

PC/104 Linux Minicluster - miniHowTo

coldfire writes: "At LISA 2001 there was a neat presentation on a PC/104-based mini parallel computer. It seems that the how-to has now been posted, for the world to behold." From last year or not, this has some great pictures.
This discussion has been archived. No new comments can be posted.


Comments Filter:
  • heat (Score:4, Insightful)

    by doubtless ( 267357 ) on Monday May 06, 2002 @10:32PM (#3475066) Homepage
    It sure stacks up pretty nicely. I am just wondering if there are any heat dissipation issues when you have so much processing in so little space... luckily it's not running AMD.

    First post.. mandatory w00t
    • Re:heat (Score:3, Informative)

      by barzok ( 26681 )
      It's only running P2/266s (in this setup). Not too much to worry about.
    • Many PC/104 computers are made to different standards than PCs (industrial vs. general use), and it is common for them to use less power (run cooler) and generally be more tolerant of environmental/electrical factors.

      They're also commonly less powerful CPU-wise than the typical desktop PC.

  • by dsheeks ( 65644 ) on Monday May 06, 2002 @10:34PM (#3475077) Homepage
    If you pile up enough processors on this thing, you'll either reach the sky or have more computing power than God at some point. Looks fun though.
    • Eventually, however, you would run out of power. Say you have a 100W P/S (I know, incredibly low by modern desktop standards, but... we are talking about systems that are designed to have low power requirements).
      Each node requires a finite amount of power, and with a P/S that outputs a finite amount of power, you are limited to a finite number of nodes...
      Now all we have to do is build a toilet out of these things and hook it up to one of these [slashdot.org] (detailed here [discover.com]), and we really could have infinite processor power...
      • According to pc-104.org they take 1-2 watts each (at least some do). That gives you, at a conservative estimate, 50 per 100 watt power supply, and seeing as how I have seen 2000 watt power supplies... :)
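To put rough numbers on that power-budget argument, here is a minimal sketch; the wattage figures are illustrative assumptions rather than measured values for these particular boards.

```python
# Rough power-budget estimate for a PC/104 stack: how many modules can one
# supply feed?  The per-node wattages below are illustrative assumptions.

def max_nodes(psu_watts, watts_per_node, derating=0.8):
    """Nodes a supply can feed while keeping ~20% headroom."""
    return int((psu_watts * derating) // watts_per_node)

if __name__ == "__main__":
    for psu in (65, 100, 2000):        # small open-frame supply vs. big ATX-class units
        for node in (2, 10):           # 2 W low-power module vs. ~10 W Pentium-class board
            print(f"{psu:4d} W supply @ {node:2d} W/node -> {max_nodes(psu, node):3d} nodes")
```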
    • OK, so eventually you would have a lot of computing power. BUT, the thing cost $5k and has less computing power than a $0.5k eMachine. They are embedded-type cards, which typically have less juice than full-size PCs of the same MHz. Doing parallel processing also typically results in less power than a single chip of the same MHz (4x250 is less than 1000 MHz in resulting power), so these cards at 266 MHz really wouldn't have much power at all for the money. Still a neat type of system, and I like the KVM-type card in it. It looks like you could have 4 systems in the size of one PC.
    • Nah, it'd just get knocked down & you'd all be forced to speak different computer languages afterwards :]

      I pity the guy who gets stuck with brainf*** ...
  • Just in case! (Score:4, Informative)

    by Bios_Hakr ( 68586 ) <xptical@@@gmail...com> on Monday May 06, 2002 @10:35PM (#3475084)
    What Is PC/104?
    PC/104 (IEEE P996.1) was developed to fill the need for an embedded platform, which was compliant with standardized hardware and software of the PC architecture. Mechanically quite different from the PC form factor, PC/104 modules are 3.6 x 3.8 inches in size. A self-stacking bus is implemented with pin-and-socket connectors composed of 64- and 40-contact male/female headers, which replace the card edge connectors used in standard PC hardware. Virtually anything that is available for a standard PC is available in the PC/104 form factor. PC/104 components are designed to be stacked together to create a complete embedded solution. Normally there will be a single CPU board and several peripheral boards connected by the PC/104 (ISA) system bus. Often there will be a PCI bus provided by the CPU board that will accommodate PCI peripheral boards (this standard is called PC/104+). Overall the price point for a highly integrated PC/104 CPU module is lower than for a comparable IBM-compatible PC. However, due to the power dissipation constraints typically found in embedded applications, CPU horsepower is generally lower. For more, look at the PC/104 consortium site [pc104.org].
    • from PC104.org FAQ [pc104.org]

      Q. We are a company considering using the PC/104 standard in an embedded system. One big worry that we need to get answered, before even thinking of using this standard in our products, is: What is the future of PC/104 when Microsoft has announced not to support in the future the ISA bus (that is, PC/104)?

      A. Despite the "PC99" recommendations of Microsoft and Intel, which eliminate the need for the ISA bus, Intel (and others) have promised to keep current ISA chipsets alive for at least five to seven years. There are many PC/104-based "real world" interfaces from hundreds of manufacturers, and these are not going to become obsolete just because the desktop PC does not require or use ISA slots anymore.

      Functions such as analog I/O, digital I/O, motion control, and custom application interfaces can still take advantage of the low cost and design simplicity of the ISA bus. Contrary to Microsoft's and Intel's marketing focus, the 386 and 486 processors are still the most popular in PC/104-based embedded systems, with Pentium designs only recently becoming available on a wide scale.

      The PC/104 Consortium added PCI to PC/104, resulting in PC/104-Plus (= ISA bus PLUS PCI bus), in order to allow high speed processors such as the Intel Pentium to utilize higher speed I/O bandwidth to achieve their full potential in embedded systems. The PC/104-Plus standard, with its PCI in addition to ISA bus, provides a long-term future for PC/104. Manufacturers of PC/104 modules now have three choices, all within the industry standard PC/104 form factor: (1) ISA bus only; (2) PCI plus ISA buses; and (3) PCI bus only.

      Despite the popularity of PCI in desktop PCs, there will continue to be an advantage to having two separate buses in many embedded system applications: PCI bus, for high speed block data transfers (e.g. video, networking, disk storage); and ISA bus, for byte-oriented I/O (e.g. real-world data acquisition and control).

      Today, 80% to 90% of PC/104 form-factor modules are using ISA bus only. Within approximately five years, it is likely that there will be greater than 50% using the PCI bus. It will probably take ten years before the situation of today is reversed, with 80% to 90% of PC/104 form-factor modules using PCI bus only. Even so, ISA will still be supported on PC/104-Plus modules ten years from now.
      • You probably wouldn't want to run any kind of Windows on a PC/104 system anyway. Consider that the P2/266 processors used in this are pretty damn meaty by embedded standards, and the graphics cards tend to be el-cheapo generic SVGA chipsets...

        Most of them will be running some custom software, possibly written on a Unix-style kernel. Often as not, it's something like QNX.
        Redhat 7 is right out. Far too big.
        • You probably wouldn't want to run any kind of Windows on a PC/104 system anyway.

          Not so... WinCE is a perfect candidate for embedded systems. But that's not even the point. Embedded systems are used for device control and data collection. There is typically no GUI, and the GUIs that are written are single apps. But when you want a GUI app for your computer-controlled lathe, why not use WinCE's toolkits and APIs?

          Redhat 7 is right out. Far too big.

          Far too big for what? To fit in RAM? RedHat 7.x, Mandrake 8.x, et al. are just Linux. The way I see it, what you get when you go with a RedHat or a Mandrake is a set of matched packages. Everything is compiled, ready to go, using the same optimizations, and dependencies are checked for you. So, why not use RedHat as a base system? You pick and choose what you want to install on your hard drive when designing the system, and then when you're done writing your app you pick the components that are required to run it and copy those onto the DiskOnChip that you plug into the finished system. Of course a complete install of RedHat 7.3 is not going to fit on a 128 MB chip - that's not the intended market. You can, however, easily fit the kernel, utilities, system libs, and gtk+ for linux-fb on a 32 MB chip and have lots of room to spare for your embedded system app.

          • The problem with WinCE is that if you want to develop for it, you need to jump through all sorts of licensing hoops. QNX isn't quite as bad - at least you can download QNX for non-commercial use to see if you can use the damn thing. Also, QNX uses pretty much a standard Unix API (it's fairly POSIX-y), so it's easier to get your head round if you're used to that environment. If you're used to the Windows API, CE is probably the way to go.

            In a project I was involved with recently, non-free software was specifically excluded simply because there would be problems with independent review. With a non-free environment, there are all these NDAs and stuff, whereas if it's GPL'd that doesn't matter. Now, for those "GPL is IP theft" types in management, it was easy to show them that an embedded control system that was completely open could be played about with by other people, but was no damn use without the heinously expensive machinery it controlled.

            RedHat is OK for embedded stuff, but Mandrake *requires* a Pentium or better. If you use a 386EX board or some such, you're screwed. In any case, since space is at a premium, starting with one of the "mini" distros is often a good idea (busybox instead of bash and gnu-utils, for example). The environment is often highly unusual, and may need funny drivers in the kernel and stuff, so you're almost as well rolling your own.
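For what it's worth, the "will my pared-down install actually fit" question from the DiskOnChip discussion above is easy to sanity-check with a few lines of script. A minimal sketch, assuming an illustrative 32 MB budget and a hypothetical list of paths:

```python
# Add up the size of a hand-picked set of files/directories and compare it to a
# small flash budget (e.g. a 32 MB DiskOnChip).  Paths are hypothetical examples.
import os

def tree_size(path):
    """Total bytes under a file or directory, skipping anything unreadable."""
    if os.path.isfile(path):
        return os.path.getsize(path)
    total = 0
    for root, _dirs, files in os.walk(path, onerror=lambda err: None):
        for name in files:
            try:
                total += os.path.getsize(os.path.join(root, name))
            except OSError:
                pass
    return total

BUDGET_MB = 32
WANTED = ["/boot/vmlinuz", "/bin", "/sbin", "/lib", "/etc"]  # candidate minimal set

used_mb = sum(tree_size(p) for p in WANTED if os.path.exists(p)) / 2**20
print(f"selected set: {used_mb:.1f} MB of {BUDGET_MB} MB budget"
      f" ({'fits' if used_mb <= BUDGET_MB else 'too big'})")
```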
    • I was wondering about that.... appreciate the information
  • by bc90021 ( 43730 ) <bc90021NO@SPAMbc90021.net> on Monday May 06, 2002 @10:37PM (#3475099) Homepage
    ...you could probably turn four of these into table legs!

    Of course, while that would save space and money, having to take your table apart every time you needed to fix or swap something would be a PITA. ;)
  • The HowTo Text (Score:1, Redundant)

    by /dev/trash ( 182850 )
    What Is PC/104?

    PC/104 (IEEE P996.1) was developed to fill the need for an embedded platform, which was compliant with standardized hardware and software of the PC architecture. Mechanically quite different from the PC form factor, PC/104 modules are 3.6 x 3.8 inches in size. A self-stacking bus is implemented with pin-and-socket connectors composed of 64- and 40-contact male/female headers, which replace the card edge connectors used in standard PC hardware. Virtually anything that is available for a standard PC is available in the PC/104 form factor. PC/104 components are designed to be stacked together to create a complete embedded solution. Normally there will be a single CPU board and several peripheral boards connected by the PC/104 (ISA) system bus. Often there will be a PCI bus provided by the CPU board that will accommodate PCI peripheral boards (this standard is called PC/104+). Overall the price point for a highly integrated PC/104 CPU module is lower than for a comparable IBM-compatible PC. However, due to the power dissipation constraints typically found in embedded applications, CPU horsepower is generally lower. For more, look at the PC/104 consortium site.

    The MiniCluster power base is a custom assembly available from Parvus Corporation. Referring to the parts list, the power base is composed of a custom extrusion, end plate, power entry module, open frame power supply, and (Parvus P/N PRV-0974A-01) PC/104 power interface w/ temperature sensing. The end plate and custom extrusion form the base for the MiniCluster. The custom extrusion is machined for the power entry module and the open frame power supply. The power entry module contains a power cord receptacle, fuse, and power switch. Switched 110Vac from the power entry module is wired to the open frame power supply, which supplies all required DC voltages for the PC/104 stack. DC outputs supplied by the open frame power supply feed the PC/104 power interface module, which is the first (bottom) module in the stack. The PC/104 power interface module contains two fans, which ventilate the bottom of the stack and the open frame power supply.

    For those hearty souls wishing to construct their own power base, the open frame power supply is manufactured by Condor Power Supplies, (800) 235-5929, www.condorpower.com. For technical information, reference the model GLC65A switching power supply here. The PC/104 power interface module specifications are listed here.

    The CPU modules in the system are operated as Single Board Computers (SBCs) with the exception of the top CPU in the stack. The bottom three CPUs need only be supplied power on the PC/104 bus. To interrupt all PC/104 bus lines except for the bus power lines, double-height stack-through adapters are used to connect the CPU boards together, and all PC/104 bus connections except power connections are interrupted by means of cutting pins on the adapters.

    Advanced Digital Logic MSMP5SEN/SEV CPUs are used in the MiniCluster, sporting the following features: Pentium II 266 MHz, 128 MB DRAM, LPT1 parallel port, COM1 & COM2 serial ports, speaker, PS/2 or AT keyboard interface, PS/2 mouse interface, floppy disk interface, AT-IDE hard disk interface, VGA/LCD interface, 10/100Mbit Ethernet interface, (optional) video input with frame grabber, (optional) compact flash socket, and many more features.

    Dual PCMCIA Interface Module: The Parvus PRV-1016X-03 PC/104 dual left-loading PCMCIA interface works with PC Cards and compact flash devices. The board uses the Intel (Cirrus Logic) PD6722 chip, which works well in Linux systems. This interface is used to provide a second (wired or wireless) network interface on node 1 (top CPU in the stack) of the MiniCluster. The second network interface is used to connect to the public network. Since Node 1 has both private and public network interfaces, it may act as a routing or masquerading node for the cluster. All modules above Node 1 in the stack (hubs, PCMCIA interface, and Quad CPU switch) share a full PC/104 bus with Node 1. Install the PCMCIA interface module with the default Parvus configuration.

    The PRV-0752X-01 PC/104 10Mbit Ethernet hub board has four 10BaseT ports, one AUI port, and one 10Base2 (thinnet) port. As configured in the MiniCluster, two of these hub cards are installed in the stack. One TP port on each hub module is used to interconnect the hubs, leaving six ports available. Four of the ports are used to connect the stack CPUs on a private network, one port is connected to an RJ-45 jack on the MiniCluster end plate (making the MiniCluster private network available to the outside world), and one port is unused (spare). Refer to the Parvus "PC/104 Ethernet Products User Manual" at this place for configuration and connection options.

    The Parvus PRV-0886X-01 Quad CPU Switch is essentially a KVM switch which is integral to the PC/104 MiniCluster stack. This module also routes reset, speaker, and COM port lines to the specific CPU it is switched to. The Quad CPU Switch has proven to be very useful in performing local and diagnostic operations on the MiniCluster. Refer to the Quad CPU Switch manual (pg. 2) for the board and connector layout. When configuring this module, be sure to jumper off the P1, P2, P3, P4 power select options. Leave the card in the default base address configuration. If an external CPU Selector switch is used, be sure to remove the 74HC574 chip from socket U7. Refer to the PC/104 Quad CPU Switch manual at this place.

    PS/2 keyboard/mouse adapter, reset switch, and speaker connections are made to the Quad CPU Switch J8 utility connection. Refer to the Quad CPU Switch manual, pg. 4.

    VGA port adapter is connected to the Quad CPU Switch J9-VGA. Refer to the Quad CPU Switch manual, pg.5. A cable is available from Parvus (CBL-1009a-01).

    External CPU Select Switch is connected to Quad CPU Switch J11. Refer to pg.6, Quad CPU Switch manual

    COM Port DB-9P connector can be connected to the Quad CPU Switch J7. Refer to pg.5, Quad CPU Switch manual. A cable is available from Parvus (CBL-1010a-01).

    The MiniCluster is built with Parvus SnapStick components, which form an incremental card cage as modules are put into the stack. Refer to the Parvus SnapStick webpage for more information on Snapstick Components.

    Connect a CPU Module to the Power Base via a modified double-height adapter (bus power adapter) and power the stack. Check PC/104 bus voltages. Attach Advanced Digital Logic keyboard/video/utility cable set to the CPU under test and check the CPU for proper operation. Install a compact flash microdrive with preinstalled operating system and power the stack. Check for proper operation.

    Continue to add power bus adapter/CPU modules to the stack, checking each CPU for proper operation each time a new CPU is added.

    After the fourth CPU is added to the stack, add the PCMCIA adapter interface to the stack by use of a PC/104 double-height adapter. Power the stack and check that the PCMCIA module detects correctly under Linux. The CPU modules are numbered one to four, top to bottom (of the stack). The Node 1 CPU is connected to the PCMCIA interface.
    Install a hub module into the stack. Test for proper private network operation by connecting two nodes to the hub, powering the stack, and running ping tests against each of the nodes under test.

    Install the second hub module into the stack. Cross connect the two hub modules, and connect two nodes - one to a port on each of the two hub modules, power the stack and run ping tests against each of the nodes under test. This completes the test for each of the hub modules.
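The ping tests in the two hub steps above lend themselves to a small script, so every node gets checked the same way each time a board or hub is added. A minimal sketch; the 10.0.0.x addressing is an assumption, not something specified by the HOWTO:

```python
# Ping each MiniCluster node on the private network and report which ones answer.
# The node-to-address mapping is an assumption; substitute the real addresses.
import subprocess

NODES = {f"node{i}": f"10.0.0.{i}" for i in range(1, 5)}   # nodes 1-4, top to bottom

def reachable(addr, count=3, deadline_s=5):
    """True if the host answers ping (Linux iputils flags assumed)."""
    result = subprocess.run(
        ["ping", "-c", str(count), "-w", str(deadline_s), addr],
        stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL,
    )
    return result.returncode == 0

for name, addr in sorted(NODES.items()):
    print(f"{name:6s} ({addr}): {'ok' if reachable(addr) else 'NO RESPONSE'}")
```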

    Remove the PCMCIA/HUB/HUB substack above node 1 and install the Quad CPU Switch module. Connect the end plate to the Quad CPU Switch to supply mouse/keyboard/VGA monitor connection to the Quad CPU Switch. Connect a CPU to the Quad CPU Switch (utility/com/VGA connections). Select the channel under test with the external CPU select switch. Power the stack and check for proper operation of Quad CPU Switch.
    Connect remaining CPU com/utility/VGA cables to the Quad CPU Switch. Power the stack, switch between each node and check for proper operation of each CPU and of the Quad CPU Switch.
    Once the Quad CPU Switch has been integrated into the stack, reinstall the PCMCIA/Hub/Hub substack into the stack. Power the stack and check for proper operation.

    If not performed prior to this point, 1/4-20 threaded rod should be inserted into each SnapStick corner and screwed into the power base SnapMounts. The SnapStick assembly should be tightened at the top by use of a SnapWrench applied to each of the top SnapNuts. The end plate is bolted to the 6/32 nut end of the SnapNut.

    Slide the MiniCluster plastic case over the top of the stack. The cover should interface well with each set of SnapGuides in the SnapStick cage. Connect the case fans to the power base 5V screw terminal just prior to pushing the case all the way down to contact the power base extrusion.

    Attach the case top plate to the stack end plate with sheet metal screws.

    Happy Parallel Computing!

  • by satsuke ( 263225 ) on Monday May 06, 2002 @10:49PM (#3475159)
    I know they have some public-interest sections on their site... but they are a corporation with products and such... I wonder how they got a .gov address...
  • Parallel computing, Literally!

    -Jeff

  • by LuxuryYacht ( 229372 ) on Monday May 06, 2002 @11:12PM (#3475245) Homepage
    Take a look at this Cluster-in-a-lunchbox
    aka BentoBox [lanl.gov]

  • by binaryDigit ( 557647 ) on Monday May 06, 2002 @11:13PM (#3475247)
    In some of the pix, there are two rods on opposite corners. Then in one pic, all four rods are there. But then in some other pix there are two rods in a single side. I think that this is a rod conspiracy. All these pictures are not of the same unit, no sir ree bob. That page was obviously pieced together from multiple units being put together in multiple locations over some period of time. For what nefarious purpose, only Alex Jones would know (I swear, if you look real close in one of the pics, you can see a tiny black helicopter in the reflection of the DMM).
    • "In rod we trust..."
      "All hail the rod..."

      Sorry folks... haven't seen a Simpsons quote since Slashback... Had to do it...
    • If not performed prior to this point, 1/4-20 threaded rod should be inserted into each SnapStick corner and screwed into the power base SnapMounts. The SnapStick assembly should be tightened at the top by use of a SnapWrench applied to each of the top SnapNuts. The end plate is bolted to the 6/32 nut end of the SnapNut.
  • I don't know who the engineer idea-guy who designed this is, but it seems fairly evident that it's influenced at least in part by Lego.

    I'd like to see an interview or something to see if this can be confirmed - if so, it presents some interesting questions about the value of today's "2-step" construction toys.
  • by larry bagina ( 561269 ) on Monday May 06, 2002 @11:24PM (#3475285) Journal
    Can we call it "minix"?
    • minix

      That's actually an earlier *nix-type operating system. Not sure whether it's a Linux precursor, but Linux contains a driver for the Minix filesystem.
        • Linux was actually born out of Minix. The Linux kernel was originally written to work within the Minix system, as Linus himself explained when Linux was first announced [google.com].

        Now, the following trivia comes from one of my current professors (he happens to be the Phil Nelson mentioned at the bottom of the previously linked announcement). As he tells it, Minix was created to be an instructional operating system, and the professor who wrote it is reported to have said, "If the Linux kernel had been written for my Operating Systems class, it would have received an F."

    • Poster #1- that's actually an earlier *nix-type operating system.

      Poster #2 - Linux is actually born out of Minix. The Linux kernel was originally written to work within the Minix system, as Linus himself explained when Linux was first announced [google.com].

      Poster #3 - Sorry, but Minix [cs.vu.nl] is already taken.

      Attention would-be Linux History professors:

      It. Was. A. Joke.

      That is all.

      • Fine, then let there be a compromise:

        the name shall be Minux.
        It has the same dangerously trademark-infringing characteristics we love in the OSS/FS community and has an 'x' in it. What more do you need?
  • But they should follow this up by posting a HOWTO about actually getting Linux to run on this machine.
  • ...not another magic box hoax!

  • Also at that site [sandia.gov] is the price list. At $642 for each of those Pentium II boards (not including RAM), I think I'll stick with buying "jumbo-mini" Beowulf nodes for the time being.
    • $642 for a 266 MHz node is NO deal, although it may have been when they wrote the article; what was it, about a year ago?
  • They have 4 266 MHz Pentium IIs at almost $700 each! So about 1.1 GHz of CPU speed combined. And how much did it cost altogether? Around $8000!

    It may have been fun to build it, but come on. Just buy a 1 GHz book-style case PC for well under $1000. It would be even smaller, consume less electricity, and probably be more reliable since there are fewer parts.
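Laying the arithmetic out explicitly makes the point; the figures below are just the rounded numbers quoted above.

```python
# Crude cost-per-MHz comparison between the MiniCluster and a single cheap PC,
# using the rounded figures quoted above.
systems = {
    "MiniCluster (4x PII/266)": {"mhz": 4 * 266, "cost": 8000},
    "1 GHz book-style PC":      {"mhz": 1000,    "cost": 1000},
}

for name, s in systems.items():
    print(f"{name:26s}: {s['mhz']:4d} MHz for ${s['cost']:>5d}"
          f" -> ${s['cost'] / s['mhz']:.2f}/MHz")
```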

    • I think you fail to see the purpose. A 1 GHz book-style case PC would have a higher MHz rating, but that's not what they wanted. Seems to me they intended to create a highly expandable, tiny-form-factor, parallel-processing computer, not an exceedingly fast machine. Please correct me if I'm wrong, but it doesn't seem to be a means to an end, more of a thing to do because they can.
    • When I first saw this PC/104 spec, I thought: great, I can build myself a nice small firewall box (literally, since it would be square) with a CPU and 3 NICs, and save myself a good deal of space...

      Then I saw the price..
  • PC/104 modules are not exactly cheap, and 266 MHz is not exactly fast. You'd get something faster and cheaper with a dual AMD and spend less time on it.

    If people are going to spend time on this sort of thing, why not do something interesting with the architecture? Use some interesting processors, use FPGAs for interconnects, whatever.

  • BYO backplane (Score:4, Interesting)

    by seanadams.com ( 463190 ) on Tuesday May 07, 2002 @01:15AM (#3475628) Homepage
    Here's a similar project I did a couple of years ago, in case anyone's interested. It's a do-it-yourself backplane [seanadams.com] for those highly integrated full-length single board computers. I was able to make a pretty cost-effective high-density cluster using a single case with nine PCs inside - eight single-CPU Celerons and a dual PII. There was even some room left over for laptop hard drives between the cards. Total rack space: 6U. You could also fit this in a deep 4U chassis.
  • This strikes me as a good idea for datacenters that wish to offer dedicated systems to their customers. Normally, this is kind of expensive, with rackmounts costing at least $1200 for bottom of the line. However, imagine a lot of these little modules.

    You could hook them all up to the network and boot off of some network-attached storage, where the customer OS would be located. This way, if a server failed, all you would have to do is replace the module and, voila, the system is up again.

    Never mind the speed issue -- I think there are some PIII PC/104 modules that go into the GHz range. But it would be really cool, considering these things are a lot smaller than standard 19" racks. You could triple the hosting density of a datacenter by using these things.

    Never mind the heat issues, but it does seem like a cool idea.
  • You can make the same thing with old P-II desktops for 1/8th the price of that thing. PC/104 CPU boards, especially anything above a Pentium level, are horribly overpriced (and rightfully so, as only 1/50th as many of them get sold compared to regular motherboards).

    For less money you can make a 4-node P-III 866 cluster in rackmount cases with SCSI Ultra160 drives, including the rack with nice smoked-glass doors, the rackmount KVM, and a rackmount 10/100 switch.... It still doesn't eliminate the "neat-o" factor of the PC/104 design, though.
  • Imagine a Beowulf cluster of those things!

    Ben
  • If you are very careful you can pull a CPU out of the middle of the stack without it falling over :).

    JENGA JENGA JENGA JENGA ....
  • But this kind of stuff doesn't seem to make a lot of sense.

    To my left as I type is a 4x PII 200 MHz AMD Goliath. I did our network admin a favor and took it out of the server room for him. I'm using it as a toy machine to run apps on. It's huge, and compared to my PIII 700 MHz laptop with 500 MB of RAM, it's just plain slow.

    My point is: I'll bet 10-1 I can write a multi-threaded app in Java for this beast that could spank the crap out of a distributed app written in C for that cluster. The one exception would be ultra-low-bandwidth apps like Distributed.net. Anything which required more than one cross-CPU transaction per second would be dreadfully slow compared to an SMP PC. But I understand the need for clustered computing and it is really cool, so I'll leave this point alone and point out the other obvious thing...

    I can see the need to build a cluster if you are doing research/development into clustered computing. But for the cost of this, you could cluster two of those Wal-mart OSless PCs. They would probably be a hell of a lot faster, take up only a couple square feet more room, be much less of a headache to get running, contain a whole lot more memory and disk space, etc...

    This is ultra-geekdom coolness but it just doesn't make sense, IMO.
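The cross-CPU transaction argument can be made concrete with a toy model of parallel time plus communication time; every number here is an illustrative assumption, not a measurement of either machine.

```python
# Toy model: how per-transaction communication cost erodes parallel speedup.
# All timings are illustrative assumptions, not measurements.

def effective_speedup(n_cpus, work_s, n_txns, txn_cost_s):
    """Serial time divided by (ideal parallel time + communication time)."""
    return work_s / (work_s / n_cpus + n_txns * txn_cost_s)

WORK_S = 10.0                            # seconds of pure computation
for txn_cost in (1e-6, 1e-4, 1e-2):      # shared memory vs. fast LAN vs. very chatty link
    for txns in (10, 10_000):
        s = effective_speedup(4, WORK_S, txns, txn_cost)
        print(f"txn cost {txn_cost:g} s, {txns:6d} txns -> {s:.2f}x on 4 CPUs")
```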
    • I don't understand the logic -- all this time and thought put into something, and they use a hub? I can buy a very compact L2 switch at Fry's for less than $20... Performance and power are obviously the intent with any cluster, so why choose a hub over a switch? Who uses hubs for anything other than packet sniffing these days?
      • Because it isn't an IP network? I'm assuming that TCP/IP has WAY too much overhead for clustering.

        I can't be sure, however, because I haven't read all the specs.
  • I am currently working for a company that is considering using a PC/104 setup for an interface module for a small data collection unit. We don't need much in terms of power for the actual board, so we may use just an old 486-style processor for it. My question is: what are some good embedded Linux distros out there that run on PC/104 boards, with networking, stripped down to a small size (under 50, or even 25, megs)? Are there some main distros that we might want to strip down and use? What challenges are there in getting things set up on a PC/104 board?
  • Cost and availability.
    I've been looking at PC/104 for use in mobile clusters, wearables, and mini-luggables for quite some time, and the main reason why I haven't been able to do any of the projects in my head is that I can't get the damn modules.
    Granted, ePay [ebay.com] has some stuff sometimes, but it's mostly outdated to the point of unusability.
    (Usability: I have a pair of Dolch 486 luggables I like. I suppose 486DXes are pushing it for me; that's the absolute lowest I'd go.)

    Most of the companies that have really cool PC/104 CPU modules only sell to resellers, and even the high-486/low-Pentium class ones are *expensive*.
    Oh well, I'm sure by the time they're completely passé I'll be attending computer 'shows' where we all showcase our hot-rodded PC/104 boxen.

    Remember, pinstriping will get you everywhere :)

  • Now that is solid fuel! Imagine what you could do with that if you had some solar volcanic plates to generate the energy needed in the field.
