PC/104 Linux Minicluster - miniHowTo 105
coldfire writes: "At LISA 2001 there was a neat presentation on a PC/104-based mini parallel computer. It seems that the HowTo has now been posted, for the world to behold." From last year or not, this has some great pictures.
Re:Looks cool (Score:1, Offtopic)
Oh. Another thing, that clip-on plastic bowtie you got out of the crackerjack box, that doesn't really count. You need a real tie, made out of cloth. It would also be good if the tie you pick out didn't depict various skeletons fornicating in various styles. Plain blue would be my recommendation. Take this advice, and in 10 years when the economy recovers, you oughtta be able to get a job you aren't qualified for, just like in the good ole days of 1997.
heat (Score:4, Insightful)
First post.. mandatory w00t
Re:heat (Score:3, Informative)
Re:heat (Score:1)
They're also commonly less powerful CPU-wise than the typical desktop PC.
Computing tower of Babel (Score:3, Funny)
Re:Computing tower of Babel (Score:1)
Each node does require a finite amount of power, and with a P/S that outputs a finite amount of power, you are limited to a finite number of nodes...
Now all we have to do is build a toilet out of these things and hook it up to one of these [slashdot.org] (detailed here [discover.com]), and we really could have infinite processor power...
Re:Computing tower of Babel (Score:1)
computing power for a price (Score:1)
Re:Computing tower of Babel (Score:2, Funny)
I pity the guy who gets stuck with brainf***
Just in case! (Score:4, Informative)
PC/104 (IEEE P996.1) was developed to fill the need for an embedded platform compliant with the standardized hardware and software of the PC architecture. Mechanically quite different from the PC form factor, PC/104 modules are 3.6 x 3.8 inches in size. A self-stacking bus is implemented with pin-and-socket connectors composed of 64- and 40-contact male/female headers, which replace the card-edge connectors used in standard PC hardware. Virtually anything that is available for a standard PC is available in the PC/104 form factor. PC/104 components are designed to be stacked together to create a complete embedded solution. Normally there will be a single CPU board and several peripheral boards connected by the PC/104 (ISA) system bus. Often there will be a PCI bus provided by the CPU board that will accommodate PCI peripheral boards (this standard is called PC/104-Plus). Overall, the price point for a highly integrated PC/104 CPU module is lower than for a comparable IBM-compatible PC. However, due to the power dissipation constraints typically found in embedded applications, CPU horsepower is generally lower. For more, see the PC/104 Consortium site [pc104.org].
Regarding MSFT and PC/104 (Score:1, Redundant)
Q. We are a company considering using the PC/104 standard in an embedded system. One big worry that we need answered before even thinking of using this standard in our products is: what is the future of PC/104 now that Microsoft has announced it will not support the ISA bus (that is, PC/104) in the future?
A. Despite the "PC99" recommendations of Microsoft and Intel, which eliminate the need for the ISA bus, Intel (and others) have promised to keep current ISA chipsets alive for at least five to seven years. There are many PC/104-based "real world" interfaces from hundreds of manufacturers, and these are not going to become obsolete just because the desktop PC no longer requires or uses ISA slots.
Functions such as analog I/O, digital I/O, motion control, and custom application interfaces can still take advantage of the low cost and design simplicity of the ISA bus. Contrary to Microsoft's and Intel's marketing focus, the 386 and 486 processors are still the most popular in PC/104-based embedded systems, with Pentium designs only recently becoming available on a wide scale.
The PC/104 Consortium added PCI to PC/104, resulting in PC/104-Plus (= ISA bus plus PCI bus), in order to allow high-speed processors such as the Intel Pentium to use higher-speed I/O bandwidth and achieve their full potential in embedded systems. The PC/104-Plus standard, with its PCI bus in addition to the ISA bus, provides a long-term future for PC/104. Manufacturers of PC/104 modules now have three choices, all within the industry-standard PC/104 form factor: (1) ISA bus only; (2) PCI plus ISA buses; and (3) PCI bus only.
Despite the popularity of PCI in desktop PCs, there will continue to be an advantage to having two separate buses in many embedded system applications: the PCI bus for high-speed block data transfers (e.g., video, networking, disk storage), and the ISA bus for byte-oriented transfers (e.g., real-world data acquisition and control).
Today, 80% to 90% of PC/104 form-factor modules use the ISA bus only. Within approximately five years, it is likely that more than 50% will use the PCI bus. It will probably take ten years before today's situation is reversed, with 80% to 90% of PC/104 form-factor modules using the PCI bus only. Even so, ISA will still be supported on PC/104-Plus modules ten years from now.
Re:Regarding MSFT and PC/104 (Score:2)
Most of them will be running some custom software, possibly written on a Unix-style kernel. As often as not, it's something like QNX.
Redhat 7 is right out. Far too big.
Re:Regarding MSFT and PC/104 (Score:2)
Not so... WinCE is a perfect candidate for embedded systems. But that's not even the point. Embedded systems are used as device controllers and for data collection. There is typically no GUI, and the GUIs that do get written are single apps. But when you want a GUI app for your computer-controlled lathe, why not use WinCE's toolkits and APIs?
Redhat 7 is right out. Far too big.
Far too big for what? To fit in RAM? Redhat 7.x, Mandrake 8.x, et al. are just Linux. The way I see it, what you get when you go with a RedHat or a Mandrake is a set of matched packages. Everything is compiled, ready to go, using the same optimizations, and dependencies are checked for you. So, why not use RedHat as a base system? You pick and choose what you want to install on your hard drive when designing the system, and then when you're done writing your app you pick the components that are required to run it and copy those onto the DiskOnChip that you plug into the finished system. Of course a complete install of RedHat 7.3 is not going to fit on a 128 MB chip - that's not the intended market. You can, however, easily fit the kernel, utilities, system libs, and gtk+ for linux-fb on a 32 MB chip and have lots of room to spare for your embedded system app.
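For instance, here's a throwaway sketch of the "pick the components and copy them over" step. The mount point and the manifest are made-up illustrations, not a real minimal system:

    #!/usr/bin/env python3
    # Hypothetical: copy a hand-picked manifest of files from the
    # development box onto a DiskOnChip root filesystem mounted at
    # /mnt/doc. Every path below is an illustrative assumption.
    import os
    import shutil

    DOC_ROOT = "/mnt/doc"  # assumed DiskOnChip mount point

    MANIFEST = [
        "/boot/vmlinuz",            # kernel
        "/sbin/init",
        "/bin/sh",
        "/lib/ld-linux.so.2",       # dynamic loader
        "/lib/libc.so.6",
        "/usr/local/bin/myapp",     # your embedded app (hypothetical)
    ]

    for src in MANIFEST:
        dst = DOC_ROOT + src
        os.makedirs(os.path.dirname(dst), exist_ok=True)
        shutil.copy2(src, dst)      # preserve modes and timestamps
        print("copied", src)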
Re:Regarding MSFT and PC/104 (Score:2)
In a project I was involved with recently, non-free software was specifically excluded simply because there would be problems with independent review. With a non-free environment, there are all these NDAs and stuff, whereas if it's GPL'd that doesn't matter. Now, for those "GPL is IP theft" types in management, it was easy to show them that an embedded control system that was completely open could be played about with by other people, but was no damn use without the heinously expensive machinery it controlled.
RedHat is OK for embedded stuff, but Mandrake *requires* a Pentium or better. If you use a 386EX board or some such, you're screwed. In any case, since space is at a premium, starting with one of the "mini" distros is often a good idea (busybox instead of bash and gnu-utils, for example). The environment is often highly unusual, and may need funny drivers in the kernel and stuff, so you're almost as well rolling your own.
Re:Just in case! -thanx for the info! (Score:1)
With some properly ventilated casing... (Score:3, Interesting)
Of course, while that would save space and money, having to take your table apart every time you needed to fix or swap something would be a PITA.
Re:With some properly ventilated casing... (Score:2)
Sure, and lay a big LCD monitor [slashdot.org] across the top for a coffee-table computer.
The HowTo Text (Score:1, Redundant)
PC/104 (IEEE P996.1) was developed to fill the need for an embedded platform compliant with the standardized hardware and software of the PC architecture. Mechanically quite different from the PC form factor, PC/104 modules are 3.6 x 3.8 inches in size. A self-stacking bus is implemented with pin-and-socket connectors composed of 64- and 40-contact male/female headers, which replace the card-edge connectors used in standard PC hardware. Virtually anything that is available for a standard PC is available in the PC/104 form factor. PC/104 components are designed to be stacked together to create a complete embedded solution. Normally there will be a single CPU board and several peripheral boards connected by the PC/104 (ISA) system bus. Often there will be a PCI bus provided by the CPU board that will accommodate PCI peripheral boards (this standard is called PC/104-Plus). Overall, the price point for a highly integrated PC/104 CPU module is lower than for a comparable IBM-compatible PC. However, due to the power dissipation constraints typically found in embedded applications, CPU horsepower is generally lower. For more, see the PC/104 Consortium site.
The MiniCluster power base is a custom assembly available from Parvus Corporation. Referring to the parts list, the power base is composed of a custom extrusion, an end plate, a power entry module, an open frame power supply, and a PC/104 power interface with temperature sensing (Parvus P/N PRV-0974A-01). The end plate and custom extrusion form the base for the MiniCluster. The custom extrusion is machined for the power entry module and the open frame power supply. The power entry module contains a power cord receptacle, fuse, and power switch. Switched 110 VAC from the power entry module is wired to the open frame power supply, which supplies all required DC voltages for the PC/104 stack. The DC outputs of the open frame power supply feed the PC/104 power interface module, which is the first (bottom) module in the stack. The PC/104 power interface module contains two fans, which ventilate the bottom of the stack and the open frame power supply.
For those hearty souls wishing to construct their own power base, the open frame power supply is manufactured by Condor Power Supplies, (800) 235-5929, www.condorpower.com. For technical information, reference the model GLC65A switching power supply here. The PC/104 power interface module specifications are listed here.
The CPU modules in the system are operated as Single Board Computers (SBCs), with the exception of the top CPU in the stack. The bottom three CPUs need only be supplied power on the PC/104 bus. The CPU boards are therefore connected together with double-height stack-through adapters, and all PC/104 bus connections except the power connections are interrupted by cutting pins on the adapters.
Advanced Digital Logic MSMP5SEN/SEV CPUs are used in the MiniCluster, sporting the following features: Pentium II 266 MHz, 128 MB DRAM, LPT1 parallel port, COM1 and COM2 serial ports, speaker, PS/2 or AT keyboard interface, PS/2 mouse interface, floppy disk interface, AT-IDE hard disk interface, VGA/LCD interface, 10/100 Mbit Ethernet interface, optional video input with frame grabber, optional CompactFlash socket, and more.
The Parvus PRV-1016X-03 PC/104 dual left-loading PCMCIA interface module works with PC Cards and CompactFlash devices. The board uses the Cirrus Logic PD6722 chip (Intel i82365-compatible), which works well under Linux. This interface provides a second (wired or wireless) network interface on Node 1 (the top CPU in the stack) of the MiniCluster. The second network interface is used to connect to the public network. Since Node 1 has both private and public network interfaces, it may act as a routing or masquerading node for the cluster. All modules above Node 1 in the stack (hubs, PCMCIA interface, and Quad CPU Switch) share a full PC/104 bus with Node 1. Install the PCMCIA interface module with the default Parvus configuration.
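As a sketch of what the masquerading role might look like on Node 1 with a 2.4-era kernel and iptables (the interface names and private subnet are assumptions; the HowTo doesn't prescribe them):

    #!/usr/bin/env python3
    # Hypothetical gateway setup for Node 1: eth0 is the private
    # cluster network, eth1 is the PCMCIA card on the public net.
    import subprocess

    def run(cmd):
        print("+", " ".join(cmd))
        subprocess.check_call(cmd)

    # Enable IPv4 forwarding between the two interfaces.
    with open("/proc/sys/net/ipv4/ip_forward", "w") as f:
        f.write("1\n")

    # Masquerade private-net traffic out the public interface.
    run(["iptables", "-t", "nat", "-A", "POSTROUTING",
         "-s", "10.0.0.0/24", "-o", "eth1", "-j", "MASQUERADE"])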
The PRV-0752X-01 PC/104 10 Mbit Ethernet hub board has four 10BaseT ports, one AUI port, and one 10Base2 (thinnet) port. As configured in the MiniCluster, two of these hub cards are installed in the stack. One TP port on each hub module is used to interconnect the hubs, leaving six ports available. Four of the ports are used to connect the stack CPUs on a private network, one port is connected to an RJ-45 jack on the MiniCluster end plate (making the MiniCluster private network available to the outside world), and one port is unused (spare). Refer to the Parvus "PC/104 Ethernet Products User Manual" for configuration and connection options.
The Parvus PRV-0886X-01 Quad CPU Switch is essentially a KVM switch that is integral to the PC/104 MiniCluster stack. This module also routes the reset, speaker, and COM port lines to whichever CPU it is switched to. The Quad CPU Switch has proven very useful for performing local and diagnostic operations on the MiniCluster. Refer to the Quad CPU Switch manual (pg. 2) for the board and connector layout. When configuring this module, be sure to jumper off the P1, P2, P3, and P4 power select options. Leave the card in the default base address configuration. If an external CPU select switch is used, be sure to remove the 74HC574 chip from socket U7. Refer to the PC/104 Quad CPU Switch manual for details.
The PS/2 adapter for keyboard/mouse, reset switch, and speaker connections is made to the Quad CPU Switch J8 utility connector. Refer to the Quad CPU Switch manual, pg. 4.
The VGA port adapter is connected to Quad CPU Switch J9 (VGA). Refer to the Quad CPU Switch manual, pg. 5. A cable is available from Parvus (CBL-1009a-01).
The external CPU select switch is connected to Quad CPU Switch J11. Refer to the Quad CPU Switch manual, pg. 6.
A COM port DB-9P connector can be connected to Quad CPU Switch J7. Refer to the Quad CPU Switch manual, pg. 5. A cable is available from Parvus (CBL-1010a-01).
The MiniCluster is built with Parvus SnapStick components, which form an incremental card cage as modules are added to the stack. Refer to the Parvus SnapStick webpage for more information on SnapStick components.
Connect a CPU module to the power base via a modified double-height adapter (bus power adapter) and power the stack. Check the PC/104 bus voltages. Attach the Advanced Digital Logic keyboard/video/utility cable set to the CPU under test and check the CPU for proper operation. Install a CompactFlash microdrive with a preinstalled operating system and power the stack. Check for proper operation.
Continue to add power bus adapter/CPU modules to the stack, checking each CPU for proper operation each time a new CPU is added.
After the fourth CPU is added to the stack, add the PCMCIA interface to the stack using a PC/104 double-height adapter. Power the stack and check that the PCMCIA module is detected correctly under Linux (see the sketch below). The CPU modules are numbered one to four, top to bottom of the stack. The Node 1 CPU is connected to the PCMCIA interface.
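One way to sanity-check the detection step, assuming the 2.4-era pcmcia-cs tools are installed (cardctl and its availability are the assumption here):

    #!/usr/bin/env python3
    # Hypothetical check that the PD6722 sockets showed up under Linux.
    import subprocess

    # pcmcia-cs's cardctl reports per-socket status; a missing binary
    # or a nonzero exit suggests the controller wasn't detected.
    try:
        print(subprocess.check_output(["cardctl", "status"], text=True))
    except (OSError, subprocess.CalledProcessError):
        print("PCMCIA controller not detected - check the i82365 driver")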
Install a hub module into the stack. Test for proper private network operation by connecting two nodes to the hub, powering the stack, and running ping tests against each of the nodes under test.
Install the second hub module into the stack. Cross-connect the two hub modules and connect two nodes, one to a port on each of the two hub modules; power the stack and run ping tests against each of the nodes under test. This completes the test of each hub module.
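A quick sketch of that ping test, assuming the private network is numbered 10.0.0.1 through 10.0.0.4 (the HowTo doesn't give the actual addressing plan):

    #!/usr/bin/env python3
    # Hypothetical ping sweep of the MiniCluster private network.
    import subprocess

    NODES = ["10.0.0.%d" % n for n in range(1, 5)]  # nodes 1-4 (assumed)

    for addr in NODES:
        # One ping with a short timeout; exit code 0 means it answered.
        rc = subprocess.call(["ping", "-c", "1", "-W", "2", addr],
                             stdout=subprocess.DEVNULL,
                             stderr=subprocess.DEVNULL)
        print(addr, "up" if rc == 0 else "DOWN")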
Remove the PCMCIA/hub/hub substack above Node 1 and install the Quad CPU Switch module. Connect the end plate to the Quad CPU Switch to supply mouse, keyboard, and VGA monitor connections to the Quad CPU Switch. Connect a CPU to the Quad CPU Switch (utility/COM/VGA connections). Select the channel under test with the external CPU select switch. Power the stack and check for proper operation of the Quad CPU Switch.
Connect the remaining CPU COM/utility/VGA cables to the Quad CPU Switch. Power the stack, switch between nodes, and check for proper operation of each CPU and of the Quad CPU Switch.
Once the Quad CPU Switch has been integrated into the stack, reinstall the PCMCIA/hub/hub substack. Power the stack and check for proper operation.
If not performed prior to this point, 1/4-20 threaded rod should be inserted into each SnapStick corner and screwed into the power base SnapMounts. The SnapStick assembly should be tightened at the top with a SnapWrench applied to each of the top SnapNuts. The end plate is bolted to the 6-32 nut end of the SnapNut.
Slide the MiniCluster plastic case over the top of the stack. The cover should engage each set of SnapGuides in the SnapStick cage. Connect the case fans to the power base 5V screw terminal just before pushing the case all the way down to contact the power base extrusion.
Attach the case top plate to the stack end plate with sheet metal screws.
Happy Parallel Computing!
How did they get a .gov ?? (Score:3, Insightful)
Re:How did they get a .gov ?? (Score:3, Funny)
filter me o great filter of all things lame...
Re:How did they get a .gov ?? (Score:4, Interesting)
Welcome to ERI, the Embedded Reasoning Institute. ERI is a research facility at Sandia National Laboratories in Livermore, CA. We explore machine intelligence applied to embedded processors and sensors in a network.
Re:How did they get a .gov ?? (Score:1)
Erm.. they spent $5k on a quad 266 MHz machine. How could you think it was anyone *but* a government entity?
Re:How did they get a .gov ?? (Score:1)
Stack those babies up! (Score:2, Funny)
Parallel computing, Literally!
-Jeff
This One's Even Smaller and has a LinuxBIOS (Score:3, Interesting)
aka BentoBox [lanl.gov]
Re:Ok.. Now what? (Score:1)
Yeah, why are they using KVM? Just get each node's network running and then telnet between nodes...
I could waste lots of time screwing with this damn thing...
Too much money, but somebody HAD to do it...
What's up with the rods? (Score:3, Funny)
Re:What's up with the rods? (Score:1)
"All hail the rod..."
Sorry folks... haven't seen a Simpsons quote since Slashback... Had to do it...
Re:What's up with the rods? (Score:2)
Inspiration? (Score:1)
I'd like to see an interview or something to see if this can be confirmed - if so, it presents some interesting questions about the value of today's "2-step" construction toys.
minicluster linux (Score:5, Funny)
Re:minicluster linux (Score:1, Offtopic)
That's actually an earlier *nix-type operating system. Not sure whether it's a Linux precursor, but Linux contains drivers for the Minix filesystem.
Re:minicluster linux (Score:1, Informative)
Now, the following trivia comes from one of my current professors (he happens to be the Phil Nelson mentioned at the bottom of the previously linked announcement). As he tells it, Minix was created to be an instructional operating system, and the professor who wrote it is reported to have said, "If the Linux kernel had been written for my Operating Systems class, it would have received an F."
Re:minicluster linux (Score:1)
"F" for funding.
For more info. [www.dina.dk]
Re:minicluster linux (Score:1, Flamebait)
An "F". Gimme a break. Of course, he probably would have given NT an "I".
Re:minicluster linux (Score:3, Informative)
I think it's Andrew Tanenbaum, and he is in Amsterdam now, as far as I can see on his homepage [cs.vu.nl]
Re:minicluster linux (Score:1)
Poster #2 - Linux was actually born out of Minix. The Linux kernel was originally written to work within the Minix system, as Linus himself explained when Linux was first announced [google.com].
Poster #3 - Sorry, but Minix [cs.vu.nl] is already taken.
Attention would-be Linux History professors:
It. Was. A. Joke.
That is all.
Re:minicluster linux (Score:1)
the name shall be Minux.
It has the same dangerously trademark-infringing characteristics we love in the OSS/FS community and has an 'x' in it. What more do you need?
Re:minicluster linux (Score:1)
Interesting (Score:1)
Oh, no... (Score:1)
Whoa there tiger (Score:2)
i hear ya (Score:1)
This is madness! (Score:1)
It may have been fun to build, but come on. Just buy a 1 GHz book-style case PC for well under $1000. It would be even smaller, consume less electricity, and probably be more reliable since there are fewer parts.
Re:This is madness! (Score:1)
Re:This is madness! (Score:1)
Then I saw the price..
What's the point? (Score:1)
If people are going to spend time on this sort of thing, why not do something interesting with the architecture? Use some interesting processors, use FPGAs for interconnects, whatever.
BYO backplane (Score:4, Interesting)
Good idea for datacenters (Score:2)
You could hook them all up to the network and boot off of some network-attached storage, where the customer OS would be located. This way, if a server failed, all you would have to do is replace the module and voila, the system is up again.
Never mind the speed issue - I think there are some PIII PC/104 modules that go into the GHz range. But it would be really cool, considering these things are a lot smaller than standard 19" racks. You could triple the server density of a datacenter by using these things.
Never mind the heat issues, but it does seem like a cool idea.
Re:Good idea for datacenters (Score:2)
very cool.. but... (Score:2)
For less money you can make a 4-node P-III 866 cluster in rackmount cases with SCSI Ultra160 drives, including the rack with nice smoked-glass doors, the rackmount KVM, and a rackmount 10/100 switch... It still doesn't eliminate the "neat-o" factor of the PC/104 design though.
Wait for it.... :) (Score:1)
Ben
JENGA (Score:1)
JENGA JENGA JENGA JENGA
Perhaps it's just me (Score:2)
To my left as I type is a 4x PII 200 MHz AMD Goliath. I did our network admin a favor and took it out of the server room for him. I'm using it as a toy machine to run apps on. It's huge, and compared to my PIII 700 MHz laptop with 500 MB of RAM, it's just plain slow.
My point is: I'll bet 10-1 I can write a multi-threaded app in Java for this beast that could spank the crap out of a distributed app written in C for that cluster. The one exception would be ultra-low-bandwidth apps like Distributed.net. Anything that required more than one cross-CPU transaction per second would be dreadfully slow compared to an SMP PC (rough numbers at the end of this comment). But I understand the need for clustered computing and it is really cool, so I'll leave this point alone and point out the other obvious thing...
I can see the need to build a cluster if you are doing research/development into clustered computing. But for the cost of this, you could cluster two of those Wal-mart OSless PCs. They would probably be a hell of a lot faster, take up only a couple square feet more room, be much less of a headache to get running, contain a whole lot more memory and disk space, etc...
This is ultra-geekdom coolness but it just doesn't make sense, IMO.
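Rough numbers behind the latency point, with illustrative (assumed) costs: shared-memory synchronization on an SMP box runs on the order of a microsecond, while a round trip over the cluster's 10BaseT hub runs on the order of a millisecond:

    # Back-of-envelope speedup model: T_par = T_serial/N + msgs * latency.
    # All numbers are assumptions for illustration, not measurements.
    def speedup(t_serial, n_cpus, n_msgs, latency):
        return t_serial / (t_serial / n_cpus + n_msgs * latency)

    T = 60.0        # one minute of single-CPU work (assumed)
    MSGS = 100000   # cross-CPU transactions during the run (assumed)

    print("SMP     (1 us/txn): %.2fx" % speedup(T, 4, MSGS, 1e-6))
    print("10BaseT (1 ms/txn): %.2fx" % speedup(T, 4, MSGS, 1e-3))
    # Prints roughly 3.97x for the SMP box and 0.52x for the cluster -
    # at that message rate the cluster is slower than a single CPU.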
Why a HUB?? (Score:1)
Re:Why a HUB?? (Score:1)
I can't be sure, however, because I haven't read all the specs.
PC/104 Distros and Setup (Score:1)
Two Problems: (Score:1)
Cost and availability.
Most of the companies that have really cool PC/104 CPU modules only sell to resellers, and even the high-486/low-Pentium class ones are *expensive*. I've been looking at PC/104 for use in mobile clusters, wearables, and mini-luggables for quite some time, and the main reason why I haven't been able to do any of the projects in my head is because I can't get the damn modules.
Granted, ePay [ebay.com] has some stuff sometimes, but it's mostly outdated to the point of unusability.
(usability: I have a pair of Dolch 486 luggables I like. I suppose 486DXes are pushing it; that's the absolute lowest I'd go)
Oh well, I'm sure by the time they're completely passe I'll be attending computer 'shows' where we all showcase our hot-rodded PC/104 boxen
Remember, pinstriping will get you everywhere :)
Cluster-in-Lunch Box (Score:1)