Hardware

Fitting A Linux Box On A PCI Card

An Anonymous Coward writes: "Running on Newsforge/Linux.com is a hardware review where Slashdot's Krow took a couple of OmniCluster's Slotservers and built a cluster configuration inside of a single host computer (and even had DB2 running on one of the cards inside of the host). Could something like this be the future of computing, where for additional processing power you just keep adding additional computers inside of a host?"
  • Imagine...

    It would be cool to have completely separate processors in a box, so that as long as there is power, each card can run on its own. Then you could network them together into a Beowulf cluster, and then make clusters of clusters.

    the AC
    • G4 processor cards (Score:3, Interesting)

      by Peter Dyck ( 201979 )
      I've been wondering how expensive/difficult it would be to build a multiprocessor computer for computational physics applications based on G4 PowerPC cards [transtech-dsp.com].

      I'd just love the idea of having a host PC (or a Beowulf cluster of them ;-) with all the PCI slots filled with G4 7400 boards crunching numbers...

    • by Khalid ( 31037 )
      There was a project a while ago whose aim was to implement a Beowulf cluster of separate StrongARM cards plugged into a box; they even managed to build some prototypes.

      http://www.dnaco.net/~kragen/sa-beowulf/

      Alas, the project seems to have been dead for some time now.
    • Can this be extrapolated to a point where computers start to look like they were modeled off of brains? Very simple processors combining their efforts into a larger group effort, but retaining individuality from other similar groups. These groups are further combined (based on task/area of functionality) into larger groups which merge again into a simplified single I/O pipe for the whole super-group. Groups could be arbitrarily nested to any number of levels. Each level would be responsible for multiplexing I/O between the single I/O pipe to the higher level and the multiple sub-groups at the current level. At the top you end up with huge groups operating somewhat similar to a left brain and a right brain with a fat I/O pipe between them acting like a corpus callosum. Talk about a huge bus width. What would that be -- a few hundred gigabit bus :)

      Just some pie in the sky amusement and speculation.
  • Slots are considered a bad thing nowadays. The trend is to manufacture boards with less expandability, not more. So let's see... Soundblaster 1024 Ultra, or another CPU board... but not both. Then again, I've never been accused of buying crappy consumer motherboards...
  • by ackthpt ( 218170 ) on Saturday November 03, 2001 @10:55AM (#2516179) Homepage Journal
    I've seen these around for ages from a variety of manufacturers, but usually they're priced significantly higher than just buying several cheap PCs. Granted, you have a fast bus between cards, but unless you have a redundant power supply, one failure brings your whole cluster down, whereas networked motherboards should be tolerant of one system failing. As for the future, eh, they've been around long enough, but I expect the use has been rather specialized.
    • Well, as for the single point of failure in the host computer, you _are_ right.

      Another problem of course is the PCI bus speed, as someone already mentioned: if you were using a 1 Gb/s link between the machines, that would let you deliver data much faster.

      But... wait! If that's going through a PCI bus anyway...
      Hey, can some hardware people invent a _true_ bus? Because we _are_ lacking something there.

      But that kind of solution might interest people who want to do more with less space... if they are ready to pay the price.

      All in all I'm not sure it's that interesting. Does anyone have some benchmarks on that?

    • Transputer advertisements were common in the back of the old Byte magazine. They were more popular in the UK than the US. With the newer low power consumption Transmeta & PowerPC CPUs + low RAM prices, this is more viable from a cost/power ratio now than then.

      It is not like Transmeta has a shortage of Linux talent to help bring this off. If Transmeta makes such a product and puts an advertisement in something like Linux Journal with Linus's smiling face beside it, it will sell like the proverbial hot cakes. I would buy one with or without his picture.

      Just a thought

    • I've seen cards that do this for sale at a local computer shop these old people run.
  • The SETI version (Score:3, Insightful)

    by Wire Tap ( 61370 ) <frisina AT atlanticbb DOT net> on Saturday November 03, 2001 @11:00AM (#2516189)
    Does anyone here remember a while back when that "fake" company tried to sell us SETI @ Home PCI cards? I was about to place my order, until the word came to me that they were a fraud. Kind of a funny joke at the time, though. At any rate, here is the old /. story on it:

    http://slashdot.org/article.pl?sid=00/07/23/2158226&mode=thread

    It would have been GREAT to have an improvement in CPU speed on a PCI card, as I always have at least two free in every system I own. What I wonder, though, is what sort of processing speed the PCI card "CPUs" would give us?
    • PCI card computers (Score:3, Interesting)

      by hattig ( 47930 )
      You have to remember that, looked at from a certain angle, you can add a PCI card to a motherboard that makes the motherboard the PCI slave.

      PCI === PCI === PCI === CPU === PCI === PCI
       |       |       |               |
      IDE     CPU     CPU             CPU
       |       |       |
      USB     PCIs    PCIs
               |       |
              IDE      ..
               |
              USB

      I have left out memory controllers, northbridge, etc, and modern fancy chip interconnects because they are just fluff (no, not fluffers, that is another industry). In the above diagram, what is the host CPU? Is there actually such a thing as a host? The PCI bus is arguably the center of a modern PC, with CPUs and controllers hanging off of it.

      Modern motherboards are just a restriction on what you can do in reality. Reality is a PCI backplane on a case, maybe with a couple of PCI-PCI bridges. You can then add anything into any PCI card that you want - normal PCI cards, or CPUs (NB, Memory, CPU, etc).

      That is why you can configure these cards to use the 'host' IDE drive. It is just a device on every 'computer' within the case...

      I can't post a diagram though, because I must use "fewer junk characters". Bloody lameness filter - affects the real users, the people it is meant to trap just work around it. Would you call this a "lame post"?

      • At least for AMD based multiproc systems, the Northbridge seems to be the hub around which the CPUs are gathered. AFAIK, the RAM doesn't have a direct connection to either CPU; it has a dedicated bus to the Northbridge. Why is the Northbridge "fluff"? Isn't it the closest thing to a host on PC systems? It is what makes the "Motherboard" the mainboard. Where is the BIOS located? If it is part of the Northbridge, then that would close the argument for me. If it is discrete, then it is a good candidate for the center of a modern PC, if you allow that once a system is booted, "central" functions (like "basic Input and Output") can be migrated to other parts of the system.

        For me, anyway, the PCI-PCI bridge seems to be a pretty good negation of the "PCI bus as host" viewpoint. If anything, the PCI bus is just an extension to the PCI controller, which would seem to fall under the "Northbridge chipset as host" perspective.

        As we migrate from a single-CPU paradigm to multiple-CPU architectures, it seems the view of "primary CPU controlling auxiliary CPUs" is vestigial, and we will be moving away from it. This seems apparent if you follow the locking mechanisms used by Linux migrating from large per-CPU locks to finer-grained locks. It is not very useful to have a CPU-centric system when CPUs are commoditized. The chipset seems to be the lowest common denominator for the foreseeable future.
        • It is fluff in the context of the fact that you can connect multiple Northbridges (with CPUs, memory, PCI-PCI bridge possibly) to a PCI bus, and the PCI bus will be fine.

          The BIOS is located off an LPC device connected to the southbridge.

          A modern PC is a subset of what a PC could be. As I said.

          You can view a PC any way you like. But you can connect PPC computers on PCI cards to PCs, and they can access any resource on that PCI bus just like the host can. Because, it is simply another host on the PCI bus.

          Hence, PCI backplanes work. PCI-PCI bridges are there so you can have more than 6 PCI slots!

  • Impractical (Score:2, Troll)

    by atrowe ( 209484 )
    I don't see these things taking off for most uses because the PCI bus is limited to a measly 133 MB/s. Even the newer 64-bit PCI slots found in some servers have insufficient bandwidth to keep the data flowing fast enough to make full use of these things. I can see where they may come in handy for heavy number-crunching applications such as SETI, but for web serving and DB applications, the throughput between the card and the host system is simply unacceptable.

    Also, I would imagine that the RF interference generated by having several of these in one box would be quite significant. PCI slots are only an inch or so apart on most motherboards, and without any sort of RF shielding between multiple cards, I can't imagine they'd function properly. It's a good idea on paper, but in reality, I'd think a few 1U rackmount servers would do the job much better. At $499 apiece, you could get a decent single-processor rackmount server for around the same price.

    • The way I understood it, the host system's motherboard was just a backplane supplying power to the computer, which was contained on the PCI card. IIRC, several years back, when Pentium IIs came out, lots of people wanted a way to upgrade their Pentium-I based systems. The easy answer was to make the motherboard into a holder for a very compact computer. It had, I think, a 333 Celeron, an SODIMM slot for memory, a single IDE channel and floppy controller, and onboard sound and video. Not too impressive, but it put the entire workings of the computer onto a single PCI card.

      Sun or SGI also has something like this, to allow SparcStation users to run Windows applications natively. Basically, a card with a 450MHz Pentium II, some RAM, video (no sound though), and the other necessities of a computer.

      I agree about the RF interference, however. I ran several computers, even in their shielded cases, in my room for a while, and it was a deadzone for our cordless phone. It would be only worse with inches, instead of feet, between the systems. Not all people have room for a rack to mount things on, however.
    • Re:Impractical (Score:1, Informative)

      by Anonymous Coward
      That's why they use ethernet for communications and just use the PCI bus for the power supply.

      The PCI bus is just an outdated fancy parallel port.
      • Please tell me you're not trying to argue that ethernet is faster than PCI. If that's true, your PCI bus would plug into your ethernet card and not the other way about. Do the math. 100/8 is a whole lot smaller than 133.
        • Re:Impractical (Score:2, Informative)

          by Shanep ( 68243 )
          Moderators need to mod Charles UP and the AC down.

          That's why they use ethernet for communications and just use the PCI bus for the power supply.

          There is nothing informative about this! I was supporting products like these from Cubix back in '94. Back then, those products were even using the ISA and EISA buses to carry ethernet between other Cubix cards on those buses.

          The PCI bus is just an outdated fancy parallel port.

          ROFL. ISA can carry data at 8MByte/S (8bit 8MHz = 64Mbit/S) to 32MByte/S (16bit 16MHz = 256Mbit/S) which provided a far better solution in these setups than a 10base-* NIC that was going to be plugged into ISA or EISA *anyway*!

          And PCI can carry data at 133MBytes/S (32bit 33.333MHz = 1Gbit/S)

          These cards are usually slaves to the host motherboard, not the other way around. This way they're easier to make, and the assumption can be that whatever they're plugged into will be the master of the PCI bus, so there is no need to fiddle with master/slave config. For use with a dumb PCI backplane, PCI master cards (one per backplane, please) can also be purchased, though I haven't looked at this company's offerings.

          A Linux machine set up as a web server, accelerated with khttpd, and one of these cards running FreeBSD serving the db would be an awesome setup. Nice and real fast, especially with a server mobo with multiple PCI buses (buses != slots) to separate the 100/1000Mb NIC interfaces and the db card. (A quick sanity check of the bus numbers above is sketched below.)
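          The peak figures quoted above are just bus width times clock; here is a rough Python sketch of that arithmetic, using the nominal numbers thrown around in this thread (sustained throughput in practice is lower once arbitration and protocol overhead bite):

# Rough peak-bandwidth arithmetic for the buses discussed above.
# Figures are the nominal ones quoted in this thread, not measurements.

def peak_mbytes_per_s(width_bits, clock_mhz):
    """Peak throughput of a simple parallel bus in MB/s."""
    return width_bits * clock_mhz / 8.0

links = {
    "ISA, 8-bit @ 8 MHz":    peak_mbytes_per_s(8, 8.0),
    "ISA, 16-bit @ 16 MHz":  peak_mbytes_per_s(16, 16.0),
    "PCI, 32-bit @ 33 MHz":  peak_mbytes_per_s(32, 33.33),
    "PCI, 64-bit @ 66 MHz":  peak_mbytes_per_s(64, 66.66),
    "100 Mb Ethernet":       100.0 / 8.0,
    "Gigabit Ethernet":      1000.0 / 8.0,
}

for name, mb in links.items():
    print(f"{name:22s} ~{mb:6.1f} MB/s peak")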
    • the throughput between the card and the host system is simply unacceptable.

      I'd suspect a radar system requires much higher throughput than web or DB serving. Here's an example [transtech-dsp.com] of such a system. A "160Mb/sec, 32 bit parallel synchronous interface" doesn't sound that high to me.

    • Re:Impractical (Score:3, Interesting)

      by morcheeba ( 260908 )
      RF Interference:
      I don't think there will be a problem with interference. Check out these computers. [skycomputers.com] They use a similar system, but instead of being on a piddly motherboard, they use the ubiquitous VME format. They really pack in the processors -- 4 G4 PPCs per daughter card [skycomputers.com], and 4 daughter cards per single 9U VME card, and then 16 9U cards per chassis, and then three chassis. (4*4*16*3=48 TFLOPS) The pitch spacing on PCI is comparable to that on VME.

      Also, I wondered about the connector on the tops of these boards. It looks like another PCI card edge. I wonder if this is a duplicate of the host PCI interface (for debug purposes), if it's a new "slot" to connect to the server's internal bus, or if it's a way to connect server cards bypassing the main PCI bus (for better performance).
    • Re:Impractical (Score:4, Informative)

      by Knobby ( 71829 ) on Saturday November 03, 2001 @01:13PM (#2516428)

      don't see these things taking off for most uses because the PCI bus is limited to a measly 133 MB/S. Even newer 64 bit PCI slots found in some servers have insufficient bandwidth to keep the data flowing fast enough to make full use of these things.

      You've heard of Beowulf clusters, right?

      Let's imagine I'm running some large routine to model some physical phenomena.. Depending on the problem, it is often possible to split the computational domain into small chunks and then pass only the elements along the interfaces between nodes.. So, how does that impact this discussion? Well, let's assume I can break up an NxM grid onto four subdomains. The communication from each node will consist of N+M elements (not NxM).. Now, let's take a look at our options. I can either purchase 4 machines with gigabit (~1000Mb/s) ethernet, Myrinet (~200Mb/s) cards, or maybe I can use ip-over-firewire (~400Mb/s) to communicate between machines.. Gigabit ethernet has some latency problems that are answered by Myrinet, but if we just look at the bandwidth issue, then ~1000Mb/s is roughly 125MB/s. That's slower than the 133MB/s you quoted above for a 32bit, 33MHz PCI bus.. Of course there are motherboards out there that support 64bit, 66MHz PCI cards (such as these from TotalImpact [totalimpact.com])..

      You're right that the PCI bus is not as fast as the data I/O approaches used by IBM, Sun, SGI, etc. to feed their processors. BUT, if I'm deciding between one machine sitting in the corner crunching numbers, or 4 machines sitting in the corner talking slowly to each other through an expensive gigabit ethernet switch, guess which system I'm going to look at? (Rough numbers for that trade-off are sketched below.)
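      To make the N+M vs. NxM point concrete, here is a small Python sketch of one halo exchange, assuming a made-up 2000x2000 grid split into four subdomains; the link speeds are the peak figures quoted in this thread, not measurements:

# Halo-exchange size vs. link speed for a 2D domain decomposition.
# Grid size and link speeds are illustrative assumptions, not benchmarks.

N, M = 2000, 2000           # global grid
bytes_per_value = 8         # double precision
# With a 2x2 split, each node holds an (N/2) x (M/2) block but only
# exchanges its interface rows/columns: roughly N/2 + M/2 values.
halo_values = N // 2 + M // 2
halo_bytes = halo_values * bytes_per_value

links_mb_per_s = {
    "100 Mb Ethernet":    12.5,
    "Gigabit Ethernet":   125.0,
    "32-bit/33 MHz PCI":  133.0,
}

print(f"halo data per exchange: {halo_bytes / 1e3:.0f} kB")
for name, mb in links_mb_per_s.items():
    t_ms = halo_bytes / (mb * 1e6) * 1e3
    print(f"{name:18s} ~{t_ms:5.2f} ms per exchange (at peak rate)")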

    • You know that that's megaBYTES per second, yes? Or just over a gigabit per second? If that's not fast enough for you, what is? Pretty much any solution to connect something external to the box is going to have to go through the same bottleneck. Really, the only faster buses you have on a PC are the RAM sockets and the AGP socket. I seem to recall a special high-speed networking solution that goes through AGP, but we're talking a little bit different class of hardware.
    • Hmm, methinks this is a very good troll.

      Aside from the nick and the sig, calling 133MB/s 'measly' is absurd. Sure, compared to servers that cost an order of magnitude more than these do, it is a little slow. But compared to 100 Mb Ethernet, it is pretty fast. For specific applications it is definitely useable.

      And RF problems? How about 24 CPUs [rlxtechnologies.com] in a 3U package, using a similar concept?

      But for a troll, it's nicely done. Several detailed replies; even I couldn't resist!

  • Here [national.com] are the Geode specs... "Speeds offered up to 266 MHz"
  • I read through the site and I could not find ANYTHING about relative speed vs. an x86 CPU. Anyone find anything? Sure it's great to have a PC, but at least give us some hint of how it performs compared to an x86 CPU.

  • Perhaps this is the way to get over the anti-Linux brigade when they say 'Linux is difficult to install'.

    Just hand them a PCI card and let them get on with it. I can't help thinking it would be better on a USB device though. Then you wouldn't even need to open the case !

    • Actually, in my LUG we've given the newbies an eval copy of VMware, and a pre-installed Linux image... lets them play for a month before they have to think about installing.
      • VMware is another way, but it's a bit expensive. I would rather spend my $300 and get some hardware to show for it, and effectively 2 PCs, than spend it on VMware, because it will run slower.

        I did have a copy of VMware which I paid for, but I lost interest when they went all 'enterprise' on it and the prices got stupid.

        Still, there's always plex86, but I want to run it under Windows ME :-(

  • by JAZ ( 13084 ) on Saturday November 03, 2001 @11:34AM (#2516247)
    Follow me here:
    A computer used to take up a room.
    Then, computers were large cabinets in a computer room.
    Now, they are boxes in a computer cabinet in a computer room.
    So we can extrapolate that the next step is for computers to be cards in a computer box in a computer cabinet in a computer room.

    It's a natural (obvious) progression really.
  • sgi has been doing this for a long time. their newest systems are almost exactly this, but instead of slow, thin pci, they use large, fast interconnects:

    http://www.sgi.com/origin/300/
  • He didn't even try to do any parallel processing with these things! That was the first thought that came into my head.

    Here we have four or five CPUs all in one machine, talking to each other over a native PCI bus. It seems to me this would be a great way to run a Beowulf cluster in a machine.

    Anyone care to comment on why he might not have done this?
  • Commercial rent is expensive, so the less space you need to dedicate out of your office to store servers, the more cost-effective they are.

    These cards have been around for ages with various degrees of complexity. There used to be (don't know if they are still around) some of these cards that were designed to plug into a Mac so the card would do all the hard work if you wanted to emulate a PC.

    I don't see the value for the home user. I can't see why a true home user (not the very small percentage of hardcore enthusiasts or people that run a business from home) would need so much power that the solution is to get a box, plug in a few of these babies and cluster them.

    Still, its not so hard to come up with a home scenario:

    1. Send your broadband connection to the basement of your house and spread it to all the rooms in the house with a $80 broadband router, cheap switches and hubs.

    2. Put a box in a closet in the basement with different PCI cards, each serving a specific purpose. For my own personal needs (I am a Microsoft dot whore, sorry) I would have an Exchange server, one dedicated as a network file server, a SQL server and an IIS server. A person of the Unix persuasion would have a card with sendmail and some kind of POP server, a file server, MySQL or Postgres, and Apache.

    With just a little bit of money the house now packs as much punch inside that box in a basement closet as my company gets from a row of bulky servers. Add in a blackbox switch and a cheap 14-in monitor, keyboard and mouse and you are set. Of course Unix people would use some kind of secure shell and save themselves the trip to the basement, and us lazy Microsoft whores will just have to rely on Terminal Services or pcAnywhere.

    In a corporate environment the space saving actually pays off (you don't pay your apartment rent or home mortgage by the square foot like most businesses do) as soon as you recover some of the space wasted by the server room. Right now I can see how I could take ours, gut it out, put a couple boxes full of these PCI cards in a good closet with the proper ventilation, and then turn the old equipment room into a telecommuter's lounge.

    The home solution would rock because my wife will not bother me anymore about all those weird boxes sitting under my desk in my home office. All the clutter goes away and I just keep my tower case.
  • by tcc ( 140386 )
    266mhz max. Their target audience is the firewall/network application.

    Too bad a dual Athlon-based solution (on a full-length PCI card) would suck too much juice... at least given the current PCI specs... AMD needs to make a move like Intel did with their low-wattage PIII. I'd love to see a 12-processor (5 PCI slots plus host) renderfarm in a single box for a decent price. Not only would it be space saving, but imagine that in a plexiglass case :) a geek's dream.

  • This better not be another Krasnoconv with that hoax SETI-accelerator card!
    I don't know if I can take another disappointment like that.
  • by Anonymous Coward
    Imagine if all the devices in your computer were attached to each other with 100 GB optical cable.

    Essentially there would be a switch that allowed about 32 devices to be attached.

    The devices could be storage devices, processors, audio/video devices, or communication devices.

    Storage devices would be things like memory, hard drives, cdroms and the like.

    This bus would allow multiple processors to access the same device at the same time and would allow devices to communicate directly to each other, like allowing a program to be loaded directly from a hard drive into memory, or from a video capture device directly onto a hard drive.

    No motherboard, just slots that held different form factor devices with power and optical wires attached.

    A networking device would allow the internal protocol to be wrapped in IP and allow the internal network to be bridged onto ethernet. This would allow the buses on separate computers to work like a single computer. The processors on all the machines could easily network together, memory could be shared seamlessly, hard drive storage would be shared and kept backed up in real time. Any device in any machine could communicate directly with any other device in any other machine. Security allowing.

    Want 20 processors in your machine? Install them.

    Want 6 memory devices with 1GB each? Add them.

    Want 100 desktop devices with only a network device, display device and input/output device that use the processor and storage out of an application server? No problem.

    Want a box that seamlessly runs 20 different OSes, each in a virtual machine, run across 10 boxes in a redundant failover system? No problem, it's all done in hardware.

    Want the hard drives in all the desktop machines to act like one giant RAID 5 to store all the company's data on? No problem. (1000 machines with 10 GB each is 10 TB of storage.)

    This is the future of computing.
    • I think the basic form to use is some simplified base system designed to be upgraded to the extreme. No built-in crap on the motherboard to speak of.. just lots of PCI slots. If they could share harddrive and RAM and provide a keyboard/mouse/monitor switching method similar to KVM switches but all in one box it'd be great. So rather than replacing older computers we could just add to them. Maybe perfect something like MOSIX and drop the whole stupid SMP idea. I've always imagined computers would someday be like legos where you could buy a CPU lego, a RAM lego, a harddrive lego, etc and just plug them together in any order to add to a hot system. No reboot and no case to open. If one burned out just toss it and put a new one in.
    • it exists already, sweetie. check out the infiniband spec somewhere.
  • by Phizzy ( 56929 ) on Saturday November 03, 2001 @12:17PM (#2516331)
    I am actually typing this comment on a Sun Microsystems SunPCI card.. It's a Celeron, I believe a 466MHz or so, w/ 128MB of RAM. It has onboard video if you want to use an external monitor, or it can use the Sun's native card if you want to run it windowed; ditto w/ ethernet. I've been using the card for about 3 months now, and other than some instability w/ SunOS 2.6 (which disappeared in 2.8), I haven't had problems with it.. you can copy/paste between the Sun window and the 'PC' window, which is very helpful.. and though we are running Win2000 on it (ok.. so shoot me) I don't see any reason why you couldn't run Linux on it if you really wanted to.. All in all, the card is pretty badass..

    //Phizzy
    • I agree.

      I'm posting this with Konqueror on a Sun Blade 100. Next to the Konq window I have a SunPCI window with W2K/Office2K. As nice as Sun's StarOffice is, it still doesn't import/export clients' office documents properly.

      • yeah.. that, and StarOffice eats all your ram, not to mention your desktop when you run it. ;)

        The test I've run of SunPCI has convinced our management to do away w/ separate NT/2000 systems when we move to a new building in april, and just outfit everyone w/ Ultra 5s, SunPCI cards and dual-head monitors..

        //Phizzy
    • Actually, Sun is making them now with 733 mhz Celeries in them.

      Definitely an awesome product.

  • Brings back memories of Transputer cards :D

    How does sharing of the disk between each machine on a card affect the performance ?
  • I haven't been paying attention to the market... I guess things like this aren't all that rare. Apparently there's a G4 PPC computer-on-a-card as well.

    But anyway, it reminds me quite a bit of what Avid/Digidesign do for their high-end systems.
    You see people who've got 6-slot PCI systems and 4 of those slots are filled with extra computing cards (sometimes more... some people get expansion chassis). You can rely on your computer's processor if you're not doing too many complex effects on a track of audio, but at some point (not too hard to reach... throw in a tube amp emulator and a reverb) you run out of CPU. So they have PCI cards which have a couple of DSP chips (Motorola 56xxx series, I think) on them, and the more of these you add, the more audio processing you can do simultaneously.

    At some point, perhaps people will think: hey, why add a specialized card? Why not just more general purpose computing power?
  • I'm fairly interested in those devices, but right now the cost for those boards is not cheap enough for me to get one. At ~$500 a pop, I could put together a cheap system with better specs (not on a board, of course). I know it's targeted at server/commercial applications, but if they are willing to lower the price some, I'm sure there'll be a lot of takers.

    My ideal setup would use a CF card with a CF-IDE adapter as the boot drive (which eliminates the dependency on the host OS at powerup, and no actual HD is required).

    • Miniaturization costs money, at least in the macroscopic world. If you have a square foot of motherboard, relatively straightforward designs and big cheap commodity components become feasible. Smaller systems save a little power and make higher clock rates attainable (in a 1GHz clock cycle of 1ns, an electronic signal only travels about a foot), but until the commodity desktop/minitower hits such a performance wall (or simply goes out of mass-market fashion) it'll always be cheaper than embeddable versions or laptops.
  • Here's a G4 card that plugs into a PC or anything with a PCI slot for $400
    http://www.sonnettech.com/product/crescendo_7200.html#pricing

    The Catch: You have to write the device driver for the Motorola MPC107 PCI bridge chip.
  • I'd like to see a bus that was little more than a switch, with a minimum of logic for management.

    For cards, it'd be great if each card had its own CPU and RAM. Ideally the cards would have a few universal connectors, each of which could accommodate an I/O module which would contain just the discrete electronics necessary to drive a specific device or medium (eg, video, audio, disk, network).

    Bus-Switch modules would be interconnectable to accommodate more cards, and would have switch-like management features for segmentation, isolation and failover type features.

    The CPU cards themselves ought to be less complicated than motherboards since there's no bus logic, just interconnect logic to the Switch-Bus and the I/O modules, and RAM.

    Since each board has its own RAM and CPU it ought to improve system performance, because the OS could offload much more processing to CPU boards dedicated to specific tasks. Instead of the kernel bothering with lower-level filesystem tasks and driving the hardware, a "driver" for filesystems and devices could be loaded on a CPU board dedicated to I/O.

    The same could be true of user interfaces -- run the UI on the board dedicated to video, audio and USB. The kernel could run applications or other jobs on the "processing" CPU board(s).

    Networking? Offload the entire IP stack to the networking CPU board.
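    As a purely software analogy for that offload idea (the names and setup below are hypothetical; a real design would do this in hardware below the OS), a dedicated "I/O board" can be modelled as a separate process that owns all file access, with the "main CPU" sending it requests over a queue:

# Toy analogy: a dedicated "I/O board" modelled as a worker process.
# Hypothetical sketch only -- real hardware offload would live below the OS.
import multiprocessing as mp

def io_board(requests, replies):
    """Services read requests so the caller never touches the filesystem."""
    for path in iter(requests.get, None):           # None is the shutdown signal
        try:
            with open(path, "rb") as f:
                replies.put((path, len(f.read())))  # report bytes read
        except OSError as err:
            replies.put((path, str(err)))

if __name__ == "__main__":
    requests, replies = mp.Queue(), mp.Queue()
    worker = mp.Process(target=io_board, args=(requests, replies))
    worker.start()

    requests.put("/etc/hostname")                   # the "main CPU" issues a request
    print("io board answered:", replies.get())

    requests.put(None)                              # tell the worker to exit
    worker.join()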

    • it doesn't seem like this would work so well...the PCI bus would get very hosed with I/O requests from the RAM on each chip. Instead of limiting the hosage of physical memory access to the IDE or SCSI bus, you take the PCI bus down with you too (making the bandwidth of the PCI bus limit scalability of how many procs you can use).

      also, having multiple memories accessing the same data in a distributed program adds plenty of overhead to make sure all the memories maintain validity and access control of the data. thus the chips wouldn't be as simple as CPU, RAM, interface.
  • Seems the PCI bus would be a horrific limiting factor, considering you now have a series of processors accessing resources across the PCI bus as well as on-board. As a firewall, unless the NICs are something decent that can handle hardware trunking to a real switch, 1 interface isn't going to take you far. If they could do dot1q trunking, it would definitely be nice. Beyond that, I can't see something commercial like Checkpoint FW-1 running efficiently on the daughter cards with no more than 266MHz and 256MB of RAM. Might work OK for something such as ipf, pf, or ipchains on a stripped-down Linux kernel, but no freakin' way on Win2K. I say stick to a cluster of 1Us if space is that big of an issue. I doubt it's any less complicated to set up a cluster of 1Us than getting these things to work *correctly*.
    • These cards have TWO NICs, one that talks across the PCI bus and one physical RJ45 10/100...

      I'm only going to say this once, but I could copy/paste the same response to 20 or 30 posts on here...
  • (disclaimer: my bro's wife usta work for this company)

    rackmounted PCs with video, etc. They're intended for offices: you run cables to each person's monitor/keyboard/mouse, manage all the actual hardware in one place ~~ ClearCube [clearcube.com]

  • I looked at this and said... wait a minute, hasn't this already been sorta done [slashdot.org]? Despite not being a full-featured box, Firecard [merilus.com] is a PCI card running Linux... for the purposes of supporting a firewall (as you could have guessed from the name if you'd not read the story -- Nov 14 2001)... but it's cool that they've taken it to the next level.

  • This sounds like the old Radius Rocket for Mac. Each Rocket was essentially a Quadra on a NuBus card.
    http://lowendmac.com/radius/rocket.shtml
  • Those of you wondering about expandable servers should have a look at Transmeta's homepage (http://www.transmeta.com) - among their featured products is a server using something called Serverblade: single-board servers. I think you can fit 24 of them in a 3U rack enclosure.
  • This is really a great system! Of course it's a bit expensive, but still, having two more Linux boxes inside your system to play around with, without the extra cases/cables/noise, sounds awesome. It would also rock for hosting several small servers.. I'll definitely try to get some.
  • In the communications industry, this isn't anything new. The idea of having multiple computers on blades in a chassis is a (the) standard called 'PICMG', based on CompactPCI technology. It's been around a long while. I validate systems like this at work.

    You can get 4, 8, 16, or even 24 SBCs (single-board computers) in a chassis, and link these chassis together via switches. Each chassis has a switch that links all the SBCs in the backplane together and has external ports to hook it up to the outside world.

    Check this out:
    http://www.picmg.org/compactpci.stm
    and this:
    http://www.intel.com/network/csp/products/cpci_index.htm
  • Sun Microsystems have had "PC cards" for a while now. There was a whitepaper they published some time ago on using a small Sun server (say, an E450) and populating it with PC cards.

    They demonstrated how an entire Windows NT cluster could be built using this technology, chucked in some Terminal Services under Windows, ran Exchange, and then did all the important stuff (mail, DNS, whatever) on the Sun box itself.

    Granted, it's not Linux, and granted, the cost of a Sun box is quite high - but the PC cards are significantly cheaper over here for Sun hardware, and Sun architecture seems to be a bit more robust and scalable than PC stuff.
  • Stuff like this started in the PC world, IIRC, with 386s on 16-bit ISA cards.

    Nobody cared then.

    Why would anyone care now?

    Please explain your point using no more than 100 words.

    -
  • Sunpci card (Score:2, Informative)

    by johnnyp ( 17487 )
    I've got an Ultra 5 with a PCI card which has an AMD K6/400 on it so's the Ultra can run a Windoze machine. The K6 shares the hard disk and can either use the Sun display or you can plug a separate monitor into its on-board vid. It also shares the ethernet card. It works OK, runs Win 98 fine (95 is supposed to work though I can't get it to, but I have seen one running NT 4.0) and you can cut and paste between CDE and Win 98. The only real use I find for it is firing up IE to see if webpages I've made look OK.

    I think you can pick them up pretty cheap nowadays if you like that sort of thing. I don't imagine much mileage from trying to install a.n.other unless you feel like writing the relevant drivers to get everything to talk to each other.

    • By which I meant a.n.other OS than Win95/98 or NT 4.0. As far as I know you can put as many of these PCI cards in your Sun box as you can fit in should it take your fancy.
      • Re:Sunpci card (Score:2, Informative)

        Sun limits the number of these cards which are supported in each system type, due to bandwidth limitations, but obeying the supported rule is up to you. I wouldn't put more than 2 in a desktop box (U5/10,SB100) and I think only 1 is supported. IIRC, Sun supports up to 6 in an E450.

        The current model is a 733MHz Celeron with 128MB RAM base, going up to 1GB RAM, and onboard Rage something-or-other graphics. It supports all versions of Windows from 95 to 2K Adv Server.

        You can do some interesting things with these. Since Windows is 'installed' in an image file on the UFS filesystem, you can copy the images, easily back them up, and bring different images up on different cards at different times. You could have the office apps images running during the day and the Quake servers running at night ... ;-) They are also cheap enough to have a spare in the server should one go tits up.

        They won't run Linux unfortunately. They would have to add support for that to the SunPCi software.

        - Mark
  • wait a minute (Score:2, Insightful)

    by dakoda ( 531822 )
    this is exactly what many good video cards do, but in a specialized manner. same with high end sound etc. the idea of putting powerful cpu's on cards is probably as ancient as cards themselves.

    as has been noted before, this would really be useful if the pci bus was extended (faster/wider). of course, making it faster/wider gives you what sgi has been doing for a while too (also mentioned above).

    perhaps the most disappointing thing is that all that power goes to waste on users playing solitaire, running windows, aol, and quake, not on something that will actually need the power to perform the tasks. well, maybe quake isn't so bad...
  • by hatless ( 8275 ) on Saturday November 03, 2001 @09:57PM (#2517651)
    Can this be the, uh, future?

    No, not if it's existed for decades. It's what's referred to as a "mainframe". You know. An expandable number of processor boards running under an operating system that can treat them as any number of single-processor or multiprocessor machines, with the ability to reassign processes between CPUs.

    The Unix world has had them for a long time, too. Modern examples include Sun's higher-end servers, which support hot-swappable and hot-pluggable processors and memory.

    Doing it with x86 processors and standard x86 OSes like x86 Unixes and Windows is less common but I believe Compaq and maybe Unisys can sell you machines that can do it, too, with one or several instances of the OS running at once.

    This hardware approach is not quite the same as VMware's server products, which do it via software and don't limit you to one OS per processor or block of processors. It in turn mimics other decades-old mainframe operating environments in approach.
  • The only two cool things about the PCI card in the article are host-IP connectivity (which essentially makes it a dual ethernet card, perfect for firewalls and such), and the 10W power draw straight from the host's power supply.

    However, Powerleap [powerleap.com], ubiquitous for upgrades and socket adapters, also has a card which touts some similar attributes called the Renaissance/370S [powerleap.com] based on a Socket 370 or FC-PGA chip. It cranks with Celeron, Celeron II, and P3 chips. Quite rockin'.

    The main cool thing about this device is it does NOT use the motherboard slot it sits in. It just uses it as a place to mount. That's right, you can put one in an ISA slot and still run the motherboard it sits in and they won't know a thing about each other, because no pins are connected between them. The price is also a lot better (~$250 for a low end model), you can swap out the CPU and it has two DIMM slots, with a max ram per slot of 512mb (1GB combined). The specs are much better and the price is much lower. It's just marketed as an upgrade option rather than a performance enhancement to an existing machine.

    I've been looking into this as a solution for my cluster, but haven't gotten up the nerve to buy them yet. From what I can find on the web, they're the best cluster card option, especially if you are handy with soldering. To really maximize the power per box, I'd probably buy a dead 486 motherboard (ISA slots all the way across the board, which this card requires), slam four Renaissance cards in it, link two power supplies in parallel, rig extra power and reset switches for each card separate from the power supply, and there's your mini-cluster. Probably 4 machines per 4U case, which notably isn't a huge space savings over 4 1U pizza boxes, but it costs less than a single 1U server would.
  • IBM AS/400's have offered integrated Netfinity adapters for years. These are PCI cards with processors, memory, and console/network connections which share power and storage on the AS/400. You can fit up to 16 of these in a single machine. Check it out [ibm.com]
  • I bought one of these cards back at LinuxWorld. The hardware's nice; they don't work exactly as advertised, though.

    The bundled kernel module only works with the stock kernel distribution in RedHat 6.2-7.1 (kernel 2.4.2 max). The kernel module sets up a virtual network device that allows the host PC to talk to the SlotServer. The kernel module needs to run on both the host and the card for the two to be able to communicate with each other locally. (You can still communicate via the 10/100 interface over the network.)

    Another thing they advertise is the ability to have the card boot off of the host computer via a "virtual disk" (rather than having IDE drives hanging off of the card). I haven't been able to get this working at all - and the only documentation available tells you that the feature exists.

    It would kind of suck to have a PC loaded with 4 cards and 4 additional disks. I could sell a kidney and purchase some disk-on-chips I guess.

    -Andy
  • I believe Amiga made something similar about 10 years ago. It was called a bridgeboard [amiga-society.de]. It even had onboard CGA graphics..! I didn't have the slightest clue of the existence of Linux at the time, so I can't say whether it would have worked or not. Maybe someone else out there has tried it.
