Hardware

Fitting A Linux Box On A PCI Card

An Anonymous Coward writes: "Running on Newsforge/Linux.com is a hardware review in which Slashdot's Krow took a couple of OmniCluster's Slotservers and built a cluster configuration inside a single host computer (he even had DB2 running on one of the cards inside the host). Could something like this be the future of computing, where for additional processing power you just keep adding computers inside a host?"
  • Re:Impractical (Score:1, Informative)

    by Anonymous Coward on Saturday November 03, 2001 @11:21AM (#2516221)
    That's why they use ethernet for communications and just use the PCI bus for the power supply.

    The PCI bus is just an outdated fancy parallel port.
  • Re:Impractical (Score:4, Informative)

    by Knobby ( 71829 ) on Saturday November 03, 2001 @01:13PM (#2516428)

    I don't see these things taking off for most uses because the PCI bus is limited to a measly 133 MB/s. Even the newer 64-bit PCI slots found in some servers have insufficient bandwidth to keep the data flowing fast enough to make full use of these things.

    You've heard of Beowulf clusters, right?

    Let's imagine I'm running some large routine to model some physical phenomenon. Depending on the problem, it is often possible to split the computational domain into small chunks and then pass only the elements along the interfaces between nodes. So, how does that impact this discussion? Well, let's assume I can break an NxM grid up into four subdomains. The communication from each node will consist of N+M elements (not NxM). Now, let's look at our options. I can either purchase 4 machines with gigabit (~1000Mb/s) ethernet or Myranet (~200Mb/s) cards, or maybe I can use ip-over-firewire (~400Mb/s) to communicate between machines. Gigabit ethernet has some latency problems that Myranet answers, but if we just look at the bandwidth issue, ~1000Mb/s is roughly 125MB/s. That's slower than the 133MB/s you quoted above for a 32-bit, 33MHz PCI bus. Of course, there are motherboards out there that support 64-bit, 66MHz PCI cards (such as these from TotalImpact [totalimpact.com]).
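
    To make that arithmetic concrete, here is a rough back-of-the-envelope sketch (not from the original post; the grid size, element size, and link speeds are illustrative assumptions) of how long one halo exchange would take on each link:

        # Rough halo-exchange estimate for an NxM grid split into 4 subdomains.
        # Illustrative assumptions: 8-byte doubles, link speeds as quoted above.
        N, M = 4096, 4096           # grid dimensions (assumed for illustration)
        BYTES_PER_ELEMENT = 8       # one double-precision value per grid point

        halo_bytes = (N + M) * BYTES_PER_ELEMENT   # interface data, not N*M

        links_mb_per_s = {
            "gigabit ethernet (~1000Mb/s)": 125.0,
            "ip-over-firewire (~400Mb/s)": 50.0,
            "myrinet as quoted (~200Mb/s)": 25.0,
            "32-bit/33MHz PCI": 133.0,
        }
        for name, mb_s in links_mb_per_s.items():
            ms = halo_bytes / (mb_s * 1e6) * 1e3
            print(f"{name:30s} {ms:6.3f} ms per exchange")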

    You're right that the PCI bus is not as fast as the data I/O approaches used by IBM, Sun, SGI, etc. to feed their processors. BUT, if I'm deciding between one machine sitting in the corner crunching numbers and 4 machines sitting in the corner talking slowly to each other through an expensive gigabit ethernet switch, guess which system I'm going to look at?

  • Sunpci card (Score:2, Informative)

    by johnnyp ( 17487 ) on Saturday November 03, 2001 @07:52PM (#2517361) Homepage
    I've got an Ultra 5 with a PCI card that has an AMD K6/400 on it, so the Ultra can run a Windoze machine. The K6 shares the hard disk and can either use the Sun display or you can plug a separate monitor into its on-board video. It also shares the ethernet card. It works OK, runs Win 98 fine (95 is supposed to work, though I can't get it to, and I have seen one running NT 4.0), and you can cut and paste between CDE and Win 98. The only real use I find for it is firing up IE to see if web pages I've made look OK.

    I think you can pick them up pretty cheap nowadays if you like that sort of thing. I don't imagine you'd get much mileage from trying to install any other OS unless you feel like writing the relevant drivers to get everything to talk to each other.

  • Re:Sunpci card (Score:2, Informative)

    by PerfectWorld ( 301445 ) <samuraimark@gangwarily.ca> on Saturday November 03, 2001 @08:33PM (#2517461) Homepage
    Sun limits the number of these cards supported in each system type due to bandwidth limitations, but obeying the supported limit is up to you. I wouldn't put more than 2 in a desktop box (U5/10, SB100), and I think only 1 is supported. IIRC, Sun supports up to 6 in an E450.

    The current model is a 733MHz Celeron with 128MB RAM base, expandable to 1GB, with onboard Rage something-or-other graphics. It supports all versions of Windows from 95 to 2000 Advanced Server.

    You can do some interesting things with these. Since Windows is 'installed' in an image file on the UFS filesystem, you can copy images, back them up easily, and bring different images up on different cards at different times. You could have the office-apps images running during the day and the Quake servers running at night ... ;-) They are also cheap enough to keep a spare in the server should one go tits up.
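
    As a purely hypothetical illustration of that day/night image rotation (the paths, the symlink scheme, and the schedule are assumptions, not anything the SunPCi software documents), a cron-driven script might just repoint an "active image" symlink:

        # Hypothetical: rotate which disk image a card boots by time of day.
        import os, time

        IMAGES = {
            "day": "/export/sunpci/office_apps.img",     # assumed path
            "night": "/export/sunpci/quake_server.img",  # assumed path
        }
        ACTIVE = "/export/sunpci/active.img"  # card assumed to boot this image

        def select_image():
            hour = time.localtime().tm_hour
            want = IMAGES["day"] if 8 <= hour < 20 else IMAGES["night"]
            tmp = ACTIVE + ".tmp"
            if os.path.lexists(tmp):
                os.remove(tmp)
            os.symlink(want, tmp)
            os.replace(tmp, ACTIVE)  # atomic swap; picked up on next boot
            return want

        if __name__ == "__main__":
            print("active image ->", select_image())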

    They won't run Linux unfortunately. They would have to add support for that to the SunPCi software.

    - Mark
  • by hatless ( 8275 ) on Saturday November 03, 2001 @09:57PM (#2517651)
    Can this be the, uh, future?

    No, not if it's existed for decades. It's what's referred to as a "mainframe". You know. An expandable number of processor boards running under an operating system that can treat them as any number of single-processor or multiprocessor machines, with the ability to reassign processes between CPUs.

    The Unix world has had them for a long time, too. Modern examples include Sun's higher-end servers, which support hot-swappable and hot-pluggable processors and memory.

    Doing it with x86 processors and standard x86 OSes like x86 Unixes and Windows is less common, but I believe Compaq and maybe Unisys can sell you machines that do it too, with one or several instances of the OS running at once.

    This hardware approach is not quite the same as VMware's server products, which do it in software and don't limit you to one OS per processor or block of processors. VMware in turn mimics other decades-old mainframe operating environments in approach.
  • Re:Impractical (Score:2, Informative)

    by Shanep ( 68243 ) on Sunday November 04, 2001 @12:18AM (#2517879) Homepage
    Moderators need to mod Charles UP and the AC down.

    That's why they use ethernet for communications and just use the PCI bus for the power supply.

    There is nothing informative about this! I was supporting products like these from Cubix back in '94. Back then, those products were even using the ISA and EISA buses to carry ethernet between Cubix cards on those buses.

    The PCI bus is just an outdated fancy parallel port.

    ROFL. ISA can carry data at 8MByte/s (8-bit x 8MHz = 64Mbit/s) to 32MByte/s (16-bit x 16MHz = 256Mbit/s), which made it a far better solution in these setups than a 10Base-* NIC that was going to be plugged into ISA or EISA *anyway*!

    And PCI can carry data at 133MByte/s (32-bit x 33.333MHz ≈ 1Gbit/s).
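
    The arithmetic behind those figures is just bus width times clock rate; a quick sketch (bus parameters as quoted above, peak theoretical rates only):

        # Peak bus bandwidth = width (bits) x clock (MHz); /8 converts to bytes.
        def bus_bandwidth(width_bits, clock_mhz):
            mbit_s = width_bits * clock_mhz
            return mbit_s, mbit_s / 8.0

        for name, bits, mhz in [
            ("ISA, 8-bit @ 8MHz", 8, 8.0),
            ("ISA, 16-bit @ 16MHz", 16, 16.0),
            ("PCI, 32-bit @ 33MHz", 32, 33.333),
            ("PCI, 64-bit @ 66MHz", 64, 66.0),
        ]:
            mbit, mbyte = bus_bandwidth(bits, mhz)
            print(f"{name:22s} {mbit:7.0f} Mbit/s = {mbyte:6.1f} MByte/s")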

    These cards are usually slaves to the host motherboard, not the other way around. This makes them easier to build, since the assumption can be that whatever they're plugged into will be the master of the PCI bus, so there's no need to fiddle with master/slave configuration. For use with a dumb PCI backplane, PCI master cards (one per backplane, please) can also be purchased, though I haven't looked at this company's offerings.

    A Linux machine set up as a web server, accelerated with khttpd, with one of these cards running FreeBSD serving the db would be an awesome setup. Nice and really fast, especially with a server motherboard with multiple PCI buses (buses != slots) to separate the 100/1000Mb NIC interfaces from the db card.
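
    As a hedged sketch of that split (the addresses and port are assumptions; the card is simply assumed to show up as an ethernet peer over the PCI-borne virtual link), the web host could health-check the db card like any other networked box:

        # Hypothetical: the FreeBSD db card appears at an internal IP over the
        # virtual ethernet link; check it is reachable from the web host.
        import socket

        DB_CARD = ("10.0.0.2", 5432)   # assumed internal address and db port

        def db_card_reachable(timeout=2.0):
            try:
                with socket.create_connection(DB_CARD, timeout=timeout):
                    return True
            except OSError:
                return False

        if __name__ == "__main__":
            print("db card reachable:", db_card_reachable())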
  • Re:Impractical (Score:2, Informative)

    by Luminous Coward ( 445673 ) on Sunday November 04, 2001 @03:36PM (#2519464)

    I can either purchase 4 machines with gigabit (~1000Mb/s) ethernet, Myranet (~200Mb/s) cards [...]

    You meant Myrinet [myri.com]. And you meant 200MB/s [myri.com], not 200Mb/s. Actually, it's almost 2 Gbps (2000 Mbps).
