Fitting A Linux Box On A PCI Card
An Anonymous Coward writes: "Running on Newsforge/Linux.com is a hardware review where Slashdot's Krow took a couple of OmniCluster's SlotServers and built a cluster configuration inside of a single host computer (and even had DB2 running on one of the cards inside the host). Could something like this be the future of computing, where for additional processing power you just keep adding computers inside of a host?"
Ob Beowulf comment (Score:2)
It would be cool to have completely separate processors in a box, so that as long as there is power, each card can run on its own. Then you could network them together into a Beowulf cluster, and then make clusters of clusters.
the AC
G4 processor cards (Score:3, Interesting)
I'd just love the idea of having a host PC (or a Beowulf cluster of them ;-) with all the PCI slots filled with G4 7400 boards crunching numbers...
Re:G4 processor cards (Score:3, Insightful)
But it would be cool.
--jeff
Re:Ob Beowulf comment (Score:3, Interesting)
http://www.dnaco.net/~kragen/sa-beowulf/
Alas, the project seems to have been dead for some time now.
Re:Ob Beowulf comment (Score:1)
Just some pie in the sky amusement and speculation.
Too bad... (Score:1)
Re:Too bad... (Score:1)
I've learned how to set up bind. Maybe you didn't bother to do a proper host command? This isn't an ICANN sanctioned TLD, you know. So until you're willing to cough up the $50,000 application fee, that's not a realistic possibility... why don't you try my homepage instead? http://24.30.242.100:8000 Until you set up your _OWN_ bind properly, it isn't going to work for you.
Seen these for a long time (Score:3, Insightful)
Re:Seen these for a long time (Score:1)
Another problem, of course, is the PCI bus speed, as someone already mentioned: if you are using a multi-Gb/s link between the machines, that will let you move data much faster.
But that kind of solution might interest people who want to do more with less space... if they are ready to pay the price.
All in all I'm not sure it's that interesting. Does anyone have some benchmarks on that?
aka transputers (Score:1)
Transputer advertisements were common in the back of the old Byte magazine. They were more popular in the UK than the US. With the newer low-power-consumption Transmeta & PowerPC CPUs plus low RAM prices, this is more viable on a cost/power basis now than it was then.
It is not like Transmeta has a shortage of Linux talent to help bring this off. If Transmeta makes such a product and puts an advertisement in something like Linux Journal with Linus's smiling face beside it, it will sell like the proverbial hot cakes. I would buy one with or without his picture.
Just a thought
Re:Seen these for a long time (Score:1)
Re:Performance (Score:1)
Since I've got a router/firewall box using the same CPU, I should be able to answer that one (300MHz):
RC5: Summary: 4 packets (25.00 stats units)
1.14:10:07.62 - [48,778 keys/s]
As you can see, number crunching is right out of the question. Still, it's easily fast enough to push packets around.
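For anyone who wants to sanity-check that keyrate against the summary line above, here's a quick back-of-the-envelope in Python. It assumes the usual 2^28 keys per distributed.net stats unit and reads the duration as days.hh:mm:ss; treat it as a rough cross-check, not gospel.

# Rough cross-check of the RC5 keyrate reported above.
# Assumes 2^28 keys per stats unit (distributed.net RC5-64 convention).
stats_units = 25.00
keys = stats_units * 2**28

# "1.14:10:07.62" read as 1 day, 14:10:07.62
days, hh, mm, ss = 1, 14, 10, 7.62
seconds = days * 86400 + hh * 3600 + mm * 60 + ss

print(f"{keys / seconds:,.0f} keys/s")  # ~48,800 keys/s, close to the 48,778 reported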
The SETI version (Score:3, Insightful)
http://slashdot.org/article.pl?sid=00/07/23/215
It would have been GREAT to have an improvement in CPU speed on a PCI card, as I always have at least two slots free in every system I own. What I wonder, though, is what instruction speed the PCI-card "CPUs" would give us?
PCI card computers (Score:3, Interesting)
PCI == PCI == PCI == CPU == PCI == PCI ..
 |      |      |             |
IDE    CPU    CPU           CPU
 |      |      |
USB    PCIs   PCIs
        |
       IDE
        |
       USB
I have left out memory controllers, northbridge, etc, and modern fancy chip interconnects because they are just fluff (no, not fluffers, that is another industry). In the above diagram, what is the host CPU? Is there actually such a thing as a host? The PCI bus is arguably the center of a modern PC, with CPUs and controllers hanging off of it.
Modern motherboards are just a restriction on what you can do in reality. Reality is a PCI backplane in a case, maybe with a couple of PCI-PCI bridges. You can then add anything into any PCI slot that you want - normal PCI cards, or CPU cards (northbridge, memory, CPU, etc).
That is why you can configure these cards to use the 'host' IDE drive. It is just a device on every 'computer' within the case...
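If you want to see just how much of a stock Linux box really does hang off the PCI bus, here's a trivial sketch (it just shells out to lspci, so it assumes pciutils is installed; output will obviously differ per machine):

# List everything sitting on the PCI bus - host bridges, IDE, USB and
# network controllers all show up as peers on the same bus.
import subprocess

out = subprocess.run(["lspci"], capture_output=True, text=True, check=True).stdout
for line in out.splitlines():
    slot, rest = line.split(" ", 1)   # e.g. "00:1f.1", "IDE interface: Intel Corp. ..."
    print(f"{slot:10s} {rest}")
print(f"\n{len(out.splitlines())} devices on the bus")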
I can't post a diagram though, because I must use "fewer junk characters". Bloody lameness filter - affects the real users, the people it is meant to trap just work around it. Would you call this a "lame post"?
Re:PCI card computers (Score:1)
For me, anyway, the PCI-PCI bridge seems to be a pretty good negation of the "PCI bus as host" viewpoint. If anything, the PCI bus is just an extension to the PCI controller, which would seem to fall under the "Northbridge chipset as host" perspective.
As we migrate from a single-CPU paradigm to multiple-CPU architectures, it seems the view of a "primary CPU controlling auxiliary CPUs" is vestigial, and we will be moving away from it. This seems apparent if you follow the locking mechanisms used by Linux, migrating from large, coarse-grained locks to finer-grained locks. It is not very useful to have a CPU-centric system when CPUs are commoditized. The chipset seems to be the lowest common denominator for the foreseeable future.
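To illustrate the granularity point in userland terms - a toy sketch only, nothing to do with the actual kernel code, and CPython's GIL hides the real speedup anyway - compare one big lock against per-bucket locks:

# Toy illustration of coarse vs. fine-grained locking.
# One big lock serialises every update; per-bucket locks only make
# threads wait when they touch the same bucket.
import threading

NBUCKETS = 16
counts = [0] * NBUCKETS
big_lock = threading.Lock()                                   # coarse: everyone contends here
bucket_locks = [threading.Lock() for _ in range(NBUCKETS)]    # fine: contention per bucket only

def bump_coarse(key):
    with big_lock:
        counts[key % NBUCKETS] += 1

def bump_fine(key):
    with bucket_locks[key % NBUCKETS]:
        counts[key % NBUCKETS] += 1

threads = [threading.Thread(target=lambda: [bump_fine(k) for k in range(10000)])
           for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(sum(counts))  # 40000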
Re:PCI card computers (Score:1)
The BIOS is located off an LPC device connected to the southbridge.
A modern PC is a subset of what a PC could be. As I said.
You can view a PC any way you like. But you can connect PPC computers on PCI cards to PCs, and they can access any resource on that PCI bus just like the host can. Because, it is simply another host on the PCI bus.
Hence, PCI backplanes work. PCI-PCI bridges are there so you can have more than 6 PCI slots!
Impractical (Score:2, Troll)
Also, I would imagine that the RF interference generated by having several of these in one box would be quite significant. PCI slots are only an inch or so apart on most motherboards, and without any sort of RF shielding between multiple cards, I can't imagine they'd function properly. It's a good idea on paper, but in reality, I'd think a few 1U rackmount servers would do the job much better. At $499 apiece, you could get a decent single-processor rackmount server for around the same price.
Re:Impractical (Score:2)
Sun or SGI also has something like this, to allow SparcStation users to run Windows applications natively. Basically, a card with a 450MHz Pentium II, some RAM, video (no sound though), and the other necessities of a computer.
I agree about the RF interference, however. I ran several computers, even in their shielded cases, in my room for a while, and it was a deadzone for our cordless phone. It would be only worse with inches, instead of feet, between the systems. Not all people have room for a rack to mount things on, however.
Re:Impractical (Score:1, Informative)
The PCI bus is just an outdated fancy parallel port.
Re:Impractical (Score:2)
Re:Impractical (Score:2, Informative)
That's why they use ethernet for communications and just use the PCI bus for the power supply.
There is nothing informative about this! I was supporting products like these from Cubix back in '94. Back then, those products were even using the ISA and EISA buses to carry ethernet between other Cubix cards on those buses.
The PCI bus is just an outdated fancy parallel port.
ROFL. ISA can carry data at 8 MByte/s (8 bit x 8 MHz = 64 Mbit/s) to 32 MByte/s (16 bit x 16 MHz = 256 Mbit/s), which provided a far better solution in these setups than a 10base-* NIC that was going to be plugged into ISA or EISA *anyway*!
And PCI can carry data at 133 MByte/s (32 bit x 33.333 MHz = 1 Gbit/s).
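For what it's worth, those figures are just width times clock; here's the arithmetic as a quick sketch (theoretical peak only - real buses lose a chunk of it to arbitration and wait states):

# Peak parallel-bus bandwidth = width (bits) x clock (MHz), ignoring overhead.
def bus_bw(width_bits, clock_mhz):
    mbit_s = width_bits * clock_mhz
    return mbit_s, mbit_s / 8  # (Mbit/s, MByte/s)

for name, width, clock in [("ISA (8-bit)", 8, 8),
                           ("ISA (16-bit)", 16, 16),
                           ("PCI 32/33", 32, 33.333),
                           ("PCI 64/66", 64, 66.666)]:
    mbit, mbyte = bus_bw(width, clock)
    print(f"{name:12s} {mbit:7.0f} Mbit/s  ~{mbyte:4.0f} MByte/s")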
These cards are usually slaves to the host motherboard, not the other way around. This way they're easier to make, and the assumption can be that whatever they're plugged into will be the master of the PCI bus, so there's no need to fiddle with master/slave config. For use with a dumb PCI backplane, PCI master cards (one per backplane, please) can also be purchased. Though I haven't looked at this company's offerings.
A Linux machine set up as a web server, accelerated with khttpd, and one of these cards running FreeBSD serving the db would be an awesome setup. Nice and real fast. Especially with a server mobo with multiple PCI buses (buses != slots) to separate the 100/1000Mb NIC interfaces and the db card.
Re:Impractical (Score:1)
I would suspect that the sig is a joke and the misspelling is the "punch line"...
maru
www.mp3.com/pixal
Re:Impractical (Score:1)
Re:Impractical (Score:1)
Re:Impractical (Score:2)
I'd suspect a radar system requires much higher throughput than web or DB serving. Here's an example [transtech-dsp.com] of such a system. "160Mb/sec, 32 bit parallel synchronous interface" doesn't sound that high to me.
Re:Impractical (Score:3, Interesting)
I don't think there will be a problem with interference. Check out these computers. [skycomputers.com] They use a similar system, but instead of being on a piddly motherboard, they use the ubiquitous VME format. They really pack in the processors -- 4 G4 PPCs per daughter card [skycomputers.com], 4 daughter cards per single 9U VME card, 16 9U cards per chassis, and then three chassis (4 x 4 x 16 x 3 = 768 G4s). The pitch spacing on PCI is comparable to that on VME.
Also, I wondered about the connector on the tops of these boards. It looks like another PCI card edge. I wonder if this is a duplicate of the host PCI interface (for debug purposes), if it's a new "slot" to connect to the server's internal bus, or if it's a way to connect server cards bypassing the main PCI bus (for better performance).
Re:Impractical (Score:4, Informative)
I don't see these things taking off for most uses because the PCI bus is limited to a measly 133 MB/s. Even the newer 64-bit PCI slots found in some servers don't have enough bandwidth to keep the data flowing fast enough to make full use of these things.
You've heard of Beowulf clusters, right?
Let's imagine I'm running some large routine to model some physical phenomena. Depending on the problem, it is often possible to split the computational domain into small chunks and then pass only the elements along the interfaces between nodes. So, how does that impact this discussion? Well, let's assume I can break up an NxM grid into four subdomains. The communication from each node will consist of N+M elements (not NxM). Now, let's take a look at our options. I can either purchase 4 machines with gigabit (~1000Mb/s) ethernet, Myranet (~200Mb/s) cards, or maybe I can use ip-over-firewire (~400Mb/s) to communicate between machines. Gigabit ethernet has some latency problems that are answered by Myranet, but if we just look at the bandwidth issue, then ~1000Mb/s is roughly 125MB/s. That's slower than the 133MB/s you quoted above for a 32bit, 33MHz PCI bus. Of course there are motherboards out there that support 64bit, 66MHz PCI cards (such as these from TotalImpact [totalimpact.com]).
You're right that the PCI bus is not as fast as the data I/O approaches used by IBM, Sun, SGI, etc. to feed their processors. BUT, if I'm deciding between one machine sitting in the corner crunching numbers, or 4 machines sitting in the corner talking slowly to each other through an expensive gigabit ethernet switch, guess which system I'm going to look at?
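To put some rough numbers on that surface-vs-volume argument, here's a sketch. It assumes a 2-D N x M grid split into a 2 x 2 block decomposition with 8-byte values, and it ignores latency completely - the point is only that the halo exchange is tiny next to the data each node computes over.

# Rough surface-to-volume estimate for a 2-D domain decomposition.
# Each node computes over its whole subdomain but only exchanges its
# halo (boundary rows/columns) with neighbours each step.
def halo_traffic(N, M, p, bytes_per_val=8):
    sub_n, sub_m = N // p, M // p
    interior_bytes = sub_n * sub_m * bytes_per_val    # data each node works on
    halo_bytes = 2 * (sub_n + sub_m) * bytes_per_val  # data exchanged per step
    return interior_bytes, halo_bytes

compute_b, comm_b = halo_traffic(N=4096, M=4096, p=2)
print(f"per-node data:     {compute_b / 1e6:8.1f} MB")
print(f"per-step exchange: {comm_b / 1e3:8.1f} KB")
for name, mbyte_s in [("100Mb ethernet", 12.5), ("gigabit ethernet", 125), ("PCI 32/33", 133)]:
    print(f"{name:16s} -> {comm_b / (mbyte_s * 1e6) * 1e3:5.2f} ms per exchange")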
Re:Impractical (Score:2, Informative)
You meant Myrinet [myri.com]. And you meant 200MB/s [myri.com] not 200Mb/s. Actually it's almost 2 Gbps (2000 Mbps).
Re:Impractical (Score:2)
Re:Impractical (Score:3)
Aside from the nick and the sig, calling 133MB/s 'measly' is absurd. Sure, compared to servers that cost an order of magnitude more than these do, it is a little slow. But compared to 100 Mb Ethernet, it is pretty fast. For specific applications it is definitely usable.
And RF problems? How about 24 CPUs [rlxtechnologies.com] in a 3U package, using a similar concept?
But for a troll, it's nicely done. Several detailed replies; even I couldn't resist!
Geode Specs (Score:2)
You don't understand... (Score:1)
Re:You don't understand... (Score:2)
CPU Speed (Score:1)
Linux on PCI cards is the way forward. (Score:1)
Just hand them a PCI card and let them get on with it. I can't help thinking it would be better on a USB device though. Then you wouldn't even need to open the case!
Re:Linux on PCI cards is the way forward. (Score:2, Interesting)
Re:Linux on PCI cards is the way forward. (Score:1)
I did have a copy of VMware which I paid for, but I lost interest when they went all 'enterprise' on it and the prices got stupid.
Still, there's always plex86, but I want to run it under Windows ME :-(
Isn't that the course we've been on? (Score:4, Funny)
A computer used to take up a room.
Then, computers were large cabinets in a computer room.
Now, they are boxes in a computer cabinet in a computer room.
So we can extrapolate that the next step for computers is to be cards in a computer box in a computer cabinet in a computer room.
It's a natural (obvious) progression really.
Re:Isn't that the course we've been on? (Score:2)
Re:Isn't that the course we've been on? (Score:1)
Re:Isn't that the course we've been on? (Score:2)
and then... ?
salt on chips on cards in boxes in cabinets in rooms?
hmmm
future? yes, but it's here today... (Score:1)
The newest systems are almost exactly this, but instead of slow, thin PCI, they use large, fast interconnects:
http://www.sgi.com/origin/300/
Wait! What about Beowulf? (Score:1)
Here we have four or five CPUs all in one machine, talking to each other over a native PCI bus. It seems to me this would be a great way to run a Beowulf cluster in a machine.
Anyone care to comment on why he might not have done this?
If you do have storage issues... (Score:1)
These cards have been around for ages with various degrees of complexity. There used to be (don't know if they are still around) some of these cards that were designed to plug into a Mac so the card would do all the hard work if you wanted to emulate a PC.
I don't see the value for the home user. I can't see why a true home user (not the very small percentage of hardcore enthusiasts or people that run a business from home) would need so much power that the solution is to get a box, plug in a few of these babies and cluster them.
Still, it's not so hard to come up with a home scenario:
1. Send your broadband connection to the basement of your house and spread it to all the rooms in the house with a $80 broadband router, cheap switches and hubs.
2. Put a box in a closet in the basement with different PCI cards, each serving a specific purpose. For my own personal needs (I am a Microsoft dot whore, sorry) I would have an Exchange server, one dedicated as a network file server, a SQL Server and an IIS server. A person of the Unix persuasion would have a card with sendmail and some kind of POP server, a file server, MySQL or Postgres, and Apache.
With just a little bit of money the house now packs as much punch inside that box in a basement closet as it takes my company a row of bulky servers to deliver. Add in a blackbox switch and a cheap 14-inch monitor, keyboard and mouse and you are set. Of course Unix people would use some kind of secure shell and save themselves the trip to the basement, and us lazy Microsoft whores will just have to rely on Terminal Services or pcAnywhere.
In a corporate environment the space saving actually pays off (you don't pay your apartment rent or home mortgage by the square foot like most businesses do) as soon as you recover some of the space wasted by the server room. Right now I can see how I could take ours, gut it out, put a couple boxes full of these PCI cards in a good closet with the proper ventilation, and then turn the old equipment room into a telecommuter's lounge.
The home solution would rock because my wife will not bother me anymore about all those weird boxes sitting under my desk in my home office. All the clutter goes away and I just keep my tower case.
Geode? (Score:2)
Too bad a dual-Athlon-based solution (on a full-length PCI card) would suck too much juice... at least under the current PCI specs. AMD needs to make a move like Intel did with their low-wattage PIII. I'd love to see a 12-processor (5 PCI slots plus host) renderfarm in a single box for a decent price. Not only would it be space-saving, but imagine that in a plexiglass case.
Wait just a minute... (Score:1)
I don't know if I can take another disappointment like that.
Imagine a new kind of bus (Score:2, Insightful)
Essentially there would be a switch that allowed about 32 devices to be attached.
The devices could be storage devices, processors, audio/video devices, or communication devices.
Storage devices would be things like memory, hard drives, cdroms and the like.
This bus would allow multiple processors to access the same device at the same time and would allow devices to communicate directly to each other, like allowing a program to be loaded directly from a hard drive into memory, or from a video capture device directly onto a hard drive.
No motherboard, just slots that held different form factor devices with power and optical wires attached.
A networking device would allow the internal protocol to be wrapped in IP and allow the internal network to be bridged onto ethernet. This would allow the buses on separate computers to work like a single computer. The processors on all the machines could easily network together, memory could be shared seamlessly, and hard drive storage would be shared and kept backed up in real time. Any device in any machine could communicate directly with any other device in any other machine. Security permitting.
Want 20 processors in your machine? Install them.
Want 6 memory devices with 1GB each? Add them.
Want 100 desktop devices with only a network device, display device and input/output device that use the processor and storage out of an application server? No problem.
Want a box that seamlessly runs 20 different OSes, each in a virtual machine, run across 10 boxes in a redundant failover system? No problem, it's all done in hardware.
Want the hard drives in all the desktop machines to act like one giant RAID 5 to store all the company's data on? No problem. (1000 machines with 10 GB each is 10 TB of storage.)
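That last storage figure works out roughly like this (a sketch; it assumes one 10 GB disk contributed per machine into a single RAID 5 set, which in practice you'd split into many smaller sets, and it ignores filesystem overhead):

# Usable capacity of a RAID 5 set built from every desktop's disk.
machines = 1000
gb_per_disk = 10

raw_tb = machines * gb_per_disk / 1000
usable_tb = (machines - 1) * gb_per_disk / 1000  # RAID 5 gives up one disk's worth to parity
print(f"raw: {raw_tb:.1f} TB, usable after parity: {usable_tb:.2f} TB")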
This is the future of computing.
Re:Imagine a new kind of bus (Score:3, Interesting)
Re:Imagine a new kind of bus (Score:1)
SUN has a similar product.. (Score:5, Interesting)
//Phizzy
Re:SUN has a similar product.. (Score:1)
I'm posting this with Konqueror on a Sun Blade 100. Next to the Konq window I have a SunPCI window with W2K/Office2K. As nice as Sun's StarOffice is, it still doesn't import/export clients' Office documents properly.
Re:SUN has a similar product.. (Score:2)
The test I've run of SunPCI has convinced our management to do away w/ separate NT/2000 systems when we move to a new building in April, and just outfit everyone w/ Ultra 5s, SunPCI cards and dual-head monitors..
//Phizzy
Re:SUN has a similar product.. (Score:2)
Definitely an awesome product.
hmmm.. (Score:1)
How does sharing the disk between each machine on a card affect performance?
Audio Apps -- Digidesigns DSP Farms (Score:2)
But anyway, it reminds me a quite a bit of what Avid/Digidesign do for their high-end systems.
You see people who've got 6-slot PCI systems where 4 of those slots are filled with extra computing cards (sometimes more... some people get expansion chassis). You can rely on your computer's processor if you're not doing too many complex effects on a track of audio, but at some point (not too hard to reach... throw in a tube amp emulator and a reverb) you run out of CPU. So they have PCI cards which have a couple of DSP chips (Motorola 56xxx series, I think) on them, and the more of these you add, the more audio processing you can do simultaneously.
At some point, perhaps people will think: hey, why add a specialized card? Why not just more general purpose computing power?
Re:Audio Apps -- Digidesigns DSP Farms (Score:2)
my problem is price (Score:1)
My ideal setup would use a CF card with a CF-IDE adapter as the boot drive (which eliminates the dependency on the host OS at powerup, and no actual HD is required).
Re:my problem is price (Score:1)
G4 PCI cpu (Score:1)
http://www.sonnettech.com/product/crescendo_720
The Catch: You have to write the device driver for the Motorola MPC107 PCI bridge chip.
Switched Bus, Multipurpose cards (Score:2)
For cards, it'd be great if each card had its own CPU and RAM. Ideally the cards would have a few universal connectors, each of which could accommodate an I/O module which would contain just the discrete electronics necessary to drive a specific device or medium (e.g., video, audio, disk, network).
Bus-Switch modules would be interconnectable to accommodate more cards, and would have switch-like management features for segmentation, isolation and failover-type features.
The CPU cards themselves ought to be less complicated than motherboards since there's no bus logic, just interconnect logic to the Switch-Bus and the I/O modules, and RAM.
Since each board has its own RAM and CPU it ought to improve system performance, because the OS could offload much more processing to CPU boards dedicated to specific tasks. Instead of the kernel bothering with lower-level filesystem tasks and driving the hardware, a "driver" for filesystems and devices could be loaded on a CPU board dedicated to I/O.
The same could be true of user interfaces -- run the UI on the board dedicated to video, audio and USB. The kernel could run applications or other jobs on the "processing" CPU board(s).
Networking? Offload the entire IP stack to the networking CPU board.
Re:Switched Bus, Multipurpose cards (Score:1)
also, having multiple memories accessing the same data in a distributed program adds plenty of overhead to make sure all the memories maintain validity and access control of the data. thus the chips wouldn't be as simple as CPU, RAM, interface.
i dunno about these (Score:1)
Read the article... (Score:2)
I'm only going to say this once, but I could copy/paste the same response to 20 or 30 posts on here...
ClearCube has another one (Score:1)
Rackmounted PCs with video, etc. They're intended for offices: you run cables to each person's monitor/keyboard/mouse and manage all the actual hardware in one place -- ClearCube [clearcube.com]
Linux on PCI a year old, dudes! (Score:2, Insightful)
I looked at this and said... wait a minute, hasn't this already been sorta done [slashdot.org]? Despite not being a full-featured box, Firecard [merilus.com] is a PCI card running Linux... for the purposes of supporting a firewall (as you could have guessed from the name if you'd not read the story -- Nov 14 2001)... but it's cool that they've taken it to the next level.
Radius Rocket (Score:1)
http://lowendmac.com/radius/rocket.shtml
Single card servers (Score:1)
I want some ! (Score:1)
This idea is actually the standard (Score:1)
You can get 4, 8, 16, or even 24 SBCs (single-board computers) in a chassis, and link these chassis together via switches. Each chassis has a switch that links all the SBCs in the backplane together and has external ports to hook it up to the outside world.
Check this out:
http://www.picmg.org/compactpci.stm
and this:
http://www.intel.com/network/csp/products/cpci_
Haven't Sun been doing this for a while? (Score:1)
They demonstrated how an entire Windows NT cluster could be built using this technology, chucked in some Terminal Services under Windows, ran Exchange, and then did all the important stuff (mail, DNS, whatever) on the Sun box itself.
Granted, it's not Linux, and granted, the cost of a Sun box is quite high - but the PC cards are significantly cheaper over here than Sun hardware, and Sun architecture seems to be a bit more robust and scalable than PC stuff.
who cares? (Score:2)
Nobody cared then.
Why would anyone care now?
Please explain your point using no more than 100 words.
-
Re:who cares? lots of possibilities (Score:2)
By my reckoning, half-width 1U rackmount PCs stack more densely than mid-tower cases each with a half-dozen cards in them. Overall reliability increases as well: you don't need to rely on a single motherboard and power supply to keep things going.
Putting a bunch of rackmount PCs into a single (portable, or not) box is also a very trivial exercise.
Is real estate at the desk really at such a premium that anyone need care about these things?
Sunpci card (Score:2, Informative)
I think you can pick them up pretty cheap nowadays if you like that sort of thing. I don't imagine much mileage from trying to install a.n.other unless you feel like writing the relevant drivers to get everything to talk to each other.
Re:Sunpci card (Score:1)
Re:Sunpci card (Score:2, Informative)
The current model is a 733MHz Celeron with 128MB RAM base, going up to 1GB RAM, onboard Rage something or other graphics. Supports all versions from Windows from 95 to 2K Adv Server.
You can do some interesting things with these. Since Windows is 'installed' in an image file on the UFS filesystem, you can copy the images, easily back them up, and bring different images up on different cards at different times. You could have the office apps images running during the day and the Quake servers running at night.
They won't run Linux unfortunately. They would have to add support for that to the SunPCi software.
- Mark
wait a minute (Score:2, Insightful)
As has been noted before, this would really be useful if the PCI bus were extended (faster/wider). Of course, making it faster/wider gives you what SGI has been doing for a while too (also mentioned above).
Perhaps the most disappointing thing is that all that power goes to waste on users playing solitaire, running Windows, AOL, and Quake, not on something that will actually need the power to perform the tasks. Well, maybe Quake isn't so bad...
days of future passed (Score:3, Informative)
No, not if it's existed for decades. It's what's referred to as a "mainframe". You know. An expandable number of processor boards running under an operating system that can treat them as any number of single-processor or multiprocessor machines, with the ability to reassign processes between CPUs.
The Unix world has had them for a long time, too. Modern examples include Sun's higher-end servers, which support hot-swappable and hot-pluggable processors and memory.
Doing it with x86 processors and standard x86 OSes like x86 Unixes and Windows is less common but I believe Compaq and maybe Unisys can sell you machines that can do it, too, with one or several instances of the OS running at once.
This hardware approach is not quite the same as VMware's server products, which do it via software and don't limit you to one OS per processor or block of processors. It in turn mimics other decades-old mainframe operating environments in approach.
Some merit to the design, but this is better. (Score:1)
However, Powerleap [powerleap.com], ubiquitous for upgrades and socket adapters, also has a card which touts some similar attributes called the Renaissance/370S [powerleap.com] based on a Socket 370 or FC-PGA chip. It cranks with Celeron, Celeron II, and P3 chips. Quite rockin'.
The main cool thing about this device is it does NOT use the motherboard slot it sits in. It just uses it as a place to mount. That's right, you can put one in an ISA slot and still run the motherboard it sits in, and they won't know a thing about each other, because no pins are connected between them. The price is also a lot better (~$250 for a low-end model), you can swap out the CPU, and it has two DIMM slots with a max RAM per slot of 512MB (1GB combined). The specs are much better and the price is much lower. It's just marketed as an upgrade option rather than a performance enhancement to an existing machine.
I've been looking into this as a solution for my cluster, but haven't gotten up the nerve to buy them yet. From what I can find on the web, they're the best cluster card option, especially if you are handy with soldering. To really maximize the power per box, I'd probably buy a dead 486 motherboard (ISA slots all the way across the board, which this card requires), slam four Renaissance cards in it, link two power supplies in parallel, rig extra power and reset switches for each card separate from the power supply, and there's your mini-cluster. Probably 4 machines per 4U case, which notably isn't a huge space savings over 4 1U pizza boxes, but it costs less than a single 1U server would.
This is nothing new (Score:2)
Drivers only work with a stock RedHat kernel (Score:1)
The bundled kernel module only works with the stock kernel distribution in RedHat 6.2-7.1 (kernel 2.4.2 max). The kernel module sets up a virtual network device that allows the host PC to talk to the SlotServer. The kernel module needs to run on both the host and the card for them to be able to communicate with each other locally. (You can still communicate via the 10/100 interface over the network.)
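Once the module is loaded on both ends, a quick way to confirm that the host can actually reach the card over that virtual device is a simple TCP poke from the host. The address and port below are hypothetical - substitute whatever you assigned to the card's interface and any service you know is listening there (sshd in this example):

# Minimal reachability check from the host PC to the SlotServer card
# over the virtual network device. Address and port are examples only.
import socket

CARD_ADDR = "192.168.99.2"  # hypothetical address assigned to the card's end of the link
PORT = 22                   # any service known to be listening on the card (sshd here)

try:
    with socket.create_connection((CARD_ADDR, PORT), timeout=3):
        print(f"card reachable at {CARD_ADDR}:{PORT}")
except OSError as e:
    print(f"no response from {CARD_ADDR}:{PORT}: {e}")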
Another thing they advertise is the ability to have the card boot off of the host computer via a "virtual disk" (rather than having IDE drives hanging off of the card). I haven't been able to get this working at all - and the only documentation available tells you that the feature exists.
It would kind of suck to have a PC loaded with 4 cards and 4 additional disks. I could sell a kidney and purchase some disk-on-chips I guess.
-Andy
Computer in a computer (Score:1)