When The PCI Bus Departs
km790816 writes: "I was just reading an article in the EETimes about the possible war over the technology to replace the PCI bus. Intel has their 3GIO. (Can't find any info on Intel's site.) AMD has their HyperTransport. There has been some talk about HyperTransport going into the XBox. I hope they can agree on a bus. I don't want another bus standard war. So when can I get a fully optical bus on my PC?" Now that's what I'd like: cheap transceivers on every card and device, and short lengths of fiber connecting them up. Bye bye to SCSI, IDE, USB, Firewire ...
MMmm Fiber (Score:2)
Look for PCI-X and InfiniBand (Score:5)
First, in a nutshell, look for PCI-X. It makes PCI a switched medium instead of a shared bus and lets you keep using your existing PCI devices. It's supposed to be very good for performance (gigabytes/s) and will be out in a few months. This is the one to bank on.
The real competitor is InfiniBand. The big 7 are involved in this (including Compaq, Intel, IBM, MS, HP...). It's huge, with an 800-page spec you can get online for $10. It may be used to replace local I/O infrastructure (i.e. PCI) as well as communication between hosts (i.e. cluster computers). It functions as an I/O replacement (PCI), a storage area network solution (i.e. Fibre Channel, smart disks), and a system area network (cluster computer interconnect a la Myrinet, Gigabit Ethernet, SCI). Multi-gigabit links, switched medium, memory protection for transfers between hosts, split DMA transaction notions (i.e. remote DMA mechanisms). InfiniBand is what happened to a lot of the smaller competing standards, btw.
A lot of people are not happy with IB. People see it as the big 7 trying to ram more hardware down our throats in a way that forces us to rewrite our OSes and our communication stacks. Look on the IB trade page and you will literally come across quotes along the lines of "InfiniBand(tm) is a knight in shining armor to save you from the monster of PCI". Do we really need a new standard when 90% of the people out there use the bus for low-bandwidth sound cards and 5,000-gate Ethernet transceivers?
You MUST read this article about PCI-X vs IB:
http://www.inqst.com/articles/pcixvib/pcix.htm
grimace/Georgia Tech
Re:InfiniBand / Serial ATA / Fiber Channel HDDs (Score:2)
40 pins? Last time I worked with fibre the cables were 4 pins. They were heavenly to work with, especially after wiring up SCSI RAID racks: eight 68-conductor cables all running under the floor, and 68-conductor cables are a true PITA to work with. They're heavy (especially when shielded and 50' long), don't bend easily, keep their shape even when you don't want them to (they always want to be rolled up), and it's easy to bend the pins on the connector. The Fibre Channel cables are like thin serial cables. A single person can carry enough cables for two or three enclosures without even breaking a sweat, and wire them up in 1/4 of the time.
Of course this isn't such a big deal in PC-land, but those tiny little cables do allow for better airflow in your case, especially compared to those 68-conductor ribbon walls. I usually split the cable into 4 or 5 segments just so it doesn't create a huge heat shield inside of my case (especially important for those hot-running 10k RPM drives!).
Down that path lies madness. On the other hand, the road to hell is paved with melting snowballs.
Re:Don't forget (Score:2)
Well then it really isn't gone is it? The southbridge has, among other things, integrated ISA.
For all practical purposes, it's gone. In cases where hardware appears to be on the 'ISA bus', it's actually on an LPC (Low Pin Count) bus. While LPC is easy to adapt to ISA pinouts, it is not a 1-to-1 connection.
SiS for example does that. The 630 (A combined north and south bridge) connects to a SuperIO chip (950) which supports floppy, serial, parallel, IR, and hardware health monitor. The 950 is configured like any PnP multifunction device (just with a LOT of functions). The 630 handles memory, video, audio, IDE and other functions.
Re:Its not needed (Score:2)
PCI is 32 bit at 33MHz. This is 1000 Mbits per second. The only interface device that needs or ever will need more than that is Graphics cards, and they have AGP slots. Most of the devices we use can even be run over USB.
Unless, of course, you have a gigabit ethernet card and also want to use the hard drive. (Or 2 Gig ethernet cards).
Re:It's a 1Gb Capacity, Shared between users (Score:2)
Do you use a Gigabit card to connect a single computer, and use the full bandwidth at all times? What is it connected to?
Yes, and then some. Think high capacity mirrored file servers.
Also, not all that data needs to go to DMA. A reasonable chunk is packet headers.
What's that got to do with anything? It still has to get to the card. (And actually, it does need to be DMA. I'd rather queue the packets and forget about them until they're on the 'wire'.)
Re:Who the hell uses PCI for graphics anyway? (Score:2)
At least I was wrong on one point, decent PCI cards *are* still available. Just not the new top high-end ones (which are generally what the people who need TWO cards need).
Re:Assign resources (IRQs/ports/DMAs) to SLOTS!!!! (Score:2)
TI did NuBus (I think), but the idea of a geographically addressed bus isn't new. I seem to recall the Apple II bus working that way (but I'm not sure, I never owned one). EISA (I think) does as well. Some of the very old DEC busses did.
The problem is if you don't assign enough space to a device, you end up being forced to page flip and do other unpleasant things. If the bus had been designed in 1990, people would have looked at existing cards and decided 16M was far more than anything needed (1M was about as much as any PC video card of the era had). Now your Nvidia card with 64M VRAM (or SGRAM or whatever) would be really painful to access. If you assign too much space you cramp either the amount of RAM that can be used, or the number of expansion cards you can have.
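To put rough numbers on that trade-off, here's a minimal sketch; the slot count and aperture sizes are made-up assumptions, not values from any real chipset:

```c
/* Sketch of the trade-off above: fixed per-slot address windows eat into
 * a 32-bit (4 GB) physical address space. Slot count and aperture sizes
 * are made-up assumptions, not values from any real chipset. */
#include <stdio.h>

int main(void)
{
    unsigned long long total = 4ULL << 30;                    /* 4 GB space */
    unsigned long long apertures[] = { 16ULL << 20, 256ULL << 20 };
    int slots = 6;

    for (int i = 0; i < 2; i++) {
        unsigned long long reserved = apertures[i] * slots;
        printf("%3llu MB per slot x %d slots = %4llu MB reserved, "
               "%4llu MB left for RAM\n",
               apertures[i] >> 20, slots, reserved >> 20,
               (total - reserved) >> 20);
    }
    return 0;
}
```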
Some BIOSes did this, back when there was a mix of PnP and non PnP cards in most systems it was really useful. I haven't had a problem in years though, so I'm not gonna put this high on my list of stuff I want to change in the world.
Re:Will they be CPU neutral? (Score:2)
There is definitely a spec. You can have x86 code in the ROM (for "compatibility"), or OpenFirmware. Which is exactly what you wanted.
The big problem is that nobody is required to do an OF boot ROM, and the even bigger problem is not with the cards, but with BIOSes which don't support OpenFirmware. So rather than having the choice of doing x86, which only works on 95% of the market, or OpenFirmware, which works on 100%, card makers have the choice of 5% or 95%. Most of them choose the 95% one, and sometimes make a second boot ROM "for Macintosh" (it'll work on PCI SPARCs and the like though).
I think the PCI spec even lets you have both ROM images at once, but nobody seems to do it, so there may be a problem with that.
Still, for a primarily Intel spec it was a good start.
Re:New way to make money (Score:2)
Re:PCI bus will be around a while (Score:2)
Interestingly enough, the MCA card edge connector seems to have lived on beyond the electronic philosophy that spawned it (now watch someone pop up to announce that it was originally designed for some short lived video game console or something), with a little geographical relocation and, in one case, a color change, as the VESA connector and the PCI connector.
The VESA extension was a more direct route into and out of the 486, which is why it only survived on a very few Pentium boards, because extra circuits had to be added to adapt it to the new processor design. If the 486 had remained on top for another 5 or 10 years, VESA would have been very, very big.
Re:PCI bus will be around a while (Score:3)
Needless to say, it was the smallest, slowest stampede in history.
At least EISA and VESA slots would also take regular ISA boards (except for that full length card skirt thing).
Don't forget (Score:3)
---
Re:Anyone else notice that Timothy never knows... (Score:2)
No more "I cant use it and therefore hate it because I dont have Windows and yet I somehow play all of these Windows games" lies!
No more "I wont believe it until they send me one for free" BS!
No more "I want one" whining!
No more "But really they could just use some 'groovy' tech like optohypernanofiber to make it work better even though I have no idea what I'm talking about and have so little knowledge about this subject that I couldn't even begin to justify this argument but since I have 5 words and never talk in the comments I won't have to" fucking garbage like this!
~GoRK
Re:Will they be CPU neutral? (Score:2)
> virtual machine spec to initialize the card and provide any functions needed to boot.
> Lastly, since the purpose of this VM is very focused, it can provide very high-level
> operations to the system.
Are you talking about a VM just to boot the hardware, or for all communications to it? I've been thinking about a custom driver VM for a while, to have completely platform-independent drivers. Since the majority of cards depend on similar operations (mem move, read/write locations, trigger interrupts etc), those could be provided as optimized high level ops with negligible slowdown. Then you could have truly generic drivers for your hardware. Some ultra-high-bandwidth devices might suffer a bit, like video, but it might be acceptable.
Forget Speed, just Solve the Problems! (Score:2)
Will somebody please explain to me why I'm seeing stuff in motherboard FAQs like "Try swapping the cards" or "don't ever put SB Live and a NIC in slots 2 and 3 together!" ??? This is crazy. A slot is a slot is a slot. If it's not, then something is dreadfully wrong (and, since it's not, then something definitely is screwed up in the PC hardware industry).
From what I've been able to learn lately, one of the big problems is that not all PCI cards properly share the bus. That they don't fully adhere to the spec. So, we've got a (relatively) decent slot spec, but because some (most?) manufacturers cut corners, we've got crazy incompatibility problems.
Add to that the fact that we're still saddled with this antique IRQ system that makes about as much sense as, well, frankly it makes no sense.
Please tell me (I haven't gone to read them yet) that these new bus architectures that people are working on will at least solve these problems? There's no reason I should have to drop into the BIOS to change an obscure setting, or to re-position all my cards, just because I decided to drop in a FireWire card. But that's what's happening now, and it's driving me bugfu*k.
On a more idealistic (and probably impractical) note, has anyone considered the possibility of making the bus very generalized? To the point where I could add, say, a 2nd (or 4th) processor just by slapping in a card? Or upgrade from one CPU to another by replacing the current card? Is this the way the old S-100 bus worked? Why can't we do that today? (I know, it'd require a COMPLETE overhaul of how we build computers, but might it be worthwhile?)
And, on a more immediate level, is there anything that motherboard manufacturers can do today to help relax the problems? Could an enterprising engineer move away from the current north/south bridge architecture to a totally new, IRQ-rich, forgiving, true PnP environment, without breaking existing OS or cards?
And, finally, from a more pessimistic point of view: what's to say that, even with a new kick-ass solve-all-our-problems promote-world-peace bus, cards or MBs will actually be built fully in compliance with the spec, and that we won't just be back in the same boat?
Re:Open Hardware... (Score:2)
IBM was trying to play nice with the existing PC industry, and it's likely that they expected the clones all along. Perhaps they thought they could make additional money licensing their BIOS, but that didn't work out.
As for the "Industry Standard Architecture" - that was something that was made up by a trade group that didn't include IBM (even though they were paying IBM licence fees for the tech). There were certain problems with IBM computers in the early 90s because (as IBM put it) the slots were "AT-compatible" but not fully "ISA-compatible".
--
Re:Internal Firewire (Score:2)
For some reason, it was soundly rejected by the hardware OEMs (maybe because Intel wouldn't put 1394 into the standard chipsets) and died a silent death. Instead, folks like Compaq are chasing proprietary snap-in storage for their iPaq desktop. Too bad, because it would have made adding expansion much more end-user friendly.
--
Re:Don't forget (Score:3)
1) Change the BIOS setup so that it reads "Legacy Floppy Device".
2) Do nothing for 10 more years.
Where margins are razor-thin and every additional device means less profit, you might wonder why they don't just kill the thing. Well, the reason is that there is NO adequate replacement unless they 'solve' the problem of the PC's horrific bootstrapping code.
Moving to something like OpenFirmware that could treat any device as the legacy "A:" is expensive and would take some form of communication and leadership, so it's cheaper to keep pushing out a 20 year old floppy interface.
Don't forget, these are people that have found a way to reduce the retail price of a keyboard from $100 to $5, yet that $5 model still has a special "Scroll Lock" light that nobody ever uses.
--
Re:Internal Firewire (Score:2)
This is very, very likely. I saw a USB 2.0 card at Circuit City last week. It had four external ports and one INTERNAL port. I also own a FireWire card. That card has three external ports and the board etchings (but no hardware) for an internal port. It is very easy to imagine IDE having to compete with one of these standards for internal connection of peripherals down the road.
It is very easy to imagine a future machine with only USB 2.0 ports inside and out for connecting up all peripherals. Slots won't go away, but their need and number could be vastly reduced. Now, if traditional peripheral boards such as sound cards and modems could be packaged as modules that can be installed without opening the case, we will be another step closer to truly consumer friendly computers.
Re:why not agp? (Score:2)
Re:why not agp? (Score:5)
There's also the basic reason that almost nothing besides gfx cards -need- the huge bandwidth and bus speed of AGP.
Internal Firewire (Score:2)
It is also worth noting that many technologies that were once found in the form of cards are being moved outside the case, analog video capture for example, to where Joe Consumer has an easier time connecting them. For this reason we will probably find the low to medium end simply not using the internal bus, whereas the medium to high end will.
Of course technologies such as PCI buses will still be needed, as parallel connectors provide a level of simplicity in the development of cards and do not limit the total speed of data flow to that of the serial-parallel and parallel-serial converters. Though on the other hand, with more computer technologies focusing on serial-based solutions, maybe this is just inevitable.
Anyone else notice that Timothy never knows... (Score:5)
> Now that's what I'd like: cheap transceivers on
> every card and device, and short lengths of
> fiber connecting them up. Bye bye to SCSI, IDE,
> USB, Firewire.
Here the posting is about replacing the high-bandwidth (formerly local) bus in PC architecture, and he thinks the suggestion regarding an optical bus is to be used for the (relatively) slow I/O busses of IDE, SCSI, Firewire and USB?
I think there should be Metaeditors to handle the editors who talk before their brain starts working. Either that or Timothy should be disallowed from adding "his two cents" to a news posting.
-Chris
...More Powerful than Otto Preminger...
cheap optical == jumbo shrimp (Score:2)
Re:bye bye? (Score:2)
I've seen a telephone that was powered by optical fiber. I believe it was designed by Bell Northern Research.
Re:bye bye? (Score:2)
Nah, that's not hard at all. You just combine power along with the optical cable. 2.5" hard drive cables have been doing this for decades. It's just another wire in the bundle.
Re:Interesting (Score:2)
Somehow, I don't think fiber is the answer for mouse and keyboard, and the reason you just expressed is one of them. The other one is cost: do you really want to build a fiber transceiver into a mouse, and then pay for a fiber cable? When I can get a USB mouse for $12 at the corner store, I'm going to resist fiber mice pretty hard.
Re:Optical card questions (Score:2)
Huh? How do I spin the hard drives and CD's? How do I power the laser to burn CDR's? You're losing me.
Re:Its not needed (Score:5)
Is that like Bill Gates saying no one will ever need more than 640kb? Frankly, I use two graphics cards in my desktop, and the only reason I don't have three is that the cost of another LCD panel is ludicrous. As soon as they come down more, I want one more panel, and then I'll be happy. I hate having to settle for a differently-branded (and usually more expensive) PCI card just because I don't have more AGP ports available. Usually the cutting edge stuff only comes out on AGP.
Granted, I'm not playing Quake on all three at once, but the only reason I'm not is because I can't. I'd love to be able to play my driving games on all 3, with the left monitor being a left view, and the right being a right view. Or a view of my nearest competitor. Or even just a big rear view mirror. The possibilities are endless.
The next thing up is storage area networking. PCI cards can't handle the biggest SAN loads, like our DVD jukeboxes at work. We can only use one 300-DVD jukebox per server, because the bus can't handle more load. Think in terms of quad Xeon servers, and it'll make sense - you can indeed shuffle a lot of load across the bus and off the fiber network if you need it. (And no, it's not a single reader per jukebox, there's lots of readers in each jukebox.)
Re:Don't forget (Score:2)
And yet you can get a HappyHacker keyboard, which removes the lights, the unused keys, and the numeric keypad, for $100.
You'd think they could have passed some of that savings on to the consumer instead of charging more for the same product. If a keyboard costs $5, then a keyboard with 75% of the keys should cost $3.75 (or something like that).
Re:Does anybody remember Microchannel? (Score:2)
Re:I'm keeping an eye on Apple for the answer (Score:2)
Sure, they were quick to move on USB and their own Firewire, but they aren't exactly riding on the bleeding edge on every single component in their systems.
Re:Assign resources (IRQs/ports/DMAs) to SLOTS!!!! (Score:2)
Apple didn't invent NuBus. My flaky memory tells me it was TI, which could be wrong, but it wasn't Apple. Apple merely selected NuBus for the Mac II, from among several alternatives that already existed at the time.
Re:bye bye? (Score:2)
DMA isn't really a feature of the bus so much as the adapter implementation. Ethernet cards can do DMA, and SCSI cards can do polled I/O.
Re:InfiniBand / Serial ATA / Fiber Channel HDDs (Score:2)
Heading? No, we're already here.
Re:InfiniBand / Serial ATA / Fiber Channel HDDs (Score:2)
19 cycles for a 1GHz processor is actually not too bad; good large-system memory interconnects today are in the hundreds of cycles for anything but the very nearest memory, and even that latency can be pretty well hidden in NUMA or MT systems. The serial nature of the interconnect is simply not that big a performance issue in real memory-system design; the simplicity/physical benefits of serial protocols and cabling are much more important.
Re:InfiniBand / Serial ATA / Fiber Channel HDDs (Score:2)
Yes. I work for EMC.
Firstly, FC-AL is pretty much dead for high-end systems, which all use switched FC nowadays.
Secondly, the very biggest storage systems out there still use SCSI inside, not FC...but it doesn't matter. It doesn't matter at all what's *inside* the box, because the whole point of such a system is to achieve high performance through aggregation of small channels and/or to avoid the channels altogether by using a huge cache. Sure, the drives are SCSI, but there are hundreds of them, on dozens of separate SCSI buses. There's plenty of internal bandwidth to make the drives do all that they're capable of doing, so the question is not "why use SCSI when FC is faster" but rather "why use FC when SCSI is fast enough".
Given that Plain Old SCSI is still doing yeoman's service as a back-end interconnect, and that FC components will probably always be more expensive than SCSI components, there's just no benefit to being "all-FC". Anyone who says otherwise is just putting marketing spin on a questionable engineering decision.
Disclaimer: I work for EMC, but I don't speak for them (nor they for me). The above is all public information, personal opinion, and simple common sense, unrelated to any EMC trade secrets or marketing hype.
Problems with PCI (Score:2)
PCI was a major improvement over the VESA localbus, but things have improved since.
In the networking world, I am seeing a lot of development around AMD's HyperTransport. I don't know the specifics of how it works, but I am starting to see a lot of products utilizing it. For networking, I'm dealing with bandwidths starting at 2.5 Gbps and scaling upwards from there.
PCI has outgrown its usefulness in the server world. It's not difficult to saturate the 133 MB/sec of standard 32-bit 33MHz PCI. 64-bit 66MHz is an improvement, but at the rate things are moving, it won't be long before that's a big bottleneck.
I'm sure that today's big raid servers can probably saturate even a 66MHz 64-bit PCI interface.
-Aaron
Re:Look for PCI-X and InfiniBand (Score:3)
One thing to remember about IB - the pin count for IB (be it fiber or copper) is substantially smaller than that for the PCI local bus specs (it being a serial, rather than parallel, interface). IB could be useful for hooking up SANs, clusters, etc... but for the most part, it won't replace PCI/PCI-X for endpoints. There are several companies that have or are making IB to PCI/PCI-X bridge chips. A great thing for external I/O or storage towers. There's a lot of hardware involved, and for the server market, it could be a great thing. For desktops, well - it will be quite a while before anyone bothers... we still don't have 64-bit slots/adapters and/or a 66MHz bus on PCs (cost/benefit problems there), so until those are in demand for desktops, there shouldn't be too much of a push for an even higher bandwidth, more expensive internal connection...
(disclaimer: I work for one of the 'Big 7', though not directly on IB at the moment)
--
Re:Don't forget (Score:2)
No, it's really gone. The floppy connector is just part of the South Bridge chip, along with the keyboard controller, serial and parallel ports, and other low-bandwidth stuff. It hangs off of the PCI bus. These devices are just hard-coded to ACT like the old ISA equivalents, with their addresses, interrupts, DMA, etc. being hard-assigned to the old standard values.
Re:Design issues (Score:2)
An old story I heard once said that the original Intel spec of the 8088 (?) processor included a spec for 1024 hardware interrupts. IBM, of course, in their wisdom, said that the chip was not for a mainframe (or something similar), and instead changed the spec to 8 interrupts. Someplace down the road, in the 286 I think (?), they realized they needed more, and expanded to 16 via a second chip. Which is what we have had to deal with ever since.
How can something so well documented get garbled so soon?
The original 8086 design allowed 256 interrupt vectors, hardware and software combined. The hardware has always supported 256 vectors. Intel reserved the first 32 for future use, such as processor exceptions. IBM ignored that and assigned, for instance, INT 5 (which Intel had earmarked for array bounds exceptions) to the BIOS print screen function. Hardware interrupts got shoehorned in at the 0x08-0x0f range.
Why jam so much stuff in low when there were 256 available? Because the original ROM BASIC used the upper 208 or 224 (memory fails) for short byte-count subroutine calls. It made for a very simple, compact interpreter. A look at a disassembly of the early ones showed that it was basically machine-translated Z80 code!
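For reference, a small sketch of where the hardware vectors ended up on the PC/AT — the mapping below is from memory and meant as illustration, not gospel:

```c
/* Where the hardware IRQs ended up on the PC/AT, as best I recall it --
 * illustrative only. The two 8259 PICs remap the 16 IRQ lines into the
 * 256-entry interrupt vector table. */
#include <stdio.h>

int main(void)
{
    for (int irq = 0; irq < 16; irq++) {
        /* master PIC: IRQ0-7  -> vectors 0x08-0x0F (on top of Intel's
         *                        reserved exception range);
         * slave PIC:  IRQ8-15 -> vectors 0x70-0x77 */
        int vector = (irq < 8) ? 0x08 + irq : 0x70 + (irq - 8);
        printf("IRQ%-2d -> INT 0x%02X\n", irq, vector);
    }
    return 0;
}
```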
Re:InfiniBand / Serial ATA / Fiber Channel HDDs (Score:3)
Don't gush. Infiniband, or any cabled serial connection, will never be a memory connection worth having. The reason is latency: what matters with memory isn't how many terabytes you can deliver per hour, but how many picoseconds the processor has to wait for that data that's stalling the pipeline right now. Which is very hard to reduce when the address has to be shipped a bit at a time over a serial link (64 bit times @ 10 Gb/s = 6.4 ns), transported over the cable (~50 ps/cm * ~60 cm = 3.0 ns), memory accessed (technology dependent), serialized (another 64 bits, 6.4 ns), shipped back (another 3.0 ns) and finally it gets to the processor. You've added 18.8 ns over and above any protocol overhead (usually much worse) and that's at 10 Gb/s!
Not gonna happen.
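If you want to replay that arithmetic, here's a minimal sketch using the same assumed figures (10 Gb/s link, 64-bit words, ~50 ps/cm over ~60 cm of cable):

```c
/* Replaying the latency arithmetic above with the same assumed numbers:
 * 10 Gb/s serial link, 64-bit words, ~50 ps/cm over ~60 cm of cable. */
#include <stdio.h>

int main(void)
{
    double bit_time_ns  = 1.0 / 10.0;         /* 10 Gb/s -> 0.1 ns per bit */
    double serialize_ns = 64 * bit_time_ns;   /* one 64-bit address/word   */
    double cable_ns     = 50e-3 * 60;         /* 50 ps/cm * 60 cm          */

    /* address out + cable, then data back + cable; the memory access
     * itself and any protocol overhead are *not* counted */
    double added_ns = 2 * (serialize_ns + cable_ns);
    printf("added latency: %.1f ns (~%.0f cycles at 1 GHz)\n",
           added_ns, added_ns);
    return 0;
}
```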
I just looked over several comments... (Score:2)
Think content-control. DMCA. MPAA. Nightmare city.
Think encryption throughout the bus, licences for hardware, and NOTHING for GPL open source-based systems.
As has been stated by others, there is no real need for a new bus - unless you start thinking about copy prote... oops, I mean CONTENT CONTROL...
Worldcom [worldcom.com] - Generation Duh!
Re:Forget Speed, just Solve the Problems! (Score:2)
Compare the ease of use of the bus technologies in their order of development:
Things are getting better all the time. I remember on the first PC I had, there were jumpers that had to be set in order to have the memory recognized. Now all that has to be done is to plug things in, and they (hopefully, usually) work. However, I don't think that there is any chance in hell that any OS running on PC hardware will avoid being rewritten a bit in order to deal with the next generation of buses. A lot of the problems that we encounter with current systems is that they are designed for backward compatibility...
Re:PCI disappear? Yeah right... (Score:2)
Re:Anyone else notice that Timothy never knows... (Score:2)
Re:Anyone else notice that Timothy never knows... (Score:2)
Hopefully intel learned from the RAMBUS story ... (Score:2)
I think that gives Intel, AMD and the rest of the industry ample time to work out a standard everyone can live with. It also occurs to me that Intel is more concerned with keeping control of the bus standard (as in who gets paid for the license, maybe) than with technical facts. The article presents very few facts, and basically Intel wants everyone to hold their breath half a year for their preliminary specs.
correction: bandwidth of AGP 1x, 2x, 4x (Score:2)
Minor technical correction: Basic (1x) AGP is ~266 MB/sec, while the now-standard AGP 2x is ~533 MB/sec and the AGP 4x/AGP Pro is ~1000 MB/sec. For brief but nice technical details, check here [quasartec.com].
--LP
Re:Assign resources (IRQs/ports/DMAs) to SLOTS!!!! (Score:2)
The PCI spec sort of follows that:
Take a look at this diagram of hardware PCI interrupts [viahardware.com]
And this section called Interrupt Pin Assignment [viahardware.com]
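The usual pattern behind those assignments is a rotation of the four INTA#-INTD# lines across slots, so adjacent slots don't all pile onto one line. A sketch of that common convention (the actual routing is up to the board vendor, so treat it as an assumption):

```c
/* The usual "barber pole" rotation of the four PCI interrupt lines across
 * slots, so adjacent slots don't all share INTA#. Real boards route this
 * however they like; the pattern below is only the common convention. */
#include <stdio.h>

int main(void)
{
    const char *line[4] = { "INTA#", "INTB#", "INTC#", "INTD#" };

    for (int slot = 0; slot < 4; slot++) {
        printf("slot %d:", slot);
        for (int pin = 0; pin < 4; pin++)        /* card pins A..D */
            printf("  %s->%s", line[pin], line[(slot + pin) % 4]);
        printf("\n");
    }
    return 0;
}
```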
Re:why not agp? (Score:3)
If that were true, then this entire thread, and both Intel's and AMD's replacement bus technologies, would be moot. I don't think they're doing it for fun. Basic (1X) AGP has the same bandwidth as 66 MHz 64-bit PCI, as found in servers and stuff: ~0.5 GB/s. Obviously, that isn't enough either. Gigabit Ethernet, for example, is a prime example of something that usually sits on the bus, and that definitely needs more bandwidth (1 Gb/s / 8 = ~125 MB/s, which is very close to PCI's 133 MB/s limit). If you want multiple Gb Ethernet boards on the same bus, the bus has to be bigger.
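The bandwidth arithmetic, spelled out (idealized peak numbers, ignoring bus protocol overhead):

```c
/* Idealized peak numbers, no bus protocol overhead counted. */
#include <stdio.h>

int main(void)
{
    double pci_32_33 = 33.0e6 * 4;   /* 32-bit @ 33 MHz, bytes/s            */
    double pci_64_66 = 66.0e6 * 8;   /* 64-bit @ 66 MHz, bytes/s            */
    double gige      = 1.0e9 / 8;    /* one Gigabit Ethernet port, bytes/s  */

    printf("PCI 32/33: %4.0f MB/s\n", pci_32_33 / 1e6);
    printf("PCI 64/66: %4.0f MB/s\n", pci_64_66 / 1e6);
    printf("GigE port: %4.0f MB/s  (%.2f ports fill a 32/33 bus)\n",
           gige / 1e6, pci_32_33 / gige);
    return 0;
}
```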
Will they be CPU neutral? (Score:2)
Consider PCI: there is no spec for what the boot ROM of the card will contain. Usually, it contains only Intel x86 real mode code. This prevents the card from being used as-is in anything else: your Alpha, your iMac, your Sun, all are out of luck when you plug this card in (unless they have x86 emulators to run the boot code.)
How about we put some thought into something like Sun's OpenFirmware system: a small, simple virtual machine spec to initialize the card and provide any functions needed to boot.
Realize: many cards wouldn't need this. Only video, storage and network cards would need to have a driver (to allow the boot process to begin.) Once the OS, whatever it was, got loaded, it could then load native drivers for the card if they existed.
And even if the drivers don't exist: slow and working is better than not working at all. I'd rather be able to get my new floobySCSI card working, albeit slowly, than have a new paperweight.
Lastly, since the purpose of this VM is very focused, it can provide very high-level operations to the system. It can have instructions like "Configure DMA from here to there", "Move a potload of data from here to there", "On interrupt #n do this...", and those micro-ops would be in the native machine code in the VM implementation. Thus, a card's VM driver could be pretty close to optimal anyway.
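A sketch of how such an interface might look in C. Everything here — the struct names, the ops, the boot entry points — is hypothetical, invented only to make the idea concrete; it doesn't correspond to OpenFirmware or any real spec:

```c
/* Hypothetical interface for a CPU-neutral boot-ROM VM along the lines
 * described above. None of these names come from OpenFirmware or any
 * real spec; they're invented purely for illustration. */
#include <stdint.h>
#include <stddef.h>
#include <stdio.h>

/* high-level "micro-ops" the host implements in native code */
struct vm_host_ops {
    void     (*mem_move)(void *dst, const void *src, size_t len);
    uint32_t (*reg_read)(uint32_t bar, uint32_t offset);
    void     (*reg_write)(uint32_t bar, uint32_t offset, uint32_t value);
    void     (*dma_setup)(uint64_t bus_addr, uint64_t host_addr, size_t len);
    void     (*on_interrupt)(int line, void (*handler)(void *), void *ctx);
};

/* entry points a card's ROM image exposes back to the host */
struct vm_card_image {
    int (*init)(const struct vm_host_ops *host);           /* bring card up */
    int (*boot_read)(uint64_t lba, void *buf, size_t n);   /* load the OS   */
};

/* a host-side boot sequence would then be as dumb as this */
static int boot_from(const struct vm_card_image *card,
                     const struct vm_host_ops *host, void *loader, size_t n)
{
    if (card->init(host) != 0)
        return -1;
    return card->boot_read(0, loader, n);   /* read boot sectors from LBA 0 */
}

int main(void)
{
    (void)boot_from;                        /* sketch only: nothing to run  */
    puts("vm interface sketch compiled");
    return 0;
}
```

The point is just that the host supplies a handful of native micro-ops and the card's ROM drives them, so the same ROM image works on any CPU.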
However, this idea does two things that doom it:
As a result, kiss the support of the two big players you need goodbye.
Re:PCI bus will be around a while (Score:2)
--
Re:When replace? Enhance! (Score:2)
It already exists - it's called PCI-X, coming to a computer near you, but it has its own problems, like a limited number of targets per bus (very limited, 1 or 2 I think). One problem with PCI is that it was designed as a low-power bus - which is a good idea when you have 64 bits of data coming down the line. Unfortunately that makes it difficult to deal with when you're getting to higher speeds.
PCI has deeper limitations - when a transfer is initiated, the target has no idea who the master of the transaction is, which makes it harder to restart a transaction that has been pre-empted. PCI-X attempts to solve this, and other issues that have come up, but maintaining backwards compatibility will only result in band-aid fixes; the fundamental flaws are still there. Sometimes you need to cut your losses and run - like the ISA/EISA to PCI jump. No compatibility there, but higher speed processors make different demands on the bus. As we've built faster and faster systems, we've found new limitations.
This would make the transition easier, as old PCI cards would run in the new slots (at least this works for 64-bit PCI slots, which can also run 32-bit cards). If this doesn't work for slots using a higher frequency, the chipsets could include two PCI controllers, each driving 3 or 4 slots, and each controller could fall back to 33 MHz if there is one card which doesn't support higher speeds.
This technology exists right now- if you have a PCI "bridge" in your system that supports 66MHz/64 bit, that is how it is supposed to work. Unfortunately if your data has to cross a bridge, you get a performance hit.
We can continue patching PCI, or we can learn from our experience and design a new bus.
Re:Look for PCI-X and InfiniBand (Score:2)
PCI-X does have some very good advantages, but unfortunately if you plug a PCI device into a PCI-X bus, you've just turned that bus (segment) into a PCI bus, no PCI-X advantages. Plus there are serious limitations on the number of loads (cards) on a PCI-X bus- which makes for more bridges on the motherboard, and a performance hit when you have to cross it.
Re:Interesting (Score:2)
Maybe it's all you need (Score:2)
Right now I'm wrestling with the problem of finding inexpensive test systems with 64-bit 66MHz PCI so I can test our FibreChannel products. UltraSPARC 60s are just a little too expensive for setting up even a small SAN testbed. You can get x86 systems, but they are usually Xeon systems, which raises the price too high. I've found motherboards, but half the time when you try and order one, they don't really exist, or they only have one 64-bit 66MHz slot. Or they bridge all the busses together serially, which limits your overall bandwidth.
I can understand that you really don't need more than 32-bit 33MHz PCI at home, but the bandwidth that's required for servers becomes cost effective for the home in only a few years. CPUs are getting faster, and memory bandwidth is going up, but if you want to stream high res video to your hard drive in a couple of years then you're going to need more bandwidth on your PC.
Re:Its not needed (Score:2)
Re:Optical does not mean better.. (Score:2)
Your comment about optics not being able to provide power is also a good point. No one wants a mouse that you have to connect to a power supply separately.
There are some companies that like to play with parallel fibers (Vitesse comes to mind for some reason), but copper busses still have a good number of years left.
Re:Open Hardware... (Score:2)
Re:InfiniBand / Serial ATA / Fibre Channel HDDs (Score:2)
As you can imagine, cable lengths using this technology are very limited...
Since it's a serial interface, cable lengths for 1 Gbit copper aren't as short as you might think. A card with a copper HSSDC interface can use cable lengths up to 30 meters. With a shortwave laser interface this increases to 300 meters, and with a longwave laser it's 10 kilometers. There aren't a lot of applications where you need to have your storage 10 km from your computer, but they do exist. Latency does become a bit of an issue when you start running that much fibre, though.
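To put a rough number on that last point, a minimal sketch of the one-way propagation delay for those cable lengths (assuming a signal velocity of about 2e8 m/s, roughly two-thirds of c):

```c
/* One-way propagation delay for the cable lengths mentioned above,
 * assuming a signal velocity of roughly 2e8 m/s (about two-thirds of c). */
#include <stdio.h>

int main(void)
{
    double v = 2.0e8;                         /* m/s, assumed                */
    double lengths_m[] = { 30, 300, 10000 };  /* copper, shortwave, longwave */

    for (int i = 0; i < 3; i++)
        printf("%6.0f m one way: %7.2f us\n",
               lengths_m[i], lengths_m[i] / v * 1e6);
    /* 10 km comes out to ~50 us each way -- small next to a disk seek,
     * but very noticeable next to a cache hit in the array. */
    return 0;
}
```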
Re:I'm keeping an eye on Apple for the answer (Score:2)
Re:Assign resources (IRQs/ports/DMAs) to SLOTS!!!! (Score:3)
Re:bye bye? (Score:2)
----------
Re:Its not needed (Score:2)
An uncompressed video stream at 800x600, 32 bits, 25 fps gives (800*600*4*25 / (1024*1024)) ≈ 46 MB/sec.
Given that there is surely some overhead in PCI transactions, this means that two of these streams pretty much need the whole bandwidth that PCI can provide.
Actually, there is very little overhead on PCI -- in fact, once you start the stream, you can keep going at 32 bits/cycle forever. That's what the bus was designed for, after all.
Now, streaming two things at one time will add overhead, because you have to interleave the streams, but I don't see why you'd have two long streams going at the same time. If you're dumping your TV stream to the hard drive, that's only one stream.
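The parent's arithmetic, spelled out as a throwaway sketch (idealized: no PCI arbitration or command overhead counted):

```c
/* The parent's numbers, spelled out. Idealized: no PCI arbitration or
 * command overhead is counted, just raw bytes per second. */
#include <stdio.h>

int main(void)
{
    double stream_bps = 800.0 * 600 * 4 * 25;   /* one uncompressed stream */
    double pci_bps    = 33.0e6 * 4;             /* 32-bit @ 33 MHz          */

    printf("one stream: %5.1f MB/s\n", stream_bps / (1024 * 1024));
    printf("PCI 32/33 : %5.1f MB/s\n", pci_bps   / (1024 * 1024));
    printf("streams that fit (ignoring overhead): %.2f\n",
           pci_bps / stream_bps);
    return 0;
}
```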
----------
Kill ISA? Over my dead and lifeless 74LS138 (Score:2)
Proof Wintel alliance is dead. (Score:2)
Re:Kill ISA? Over my dead and lifeless 74LS138 (Score:2)
Which really hoses people like myself who like to run SoftICE on an MDA monitor.
Re:Its not needed (Score:2)
Then let the professionals pay for it. 99% of the user base doesn't need another internal bus. PCI is not a bottleneck for anything but graphics at this time, and AGP addresses that.
Re:Kill ISA? Over my dead and lifeless 74LS138 (Score:2)
I'm mostly bemoaning the lack of even a single ISA slot on the newer motherboards I've seen. Like a great many other people, I have applications for the ISA slot that are not easily replaceable with a PCI or USB solution.
Re:Nobody needs Gigabit Ethernet, either. (Score:2)
What you want is a new, optional bus for connecting extremely high-bandwidth devices. Such a bus would be available on... *gasp!* servers. Joe User should not have to pay for, replace his hardware because of, or otherwise be inconvenienced by, your specialized requirements as a Server Guy.
What this article is talking about is obsoleting and replacing the PCI standard. That, IMHO, is an extremely inappropriate idea whose time will probably never come.
Re:"150 MPH ought to be enough for anyone." (Score:2)
Don't make everybody buy a Porsche just because you want one. That is, in essence, what will happen if you deprecate PCI in favor of some incompatible multigigabit Bus From Hell.
Multiple displays (Score:2)
There's also the basic reason that almost nothing besides gfx cards -need- the huge bandwidth and bus speed of AGP. :)
gfx cards, plural. Media creation often uses multiple displays, one for the document being worked on and the others for palettes, etc.
Game copy protection? Bah. (Score:2)
you'd probably find that your game copy-protection wouldn't work
I didn't. I never played copy-protected games. I played free(beer) games from BBSes such as America Online (back when it was a BBS and not a wannabe ISP). Floppy disk copy protection is a conspiracy to destroy floppy drives so that floppy drive manufacturers can sell more units and make more money.
Better cooling with REGULAR ata cables (Score:2)
The successor to IDE is already on the way: Serial ATA. Reportedly, PC makers like it because the thin cables allow them to build smaller systems with better cooling.
If airflow properties are all you're using Serial ATA for, you don't really need it. All you need is to separate your existing parallel ATA ribbon cables every four wires [earthweb.com] (use an X-Acto knife to make notches, then pull the wire groups apart like Twizzlers candy) and tie-wrap them back together; poof, no more airflow obstruction.
Re:Assign resources (IRQs/ports/DMAs) to SLOTS!!!! (Score:2)
One of my machines has a Supermicro MB [supermicro.com] that does exactly this (came in real handy when PnP for W2K Professional turned up broken).
Is this a new concept?
---
Re:Assign resources (IRQs/ports/DMAs) to SLOTS!!!! (Score:2)
http://www.supermicro.com/PRODUCT/Manuals/MB/440BX/BX3.2i.pdf
section 5-1-5 on PnP setup [supermicro.com] details this on about page 85 or so. Actually, it is the _priority_ for IRQ and DMA that is set here, maybe that is different from what the original post talked about?
---
Re:Interesting (Score:2)
I tend to wonder if having a fiber-based bus is a good idea, though -- it's one thing to run your networks or external storage on fiber, but quite another to try and build your motherboard with it.
What I want to see is a hot-pluggable open expansion bus based on something similar to PCMCIA, especially on the Mac. I find it rather strange that I can crack the case on a G4 while it's running but I can't do anything useful in there.
/Brian
Re:Interesting (Score:2)
And as it happens I've never done the PS/2 mouse trick; I mostly work on Macs and ADB isn't that finicky
(Chilling random thought: SCSI keyboard...)
/Brian
PCI bus will be around a while (Score:2)
Busses, controllers, etc... (Score:2)
Design issues (Score:2)
Bus Speed, and Interrupts (or their equivalent)
The speed issue is fairly straightforward, but the interrupts issue is not so obvious. Part of it is that we are still married to archaic features of the early PCs in some regards.
An old story I heard once said that the original Intel spec of the 8088 (?) processor included a spec for 1024 hardware interrupts. IBM, of course, in their wisdom, said that the chip was not for a mainframe (or something similar), and instead changed the spec to 8 interrupts. Someplace down the road, in the 286 I think (?), they realized they needed more, and expanded to 16 via a second chip. Which is what we have had to deal with ever since.
Of course 20/20 hindsight tells us they should have stuck with the original spec. But I can't say that we would have arrived at a better situation if we had all of those interrupts in the first place. People would have figured out weird and wonderful ways to use them up well before now as it is.
I still think it would be great if they somehow did a re-design so that there would be, say, 64 hardware interrupts. Even with PCI interrupt sharing, things are getting tight, it seems.
I have seen so many machines over the past few years where every interrupt was being used, and if you wanted to add something, you had to take something out. This has upset more than a few people. Which is why a lot of folks stay away from the all-in-one, non-upgradable pizza boxen systems.
Check out the Vinny the Vampire [eplugz.com] comic strip
Re:Its not needed (Score:2)
If I recall right, Panasonic used to sell (maybe still does) a three panel wrap-around display that was completely wild. It was even written up here in Slash one or two times.
I can't find that one right now, but there is this 42" plasma display [panasonic.com] for a mere 10,000 usa dollars. Maybe when I hit the lottery.
Check out the Vinny the Vampire [eplugz.com] comic strip
Assign resources (IRQs/ports/DMAs) to SLOTS!!!!!! (Score:2)
The proper solution is to rigidly, and without exception, divide up the system resources and assign them to each expansion slot. Then, as long as each card has its own slot, there will be no resource conflicts!
At the very least, add a feature to the BIOS to let the user choose plug'n'play or manually assign resources to SPECIFIC SLOTS so that from the card's point of view, it has ONLY those resources to choose from.
The latter solution would be compatible with the current PCI standard.
Re:Its not needed (Score:2)
Here's a future scenario that may be made possible (in part) by faster busses: you create your own server and host it at home on a spare / old machine connected to the internet through your cheap gigabit ethernet connection. Fantasy, you say? In a year or two after uber-fast busses are mainstream, the price will drop enough to be available for use in budget boxes. Cheap high-speed networking equipment will enable faster links across the world at fractions of present cost. A major part of the cost of modern high speed networking comes from the expense of maintaining and operating many exotic high-speed routers, switches, etc. Lowering those costs means more ultra-high-speed backbones, more high-speed links, and more high-speed connections to homes. Gigabit to the curb could very well be a reality, perhaps as soon as 2005.
Adoption of this type of technology for mainstream use could very well bring things like streaming video serving, data warehousing, mega popular web site hosting and serving, etc. into the realm of the hobbyist. As in times past, so in times future: what was once the realm of the elite and the wealthy will become commonplace. It's a good thing.
Re:Who the hell uses PCI for graphics anyway? (Score:2)
Re:bye bye? (Score:2)
One problem is if you send two light sources one way on a fiber. The problem then would be that you have to stick with glass components. Any plastic material that you emit the light through is birefringent, meaning that it will break the light into the plastic's own X and Y axes. This, by itself, is okay because under perfect conditions the light would be reconstructed once it came out of the plastic material. However, plastics exhibit photoelasticity, so stress and strains on the plastic would alter the light differently for the two directions, altering the signals differently depending upon the wave axis and the stress in the fibre. A WDM would split the light back into its original axes, but again, the problem is that the axes were affected individually by the plastic material.
One way to circumvent this is to use all-glass fiber and glass components, or to use multiple optical fibers, one for each signal.
I like your idea for lighting up the keyboard lights using the light from the fiber. Maybe in the future, one of the components in every computer will be a Laser!
Can you tell me how to power my mp3 player off of light? Then I would be impressed.
bye bye? (Score:3)
Whoa, slow down partner. USB and Firewire have something that optical will never have, and that is power. You will never be able to send electricity down an optical cable. The only way to power something would be to send a bright light and use solar panels on the other end--not likely.
Furthermore, SCSI has direct memory access. Unless the new bus has DMA, then SCSI will still have a niche market. IDE? Well, maybe it is time to retire IDE.
I'm keeping an eye on Apple for the answer (Score:4)
Re:PCI disappear? Yeah right... (Score:2)
These guys are crying now... the commodity suppliers have all but dropped ISA. They have to move to industrial PCs or redesign for PCI-based DATAC and control (if they can find it; many controls components are just now becoming available in PCI). And of course, the industrial components are more bucks than the commodity stuff, which bites into their margin (which they had kept small to get a lowball price on the end product).
Personally, I never liked the idea of using commodity PCs for industrial applications in the first place, but those bean counters have a lot of influence in the engineering process. Note to the cognoscenti: I'm not talking about computer-related business here, but about stuff like big industrial machines. The people designing in this sector are surprisingly low-tech, in my experience.
Its not needed (Score:2)
All this development is simply an excuse for the technology industries to sell you a new motherboard, sound card, network card, modem, HD controller and graphics card. The only people who are going to need the new bus are the must-have generation.
Hell, half of my equipment is still ISA. I still get performance comparable to a modern system.
Open Hardware... (Score:4)
It might be interesting if folks started building Open Hardware (much like the Open Software movement). With Open Hardware, the specifications are open (as are the standards), but customers would still need to pay for manufacturing.
I wish we'd start doing something like this... we could then build Linux on top of it, and know that the drivers will work well. Not to mention the benefits of open peer review against the hardware specs.
Re:Interesting (Score:2)
Ok, this should worry any healthy paranoid. (Score:2)
IBM has also been proposing a new HD standard, CPRM, and got smacked down (for the moment).
Hmmm... can anyone tell me if a new bus standard could be used to drop or encrypt certain data types as they pass from one device to another?
Need I say more to explain my paranoia?
-Kasreyn
Yeah, Don't you need ISA... (Score:2)
Trolls throughout history:
InfiniBand / Serial ATA / Fiber Channel HDDs (Score:4)
What about InfiniBand [infinibandta.org], which all the major PC hardware design people seem to be involved with? This takes a "switched fabric" approach to linking function blocks together, via Switches (which is where Brocade [brocade.com] hopes to be the next Cisco). Need more CPU power or more memory? Hot-plug a module into the Infiniband Switch. Version 1.0 of the spec. is available for download at the site, for those interested.
The successor to IDE is already on the way: Serial ATA [serialata.org]. Reportedly, PC makers like it because the thin cables allow them to build smaller systems with better cooling. V1.0 is not going to be much faster than UltraATA/100, but they say there's room for growth there.
Plus, you can have fibre channel (not fiber) hard drives right now, from Seagate (example) [seagate.com], IBM (example) [ibm.com], etc., and the big storage guys are heading that way too. Fibre Channel doesn't always mean Optical - these drives use a 40-pin "copper" connection, which can be a cable or a backplane (for hot-plugging). The SCSI-3 protocol is carried over the Fibre Channel interface, meaning that with a FC driver loaded, the drives look like SCSI devices.
Anyone see a trend here? It's the end of the parallel interface in all its forms, much as USB and FireWire are replacing the humble parallel port...
Re:Its not needed (Score:4)
The above figures don't even include overhead, either; obviously no bus performs at its optimum, because no board is built for perfect bus timings.
As you said, the plan over the last few years has been to shift everything off the PCI bus: graphics went to AGP, and in most modern chipsets the south bridge has a dedicated 266 MB/s link to the north bridge, rather than a standard PCI bus link. They've also taken the ATA controller off the PCI bus and given it a dedicated channel to the north bridge.
Even by taking everything off the PCI bus... it's still hitting its limit. For a bus that is nearly 10 years old, it's done quite well. It's not quite end-game yet though; remember, PCI 2.2 allows 64-bit transfers, so they've effectively doubled the throughput and given it a little more breathing space. However, this isn't a long-term solution.
Interesting (Score:5)
To take that idea a bit further, would it be possible to implement a protocol which is extendable? For example, each device connected gets a dedicated strand of fiber. The system, when polling the device, can negotiate a frequency range and transmission speed dynamically.
If I understand things correctly, this can help the system decide where it needs to put its resources, because higher demand devices would want a higher frequency range and transmission speed (hard drives, video cards etc) where simple devices like the mouse and keyboard will only take a little bit.
I think it'd be a great way to build a scalable architecture which might be unlimited in capacity, while eliminating wasted bandwidth and resources.
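A minimal sketch of what per-device rate negotiation could look like. Nothing here corresponds to a real bus spec; the struct, the rates, and the negotiate() rule are all invented just to make the idea concrete:

```c
/* Hypothetical per-device link negotiation for the idea above. The struct,
 * the rates, and the negotiate() rule are made up; nothing here comes from
 * a real bus spec. */
#include <stdint.h>
#include <stdio.h>

struct link_caps {
    uint32_t max_rate_mbps;   /* fastest rate this end can drive  */
    uint32_t min_rate_mbps;   /* slowest rate it will accept      */
};

/* pick the highest rate both ends can drive, or 0 if there is no overlap */
static uint32_t negotiate(struct link_caps host, struct link_caps dev)
{
    uint32_t rate = host.max_rate_mbps < dev.max_rate_mbps
                  ? host.max_rate_mbps : dev.max_rate_mbps;
    return (rate >= dev.min_rate_mbps && rate >= host.min_rate_mbps) ? rate : 0;
}

int main(void)
{
    struct link_caps host  = { 10000, 1 };    /* controller end     */
    struct link_caps mouse = { 1, 1 };        /* low-demand device  */
    struct link_caps disk  = { 4000, 100 };   /* high-demand device */

    printf("mouse strand: %u Mb/s\n", negotiate(host, mouse));
    printf("disk  strand: %u Mb/s\n", negotiate(host, disk));
    return 0;
}
```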
Optical does not mean better.. (Score:4)
Optics are mainly used as long-distance communication devices (a few feet and up). The reason that USB is used over fiber is that USB provides power... 100 mA if I remember correctly. Fiber cannot reasonably provide power.
And all this is neglecting the cost/size considerations. Gold vias are nice because they are VERY thin and can be stamped into layers really easily. In other words, there will be no PCB-like fiber for a while... too large, way too complicated...
(Sorry.. I get tired of the 'fiber solves all speed/electronics problems' comments.)