Hardware

When The PCI Bus Departs

km790816 writes: "I was just reading an article in the EETimes about the possible war over the technology to replace the PCI bus. Intel has their 3GIO. (Can't find any info on Intel's site.) AMD has their HyperTransport. There has been some talk about HyperTransport going into the XBox. I hope they can agree on a bus. I don't want another bus standard war. So when can I get a fully optical bus on my PC?" Now that's what I'd like: cheap transceivers on every card and device, and short lengths of fiber connecting them up. Bye bye to SCSI, IDE, USB, Firewire ...
  • by Anonymous Coward
    Yes, we all need more fiber in our computers. To keep us regular. And to keep our computers regular. PCI and ISA tend to lead to constipation. Not to mention VESA. So give us fiber. Lots and lots of fiber.
  • by Anonymous Coward on Friday April 20, 2001 @03:23AM (#277510)
    Part of my PhD work is in this area, and I just wanted to throw in some of the things I've seen.
    First, in a nutshell, look for PCI-X. It makes PCI a switch-based medium instead of a bus and will let you utilize your existing PCI devices. It's supposed to be very good for performance (gigabytes/s) and will be out in a few months. This is the one to bank on.
    The real competitor is InfiniBand. The big 7 are involved in this (including Compaq, Intel, IBM, MS, HP...). It's huge, with an 800-page spec you can get online for $10. It may be used to replace local I/O infrastructure (i.e., PCI), as well as communication between hosts (i.e., cluster computers). It functions as an I/O replacement (PCI), a storage area network solution (i.e., Fibre Channel, smart disks), and a system area network (cluster computer interconnect: Myrinet, Gigabit Ethernet, SCI). Multi-gigabit links, switched medium, memory protection for transfers between hosts, split DMA transaction notions (i.e., remote DMA mechanisms). InfiniBand is what happened to a lot of the smaller competing standards, btw.

    A lot of people are not happy with IB. People see it as the big 7 trying to ram more hardware down our throats in a way that forces us to rewrite our OSes and how we do communication. Look on the IB trade page and you will literally come across quotes along the lines of "InfiniBand(tm) is a knight in shining armor to save you from the monster of PCI". Do we really need a new standard when 90% of the people out there use it for low-bandwidth sound cards and 5,000-gate Ethernet transceivers?

    You MUST read this article about PCI-X vs IB:
    http://www.inqst.com/articles/pcixvib/pcix.htm
    grimace/Georgia Tech
  • Fibre Channel doesn't always mean Optical - these drives use a 40-pin "copper" connection, which can be a cable or a backplane (for hot-plugging).

    40 pins? Last time I worked with fibre, the cables were 4 pins. They were heavenly to work with, especially after wiring up SCSI RAID racks (eight 68-conductor cables all running under the floor). 68-conductor cables are a true PITA to work with: they're heavy (especially when shielded and 50' long), don't bend easily, keep their shape even when you don't want them to (they always want to be rolled up), and it's easy to bend the pins on the connector. The Fibre Channel cables are like thin serial cables. A single person can carry enough cables for two or three enclosures without even breaking a sweat, and wire them up in 1/4 of the time.

    Of course this isn't such a big deal in PC-land, but those tiny little cables do allow for better airflow in your case, especially compared to those 68-conductor ribbon walls. I usually split the cable into 4 or 5 segments just so it doesn't create a huge heat shield inside my case (especially important for those hot-running 10k RPM drives!).

    Down that path lies madness. On the other hand, the road to hell is paved with melting snowballs.
  • Well then it really isn't gone is it? The southbridge has, among other things, integrated ISA.

    For all practical purposes, it's gone. In cases where hardware is seen on the 'ISA bus', it's actually on an LPC (low pin count) bus. While LPC is easy to adapt to ISA pinouts, it is not a 1-to-1 connection.

    SiS, for example, does that. The 630 (a combined north and south bridge) connects to a SuperIO chip (the 950), which supports floppy, serial, parallel, IR, and hardware health monitoring. The 950 is configured like any PnP multifunction device (just with a LOT of functions). The 630 handles memory, video, audio, IDE, and other functions.

  • PCI is 32 bit at 33MHz. This is 1000 Mbits per second. The only interface device that needs or ever will need more than that is graphics cards, and they have AGP slots. Most of the devices we use can even be run over USB.

    Unless, of course, you have a gigabit Ethernet card and also want to use the hard drive. (Or two gigabit Ethernet cards.)

  • Do you use a Gigabit card to connect a single computer, and use the full bandwidth at all times? What is it connected to?

    Yes, and then some. Think high capacity mirrored file servers.

    Also, not all that data needs to go to DMA. A reasonable chunk is packet header

    What's that got to do with anything? It still has to get to the card. (And actually, it does need to be DMA. I'd rather queue the packets and forget about them until they're on the 'wire'.)

  • Yeah, that's all MOST people need, right? One graphics card? The rest of us can go to hell. I made this same point when Intel first proposed AGP.
    At least I was wrong on one point: decent PCI cards *are* still available. Just not the new top high-end ones (which are generally what the people who need TWO cards need).
  • Apple figured this out long ago when they came up with NuBus. Plug and Play is a crock of shit. It always breaks sooner or later.

    TI did NuBus (I think), but the idea of a geographically addressed bus isn't new. I seem to recall the Apple II bus working that way (but I'm not sure, I never owned one). EISA (I think) does as well. Some of the very old DEC busses did.

    The problem is if you don't assign enough space to a device, you end up being forced to page flip and do other unpleasant things. If the bus had been designed in 1990, people would have looked at existing cards and decided 16M was far more than anything needed (1M was about as much as any PC video card of the era had). Now your Nvidia card with 64M VRAM (or SGRAM or whatever) would be really painful to access. If you assign too much space, you cramp either the amount of RAM that can be used or the number of expansion cards you can have.

    At the very least, add a feature to the BIOS to let the user choose plug'n'play or manually assign resources to SPECIFIC SLOTS so that from the card's point of view, it has ONLY those resources to choose from.

    Some BIOSes did this; back when there was a mix of PnP and non-PnP cards in most systems it was really useful. I haven't had a problem in years though, so I'm not gonna put this high on my list of stuff I want to change in the world.

  • Consider PCI: There is no spec for what the boot ROM of the card will contain. Usually, it contains only Intel x86 real mode code. This prevents the card from being used as-is in anything else: Your Alpha, your iMac, your Sun, all are out of luck when you plug this card in (unless they have x86 emulators to run the boot code.)

    There is definitely a spec. You can have x86 code in the ROM (for "compatibility"), or OpenFirmware. Which is exactly what you wanted.

    The big problem is nobody is required to do an OF boot ROM, and the even bigger problem is not with the cards, but with BIOSes which don't support OpenFirmware. So rather than having the choice of x86, which only works on 95% of the market, or OpenFirmware, which would work on 100%, card makers have the choice of 5% or 95%. Most of them choose the 95% one, and sometimes make a second boot ROM "for Macintosh" (it'll work on PCI SPARCs and the like though).

    I think the PCI spec even lets you have both ROM images at once, but nobody seems to do it, so there may be a problem with that.

    Still, for a primarily Intel spec it was a good start.

  • Are you sure that you don't mean introduce standards killers every few years?
  • When I say VESA slot, I'm referring to the black card edge connector (16-bit ISA) and the brown card edge connector as one physical unit designed to peacefully co-exist with legacy expansion cards. Now that I think of it, I should include PCI in that list with VESA and EISA, in that as long as the motherboard manufacturer puts both kinds of connectors at any given slot position, it's available for either type of card.

    Interestingly enough, the MCA card edge connector seems to have lived on beyond the electronic philosophy that spawned it (now watch someone pop up to announce that it was originally designed for some short lived video game console or something), with a little geographical relocation and, in one case, a color change, as the VESA connector and the PCI connector.

    The VESA extension was a more direct route into and out of the 486, which is why it only survived on a very few Pentium boards, because extra circuits had to be added to adapt it to the new processor design. If the 486 had remained on top for another 5 or 10 years, VESA would have been very, very big.

  • by unitron ( 5733 ) on Friday April 20, 2001 @03:43AM (#277529) Homepage Journal
    The way I heard it, the reason was that IBM would only license the MCA slots to computer makers that also paid them retroactive royalties for all those years that they had made stuff with and for 8- and 16-bit ISA slots.

    Needless to say, it was the smallest, slowest stampede in history.

    At least EISA and VESA slots would also take regular ISA boards (except for that full length card skirt thing).

  • by CMiYC ( 6473 ) on Friday April 20, 2001 @04:06AM (#277532) Homepage
    Hey guys, don't forget that the PCI and ISA busses are more than just slots on your motherboard. Only recently was the IDE controller on most motherboards moved over to the PCI bus. If you have a floppy drive on your computer, chances are it is still using the ISA bus... So just because you don't have an ISA "slot", don't think that the bus is gone...

    ---
  • Hear hear! An option to turn off the editors' "two cents" in the prefs would be great!

    No more "I cant use it and therefore hate it because I dont have Windows and yet I somehow play all of these Windows games" lies!

    No more "I wont believe it until they send me one for free" BS!

    No more "I want one" whining!

    No more "But really they could just use some 'groovy' tech like optohypernanofiber to make it work better even though I have no idea what I'm talking about and have so little knowledge about this subject that I couldn't even begin to justify this argument but since I have 5 words and never talk in the comments I won't have to" fucking garbage like this!

    ~GoRK
  • > How about we put some thought into something like Sun's OpenFirmware system: a small, simple
    > virtual machine spec to initialize the card and provide any functions needed to boot.

    > Lastly, since the purpose of this VM is very focused, it can provide very high-level
    > operations to the system.

    Are you talking about a VM just to boot the hardware, or for all communications to it? I've been thinking about a custom driver VM for a while, to have completely platform-independent drivers. Since the majority of cards depend on similar operations (mem moves, read/write locations, triggering interrupts, etc.), those could be provided as optimized high-level ops with negligible slowdown. Then you could have truly generic drivers for your hardware. Some ultra-high-bandwidth devices might suffer a bit, like video, but it might be acceptable.
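    A purely hypothetical sketch of what such a high-level op set might look like (the names below are invented for illustration, not taken from any real spec):

        # Hypothetical driver-VM operations -- invented names, illustration only.
        from enum import Enum, auto

        class DriverOp(Enum):
            READ_REG = auto()       # read a device register
            WRITE_REG = auto()      # write a device register
            MEM_MOVE = auto()       # bulk copy between host memory and the device
            SETUP_DMA = auto()      # "configure DMA from here to there"
            ON_INTERRUPT = auto()   # install a handler for interrupt #n

        # A generic driver would then be a sequence of such ops, interpreted (or
        # compiled to native code) by a small per-platform VM, so the same driver
        # image could run unchanged on x86, Alpha, PowerPC, SPARC, and so on.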
  • I've managed to avoid caring about busses and cards and slots and IRQs for some time, largely by having only one computer at home (and that a NeXT). Lately, however, I've begun building a full-out network, and I'm really hating life.

    Will somebody please explain to me why I'm seeing stuff in motherboard FAQs like "Try swapping the cards" or "don't ever put SB Live and a NIC in slots 2 and 3 together!" ??? This is crazy. A slot is a slot is a slot. If it's not, then something is dreadfully wrong (and, since it's not, then something definitely is screwed up in the PC hardware industry).

    From what I've been able to learn lately, one of the big problems is that not all PCI cards properly share the bus. That they don't fully adhere to the spec. So, we've got a (relatively) decent slot spec, but because some (most?) manufacturers cut corners, we've got crazy incompatibility problems.

    Add to that the fact that we're still saddled with this antique IRQ system that makes about as much sense as, well, frankly it makes no sense.

    Please tell me (I haven't gone to read them yet) that these new bus architectures that people are working on will at least solve these problems? There's no reason I should have to drop into the BIOS to change an obscure setting, or to re-position all my cards, just because I decided to drop in a firewire card. But that's what's happening now, and it's driving me bugfu*k.

    On a more idealistic (and probably impractical) note, has anyone considered the possibility of making the bus very generalized? To the point where I could add, say, a 2nd (or 4th) processor just by slapping in a card? Or upgrade from one CPU to another by replacing the current card? Is this the way the old S-100 bus worked? Why can't we do that today? (I know, it'd require a COMPLETE overhaul of how we build computers, but might it be worthwhile?)

    And, on a more immediate level, is there anything that motherboard manufacturers can do today to help relax the problems? Could an enterprising engineer move away from the current north/south bridge architecture to a totally new, IRQ-rich, forgiving, true PnP environment, without breaking existing OS or cards?

    And, finally, from a more pessimistic point of view: even with a new kick-ass solve-all-our-problems promote-world-peace bus, what's to say that cards or MBs will actually be built fully in compliance with the spec, and that we won't just be back in the same boat?

  • The IBM PC was somewhat of a "clone" itself of earlier "industry standard" S-100 8080/Z-80 machines. Not to mention that IBM themselves were selling many of the patented parts (ISA slots, video board chips) to the clone makers, and the machines even came with schematics to help you build your own copy. Not to mention the fact that they happily gave MS a non-exclusive licence to produce the OS.

    IBM was trying to play nice with the existing PC industry, and it's likely that they expected the clones all along. Perhaps they thought they could make additional money licensing their BIOS, but that didn't work out.

    As for the "Industry Standard Architecture" - that was something made up by a trade group that didn't include IBM (even though its members were paying IBM licence fees for the tech). There were certain problems with IBM computers in the early '90s because (as IBM put it) the slots were "AT-compatible" but not fully "ISA-compatible".
    --
  • Actually, Microsoft of all companies came up with a spec for something they called "Device Bay". Essentially it was an internal Firewire/power chain with a standard slide-out bay that would allow hot-swappable drives. Read about it in their "PC'99" spec. The idea was that you would put all of your drives and internal doohickeys on it, with the exception of your (non-removable) boot drive.

    For some reason, it was soundly rejected by the hardware OEMs (maybe because Intel wouldn't put 1394 into the standard chipsets) and died a silent death. Instead, folks like Compaq are chasing proprietary snap-in storage for their iPaq desktop. Too bad, because it would have made adding expansion much more end-user friendly.
    --
  • by IntlHarvester ( 11985 ) on Friday April 20, 2001 @06:14AM (#277541) Journal
    Here's the PC Industry's solution to the floppy drive interface problem:

    1) Change the BIOS setup so that it reads "Legacy Floppy Device".
    2) Do nothing for 10 more years.

    Where margins are razor-thin and every additional device means less profit, you might wonder why they don't just kill the thing. Well, the reason is that there is NO adequate replacement unless they 'solve' the problem of the PC's horrific bootstrapping code.

    Moving to something like OpenFirmware that could treat any device as the legacy "A:" is expensive and would take some form of communication and leadership, so it's cheaper to keep pushing out a 20 year old floppy interface.

    Don't forget, these are people that have found a way to reduce the retail price of a keyboard from $100 to $5, yet that $5 model still has a special "Scroll Lock" light that nobody ever uses.
    --
  • This is very, very likely. I saw a USB 2.0 card at Circuit City last week. It had four external ports and one INTERNAL port. I also own a FireWire card. That card has three external ports and the board etchings (but no hardware) for an internal port. It is very easy to imagine IDE having to compete with one of these standards for internal connection of peripherals down the road.

    It is very easy to imagine a future machine with only USB 2.0 ports inside and out for connecting up all peripherals. Slots won't go away, but their need and number could be vastly reduced. Now, if traditional peripheral boards such as sound cards and modems could be packaged as modules that can be installed without opening the case, we will be another step closer to truly consumer friendly computers.

    Shameless plug time: Information on using Linux and Bluetooth [matlock.com] together is available. Please help keep Linux at the front of this technology.
  • I've seen multi-AGP boards. They're weird, and most of 'em are multi-CPU as well.
  • by Teferi ( 16171 ) <teferi&wmute,net> on Friday April 20, 2001 @03:04AM (#277546) Homepage
    Well, the basic answer is that AGP isn't a bus, it's a port. A bus can have multiple devices chained off a controller, but a port, only 1; each AGP slot on a motherboard needs its own controller chip. That's one reason you only rarely see mobos with multiple AGP slots.
    There's also the basic reason that almost nothing besides gfx cards -need- the huge bandwidth and bus speed of AGP. :)
  • With certain external technologies getting as fast as some of the internal connection technologies, we might even see a Firewire bus, or something equivalent, replace the PCI slots. One added advantage is that you could easily decide to have a small base unit and then add the cards in an extension tower, if necessary.

    It is also worth noting that many technologies that were once found in the form of cards are being moved outside, such as analog video capture, to where Joe consumer has an easier time connecting them. For this reason we will probably find the low to medium end simply not using the internal bus, whereas the medium to high end will.

    Of course, technologies such as PCI buses will still be needed, as parallel connectors provide a level of simplicity in the development of cards and do not limit the total speed of data flow to that of the serial-parallel and parallel-serial converters. Though on the other hand, with more computer technologies focusing on serial-based solutions, maybe this is just inevitable.
  • by the_tsi ( 19767 ) on Friday April 20, 2001 @04:00AM (#277549)
    ...what he's talking about?

    > Now that's what I'd like: cheap transceivers on
    > every card and device, and short lengths of
    > fiber connecting them up. Bye bye to SCSI, IDE,
    > USB, Firewire.

    Here the posting is about replacing the high-bandwidth (formerly local) bus in PC architecture, and he thinks the suggestion regarding an optical bus is to be used for the (relatively) slow I/O busses of IDE, SCSI, Firewire and USB?

    I think there should be Metaeditors to handle the editors who talk before their brain starts working. Either that or Timothy should be disallowed from adding "his two cents" to a news posting.

    -Chris
    ...More Powerful than Otto Preminger...
  • Cheap and Optical aren't two words you normally use together. Unless it's "He went with Firewire because he was too cheap to use optical interconnect"

  • You will never be able to send electricity down an optical cable. The only way to power something would be to send a bright light and use solar panels on the other end--not likely.

    I've seen a telephone that was powered by optical fiber. I believe it was designed by Bell Northern Research.

  • Whoa, slow down partner. USB and Firewire have something that optical will never have, and that is power. You will never be able to send electricity down an optical cable.

    Nah, that's not hard at all. You just combine power along with the optical cable. 2.5" hard drive cables have been doing this for decades. It's just another wire in the bundle.
  • where simple devices like the mouse and keyboard will only take a little bit.

    Somehow, I don't think fiber is the answer for mouse and keyboard, and the reason you just expressed is one of them. The other one is cost: do you really want to build a fiber transceiver into a mouse, and then pay for a fiber cable? When I can get a USB mouse for $12 at the corner store, I'm going to resist fiber mice pretty hard.
  • If all devices were optical, there'd be no reason for electrical current other than to light up the fiber strands in the first place. Therefore, power to expansion cards wouldn't be needed at all and your PC would consume 3% of the energy it is using at this instant.

    Huh? How do I spin the hard drives and CD's? How do I power the laser to burn CDR's? You're losing me.
  • by Brento ( 26177 ) <brento AT brentozar DOT com> on Friday April 20, 2001 @03:19AM (#277560) Homepage
    PCI is 32 bit at 33MHz. This is 1000 Mbits per second. The only interface device that needs or ever will need more than that is graphics cards, and they have AGP slots. Most of the devices we use can even be run over USB.

    Is that like Bill Gates saying no one will ever need more than 640kb? Frankly, I use two graphics cards in my desktop, and the only reason I don't have three is that the cost of another LCD panel is ludicrous. As soon as they come down more, I want one more panel, and then I'll be happy. I hate having to settle for a differently-branded (and usually more expensive) PCI card just because I don't have more AGP ports available. Usually the cutting edge stuff only comes out on AGP.

    Granted, I'm not playing Quake on all three at once, but the only reason I'm not is because I can't. I'd love to be able to play my driving games on all 3, with the left monitor being a left view, and the right being a right view. Or a view of my nearest competitor. Or even just a big rear view mirror. The possibilities are endless.

    The next thing up is storage area networking. PCI cards can't handle the biggest SAN loads, like our DVD jukeboxes at work. We can only use one 300-DVD jukebox per server, because the bus load can't handle more. Think in terms of quad Xeon servers, and it'll make sense - you can indeed shuffle a lot of load across the bus and off the fiber network if you need it. (And no, it's not a single reader per jukebox, there's lots of readers in each jukebox.)
  • Don't forget, these are people that have found a way to reduce the retail price of a keyboard from $100 to $5, yet that $5 model still has a special "Scroll Lock" light that nobody ever uses.

    And yet you can get a HappyHacker keyboard, which removes the lights, the unused keys, and the numeric keypad, for $100 :)

    You'd think they could have passed some of that savings on to the consumer instead of charging more for the same product. If a keyboard costs $5, then a keyboard with 75% of the keys should cost $3.75 (or something like that).

  • Not just IBM. NCR and Radio Shack used it too. EISA may not have been quite as technically cool as Micro Channel, but the backwards compatibility of EISA really killed Micro Channel more than anything else, in my opinion.

  • I'm not so sure about that. Remember how long it took to get AGP on PowerMacs? Also, PowerMacs only recently moved to PC-133 memory after it had been out for a few years. I'd like to see DDR SDRAM in a Mac sometime. They're also still using 64-bit, 33 MHz PCI slots when they could be using 66 MHz slots. I think they're also still only supporting UltraATA/66 instead of UltraATA/100.

    Sure, they were quick to move on USB and their own Firewire, but they aren't exactly riding on the bleeding edge on every single component in their systems.
  • Apple figured this out long ago when they came up with NuBus.

    Apple didn't invent NuBus. My flaky memory tells me it was TI, which could be wrong, but it wasn't Apple. Apple merely selected NuBus for the Mac II, from among several alternatives that already existed at the time.

  • Furthermore, SCSI has direct memory access

    DMA isn't really a feature of the bus so much as the adapter implementation. Ethernet cards can do DMA, and SCSI cards can do polled I/O.

  • Plus, you can have fibre channel (not fiber) hard drives right now, from Seagate (example), IBM (example), etc., and the big storage guys are heading that way too.

    Heading? No, we're already here.

  • You've added 18.8 ns over and above any protocol overhead (usually much worse) and that's at 10 Gb/s!

    19 cycles for a 1GHz processor is actually not too bad; good large-system memory interconnects today are in the hundreds of cycles for anything but the very nearest memory, and even that latency can be pretty well hidden in NUMA or MT systems. The serial nature of the interconnect is simply not that big a performance issue in real memory-system design; the simplicity/physical benefits of serial protocols and cabling are much more important.

  • I was thinking of the "big guys" - EMC

    Yes. I work for EMC.

    There are a few FC-AL enclosures, but only their very biggest systems use FC-AL drives; the rest use SCSI for the drives via controllers/converters. I wonder how long it will take for prices of all-FC systems to drop to "human" levels..?

    Firstly, FC-AL is pretty much dead for high-end systems, which all use switched FC nowadays.

    Secondly, the very biggest storage systems out there still use SCSI inside, not FC...but it doesn't matter. It doesn't matter at all what's *inside* the box, because the whole point of such a system is to achieve high performance through aggregation of small channels and/or to avoid the channels altogether by using a huge cache. Sure, the drives are SCSI, but there are hundreds of them, on dozens of separate SCSI buses. There's plenty of internal bandwidth to make the drives do all that they're capable of doing, so the question is not "why use SCSI when FC is faster" but rather "why use FC when SCSI is fast enough".

    Given that Plain Old SCSI is still doing yeoman's service as a back-end interconnect, and that FC components will probably always be more expensive than SCSI components, there's just no benefit to being "all-FC". Anyone who says otherwise is just putting marketing spin on a questionable engineering decision.

    Disclaimer: I work for EMC, but I don't speak for them (nor they for me). The above is all public information, personal opinion, and simple common sense, unrelated to any EMC trade secrets or marketing hype.

  • Having worked at a company that manufactured and designed PCI devices, I can say there are some serious problems with PCI. First of all, there is no way to properly prioritize traffic. For example, a sound card *must* be able to get the data it needs at low latency, and a NIC must be able to read an entire packet at 100 Mbps or 1 Gbps or you will get underruns, whereas this is less critical for disk I/O or graphics.

    PCI was a major improvement over the VESA localbus, but things have improved since.

    In the networking world, I am seeing a lot of development around AMD's HyperTransport. I don't know the specifics of how it works, but I am starting to see a lot of products utilizing it. For networking, I'm dealing with bandwidths starting at 2.5 Gbps and scaling upwards from there.

    PCI has outgrown its usefulness in the server world. It's not difficult to saturate 133MB/sec of the standard 32-bit 33MHz PCI. 64-bit 66MHz is an improvement, but the rate things are moving, it won't be long before that's a big bottleneck.

    I'm sure that today's big raid servers can probably saturate even a 66MHz 64-bit PCI interface.

    -Aaron
  • by Tower ( 37395 ) on Friday April 20, 2001 @05:19AM (#277572)
    Well, there is a big difference between PCI-X and IB... PCI-X is defined to be a local bus interconnect, while IB is designed to be extensible for everything from local bus to SAN fabric, to the entire Internet (GIDs / IPv6). PCI/PCI-X provide data transfer, and a little notification, while IB is more of an application-level interconnect (with memory protection, provisions for flow control and fairness, and a whole lot of higher-level concepts). IB tries to be everything to everyone (that's what happens when you take the NGIO and FutureIO specs and ram them together, and invite more people to add requirements).

    One thing to remember about IB: the pin count for IB (be it fiber or copper) is substantially lower than that for the PCI local bus specs (it being a serial, rather than parallel, interface). IB could be useful for hooking up SANs, clusters, etc., but for the most part, it won't replace PCI/PCI-X for endpoints. There are several companies that have or are making IB to PCI/PCI-X bridge chips. A great thing for external I/O or storage towers. There's a lot of hardware involved, and for the server market, it could be a great thing. For desktops, well - it will be quite a while before anyone bothers... we still don't have 64b slots/adapters and/or a 66MHz bus on PCs (cost/benefit problems there), so until those are in demand for desktops, there shouldn't be too much of a push for an even higher bandwidth, more expensive internal connection...

    (disclaimer: I work for one of the 'Big 7', though not directly on IB at the moment)
    --
  • If you have a floppy drive on your computer, chances are it is still using the ISA bus... So just because you don't have an ISA "slot", don't think that the bus is gone...

    No, it's really gone. The floppy connector is just part of the South Bridge chip, along with the keyboard controller, serial and parallel ports, and other low-bandwidth stuff. It hangs off of the PCI bus. These devices are just hard-coded to ACT like the old ISA equivalents, with their addresses, interrupts, DMA, etc. being hard-assigned to the old standard values.
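    For concreteness, a small illustrative table of those "old standard values" (the classic PC/AT assignments that the South Bridge still emulates):

        # Classic PC/AT legacy resource assignments, shown as a simple table.
        LEGACY_DEVICES = {
            # device                 (I/O ports,       IRQ, DMA channel)
            "floppy controller":    ("0x3F0-0x3F7",    6,   2),
            "keyboard controller":  ("0x60 / 0x64",    1,   None),
            "COM1 serial port":     ("0x3F8-0x3FF",    4,   None),
            "LPT1 parallel port":   ("0x378-0x37F",    7,   None),
        }

        for name, (ports, irq, dma) in LEGACY_DEVICES.items():
            print(f"{name:22s} ports {ports:13s} IRQ {irq}  DMA {dma}")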

  • An old story I heard once said that the original Intel spec of the 8088 (?) processor included a spec for 1024 hardware interrupts. IBM, of course, in their wisdom, said that the chip was not for a mainframe (or something similar), and instead changed the spec to 8 interrupts. Someplace down the road, in the 286 I think (?), they realised they needed more, and expanded to 16 via a second chip. Which is what we have had to deal with ever since.

    How can something so well documented get garbled so soon?

    The original 8086 design allowed 256 interrupt vectors, hardware and software combined. The hardware has always supported 256 vectors. Intel reserved the first 32 for future use, such as processor exceptions. IBM ignored that and assigned, for instance, INT 5 (which Intel had earmarked for array bounds exceptions) to the BIOS print screen function. Hardware interrupts got shoehorned in at the 0x10-0x1f range.

    Why jam so much stuff in low when there were 256 available? Because the original ROM BASIC used the upper 208 or 224 (memory fails) for short-bytecount subroutine calls. It made for a very simple, compact interpreter. A look at a disassembly of the early ones showed that it was basically machine-translated Z80 code!
  • by overshoot ( 39700 ) on Friday April 20, 2001 @04:53AM (#277575)
    Need more CPU power or more memory? Hot-plug a module into the Infiniband Switch.

    Don't gush. Infiniband, or any cabled serial connection, will never be a memory connection worth having. The reason is latency: what matters with memory isn't how many terabytes you can deliver per hour, but how many picoseconds the processor has to wait for that data that's stalling the pipeline right now. Which is very hard to reduce when the address has to be shipped a bit at a time over a serial link (64 bit times @ 10 Gb/s = 6.4 ns), transported over the cable (~50 ps/cm * ~60 cm = 3.0 ns), memory accessed (technology dependent), serialized (another 64 bits, 6.4 ns), shipped back (another 3.0 ns) and finally it gets to the processor. You've added 18.8 ns over and above any protocol overhead (usually much worse) and that's at 10 Gb/s!

    Not gonna happen.
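    For what it's worth, here is that arithmetic written out as a quick sketch (using the same figures assumed above; memory access time itself is left out, as in the comment):

        # Rough latency arithmetic as laid out above; the 10 Gb/s link rate,
        # 50 ps/cm propagation delay, and ~60 cm cable length are the figures
        # assumed in the comment, not measured values.
        LINK_GBPS = 10.0        # serial link rate, Gb/s
        ADDR_BITS = 64          # bits serialized to ship the address
        DATA_BITS = 64          # bits serialized to ship the data back
        CABLE_CM = 60           # one-way cable length, cm
        PROP_PS_PER_CM = 50     # propagation delay, ps/cm

        serialize_ns = ADDR_BITS / LINK_GBPS            # 6.4 ns
        deserialize_ns = DATA_BITS / LINK_GBPS          # 6.4 ns
        cable_ns = CABLE_CM * PROP_PS_PER_CM / 1000.0   # 3.0 ns each way

        total_ns = serialize_ns + cable_ns + deserialize_ns + cable_ns
        print(f"added latency: {total_ns:.1f} ns")      # -> 18.8 ns, before protocol overhead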
  • And while I admit I may have missed it somewhere in there - I don't think I saw a real concern we should be having right now regarding any new bus...

    Think content-control. DMCA. MPAA. Nightmare city.

    Think encryption throughout the bus, licences for hardware, and NOTHING for GPL open source-based systems.

    As has been stated by others, there is no real need for a new bus - unless you start thinking about copy prote... oops, I mean CONTENT CONTROL...

    Worldcom [worldcom.com] - Generation Duh!
  • I think that the major problem with PCI is that not all slots are "bus mastering", and some types of cards require a bus-mastering slot. There is an additional problem when you consider IRQ sharing - but these are all work-arounds that have come from the evolution of the hardware. Yes, bad design is bad design, and you have some motherboards that do things in very stupid ways; hopefully the mistakes and ambiguities will work themselves out.

    Compare the ease of use of the bus technologies in their order of development:
    • ISA - jumpers for IRQs, No sharing...
    • SCSI - IDs, termination, wide/narrow, high byte active/passive termination, and more connectors than you can shake a stick at...
    • IDE ATA/33 - master or slave - cables sometimes a little odd or keyed the wrong way
    • PCI - IRQs are now automatic, based on the slot
    • ISA PnP- sometimes a nasty hack but works better than opening the case to change a jumper every time
    • USB- the first dummy-proof connectivity. Only thing to worry about is power, and it is pretty good about babysitting you on that one
    • IDE ATA/66,100 - a special, standard cable that takes all ambiguities out of IDE cabling
    • IEEE 1394- Idiot proof autoconfiguration, not used much as of yet


    Things are getting better all the time. I remember on the first PC I had, there were jumpers that had to be set in order to have the memory recognized. Now all that has to be done is to plug things in, and they (hopefully, usually) work. However, I don't think that there is any chance in hell that any OS running on PC hardware will not need to be rewritten a bit in order to deal with the next generation of busses. A lot of the problems that we encounter with current systems come from the fact that they are designed for backward compatibility...

  • My AMD motherboard has no ISA slots or bus. Everything is PCI with one AGP slot.
  • Or everyone who hates Tim should post right here. YOU SUXOR TIM!
  • How exactly does this solve anything? "If you close your eyes the monsters will go away.." Take your solipsist attitudes elsewhere pal.
  • Hopefully Intel learned from the RAMBUS story that they can't push through standards at their whim any longer. As long as no clear standard emerges, Intel can't expect much support (cards/peripherals) and hence not much of a market for their eagerly announced new bus standard. With PCI-X as an easy option to prolong the lifetime of PCI for another five years, I don't see the industry rushing for a standard that essentially makes most present hardware obsolete.

    I think that gives Intel, AMD, and the rest of the industry ample time to work out a standard everyone can live with. It also occurs to me that Intel is more concerned with keeping control of the bus standard (as in who gets paid for the license, maybe) than with technical facts. The article presents very few facts, and basically Intel wants everyone to hold their breath half a year for their preliminary specs.
  • Basic (1X) AGP has the same bandwidth as 66 MHz 64-bit PCI, as found in servers and stuff; ~0.5 GBps.

    Minor technical correction: Basic (1x) AGP is ~266 MB/sec, while the now-standard AGP 2x is ~533 MB/sec and the AGP 4x/AGP Pro is ~1000 MB/sec. For brief but nice technical details, check here [quasartec.com].

    --LP
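    A quick back-of-the-envelope for those numbers, assuming the 66 MHz AGP base clock and 32-bit data path, multiplied by the transfer rate:

        # AGP peak-bandwidth arithmetic matching the figures quoted above.
        BASE_CLOCK_MHZ = 66.67   # AGP base clock
        BUS_BYTES = 4            # 32-bit data path

        for name, multiplier in (("AGP 1x", 1), ("AGP 2x", 2), ("AGP 4x", 4)):
            mb_per_s = BASE_CLOCK_MHZ * BUS_BYTES * multiplier
            print(f"{name}: ~{mb_per_s:.0f} MB/s")
        # -> AGP 1x: ~267 MB/s, AGP 2x: ~533 MB/s, AGP 4x: ~1067 MB/s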

  • > At the very least, add a feature to the BIOS to let the user choose plug'n'play or manually assign resources to SPECIFIC SLOTS so that from the card's point of view, it has ONLY those resources to choose from.

    The PCI spec sort of follows that:

    Take a look at this diagram of hardware PCI interrupts [viahardware.com]

    And this section called Interrupt Pin Assignment [viahardware.com]

  • by Emil Brink ( 69213 ) on Friday April 20, 2001 @03:38AM (#277590) Homepage
    There's also the basic reason that almost nothing besides gfx cards -need- the huge bandwidth and bus speed of AGP.
    If that were true, then this entire thread, and both Intel's and AMD's replacement bus technologies, would be moot. I don't think they're doing it for fun. Basic (1X) AGP has the same bandwidth as 66 MHz 64-bit PCI, as found in servers and stuff; ~0.5 GBps. Obviously, that isn't enough either. Gigabit Ethernet is a prime example of something that usually sits on the bus, and that definitely needs more bandwidth (1 Gb / 8 = ~125 MB/s, which is very close to PCI's 133 MB/s limit). If you want multiple Gb Ethernet boards on the same bus, the bus has to be bigger.
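    To put rough numbers on that comparison (theoretical peaks, decimal units, protocol overhead ignored):

        # Peak-bandwidth comparison behind the comment above.
        pci_32_33 = 33.33 * 4      # 32-bit, 33 MHz PCI  -> ~133 MB/s
        pci_64_66 = 66.67 * 8      # 64-bit, 66 MHz PCI  -> ~533 MB/s
        gbe = 1000 / 8             # Gigabit Ethernet    -> 125 MB/s per card

        print(f"PCI 32/33: ~{pci_32_33:.0f} MB/s")
        print(f"PCI 64/66: ~{pci_64_66:.0f} MB/s")
        print(f"GbE:        {gbe:.0f} MB/s per card")
        # One GbE card nearly fills a 32/33 PCI bus; several cards at line rate
        # need a wider and/or faster bus.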
  • The question I have is, will this new bus be CPU neutral?

    Consider PCI: There is no spec for what the boot ROM of the card will contain. Usually, it contains only Intel x86 real mode code. This prevents the card from being used as-is in anything else: Your Alpha, your iMac, your Sun, all are out of luck when you plug this card in (unless they have x86 emulators to run the boot code.)

    How about we put some thought into something like Sun's OpenFirmware system: a small, simple virtual machine spec to initialize the card and provide any functions needed to boot.

    Realize: many cards wouldn't need this. Only video, storage and network cards would need to have a driver (to allow the boot process to begin.) Once the OS, whatever it was, got loaded, it could then load native drivers for the card if they existed.

    And even if the drivers don't exist: slow and working is better than not working at all. I'd rather be able to get my new floobySCSI card working, albeit slowly, than have a new paperweight.

    Lastly, since the purpose of this VM is very focused, it can provide very high-level operations to the system. It can have instructions like "Configure DMA from here to there", "Move a potload of data from here to there", "On interrupt #n do this...", and those micro-ops would be in the native machine code in the VM implementation. Thus, a card's VM driver could be pretty close to optimal anyway.

    However, this idea does two things that doom it:
    1. It allows non-Intel systems to not be second class citizens.
    2. It allows non-Microsoft OSs to not be second class citizens.

    As a result, kiss the support of the two big players you need goodbye.
  • The VESA extension is a port, not a slot. It hooks directly to the front side bus of older hardware. This, and some signalling reasons, is why you could only use so many VESA ports as you ramped up the bus speed (3 @ 25 MHz, 2 @ 33 MHz, 1 @ 40 MHz, 0 @ 50 MHz).

    --
  • It _should_ be possible to create a PCI specification with, let's say, 133 MHz and 64 bits...

    It already exists - it's called PCI-X, coming to a computer near you, but it has its own problems, like a limited number of targets per bus (very limited, 1 or 2 I think). One problem with PCI is that it was designed as a low-power bus - which is a good idea when you have 64 bits of data coming down the line. Unfortunately, that makes it difficult to deal with when you're getting to higher speeds.

    PCI has deeper limitations - when a transfer is initiated, the target has no idea who the master of the transaction is, which makes it harder to restart a transaction that has been pre-empted. PCI-X attempts to solve this, and other issues that have come up, but maintaining backwards compatibility will only result in band-aid fixes; the fundamental flaws are still there. Sometimes you need to cut your losses and run - like the ISA/EISA to PCI jump: no compatibility there, but higher speed processors make different demands on the bus. As we've built faster and faster systems, we've found new limitations.

    This would make the transition easier, as old PCI cards would run in the new slots (at least this works for 64-bit PCI slots, which can also run 32-bit cards). If this doesn't work for slots using a higher frequency, the chipsets could include 2 PCI controllers, each driving 3 or 4 slots, and each controller could fall back to 33 MHz if there is one card which doesn't support higher speeds.

    This technology exists right now- if you have a PCI "bridge" in your system that supports 66MHz/64 bit, that is how it is supposed to work. Unfortunately if your data has to cross a bridge, you get a performance hit.

    We can continue patching PCI, or we can learn from our experience and design a new bus.
  • First, in a nutshell, look for PCI-X. It makes PCI a switch-based medium instead of a bus and will let you utilize your existing PCI devices.

    PCI-X does have some very good advantages, but unfortunately if you plug a PCI device into a PCI-X bus, you've just turned that bus (segment) into a PCI bus, no PCI-X advantages. Plus there are serious limitations on the number of loads (cards) on a PCI-X bus- which makes for more bridges on the motherboard, and a performance hit when you have to cross it.
  • That's true only to a certain extent. Economies of scale mean that the development and setup costs are spread out over more units. However, the raw materials and the equipment for the optics are still considerably more expensive than copper wire. You can't lose money on each unit and make up for it on volume. This is something that many of the internet companies are just learning.
  • Maybe that's all you need, or will need for a while. I have friends that are completely happy with their Pentium 133 computers and see no need to upgrade.

    Right now I'm wrestling with the problem of finding inexpensive test systems with 64-bit 66MHz PCI so I can test our FibreChannel products. UltraSparc 60s are just a little expensive to set up even a small SAN testbed. You can get x86 systems, but they are usually Xeon systems, which raises the price too high. I've found motherboards, but half the time when you try to order one, they don't really exist, or they only have one 64-bit 66MHz slot. Or they bridge all the busses together serially, which limits your overall bandwidth.

    I can understand that you really don't need more than 32-bit 33MHz PCI at home, but the bandwidth that's required for servers becomes cost effective for the home in only a few years. CPUs are getting faster, and memory bandwidth is going up, but if you want to stream high res video to your hard drive in a couple of years then you're going to need more bandwidth on your PC.
  • Actually, they're separate PCI busses. You can only have one 64-bit 66MHz slot per bus, or two 64-bit 33MHz slots. When you see one 64/66 slot, two 64/33 slots, and several 32/33 slots, there are actually 3 PCI busses on the system. If you're really lucky the busses aren't all bridged together and sharing 532 MB/s worth of bandwidth.
  • I agree that it only makes sense to use copper wires for short distances, but it's kind of misleading to say that optical is slower over small distances. The amount of time spent converting from optical to a voltage just isn't significant. There are 10 Gbit optical transceivers available. They are EXTREMELY expensive, but you can get them. 2 Gbit ones are finally coming down to a reasonable price, but they still cost in the price range of an average PC motherboard.

    Your comment about optics not being able to provide power is also a good point. No one wants a mouse that you have to connect a power supply to separately.

    There are some companies that like to play with parallel fibers (Vitesse comes to mind for some reason), but copper busses still have a good number of years left in them.
  • If you can afford the resources to work on the hardware (analyzers, scopes, CAD tools), then you can probably afford the nominal fee to join the trade association and join in the standards-making process. The specifications are already open, and people are already building Linux on top of it.
  • I think the original intention was to use optical only on the interface cards. The disks have a copper interface and plug into disk chassis. The chassis then has either a copper or optical interface. They usually have a copper DB9 interface, and a MIA is used to convert it to shortwave or longwave optical.

    As you can imagine, cable lengths using this technology are very limited...

    Since it's a serial interface, cable lengths for 1 Gbit copper aren't as short as you might think. A card with a copper HSSDC interface can use cable lengths up to 30 meters. With a shortwave laser interface this increases to 300 meters, and with a longwave laser it's 10 kilometers. There aren't a lot of applications where you need to have your storage 10 km from your computer, but they do exist. Latency does become a bit of an issue when you start running that much fibre though.
  • My first reaction was to think "what is this guy smokin'?", but Apple has been pretty forward-thinking where busses for personal computers are concerned. The Apple Desktop Bus, for example, was actually a really good idea for its time. There are some things that Apple does very well.
  • by flatrock ( 79357 ) on Friday April 20, 2001 @08:56AM (#277605)
    That works fine in a simple Apple Macintosh, which only has a couple of expansion slots and doesn't have very many devices that need interrupts. It's just not reasonable to limit the number of devices to the limited number of interrupts available. It's also not reasonable to require computers to have a vast number of interrupts available in case someone wants to use them. These resources can be safely and efficiently shared. The reason plug and play doesn't work is crappy hardware and driver development. I write drivers for FibreChannel boards, and my drivers can share interrupts and still perform well. Don't try to tell me that some Ethernet card has to have its own interrupt. It's simply a poor implementation.
  • DMA has been a commonplace thing on every bus: SCSI, EIDE, PCI, AGP. That's hardly an important selling point for SCSI.

    ----------
  • 1000 Mbit/s is 125 MB/s.
    An uncompressed video stream at 800x600, 32 bits, 25 fps gives (800*600*4*25/(1024*1024)) = ~45 MB/s.

    Given that there is surely some overhead in PCI transactions, this means that two of these streams pretty much need the whole bandwidth that PCI can provide.

    Actually, there is very little overhead on PCI -- in fact, once you start the stream, you can keep going at 32 bits/cycle forever. That's what the bus was designed for, after all.

    Now, streaming two things at one time will add overhead, because you have to interleave the streams, but I don't see why you'd have two long streams going at the same time. If you're dumping your TV stream to the hard drive, that's only one stream.


    ----------
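    The quoted figure, spelled out (uncompressed 32-bit frames, binary megabytes as in the quote):

        # Uncompressed video stream bandwidth from the quoted calculation.
        width, height = 800, 600
        bytes_per_pixel = 4          # 32-bit color
        fps = 25
        pci_peak_mb_s = 133          # 32-bit, 33 MHz PCI theoretical peak (decimal MB/s)

        stream_mb_s = width * height * bytes_per_pixel * fps / (1024 * 1024)
        print(f"one stream:  ~{stream_mb_s:.1f} MB/s")               # ~45.8 MB/s
        print(f"two streams: ~{2 * stream_mb_s:.1f} MB/s of ~{pci_peak_mb_s} MB/s PCI peak")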

  • ISA is still a great bus for low-speed I/O. It's very simple to toss a couple of PALs on a PCB and rock & roll. You also have the joy of interrupt lines (including NMI). Try that with USB.
  • All of the stories about AMD gaining on Intel have been good, but this is the first possible proof that the Wintel (Windows/Intel) alliance is truly dying. Remember, just a year or two ago we all talked about how MS and Intel were working together to ensure they led the PC market... and now here comes the news that MS is going to pick an AMD technology over Intel's solution for the Xbox. I really think this drives home the idea that Intel might be in a little trouble.
  • Unfortunately, MS's PC98 standard has pretty much done ISA in. Even the off-brand developer machines we've been getting from AMD no longer have any ISA support.

    Which really hoses people like myself who like to run SoftICE on an MDA monitor. :(
  • Indeed! Keep in mind professionals use something more than a mere TV-tuner card and will need something better than PCI.

    Then let the professionals pay for it. 99% of the user base doesn't need another internal bus. PCI is not a bottleneck for anything but graphics at this time, and AGP addresses that.
  • You're right, the distinction between ISA slots and the ISA bus is an important one. The bus itself isn't dead by a long shot.

    I'm mostly bemoaning the lack of even a single ISA slot on the newer motherboards I've seen. Like a great many other people, I have applications for the ISA slot that are not easily replaceable with a PCI or USB solution.
  • Actually, I missed the part where you explained why everyone who buys a PC needs to subsidize your "servers with large storage arrays, SANs, or most types of clusters."

    What you want is a new, optional bus for connecting extremely high-bandwidth devices. Such a bus would be available on... *gasp!* servers. Joe User should not have to pay for, replace his hardware because of, or otherwise be inconvenienced by, your specialized requirements as a Server Guy.

    What this article is talking about is obsoleting and replacing the PCI standard. That, IMHO, is an extremely inappropriate idea whose time will probably never come.
  • A better analogy, perhaps.

    Don't make everybody buy a Porsche just because you want one. That is, in essence, what will happen if you deprecate PCI in favor of some incompatible multigigabit Bus From Hell.

  • There's also the basic reason that almost nothing besides gfx cards -need- the huge bandwidth and bus speed of AGP. :)

    gfx cards, plural. Media creation often uses multiple displays, one for the document being worked on and the others for palettes, etc.

  • you'd probably find that your game copy-protection wouldn't work

    I didn't. I never played copy-protected games. I played free(beer) games from BBSes such as America Online (back when it was a BBS and not a wannabe ISP). Floppy disk copy protection is a conspiracy to destroy floppy drives so that floppy drive manufacturers can sell more units and make more money.

  • The successor to IDE is already on the way: Serial ATA. Reportedly, PC makers like it because the thin cables allow them to build smaller systems with better cooling.

    If airflow properties are all you're using Serial ATA for, you don't really need it. All you need is to separate your existing parallel ATA ribbon cables every four wires [earthweb.com] (use an X-Acto knife to make notches, then pull the wire groups apart like Twizzlers candy) and tie-wrap them back together; poof, no more airflow obstruction.

  • >At the very least, add a feature to the BIOS to let the user choose plug'n'play or manually assign resources to SPECIFIC SLOTS so that from the card's point of view, it has ONLY those resources to choose from

    One of my machines has a Supermicro MB [supermicro.com] that does exactly this. (Came in real handy when PnP for W2K Professional turned up broken.)

    Is this a new concept?

    ---

  • Sorry to reply to my own post, but I found the user manual online (PDF, sorry), where it shows the instructions for using this feature on the P6DBE board I have.

    http://www.supermicro.com/PRODUCT/Manuals/MB/440BX/BX3.2i.pdf

    Section 5-1-5 on PnP setup [supermicro.com] details this, on about page 85 or so. Actually, it is the _priority_ for IRQ and DMA that is set here; maybe that is different from what the original post talked about?

    ---

  • USB is massive overkill for a mouse and keyboard. The big advantage it has is that (in theory for most PC users; in practice for Mac users) it provides one less port you have to fight with. You may not need anything resembling USB's bandwidth for a keyboard, but it's great for a scanner or a printer (passable for mass storage, but FireWire does better there).

    I tend to wonder if having a fiber-based bus is a good idea, though -- it's one thing to run your networks or external storage on fiber, but quite another to try and build your motherboard with it.

    What I want to see is a hot-pluggable open expansion bus based on something similar to PCMCIA, especially on the Mac. I find it rather strange that I can crack the case on a G4 while it's running but I can't do anything useful in there.

    /Brian
  • I'm not saying that it's a bad idea to use USB for such purposes; I'm merely trying to say that you don't need USB horsepower for a keyboard. It really is just a bandwidth issue, and I'm not saying that's a problem.

    And as it happens I've never done the PS/2 mouse trick; I mostly work on Macs and ADB isn't that finicky :-)

    (Chilling random thought: SCSI keyboard...)

    /Brian
  • Even though a replacement is proposed for PCI, it will be around for a bit longer. Just look at the way ISA was slowly phased out. While many motherboards have only PCI now, (and an AGP slot for video), many main boards still have legacy ISA slots, usually 1 or 2. If a new standard bus is agreed upon, PCI 'legacy' slots will probably remain on newer boards.
  • ISA is not an extension of the PCI bus. ISA preceded PCI. ISA never had a controller, but was rather an extension of the 8086 external bus. Reads and writes to the bus were controlled by the address and I/O lines of the CPU. ISA "controllers" are simply bridges which provide access to the ISA bus through a PCI bus. The ISA bus itself is just a collection of wires and transceivers, no smarts. The PCI bus actually has some smarts and provides some basic services such as memory space translation and resource conflict avoidance.



  • As I see it, there are two issues here.

    Bus Speed, and Interrupts (or their equivalent)

    The speed issue is fairly straight ahead, but the interrupts issue is not so obvious. Part of it is that we are still married to archaic features of the early PCs in some regards.

    An old story I heard once said that the original Intel spec of the 8088 (?) processor included a spec for 1024 hardware interrupts. IBM, of course, in their wisdom, said that the chip was not for a mainframe (or something similar), and instead changed the spec to 8 interrupts. Someplace down the road, in the 286 I think (?), they realised they needed more, and expanded to 16 via a second chip. Which is what we have had to deal with ever since.

    Of course, 20/20 hindsight tells us they should have stuck with the original spec. But I can't say that we would have arrived at a better situation if we had all of those interrupts in the first place. People would have figured out weird and wonderful ways to use them up well before now as it is.

    I still think it would be great if they somehow did a redesign so that there would be, say, 64 hardware interrupts. Even with PCI interrupt sharing, things are getting tight, it seems.

    I have seen so many machines over the past few years where every interrupt was being used, and if you wanted to add something, you had to take something out. This has upset more than a few people. Which is why a lot of folks stay away from the all-in-one non-upgradable pizza-box systems.

    Check out the Vinny the Vampire [eplugz.com] comic strip

  • I'd love to be able to play my driving games on all 3, with the left monitor being a left view, and the right being a right view. Or a view of my nearest competitor. Or even just a big rear view mirror. The possibilities are endless.

    If I recall right, Panasonic used to sell (maybe still does) a three panel wrap-around display that was completely wild. It was even written up here in Slash one or two times.

    I can't find that one right now, but there is this 42" plasma display [panasonic.com] for a mere 10,000 usa dollars. Maybe when I hit the lottery.

    Check out the Vinny the Vampire [eplugz.com] comic strip

  • Apple figured this out long ago when they came up with NuBus. Plug and Play is a crock of shit. It always breaks sooner or later.

    The proper solution is to rigidly and without exception, divide up the system resources and assign them to each expansion slot. Then, as long as each card has its own slot, there will be no resource conflicts!

    At the very least, add a feature to the BIOS to let the user choose plug'n'play or manually assign resources to SPECIFIC SLOTS so that from the card's point of view, it has ONLY those resources to choose from.

    The latter solution would be compatible with the current PCI standard.

  • Hmmm, not needed eh? Well, of course there's the standard gaming needs. Then there are similar needs, 3D rendering etc. With faster processors and faster busses, 3D rendering gets faster too (duh). 2D graphics also get a boost. Should be good for graphic artists. But, here's something us techies should be thinking about. Servers. Imagine a single box that can handle incredibly high usage loads (in webserver terms, think serving up gigabytes and millions of hits per day). Now imagine something somewhat similar. Routers. Switches. Gateways. Faster busses + faster ram + faster processors = better (and cheaper!) networks.

    Here's a future scenario that may be made possible (in part) by faster busses: you create your own server and host it at home on a spare / old machine connected to the internet through your cheap gigabit ethernet connection. Fantasy, you say? A year or two after uber-fast busses are mainstream, the price will drop enough to be available for use in budget boxes. Cheap high-speed networking equipment will enable faster links across the world at a fraction of present cost. A major part of the cost of modern high-speed networking comes from the expense of maintaining and operating many exotic high-speed routers, switches, etc. Lowering those costs means more ultra-high-speed backbones, more high-speed links, and more high-speed connections to homes. Gigabit to the curb could very well be a reality, perhaps as soon as 2005.

    Adoption of this type of technology for mainstream use could very well bring things like streaming video serving, data warehousing, mega-popular web site hosting and serving, etc. into the realm of the hobbyist. As in times past, so in times future: what was once the realm of the elite and the wealthy will become commonplace. It's a good thing.

  • AGP is not a bus standard. AGP is for one card, and only one. Two AGP slots on one motherboard is sheer fantasy and would require a new "AGP standard". Notice that they don't call it an interface or a bus, they call it a "port". Which is what AGP is. It's a very high speed port to the memory. If you want AGP speeds in a bus, use 66-MHz 64-bit PCI (which is a bus, and is a standard).
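
    A quick back-of-the-envelope check on the comparison above; these are the textbook peak figures (width/8 x clock x transfers per clock), and real-world throughput is always lower:

        /* Theoretical peak bandwidth of a parallel bus. */
        #include <stdio.h>

        static double peak_mb_s(int width_bits, double clock_mhz, int xfers_per_clock)
        {
            return (width_bits / 8.0) * clock_mhz * xfers_per_clock;
        }

        int main(void)
        {
            printf("PCI 32-bit/33MHz : %6.0f MB/s\n", peak_mb_s(32, 33.33, 1));
            printf("PCI 64-bit/33MHz : %6.0f MB/s\n", peak_mb_s(64, 33.33, 1));
            printf("PCI 64-bit/66MHz : %6.0f MB/s\n", peak_mb_s(64, 66.66, 1));
            printf("AGP 1x (32/66)   : %6.0f MB/s\n", peak_mb_s(32, 66.66, 1));
            printf("AGP 2x           : %6.0f MB/s\n", peak_mb_s(32, 66.66, 2));
            printf("AGP 4x           : %6.0f MB/s\n", peak_mb_s(32, 66.66, 4));
            return 0;
        }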
  • Okay, mister optical know-it-all. There's a flaw in your idea (even though I must admit that it is an interesting and cool^H^H^H good one).

    One problem is if you send two light sources one way on a fiber. The problem then is that you have to stick with glass components. Any plastic material that you send the light through is birefringent, meaning that it will break the light into the plastic's own X and Y axes. By itself that is okay, because under perfect conditions the light would be reconstructed once it came out of the plastic material. However, plastics exhibit photoelasticity, so stress and strain on the plastic would alter the light differently for the two directions, altering the signals differently depending upon the wave axis and the stress in the fiber. (The stress-optic relation behind this is written out after this comment.) A WDM would split the light back into its original axes, but again, the problem is that the axes were affected individually by the plastic material.

    One way to circumvent this is to use all-glass fiber and glass components, or to use multiple optical fibers, one for each signal.

    I like your idea for lighting up the keyboard lights using the light from the fiber. Maybe in the future, one of the components in every computer will be a Laser!

    Can you tell me how to power my mp3 player off of light? Then I would be impressed.
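
    For reference, the photoelastic effect described in this comment is usually expressed with the stress-optic law. Writing C for the material's stress-optic coefficient, sigma_1 and sigma_2 for the principal stresses, t for the path length through the stressed material, and lambda for the wavelength:

        \Delta n = C\,(\sigma_1 - \sigma_2), \qquad
        \delta = \frac{2\pi t}{\lambda}\,\Delta n

    where delta is the phase retardation picked up between the two polarization axes; a strain gradient along a plastic fiber therefore shifts the relative phase of the two signals in a way that varies with the mechanical load.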

  • by grammar nazi ( 197303 ) on Friday April 20, 2001 @03:07AM (#277654) Journal
    Bye bye to SCSI, IDE, USB, Firewire

    Whoa, slow down partner. USB and Firewire have something that optical will never have, and that is power. You will never be able to send electricity down an optical cable. The only way to power something would be to send a bright light and use solar panels on the other end--not likely.

    Furthermore, SCSI has direct memory access. Unless the new bus has DMA, SCSI will still have a niche market. IDE? Well, maybe it is time to retire IDE.

  • by mblase ( 200735 ) on Friday April 20, 2001 @04:17AM (#277658)
    ...no joke. Sure, it sounds stupid that they should, of all companies, come up with the replacement for PCI, but remember that it took Apple's decision to completely replace serial with USB and SCSI with FireWire for those two technologies to be seen as bona fide replacements by PC manufacturers. If a practical and real replacement for PCI does come along, I'm going to expect to see it on the new PowerMacs long before I'll see it on an Intel motherboard.
  • ie we'll have motherboards with PCI replacement slots, PCI slots, and ISA slots, rather than just PCI and ISA slots as now ;)
    Mobos with ISA slots are getting harder to find. I'm seeing this in the industrial controls sector, where there are a lot of ISA-based DATAC and control boards. There are also a lot of machine builders that bought into the soft-PC control hype of the past few years and designed around commodity PCs.

    These guys are crying now... the commodity suppliers have all but dropped ISA. They have to move to industrial PCs or redesign for PCI-based DATAC and control (if they can find it; many controls components are just now becoming available in PCI). And of course, the industrial components are more bucks than the commodity stuff, which bites into their margin (which they had kept small to get a lowball price on the end product).

    Personally, I never liked the idea of using commodity PCs for industrial applications in the first place, but those bean counters have a lot of influence in the engineering process. Note to the cognoscenti: I'm not talking about computer-related business here, but about stuff like big industrial machines. The people designing in this sector are surprisingly low-tech, in my experience.

  • PCI is 32 bits at 33MHz, which is roughly 1000 Mbits per second (about 133 MB/s). The only interface device that needs or ever will need more than that is graphics cards, and they have AGP slots. Most of the devices we use can even be run over USB.

    All this development is simply an excuse for the technology industries to sell you a new motherboard, sound card, network card, modem, HD controller and graphics card. The only people who are going to need the new bus are the must-have generation.

    Hell, half of my equipment is still ISA. I still get comparable performance with a modern system.

  • It might be interesting if folks started building Open Hardware (much like the Open Software movement). With Open Hardware, the specifications are open (as are the standards), but customers would still need to pay for manufacturing.

    I wish we'd start doing something like this... we could then build Linux on top of it, and know that the drivers will work well. Not to mention the benefits of open peer review against the hardware specs.

  • Note that both Fibre Channel and its relative Gigabit Ethernet use copper for short-distance interconnects. It's cheaper, and by using signaling techniques such as Gigabit's PAM-5, you can get pretty impressive data rates over copper. Although these are really long(er)-distance connections and we probably want short-distance interconnects, the same idea applies. The SGI Origin 2000 and 3000 distributed supercomputers have processors, video boxes, PCI boxes, and storage in different racks. They use a huge copper cable and have aggregate bandwidths of up to 716 GB/s. (The manual for the 3000 also warns not to turn the power switch on or off by yourself. You must have a field service engineer do it for you.)
  • IBM proposes new bus standard to go in all PC's?

    IBM has also been proposing a new HD standard, CPRM, and got smacked down (for the moment).

    Hmmm... can anyone tell me if a new bus standard could be used to drop or encrypt certain data types as they pass from one device to another?

    Need I say more to explain my paranoia?

    -Kasreyn
  • To use an internal modem in linux?

    Trolls throughout history:

  • What about InfiniBand [infinibandta.org], which all the major PC hardware design people seem to be involved with? This takes a "switched fabric" approach to linking function blocks together via switches (which is where Brocade [brocade.com] hopes to become the next Cisco). Need more CPU power or more memory? Hot-plug a module into the InfiniBand switch. Version 1.0 of the spec is available for download at the site, for those interested.

    The successor to IDE is already on the way: Serial ATA [serialata.org]. Reportedly, PC makers like it because the thin cables allow them to build smaller systems with better cooling. V1.0 is not going to be much faster than UltraATA/100, but they say there's room for growth there.

    Plus, you can have fibre channel (not fiber) hard drives right now, from Seagate (example) [seagate.com], IBM (example) [ibm.com], etc., and the big storage guys are heading that way too. Fibre Channel doesn't always mean Optical - these drives use a 40-pin "copper" connection, which can be a cable or a backplane (for hot-plugging). The SCSI-3 protocol is carried over the Fibre Channel interface, meaning that with a FC driver loaded, the drives look like SCSI devices.

    Anyone see a trend here? It's the end of the parallel interface in all its forms, much as USB and FireWire are replacing the humble parallel port...

  • by Aztech ( 240868 ) on Friday April 20, 2001 @03:29AM (#277689)
    Yup, the PCI bus does ~1 Gbit/s, or 133 megabytes/sec. Considering we have things like 1 Gbit/s Ethernet (~125 MB/sec) and Ultra160 SCSI, soon to be Ultra320, it's well under siege. I know these technologies aren't in the mainstream yet, but it certainly doesn't leave much headroom for growth when they do trickle down into general computers.

    The above figures don't even include overhead, either; obviously no bus performs at its optimum, because no board is built for perfect bus timings.

    As you said, the plan over the last few years has been to shift everything off the PCI bus: graphics went to AGP, and in most modern chipsets the south bridge has a dedicated 266 MB/s link to the north bridge rather than a standard PCI bus link. They've also taken the ATA controller off the PCI bus and given it a dedicated channel to the north bridge.

    Even after taking everything off the PCI bus, it's still hitting its limit. For a bus that is nearly 10 years old, it's done quite well. It's not quite end-game yet, though; remember PCI 2.2 allows 64-bit transfers, so they've effectively doubled the throughput and given it a little more breathing space. However, this isn't a long-term solution.
  • by JediTrainer ( 314273 ) on Friday April 20, 2001 @03:04AM (#277709)
    That is an interesting idea - that we might replace all of our device connections (the bus) with fiber.

    To take that idea a bit further, would it be possible to implement a protocol that is extensible? For example, each device connected gets a dedicated strand of fiber. The system, when polling the device, can negotiate a frequency range and transmission speed dynamically.

    If I understand things correctly, this could help the system decide where it needs to put its resources, because higher-demand devices (hard drives, video cards, etc.) would want a higher frequency range and transmission speed, whereas simple devices like the mouse and keyboard will only take a little bit.

    I think it'd be a great way to build a scalable architecture, one that might be unlimited in capacity while eliminating wasted bandwidth and resources. (A rough sketch of what that negotiation might look like follows.)
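
    Since no such bus exists, every name below is made up; this is just a toy sketch of the negotiation idea in the comment above: each device reports the rate range it can use on its strand, and the host scales the grants to fit the total optical budget.

        /* Hypothetical per-strand bandwidth negotiation (toy policy). */
        #include <stdio.h>

        struct lane_request {
            const char *name;
            double      min_mbps;   /* device is useless below this rate */
            double      max_mbps;   /* device cannot use more than this  */
        };

        struct lane_grant {
            double mbps;            /* negotiated rate for this strand   */
        };

        /* Give each device its maximum if the budget allows; otherwise
         * scale everything down proportionally, but never below min.   */
        static void negotiate(const struct lane_request *req,
                              struct lane_grant *out, int n, double budget_mbps)
        {
            double want = 0.0;
            for (int i = 0; i < n; i++)
                want += req[i].max_mbps;

            double scale = (want <= budget_mbps) ? 1.0 : budget_mbps / want;
            for (int i = 0; i < n; i++) {
                double rate = req[i].max_mbps * scale;
                out[i].mbps = (rate < req[i].min_mbps) ? req[i].min_mbps : rate;
            }
        }

        int main(void)
        {
            struct lane_request req[] = {
                { "keyboard",    0.01,    0.1 },
                { "hard drive", 100.0, 1600.0 },
                { "video card", 500.0, 4000.0 },
            };
            struct lane_grant grant[3];

            negotiate(req, grant, 3, 4000.0);   /* pretend 4 Gbit/s budget */
            for (int i = 0; i < 3; i++)
                printf("%-10s -> %8.2f Mbit/s\n", req[i].name, grant[i].mbps);
            return 0;
        }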
  • by Liquid-Gecka ( 319494 ) on Friday April 20, 2001 @03:14AM (#277712)
    Just because optical has the potential to be faster doesn't mean it IS faster. You have to remember that you not only have to have a fiber, you also have to have a very expensive source and receiver setup on either side. And fiber cannot be run in parallel the way electrical signals can: each card would have to have a chain-like device to allow communication with the next device on the bus, i.e. fiber to the first drive, that drive has fiber to the next drive, and so on. Another problem with fiber is that it is SLOWER over short distances because of the voltage -> light -> voltage conversion process.

    Optics are mainly used as long-distance communication devices (a few feet and up). The reason that USB is used over fiber is that USB provides power: 100 mA, if I remember correctly. Fiber cannot reasonably provide power.

    And all this is neglecting the cost/size considerations. Gold vias are nice because they are VERY thin and can be stamped into layers really easily. In other words, there will be no PCB-like fiber for a while: too large, way too complicated...

    (Sorry.. I get tired of the 'fiber solves all speed/electronics problems' comments.)
