PCI Express - Coming Soon to a PC Near You
Max Romantschuk writes "I've been following the emergence of PCI Express for some time now. PCI Express, previously known as "Third Generation I/O" or "3GIO", is the technology set to replace PCI. PCI has been with us for around ten years now, and is rapidly running out of bandwidth. Last week Anandtech ran an interesting story on PCI Express. The technology has previously been covered by Hexus and ExtremeTech as well. I feel this technology looks all set to replace PCI, and we really do need some new bus technology to keep up with the bandwidth demands of today's applications. Or is this just yet another way to force us into a new upgrade cycle?"
It will not just replace PCI (Score:5, Informative)
Speed (Score:2, Interesting)
Re:Speed (Score:5, Informative)
Parallel faster than Serial (Score:5, Informative)
I'm afraid this might add to the confusion about serial interfaces being 'faster' than parallel. While it is true that you don't have to worry about data/clock skew when using serial interfaces, enabling you to clock them faster, a parallel interface running at the same clock speed as a serial interface will always be faster in terms of data throughput. The reason for this is simple: serial = 1 bit per clock, parallel = more than 1 bit per clock.
So, saying that serial is faster than the "equivalent" parallel interface is confusing, and incorrect, because one could be referring to equivalent clock rates being used for each interface, in which case parallel will provide at least twice the data throughput. On the other hand, "equivalent" could be referring to identical throughput rates, in which case the serial and parallel interfaces would provide, by definition, identical data rates.
The real advantage that PCI Express has over PCI/PCI-X is that it is a point-to-point, rather than a multi-drop, bus. This setup requires less time between pin transitions, meaning that it can be clocked faster. Also, like Ethernet, a serial protocol can embed the clock into the data stream, so clock/data skew is no problem whatsoever.
Serial is not better than parallel any more than digital is better than analog; there are just physical reasons why implementing point-to-point serial at significantly higher clock rates is easier than multi-drop parallel.
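To put some toy numbers on that (nothing below comes from an actual spec; it's just the clock-times-width arithmetic, with an illustrative 2.5 GHz picked for the serial case):

    # Toy model: throughput = lanes * 1 bit per lane per clock * clock rate.
    # Illustrative numbers only, not taken from the PCI or PCI Express specs.

    def throughput_mb_s(lanes, clock_mhz):
        """One bit per lane per clock cycle; result in MB/s."""
        return lanes * clock_mhz / 8

    print(f"parallel, 32 bits @ 33 MHz : {throughput_mb_s(32, 33):6.1f} MB/s")
    print(f"serial,    1 bit  @ 33 MHz : {throughput_mb_s(1, 33):6.1f} MB/s")
    print(f"serial,    1 bit  @ 2.5 GHz: {throughput_mb_s(1, 2500):6.1f} MB/s")

Same clock, parallel wins by the width of the bus; serial only pulls ahead because the point-to-point link can be clocked orders of magnitude faster.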
Anyone still awake?
Didn't think so
Re:Parallel faster than Serial (Score:2)
Re:Parallel faster than Serial (Score:4, Informative)
Look on any motherboard of the past 5-10 years - you'll see bunches of wiggly traces deliberately lengthened to deal with just these problems.
I think this thread has, however, become rather confused - parallel/serial vs. point-to-point/multidrop are two different distinctions.
On a multidrop bus you must meet setup/hold at every slot on the bus, and you get nasty reflections that make this even harder to implement (look at the PCI spec for an example of wrestling with these problems). Point-to-point signals can be cleanly terminated and only have to be correct at one place - the other end of the link - so the amount of skew can be greatly reduced.
Inter-bit skew on a parallel bus has its own problems: you have to meet setup/hold on every bit wrt the clock - that's a hard layout problem. You can solve this a lot of ways - bundling (a la EV6/AMD slot2k), where bits are bundled into smaller chunks with their own clocks, or at the extreme running a clock per bit, or using self-clocking protocols on each bit (which wastes a little bandwidth). These techniques cost more gates and latency than the traditional methods - parallel isn't impossible, it's just a little harder.
Re:Parallel faster than Serial (Score:5, Informative)
Need more bandwidth - add more pins. With each pin delivering 100 megabytes per second, there's lots of room to grow.
Re:Parallel faster than Serial (Score:3, Insightful)
That's the whole point, isn't it? Wide and slow or narrow and f
Re:Speed (Score:2)
Having multiple serial channels seems like the most expensive way of going about things, bu
Re:Speed (Score:5, Informative)
Re:Speed (Score:2, Interesting)
Re:Speed (Score:2, Interesting)
Re:Speed (Score:2)
Re:Speed (Score:3, Interesting)
Re:Speed (Score:3, Informative)
Not that this concept is anything new; half of the 40 pins from earlier versions of ATA were grounded (every other one). Same thing with old parallel ports: the data lines have ground interleaved.
Re:It will not just replace PCI (Score:5, Funny)
This is not technically true, though I can see why you would be confused.
They anticipate that customer demand for PCI-X will be so great that it will be difficult to sell AGP boards, therefore AGP will be renamed PCI-X. In order to distinguish between the two, the PCI-X spec will be designated "PCI-X High Speed" while the AGP spec will be designated "PCI-X Full Speed"
Re:It will not just replace PCI (Score:5, Funny)
the PCI-X spec will be designated "PCI-X High Speed" while the AGP spec will be designated "PCI-X Full Speed"
The really terrible thing here is that I can't tell if you're being serious or not.
No use. The well has been poisoned. (Score:5, Insightful)
An existing alternative (Score:2)
Re:It will not just replace PCI (Score:3, Interesting)
Where PCI Express will really shine is in block transfer devices such as HD and CD-ROM and high volume streaming devices such as those producing vi
Re:PCI doesn't need to be replaced (Score:2, Insightful)
Also, you seem to be looking at individual PCI devices rather than the total bandwidth for all devices. Right now if you want more than 2 IDE drives and have them not affect each other, you need multiple IDE controllers. Individually they may fit into the available bandwidth fine, but combine several and you can be
Re:PCI doesn't need to be replaced (Score:2)
Exactly - this is why server chipsets often have two PCI busses.
Say goodbye to legacy crap (Score:2, Funny)
Oh Shit, we all have iMacs now...
Re:Say goodbye to legacy crap (Score:5, Funny)
Whooooa! (Score:2)
What can I say: WOW! - Hello gigabit netcards, (multiple) extreme graphics adapters etc!
Re:Whooooa! (Score:2, Funny)
PHB: Very good, Igor, very good.
Marketing: Shall I go for another?
PHB: They will come to you in time.
Marketing: What's that sound?
PHB: It's the sound of thousands of mid-level product managers from struggling PC sellers banging on your cubicle wall, young Igor.
Marketing: *cries*
That said, I'm excited about PCI Express. Perhaps not as much as Daath, but excited nonetheless. 'Course, this just means that the GeForce4 I bought 6 months ago will look about as quaint as Combat! on an
rat race (Score:2)
PCI 2 is the same as PCI? (Score:4, Funny)
I'm just wondering now if that external HD USB2 case I bought is really 1.1 or not... Grrrrr.
Hmmm (Score:4, Interesting)
Or maybe current PCI devices don't support DRM out of the box? Please upgrade your bus technology, so we can use all this extra bandwidth to transfer huge crypto keys to/from your hardware, just in case you want to play a copyrighted sample on your soundcard.
(-1 Paranoid)
Re:Hmmm (Score:3, Informative)
Only the electrical connection will actually change; the 'language' spoken over it will be no different than today's PCI. This way drivers will not need to be changed or upgraded to support the PCI-X version of the hardware.
It's just like how Serial ATA is replacing our current ATA.
The standard they use to speak is not changing, only the electrical interface.
As a matter of fact, adding DRM (To PCI-X *OR* PCI) would indeed require driver changes, so you c
Re:Hmmm (Score:2)
Actually, it depends on your content/key ratio. If you have only one 512-byte key to send for each sound file played, it won't make any difference indeed.
But when you start associating keys with groups of sound samples (or movie frames), you may end up transmitting more crypto than content. More bandwidth would be a boon in this case.
However I don't know if this kind of implementation is planned for DRM (I
Does PCI Express solve the shared IRQ problem? (Score:5, Insightful)
that doesn't have a problem with too many extensions because of a limited number of IRQs.
Today most mainboards come with many onboard PCI components. If you really are going to put 3-5 extra PCI cards into a stock PC, you usually end up in a nice game of 'let's see what order works best', or cannot use all cards together at all.
Re:Does PCI Express solve the shared IRQ problem? (Score:5, Informative)
The problem today is more with interrupt line sharing (#A, #B, #C, #D -- some motherboards add more, but four is the old spec), and cards sharing the actual interrupt line and not just the interrupt request (IRQ) number, depending on how you place them.
But yes, to answer your question, there are fewer problems, due to the parallel-serial nature (now is that an oxymoron?) of the controller interface, which works somewhat like SCSI does.
At least until 4x, 8x and 16x PCI Express arrives in force, and cards start competing and assuming that all the streams are available for THEIR card, much like some cards today think it's OK to bump up the PCI latency, 'cause the user SURELY must have some unpopulated slots we can steal time from...
Yeah, when that happens, it may be hell to troubleshoot, but we'll just wait and see...
Re:Does PCI Express solve the shared IRQ problem? (Score:2)
Under Windows 2000, I see IRQs ranging from 1 to 31 on my three year old PIII Xeon. My P4 has available IRQ slots going up to at least 22. I'd say there is plenty of room.
Re:Does PCI Express solve the shared IRQ problem? (Score:4, Informative)
Is this what the consumers want or... (Score:2, Insightful)
Re:Is this what the consumers want or... (Score:3, Insightful)
So, at some point, even if you develop faster drives, you're going to need a faster bus to support them.
Re:Is this what the consumers want or... (Score:5, Informative)
Moderators on crack. This is just plain wrong. FireWire is 400 or 800 Mbit/s, while SCSI is up to 320 MByte/s, IDE is up to 133 MByte/s, and Fibre Channel is up to 250 MByte/s. These numbers aren't directly comparable, because different buses have different amounts of overhead, but for sure FireWire is a slow also-ran when talking only about performance. (When talking about cost, flexibility, etc., FireWire looks better, of course.) As far as PCI goes, the top end is over 1 GByte/s, which is a bottleneck for some applications, but not FireWire. Also, in high-end servers you'll have a number of PCI buses to improve performance.
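The usual trap in these comparisons is Mbits vs. MBytes. A quick sketch normalizing the peak figures quoted above, taking them at face value and ignoring protocol overhead (the PCI-X 64-bit/133 MHz entry is my own addition as the "over 1 GByte/s" top end: 64 bits x 133 MHz / 8):

    # Normalize the quoted peak figures to MByte/s so they line up.
    # Numbers are nominal peaks, not measured throughput.

    quoted_peaks = [
        ("FireWire 400",      400, "Mbit/s"),
        ("FireWire 800",      800, "Mbit/s"),
        ("Ultra320 SCSI",     320, "MByte/s"),
        ("ATA/133 (IDE)",     133, "MByte/s"),
        ("Fibre Channel",     250, "MByte/s"),
        ("PCI-X 64-bit/133", 1066, "MByte/s"),
    ]

    for name, value, unit in quoted_peaks:
        mbyte_s = value / 8 if unit == "Mbit/s" else value
        print(f"{name:18s} {mbyte_s:6.0f} MByte/s peak")

Even FireWire 800 only works out to about 100 MByte/s, which is why it isn't the bottleneck here.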
Wanted: (Score:2, Offtopic)
Ding, Ding!
Cynicism overkill (Score:2, Troll)
Re:Cynicism overkill (Score:2)
I have no idea. I've submitted 2 stories which have been accepted, so you'll have to ask the editors about the picking of cynical-tagline-ending-stories...
What I do know is that I felt raising the question would benefit the discussion, and I wanted to hear people's opinions in comparison to my own. Questioning everything is part of my nature, I guess.
Re:Cynicism overkill (Score:2)
Mine too.
See below for details.
Re:Cynicism overkill (Score:3, Interesting)
Re:Cynicism overkill (Score:2, Funny)
You must be new around here. Welcome to
About time (Score:5, Insightful)
Re:About time (Score:2)
Why? (Score:2)
Re:Why? (Score:2)
Re:Why? (Score:5, Insightful)
> from a faster PCI bus?
Gigabit ethernet, soon 10gbit ethernet..
multiple firewire buses, or even one firewire 800 bus..
Multiple high speed graphics cards..
Multiple SCSI or fiberchannel buses..
> I don't have anything.
> I really have nothing that will gain any benefit.
Well, thank you for deciding that what you need is exactly what everyone else needs and that they should be happy with that.
A reasonable upgrade cycle (Score:5, Interesting)
OK, so yes, we can probably live with PCI for longer (possibly much longer), but why not introduce a new standard with better potential? It maintains complete backwards compatibility with regular PCI components, so manufacturers of hardware don't even have to change anything. Of course another issue is motherboard cost, but there will always be new features put into successive motherboard generations that aren't in widespread use yet... like Serial ATA, gigabit Ethernet, etc. And there will probably be motherboards available for a lower cost without those features as well.
Physical Connector (Score:5, Funny)
How are those tiny little serial connectors supposed to support the weight of my 2007 GeForce Maxx Fury 7 video blaster with its jet turbine fan? They'll snap like twigs, I tell ya!
Re:Physical Connector (Score:2)
Re:Physical Connector (Score:3, Informative)
From the demo I saw about a year ago, the cards mount as-is and get power from the standard PCI bus, but that is all.
Then there is a small connector on the edge of the card opposite the back panel (some disk cards have LED header pins here, and some cards, like USB and FireWire cards, have a floppy-style power connector here).
This is where the serial PCI-X will connect, and it will have a thin cable connecting to wherever on the mobo it will go.
Lat
Is HDTV the only application? (Score:2)
I'm not saying that there aren't lots of uses for a faster bus, but changing to PCI was painful (Remember EISA/VESA Local FUD? Remember motherboards with four slots, not
Re:Is HDTV the only application? (Score:5, Interesting)
One gigabit Ethernet card can do >80MB/s
Together they are limited by PCI.
Now try RAID, a TV card with PCI overlay, GFX cards (yes, they need a few 100 MB/s)...
Plus remember that you NEVER EVER reach 133 MB/s with PCI. Even a single device can be happy to get 110 MB/s with long bursts, and if you have many devices, effective total bandwidth is more like 66 than 133 MB/s.
Re:Is HDTV the only application? (Score:2)
Linux support... (Score:4, Interesting)
What upgrade cycle? (Score:5, Insightful)
It may well be one of the intentions of it, but one thing I don't get is that with CPU speeds and hard disk capacities where they are now, the average computer buyer (who probably is not very well represented on Slashdot) no longer really needs to upgrade their computer, so changing the interface/slot shape/etc. won't really matter to them.
I know I'm generalising, but the only applications that really push today's computers are games (and high-end scientific programs, but they're a fairly minor special case), and I would guess that most computers are not used primarily for games - think families rather than "serious gamers". Serious gamers will always be upgrading their computers to the latest and greatest anyway - they don't need to be forced into an upgrade cycle.
It's getting to the point now where by the time the average family decides they need to upgrade their computer, it is easier (and maybe even cheaper) to just buy the latest middle-of-the-line computer package.
I'd almost question whether the whole idea of upgrading is itself becoming obsolete for an average computer user?
Re:What upgrade cycle? (Score:2, Informative)
Re:What upgrade cycle? (Score:3, Informative)
I used to upgrade relatively often as the performance of parts increased significantly from month to month. Quick upgrade from 386 to 486 to pentium to celeron to AMD. After that last one, to a fast Athlon, then a faster Athlon, I just quit noticing any real benefit. Sure, my kernels would build faster but even with that I have slowed down as linux kernels have just gotten plain excellent.
I jumped up to 512MB Ram and got a big HDD and am set for quite a while, it would seem. No big performance gains f
Component upgrade vs Computer upgrade... (Score:2)
It's getting to the point now where by the time the average family decides they need to upgrade their computer, it is easier (and maybe even cheaper) to just buy the latest middle-of-the-line computer package.
I'd almost question whether the whole idea of upgrading is itself becoming obsolete for an average computer user?
I imagine what they're talking of here
Re:What upgrade cycle? (Score:3, Informative)
3 gigabit interfaces (2 in redundant failover mode, one for backups), 6 Fibre Channel disks on 2 diskplanes, and 2 QLogic HBAs to our EMC array.
Each one of those cards has quite the ability to saturate a single PCI bus. Thankfully, the boxes we're putting them into have 4 different PCI busses, so we can put 1 fiber or 1 HBA onto each.
Question (Score:2, Funny)
User-facing bus losing importance (Score:5, Interesting)
Ten years ago it was almost a given that at some point, you (or your Computer Guy) had to add or replace one of the cards -- add Ethernet, upgrade the video, whatever. Nowadays, the hardware on-board is more than sufficient, and any of those "special" accessories you get, such as storage drives for your digital camera, or a scanner, or whatever, are more likely than not going to be USB or FireWire.
It's very likely that the mainstream desktop computer is going to move to a slotless "brick" form factor. This would have the side benefit of making it much cheaper. This form factor is available already, but it's not yet cheap because it's still considered a "specialty" unit.
I'd also be happy to see the return of the Commodore 64 form factor -- just shove everything into the keyboard. Plug in your mouse and monitor and Ethernet, and go.
I like to upgrade (Score:2, Interesting)
But I like to upgrade!
I usually build two computers a year. If I sell my computer every six months at 75% (which is about the going price) of its original price, I can keep up experimenting with sweet new hardware.
As an added bonus, I've built an expanding network of friends, friends' friends and practically unknown people who have been referred to me by the others. They buy my second hand computers, consult me whenever they want to buy a computer
Rapidly running out of bandwidth? (Score:3, Interesting)
some IDE controllers, each of which can push maybe 50MB/sec to the media (RAID-0) tops.
audio, keyboard, some other I/O, maybe 1 MB/sec
NIC, 10MB/sec tops
Ok, so I do the math and get 61MB/sec, or just under 1/2 the bandwidth of PCI. For 90% of the PCs out there, this is sufficient. For high end boxes, you can use 64bit or 66MHz PCI, or PCI bridges.
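The same math as a quick sketch, using the rough per-device figures above (they're estimates, not measurements; the ~100 MB/s "realistic" ceiling is a commonly quoted ballpark for shared 32-bit/33 MHz PCI, not something from the comment itself):

    # Add up the rough per-device estimates and compare to plain 32-bit/33 MHz PCI.

    devices_mb_s = {
        "IDE controllers (RAID-0, to the media)": 50,
        "audio, keyboard, misc I/O":               1,
        "100 Mbit NIC":                           10,
    }

    total = sum(devices_mb_s.values())
    pci_peak      = 133   # 32 bits * 33 MHz / 8, theoretical
    pci_realistic = 100   # ballpark sustained figure for a shared 32/33 bus

    print(f"total demand : {total} MB/s")
    print(f"PCI peak     : {pci_peak} MB/s (realistic ~{pci_realistic} MB/s)")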
Tell me again why this technology is necessary?
Re:Rapidly running out of bandwidth? (Score:5, Informative)
Now that's a no-brainer.
My computer is by far not a high-end box, but PCI is a (small) bottle-neck, even for me.
Let's see: 2 IDE channels, 2 disks, that's 50 MB/s each, 1 GBit network, that's peak 100MB/s. A U2W SCSI host adapter with 1 single, very fast disk is good for 70MB/s. Then there is USB2 (everything is USB2 now) and Firewire (each 50MB/s). Adds up to (peak) 370MB/s.
You and I and most people know that a typical user - and even most unusual users like the /. crowd - will never use all devices at once. But just copying data from disk to network saturates the bus.
A simple fix is 66MHz 64-bit PCI, but those are very rare in consumer machines. So while PCI Express might currently be overkill, I doubt simple 33MHz 32-bit PCI will be sufficient even for consumer-grade computers. Just imagine 10 years ago when PCI started: most were using ISA, and that was enough for most ordinary users. 10Mbit/s Ethernet cards used less than 1MB/s. Who needed a faster bus? Only servers needed PCI (or EISA).
Watching the long migration from ISA to PCI until ISA was (mostly) replaced, I don't expect PCI Express to replace PCI within 5 years. And in 5 years I would bet that PCI looks like ISA does now: slow and outdated.
You don't need to use the PCI bus for all of those (Score:2)
My computer is by far not a high-end box, but PCI is a (small) bottle-neck, even for me.
Let's see: 2 IDE channels, 2 disks, that's 50 MB/s each, 1 GBit network, that's peak 100MB/s. A U2W SCSI host adapter with 1 single, very fast disk is good for 70MB/s. Then there is USB2 (everything is USB2 now) and Firewire (each 50MB/s). Adds up to (peak) 370MB/s.
On almost all modern chipsets, these devices do not use the PCI bus at all, unless you've put in your own add-in cards. (so, I ima
Re:Rapidly running out of bandwidth? (Score:5, Informative)
PCI-X will finally bring HDTV (~200MB/sec) within reach. What this means is that you'll be able to have a software-only HDTV decoder - which will make it trivial to receive HDTV broadcasts on a PC, and make HD-DVD players possible.
At the pro level, this is just about the last thing that a $50K SGI system has over a cheap Linux PC - playback and manipulation of uncompressed HDTV video. It's about time PCs finally caught up to "workstations" in the bus department...
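As a sanity check on that ~200 MB/sec figure, uncompressed HD at 8 bits per channel lands in the same ballpark (rough sketch only; blanking intervals and audio are ignored, and 4:2:2 chroma sampling would cut it by a third):

    # Rough data rate for uncompressed HDTV video.

    width, height = 1920, 1080
    bytes_per_pixel = 3        # 8-bit RGB or 4:4:4; broadcast 4:2:2 would be 2
    frames_per_second = 30     # 1080i30

    rate = width * height * bytes_per_pixel * frames_per_second
    print(f"uncompressed 1080i30: {rate / 1e6:.0f} MB/s")   # roughly 187 MB/s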
Re:Rapidly running out of bandwidth? (Score:3, Interesting)
You're thinking max theoretical (Score:3, Insightful)
Now, add in IDE RAID cards and SCSI cards, and those alone can saturate the bus. Consider that a single SCSI HD can now pump out about 70MB/sec when used in an STR-intensive application.
Re:Rapidly running out of bandwidth? (Score:3, Interesting)
(1) bandwidth
AGP is pushed to the limits with 8x, and will not go any higher easily. PCI-Express, on the other hand, will easily start with the same bandwidth, and will have plenty of headroom for future cards.
(2) cost
If your audio card is only using 1MB/s, then you will use a slow x1 lane to hook it up, and your motherboard becomes way simpler and cheaper to design -- instead of routing 40 pins across all PCI slots, you'll route 10 pins directly to the x
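For reference, the figures I've seen reported for the proposed signalling are 2.5 Gbit/s per lane with 8b/10b encoding, which scales per direction roughly like this (treat these as ballpark numbers from what's been published about the draft spec, not gospel):

    # Per-direction bandwidth by lane count, assuming 2.5 Gbit/s line rate
    # and 8b/10b encoding (8 data bits carried in every 10 line bits).

    line_rate_gbit = 2.5
    efficiency = 8 / 10

    per_lane_mb_s = line_rate_gbit * 1000 * efficiency / 8   # = 250 MB/s

    for lanes in (1, 4, 8, 16):
        print(f"x{lanes:<2d}: {lanes * per_lane_mb_s:6.0f} MB/s per direction")

So even a single x1 lane comfortably beats a whole shared 32-bit/33 MHz PCI bus, while keeping the trace count per slot low.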
Faster pr0n (Score:2, Funny)
Insightful : +1 (Score:2)
Pretty simple.
Then again, pr0n was also the reason I learned to tell time (parents got home from work at exactly 5:35pm each day so I could read my dad's Playboys until shortly before then and not get caught.) I was the best time-teller in the whole damn first grade.
Tiny systems! (Score:2)
Infiniband Lite? (Score:2)
More confusion? (Score:2)
Does this mean that now we'll have motherboards with ISA, PCI, PCI Express (I know - they'll physically co-exist with PCI slots happily, but that doesn't alleviate the confusion for the !/. crowd) and AGP 1x/2x/4x/8x (take your pick)? Any future motherboard that has ONLY PCI Express (meaning no AGP slot, not zero PCI slots) would rightly be considered less backwards compatible, and would therefore offer much less choice in terms of components, lowering its perceived market value. The perfect example here w
Re:More confusion? (Score:3, Informative)
Well, it's more than one ISA slot, but at least they're there [supermicro.com]...
The transition will be very long (Score:2)
While PCI Express will probably replace PCI, the transition will probably be very long.
As the Anandtech authors point out, there were still ISA slots on motherboards 10 years after the introduction of PCI. So I would expect that you'll still be able to use your PCI cards in new computers five years from now.
This is long overdue. (Score:3, Informative)
"I feel this technology looks all set to replace PCI, and we really do need some new bus technology to keep up with the bandwidth demands of today's applications. Or is this just yet another way to force us into a new upgrade cycle?"
When I look back at the explosion of technology within the past decade, and the ever-continuing attempts to eradicate the bottlenecks that computer systems have had, PCI Express is a breath of fresh air. For example, let's take a look at processors: within the past ten years processor speeds have doubled every eighteen months, if we go by Moore's Law. It's hard to believe it was a little over ten years ago that Intel released the first Pentium chips [compsoc.net]. HDD speeds (physical read) have also increased dramatically, from about 2 MB/s for a 635MB HDD to over 45 MB/s for a modern HDD. Graphics were given a facelift with the introduction of the AGP bus, pushing transfer speeds up from PCI's 133 MB/s to 2.1GB/s. However, many systems are used for a LOT more than video rendering and are geared more towards storage markets, where data access speed is of the utmost importance. 64-bit PCI gave us a boost to 266 MB/s transfer speeds, to be used in conjunction with high-speed U320 SCSI, but even then we cannot take full advantage of the capabilities offered.
PCI Express opens up the horizons for computers, letting us transfer substantial amounts of data in less time. This can only be a good thing: more information in a shorter time means greater efficiency. Therefore I don't see this as another way to force us into the upgrade cycle, but as a good solid advancement in computers. Also, the good thing is that it is coming whether we like it or not.
PCI-X? (Score:2)
Why do we need this? Gigabit Ethernet. (Score:3, Interesting)
Server hacks like the 66-MHz PCI bus speed and 64-bit-wide PCI are neither practical nor sustainable. That's why we need something different, something like PCI Express. It raises the I/O bar enough to give us another few years of unconstrained growth of the PC architecture.
Well, of course. PCI isn't fast enough. Or is it? (Score:5, Informative)
And even that wasn't fast enough, now we have AGP 8x.
But seriously, is PCI really not fast enough for the general consumer, once he's got an AGP socket? PCI that runs on a 66MHz bus that's 64 bits wide has existed and even been available in high-end PC-class hardware for years, but few of us even here on Slashdot have anything other than 32-bit 33MHz PCI in our home machines. The only time I ever deal with 64-bit PCI cards is with Sun Microsystems hardware at the office.
I don't think this is "forcing another upgrade cycle" at all-- upgrades already exist, and most of us don't have 'em.
Maybe CY2005 (Score:2)
Backward compatibility? (Score:4, Interesting)
It would be very nice to maintain a PCI port that was capable of faster speeds but still able to run old devices (somewhat like AGP 2x/4x/8x or USB 1.0/1.1/2.0 ramping up, ignoring recent USB developments).
I still remember that one of the biggest pains in my backside was trying to run PCs that needed an old ISA device (scanner interface, old ISA SCSI card, special controller card, whatever), which I have heard is a drag on the whole system. Nowadays, I've got only PCI and AGP, though my old but still very good ISA SCSI scanner is still plugged into my 1GHz Duron (with a single ISA port).
Will we get the best of both worlds? If express supports normal PCI, we can replace the old stuff in a jiffy. Running mixed slots again might be a pain, though.
Are you sure? (Score:3, Interesting)
Are you *sure* it's running out of bandwidth?
The old-time, 10-year-old 33 MHz, 32-bit PCI bus still handles 99% of all home users just fine. However, for the more bandwidth-hungry users, you can increase the width to 64 bits. Not enough? Double the frequency. Still not enough? PCI-X will run them at up to *133 MHz*.
Let's put some numbers to that. On a 32/33 bus, you're looking at a maximum real-world, sustained throughput of about 100 megabytes/second. Double the width, that's 200 megabytes/second. Double the frequency, that's 400 megabytes/second.
Alrighty, then. Nearly a half of a gigabyte per second. That's awfully tough to fill. That will handle two gigabit ethernet controllers running full-tilt, and still have enough bandwidth left over that you'd need at least an INCREDIBLY fast RAID array to fill it.
But, just for fun, let's say it's still not enough. PCI-X, at 133 MHz, will double that *again*, to a full gigabyte per second. On a single controller. You're going to have an *INCREDIBLY* tough time actually using that - you'd be very hard pressed to actually move that much over a network and/or disk.
Still need more? No sweat. Many boards offer more than one controller. With two PCI-X controllers, that's two gigabytes/second of bandwidth. Not two gigaBITS, but rather two gigaBYTES.
Tyan recently introduced a board that has four gigabit controllers, each on its own PCI-X controller, with an additional 64/133 controller, a 64/100 controller, and a 32/33 controller. Again, let's put some numbers to that:
At 100 MB/s for each of the gigE controllers, that's 400 MB/s right off the bat. Add in the 64/133 controller, that's about 1400 MB/sec. Add in the 64/100, you're looking at about 2200 megaBYTES per second.
Now, really... can *anyone* here raise their hand and say that they could actually *utilize* 2200 megabytes/second of bandwidth to the outside world, either via network or disk?
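The scaling being described, in one quick sketch (these are theoretical peaks; the real-world ~100 MB/s starting point quoted above already knocks roughly a quarter off the 32/33 number, and the other tiers lose a similar share):

    # Conventional PCI peak bandwidth scales linearly with bus width and clock.

    def pci_peak_mb_s(width_bits, clock_mhz):
        return width_bits * clock_mhz / 8

    for name, width, clock in [
        ("PCI 32-bit/33 MHz",    32,  33),
        ("PCI 64-bit/33 MHz",    64,  33),
        ("PCI 64-bit/66 MHz",    64,  66),
        ("PCI-X 64-bit/133 MHz", 64, 133),
    ]:
        print(f"{name:21s}: {pci_peak_mb_s(width, clock):6.0f} MB/s peak")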
Despite all of the ideas of the sky falling, PCI has done a very good job for the last decade, and amazingly enough, is still going strong. Strong enough that it will be quite a while before it truly NEEDS to be replaced.
Now, when it *IS* replaced, I'd much rather see the interconnects be optical, not electrical. Instead of cracking open the case, shutting off the power, and trying to wedge yet another card inside (especially in low-height rackmounts), I'd much rather set the device on a shelf and run a fiber patch cable over to the computer. No shutting down, and a whole lot simpler.
steve
Re:Third Generation ? (Score:2)
Re:Third Generation ? (Score:2)
VESA Local Bus and AGP aren't general-purpose buses like ISA and PCI, if you ask me. There is the PCI-X standard, though, but I guess the 3GIO name was aiming for a "third generation of dominant technology" kind of thing.
Re:Third Generation ? (Score:2)
Did we forget EISA?
How about microchannel? which was designed to be better than EISA. Although many would argue that it was more like VESA...
but EISA was there before Vesa...
Re:Third Generation ? (Score:2)
ISA = 1st gen
Vesa Local Bus and PCI are essentially just different versions of the same thing = 2nd gen
So PCI express will be 3rd Gen.
AGP is a bit different as it's graphics only.
Re:Is this really needed??? (Score:2, Interesting)
Re:Is this really needed??? (Score:2)
TV tuner card? Are there any motherboards with those integrated? I also think it's a good idea to keep a generic interface, like PCI, available, "just in case". There will always be something someone wants that the mobo manufacturers didn't think of.
Re:Is this really needed??? (Score:4, Interesting)
About the only stuff that has made it into the chipset is cheap soundcards (yes, Creative is cheap too) and some extremely cheap RAID solutions. A lot of other stuff is still in one form or another on the PCI bus, even if it is not included on a plug-in board.
So yes, there is a real need for it. Simple example? RAID disks. With striping (multiple disks working together) it is now very easy to saturate the PCI bus with the cheapest disks.
Same with gigabit Ethernet.
Of course it will be a long time before any real replacement happens, if ever. If I look at some of my old boards on top of the book closet, I can see it took a long time before ISA was gone, and I also see some odd, really short slots I never used or saw cards for.
Re:Is this really needed??? (Score:2)
Southbridges haven't been connected to Northbridges through the PCI bus for a while now. Which means that everything that is integrated on the Southbridge does not run over PCI, but over HubLink, HyperTransport, A-Link, etc. (whatever chipset you happe
Re:Is this really needed??? (Score:2, Informative)
The vast majority of users will not have any need for this kind of bandwidth for quite a while. People doing heavy graphics/video processing will like it but 99% of the public will yawn.
There are two major benefits, however.
Re:Remember USB? (Score:2, Insightful)
Oh, I'm sure at least a couple of Linux companies will get access to the specs. That isn't a whole lot of help to those of us who are not working on Linux. We either have to wait for the code to be completed and available from Linux (from which we then have to reverse engineer the exact process) or hunt ar
Re:can't please everybody I guess (Score:3, Insightful)
I suspect this will be a long attrition as it was with phasing ISA out of motherboards.
From the Anandtech article:
So, for many users PCI-Express will not be a necessity because the unwashed masses are by and large not on the cutting edge of t
PCI-X vs PCI-Express (Score:2)
Re:Somewhat depressing for hobbyists (Score:3, Informative)
Re:cards consumers can install w/o a screwdriver? (Score:3, Insightful)
Standard PCI is laughable for high availability.
Parity is for farmers -- Seymour Cray