Hardware

PCI Express - Coming Soon to a PC Near You

Max Romantschuk writes "I've been following the emergence of PCI Express for some time now. PCI Express, previously known as "Third Generation I/O" or "3GIO", is the technology set to replace PCI. PCI has been with us for around ten years now, and is rapidly running out of bandwidth. Last week Anandtech ran an interesting story on PCI Express. The technology has previously been covered by Hexus and ExtremeTech as well. I feel this technology looks all set to replace PCI, and we really do need some new bus technology to keep up with the bandwidth demands of today's applications. Or is this just yet another way to force us into a new upgrade cycle?"
  • by motown ( 178312 ) on Thursday June 19, 2003 @07:24AM (#6241621)
    Due to its high bandwidth, it's expected to replace AGP as well.
    • Speed (Score:2, Interesting)

      by baker_tony ( 621742 )
      Excuse me for being dumb, but why is everything going serial over parallel? I.e., why is serial transfer faster than parallel transfer?
      • Re:Speed (Score:5, Informative)

        by ViXX0r ( 188100 ) on Thursday June 19, 2003 @07:37AM (#6241715) Homepage
        As I understand it, using serial there is no having to worry about whether all the bits arrive at the same time (as there obviously is with parallel), and so the speed of transmission can be dramatically increased past the point at which it becomes faster than the "equivalent" parallel technology... bits arrive in the order they were sent - guaranteed.
        • by gorjusborg ( 603799 ) on Thursday June 19, 2003 @08:20AM (#6241985) Journal
          using serial there is no having to worry about whether all the bits arrive at the same time (as there obviously is with parallel), and so the speed of transmission can be dramatically increased past the point at which it becomes faster than the "equivalent" parallel technology... bits arrive in the order they were sent - guaranteed.

          I'm afraid this might add to the confusion about serial interfaces being 'faster' than parallel. While it is true that you don't have to worry about data/clock skew when using serial interfaces, enabling you to clock them faster, a parallel interface running at the same clock speed as a serial interface will always be faster in terms of data throughput. The reason for this is simple: serial means 1 bit per clock; parallel means more than 1 bit per clock.

          So, saying that serial is faster than the "equivalent" parallel interface is confusing, and incorrect, because one could be referring to equivalent clock rates being used for each interface, in which case parallel will provide at least twice the data throughput. On the other hand, "equivalent" could be referring to identical throughput rates, in which case the serial and parallel interfaces would provide, by definition, identical data rates.

          The real advantage that PCI Express has over PCI/PCI-X is that it is a point-to-point, rather than a multi-drop, bus. This setup requires less time between pin transitions, meaning that it can be clocked faster. Also, like Ethernet, a serial protocol can embed the clock into the data stream, so clock/data skew is no problem whatsoever.

          Serial is not better than parallel any more than digital is better than analog; there are just physical reasons why implementing point-to-point serial at significantly higher clock rates is easier than multi-drop parallel.

          Anyone still awake?
          Didn't think so :-l
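
          To put numbers on the clock-vs-width point above, here is a minimal Python sketch (the helper function is purely illustrative; the 2.5 GHz line rate and 8b/10b coding figures are the ones quoted elsewhere in this story):

            # Raw link throughput: clock rate x bits transferred per clock.
            def throughput_mb_s(clock_hz, bits_per_clock):
                """Throughput in MB/s, ignoring protocol overhead."""
                return clock_hz * bits_per_clock / 8 / 1e6

            # Classic PCI: a 33.33 MHz multi-drop bus, 32 bits per clock.
            pci = throughput_mb_s(33.33e6, 32)  # ~133 MB/s, shared by all slots

            # One PCI Express lane: 1 bit per clock, but clocked at 2.5 GHz.
            # 8b/10b coding spends 2 of every 10 bits embedding the clock,
            # which is how the "no clock/data skew" property is paid for.
            pcie_lane = throughput_mb_s(2.5e9, 1) * 8 / 10  # 250 MB/s per direction

            print(f"PCI 32/33: {pci:.0f} MB/s")
            print(f"PCIe x1:   {pcie_lane:.0f} MB/s each way, per device")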
          • With the signal rates of computers getting faster and faster, we need to pay attention to the length of the wire that the signal travels. On a parallel bus, the leading edge of bit 0 may arrive a little before, or a little after the leading edge of bit 7 (or 15, or whatever) simply because the bits take different, but parallel, paths. It is possible that a parallel system could slip bits because of this signal lag.
            • by taniwha ( 70410 ) on Thursday June 19, 2003 @01:03PM (#6245167) Homepage Journal
              (puts on chip designer hat) This has long been a problem - even at the on-die level (that's what he meant by "worrying about data/clock skew").

              Look on any motherboard of the past 5-10 years - you'll see bunches of wiggly traces deliberately lengthened to deal with just these problems.

              I think that this thread has, however, conflated two separate distinctions: parallel vs. serial, and point-to-point vs. multidrop.

              On a multidrop bus you must meet setup/hold at every slot on the bus, and you get nasty reflections that make this even harder to implement (look at the PCI spec for an example of wrestling with these problems). Point-to-point signals can be cleanly terminated and only have to be correct at one place - the other end of the link - so the amount of skew can be greatly reduced.

              Inter-bit skew on a parallel bus has its own problems: you have to meet setup/hold on every bit with respect to the clock - that's a hard layout problem. You can solve this a lot of ways - bundling (a la EV6/AMD slot2k), where bits are bundled into smaller chunks with their own clocks, or at the extreme running a clock per bit, or using self-clocking protocols on each bit (wastes a little bandwidth). These techniques cost more gates and latency than the traditional methods - parallel isn't impossible, it's just a little harder.

          • by jmichaelg ( 148257 ) on Thursday June 19, 2003 @09:38AM (#6242732) Journal
            Which is why PCI Express is specified as a scalable technology. You can get single-lane x1, dual-lane x2, quad-lane x4, and so on.

            Need more bandwidth? Add more lanes. With each lane delivering 250 megabytes per second in each direction, there's lots of room to grow.
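
            A quick Python sketch of that scaling, using the 250 MB/s per-lane figure quoted in the story summary:

              # PCI Express bandwidth scales linearly with lane count.
              LANE_MB_S = 250  # first-generation rate, per lane, per direction

              for lanes in (1, 2, 4, 8, 16, 32):
                  each_way = lanes * LANE_MB_S
                  print(f"x{lanes:<2}: {each_way:>4} MB/s each way, "
                        f"{2 * each_way:>5} MB/s aggregate")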
            I'm afraid this might add to the confusion about serial interfaces being 'faster' than parallel. While it is true that you don't have to worry about data/clock skew when using serial interfaces, enabling you to clock them faster, a parallel interface running at the same clock speed as a serial interface will always be faster in terms of data throughput. The reason for this is simple: serial means 1 bit per clock; parallel means more than 1 bit per clock.

            That's the whole point, isn't it? Wide and slow or narrow and f
        • ok, so have multiple serial channels on the same cable separated by grounding plates. I agree that having multiple parallel signals produces capacitance issues, but this is the same with all neighboring wires on the motherboard. The only difference is that many adjacent parallel wires will be in sync; but if they are separate serial channels, then there would be no difference between this and independent connectors.

          Having multiple serial channels seems like the most expensive way of going about things, bu
      • Re:Speed (Score:5, Informative)

        by KrishnaACD ( 675295 ) on Thursday June 19, 2003 @07:38AM (#6241726)
        I wondered this too, so went digging. The most concise and, to me, most credible answer was the following (credit to K. Adams at Geek.com):
        Serial Faster than Parallel... (5:41pm EST Wed Jul 25 2001) The problem with parallel (ribbon) data transfer cables is the crosstalk that occurs between adjacent conductors at very high clock/transfer speeds. IBM developed a work-around for ATA-66 and ATA-100 by using an 80-conductor cable with a 40-pin interface, by stringing a "ground" conductor between each "signal" conductor. Capacitance issues, "standing waves," and impedance (electrical resistance as relates to rapidly-changing voltages) matching problems become more evident in parallel (ribbon) cables as you crank up the clock/transfer speeds, also. It's a lot easier to match the impedance of a few conductors in a serial cable to its interface, than trying to match impedance for 40 conductors. Parallel schemes actually have a lot less "processing" overhead than serial schemes, but you're ultimately limited by physics a lot more quickly than with serial... - by K. Adams
        Kacd.
        • Re:Speed (Score:2, Interesting)

          by hamanu ( 23005 )
          Yeah, well, the ATA cables with 80 conductors are not needed because the clock speed is so high; they're needed because IDE is not properly terminated. So your source is shot down.
        • Re:Speed (Score:3, Interesting)

          by stilwebm ( 129567 )
          In some sense ribbon cables are easier to maintain parallel connections through, as well. With a motherboard you want the shortest path possible - the least amount of circuit trace path, that is. As you add bus lines you lose circuit real estate, increase EMF output, run into more problems with capacitance/inductance/resistance varying between lines, and generally increase the headaches of designing a stable motherboard. These all add up to more costly (6+ layer PCB design, more R&D, etc) products for
        • Re:Speed (Score:3, Informative)

          by rabidcow ( 209019 )
          IBM developed a work-around for ATA-66 and ATA-100 by using an 80-conductor cable with a 40-pin interface, by stringing a "ground" conductor between each "signal" conductor.

          Not that this concept is anything new: half of the 40 pins from earlier versions of ATA were grounded (every other one). Same thing with old parallel ports - the data lines have ground interleaved.
    • by merlin_jim ( 302773 ) <James@McCracken.stratapult@com> on Thursday June 19, 2003 @08:13AM (#6241919)
      Due to its high bandwidth, it's expected to replace AGP as well.

      This is not technically true, though I can see why you would be confused.

      They anticipate that customer demand for PCI-X will be so great that it will be difficult to sell AGP boards, therefore AGP will be renamed PCI-X. In order to distinguish between the two, the PCI-X spec will be designated "PCI-X High Speed" while the AGP spec will be designated "PCI-X Full Speed"
    • by JCCyC ( 179760 ) on Thursday June 19, 2003 @08:56AM (#6242279) Journal
      Never again will any announcement of new hardware technology be received by us geeks with the glee it once was. The only thing that comes to our minds now is "great, another opportunity for them to add DRM and phase out hardware that allows copying"
    • I think that 64-bit PCI is a good alternative to normal PCI. It is already in all the PowerMac G3/G4 motherboards and Sun motherboards, and you can get an x86 board with 64-bit PCI too if you look hard enough.
    • PCI Express is packet based, which makes operation of memory-mapped devices across it exceptionally inefficient, in both bandwidth and latency. So I would be surprised if PCI Express replaces AGP, where the primary interface is a huge direct-mapped on-board memory into which video drivers paint the desired picture via massive use of load and store instructions.

      Where PCI Express will really shine is in block transfer devices such as HD and CD-ROM and high volume streaming devices such as those producing vi
  • Let's start fresh! SATA, PCI-Express, USB2.0! Time for a clean break! Get rid of all the legacy crap. We're supposed to upgrade every three years anyway, so let's really upgrade.

    Oh shit, we all have iMacs now...
  • I am speechless! "PCI Express currently runs at 2.5Gbps, or 250MBps per lane in each direction, providing a total bandwidth of 16GBps in a 32-lane configuration."

    What can I say: WOW! - Hello gigabit netcards, (multiple) extreme graphics adapters etc!
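
    Those numbers are self-consistent once you account for 8b/10b line coding and count both directions; a quick sanity check in Python (just the arithmetic, nothing here comes from the spec text itself):

      signal_gbps = 2.5                         # raw line rate per lane, one direction
      payload_gbps = signal_gbps * 8 / 10       # 8b/10b: 10 line bits per 8 data bits
      per_lane_MB_s = payload_gbps * 1000 / 8   # 250.0 MB/s per direction
      total_GB_s = per_lane_MB_s * 32 * 2 / 1000  # 32 lanes, both directions
      print(per_lane_MB_s, total_GB_s)          # 250.0 16.0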
    • Marketing: We got one, sir.
      PHB: Very good, Igor, very good.
      Marketing: Shall I go for another?
      PHB: They will come to you in time.
      Marketing: What's that sound?
      PHB: It's the sound of thousands of mid-level product managers from struggling PC sellers banging on your cubicle wall, young Igor.
      Marketing: *cries*

      That said, I'm excited about PCIExpress. Perhaps not as much as Daath, but excited nonetheless. 'Course, this just means that the GeForce4 I bought 6 months ago will look about as quaint as Combat! on an
  • The high bandwidth demands of today's applications are designed to lock us into another upgrade cycle.
  • by tekrat ( 242117 ) on Thursday June 19, 2003 @07:32AM (#6241672) Homepage Journal
    Just wait until the PCI group renames PCI Express to PCI just to keep things confusing to the consumer. After all, if consumers are demanding PCI Express in their computers, then just rename everything to PCI express... or however that USB fiasco works out....

    I'm just wondering now if that external HD USB2 case I bought is really 1.1 or not... Grrrrr.
  • Hmmm (Score:4, Interesting)

    by koh ( 124962 ) on Thursday June 19, 2003 @07:33AM (#6241679) Journal
    Or is this just yet another way to force us into a new upgrade cycle?

    Or maybe current PCI devices don't support DRM out of the box ? Please upgrade your bus techno, so we can use all this extra bandwidth to transfer huge crypto keys to/from your hardware, just in case you want to play a copyrighted sample on your soundcard :)

    (-1 Paranoid)

    • Re:Hmmm (Score:3, Informative)

      by dissy ( 172727 )
      Fortunately, PCI Express will be a drop-in replacement for PCI.
      Only the electrical connection will actually change; the 'language' spoken over it will be no different from today's PCI. This way drivers will not need to be changed or upgraded to support the PCI Express version of the hardware.

      It's just like how serial ATA is replacing our current ATA.
      The standard they use to speak is not changing, only the electrical interface.

      As a matter of fact, adding DRM (to PCI Express *OR* PCI) would indeed require driver changes, so you c
  • by WeiszNet ( 88819 ) on Thursday June 19, 2003 @07:33AM (#6241684) Homepage
    More than bandwidth, what I need is a bus that doesn't have a problem with too many extensions because of a limited number of IRQs.

    Today most mainboards come with many onboard PCI components. If you really are going to put 3-5 extra PCI cards in a stock PC, you usually end up in a nice game of 'let's see what order works best', or cannot use all cards together at all.
    • by arth1 ( 260657 ) on Thursday June 19, 2003 @07:44AM (#6241757) Homepage Journal
      The limited number of IRQs hasn't been a problem since PCI 2.1 and APIC. It's a problem with Windows 9x and a few other operating systems, but those won't be able to use PCI Express anyhow.

      The problem today is more with interrupt line sharing (#A, #B, #C, #D -- some motherboards add more, but four is the old spec), and cards sharing the actual interrupt line rather than the interrupt request (IRQ) number, depending on how you place them.

      But yes, to answer your question, there are fewer problems, due to the parallel-serial nature (now is that an oxymoron?) of the controller interface, working somewhat like SCSI does.
      At least until 4x, 8x and 16x PCI Express arrives in force, and cards start competing and assuming that all the streams are available for THEIR card, much like some cards today think it's ok to bump up the PCI latency, because the user SURELY must have some unpopulated slots we can steal time from...
      Yeah, when that happens, it may be hell to troubleshoot, but we'll just wait and see...

    • I don't see an issue if the system uses more recent IRQ standards; I think ACPI extends them by at least double.

      Under Windows 2000, I see IRQs ranging from 1 to 31 on my three year old PIII Xeon. My P4 has available IRQ slots going up to at least 22. I'd say there is plenty of room.
    • by cheezedawg ( 413482 ) on Thursday June 19, 2003 @10:02AM (#6242963) Journal
      PCI Express gets rid of all of the sideband interrupt signals and only uses Message Signalled Interrupts (MSI). This gets rid of any need for IRQ sharing. The only limitation of MSI is the number of interrupt vectors available in the local APIC in the CPU (currently 256).
  • Is it what the manufacturers think we want? The traditional hard drive is still the main component in the PC slowing everything down, yet the manufacturers keep increasing CPU and bus speeds, increasing noise and heat levels along the way.
    • you can go out and get a fast drive array (I think 2 SCSI U160 drives with 10k spindle speeds in a RAID 0 array is enough) which will actually saturate the 32-bit PCI bus. If you're looking for a fast disk, the technology is available. It's not cheap, but it's available.

      So, at some point, even if you develop faster drives, you're going to need a faster bus to support them.

  • Wanted: (Score:2, Offtopic)

    by swordboy ( 472941 )
    Open Laptop Chassis, monitor and power standards!

    Ding, Ding!
  • Why does every /. story need to have some little cynical tagline at the end of the intro? Why can't people just post the story, let others read it, and formulate their own opinions? Arrgh, it's been starting to drive me nuts. /. is starting to sound more and more like a bad TV news program every day. "Everything is quiet and safe in our little suburb. OR IS IT?!"
    • Why does every /. story need to have some little cynical tagline at the end of the intro.

      I have no idea. I've submitted 2 stories which have been accepted, so you'll have to ask the editors about the picking of cynical-tagline-ending-stories...

      What I do know is that I felt raising the question would benefit the discussion, and I wanted to hear people's opinions compared to my own. Questioning everything is part of my nature, I guess.
    • I think the point is to start off a discussion, since the discussions are often more interesting than the news item in the first place.
    • Why does every /. story need to have some little cynical tagline at the end of the intro. Why can't people just post the story, let other's read it, and formulate their own opinions?

      You must be new around here. Welcome to /.!
  • About time (Score:5, Insightful)

    by zensonic ( 82242 ) on Thursday June 19, 2003 @07:35AM (#6241703) Homepage
    Given that the PCI interface was introduced to the world by Intel in 1992, and that we have since increased CPU processing power a hundredfold (give or take a little), it is really about time that the bus catches up.
    • While I agree that the old 32-bit/33MHz PCI is severely lacking, it's not like the standard has been at a standstill. We've gotten 32-bit/66MHz and 64-bit/33MHz flavors, both of which double the bandwidth of standard PCI, and we have PCI-X (although I will fully admit to being ignorant of what the fuck it is). Although for apps like HDTV and 10Gb Ethernet, any flavor of PCI starts to lack. Additionally, these should eventually make computers cheaper/smaller due to a much lower number of traces that have to
    • What PCI device are you using that is bandwidth limited & will benefit from a faster PCI bus? I don't have anything. My video card is in an AGP slot, and I would like that to be faster, but for the items in PCI slots, I really have nothing that will gain any benefit. I would much rather have faster memory, closer to or in the processor.
      • Network interface cards are already bumping into the bandwidth limits of PCI. Gigabit Ethernet will soon be cheap enough for mass deployment. 10 gigabit Ethernet is on the horizon.
      • Re:Why? (Score:5, Insightful)

        by dissy ( 172727 ) on Thursday June 19, 2003 @09:11AM (#6242427)
        > What PCI device are you using that is bandwidth limited & will benefit
        > from a faster PCI bus?

        Gigabit ethernet, soon 10gbit ethernet..
        multiple firewire buses, or even one firewire 800 bus..
        Multiple high speed graphics cards..
        Multiple SCSI or fiberchannel buses..

        > I don't have anything.

        > I really have nothing that will gain any benefit.

        Well thank you for deciding that what you need is exactly what everyone else needs and they should be happy with that :P

  • by brucmack ( 572780 ) on Thursday June 19, 2003 @07:37AM (#6241714)
    I'd say a new standard every 10 years is a pretty reasonable upgrade cycle compared to most other PC technologies...

    OK, so yes, we can probably live with PCI for longer (possibly much longer), but why not introduce a new standard with better potential? It maintains complete backwards compatibility with regular PCI components, so manufacturers of hardware don't even have to change anything. Of course another issue is motherboard cost, but there will always be new features put into successive motherboard generations that aren't in widespread use yet... like serial ATA, gigabit Ethernet, etc. And there will probably be motherboards available for a lower cost without those features as well.
  • by CaseyB ( 1105 ) on Thursday June 19, 2003 @07:39AM (#6241730)
    I'm a bit concerned about the way the cards are mounted. System bus connectors aren't just data connections -- they're structural foundations for today's giant hardware.

    How are those tiny little serial connectors supposed to support the weight of my 2007 GeForce Maxx Fury 7 video blaster with its jet turbine fan? They'll snap like twigs, I tell ya!

    • As you can see here [extremetech.com], x8 and x16 connectors will still have a good number of pins, so the connector will not actually be any smaller than the current AGP connector.
    • by dissy ( 172727 )
      > I'm a bit concerned about the way the cards are mounted.

      From the demo I saw about a year ago, the cards mount as-is and get power from the standard PCI bus, but that is all.
      Then there is a small connector on the edge of the card opposite the back panel (some disk cards have LED header pins there; some cards, like USB and FireWire cards, have a floppy-style power jack there).

      This is where the serial PCI Express link will connect, with a thin cable running to wherever on the mobo it goes.

      Lat
  • Other than uncompressed HDTV (1920x1080@29.97, 4:2:2 = 124MB/s), what applications need higher than 133MB/sec? Are 3-D cards really limited by the PCI/AGP bus today? Certainly it would be nice (and reduce video card cost) to move all that on-board 3-D memory to the motherboard, but is that really a compelling enough reason all by itself?

    I'm not saying that there aren't lots of uses for a faster bus, but changing to PCI was painful (Remember EISA/VESA Local FUD? Remember motherboards with four slots, not
    • by imsabbel ( 611519 ) on Thursday June 19, 2003 @07:59AM (#6241867)
      One high-end hard disk delivers 50MB+/s.
      One gigabit Ethernet card can do >80MB/s.
      Together they are limited by PCI.
      Now try RAID, a TV card with PCI overlay, graphics cards (yes, they need a few hundred MB/s)...

      Plus remember that you NEVER EVER reach 133 MB/s with PCI. Even a single device can be happy to get 110MB/s with long bursts, and if you have many devices, effective total bandwidth is more like 66 than 133 MB/s.
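
      To put the parent's point in one place, a back-of-the-envelope Python sketch (the per-device peaks and the effective-bus figures are the ones given above, not measurements):

        # Peak demands quoted above vs. what a shared PCI bus can deliver.
        devices_MB_s = {
            "fast hard disk": 50,
            "gigabit NIC": 80,
        }
        demand = sum(devices_MB_s.values())  # 130 MB/s before RAID, TV, graphics...

        PCI_THEORETICAL_MB_S = 133  # 32 bits x 33.33 MHz
        PCI_EFFECTIVE_MB_S = 110    # long bursts, single device, per the parent

        print(f"{demand} MB/s demanded vs ~{PCI_EFFECTIVE_MB_S} MB/s effective PCI")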
  • Linux support... (Score:4, Interesting)

    by pen ( 7191 ) on Thursday June 19, 2003 @07:42AM (#6241747)
    It looks like Linux developers are already working on support [google.com]. Also, the Inquirer reports that PCI may kill AGP? [theinquirer.net]
  • by simong_oz ( 321118 ) on Thursday June 19, 2003 @07:42AM (#6241749) Journal
    Or is this just another way to force an upgrade cycle?

    It may well be one of the intentions, but one thing I don't get is this: with CPU speeds and hard disk capacities where they are now, the average computer buyer (who is probably not very well represented on Slashdot) no longer really needs to upgrade their computer, so changing the interface/slot shape/etc won't really matter to them.

    I know I'm generalising, but the only applications that really push today's computers are games (and high-end scientific programs, but they're a fairly minor special case), and I would guess that most computers are not used primarily for games - think families, not "serious gamers". Serious gamers will always be upgrading their computer to the latest and greatest anyway - they don't need to be forced into an upgrade cycle.

    It's getting to the point now where by the time the average family decides they need to upgrade their computer, it is easier (and maybe even cheaper) to just buy the latest middle-of-the-line computer package.

    I'd almost question whether the whole idea of upgrading is itself becoming obsolete for the average computer user.
    • by ZaDeaux ( 647798 )
      I agree, I haven't been forced into any upgrade cycle in the past 2 years. And the reason is because I stopped gaming. When I was a gamer I was always on the latest and greatest. But as soon as I stopped, I found my computer was fine for all other applications I could possibly want to run. Wow, that felt like I just introduced myself at an AA meeting.
    • by praedor ( 218403 )

      I used to upgrade relatively often as the performance of parts increased significantly from month to month. Quick upgrades from 386 to 486 to Pentium to Celeron to AMD. After that last one, to a fast Athlon, then a faster Athlon, I just quit noticing any real benefit. Sure, my kernels would build faster, but even with that I have slowed down as Linux kernels have just gotten plain excellent.

      I jumped up to 512MB Ram and got a big HDD and am set for quite a while, it would seem. No big performance gains f

    • Serious gamers will always be upgrading their computer to the latest and greatest anyway - they don't need to be forced into an upgrade cycle.

      It's getting to the point now where by the time the average family decides they need to upgrade their computer, it is easier (and maybe even cheaper) to just buy the latest middle-of-the-line computer package.

      I'd almost question whether the whole idea of upgrading is itself becoming obsolete for an average computer user?


      I imagine what they're talking of here
    • by Zapman ( 2662 )
      Realistically, this isn't targeted at the average Joe (or even the average /. reader (well, not yet)). This is targeted at the HA clusters we're spec'ing out at work:

      3 gigabit interfaces (2 in redundant failover mode, one for backups), 6 Fibre Channel disks on 2 diskplanes, and 2 QLogic HBAs to our EMC array.

      Each one of those cards has quite the ability to saturate a single PCI bus. Thankfully, the boxes we're putting them into have 4 different PCI busses, so we can put 1 fiber card or 1 HBA onto each.
  • Question (Score:2, Funny)

    by Goody ( 23843 )
    Will the nickname for PCI Express be "PCI XP" ?
  • by IGnatius T Foobar ( 4328 ) on Thursday June 19, 2003 @07:52AM (#6241816) Homepage Journal
    Based on the direction in which mass-market computers are moving, the bus that gets exposed to the user is getting somewhat less important. Aside from gamers and tinkerers, and people who manage big servers, how many computer users ever have a need to open up the case?

    Ten years ago it was almost a given that at some point, you (or your Computer Guy) had to add or replace one of the cards -- add Ethernet, upgrade the video, whatever. Nowadays, the hardware on-board is more than sufficient, and any of those "special" accessories you get, such as storage drives for your digital camera, or a scanner, or whatever, are more likely than not going to be USB or FireWire.

    It's very likely that the mainstream desktop computer is going to move to a slotless "brick" form factor. This would have the side benefit of making it much cheaper. This form factor is available already, but it's not yet cheap because it's still considered a "specialty" unit.

    I'd also be happy to see the return of the Commodore 64 form factor -- just shove everything into the keyboard. Plug in your mouse and monitor and Ethernet, and go.
  • Or is this just another way to force an upgrade cycle?

    But I like to upgrade!

    I usually build two computers a year. If I sell my computer every six months at 75% of its original price (which is about the going rate), I can keep up experimenting with sweet new hardware.

    As an added bonus, I've built an expanding network of friends, friends' friends and practically unknown people who have been referred to me by the others. They buy my second hand computers, consult me whenever they want to buy a computer

  • by Gothmolly ( 148874 ) on Thursday June 19, 2003 @07:52AM (#6241822)
    Hmm, let's see, on a desktop PC, you have:
    some IDE controllers, each of which can push maybe 50MB/sec to the media (RAID-0) tops.
    audio, keyboard, some other I/O, maybe 1 MB/sec
    NIC, 10MB/sec tops

    Ok, so I do the math and get 61MB/sec, or just under 1/2 the bandwidth of PCI. For 90% of the PCs out there, this is sufficient. For high end boxes, you can use 64bit or 66MHz PCI, or PCI bridges.

    Tell me again why this technology is necessary?
    • by hbackert ( 45117 ) on Thursday June 19, 2003 @08:20AM (#6241977) Homepage

      Now that's a no-brainer.

      My computer is by far not a high-end box, but PCI is a (small) bottleneck, even for me.

      Let's see: 2 IDE channels, 2 disks, that's 50 MB/s each; 1 GBit network, that's a peak of 100MB/s. A U2W SCSI host adapter with 1 single, very fast disk is good for 70MB/s. Then there is USB2 (everything is USB2 now) and FireWire (50MB/s each). Adds up to (peak) 370MB/s.

      You and I and most people know that a usual user - and most unusual users, like the /. crowd - will never use all devices at once. But just copying data from disk to network saturates the bus.

      A simple fix is 66MHz 64-bit PCI, but that is very rare in consumer machines. So while PCI Express might currently be overkill, I doubt simple 33MHz 32-bit PCI will be sufficient even for consumer-grade computers. Just imagine 10 years ago when PCI started: most were using ISA, and that was enough for most usual users. 10MBit/s Ethernet cards used less than 1MB/s. Who needed a faster bus? Only servers needed PCI (or EISA).

      Watching the long migration from ISA to PCI until ISA was (mostly) replaced, I don't expect PCI Express to replace PCI within 5 years. And in 5 years I would bet that PCI looks like ISA does now: slow and outdated.

      • Now that's a no-brainer.

        My computer is by far not a high-end box, but PCI is a (small) bottleneck, even for me.

        Let's see: 2 IDE channels, 2 disks, that's 50 MB/s each; 1 GBit network, that's a peak of 100MB/s. A U2W SCSI host adapter with 1 single, very fast disk is good for 70MB/s. Then there is USB2 (everything is USB2 now) and FireWire (50MB/s each). Adds up to (peak) 370MB/s.


        On almost all modern chipsets, these devices do not use the PCI bus at all, unless you've put in your own add-in cards. (so, I ima
      • by captaineo ( 87164 ) * on Thursday June 19, 2003 @09:47AM (#6242818)
        The one consumer-level application where a better bus is vital is HDTV. Current buses are just barely able to handle uncompressed SDTV (20-30MB/sec). (In theory PCI gives you 133MB/sec and AGP more, but as they say, in theory there is no difference between theory and practice :)

        PCI Express will finally bring HDTV (~200MB/sec) within reach. What this means is that you'll be able to have a software-only HDTV decoder - which will make it trivial to receive HDTV broadcasts on a PC, and make HD-DVD players possible.

        At the pro level, this is just about the last thing that a $50K SGI system has over a cheap Linux PC - playback and manipulation of uncompressed HDTV video. It's about time PCs finally caught up to "workstations" in the bus department...
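
        For reference, the SD and HD figures in this subthread fall out of simple arithmetic; a small Python sketch (the frame sizes and rates are common broadcast formats, used purely for illustration):

          # Uncompressed video rate: pixels/frame x bytes/pixel x frames/s.
          def video_MB_s(width, height, fps, bytes_per_pixel=2):
              # 4:2:2 sampling averages 2 bytes per pixel at 8-bit depth.
              return width * height * fps * bytes_per_pixel / 1e6

          print(f"SDTV:    {video_MB_s(720, 486, 29.97):.0f} MB/s")   # ~21
          print(f"1080i:   {video_MB_s(1920, 1080, 29.97):.0f} MB/s") # ~124
          print(f"1080p60: {video_MB_s(1920, 1080, 60):.0f} MB/s")    # ~249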
      • You will also need to consider bus efficiency. PCI is something like 60-70% efficient, and PCI-X is about 90%. So 70% of PCI 64/66 puts you at about 370MB/s. If all your devices happen to run at the same time as in your description above, you just peaked. However, consumer hardware always lags WAY behind. You are not thinking of the enterprise server space, where 1GigE is being deployed, Fibre Channel runs at 2Gb/s, and 10GigE is being developed. Even with PCI-X 2.0 with QDR you may not have sufficient bandwidth even
    • The real PCI bandwidth is usually something like 75-90MB/sec, depending on the chipset.

      Now, add in IDE RAID cards and SCSI cards, and those alone can saturate the bus. Consider that a single SCSI HD can now pump out about 70MB/sec when used in an STR-intensive application.
    • Tell me again why this technology is necessary?

      (1) bandwidth

      AGP is pushed to the limits with 8x, and will not go any higher easily. PCI-Express, on the other hand, will easily start with the same bandwidth, and will have plenty of headroom for future cards.

      (2) cost

      If your audio card is only using 1MB/s, then you will use a slow x1 lane to hook it up, and your motherboard becomes way simpler and cheaper to design -- instead of routing 40 pins across all PCI slots, you'll route 10 pins directly to the x
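
      On the AGP point in (1): each AGP generation multiplies the transfers per 66 MHz clock, which is where the ~2.1GB/s ceiling mentioned elsewhere in this discussion comes from. A quick Python check of the standard figures:

        # AGP: 32-bit interface at 66.66 MHz, with 1/2/4/8 transfers per clock.
        BASE_MB_S = 66.66 * 4  # ~266 MB/s for AGP 1x

        for mult in (1, 2, 4, 8):
            print(f"AGP {mult}x: {BASE_MB_S * mult:.0f} MB/s")
        # -> 267, 533, 1067, 2133 MB/s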
  • Let's face it, this is just so we can stream porn faster - everyone knows what drives technological advances. Innovation can be measured in pron per minute...
  • This is going to finally bring around some really minuscule systems that actually have expansion slots. Very, very cool; now the size limitation on a useful system is going to be the height of the CPU cooler!
  • This looks just as if they have taken the InfiniBand spec and stripped out a lot of the bells and whistles it had grown in order to become totally generic. Intel "back-burnered" InfiniBand about six months ago, so presumably switched their effort to this. It looks a good idea to me - I think we have been waiting too long for tightly-coupled serial links to reach general use. Inmos had them 18 years ago, but they never caught on then (because Inmos wouldn't let people buy in to their several good ide
  • Does this mean that now we'll have motherboards with ISA, PCI, PCI Express (I know - they'll physically co-exist with PCI slots happily, but that doesn't alleviate the confusion to the !/. crowd) and AGPx1,2,4,8 (take your pick)? Any future motherboard that has ONLY PCI Express (this means no AGP slot, not 0 PCI slots) would be rightly considered to be less backwards compatible, and would therefore offer much less choice in terms of components, lowering its perceived market value. The perfect example here w


  • While PCI Express will probably replace PCI, the transition will probably be very long.

    As the Anandtech authors point out, there were still ISA slots on motherboards 10 years after the introduction of PCI. So I would expect that you'll still be able to use your PCI cards in new computers five years from now.
  • by thriemus ( 514728 ) on Thursday June 19, 2003 @08:15AM (#6241941)

    "I feel this technology looks all set to replace PCI, and we really do need some new bus technology to keep up with the bandwidth demands of today's applications. Or is this just yet another way to force us into a new upgrade cycle?"

    When I look back at the explosion of technology within the past decade, and the ever-continuing attempts to eradicate the bottlenecks that computer systems have had, PCI Express is a breath of fresh air. For example, let's take a look at processors: within the past ten years processor speeds have doubled every eighteen months, if we go by Moore's Law. It's hard to believe it was a little over ten years ago that Intel released the first Pentium chips [compsoc.net]. HDD speeds (physical read) have also increased dramatically, from about 2 MB/s for a 635MB HDD to over 45 MB/s for a modern HDD. Graphics were given a face lift with the introduction of the AGP bus, pushing transfer speeds up from PCI's 133 MB/s to 2.1GB/s. However, many systems are used for a LOT more than video rendering and are geared more towards storage markets, where data access speed is of the utmost importance. 64-bit PCI gave us a boost to 266 MB/s transfer speeds, to be used in conjunction with high-speed U320 SCSI, but even then we cannot take full advantage of the capabilities offered.

    PCI Express opens up the horizons for computers, letting us transfer substantial amounts of data in less time. This can only be a good thing: more information in a shorter time means greater efficiency. Therefore I don't see this as another way to force us into the upgrade cycle, but as a good, solid advancement in computers. Also, the good thing is that it is coming whether we like it or not.

  • Too bad they didn't decide to just use the existing industry-standard high-speed PCI replacement, PCI-X. Then again, Intel didn't make PCI-X, so it can't be any good, right? Just like Firewire.
  • by tmoertel ( 38456 ) on Thursday June 19, 2003 @08:20AM (#6241975) Homepage Journal
    Standard PCI tops out at 133 MB/s, which is about 1000 Mb/s. Hence one active Gigabit Ethernet card can saturate a PCI bus, leaving no headroom for other I/O. With GbE becoming commodity, consumer-level technology and 10-Gigabit Ethernet on the horizon, PCI is a bottleneck to the advancement of the PC architecture.

    Server hacks like the 66-MHz PCI bus speed and 64-bit-wide PCI are neither practical nor sustainable. That's why we need something different, something like PCI Express. It raises the I/O bar enough to give us another few years of unconstrained growth of the PC architecture.

  • by foxtrot ( 14140 ) on Thursday June 19, 2003 @08:22AM (#6242000)
    We realized PCI wasn't going to be fast enough years ago-- that's why pretty much every motherboard you can buy today has an AGP socket.

    And even that wasn't fast enough, now we have AGP 8x.

    But seriously, is PCI really not fast enough for the general consumer, once he's got an AGP socket? PCI that runs on a 66MHz bus that's 64 bits wide has existed and even been available in high-end PC-class hardware for years, but even among Slashdotters, few of us have anything other than 32-bit, 33MHz PCI in our home machines. The only time I ever deal with 64-bit PCI cards is for Sun Microsystems hardware at the office.

    I don't think this is "forcing another upgrade cycle" at all-- upgrades already exist, and most of us don't have 'em.

  • You might start seeing PCI-E in servers in the CY2005-CY2006 timeframe, after PCI-X and DDR PCI-X have run their course. This isn't something that's right around the corner, regardless of the hype.
  • by phorm ( 591458 ) on Thursday June 19, 2003 @11:23AM (#6243933) Journal
    I scanned the articles and checked for anything on this, but didn't find a suitable answer. Will "PCI Express" be like USB, wherein it will support the older-generation hardware as well as the newer hardware - or will it only support "Express" PCI devices?

    It would be very nice to maintain a PCI port that was capable of faster speeds but still able to run old devices (somewhat like AGP 2x/4x/8x or USB 1.0/1.1/2.0 ramping up, ignoring recent USB developments).

    I still remember that one of the biggest pains in my backside was trying to run PCs that needed an old ISA device (scanner interface, old ISA SCSI card, special controller card, whatever), which I have heard is a drag on the whole system. Nowadays, I've got only PCI and AGP, though my old but still very good ISA SCSI scanner is still plugged into my 1GHz Duron (with a single ISA port).

    Will we get the best of both worlds? If express supports normal PCI, we can replace the old stuff in a jiffy. Running mixed slots again might be a pain, though.
  • Are you sure? (Score:3, Interesting)

    by NerveGas ( 168686 ) on Thursday June 19, 2003 @12:40PM (#6244854)
    PCI has been with us for around ten years now, and is rapidly running out of bandwidth

    Are you *sure* it's running out of bandwidth?

    The old-time, 10-year-old 33 MHz, 32-bit PCI bus still handles 99% of all home users just fine. However, for the more bandwidth-hungry users, you can increase the width to 64 bits. Not enough? Double the frequency. Still not enough? PCI-X will run them at up to *133 MHz*.

    Let's put some numbers to that. On a 32/33 bus, you're looking at a maximum real-world, sustained throughput of about 100 megabytes/second. Double the width, that's 200 megabytes/second. Double the frequency, that's 400 megabytes/second.

    Alrighty, then. Nearly half a gigabyte per second. That's awfully tough to fill. That will handle two gigabit Ethernet controllers running full-tilt, and still have enough bandwidth left over that you'd need at least an INCREDIBLY fast RAID array to fill it.

    But, just for fun, let's say it's still not enough. PCI-X, at 133 MHz, will double that *again*, to a full gigabyte per second. On a single controller. You're going to have an *INCREDIBLY* tough time actually using that - you'd be very hard pressed to actually get that much to move over a network and/or disk.

    Still need more? No sweat. Many boards offer more than one controller. With two PCI-X controllers, that's two gigabytes/second of bandwidth. Not two gigaBITS, but rather two gigaBYTES.

    Tyan recently introduced a board that has four gigabit controllers, each on its own PCI-X bus, with an additional 64/133 controller, a 64/100 controller, and a 32/33 controller. Again, let's put some numbers to that:

    At 100 MB/s for each of the gigE controllers, that's 400 MB/s right off the bat. Add in the 64/133 controller, that's about 1400 MB/sec. Add in the 64/100, you're looking at about 2200 megaBYTES per second.

    Now, really... can *anyone* here raise their hand and say that they could actually *utilize* 2200 megabytes/second of bandwidth to the outside world, either via network or disk?

    Despite all of the ideas of the sky falling, PCI has done a very good job for the last decade, and amazingly enough, is still going strong. Strong enough that it will be quite a while before it truly NEEDS to be replaced.

    Now, when it *IS* replaced, I'd much rather see the interconnects be optical, not electrical. Instead of cracking open the case, shutting off the power, and trying to wedge yet another card inside (especially in low-height rackmounts), I'd much rather set the device on a shelf and run a fiber patch cable over to the computer. No shutting down, and a whole lot simpler.

    steve
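
    For anyone checking the arithmetic above, the theoretical bus numbers work out as follows; a small Python sketch (the parent's figures apply roughly a 75% real-world derating to these theoretical rates):

      # Theoretical PCI/PCI-X bandwidth: (width in bits / 8) x clock in MHz.
      def bus_MB_s(width_bits, clock_mhz):
          return width_bits / 8 * clock_mhz

      for width, clock in [(32, 33.33), (64, 33.33), (64, 66.66),
                           (64, 100), (64, 133.33)]:
          print(f"{width}-bit @ {clock:.0f} MHz: {bus_MB_s(width, clock):.0f} MB/s")
      # -> 133, 267, 533, 800, 1067 MB/s theoretical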

"Trust me. I know what I'm doing." -- Sledge Hammer

Working...