
PCI Express 3.0 Delayed Till 2011

Posted by timothy
from the dreamliner-of-connections dept.
Professor_Quail writes "PC Magazine reports that the PCI SIG has officially delayed the release of the PCI Express 3.0 specification until the second quarter of 2010. Originally, the PCI Express 3.0 specification called for the spec itself to be released this year, with products due about a year after the spec's release, or in 2010."
  • What's in 3.0? (Score:5, Interesting)

    by convolvatron (176505) on Thursday August 20, 2009 @01:59PM (#29135961)

    the PCI SIG blurb says it's mostly cleanup and the removal of 5V support

    does anyone know of anything interesting in 3.0?

  • Re:What's in 3.0? (Score:5, Interesting)

    by symbolset (646467) on Thursday August 20, 2009 @02:09PM (#29136099) Journal

    Twice as fast again: x16 is 32 GB/s (quick math after this comment). They're looking to support 3 graphics cards per PC, which is cool if you're into that whole supercomputer-on-your-desk thing, but it's going to burn at least a kilowatt.

    I'm sad we haven't seen external PCIe implemented. It was in the v2 specification. The idea of an external interconnect with that much bandwidth probably made some heavy players nervous.
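
    A quick back-of-the-envelope check on that 32 GB/s figure (a sketch assuming the 8 GT/s per-lane rate and 128b/130b line coding that PCIe 3.0 ended up specifying, not anything taken from the article):

        # Rough PCIe 3.0 x16 throughput estimate (illustrative sketch only)
        gt_per_s = 8e9                # 8 GT/s per lane, the PCIe 3.0 signaling rate
        encoding = 128 / 130          # 128b/130b line coding, ~1.5% overhead
        lanes = 16
        per_dir = gt_per_s * encoding / 8 * lanes        # bytes per second, one direction
        print(f"{per_dir / 1e9:.2f} GB/s per direction")         # ~15.75 GB/s
        print(f"{2 * per_dir / 1e9:.2f} GB/s both directions")   # ~31.5 GB/s, i.e. the "32 GB/s"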

  • Re:Who cares (Score:3, Interesting)

    by fuzzyfuzzyfungus (1223518) on Thursday August 20, 2009 @03:11PM (#29137011) Journal

    Being able to add channels is certainly handy, but it isn't really a substitute for increasing speeds. If it were, we'd still be using PCI-X. Particularly in space-constrained systems (laptops, blades, etc.), running more connectors and more traces is neither easy nor cheap. Even on a basic desktop ATX board, you'd be hard-pressed to fit much more than one x16 slot without impinging on the RAM slots, the CPU cooler area, or some other part.

    For the moment, at least, our ability to drive wires faster at low cost seems to be increasing substantially more quickly than our ability to run more wires at low cost.
  • Re:What's in 3.0? (Score:3, Interesting)

    by seifried (12921) on Thursday August 20, 2009 @03:46PM (#29137707) Homepage

    They're looking to support 3 graphics cards per PC

    Interesting. I just read the specs on my motherboard, which has 4 slots for video cards. Granted, with all 4 slots in use they're only 8x (which is OK since I live in 2D land), but with 3 or fewer in use they're all 16x (well, so it claims), so it would seem that's already covered.

  • by hairyfeet (841228) <bassbeast1968@@@gmail...com> on Thursday August 20, 2009 @06:30PM (#29140343) Journal

    Personally I am quite glad they are delaying it until it is fully backwards compatible. Most folks here are probably too young to remember the "fun" of having hardware not be backwards compatible. Hell during the "fun" days of proprietary everything we even had things such as "Compaq RAM" that would only work in a Compaq board and even then only with certain boards that matched the particular quirks Compaq had built in! What fun! What joy!

    At least today RAM is RAM, PCI is PCI, and USB is USB. It is so much easier now that everything is built to spec and is backwards compatible. Just the other day I had to plug my USB 2.0 thumbdrive into a USB 1.0 port on an old office machine to snatch off some files before fixing it. In the old nothing-is-compatible days that would have been a royal PITA to get working, if you could get it working at all, but thanks to the specs being backwards compatible I was easily able to grab the relevant files, even if it was slow as Xmas.

    And if you think it couldn't happen in the modern era, you've never had the "fun" of AGP 2x, 4x, and 8x, where... damn, I can't remember the formula anymore. I think it was that a 2x could work in a 4x, and a 4x could sometimes work in an 8x, but an 8x couldn't work in a 2x. Something like that. I for one would rather wait and not have to remember stupid formulas like that again, thanks ever so much.

  • by Hal_Porter (817932) on Thursday August 20, 2009 @11:22PM (#29142747)

    Personally I am quite glad they are delaying it until it is fully backwards compatible.

    Umm, dude, this is Slashdot. The correct response is "This new standard sucks. It would be 10x faster if they didn't worry about backwards-compatibility cruft," from a bunch of people who didn't understand the old standard but have been told it was really complex.

    A good example would be x64 replacing x86. Every single nerd on the internet knows that x86 is bloated and that x64 should have started from scratch, despite the fact that a look at a die photo of a modern processor shows the actual CPU core completely dwarfed by cache. Oh, and lots of people, including Intel, have built RISC and VLIW chips without the x86 legacy stuff, and they didn't turn out to have a long-term performance advantage over the "crufty" x86. In practice the cruft on x86 amounts to a slightly larger hardware decode unit for frequently used instructions compared to RISC. Still, x86 instructions take up less room in memory and thus in the cache. It could well be that it's cheaper to build a larger decoder than it would be to increase the cache size to fit the same number of RISC instructions.
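
    To put a rough number on the code-density point (an illustrative sketch using hand-assembled textbook encodings, not output from any particular compiler): a register-indirect load/add/store sequence takes about 6 bytes of x86-64 versus 12 bytes on a fixed-width 32-bit RISC encoding.

        # Illustrative only: encoded size of a tiny load/add/store sequence.
        # x86-64 uses variable-length encodings; a classic RISC uses fixed 4-byte ones.
        x86_bytes = {"mov eax,[rbx]": 2, "add eax,ecx": 2, "mov [rbx],eax": 2}
        risc_bytes = {"load": 4, "add": 4, "store": 4}             # fixed 32-bit instructions
        print(sum(x86_bytes.values()), "bytes on x86-64")          # 6
        print(sum(risc_bytes.values()), "bytes on fixed-width RISC")  # 12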
