Intel Networking Hardware Technology

Clash of the Titans Over USB 3.0 Specification Process 269

Ian Lamont writes "Nvidia and other chip designers are accusing Intel of 'illegally restraining trade' in a dispute over the USB 3.0 specification. The dispute has prompted Nvidia, AMD, Via, and SiS to establish a rival standard for the USB 3.0 host controller. An Intel spokesman denies that the company is withholding the USB specification, or that USB 3.0 'borrows technology heavily' from the PCI Special Interest Group. He does, however, say that Intel won't release an unfinished Intel host controller spec until it's ready, as doing so would lead to incompatible hardware."
This discussion has been archived. No new comments can be posted.


Comments Filter:
  • Re:1394 For Life (Score:2, Informative)

    by theshibboleth ( 968645 ) on Monday June 16, 2008 @12:04AM (#23806013)
    Well Firewire is faster than USB, so people are willing to pay more for it. Plus it doesn't have quite as wide adoption as USB, so manufacturers don't make as many Firewire devices, which limits the supply.
  • by spinkham ( 56603 ) on Monday June 16, 2008 @12:11AM (#23806061)
    This is a replay of the OHCI/UHCI host controller interface standards from the original USB.
    This does NOT affect users at all, only driver writers.
    What is being forked is the USB driver interface, and it does not affect device compatibility at all.
    As mentioned above, there were two driver interfaces for the original USB standard, and the only people who knew were driver writers and nerds compiling their own custom kernels.
    This is blown way out of proportion, and doesn't affect 99.999% of us. Nothing to see here, move along....
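
    (To make the "only driver writers notice" point concrete, here's a rough C sketch -- mine, not anything from the specs -- of the decision an OS stack already makes today: pick a host controller driver from the PCI programming-interface byte. A forked USB 3.0 host controller interface would just be one more case in the switch; devices plugged into the ports never see any of it. The UHCI/OHCI/EHCI prog-if values are the real PCI ones; the rest is illustrative.)

        #include <stdint.h>
        #include <stdio.h>

        /* USB host controllers all sit at PCI class 0x0C, subclass 0x03; the
         * programming-interface byte says which register-level HCI the silicon
         * exposes. Devices on the bus never see this. */
        enum { PROGIF_UHCI = 0x00, PROGIF_OHCI = 0x10, PROGIF_EHCI = 0x20 };

        static const char *pick_hcd(uint8_t prog_if)
        {
            switch (prog_if) {
            case PROGIF_UHCI: return "uhci driver";  /* Intel/VIA flavour of USB 1.x */
            case PROGIF_OHCI: return "ohci driver";  /* everyone else's USB 1.x flavour */
            case PROGIF_EHCI: return "ehci driver";  /* the single USB 2.0 interface */
            default:          return "unknown HCI";  /* a forked USB 3.0 HCI: one more case */
            }
        }

        int main(void)
        {
            /* Pretend we just read this byte out of a controller's PCI config space. */
            uint8_t prog_if = PROGIF_OHCI;
            printf("load %s\n", pick_hcd(prog_if));
            return 0;
        }
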
  • Re:So... (Score:2, Informative)

    by mysidia ( 191772 ) on Monday June 16, 2008 @12:12AM (#23806069)

    I think we can be fairly confident that, if there were USB-AMD and USB-Intel:

    All other things being equal (no major bugs in either spec), USB-Intel would be the clear winner if the two standards came out at about the same time, due to Intel's influence, name recognition, prestige, etc. The 5000-pound gorilla flattens the 200-pound monkey with one step.

    USB-AMD could win, but only if it came out far enough in advance for products to start being designed around it.

    There's a limited market for devices needing speeds even higher than USB 2.0, and it's unlikely to support two standards the way the DVD+R and DVD-R market did.

    Naturally, if both standards survived, it would be due to devices including support for both variants of USB 3.

  • Re:1394 For Life (Score:5, Informative)

    by Jeff DeMaagd ( 2015 ) on Monday June 16, 2008 @12:29AM (#23806181) Homepage Journal
    You're wrong. You are basically remembering something that's been fixed and settled a decade ago. Good job on being out of date by a decade.

    The entire royalty is something like $0.25 per device, and Apple only gets a portion of that.

    The cost is in the smarts: each device requires a more complicated controller and an additional chip.
  • Re:1394 For Life (Score:5, Informative)

    by Tubal-Cain ( 1289912 ) on Monday June 16, 2008 @12:34AM (#23806207) Journal

    I've not heard of USB missile launchers either. It shoots USBs?
    http://www.thinkgeek.com/geektoys/warfare/8a0f/ [thinkgeek.com]
  • by Phong ( 38038 ) on Monday June 16, 2008 @12:47AM (#23806279)
    This isn't about competing USB 3 standards -- the spec is being designed by a group, and there is only one. This is about the design of the hardware used to implement a host controller that can comply with the spec. This is something that any company can develop if they want to, but since Intel is going to license their design of the host controller for free, most companies will just wait for that design and use it to implement USB 3.
  • by Kinky Bass Junk ( 880011 ) on Monday June 16, 2008 @12:48AM (#23806287)
    That is generally the purpose of a pop culture reference.
  • Re:1394 For Life (Score:5, Informative)

    by outZider ( 165286 ) on Monday June 16, 2008 @01:16AM (#23806433) Homepage
    FireWire requires an actual I/O controller, whereas USB 2 relies on the CPU and the driver.

    In short -- FireWire is faster and puts far less load on the target machine. The downside is that the initial cost is higher. I find it pays for itself pretty quickly.
  • by tjrw ( 22407 ) on Monday June 16, 2008 @01:33AM (#23806535) Homepage
    ... and people who ran into all sorts of nasty incompatibilities in the more scary corner-areas of the spec (isochronous transfers, etc.). Microsoft remembers this fun, which is why they are not happy about this. I remember various issues with USB depending on whether you had an OHCI or UHCI controller.

    It is not in the interests of the consumer nor of the standard to have multiple host-controller interfaces. You may care to muse on why it might be in Intel's interests to the detriment of all others.
  • Re:1394 For Life (Score:4, Informative)

    by armanox ( 826486 ) <asherewindknight@yahoo.com> on Monday June 16, 2008 @01:36AM (#23806549) Homepage Journal
    Also, the royalties are not in effect any longer...
  • Re:1394 For Life (Score:3, Informative)

    by Poltras ( 680608 ) on Monday June 16, 2008 @01:54AM (#23806635) Homepage
    You're using a BIOS? Holy crap, I'll just pull the CMOS battery for 5 minutes and I'm done. Or use the jumper, to go faster.

    Admit it, once you have access to the computer, it's game over. Unless you encrypt the hard drive. The whole thing. And your RAM as well. And use EFI. Encrypted...
  • Re:So... (Score:4, Informative)

    by Hal_Porter ( 817932 ) on Monday June 16, 2008 @02:01AM (#23806671)

    So will this mean in the end we will have 2 competing USB standards? USB-Intel and USB-AMD?
    I think this is about host controller specs, not wire protocols. So it will be like USB 1.0, where there was OHCI and UHCI. UHCI (Universal Host Controller Interface) was Intel and VIA's controller standard, and OHCI was everyone else's, including Microsoft's. OHCI was supposed to do more in hardware, though I don't think it made much difference in practice. But both controllers were compatible on the wire - you could easily make devices that worked with both. IIRC there were cases where an OHCI controller, because it had more information about the protocol, could respond to a device within the same frame. UHCI controllers were basically dumb and needed intervention from software on the host, so they'd respond to some device condition during the next frame, after the host stack had had a chance to think.

    But according to the USB spec both behaviours are correct since the device can't make any assumptions about what overheads exist on the host.

    I can't find the reference to device-visible differences between UHCI and OHCI, and in any case it was a very rare case. I did find this presentation by Intel that shows OHCI and UHCI performing almost identically, despite the fact that OHCI controllers basically do the USB protocol in hardware while UHCI is just a bus-master DMA engine attached to a serial interface, with the protocol done in software on the host.

    http://www.usb.org/developers/presentations/pres0598/bulkperf.ppt [usb.org]

    With USB 2.0 there was a push for a unified host controller spec called EHCI. From what I can tell, this spat means there will possibly be two rival host controller specs, because Intel haven't published their spec in time for other people to implement it. But I don't think that will fork the wire protocol; I think it just means that OSes will need two new host controller drivers (as with USB 1.0) rather than one (as with USB 2.0).

    You could argue that UHCI was a good thing since it uses less hardware and performs about the same.

    Incidentally Wikipedia writes this up based on the "Good open standards vs vile proprietary standards" meme, which seems a bit unfair. Both OHCI and UHCI are based on published specifications which are freely available. I don't know if you need to pay a license fee to implement either or both of them - I actually think you don't since USB was successful because you didn't need to pay a per port fee when it was introduced, unlike Firewire.

    http://en.wikipedia.org/wiki/OHCI [wikipedia.org]

    The difference seems to me more like a software engineer's view of the world (Microsoft want the hardware to do it all, like OHCI) vs a hardware engineer's view of the world (Intel say do it all in software, with UHCI).
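
    (If anyone wants to see what "UHCI is just a bus master DMA engine" looks like from the driver's side, here's a loose C sketch of the UHCI frame list -- my own illustration, not Intel's code: 1024 pointers, the controller consumes one per 1 ms frame, and it's entirely up to host software to keep them populated with transfer descriptors and queue heads. The queue head address below is made up.)

        #include <stdint.h>
        #include <stdio.h>

        #define UHCI_FRAMES   1024          /* the controller consumes one entry per 1 ms frame */
        #define UHCI_PTR_TERM 0x00000001u   /* bit 0 set: nothing scheduled in this frame */
        #define UHCI_PTR_QH   0x00000002u   /* bit 1 set: pointer is a queue head, not a TD */

        /* The frame list: 1024 32-bit pointers the controller walks round-robin.
         * In a real driver this would be 4 KB-aligned physical memory handed to
         * the controller via its FRBASEADD register. */
        static uint32_t frame_list[UHCI_FRAMES];

        int main(void)
        {
            /* Start empty: every frame terminates immediately. */
            for (int i = 0; i < UHCI_FRAMES; i++)
                frame_list[i] = UHCI_PTR_TERM;

            /* Host software, not the controller, decides what runs when, e.g.
             * hanging a (hypothetical) interrupt-transfer queue head off every
             * 8th frame to poll a device every 8 ms. */
            uint32_t qh_phys_addr = 0x00100000u;  /* made-up address for the sketch */
            for (int i = 0; i < UHCI_FRAMES; i += 8)
                frame_list[i] = qh_phys_addr | UHCI_PTR_QH;

            printf("%d of %d frames now point at the queue head\n",
                   UHCI_FRAMES / 8, UHCI_FRAMES);
            return 0;
        }
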

  • Re:1394 For Life (Score:4, Informative)

    by punkass ( 70637 ) on Monday June 16, 2008 @02:01AM (#23806673)
    Yep, and of course they won't be updating FireWire ever again, either. http://en.wikipedia.org/wiki/FireWire#Future_enhancements [wikipedia.org]
  • by sznupi ( 719324 ) on Monday June 16, 2008 @02:11AM (#23806731) Homepage
    Same exception as the compact disc and the 3.5 inch floppy?
  • Re:1394 For Life (Score:5, Informative)

    by CrackedButter ( 646746 ) on Monday June 16, 2008 @03:35AM (#23807205) Homepage Journal
    How is this modded Interesting? All the geeks know that FW400 is still faster than USB 2.0, because 480 Mbps is theoretical and not an actual sustained transfer speed as it is with FW400. FireWire is processor-independent as well, since it has its own controller, whereas the main CPU is used to drive USB 2; that means its transfer rate depends on system performance. Everything else in your post isn't bollocks, though.
  • Re:1394 For Life (Score:5, Informative)

    by DDLKermit007 ( 911046 ) on Monday June 16, 2008 @06:24AM (#23807949)
    Great way to stay on the sidelines of understanding. Yes, USB 2.0 is "faster" than FireWire on paper. However, 2.0's max/burst speed of 480 Mbit/s is very different from its average speed (about 240 Mbit/s), which is substantially lower than FireWire's sustained speed. It's a side effect of something that relies on the host to do the heavy lifting vs a device that handles its own heavy lifting. Not looking forward to similar crap with USB 3.0, not to mention the continuance of the shitastic driver support I've always seen from USB vs FireWire.
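
    (Back-of-the-envelope in C, using only the figures above -- 480 Mbit/s on paper, roughly 240 Mbit/s sustained -- to show what the gap means for, say, copying an 8 GB drive image. The 8 GB payload is just an example, not a measurement.)

        #include <stdio.h>

        int main(void)
        {
            const double gigabytes = 8.0;              /* example payload */
            const double bits      = gigabytes * 8e9;  /* decimal GB, 8 bits per byte */

            double t_paper  = bits / 480e6;   /* USB 2.0 signaling rate */
            double t_actual = bits / 240e6;   /* rough sustained rate cited above */

            printf("at 480 Mbit/s: ~%.0f s; at ~240 Mbit/s: ~%.0f s\n", t_paper, t_actual);
            return 0;
        }
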
  • Re:1394 For Life (Score:5, Informative)

    by saider ( 177166 ) on Monday June 16, 2008 @09:01AM (#23809231)
    IIRC, Firewire controllers need to be smarter than USB controllers because they might not be hooked up to a PC. For instance, your video camera might go straight to a recording deck, or some other electronic doodad. So the firewire controllers were designed to offload a lot more of the protocol to move stuff around, which made it easier to design systems. Of course this was done back before embedded controllers running Linux (and its USB stack) became cheap as dirt.

    Firewire's main advantage now is the fact that it is a point to point mechanism, not a bus. USB suffers because every so often the host must interrupt things to discover new devices. This can slow down large block transfers quite a bit.
  • Re:1394 For Life (Score:3, Informative)

    by v1 ( 525388 ) on Monday June 16, 2008 @09:08AM (#23809329) Homepage Journal
    With 1394, sometimes ripping it out at the wrong times can give you a BSOD, or even worse, damage your device.

    Sorry for your BSOD, but that's not the device's problem and has nothing to do with USB or firewire. Linux and Macintosh do not have ANY issue with hot-swapping firewire.

    And I work with firewire and usb storage many times every single day at work so I believe I have a good sampling to speak on.

    I can say I've seen firewire-damaged devices though... some of the cheap firewire port end cages are split-stamped and can spread if forced. This lets you plug a firewire cable in BACKWARDS if the port is behind a machine you can't pull out and you're groping in the dark with the cable. Bad things happen here, usually shorting out the firewire port on the host, since firewire carries a lot of power and doesn't like being hooked up wrong.
  • Re:1394 For Life (Score:2, Informative)

    by DarthStrydre ( 685032 ) on Monday June 16, 2008 @01:12PM (#23812625)

    Also, FireWire plugs aren't pseudo-symmetrical like USB plugs. Not having to try several times until you figure out which side is up is a big plus.
    Agreed for 6-pin connectors. MAJOR disagreement about the 4-pin connectors. Unlike most connectors in the DB line, USB, mini USB, even HDMI, the 4-pin firewire connector does not auto-seat if you get it close enough. Trying to blindly connect a cable to one is more difficult than with any other common connector, in my opinion. With USB, you have to flip it around half the time. With 4-pin firewire, even if the orientation is correct it rarely seats.

    Also, firewire support in Windows is terrible, and there are a bunch of non-compliant firewire controller chips in circulation, which pretty much doomed the standard except for DV cams on the Windows side. Delayed Write Failure anyone? I've found that replacing the Windows drivers completely with the free ones from Unibrain takes care of this issue on one laptop I have... Other people have other Voodoo that works. Sometimes...

    I love firewire when it works.

  • Re:1394 For Life (Score:2, Informative)

    by DarthStrydre ( 685032 ) on Monday June 16, 2008 @01:39PM (#23812945)

    It's not USB2 or SATA that cannibalized Firewire's supposed market... It's Ethernet.
    I have to disagree. Ethernet is good, but it does not support isochronous transfer. With FireWire, isochronous mode creates dedicated timeslots for devices that produce steady streams of data. With DV and DCAM camera interfaces and multiple ADAC audio interfaces you can theoretically load the bus to very close to 400 Mbit, and never have to worry about collisions, jitter, indeterminism, latency, or packet loss.

    When you compare FireWire to Ethernet, I'm assuming you are only referring to the SBP-2 protocol of FireWire (which is what hard disks use), which is an asynchronous mode.

    Note that if you have 350 Mbit of isochronous traffic and use an SBP-2 hard drive on the same bus... the isochronous data stream WILL NOT be affected - the hard drive will just get the reduced bandwidth left in the empty spaces in the schedule. You cannot say the same for Ethernet. Ethernet QoS might be able to reserve bandwidth, but it does nothing for jitter or latency.
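
    (Rough C arithmetic for that scenario, using approximate figures: S400 is nominally 400 Mbit/s and the bus runs one isochronous cycle every 125 us, i.e. 8000 cycles a second. Reserve 350 Mbit of streams and SBP-2 simply gets whatever is left; the streams' per-cycle slots don't move.)

        #include <stdio.h>

        /* Rough numbers only: S400 nominally 400 Mbit/s, one isochronous
         * cycle every 125 us (8000 cycles per second). */
        int main(void)
        {
            const double bus_mbit     = 400.0;
            const double cycles_per_s = 8000.0;

            double reserved_mbit  = 350.0;  /* the 350 Mbit of streams from the comment above */
            double bits_per_cycle = reserved_mbit * 1e6 / cycles_per_s;
            double async_mbit     = bus_mbit - reserved_mbit;

            printf("guaranteed per 125 us cycle: ~%.0f bits of stream data\n", bits_per_cycle);
            printf("left for SBP-2 / async traffic: ~%.0f Mbit/s, with no added jitter on the streams\n",
                   async_mbit);
            return 0;
        }
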
  • Re:1394 For Life (Score:4, Informative)

    by pjrc ( 134994 ) <paul@pjrc.com> on Monday June 16, 2008 @02:56PM (#23813881) Homepage Journal
    It is true that all downstream devices on a single host controller share bandwidth. But USB control transfers to enumerate devices are such a tiny fraction of the available bandwidth that their impact is virtually zero.

    The thing that does have a big impact is using 12 mbps or 1.5 mbps devices in a way that lets them hog the bus. Ideally, all non-high-speed transfers would be converted to 480 mbps.

    You might imagine a motherboard with 10 USB ports could communicate with all 10 independently. But that is rarely the case. Usually they all share the same bandwidth. You might expect there would be buffering for 12 and 1.5 mbps transfers, so they wouldn't hog the bus from the other 9 ports. That too is rarely the case.

    USB 2.0 hubs do buffer and convert 12 and 1.5 mbps transfers to 480 mbps. Again, you might expect a 4-port hub to properly allow 4 slow devices to share. That is sometimes the case. Better hubs are multi-TT (TT = transaction translator, basically the USB term for this buffering). But many hubs have only a single TT, which means only one downstream 12 mbps or 1.5 mbps device can talk at once, and any others on that hub must wait until the single buffer is available.

    If the USB 2.0 spec had required all hubs to include a TT on every downstream port, and had the "root hub" (on the motherboard which provides many ports with shared bandwidth) been required to implement TTs on every port, there would have been much higher levels of satisfaction with USB 2.0.

    Then when Compaq, HP, Intel, Lucent, Microsoft, NEC and Philips wrote the USB 2.0 spec, they apparently believed 480 mbps would soon replace 12 mbps in most devices. Requiring many TTs probably seemed excessively costly just to support legacy devices that would soon become obsolete. What instead happened is that only certain devices requiring high speed implemented 480 mbps; almost all others stayed at 12 mbps. Most devices that implement 12 mbps use a 48 MHz clock internally, and many low-cost silicon fabs really only support clocks up to about 60-100 MHz (especially if the chip's process supports the extra polysilicon layers for implementing flash or EEPROM).

    Let's hope they learn their lesson and require TTs in ALL cases where 480, 12 and 1.5 mbps devices could share the upstream bandwidth, especially on motherboards. If they do, USB 3.0 will probably be very nice, providing so much more shared bandwidth than necessary that hardly anybody will care if it's shared. But if they skimp and allow any sharing, anywhere, without TTs - the result will probably be a lot like USB 2.0 - very fast, but sometimes you plug in another device and all of a sudden it sucks.
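
    (A quick C sketch of the arithmetic behind the single-TT vs multi-TT complaint -- idealized numbers of mine, ignoring protocol overhead: four full-speed devices behind a single-TT hub squeeze through one shared 12 Mbit/s translated segment, while a multi-TT hub gives each port its own, and even then the upstream 480 Mbit/s link barely notices.)

        #include <stdio.h>

        int main(void)
        {
            const double full_speed_mbit = 12.0;   /* per full-speed segment, ignoring overhead */
            const double high_speed_mbit = 480.0;  /* shared upstream link */
            const int    devices         = 4;      /* e.g. four 12 Mbit/s devices on one hub */

            double single_tt_each = full_speed_mbit / devices;  /* one TT = one shared segment */
            double multi_tt_each  = full_speed_mbit;            /* one TT per port */
            double upstream_used  = devices * full_speed_mbit;  /* still tiny next to 480 */

            printf("single-TT hub: ~%.1f Mbit/s per device\n", single_tt_each);
            printf("multi-TT hub : ~%.1f Mbit/s per device (%.1f%% of the upstream link)\n",
                   multi_tt_each, 100.0 * upstream_used / high_speed_mbit);
            return 0;
        }
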
