Clash of the Titans Over USB 3.0 Specification Process
Ian Lamont writes "Nvidia and other chip designers are accusing Intel of 'illegally restraining trade' in a dispute over the USB 3.0 specification. The dispute has prompted Nvidia, AMD, Via, and SiS to establish a rival standard for the USB 3.0 host controller. An Intel spokesman denies that the company is withholding the USB specification, or that USB 3.0 'borrows technology heavily' from the PCI Special Interest Group. He does, however, say that Intel won't release an unfinished Intel host controller spec until it's ready, as that would lead to incompatible hardware."
This is only a concern to driver writers (Score:5, Informative)
This does NOT affect users at all, only driver writers.
What is being forked is the USB driver interface, which does not affect device compatibility at all.
As mentioned above, there were two driver interfaces for the original USB standard, and the only people who knew were driver writers and nerds compiling their own custom kernels.
This is blown way out of proportion and doesn't affect 99.999% of us. Nothing to see here, move along....
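For the "nerds compiling their own kernel" case, the split is only visible as which host-controller driver binds to your chipset. A toy Python sketch of that idea (the lspci-style strings and the detection logic here are purely illustrative, though uhci_hcd/ohci_hcd/ehci_hcd are the real Linux module names):

```python
# Toy illustration, not a real detection tool: the driver-interface split
# only shows up as which kernel driver handles the controller.
# The sample strings below mimic "lspci" output; real output varies.

SAMPLE_LSPCI = [
    "00:1d.0 USB Controller: Intel Corporation 82801G UHCI Controller #1",
    "00:02.0 USB Controller: NVIDIA Corporation OHCI USB 1.1 Controller",
    "00:1d.7 USB Controller: Intel Corporation 82801G EHCI Controller",
]

# Interface name -> Linux host-controller driver module.
DRIVERS = {"UHCI": "uhci_hcd", "OHCI": "ohci_hcd", "EHCI": "ehci_hcd"}

def controller_driver(lspci_line):
    """Return the host-controller driver a line implies, or None."""
    for iface, driver in DRIVERS.items():
        if iface in lspci_line:
            return driver
    return None

drivers = [controller_driver(line) for line in SAMPLE_LSPCI]
print(drivers)  # ['uhci_hcd', 'ohci_hcd', 'ehci_hcd']
```

Whichever driver loads, the devices plugged into the ports behave identically, which is the point of the comment above.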
Re:So... (Score:2, Informative)
I think we can be fairly confident if there were USB-AMD and USB-Intel, that:
All other things being equal (no major bugs in either spec), USB-Intel would be the clear winner if the two standards came out at about the same time, due to Intel's influence, name recognition, prestige, etc. The 5,000-pound gorilla flattens the 200-pound monkey in one step.
USB-AMD could win, but only if it came out far enough in advance for products to start being designed around it.
There's a limited market for devices at speeds even higher than USB 2.0, and it's unlikely to support two standards the way the DVD market supported +R and -R.
Naturally, if both standards survived, it would be due to devices including support for both variants of USB 3.
Re:1394 For Life (Score:5, Informative)
The entire royalty is something like $0.25 per device, and Apple only gets a portion of that.
The cost is in the smarts: each device requires a more complicated controller and an additional chip.
Re:1394 For Life (Score:5, Informative)
In short: FireWire is faster and puts far less load on the target machine. The downside is that the initial cost is higher; I find it pays for itself pretty quickly.
Re:This is only a concern to driver writers (Score:3, Informative)
It is in the interests of neither the consumer nor the standard to have multiple host-controller interfaces. You may care to muse on why it might be in Intel's interests, to the detriment of everyone else.
Re:1394 For Life (Score:3, Informative)
Admit it, once you have access to the computer, it's game over. Unless you encrypt the hard drive. The whole thing. And your RAM as well. And use EFI. Encrypted...
Re:So... (Score:4, Informative)
But according to the USB spec both behaviours are correct since the device can't make any assumptions about what overheads exist on the host.
I can't find the reference to device-visible differences between UHCI and OHCI, and in any case it was a very rare corner case. I did find this presentation by Intel showing OHCI and UHCI performing almost identically, despite the fact that OHCI controllers basically do the USB protocol in hardware, while UHCI is just a bus-master DMA engine attached to a serial interface, with the protocol done in software.
http://www.usb.org/developers/presentations/pres0598/bulkperf.ppt [usb.org]
With USB 2.0 there was a push to a unified host controller spec called EHCI. From what I can tell, this spat means there will possibly be two rival host controller specs, because Intel hasn't published its spec in time for other people to implement it. But I don't think that will fork the wire protocol; I think it just means that OSes will need two new host-controller drivers (as with USB 1.0) rather than one (as with USB 2.0).
You could argue that UHCI was a good thing since it uses less hardware and performs about the same.
Incidentally, Wikipedia writes this up along the lines of the "good open standards vs. vile proprietary standards" meme, which seems a bit unfair. Both OHCI and UHCI are based on published specifications that are freely available. I don't know whether you need to pay a license fee to implement either or both of them; I actually think you don't, since USB was successful because, unlike FireWire, it didn't require a per-port fee when it was introduced.
http://en.wikipedia.org/wiki/OHCI [wikipedia.org]
The difference seems to me more like a software engineer's view of the world (Microsoft wants to do it all in hardware, like OHCI) vs. a hardware engineer's view of the world (Intel says do it all in software, with UHCI).
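To make that hardware-vs-software split concrete, here is a heavily simplified Python sketch of the UHCI side: software lays out a 1024-entry frame list (one entry per 1 ms frame) that the controller merely walks via DMA. The class and field names are illustrative, not the actual register or descriptor layout; OHCI moves most of this walking into the controller itself.

```python
# Highly simplified sketch of the UHCI split: hardware follows one pointer
# out of a 1024-entry frame list once per 1 ms frame; software builds
# everything those pointers reach.

FRAME_LIST_SIZE = 1024  # per the UHCI spec: one entry per 1 ms frame

class TransferDescriptor:
    def __init__(self, name, next_td=None):
        self.name = name        # which transfer this TD belongs to
        self.next_td = next_td  # link pointer the hardware follows

def build_frame_list(iso_td_for_frame, bulk_queue):
    """Software's job under UHCI: lay out the whole schedule in memory.

    Each frame entry chains any isochronous TD for that frame in front of
    the shared bulk/control queue (names here are illustrative)."""
    frame_list = []
    for frame in range(FRAME_LIST_SIZE):
        iso = iso_td_for_frame(frame)
        if iso is not None:
            iso.next_td = bulk_queue  # iso first, then leftover-time work
            frame_list.append(iso)
        else:
            frame_list.append(bulk_queue)
    return frame_list

# One isochronous TD every 8th frame; bulk work fills every frame's slack.
bulk = TransferDescriptor("bulk-out")
frames = build_frame_list(
    lambda f: TransferDescriptor(f"iso-{f}") if f % 8 == 0 else None, bulk)
print(len(frames))                     # 1024
print(frames[0].name, frames[1].name)  # iso-0 bulk-out
```

The design trade-off in the comment above falls out of this: under UHCI the CPU rebuilds and repairs these chains constantly, while an OHCI-style controller would traverse richer descriptor lists on its own.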
Re:1394 For Life (Score:5, Informative)
FireWire's main advantage now is that it is a point-to-point mechanism, not a bus. USB suffers because every so often the host must interrupt things to discover new devices, which can slow down large block transfers quite a bit.
Re:1394 For Life (Score:3, Informative)
Sorry for your BSOD, but that's not the device's problem; it has nothing to do with USB or FireWire. Linux and Macintosh do not have ANY issue with hot-swapping FireWire.
And I work with FireWire and USB storage many times every single day at work, so I believe I have a good sample to speak from.
I can say I've seen FireWire-damaged devices, though... some of the cheap FireWire port end cages are stamped from split metal and can spread if forced. This lets you plug in a FireWire cable BACKWARDS if it's behind a machine you can't pull out and you're groping in the dark with the cable. Bad things happen then, usually the FireWire port on the host shorting out, since FireWire carries a lot of power and doesn't like being hooked up wrong.
Re:1394 For Life (Score:2, Informative)
Also, FireWire support in Windows is terrible, and there are a bunch of non-compliant FireWire controller chips in circulation, which pretty much doomed the standard on the Windows side except for DV cams. Delayed Write Failure, anyone? I've found that completely replacing the Windows drivers with the free ones from Unibrain takes care of this issue on one laptop I have... other people have other voodoo that works. Sometimes...
I love FireWire when it works.
Re:1394 For Life (Score:2, Informative)
When you compare FireWire to Ethernet, I'm assuming you are referring only to FireWire's SBP-2 protocol (which is what hard disks use), which is an asynchronous mode.
Note that if you have 350 Mbps of isochronous traffic and use an SBP-2 hard drive on the same bus, the isochronous data stream WILL NOT be affected; the hard drive just gets reduced bandwidth in the empty spaces in the schedule. You cannot say the same for Ethernet: Ethernet QoS might be able to reserve bandwidth, but it does nothing for jitter or latency.
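As a back-of-the-envelope illustration of that scheduling, here is a simplified Python model of the 125 µs IEEE 1394 isochronous cycle. The numbers are deliberately rough: arbitration gaps and packet overheads are ignored, and S400 is treated as a flat 400 Mbps.

```python
# Rough model of FireWire's cycle-based scheduling: an isochronous stream
# reserves its bits in every 125 us cycle up front; asynchronous traffic
# (e.g. an SBP-2 disk) only ever gets the leftover space.
# Integer microseconds keep the arithmetic exact.

CYCLE_US = 125             # IEEE 1394 isochronous cycle length
BUS_BPS = 400_000_000      # treat S400 as a flat 400 Mbps (simplification)

def per_cycle_budget(iso_bps):
    """Bits per cycle: (reserved isochronous bits, bits left for async)."""
    total_bits = BUS_BPS * CYCLE_US // 1_000_000
    iso_bits = iso_bps * CYCLE_US // 1_000_000  # reserved, never preempted
    return iso_bits, total_bits - iso_bits

iso_bits, async_bits = per_cycle_budget(350_000_000)
print(iso_bits, async_bits)  # 43750 6250 -> the disk is left ~50 Mbps
```

The asymmetry is the point: adding the disk changes `async_bits`, never `iso_bits`, which is what best-effort Ethernet cannot guarantee.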
Re:1394 For Life (Score:4, Informative)
The thing that does have a big impact is 12 Mbps or 1.5 Mbps devices being used in a way that hogs the bus. Ideally, all non-high-speed transfers would be converted to 480 Mbps.
You might imagine a motherboard with 10 USB ports could communicate with all 10 independently, but that is rarely the case; usually they all share the same bandwidth. You might also expect buffering for 12 and 1.5 Mbps transfers, so they wouldn't hog the bus from the other 9 ports. That too is rarely the case.
USB 2.0 hubs do buffer and convert 12 and 1.5 Mbps transfers to 480 Mbps. Again, you might expect a 4-port hub to properly let 4 slow devices share. That is sometimes the case: better hubs are multi-TT (a transaction translator being, basically, the USB term for this buffer). But many hubs have only a single TT, which means only one downstream 12 or 1.5 Mbps device can talk at a time, and any others on that hub must wait until the single buffer is free.
If the USB 2.0 spec had required all hubs to include a TT on every downstream port, and had required the "root hub" (the part of the motherboard that provides many ports with shared bandwidth) to implement TTs on every port, there would have been much higher satisfaction with USB 2.0.
When Compaq, HP, Intel, Lucent, Microsoft, NEC and Philips wrote the USB 2.0 spec, they apparently believed 480 Mbps would soon replace 12 Mbps in most devices. Requiring many TTs probably seemed excessively costly just to support legacy devices that would soon be obsolete. What happened instead is that only the devices that actually needed high speed implemented 480 Mbps; almost all others stayed at 12 Mbps. Most devices that implement 12 Mbps use a 48 MHz clock internally, and many low-cost silicon processes really only support clocks up to about 60-100 MHz (especially if the process includes the extra polysilicon layers for implementing flash or EEPROM).
Let's hope they learn their lesson and require TTs in ALL cases where 480, 12 and 1.5 Mbps devices could share upstream bandwidth, especially on motherboards. If they do, USB 3.0 will probably be very nice, providing so much more shared bandwidth than necessary that hardly anybody will care that it's shared. But if they skimp and allow any sharing, anywhere, without TTs, the result will probably be a lot like USB 2.0: very fast, but sometimes you plug in another device and all of a sudden it sucks.
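A toy Python model of why the single-TT vs multi-TT difference matters. Real hubs interleave at the transaction level rather than running one device to completion, so this only captures the aggregate effect, and the function names are illustrative.

```python
import math

# Toy model: N full-speed devices each want to move the same number of
# bits, but only `tts` transaction translators are available to serve
# them concurrently. With a single TT the devices effectively serialize.

FULL_SPEED_BPS = 12_000_000   # a 12 Mbps (full-speed) downstream device

def transfer_seconds(n_devices, bits_each, tts):
    """Seconds until all devices finish, given `tts` translators."""
    per_device = bits_each / FULL_SPEED_BPS
    # Devices beyond the TT count must wait their turn for a translator.
    rounds = math.ceil(n_devices / tts)
    return rounds * per_device

print(transfer_seconds(4, 12_000_000, tts=1))  # 4.0  (single-TT hub)
print(transfer_seconds(4, 12_000_000, tts=4))  # 1.0  (multi-TT hub)
```

With one TT per port, each slow device gets its full 12 Mbps in parallel against the 480 Mbps upstream link; with a single shared TT, plugging in one more slow device slows all the others, which is exactly the "plug in another device and all of a sudden it sucks" experience.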