Terabit Ethernet Is Dead, For Now

Nerval's Lobster writes "Sorry, everybody: terabit Ethernet looks like it will have to wait a while longer. The IEEE 802.3 Industry Connections Higher Speed Ethernet Consensus group met this week in Geneva, Switzerland, with attendees concluding—almost to a man—that 400 Gbits/s should be the next step in the evolution of Ethernet. A straw poll at its conclusion found that 61 of the 62 attendees who voted supported 400 Gbits/s as the basis for the near-term 'call for interest,' or CFI. The bandwidth call to arms was sounded in a July IEEE report, which concluded that, if current trends continue, networks will need to support capacity requirements of 1 terabit per second in 2015 and 10 terabits per second by 2020. By 2015 there will be nearly 15 billion fixed and mobile networked devices and machine-to-machine connections."

  • by rufty_tufty ( 888596 ) on Thursday September 27, 2012 @06:16AM (#41475617) Homepage

    We need terabit Ethernet NOW, not in a decade.

    You know, my 5-year-old nephew keeps confusing "need" and "want" too.
    How much are you prepared to pay for this desire? If it cost, say, four times more per bit to implement terabit with current technology, would you still want it?

  • Ya well (Score:5, Insightful)

    by Sycraft-fu ( 314770 ) on Thursday September 27, 2012 @06:55AM (#41475755)

    You may discover that you can't have what you want. There are real physical limitations to deal with. One issue with copper Ethernet is staying compatible with older-style wiring: sticking with 8P8C connectors and UTP cable is becoming a real problem at higher speeds. At some point we may have to accept a break where new standards require a different kind of jack and connector.

    Also, "data center only" devices aren't how things work. You care what data centers use because you connect to them, and there can be big advantages in cost, simplicity, and latency in sticking to one spec. So 40gbps or 400gbps could well be useful. Maybe you don't see it at your desktop, but that doesn't mean it doesn't get used in your company's switching infrastructure.

    Also, each order of magnitude you go up with Ethernet makes the next one matter less. It's going to be a while before there's any real need for 10gbps to the desktop. 1gbps is plenty fast for most things: you can use resources over a 1gbps link as if they were local and not see much of a performance penalty (at that point latency is a bigger issue than throughput). Consider that the original SATA spec is only 1.5gbps.

    As for 100gbps, it'll take major changes in what we do before there's any need for that at the desktop, if ever. 10gbps is an amazing amount of bandwidth for a single computer: enough for multiple uncompressed 1080p60 video streams, and almost enough for an uncompressed 4K stream (see the quick sanity check at the end of this comment).

    Big bandwidth is more of a data center/ISP need than a desktop need. 1gbps to the desktop is fine and will continue to be fine for quite some time. However, to deliver that, you need more than 1gbps upstream of the desktop.
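
    Those video numbers are easy to sanity-check. A minimal back-of-the-envelope sketch in Python, assuming 24 bits per pixel and ignoring blanking and protocol overhead (our assumptions, not the commenter's):

        # Raw bandwidth of uncompressed video, and the SATA comparison above.
        def video_gbps(width, height, fps, bits_per_pixel=24):
            """Uncompressed video bandwidth in Gbit/s (decimal)."""
            return width * height * bits_per_pixel * fps / 1e9

        hd_1080p60 = video_gbps(1920, 1080, 60)   # ~2.99 Gbit/s
        uhd_4k60   = video_gbps(3840, 2160, 60)   # ~11.94 Gbit/s
        print(f"1080p60: {hd_1080p60:.2f} Gbit/s")
        print(f"4K60:    {uhd_4k60:.2f} Gbit/s")
        print(f"1080p60 streams on a 10 Gbit/s link: {int(10 / hd_1080p60)}")

        # SATA 1.0's 1.5 Gbit/s line rate uses 8b/10b encoding, so its
        # payload rate is ~1.2 Gbit/s -- the same ballpark as gigabit Ethernet.
        print(f"SATA 1.0 payload: {1.5 * 8 / 10:.1f} Gbit/s")

    Three 1080p60 streams fit on 10 Gbit/s with room to spare, while 4K60 at ~12 Gbit/s just overshoots it, matching the "almost enough" claim.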

  • by ReallyEvilCanine ( 991886 ) on Thursday September 27, 2012 @07:03AM (#41475791) Homepage
    As a parent of a young one I also hear this 500×/day. But what's the cost of "terabit now and you're safe for a decade" versus "400Gb now, then rewire and replace all your gear in 3-5 years for 750Gb (if there isn't a standards war you have to gamble on), and then do it all over again in another 3-5 years for 1.1Tb"? Because that's the kind of creep we've seen since the early days of Token Ring and then 10BaseT. Manufacturers certainly want the step-by-step option, but the admins and engineers? Not so much (a toy version of the math is sketched below).
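    That trade-off reads naturally as a small cost model. The numbers below are entirely hypothetical, invented for illustration; only the shape of the argument (one big spend versus repeated forklift upgrades) comes from the comment:

        # Hypothetical upgrade-path comparison; all unit costs are invented.
        terabit_now = 3.0                  # one big spend today
        steps = {"400Gb": 1.0, "750Gb": 1.2, "1.1Tb": 1.4}  # gear per step

        recabling = 0.5                    # labor + downtime per rewire
        stepwise = sum(steps.values()) + recabling * (len(steps) - 1)

        print(f"Terabit up front: {terabit_now:.1f} units")
        print(f"Step-by-step:     {stepwise:.1f} units")
        # With these made-up figures the step path costs 4.6 units vs 3.0,
        # even before pricing in the risk of betting on a standards war.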
  • by Viol8 ( 599362 ) on Thursday September 27, 2012 @07:13AM (#41475819) Homepage

    Hardly any BaseT systems bother with the CSMA/CD mechanism that original Ethernet had; in fact, it's more like a serial protocol than a broadcast "in the ether" one now. Why not just give it a new name?

  • Re:Sigh (Score:4, Insightful)

    by burning-toast ( 925667 ) on Thursday September 27, 2012 @07:21AM (#41475853)

    OTOH, these standards, by the sheer fact that they reference 1Tb/s needs, are most certainly relevant to the backhaul providers and not to any normal business outside that group. Fractional and non-power-of-10 speeds have been common in those networks since well before the power-of-10 thing came about. Once the rest of the technology catches up and makes the power-of-10 step feasible, the standard "commodity" equipment picks it up (primarily for marketing reasons, IMO). Powers of 10 are convenient for the math, but frequently mean absolutely nothing to the backhaul guys (the early adopters).

    Those businesses that purchase the regular "commodity" power-of-10 equipment really should be set for a while with the previously commoditised 10Gb links. They are performant, relatively cheap, widely available, run across the nation, and hard to saturate with the equipment that plugs into either side. I've worked with 8x10Gb multiplexed cross-country low-latency fiber WAN links; that is a ludicrous amount of bandwidth unless you are routing other networks like a backhaul provider (see the rough numbers at the end of this comment). I would struggle to name a normal business unable to make do with 10Gb links for lack of bandwidth (for the immediate future). The needs really are different between these markets.

    As an aside, fiber may commonly be sold in 100m lengths, but that has nothing to do with the distance the light will actually carry at the rated speed. Some fiber/wavelength pairs are only good for a few feet; others go for kilometers, but not with the same NICs, fiber, switches, or patch panels. 100m is a really shitty (too short) standard for datacenter use anyway. Frequently we will get two cages in a datacenter at different times, and they end up farther than 100m apart, making copper irrelevant for that use.

    Change is incremental like ripples, but big changes come in waves. Backhaul wants the ripples; everyone else wants the wave. I say let them have their ripples and pay for the development of the waves. It saves both groups of consumers money, so long as there aren't TOO many ripples per wave.
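
    For a sense of scale on that 8x10Gb figure, a rough transfer-time calculation (this assumes the full line rate is usable; real links lose some capacity to framing and protocol overhead):

        # Aggregate capacity of eight multiplexed 10 Gbit/s waves.
        link_gbps = 8 * 10                 # 80 Gbit/s
        terabyte_bits = 1e12 * 8           # 1 TB (decimal) in bits

        seconds = terabyte_bits / (link_gbps * 1e9)
        print(f"1 TB across the link: {seconds:.0f} s")   # ~100 s

    Moving a full terabyte in under two minutes is far beyond what a single business's servers typically generate, which is the commenter's point.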

    - Toast

  • by Shinobi ( 19308 ) on Thursday September 27, 2012 @09:11AM (#41476607)

    "Only high performing computing and Virtualization servers use more than Gigabit links today, but TenGigabit bundles and higher bandwidth links are used on almost every large network on core connections and core to distribution."

    I know quite a few non-science professional fields that saturate gigabit to each desktop and would go for InfiniBand or 10Gig-E if it were viable outside of big corporations. Editing and compositing movies at HD or greater resolutions shuffles HUUUUGE amounts of data around, and you need a decent turnaround time for the data (rough numbers below)...
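
    To put numbers on that, a small sketch, assuming uncompressed 1080p30 at 24 bits per pixel (our assumption; the commenter doesn't name a format):

        # Moving one hour of uncompressed 1080p30 footage over the LAN.
        gbps_1080p30 = 1920 * 1080 * 24 * 30 / 1e9   # ~1.49 Gbit/s sustained

        hour_bits = gbps_1080p30 * 1e9 * 3600        # bits in an hour of footage
        for link, gbps in [("gigabit", 1.0), ("10Gig-E", 10.0)]:
            minutes = hour_bits / (gbps * 1e9) / 60
            print(f"{link}: {minutes:.0f} min to transfer")   # ~90 vs ~9 min

    Note the stream itself runs at ~1.49 Gbit/s, so a gigabit link cannot even play that footage back in real time, let alone move it around quickly.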

  • Re:Ya well (Score:4, Insightful)

    by petermgreen ( 876956 ) <plugwash@nOSpam.p10link.net> on Thursday September 27, 2012 @09:14AM (#41476641) Homepage

    Just because the engineers have pulled two rabbits out of hats, managing to run first 1 and then 10 gigabit over slightly improved versions of cheap twisted-pair cable with 8P8C connectors, doesn't mean they will be able to do it again. (At present, AFAICT, transceiver costs mean that for short 10 gigabit runs you are better off with SFP+ direct attach anyway.)
