Terabit Ethernet Is Dead, For Now
Nerval's Lobster writes "Sorry, everybody: terabit Ethernet looks like it will have to wait a while longer. The IEEE 802.3 Industry Connections Higher Speed Ethernet Consensus group met this week in Geneva, Switzerland, with attendees concluding—almost to a man—that 400 Gbits/s should be the next step in the evolution of Ethernet. A straw poll at its conclusion found that 61 of the 62 attendees that voted supported 400 Gbits/s as the basis for the near term 'call for interest,' or CFI. The bandwidth call to arms was sounded by a July report by the IEEE, which concluded that, if current trends continue, networks will need to support capacity requirements of 1 terabit per second in 2015 and 10 terabits per second by 2020. In 2015 there will be nearly 15 billion fixed and mobile-networked devices and machine-to-machine connections."
Damn the summary (Score:1, Interesting)
I'd love to see the IEEE report that attempts to guesstimate the needs of future Ethernet users.
We need terabit Ethernet NOW, not in a decade.
Re:Damn the summary (Score:5, Insightful)
We need terabit Ethernet NOW, not in a decade.
You know my 5 year old nephew keeps confusing need and want too.
How much are you prepared to pay for this desire? If it costs, say, four times as much per bit to implement terabit with current technology, do you still want it?
Re:Damn the summary (Score:5, Interesting)
And what exactly is he doing over Ethernet that needs that much speed? I'm only just now looking at upgrading our small business network to gigabit. A couple of years ago the cost of a 48-port gigabit switch was pretty high, but now it's very reasonable.
Re: (Score:2)
You know these port speeds are not meant to be used on access switches, right? At least in the beginning there is no need to. Only high-performance computing and virtualization servers use more than gigabit links today, but 10-gigabit bundles and higher-bandwidth links are used on almost every large network for core connections and core-to-distribution.
Re: (Score:2)
Sure, so options are already available in the high end for people who need it. For the original poster to say anybody really "needs" this struck me as a bit much.
Re: (Score:2)
I don't know about that: working in a company that shifts large amounts of data around its internal network, having fast network access to the file servers is kind of desirable. Or at least access as fast as the computers can actually manage.
Re: (Score:1)
Exactly my point, which today means more than gigabit speeds but no more than 10Gb. However, we do need higher-speed technology for the core infrastructure of whatever core networks we are using. Call it enterprise core, service provider, or whatever you're running.
Re: (Score:3)
If you want 1Gb to 10Gb to your desktops you will want 10 times that in the core of your network where that file storage lives.
Re:Damn the summary (Score:4, Insightful)
"Only high-performance computing and virtualization servers use more than gigabit links today, but 10-gigabit bundles and higher-bandwidth links are used on almost every large network for core connections and core-to-distribution."
I know quite a few non-science professional fields that saturate gigabit to each desktop and would go for InfiniBand, or 10Gig-E if it were viable outside of big corps. Editing/compositing of HD or greater-resolution movies shuffles HUUUUGE amounts of data around, and you need a decent turnaround time for the data....
dual port 10GB runs about a grand (Score:2)
for an addin card. Which is interesting since the actual chip is something like $90 from Intel.
Re: (Score:2)
That is about right. Labor and transportation costs are usually more than materials when you are considering the cost of a product. Figure another $50 to $100 for all the other components, packaging, and labelling, and you are probably around $200. Add in the manufacturer's mark-up (3-4x) to pay for the factory, overhead, and shipping; the wholesalers will want a 10-30% cut; and finally your retailer takes a profit of around 10%.
Also consider that Intel's $90 part started out as pennies worth of sand.
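As a rough sketch of that markup chain (all figures hypothetical, taken from the guesses above):

```python
# Rough cost stack-up for a dual-port 10GbE card, using the guessed figures
# above: $90 chip, ~$100 of other components, 3-4x manufacturer mark-up,
# a 10-30% wholesaler cut, and ~10% retail margin. All numbers hypothetical.
chip = 90.0
other_parts = 100.0
bom = chip + other_parts                # bill of materials: ~$190

factory_price = bom * 3.5               # manufacturer mark-up (factory, overhead)
wholesale_price = factory_price * 1.20  # wholesaler takes ~20%
retail_price = wholesale_price * 1.10   # retailer adds ~10%

# Lands within shouting distance of the "about a grand" street price.
print(f"BOM ${bom:.0f} -> retail ${retail_price:.0f}")
```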
Re: (Score:2)
That's the point I'm trying to make with the GP. There is more to cost than raw materials.
Re: (Score:2)
It's not just the cards, it's switches and cabling too, and the work to put it all in, and all the admin work to ensure that it will work too.
Re: (Score:2)
Yeah, but when you add it all together, cards, switches, cabling, admin work to get it all to work as it should etc, it adds up, fast, especially when you realise that smaller studios don't have the same ability to soak up the downtime from an infrastructure upgrade that a larger studio can, it's bad enough when they have to upgrade machines and software alone.
Re: (Score:1)
As you say, we have 1G and 10G in use today, so we'll soon need more. But 100G is ALREADY a standard. I'm sure we'll need and get 1T eventually, but 100G should be enough for almost everyone for a few years.
BTW, speeds of many Tbit/sec were demonstrated in the lab a good 10 years ago using optical dense wavelength multiplexing, but full product development was too expensive for what the market needed then. Also, it was aimed at large telcos, not at the cheap plug-in connection that Ethernet implies.
Re: (Score:1)
There are scientific uses for such technology. Often technology is more the limiting factor than money.
Re: (Score:2)
Yes, however, I think for terabit Ethernet there are other factors besides money, like the speed to process and store the data being sent. If performance is that big of an issue, you are not going to trust your information to TCP/IP over a twisted pair cable. You would use a different type of bus for that.
Re: (Score:1)
Yes, however, I think for terabit Ethernet there are other factors besides money, like the speed to process and store the data being sent. If performance is that big of an issue, you are not going to trust your information to TCP/IP over a twisted pair cable. You would use a different type of bus for that.
TCP/IP is not the only protocol that is used on Ethernet. There are plenty of performance-critical applications that use Ethernet without the IP layer.
Re: (Score:2)
If performance is that big of an issue, you are not going to trust your information to TCP/IP over a twisted pair cable.
And I suppose no one will ever need more than 640k of RAM. Why would you make such a silly statement?
Re: (Score:2)
SANs, NASs, transferring large video files?
Re: (Score:2)
Re: (Score:2)
Many distributed computations are network bound or require a lot of manual optimization. The faster the network, the more speedup you get from distribution. And that kind of computation is useful even just for video transcoding. Network speeds that become comparable to bus speeds really change how people can develop parallel software. But more likely, we need 1Tb networks soon just to keep up with, and support, CPUs and GPUs.
Re: (Score:2)
And what exactly is he doing over Ethernet that needs that much speed? I'm only just now looking at upgrading our small business network to gigabit. A couple of years ago the cost of a 48-port gigabit switch was pretty high, but now it's very reasonable.
You did see this article [slashdot.org], no?
Re: (Score:2)
Hmm.. you make an excellent point. I rescind my original comment.
Re: (Score:2)
This isn't delaying advancement, it's recognising reality.
It won't increase the cost of current technology. Gigabit is already pretty damn cheap. It might slow the advancement of terabit capable tech, it might not.
Note that I'm not saying that we shouldn't keep developing faster tech, but I was saying that currently there is no need for it at intranet level.
HD video chat? If you even want video chat in the first place, I don't see the benefit of HD. Movies sure, video chat.. not so much. I've been setting u
Re: (Score:2)
Why are you assuming "at the internet level"? That would be nice, sure, but there are lots of other uses for ethernet, between end-points that aren't that far apart. Parallel processing, e.g. Not over copper, but, say, over light links. Yes, I'm thinking of using an internet protocol, so that light can be broadcast within the cabinet, and you don't need to match connections. You receive with photocells, and broadcast with any fast light source that's fast enough and bright enough and tunable enough. T
Comment removed (Score:4, Informative)
Re: (Score:2)
Re: (Score:2)
Re: (Score:3)
And then there are individuals like me, with ambition, whose wants are achieved through ambition, and that shiny new car will be mine when I want it, because I have the capabilities to achieve my goals...
So you buy your shiny new car, and then another shiny new car because you want that one too, and then a McMansion because bigger is better, and you need all that extra space to store all your shiny possessions. At the end of the day are you satisfied with your shiny possessions? No, you need more shiny possessions, and your life is centered around a vapid cycle of consumerism while you sacrifice your ethics and free time to attain them.
Re: (Score:2)
This is a question of quality.
Clearly that is something you just don't get.
I like my things to do more and to be better. That's progress. That's why you're not cowering in a cave somewhere. That's why you have your cushy life and a relative certainty that you will even live long enough to enjoy it.
Re: (Score:2)
I get quality, but the level of greed and ambition for the example "shiny new car" is what is on display.
Re: (Score:1)
Re: (Score:2)
Re: (Score:2)
(p.s., GrammerNazi time... The word is spelled "crumb". FTFY)
The word is spelled grammar. Also, to be more precise, you are being a spelling Nazi, not a grammar Nazi.
Re: (Score:2)
You can expect better fuel economy or pollution controls or crash survivability tomorrow.
The idea that progress is inevitable and expected is a well established notion of modern civilization.
Progress is not conspicuous consumption. If you're conflating the two then you're the one that's got the problem and you are in no position to look down on anyone else.
Re:Damn the summary (Score:5, Insightful)
Re: (Score:1)
Manufacturers certainly want the step-by-step option but the admins and engineers? Not so much so.
What about accountants?
Most major expenditures are depreciated over a five year term, and in many jurisdictions you then have to get rid of the fully depreciated thing. If you keep it, then you admit that it still has value—which means that you fibbed about it losing its value and getting tax credits for depreciation.
So it's all very well to say keep it for a decade, but then you have to start fiddling with your tax reporting structure, which can get quite messy for public companies. It's easier to re
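The five-year straight-line write-off described above, sketched with a hypothetical purchase price:

```python
# Straight-line depreciation over a five-year term (hypothetical $100k purchase).
cost = 100_000
years = 5
annual_writeoff = cost / years  # $20k written off each year

# Book value at the end of each year; it reaches zero at year five,
# which is when (per the comment above) the gear is supposed to go.
book_value = [cost - annual_writeoff * y for y in range(years + 1)]
print(book_value)  # [100000.0, 80000.0, 60000.0, 40000.0, 20000.0, 0.0]
```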
Re: (Score:2)
It's a two way street.
While the cost of incrementally upgrading your equipment can be high, if you leap generation(s) you also have risk that the upgrade process will be lost amongst your staff. If that happens, then when [eventually] you do need to upgrade the process may not be as smooth, leading to extended downtime and/or extra costs (lost customers, wrong hardware, infrastructure upgrades, etc.)
The only way to know for sure is to have a cost-benefit analysis and a risk strategy tailored to your business
Re: (Score:2)
For the most part, people who buy the latest and greatest do it to put off obsolescence. Get 400Gb now, then in 10 years get 1Tb.
In 10 years you are most likely going to need to replace your gear anyway.
If they go 1.1Tb now and it takes, say, 15 years to get to 2Tb, you will be running a well-underutilized connection for a long time and will probably need to replace your gear in 10 years anyway. So you spent a lot of money on underutilized gear.
Re:Damn the summary (Score:4, Funny)
Your nephew is probably about as mature as most geeks.
Re: (Score:2)
If it costs, say, four times as much per bit to implement terabit with current technology, do you still want it?
Yes, easily. Some of us pay that now by bonding multiple channels of "current tech" together and at much worse cost/bps.
Re: (Score:2)
We need terabit Ethernet NOW, not in a decade.
My hard drive only writes at about 100MB/s so I'm good actually. Anything backbone-ish can use Fiber.
Re:Damn the summary (Score:5, Informative)
I realised I wasn't being clear about why they can't define the standard now and wait for the technology to catch up.
A standard like this is always a trade-off based on the currently available technology. How fast are your analogue transistors? How much processing power do you have for forward error correction? How fast are your ADCs/DACs for signal shaping? This determines things like which coding schemes you can use. There's also the question of which market needs this and what costs are acceptable: for example, DWDM and all its associated costs are perfectly acceptable when fibre is comparatively expensive, and even though in the 90s that would have been the only way to do 10G, now we have the capability to do it electrically. Designing the spec too soon and guessing is a really bad idea. We don't know how 20nm and lower process nodes are going to behave well enough to predict their characteristics by the time this technology reaches maturity, and to get that wrong is to end up with a standard that either underperforms or is overly expensive.
Put it another way, the processor architecture you would choose to achieve 80MFLOPS in 1976 is very different from the architecture you would choose in 2006. Telecomms has exactly the same concerns.
Re: (Score:2)
Put it another way, the processor architecture you would choose to achieve 80MFLOPS in 1976 is very different from the architecture you would choose in 2006. Telecomms has exactly the same concerns.
Maybe 2006, but not, ironically, necessarily in 2012. The vector processors of the early supercomputers are very much alive in the GPUs of clusters that incorporate GPGPU work for their FLOPS count (which includes 3 of the top 10 right now).
Re: (Score:2)
Yes and no. While there is a popular trend back towards vectorisation there are other things going on that have just as big if not a bigger effect on architecture choice.
Registers are a lot cheaper. Caches are cheaper, and multi-level caches are ubiquitous. Main memory is an order of magnitude slower than the processor which is a problem early supercomputers didn't quite have.
A large part of the problem for modern processors has become the prediction and scheduling of the instructions, which vectorisation helps with
Re: (Score:2)
We need terabit Ethernet NOW, not in a decade.
What on earth for? For point-to-point bridging and interconnects you can already use fibre at multi-terabit [wikipedia.org] speeds. Do you have some need for a multi-point LAN to support this speed that couldn't be addressed by setting up separate switched VLANs?
Re: (Score:2)
In other words (Score:2)
Re:In other words (Score:5, Informative)
No, unbounded latency. It'll happen, just not yet.
Sigh (Score:3)
Powers of 10.
Over copper or fibre.
At copper distances of 100m.
Call it a standard, if you like. Each time you have to upgrade, look to the next power of ten at that specification.
Because although 40Gb/s exists, it's not popular and you won't find it at your average computer supplier, ever. Sure, it's expensive to jump like that, but every technology boost is expensive, and I'd rather we skipped the proprietary-data-center-only junk, left them to their own devices, and specified real-world, millions-of-businesses standards at jumps big enough to a) make a difference, b) be expensive at first but mass-market after (rather than sharing the market with half-assed solutions), and c) run on the same specs as the previous generation (if not the same cables exactly, at least I can replace 100m runs with 100m runs and not worry).
Ya well (Score:5, Insightful)
You may discover that you can't have what you want. There are real physical limitations we have to deal with. One issue, with regards to copper Ethernet, that we are having is keeping something that remains compatible with older style wiring. Sticking with 8P8C and UTP is becoming a real issue for higher speeds. At some point we may have to have a break where new standards require a different kind of jack and connector.
Also in terms of "data center only" devices that isn't how things work. You care what data centers use because you connect to them. There can be big advantages in terms of cost, simplicity, and latency, to stick all on one spec. So 40gbps or 400gbps could well be useful. No, maybe you don't see that to your desktop, but that doesn't mean it doesn't get used in the switching infrastructure in your company.
Also, each order of magnitude you go up with Ethernet makes the next matter less. It's going to be a while before there's any real need for 10gbps to the desktop. 1gbps is just plenty fast enough for most things. You can use things over a 1gbps link like they were on your local system and not see much of a performance penalty (latency is a bigger issue than speed in most things at that point). I mean, consider that the original SATA spec is only 1.5gbps.
As for 100gbps, it'll take some major increases in what we do before there is a need for that to the desktop, if ever. 10gbps is just an amazing amount of bandwidth to a single computer. It is enough to do multiple uncompressed 1080p60 video streams, almost enough to do a 4k uncompressed video stream.
Big bandwidth is more of a data center/ISP need than a desktop need. 1gbps to the desktop is fine and will continue to be fine for quite some time. However to deliver that, you are going to need more than 1gbps above your connection.
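A quick back-of-the-envelope check of the uncompressed-video figures above (assuming 24 bits per pixel and ignoring blanking and protocol overhead):

```python
# Uncompressed video bitrate: width x height x bits-per-pixel x frames-per-second.
def uncompressed_gbps(width, height, fps, bits_per_pixel=24):
    return width * height * bits_per_pixel * fps / 1e9

hd = uncompressed_gbps(1920, 1080, 60)   # 1080p60: ~3 Gb/s
uhd = uncompressed_gbps(3840, 2160, 60)  # 4K60:    ~12 Gb/s

print(f"1080p60 = {hd:.2f} Gb/s; {int(10 // hd)} streams fit in 10 Gb/s")
print(f"4K60    = {uhd:.2f} Gb/s; just over a 10 Gb/s link")
```

This matches the claim: three uncompressed 1080p60 streams fit in 10gbps, while a single uncompressed 4K60 stream just overshoots it.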
Re: (Score:3)
Re:Ya well (Score:4, Insightful)
Just because the engineers have pulled two rabbits out of hats and managed to run first 1 and then 10 gigabit over slightly improved versions of cheap twisted pair cable with the 8P8C connectors (though at present, afaict, the cost of transceiver hardware is such that for short 10 gigabit runs you are better off with SFP+ direct attach) doesn't mean they will be able to do it again.
Re: (Score:1)
40/100Gbps ethernet is still fiber only.
Small correction: 40 Gbps can now be done with twinax and is MUCH cheaper that way. As a matter of fact, I just deployed it at work.
Re: (Score:2)
At the rate we're going, "8P8C" in the terabit+ category will probably end up meaning "cable with 4 pairs of single-mode fibers". When you start talking about terahertz signaling rates, a single fiber starts looking like a pair of copper wires & you start to feel like if it hasn't quite outstripped the final viable limits of what it can do, it's getting pretty damn close.
As a practical matter, wire speeds faster than 10gbps almost *have* to be treated like parallel bundles of fast, but independent, bitstreams
Re: (Score:2)
I'm not sure you are correct there (Score:2)
10gbps is not that fast in terms of computer speed. A single lane of PCIe 3.0 is nearly 10gbps (it is 1Gbytes/sec). Memory is generally in the range of 20Gbytes/sec and up. L1 cache is over 100Gbytes/sec.
I'm not trying to say routing 10gbps is easy or anything, just that you seem to think processors are slower than they are. They deal with pretty vast amounts of data.
Re: (Score:2)
Putting into perspective just how fast 10gbps is from the perspective of a single user, in the time it takes the fastest Intel-architecture AMD64 CPU money can buy today to test a single byte already in a register and determine whether its value is zero or nonzero, an entire byte or more would fly by on the 10gbps wire.
I think you lost something in conversion there. 10gbps is 1.25GBps. Today's fastest Intel Desktop processors have 12 threads all running at 4GHz+ (My desktop is running 4.5GHz). Assuming you aren't using any of the fancy (faster) SIMD instructions and doing a simple test r,0 instruction at the byte level (actually it can do 4 bytes/32-bit words at a time, but I'm not counting that), Sandy Bridge processors can cache, decode, issue, execute, complete and sustain 3 of those per cycle per thread. Reference:
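A quick sanity check of that rebuttal (hypothetical 4 GHz core retiring 3 byte-tests per cycle, as claimed above):

```python
# How many bytes arrive on a 10 Gb/s wire during one simple test instruction?
wire_bytes_per_sec = 10e9 / 8      # 10 Gb/s = 1.25 GB/s
tests_per_sec = 4e9 * 3            # 4 GHz core, 3 test instructions per cycle

bytes_per_test = wire_bytes_per_sec / tests_per_sec
print(f"{bytes_per_test:.3f} bytes per test")  # ~0.104: the CPU easily keeps up
```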
Re: (Score:2)
Oh, and just for an idea: I often copy data (not just simple test-bytes-for-zero) around on my computer, from multiple drives to other drives on my system, at a much higher rate than that -- physical drives, not RAM disks or the like. Granted, they are RAID arrays hanging off different disk controllers, and the copies go through the CPU and still use next to nothing CPU-wise.
Re: (Score:2)
Actually, we're both kind of right. I was thinking more specifically of programmed I/O, which would absolutely outstrip even the fastest Intel-architecture CPU at 10gbps speeds, and I forgot that you wouldn't have to actually touch every single byte with the CPU... in real life, you'd have a DMA controller to buffer bits from the wire while the CPU slogged along and parsed the first few bytes, then the CPU would tell another DMA controller how to dispatch the bytes in the buffer that continued to accumulate
Re: (Score:2)
Actually, the biggest limitation for Ethernet right now isn't the wiring (all the new fan
Re: (Score:2)
The biggest issue right now is that if you want 100m, you have to increase the minimum packet size at the faster speeds - 64 bytes is barely able to meet it at GigE speeds, nevermind 10G or faster. The thing is, at the faster speeds, you can send out a minimum-sized packet and it'll be completely "on the wire" before the other end gets it
And what makes you think that is a problem? Once CSMA/CD is eliminated it really doesn't matter if packets are "completely on the wire" since they can't collide.
Re: (Score:2)
Maybe they just meant the standard should be 414.2 Gbps... and the next iteration will be 1Tbps. Sort of an A4-A3 transition, but one-dimensional... yeah, you're right. It's a stupid idea.
Re:Sigh (Score:4, Insightful)
OTOH, these standards, by the sheer fact that they reference 1Tb/s needs, are most certainly relevant to the backhaul providers and not to any normal business outside that group. Fractional or non-base-10 speeds have been common in those networks since well before the power-of-10 thing came about. Once the rest of the technology catches up and makes the power-of-10 thing feasible, the standard "commodity" equipment picks it up (primarily for marketing reasons, IMO). Powers of 10 are convenient for math reasons, but frequently mean absolutely nothing to the backhaul guys (the early adopters).
Those businesses who purchase the regular "commodity" power-of-10 equipment really should be set for a while with the previously commoditised 10Gb links. They are performant, relatively cheap, available, run across the nation, and hard to saturate with the equipment that plugs into either side. I've worked with 8x10Gb multiplexed cross-country low-latency fiber WAN links. It is a ludicrous amount of bandwidth unless you are routing other networks like a backhaul provider. I would struggle to name normal businesses which would be unable to use 10Gb links due to a lack of bandwidth (for the immediate future). The needs really are different between these markets.
As an aside, fiber may be sold commonly in 100m lengths, but that has nothing to do with the distance the light will work at properly for the speed it is rated. Some fiber / wavelength pairs are only good for a few feet. Others go km, but not with the same NIC, Fiber, Switches, or patch panels. 100m is a really shitty (too short) standard for datacenter use anyways. Frequently, we will get two cages in a datacenter at different times... and they end up farther than 100m apart making copper irrelevant for that use.
Change is incremental like ripples, but big changes come in waves. Back-haul wants the ripples, everyone else wants the wave. I say, let them have their ripples and pay for the development of the waves. It saves both groups of consumers money so long as there aren't TOO many ripples per wave.
- Toast
Re: (Score:2)
These standards, by sheer fact that they are referencing 1Tbs needs, are most certainly relevant to the backhaul providers and not any normal business outside of that group.
A lot of people would like to have just one [partitioned] network, and if you're [over?]using SAN you might have quite a lot of traffic. 1 Tb/sec divided up between a hundred or thousand active clients doesn't sound like quite so much data. On the other hand, we're still not talking about many links in most cases.
Isn't it about time we stopped calling it Ethernet (Score:5, Insightful)
Hardly any 10BASE-T systems bother with the CSMA/CD system that original Ethernet had; in fact it's more like a serial protocol than a broadcast "in the ether" one now. Why not just give it a new name?
Re: (Score:3)
Re: (Score:2)
Depends how you look at it. Coax Ethernet did to all intents and purposes use an RF signal and, though I'm not an electronics engineer, I can't see any reason why, interference aside, you couldn't simply have plugged it into an antenna and, with some suitable RX/TX amps, used it as wireless.
Re: (Score:2)
AIUI the "ether" in ethernet was an analogy coming from the fact that a shared coax cable has some aspects in common with a radio system.
However while coax ethernet shares some things in common with a radio system there are also big differences that mean running ethernet over radio would NOT be a simple matter of adding amplifiers and antennas.
1: Radio systems have FAR more loss than coax cable systems. In particular this means it is MUCH harder to detect collisions since when you are transmitting your own
Re: (Score:2)
RF and signals are not the same thing. Any signal transmitted by radio must have a carrier frequency to carry the data; without the carrier frequency you would just be blasting out noise across multiple bands or frequencies. You also need to understand the physics of radio and antenna design to realize that an antenna for a 10MHz signal needs to be pretty big. Plus you would be stomping all over the shortwave band, which would piss off a lot of people, including various government branches such as the FCC.
Re: (Score:2)
Re: (Score:3)
The name came from the original idea of it being a wireless protocol so has never made sense in any device ever sold with that name.
WiFi is ethernet, with wireless extensions. MACs, frames, etc etc etc. Before everyone knew what 802.11 was, it was even referred to regularly as wireless ethernet.
Re: (Score:2)
Hardly any 10BASE-T systems bother with the CSMA/CD system that original Ethernet had
I don't think I've ever seen a 10BASE-T system that didn't use CSMA/CD. Switches were too expensive back then to justify a fully switched network, so people used hubs and let the end nodes continue to do collision detection and retry. Also, afaict the autonegotiation system needed to automatically disable CSMA/CD didn't come in until 100BASE-T was introduced (it's certainly defined in the 100-megabit section of the spec).
OTOH at higher speeds CSMA/CD is basically gone. While I know 100BASE-T hubs exist i've n
Re: (Score:2)
It's time we stop calling it "10 base T". The speed got boosted from 10mbps to 100mbps back in 1995, so there's nothing "10" about it anymore, unless you're surrounded by very bad networking equipment.
Re: (Score:3)
I'll just point out that most 10Gb switches (datacenter switching) have totally dropped 10mb and 100mb support. Or, in the case where it is supported, you get some fun knock-on effects like buffering of all switch traffic (using the CPU / memory for switching activity) on the switch instead of using the hardware fabric for direct switching. This has repercussions for latency and switch performance.
I found this out the hard way by trying to plug a (cheap?) Cisco ASA (with 100Mb ports) into Arista and Cisco N
Re: (Score:2)
Why not just give it a new name?
It uses the same connectors, it has roughly the same design limitations, it's backwards compatible, and the operating systems treat them as if they're just the same.
What benefit would a new name have except to sow confusion? Maybe one out of a thousand IT guys knows what CSMA/CD is, much less that it's used on 10-megabit Ethernet.
Re: (Score:2)
"It uses the same connectors, "
I take it you've never seen coax ethernet then. Those connectors have nothing in common with the phone style ones used now.
If anyone can even comprehend 1 terabit (Score:2)
We have ads on TV for 200gbit internet here in Sweden, yet most people don't have anything above 4mbit. Sweden is pretty much one long forest of a country, and only in the few big cities we have can you enjoy really fast internet.
I live in a small city here, 10K+ citizens, and I'm the "lucky" one who lives near the city core itself, so I get around 12-14mbit on a good day. This is far more than my peers get; they are lucky to hit 2mbit, and they live only 2-3km from the city core.
But you know what? I do just fine on
Re: (Score:3)
We have ads on TV for 200gbit internet here in Sweden
No, you don't. You might have adverts for 200mbit internet, but not 200gbit.
Re: (Score:1)
No, you don't. You might have adverts for 200mbit internet, but not 200gbit.
Typo!
But you're right, thanks for noticing.
Re: (Score:3)
Terabit connections are what ISPs doing those big 200meg "to each customer" links want to use to link their switches and routers together, and what datacenters serving "full HD" content to millions of users want to use on their internal networks. That way, instead of having to run multiple switches over multiple cables, you could do with fewer switches/routers and less cabling for the same or better performance.
At home, most machines can't properly utilize gigabit Ethernet as of writing this due to internal bottlenecks
Re: (Score:2)
200mbit per customer means only 5 customers per gbit. A small area in, say, Stockholm could easily contain 500 customers; that's already 100Gbit.
Connect all of Stockholm or a similar place and you will need huge backbone connections already, and that is still at city level, not even national, let alone international.
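The aggregation arithmetic above, spelled out:

```python
# How quickly per-customer links add up into backbone demand.
per_customer_mbit = 200
customers_per_gbit = 1000 // per_customer_mbit  # 5 fully loaded customers per Gb/s

district = 500  # customers in one small Stockholm district (figure from above)
peak_gbit = district * per_customer_mbit / 1000
print(f"{district} customers x {per_customer_mbit} Mb/s = {peak_gbit:.0f} Gb/s")  # 100 Gb/s
```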
Re: (Score:2)
A Gbit connection is tiny in 2012, even 10Gb is cheap now.
Re: (Score:2)
For private use, there is only one place in the world AFAIK that even has gbit internet, and that's Korea.
In Europe, the highest you get is 200mbit (down) in Sweden/Finland/Norway, and only in major cities.
Re: (Score:2)
Wrong. A 1Gbit/s consumer connection is available in my Stockholm suburb for SEK 899/month (around $137/month IIRC).
Re: (Score:2)
Actually, fast broadband is more widespread in Sweden than you make it out to be. It depends a lot on municipalities or housing owners however. I know in Boden there are houses that have 100Mbit/s available real cheap, while the house next to them only has access to ADSL, because the individual or company owning it has not wired their house for FTTP/FTTH or similar.
Re: (Score:2)
Re: (Score:2)
I see a new CCTB certification now popping up at Cisco, that's what will drive the move to TB Ethernet.
If we ever get to these speeds with broadband (Score:3)
Verizon, Comcast and others will still prioritize traffic so that P2P will never be faster than 1Mbit/sec. because they just won't have the capacity to handle it.
Re: (Score:2)
Some of us don't need P2P; some of us have to transfer data point to point and use the same ISPs (backhaul) as the consumer. I understand your desire for P2P, since I too find it useful when downloading something more popular than a traditional server farm can handle. However, I'd like to point out that home consumers are on the bottom tier of service with the big ISPs. This is probably just as well, since non-home users pay significantly more.
If only we had more co-op ISPs where the members
Why sell one, when you can sell two? (Score:3, Interesting)
I manage several petabytes of storage on a large compute cluster, and we could use Terabit ethernet yesterday. Network fabric throughput is our limiting factor on pushing data out.
One senses that vendors went for the 400 Gb standard on the premise of "why sell one network upgrade when you can sell two at twice the price," not from actually catering to customers' needs.
It's similar to the current 40 Gb/100 Gb standards. No one that I know actually wants 40 Gb. I can bond 4 x 10 Gb and get that already. But vendors want that double upgrade fee from those companies that have to have every ephemeral competitive advantage.
Re:Why sell one, when you can sell two? (Score:4, Interesting)
Yep, it's definitely not a technical problem; after all, getting serial data to run at 312.5 Gbps over long distances of unshielded twisted-pair copper is simple. The edges of the data are only in the 1.2 THz range, after all.
Even on a PCB, 312.5 Gbps gets tricky and expensive; over long distances of fiber or copper it will be very difficult. Dropping to 400 Gbps brings it into the realm of the slightly possible, but still ridiculously expensive. Plus, at 400 Gbps you can bond just three links and get 1.2Tbps through (well, probably less after overhead).
Damn CS/CE's think they know RF!
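One plausible reading of where that 312.5 Gbps figure comes from (an assumption on my part: 1 Tb/s of payload split across the four pairs of a twisted-pair cable, with 8b/10b-style 25% line-coding overhead):

```python
# Hypothetical per-pair serial rate for terabit over twisted pair.
payload_gbps = 1000
line_rate = payload_gbps * 10 / 8  # 8b/10b-style coding: 1250 Gb/s on the wire
per_pair = line_rate / 4           # four pairs in the cable
print(f"{per_pair:.1f} Gb/s per pair")  # 312.5

# And the bonding arithmetic: three 400 Gb/s links give 1.2 Tb/s raw,
# less after framing/overhead, as noted above.
print(f"{3 * 400} Gb/s bonded")  # 1200
```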
Re: (Score:2)
I care about as much for Terabit over copper, as I do for Terabit over caloric, phlogiston, or aether. Short-haul Terabit over fiber would be quite sufficient for our use-case (network never leaves the NOC, which I suspect is probably the major use case, long-haul is a smaller though higher margin market) and is *much* easier to pull off.
And FWIW, physicist, not CS/CE.
Copper? How quaint. (Score:2)
Shouldn't we be pushing photons over glass by now? Fibre infrastructure has existed for decades; isn't it time it was scaled down to individual computers and appliances?
Re: (Score:3)
It has been. You can get it at the desktop, and people do. (I recall a former coworker who did a fiber-to-the-desktop deployment for an NSA office nearly a decade ago.) It's still really really pesky to deal with, even to this day. Plastic fiber does make things a lot easier, but it has its own downsides. Terminating copper for use at gigabit speeds is finicky enough that I learned not to try. I buy manufactured patch cables, and still have the odd one fail (albeit fewer than hand-terminated cables).
Dead, for now. (Score:2)
So ... not dead then.
Does anyone remember hearing that Des O'Malley once claimed the Maastricht treaty had "been dealt, at least temporarily, a fatal blow"?
And we just cracked the petabit barrier (Score:2)
Will no one think of the children (Score:2)
Re: (Score:2)
You think the primary application for Ethernet is ISP use? This is for data centers and server rooms foremost.