Hardware

Cheap Gigabit Ether

Avrice writes "National Semiconductor's gigabit Ethernet is backwards compatible with existing systems and smart enough to fix your wiring screw-ups, for only $95. Maybe bandwidth (at least on the network) won't be such a problem after all."
This discussion has been archived. No new comments can be posted.

  • by Anonymous Coward
    Sorry. Close though.

    (Plus, technically it's not ring color, then tip.)

    So one end looking from bottom of rj-45:

    w/or, or, w/gr, bl, w/bl, gr, w/br, br

    other end:

    w/gr, gr, w/or, bl, w/bl, or, w/br, br

    You swap the orange and green. Read the specs.

    Road

  • by Anonymous Coward

    Ethernet does NOT fall apart at 30%! That's an old wives' tale. Even on shared media, you can get ~97% bandwidth.

    And on a switch, in full duplex mode, you get essentially the whole pipe.

    Check out: http://www.research.digital.com/wrl/publications/abstracts/88.4.html [digital.com]

  • Yes indeed. Interrupt loading is a very serious problem with gigabit ethernet, and even with other high-speed devices. And jumbo packets do help quite a bit. There is also a mechanism called interrupt mitigation which uses the same wire packet size but stores N packets at a time before triggering an interrupt. Either method, or both, will significantly reduce the interrupt load. Unfortunately, this increases latency, a classic tradeoff.

    --TM
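
    A rough back-of-the-envelope sketch of that tradeoff (Python; the frame sizes and the 32-frames-per-interrupt coalescing count are illustrative assumptions, not figures from any particular NIC):

        # Rough model of interrupt load vs. added latency on a gigabit link.
        LINE_RATE = 1e9                    # bits per second
        OVERHEAD = 38                      # preamble + header + FCS + inter-frame gap, bytes

        def frames_per_second(payload):
            wire_bytes = payload + OVERHEAD
            return LINE_RATE / (wire_bytes * 8)

        for payload in (1500, 9000):       # standard vs. jumbo frames
            fps = frames_per_second(payload)
            for coalesce in (1, 32):       # one interrupt per frame vs. 32 frames per interrupt
                irq_rate = fps / coalesce
                # Worst-case extra latency: time to accumulate 'coalesce' frames.
                added_us = coalesce * (payload + OVERHEAD) * 8 / LINE_RATE * 1e6
                print(f"{payload}B frames, {coalesce}/IRQ: "
                      f"{irq_rate:,.0f} interrupts/s, +{added_us:.0f} us latency")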

  • I find that 100 Mbit is more than enough most of the time; most systems aren't fast enough to fill a gigabit pipe anyway, and if you have one that is, cost probably isn't your primary concern. (Hint: 125 MB/s is faster than a standard PCI bus can deliver, is about 5 times faster than most ultra/160 scsi disks can deliver data, and thus requires a double-speed fibre-channel controller connected to a striped RAID - or at least two ultra/160 scsi controllers, and at least a 64-bit PCI, sbus, or equivalent on the host side. Not cheap.)

    No, gigabit ethernet for LANs doesn't help much. The supporting technology (hubs, switches, etc) is still prohibitively expensive, as are systems that can make effective use of it. The biggest problem today isn't LAN bandwidth; it's Internet bandwidth and the prohibitive costs associated with it. It doesn't help if I can set up a gigabit LAN for

    -- TM, watching crucial bugfixes trickle in over $CHEAP_UNI's dogslow link
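
    For a sense of scale, here is the sort of budget being described, in Python (the nominal figures, and the ~25 MB/s sustained disk rate, are assumptions for illustration, not benchmarks):

        # Nominal bandwidth budget: can the rest of the box keep up with the NIC?
        GIGE_MBS = 1000 / 8          # gigabit ethernet payload rate, ~125 MB/s
        PCI_MBS  = 33e6 * 4 / 1e6    # 32-bit, 33MHz PCI, ~132 MB/s theoretical
        DISK_MBS = 25                # assumed sustained rate of one fast SCSI disk

        print(f"GigE wire rate  : {GIGE_MBS:.0f} MB/s")
        print(f"Plain PCI bus   : {PCI_MBS:.0f} MB/s (shared with everything else)")
        print(f"Disks to feed it: ~{GIGE_MBS / DISK_MBS:.0f} striped drives")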

  • It's pretty common that Cat 5 wiring above drop tile and under floors is not completely compliant. The standard specifies more than just the quality of the copper; it also covers the bends and contacts that have to be used.

    Let's see how many installations fall over when they're pushed.

    -Peter
  • Well, it's not much faster than PCI, which has a theoretical 132MB/s, and I know people who have tested it at over 100MB/s, so close enough.

    People doing cluster computing need more bandwidth, so they will welcome cheaper gigabit.

    From your server to the switch you need more bandwidth. With Linux's single-threaded IP stack, being able to have one NIC rather than several is good.

    And lastly, I want to be able to stream uncompressed video around my lan at home.

  • Well, you are kind of missing the point. Right now, the stack is one at a time, and that is why the Mindcraft benchmark had 4 network cards, so that Linux would bog on it.

    I think the kernel people are a little too used to doing things the hard way (after all, they don't have any choice).

    Besides, if the code for the threads is shared (and Linux is good about that), then you will not have misses for code, just data, and you will have a fair amount of that anyways (though more with threading).

  • I want it, I want it, I want it!

    Even if I don't need that much bandwidth, maybe prices will drop because of this. I'd love it if we had a link like that to my dorm, but it's not going to happen, I'm sure. :|

    Oh, and... you really could build a beowulf cluster with this. Faster links help out much more than faster processors for many classes of problems. Sorry, but it's true! :)
    ---
    pb Reply or e-mail; don't vaguely moderate [152.7.41.11].
  • Cheap gig, great. More support for an old concept in networking. I just wish there was more support for a certain networking technology [atmforum.com] which would in turn help another related project [lrcwww.epfl.ch]. Instead we see a lot of progress in gig because it's easy. Throwing bandwidth and a bunch of buffer space in a switch is a crummy way to do QoS.
    Aside from the ATM vs. Gig rant, this really doesn't mean a whole lot. Yeah, LANs are fast; this isn't news to anyone. I don't see anyone complaining about the network performance on their desktop at 100Mb/s switched. So what's the deal? The big step forward in networking is when we can see something that gets close to the performance of at least 10Mb/s to residences and businesses that doesn't cost a small fortune.
  • by BVD ( 1495 )
    Hey, I understand what you are saying about crossover cables not being as resistant to RF and crosstalk, but I'm not sure you understand how gigabit over cat 5 works.

    First off, the clock rate on 1000BaseT is not any faster than on 100BaseTX. They both run at 125MHz. 1000BaseT gets its speed boost from running all eight wires ( not just 1, 2, 3, and 6 ) in full duplex. The other speed boost comes from borrowing the compression tech from the V.90 modem spec.
    Therefore, since the clock rate is not increased, if the patch cable works for 100BaseTX, it should work for Gigabit over Cat 5.

    Now, that being said, I don't think you even need a x-over cable for 1000BaseT. Since all the pairs are full duplex ( in 10BaseT & 100BaseTX there is one transmit pair and one Rx pair ) crossing over pairs would seem useless.
  • by BVD ( 1495 )
    Ok, you sound like you actually know a little about the subject. Do you mind answering a few questions for me?

    Question 1 -- Can both interfaces send data across one pair of wires at the same time? If they can, then that sure sounds like full duplex to me. Do you have a link to something on 'Dual Duplex'?

    Question 2 -- Are you sure that 'PAM5x5x5x5' encoding was not also used in the V.90 stuff? I have read about the 1000BaseT encoding from several different sources, and they all mentioned it as being the same as used in the V.90 spec. Maybe the modem people got their ideas from 100BaseT2. Again, a link would be helpful.

    Thanks,
    Bill
  • You have a good point there. The PCI bus (a la PC architecture) is capable of 132 MB/sec max.

    32bit @ 33Mhz == 132MB/sec

    Which would mean that you would never be able to hit 100% utilization in a standard PC.

    On the other hand on a PCI-X bus (64bit @ 66Mhz) it would be quite nice.
  • Umm, no. You need another controlling chip on there besides, not to mention all that other icky NIC circuitry. $95 for the chip will probably translate into $400 or $500 for a NIC. Hubs and switches will probably be Way Too Expensive.
  • Ahh,

    usually (I haven't looked at a license agreement in ages) you're ok as long as you've got enough licenses to cover the number of simultaneous users. You may or may not have to implement a license management system (i.e. users are actually locked out from using the app if all licenses are used)

  • This is still going to be pretty pricey. The 95 dollar price tag is for a chip, not for a PCI board. Add in engineering and manufacturing costs and the price will put it beyond casual "Ooh, I've got a gigabit network in my house" costs.

    There are even cheaper pure 1000baseT chips out there. The only really notable thing about the National Semiconductor chip is that it can talk down to 10/100BaseT.

  • You'd have to purchase enough licenses to cover your users to make it legit. One copy of MS Office or Windows wouldn't cut it.

    I'm not sure what the business or technical justification would be for serving VMware in this manner. From a zealotry standpoint, sure, you get to say "We're a Linux shop", but there would be no real technical merit to it or probably even a sound business case.

    You still need N Windows licenses, and M licenses for your applications plus a license for VMware (maybe N of these too) and a beefy enough server to handle it.
  • it's nice that you'll at least have the potential to fill PCI, though. hell, 100Mbits/s on your average box won't saturate an ISA slot (just bought an ISA 100baseTX for an old p75 the other day) since it only works out to 12.5MBytes/s full potential. the only real purpose of putting one on PCI is it gets the data off the bus more quickly than ISA, eating fewer cycles.
    now, if they'd just make an AGP network card for my file server... bwahahahahahaha...

    -l
  • multithreading is a double-edged sword. it's easier to understand conceptually, but there's a reason why torvalds is against it in his kernel: cache misses. when you have several threads competing for the same cache resource, you are going to have cache misses MUCH more often because the kernel can't arrange things nearly as well as in the case of a single thread. even in the case of SMP, most low- to mid-range SMP boxen have a singular cache to share, so even though the threads could run nicely on separate CPUs, the cycles lost in cache misses would far outweigh the benefit of having separate threads on separate CPUs.

    -l
    kernel traffic reader
    http://kt.linuxcare.com/
  • then ask Linus, dude. why do you think i provided the kernel traffic link?

    -l
  • I get pretty tired of the tripe which is spoken about computer hardware on this site. The PCI bus does NOT max out at 133 MB/s. The current fastest PCI implementation is 533 MB/s with 64-bit/66-MHz PCI, which is not all that exotic.

    It is true that the 33Mhz 32bit PCI bus maxes out at ~132Mbytes/sec (33 mil 4 byte transfers per second, or 132 mil bytes transferred per second). It is also true that many devices don't handle one transfer per cycle, esp at the beginning of a burst. It is also true that burst sizes are quite limited. Lastly it is also true that almost all PC machines with PCI buses have the 33Mhz 32bit PCI busses (plus the AGP "bus" which is pretty similar to the PCI bus).

    There are 64bit PCI buses. Lots of current generation SPARCs have them. Some of the older Alphas (and the newer ones) have them. There is a 66Mhz PCI bus. In fact most PCI buses that aren't 33Mhz 32bit PCI busses are both 66Mhz and 64bits wide. It's not hard to buy them. I think some of the high-end very expensive "server class" PC machines even have them. But it ain't common. (And yes, a 66Mhz 64bit PCI bus should move something like 533Mbytes/sec -- 66 times eight is 528, after all.)

    Still, if you pick a PCI bus at random from the world, odds are better than 1000 to one that it is a 33Mhz 32bit PCI bus.

    It is a bit like saying SCSI maxes out at 10 MB/s. It is not generally true.

    Indeed it is. But it is also unlike it. An LVD-SCSI disk seems to cost pretty much the same as a non-LVD SCSI disk. An LVD-SCSI controller (~40Mbytes/sec...or is it 80? I think 40 "narrow", and 80 "wide") costs maybe four times as much as a plain old "Fast SCSI" controller (~10Mbytes/sec). A "plain old" PCI motherboard costs almost nothing. Under $100 easy. I doubt you could find a motherboard that does 64bit 66Mhz PCI for $400, or even $1000.

    In short I agree that there is a faster PCI. I agree that that makes it clear what the migration path is. I disagree that it is very relevant in a discussion about a $90 consumer PCI card, at least not this year.

  • Yep, my Novell server kicks the shit out of just about anything; Novell rocks with Gigabit.
    Still, I think it's the bus and hard drives that are keeping my server from its full gigabit potential.
    My old Gigabit switch was relatively slow, an Intel 510T; it also had lots of problems.
    One day at about 10 AM it just died. I was freaked.
    Needed a switch bad, so I got the only one I could find RIGHT NOW: a Summit 48 from Extreme Networks.
    Now that's a switch I can't push around.
    It also cost 4 times what the 510T cost. :>
  • True and not true,
    I have gigabit, using fiber, to a really great switch. Performance to the 100 Mbit workstations is OK; I know the poor server was wheezing when it was only 100Mbit, and now the server rarely gets over 10% cpu utilization.
    The disks are what is holding it back now (I think). Now one of my Linux boxes has fiber too, and when I transfer files between it and the server, it groans a little. I always thought the Yellowfin card was a little suspect, or maybe the NCP tools; I am not sure, and of course the workstations don't bother it either.

    The thing is, Gigabit is nice, Gigabit is fast, but it is most certainly not a cure-all; faster hard drives, lots and lots of a faster kind of RAM, and a MUCH faster bus would be a much better cure.

    Sorry about the tangents.
  • ...someone sends a stack of them to Yahoo! to deal with their bandwidth issues? Or maybe Ebay? Or while we are at it, Slashdot could use them...
  • How long until we can get Gigabit INTERnet? :) Wouldn't that be cool? :)

    Anybody remember that weird proposal about renaming the prefixes for binary orders of magnitude because they are not based on powers of ten as the original Kilo/Mega/Giga/Tera prefixes "were intended"? They wanted to use stupid names like Kebi/Mebi/Gebi/Tebi instead.

    Imagine: It could have been Gebibit Ethernet. (Would that have been Geh-bee or Gee-bee?)

  • I mean gigabit speeds TO YOUR HOUSE.

    Think BIGGER!

  • Ah, yes, good point. I obviously didn't think it all the way through! :)
  • Hey! Reading fortune files late at night *is* research, dammit. I resent your slur; just because he's trawled the digital bible don't assume he's representative of the type. I'd refer you to the book of CRM7 but I can't be bothered to grep it.
  • I'd have to agree with another poster - the signalling on ethernet is not like SCSI where you can have multiple devices communicating at different speeds - you have to run at the lowest speed or you get collisions, data corruption, and other nastiness. Plus the card may not even work because it can't get a link "beat" from the hub. Very bad - you NEED collision detection.
  • I run a 100Mbps LAN at my house, and it's a star topology. I can attest that the most data I ever pushed was ~8Mbps over a long period of time ( say a few hours ). My PCI TV tuner pushes said data over my 66Mhz PCI bus through the netgear 10/100 nic to another netgear nic of the same model at full duplex.

    I'm not the guy that figures up numbers for what it *should* be able to do - I test. For moving streaming video I never get more than said ~8Mbps -- would gigabit ethernet help someone like me? I mean my bus must be too slow to push all that data over a 100Mbps connection already. Also, would a ppro ( socket 8 ) with ~8 PCI slots make a good gigabit "hub" for a star topology, or would it crush the machine's bus bandwidth?

  • I think I'll pick up a few cards, and be the first to have a gigabit router at work. (Running linux baby!)

    Oh wait, no drivers. (; Nah, hopefully they ship with the card..
  • Ok, is NSI going to sell a card too? Or just this chip... Now post on /. when we can get a Gigabit ether card!
  • by PD ( 9577 )
    So, would the $95 chipset translate to initial real world prices of maybe $150 a card?

    And, just how do these things work with existing systems? Say I've got a network with 2 of these cards, plus a humble Linksys 10Mb NE 2000 clone card. Would the gigabit cards talk to each other really fast, but slow down only when talking to the NE2000 clone? Or does the entire network run at 10Mb when the NE2000 is on it?
  • http://www.broadcom.com/docs/PR990511.html
  • ...that it's just the chip, right? $95 per 1000 of 'em. After design costs, board production, company overhead and profit, etc. etc., you'll probably be paying $400 a pop for a card.
  • Uhhhm, that's just the reader's submission.. note the quotation marks, right? See them.. that means it's a quote.. gee, go figure

  • On 100 Mbit ethernet, a crossover cable will usually only give you 10Mbit.

    Turn off autodetect in the driver settings, and explicitly set it to 100Mbit, full duplex.
    --

  • Somewhere around, I have a three port Farallon dongle-hub that does this. (Two ports 10BT, one port Mac AAUI.)

    Since the Farallon unit was pretty cheap when it was in production, I always thought it was strange that more hubs don't autodetect a crossover cable. Nice to know this might become a standard feature.
    --
  • I remember that a lot of problems cropped up when trying to do 100Mbit Ethernet on existing wiring, which only barely could manage 10Mbps. What will happen to most of the wiring already laid out. Will it have to be thrown out? I remember hearing about a Cat-6 cable. Will we have to upgrade our networks?

    Also, anybody got information on how collision handling is done on this new architecture? I would suppose that, being a gigabit ethernet, it would surely see much more usage than a 100Mbps one, and being also much higher speed, there should be more collisions.
  • If ever, but when such cards become available at near-'100mbit' prices, then who's to say that a low-cost setup won't be made with a PC with four of those cards in it, acting as a router, switch, or hub.

    Sure, the theoretical max of 133MBps of the PCI bus is low for a switch backplane for 4 1gig cards, and the latency will not be a winner, but it will beat a 100mbit switch in quite a number of circumstances.

  • I need some of these... anyone seen these for sale online?
  • Quick thing to clear up, they are selling the chip for $95. In bulk. That means that if a motherboard manufacturer decides to integrate it into a motherboard, it will add $95 to the cost of the board. And that actual gigabit cards will cost much more. (Heck, the Intel 82559 10/100 chip costs only about $30 by itself, but a NIC that uses it costs around $150!)

  • The article specifically states that this will work with current Cat5 wiring. So, no, we won't have to upgrade the physical wires in the networks. As others have pointed out, of course the switches/hubs will have to be upgraded. As to collision handling, I think the idea is to use switches, which solves that completely, as far as I know. I thought only hubs had problems with collisions, but I could be wrong...


    Supreme Lord High Commander of the Interstellar Task Force for the Eradication of Stupidity
  • But what if I want to render on the server? That would sure prevent most sorts of cheating!
  • They put 250 MBPS on each of the four pairs in the CAT-5 cable.
  • by Xenu ( 21845 )
    I've been told that you should multiply the parts cost by a factor of 3 to 5 to get the retail price of the finished product. A gigabit Ethernet card is not going to be cheap.
  • Why would your network only be slow in January? ;->
  • Heck, the Intel 82559 10/100 chip costs only about $30 by itself, but a NIC that uses it costs around $150!

    Pricewatch [pricewatch.com] claims $40-$50, not $150.

  • They put 250 MBPS on each of the four pairs in the CAT-5 cable.

    Err, 500 mbits/sec on each of the two send pairs (the other two pairs are for receive)

  • If you want a nice whitepaper/technical document on gigabit over copper/802.3ab you could do a lot worse than to check this [3com.com] out.

  • by nhw ( 30623 )

    Sure, you can get a fairly inexpensive gigabit ethernet card, but how much is the hub gonna cost. You can only connect 2 computers through a Null Cable (cross over).

    You're not going to get hubs for this stuff, you're only going to get switches.

    As far as I know, no vendor currently has a 1000BaseT product out there, so it's difficult to say exactly what the cost will be. But, for comparison's sake, the SuperStack II 9000SX, which is an 8-port 1000BaseSX (short haul multimode fibre gigabit) switch, retails for about £10,000, which is approximately $16,000.

    Which means that this stuff isn't going to be 'string it around the bedroom for the Quake deathmatch' fodder for a few years to come.

  • Well, no, actually...

    Gigabit ethernet actually runs at 10^9 bits per second, so it would still be gigabit ethernet.
  • by nhw ( 30623 )

    On 100 Mbit ethernet, a crossover cable will usually only give you 10Mbit. I assume the same will be true with gigabit. You probably won't get full speed out of a crossover cable.

    I'm not sure what you're talking about; there's no good reason why a crossover cable wouldn't give a full speed connection between two 100Mbit/s NICs.

    I've managed networks with plenty of 100BaseFX inter-switch links which are not much more than glorified cross-over cables, and had no problems at all.

    All that a hub does (well, this isn't strictly true on newer hubs, but...) is repeat the signal; if anything, you should get faster speeds out of crossover cables, as you can run them full-duplex.

    In fact, using a crossover cable should be even faster than using a switch in pure performance terms, as there's no switching delay.

    What's the technical basis of your assertion?

  • Well, gamers don't really need bandwidth so much as they need low latency. Generally 10Mb is more than sufficient; the only real advantage is that 100Mb cards potentially can deliver the (small) packets a bit faster.
    It is true that game companies may start using much more bandwidth, when it's available, but I don't see that happening much anytime soon because they also want Internet gaming to be possible, and over the net it's not really common to get 100Mb.
    Businesses can start playing around with stuff like videoconferencing I guess.
  • The Hub uses an internal switch to connect the two backplanes. Try hooking up a system with a packet capture utility running on it. If the system has a 10Mb NIC you will only see the 10Mb traffic; nothing coming from the 100Mb systems will show up. The same in reverse if the packet capture system has a 100Mb NIC.
  • The term 10/100 hub is misleading. Products like this are really 2 hubs and a 2 port switch. All 10Mb devices run on one hub and 100Mb ones on the other hub, and the switch bridges between them. Thus collisions on any 10Mb port are seen by other 10Mb devices and collisions between 100Mb devices are seen by other 100Mb devices. The rate conversion from 100->10 is a royal pain, because if you want to do it well you need a lot of RAM built into the switch. If you buy some cheap equipment you will notice that 100/10 performance can be terrible, because the switch causes 100Mb collisions to do primitive flow control, affecting all connected 100Mb devices. For 1000Mb, hubs are obsolete I'd think; pure switching architectures are the only way to go.
  • This is misleading, this chip is only a PHY (physical interface) and needs another chip called a MAC (Media access controller) to function as a NIC. It would also need transformers for electrical isolation and the RJ45 and PCB, probably a whole bunch more little stuff as well. Just for comparison a 100/10Mb PHY is about $5 in volume right now (probably less) and most solutions for 100/10 are now integrated into a single chip that sells for less than $10. We are a LONG way from gigabit at home!
  • ya. that's a micro$hit solution. if it doesn't work, throw hardware at it until it does. this isn't a bad thing however. it's given us cheap and powerful hardware.
  • hehehe.. I guess he forgot to use the 'Preview' feature :)
  • Hello,

    Here is my idea. As someone here pointed out, the PCI bus maxes out somewhere just around 100MB/second of bandwidth (I've read somewhere about an extension, PCI-X or something; anyone know anything about this?)

    My idea is: why not use the AGP bus for something like this? I know that AGP is a lot faster than PCI. I guess the only problem is that most PCs today with an AGP slot use it for their graphics card (perhaps this is a reason to introduce motherboards with multiple AGP slots :)

    Maybe this is totally unnecessary and the PCI bus is fast enough to provide the 128MB ('B' as in byte) a second speeds of a Gigabit Ethernet.

    Maybe no one would actually need to run one of these networks at that extreme high speed. But if these types of speed increases continue, I believe it might be plausible.

    Could anyone here with more knowledge of the hardware aspect of things comment?
  • Hello again,

    I really wouldn't say I have a complete understanding of how network cards work, but would the fact that AGP devices can directly write to memory be of help in a network of this speed?

    I'm mainly thinking here about the potentially needed on-board buffers. Or is this not a consideration, and the data is just off loaded over the bus in time for the next packets?

    Just another thought...
  • Well, first, the $95 is only for the chip, so it will have to be assembled into a NIC, which will raise the price a bit (still way less expensive than what is currently available).

    About hubs and switches. If I understand the press release right, their chip is pretty much a big DSP, and can/will be used with NICs, hubs, and switches. Of course the main problem for GigSwitches is the backplane speed; just try and imagine the internal speed of a 24-port GigSwitch (24x1000Mb = Arghh!).

    But I think the main problem we will see around is implementation in current networks, because I remember reading a while back about the lack of "true" specs for Cat5. Mostly, that you can find Cat5 in different copper diameters. And having a lot of different "brands" of Cat5 on the network may make it quite a pain to diagnose problems.
    Still, it's pretty impressive that they were able to nudge 40m more into the specs.

    I'm really looking forward to getting GigNICs and good street prices.

    Murphy(c) - Nope My name isn't openSource :)
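
    The backplane arithmetic, spelled out (assuming a non-blocking, full-duplex switch):

        # Aggregate capacity a non-blocking 24-port gigabit switch must sustain.
        ports, rate_gbps = 24, 1.0
        backplane = ports * rate_gbps * 2      # x2: full duplex, both directions at once
        print(f"{backplane:.0f} Gb/s of backplane throughput")   # 48 Gb/s
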
  • Seriously, I've used FoxPro for Windows running with Novell and saturated a 100Mb backbone with only 40 users, back in 1995.

    Yes, we need more front-end piping, like full-scale DSL (1440), but an overtaxed server is no fun for anyone, especially with old apps that don't execute server-side but push all the bits across the Network.

    This is good news, $99 for Gigabit Ethernet - we need more of this!

  • All this is great until you realize that you are going to have to do something with that data. It doesn't do any good to pull off 1 GBps over the AGP port only to discover that your IDE drive can't write data anywhere near that fast, or that you can't even push the data down to the disk controller that fast. Heck, even doing some sort of simple calculation with that much data is going to overrun your processor in no time.
  • Anybody making cables without at least some sort of tester deserves to have to spend a weekend searching for a bad cable. :)
  • (1 gigabit ~= 125 MB/s, assuming no headers/full network usage, etc.)

    No, gigabit (and 100b, 10b...) ethernet refers to the raw number of bits you can spew over the wire. It includes all preamble and postamble.

  • Crash course in network wiring:

    Time to update your crash course. Gigabit Ethernet uses all 8 wires and a form of encoding/compression to achieve its speed.

    you are, however, correct in terms of 10bT and 100bT networks. :-)
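
    A quick sanity check of those numbers, assuming the commonly cited 1000BaseT parameters (125 Mbaud on each pair, PAM-5 carrying roughly 2 data bits per symbol, all four pairs in use):

        # 1000BaseT vs 100BaseTX: same symbol rate, more pairs, denser coding.
        SYMBOL_RATE = 125e6                        # symbols per second on each pair

        fast_ethernet = SYMBOL_RATE * 0.8 * 1      # 4B/5B coding, one transmit pair -> 100 Mb/s
        gigabit       = SYMBOL_RATE * 2.0 * 4      # PAM-5, ~2 data bits/symbol, four pairs -> 1000 Mb/s

        print(f"100BaseTX: {fast_ethernet / 1e6:.0f} Mb/s")
        print(f"1000BaseT: {gigabit / 1e6:.0f} Mb/s")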

  • I work at a major manufacturer of microprocessors. When we ask, "Is this cable bad?" we DON'T hook it up to a continuity tester, we hook it up to a test rig that measures impedance at the operating frequency in question (1 to 2.5 gigabits). At these frequencies, it's freakin' voodoo trying to keep the signal from radiating off the wire like an antenna.

    Gigabit ethernet is a trick, you're right, but in 99% of cases it is not a connector problem; it's usually that the cable itself went bad for one of several reasons. If you're using good grade cable and your crimper and ends are of good quality, your cable will be fine.

    You can't stop the cable from radiating by crimping any better (unless you really blow at making cables). The cable will radiate more than spec allows if you've got sharp bends, mismatched pairs or poor twist. Your fancy-schmancy cable tester doesn't test for one of the biggest causes of cable failure: stress. Binding cables with tie-wraps too tightly or bending them too sharply often gives you the problems at gigabit speeds that you refer to. Note that fiber has the same "bend radius" problems that copper has, but for different reasons.

    From a technical standpoint, there is only so much to go wrong with a cable connection. As long as you're crimping right you'll be mostly safe. Far greater problems come from the way the wire is treated when installed, as mentioned above.

    As far as searching all weekend for a bad cable: how the hell are you doing your installs? Computer A can talk but computer B is flaky. Well it sure ain't the backbone connections, check the connection from the switch to the computer in question. Use a network analyzer. It's not difficult..

    Gigabit ethernet will give the average-joe cable maker headaches beyond his wildest dreams if he doesn't learn why it's different.

  • 1024/8 == 128MB/s

    My ATA hard drive bursts up to 33MB/s (13MB/s sustained).

    Perhaps it's most useful when used in conjunction with a busy file server.
  • I say the entire market should drop copper based products and go 100% fibre optic. Start massively mass producing it to jack prices down and make it cheap enough to have in the home.
    And while they're at it, replace all phone and cable networks (to our homes) with fibre too. It'll be necessary if we're ever to have the massive multi-media "global village" corporations like so much to advertise about. Some of us are still on dial-up, dammit.
    Phone and Cable corps. could get together (it'll be a cold day in hell) and split the cost for this, then compete for our business over shared lines.

    Do I know what I'm talking about? No. But it sure as hell would be nice.
  • would be interesting to see what it is doing with the extra pairs of cables.
    My understanding of 1000BASE-T is that it uses all four pairs, transferring 250 Mbps on each. This matches the statement in the NatSemi press release that a quad transformer is needed. This means that unlike 100BASE-TX, you won't be able to run full-duplex.
    and switches are supposed to just pass it through.
    Switches don't pass the extra pairs through. They're left open. (How would the switch know which port to pass them through to?)
  • I get pretty tired of the tripe which is spoken about computer hardware on this site. The PCI bus does NOT max out at 133 MB/s. The current fastest PCI implementation is 533 MB/s with 64-bit/66-MHz PCI, which is not all that exotic.

    It is a bit like saying SCSI maxes out at 10 MB/s. It is not generally true.

    -jwb

  • Fixing the wiring screwups will only make the screwups bigger. Didn't C++ teach us this? First you had C, where you could shoot yourself in the foot... then you had C++ with its encapsulation, which made shooting yourself in the foot more difficult.. but when you do, you blow your whole foot off.

    Here's a thought: how about informing the user that their network admin #$@!'d up the wiring, and refusing to run, along with a detailed description of WHY it doesn't run? We should not be letting things like network wiring be done improperly ... it leads to sloppiness and ignorance.

  • Err, 500 mbits/sec on each of the two send pairs (the other two pairs are for receive)

    According to the 3COM white paper [3com.com], all four pairs are used. Hybrids are used, like in a telephone, to allow simultaneous transmit and receive on each pair.

  • This is something I've been thinking about for a while. How fast would a network have to be before it becomes faster for one system to swap to another's physical memory than to a local disk?

    Heh heh...my roommate better start paying close attention to his memory usage...or it might start disappearing :)
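
    A crude comparison, with assumed numbers (roughly 10 ms per random disk access, and a guessed 0.3 ms of software overhead on the network path):

        # Is paging to a neighbour's RAM over gigabit faster than paging to local disk?
        PAGE = 4096                                 # bytes per page

        disk_ms = 10.0 + PAGE / 20e6 * 1e3          # seek + rotation, then ~20 MB/s transfer (assumed)
        net_ms  = 0.3 + PAGE * 8 / 1e9 * 1e3        # assumed stack/interrupt overhead + wire time

        print(f"local disk : {disk_ms:.2f} ms per page")
        print(f"gigabit RAM: {net_ms:.2f} ms per page")
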
  • The best part is, with these chips you won't even need the null cable. Just use a regular patch cable, and the chip will fix the "wiring mistake". Kind of cool. Now, if only the auto-negotiate doesn't suck...
  • As a quick guess, I expect to see them come out at about 3-5X the chip price till one gets over 10,000+/month volume production. Then the prices will drop to about 2X. Companies need their profit.

  • by QuMa ( 19440 )
    Actually, apart from the fact that token ring works rather differently (it relies on a virtual token being passed from host to host), this could be arranged in a star shape too, provided you have enough pci/agp/whatever slots.
  • Yeah.. I remember....
    except, Gigabit ethernet = 1000000000 bits/second. and is not based on a power of 2.. so why change it?
    Powers of 2 only apply, generally, to memory.
  • If your existing wiring could barely handle 10Mbps, it wasn't cabled to Category 5 standards. In short, it must have been crap.

    A lot of problems cropped up when people started trying to do 100Mbps with shitty cat5 installs (ie: using category 5 cable, but not installing properly) or older category 4 (or 3, I forget) cable... thinking 'it should work, the plug is the same'.

    Also, unless I am mistaken, the collision detection is *still* the same as 10/100 networks. The mechanism doesn't change, though the timings do, and the distance requirements change, I bet, probably require a shorter segment again.
    The inter-frame gap will be very large compared to the frame size.. hence the maximum speed between any 2 hosts (say, even through a crossover) will probably be a good chunk less than a gigabit, say, 750Mbps....
    And the backoff mechanism will still be binary exponential backoff.... so the behaviors are the same.. just different timings.

  • This is not surprising. I don't have all the timings in front of me, but the general gist is this.

    First, though it looks like a star to you, it's a bus network. I guess if there were a switch instead of a hub, you might get away with calling it a star.

    As for your 8Mbps.. that's actually about as high as it's theoretically possible to go. Maybe a wee bit faster, and here's why.

    the 100 in 100base (and the 10 in 10base) both describe the signalling rate (or bit rate) of the BASEband medium (the ether in ethernet...). This is different than describing the rate at which 2 hosts can transmit.
    What this means is that the ethernet, as a single baseband channel, has bits clocked onto it at precisely 100MHz (or 10), one bit per cycle.
    Now, as part of that standard, there is a mandatory delay any transceiver must obey after putting a frame on the channel. In 10Mbps, this is 96 bit times, or 9.6 microseconds. I may be a bit off here, but in 100base, this number is *still* around 9.6 microseconds, as it is a number based in the time it takes for packets to traverse the network from one end to the other; in the case of 100base, 9.6 microseconds = 960 bit-times.
    Each ethernet frame consists of an 8 byte 'preamble' (used to synchronize the receiver), the frame header (6 byte source, 6 byte dest) the type/length field (2 bytes) and in the end, a frame check sequence, like a checksum, of 4 bytes. That makes 26 bytes of information, not related to the ethernet data payload, plus a 120 byte inter-frame gap (remember, each bit takes the exact same amount of time on the ethernet, so we can use bits/bytes to reference time).
    That makes a total of 146 bytes of non-data. If we add to that, say, the IP header, and a UDP header (assuming we are streaming video, with no handshaking, like TCP, as that would mean the response packets would *also* tie up the channel further), you can see that, given the maximum ethernet data payload is 1500 bytes, we are at over 10% of that as overhead.
    This would put the theoretical maximum at around ( I calced it once..) 89%.
    Of course, if there is *any* other activity *at all* on your ethernet, this number goes down even further. If you are doing FTP or something, it goes way down....

    Now.. I did all this from memory, I'm not 100% sure about the inter-frame gap on 100base, though I'm sure in 10base. This, of course, doesn't take into account full-duplex ethernet either...
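
    The arithmetic written out, parameterized on the inter-frame gap so you can plug in either assumption (the 802.3 figure is 96 bit times, i.e. 12 bytes, at every speed; the 120-byte figure above is shown for comparison):

        # Best-case UDP goodput on Ethernet as a fraction of the raw bit rate.
        PREAMBLE, MAC_HDR, FCS = 8, 14, 4     # bytes
        IP_UDP = 20 + 8                       # IP + UDP headers, bytes
        PAYLOAD = 1500                        # max Ethernet data field

        def efficiency(ifg_bytes):
            on_wire = PREAMBLE + MAC_HDR + PAYLOAD + FCS + ifg_bytes
            goodput = PAYLOAD - IP_UDP
            return goodput / on_wire

        print(f"12-byte (96 bit-time) gap: {efficiency(12):.1%}")    # ~95.7%
        print(f"120-byte gap             : {efficiency(120):.1%}")   # ~89%, the figure above
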
  • They quote the transceivers (CHIPS) at $95/ea in quantities of 1000...

    This is not at all the same as saying the *retail cards* will be anywhere near that price.
    As such a new thing, it wouldn't surprise me if the cards were hundreds of dollars..
  • by Biolo ( 25082 )
    The answer to that is.. it depends.

    On a mixed 10/100 network just now we use 3Com hubs and switches. If you attach a 10Mb card to a port then the card and the port run at 10Mb. If you then attach a 10/100 card to another port then that port will run at 100Mb (assuming things are configured correctly). The two machines can still communicate; the hub does the rate conversion. Obviously the maximum transfer rate between the two machines is governed by the slower NIC, but if you had a second 10/100 on there then the two faster machines will communicate at 100Mbps despite the presence of the 10Mb NIC on the same segment. The first time I saw this working was a real "wow" experience. Don't ask how it all works, I have no idea, but it simply does. I would guess 3Com must have some bridging logic for each port; after all, the 10Mb NIC could never get to see all the traffic between the two 100Mb NICs running at full tilt, but I have never seen any problems caused by this, and our network is 50:50 10:100. Presumably 3Com could manage the same trick with 1000Mb.

    3Com is simply a vendor whose equipment I know from personal experience; I'm sure some other vendors' equipment can do the same trick.
  • Boy, isn't that the truth. We use roaming profiles here where I work, and our network is S-L-O-W as molasses in January.


    Hey Rob, Thanks for that tarball!
  • PCI 2.2 only supports bus speeds of up to 264MBytes/sec if you're lucky

    264MBytes/sec is more than double 1000Mbits/sec.

    Just to review:
    gigabit ethernet = 1000000000 bits per second = 1000mbits/sec = 125mbytes/sec
    standard pc pci (33mhz 32-bit) = 33333333 transfers per second (+/- 1%ish) * 4 bytes per transfer = 133ish mbytes/sec
    mac pci (66mhz 32-bit) = 66666666 transfers per second (+/- 1%ish) * 4 bytes per transfer = 266ish mbytes/sec
    alpha and others' pci (66mhz 64-bit) = 66666666 transfers per second (+/- 1%ish) * 8 bytes per transfer = 533ish mbytes/sec

    (I say +/- 1% because the clock chip on the average PC isn't at all accurate - your Celeron 466 might actually be running at 463 or 470mhz.)

    Furthermore, you don't have to have a computer that can soak the ethernet to get an improvement in speed out of gigabit ethernet over 100mbit ethernet. You just need a computer that can push the packets out faster than 100mbits/sec.

    It seems pretty clear that the average celeron box is capable of this.

  • I remember that a lot of problems cropped up when trying to do 100Mbit Ethernet on existing wiring, which only barely could manage 10Mbps. What will happen to most of the wiring already laid out. Will it have to be thrown out? I remember hearing about a Cat-6 cable. Will we have to upgrade our networks?

    It's my understanding that the 802.3ab gigabit over copper standard is intended to work on standard Cat-5 cabling, so there shouldn't be any need to replace your existing cables. On the other hand, it does use all four pairs of the cable, so faults that may not have been evident beforehand might turn up...

    Cat-6 cable certainly exists, and I believe there's also a Cat-7 standard (with individual routing channels through the cable for the individual pairs?), not to mention Cat-5e. I think quite a lot of the demand for this cable is drummed up by the vendors and installers of cable plant.

    Also, anybody got information on how collision handling is done on this new architecture? I would suppose that, being a gigabit ethernet, it would surely see much more usage than a 100Mbps one, and being also much higher speed, there should be more collisions.

    Collision handling in gigabit ethernets is a functional irrelevancy; although the 802.3z standard does have provision for shared-media networks, the last time I checked there were no products (nor any scheduled) that supported it.

    Basically, if you're looking at gigabit ethernet, you're looking at a full duplex, switched network.

    For what it's worth, from a technical perspective, I seem to remember that the collision detecting version uses a carrier extension to allow the network to have a useful radius (i.e. in order to avoid late collisions). The carrier extension was for some moderately significant number of bit-times, which could (theoretically) lead to pretty trashy performance with small packet, high load networks.

    But, as I said before, it's not like you care, as your gigabit ethernet network is all going to be switched.

  • This is four wires we're talking about here, not a programming language.

    Crash course in network wiring:

    1. There are two possible types of twisted-pair Ethernet cables: 1) straight through (for connecting a card to a hub), and 2) crossover, or null (for connecting 2 NICs together, this is the same thing as crossing over Rx and Tx to allow two DTEs to communicate)

    2. This new chip can automatically detect which cable is being used, and set itself up automagically. Now you can use both correct types of cable interchangeably. Eventually (hopefully), we can simply just buy straight-through cable all the time, for all situations.

    This also makes upgrading from a PC-to-PC network to a hub network very simple as you won't have to completely recable.

    It's more of an interoperability thing if you ask me.
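
    For reference, the two standard pinouts and how a crossover falls out of them (this is just the usual T568A/T568B table for 10/100, which uses only pins 1, 2, 3 and 6; nothing specific to the new chip):

        # A crossover cable is simply T568B on one end and T568A on the other,
        # which swaps the transmit pair (pins 1,2) with the receive pair (pins 3,6).
        T568B = ["wht/org", "org", "wht/grn", "blu", "wht/blu", "grn", "wht/brn", "brn"]
        T568A = ["wht/grn", "grn", "wht/org", "blu", "wht/blu", "org", "wht/brn", "brn"]

        for pin, (b, a) in enumerate(zip(T568B, T568A), start=1):
            note = " <- swapped" if b != a else ""
            print(f"pin {pin}: {b:8s} | {a:8s}{note}")
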
  • get an OC49
  • According to the Gigabit Ethernet group draft standard IEEE 802.3ab [gigabit-ethernet.org], gigabit copper 1000baseTX should run on all decent Cat 5 installations.

    It does this by running single duplex over all four pairs at 125 MHz. The coding is changed to increase the number of bits per symbol from 0.8 to 1.25. Simple wiring screw-ups like mixing up tip & ring are already handled by most 100baseTX ethernet transceivers. But crossover-vs-not isn't, and split pairs are unfixable.

    Your Cat5 working 100baseTX is supposed to run 1000baseTX just fine. But it won't if you've left pairs unconnected, or stolen them for a second run or a phone line. Poor crimping might also hurt.

    That said, the real question is what you can do with all that bandwidth. Most hard disks cannot sustain even the 10 MB/s that 100baseTX provides. And it's hardly a high-speed internet solution; it only runs 100m from the hub. The real problem with internet has always been interbuilding: the last mile between cable heads and user buildings.

  • RCN [rcn.com] is doing just this. They are laying fiber out to people's houses... I think the only copper part is between you and the box, and a single box only serves some 100 homes. I believe that each box gets 12 pairs of fiber.

    They are going to provide a single solution for everything - TV, phones, internet, etc. It is expected to be very very fast.

    I know this because they bought the company I was working for last summer (an ISP here in the SF Bay Area) and this is what they told us. But don't worry, they said it's fine to tell the world. :b

    IIRC, this service is going to be available here (in the bay area) as well as in the Boston area.
  • I know exactly how to make a crossover cable. From memory:

    Brown - Brown White, Green, Blue White - Blue, Green White, Orange - Orange White

    Other Side:

    Green - Green White, Brown, Blue White - Blue, Brown White, Orange - Orange White.

    Splitting the center pair, and keeping the blue in the middle reduces crosstalk, and the pairs are matched with transmit and recieve.

    I still sometimes only get 10 Mbit. Maybe some of the ethernet cards I've used are crap. I'll take your word for it and try a little harder next time. :-)
  • On 100 Mbit ethernet, a crossover cable will usually only give you 10Mbit. I assume the same will be true with gigabit. You probably won't get full speed out of a crossover cable.

    I believe the top speed for the firewire spec is 400 Mbit. I'm not sure if all devices, or ports, support 400 Mbit, but that's what's in the spec.

    Also, the EV6 bus for the Athlon is 200 Mhz, with separate switching for RAM and PCI. I wonder if that might be a good solution.
  • , most low- to mid-range SMP boxen have a singular cache to share,

    Let's call a dual Pentium 2/300 a mid-range SMP box, shall we?

    A Pentium 2 processor has 512K of level2 cache (running at clk/2 - 150 MHz) on the cartridge. Are you suggesting:
    • When operating SMP they don't use cache?
    • When operating SMP they ignore their cache and use some slow cache on the motherboard? (which they can get at at 66MHz)
    • When operating SMP one uses its cache and the other one uses the first guy's cache too?

  • Also, anybody got information on how collision handling is done on this new architecture? I would suppose that, being a gigabit ethernet, it would surely see much more usage than a 100Mbps one, and being also much higher speed, there should be more collisions.

    Yeah, and the faster those packets move, the more likely they'll be damaged when they smack into each other!
  • For just doing point to point stuff like this it seems more efficient to just use a 100 or even 10 megabit switch.. And hook THOSE together with a nice fat gigabit uplink.
  • Actually, U2W tops out at 80MB/s. That's a limitation of the SCSI bus, NOT the PCI bus. And throughput tends to be non-linear as you increase the number of devices on a SCSI bus. I've seen graphs of some controller tests as the number of drives increased -- most controllers started to suck at 5 drives. (Of course, this was several years ago -- long before U2W, LVD, U160, and fiber channel.)

    The advantage of 64bit/66MHz PCI is for hardware (read: very large cached) RAID controllers [the Mylex extremRAID 3000 comes to mind] and multiport Gigabit ethernet cards.
  • I want.

    Seriously, Cheap Gigabit ethernet could really help out in the office setting. With Buttloads of server space, you could actually implement those roaming profiles on NT, for instance, without clogging your network to bits.
    ---
  • Finally, something I can actually say "Wouldn't it be cool to build a Beowulf system with these puppies!" about... :-\

    Jack

  • by XNormal ( 8617 ) on Friday February 11, 2000 @12:30PM (#1283739) Homepage
    This type of low cost high-speed connectivity could bring the benefits of a SAN (Storage Area Network) architecture to those who can't afford a FibreChannel based system.

    The storage server can be based on PC architecture with a stripped-down linux kernel, emulating FibreChannel over gigabit ethernet. It has no notion of filesystems, users or anything like that - it is optimized to just ship disk sectors to the network at maximum performance.

    The application servers can be diskless or use their local disks only for swap and caching. One ethernet interface will connect to the internet and another will support access to the SAN. Replacing or upgrading such servers is easy when they store no state information.

    XFS is capable of letting two or more systems share access to the same disk at the sector level.
    I don't know if the linux port of XFS will support this feature, but assuming it does, this could be very useful for this kind of application.
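
    A toy sketch of the "just ship sectors" idea in Python (plain TCP, a made-up "offset length" request format, and a hypothetical disk.img backing file; purely illustrative, not how any real SAN protocol works):

        # Minimal raw-sector server: each request line is "offset length" and the
        # reply is exactly 'length' bytes from the backing file or block device.
        # No users, no filesystem, no caching.
        import socket

        BACKING_STORE = "disk.img"   # hypothetical image standing in for a disk

        def serve(port=9000):
            with open(BACKING_STORE, "rb") as disk, socket.socket() as srv:
                srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
                srv.bind(("", port))
                srv.listen(1)
                conn, _ = srv.accept()
                with conn, conn.makefile("rb") as requests:
                    for line in requests:
                        offset, length = map(int, line.split())
                        disk.seek(offset)
                        conn.sendall(disk.read(length))

        if __name__ == "__main__":
            serve()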


    ----
  • by snack ( 71224 ) on Friday February 11, 2000 @11:44AM (#1283740) Journal
    Sure, you can get a fairly inexpensive gigabit ethernet card, but how much is the hub gonna cost. You can only connect 2 computers through a Null Cable (cross over).

    (First?)
