'Killer' Network Card Actually Reduces Latency

fatduck writes "HardOCP has published a review of the KillerNIC network card from Bigfoot Networks. The piece examines benchmarks of the product in online gaming and a number of user experiences. The product features a 'Network Processing Unit,' or NPU, among other acronyms, which promises to drastically reduce latency in online games. Too good to be true? The card also sports a hefty price tag of $250." From the article: "The Killer NIC does exactly what it is advertised to do. It will lower your pings and very likely give you marginally better framerates in real-world gaming scenarios. The Killer NIC is not for everyone, as it is extremely expensive in this day and age of 'free' onboard NICs. There are very likely other upgrades you can make to your computer for the same investment that will give you more in return. Some gamers will see a benefit while others will not. Hardcore deathmatchers are likely to feel the Killer NIC's advantages, while the middle-of-the-road player will not be fine-tuned enough to benefit from the experience. Certainly, though, the hardcore online gamer is exactly who this product is targeted at."
This discussion has been archived. No new comments can be posted.


  • by zappepcs ( 820751 ) on Saturday December 09, 2006 @06:23PM (#17178310) Journal
    But the only real concern in making a killer NIC is keeping all the processing off of the CPU and bus. If the CPU/MB can shuffle packets to and from the NIC at the speed of the data bus, then it can't get much faster unless you want to offload protocols to the NIC, etc.

    A killer NIC? LOL what a phrase... Aren't there several of these Nicolas guys in jail already? right next to the killer Bobs and killer Joes.... sheesh
  • by the_humeister ( 922869 ) on Saturday December 09, 2006 @06:34PM (#17178394)
    What's more interesting is that the card is actually a single-board computer with a PowerPC processor and 64 MB of RAM!
  • Re:How ... (Score:1, Interesting)

    by Anonymous Coward on Saturday December 09, 2006 @06:46PM (#17178552)
    Most network chips these days have checksum and TCP layer offloading.

    This article is pure BS. If anything, this card probably increases the latency because of the additional layer of software involved on the card itself.
  • Probably not... (Score:3, Interesting)

    by eklitzke ( 873155 ) on Saturday December 09, 2006 @07:10PM (#17178842) Homepage
    It's not like a computer sends you some data and the network card is immediately able to reply. To formulate a response, it probably needs data from the CPU, e.g. about your position, your health, or whatever it is that you need to transfer back and forth in a game.

    An ICMP echo reply is totally different, though. Unless you have a weird firewall setup going on, it's pretty much safe to send out the echo response as soon as you get the echo request. So in this situation, you could peg the main CPU and then have the NIC doing the mind-numbingly boring task of sending out echo responses without going through the CPU, and in this case you might see a latency improvement of a few milliseconds. But in general the CPU is going to have to do some processing and formulate the correct response anyway, so having a "smart" network card doesn't help.
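    The point about echo replies can be made concrete: an echo reply is just the request with the type byte flipped and the checksum recomputed, so no application state is needed at all. A minimal sketch in Python (the packet is built by hand here, not sent on a real socket):

```python
import struct

def icmp_checksum(data: bytes) -> int:
    """Standard 16-bit ones'-complement Internet checksum."""
    if len(data) % 2:
        data += b"\x00"
    total = sum(struct.unpack("!%dH" % (len(data) // 2), data))
    while total >> 16:
        total = (total & 0xFFFF) + (total >> 16)
    return ~total & 0xFFFF

def echo_reply(request: bytes) -> bytes:
    """Turn an ICMP echo request into the matching reply:
    flip type 8 -> 0 and recompute the checksum. No game state needed."""
    pkt = bytearray(request)
    pkt[0] = 0                      # type: echo reply
    pkt[2:4] = b"\x00\x00"          # zero the checksum field first
    pkt[2:4] = struct.pack("!H", icmp_checksum(bytes(pkt)))
    return bytes(pkt)

# Build a sample echo request: type 8, code 0, checksum 0, id 1, seq 1.
req = bytearray(struct.pack("!BBHHH", 8, 0, 0, 1, 1) + b"ping")
req[2:4] = struct.pack("!H", icmp_checksum(bytes(req)))
rep = echo_reply(bytes(req))
print(f"reply type={rep[0]}, checksum verifies: {icmp_checksum(rep) == 0}")
```

    Verifying a received packet uses the same function: the checksum over a correctly checksummed packet folds to zero. That stateless-ness is exactly why a ping responder can live on the NIC while game traffic cannot.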
  • Re:How ... (Score:3, Interesting)

    by jpop32 ( 596022 ) on Saturday December 09, 2006 @07:32PM (#17179052)
    Can you list a few examples, preferably with datasheets?

    Here are two data points: a $10 PCI NIC and a $100 mobo I bought recently (with an integrated NIC) both feature checksum offloading. They are both gigabit, so I guess you get that for free on any gigabit NIC nowadays.

    Other than that, I really don't see how a NIC can decrease latencies. The latency of that first hop off your computer is below 1ms anyway.
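    For a rough sense of how small that first hop is, here is a quick round-trip measurement over loopback in Python. Loopback is an optimistic stand-in for a real NIC hop, but it shows the on-box cost is measured in microseconds, not milliseconds:

```python
import socket
import time

# Two UDP sockets on loopback stand in for "this box and its first hop".
a = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
b = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
a.bind(("127.0.0.1", 0))
b.bind(("127.0.0.1", 0))

def round_trip() -> float:
    """Time one send/echo/receive cycle through the loopback stack."""
    start = time.perf_counter()
    a.sendto(b"ping", b.getsockname())
    data, addr = b.recvfrom(64)
    b.sendto(data, addr)
    a.recvfrom(64)
    return time.perf_counter() - start

# Take the best of 100 runs to dodge scheduler noise.
rtt = min(round_trip() for _ in range(100))
print(f"loopback round trip: {rtt * 1e6:.0f} us")
```

    Even with the full kernel socket path traversed four times, the whole cycle comes in far under a millisecond, which is the point: shaving the local hop leaves the tens of milliseconds of internet latency untouched.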
  • by Kagura ( 843695 ) on Saturday December 09, 2006 @11:32PM (#17180962)
    Wow, Baxtered! You must really feel strongly about this product to have registered today and made this your very first post!

    What a load of BS. Be on the look-out for people pushing their stupid marketing into Slashdot's comment system. Unfortunately, with such a large userbase, it's nothing new here at the good ol' /.
  • bad idea (Score:2, Interesting)

    by Mancat ( 831487 ) on Saturday December 09, 2006 @11:36PM (#17180998) Homepage
    I recall reading a technical overview of this card a few months back. Apparently, it's running Linux of some sort on its host processor. So, how awesome would it be if some remote vulnerability affected the card, allowing someone to implant a rootkit on the device? Now all of your raw network traffic can be captured, your machine can be joined to a botnet, etc. and you'll probably have absolutely no way of knowing about it.

    Granted, most people that will use this NIC (the few who do) probably aren't communicating a whole lot of sensitive data. Still, the whole thing just looks like a disaster waiting to happen.
  • by Anonymous Coward on Saturday December 09, 2006 @11:39PM (#17181020)
    You're wrong on some things...

    1) A bunch of blah and stuff about memory. Since your explanation is memory->application->CPU->kernel memory->protocol stack->CPU memory->NIC driver->bus (basically, it was hard to follow with all the FUD), you obviously have no idea how an OS works (I can't think of any modern, common OSes that have such a path). None of this happens as you describe; those are all real parts, but the flow is nothing like you describe. See LKML for 2.6 on network programming if you want to see how this works on Linux, which is relatively transparent http://lkml.org/lkml/2005/5/17/78 [lkml.org] also you can look at BSD.

    2) The PCI bus is irrelevant for gigabit ethernet (which is about the only network controller commonly in production; legacy stuff like 10/100 is more common, but it's almost out of production), and for faster types (10GE or Myrinet or InfiniBand), totally irrelevant. The 32-bit PCI bus tops out at about gigabit speeds, and it is shared with everything else on the PCI bus, therefore suboptimal:

    http://www.codepedia.com/1/PCI+BUS [codepedia.com]

    PCI-X and gigabit controllers hanging directly off the motherboard chipset is how networking is mostly done now.

    3) Blah blah, network slower than computers (ridiculous; it depends entirely on the network and the computer. In consumer computers it swings like a pendulum: when 100Mb came out, most of the stuff in the PC couldn't keep up, and it was faster to install over the network than from CD-ROM because the CD drive was slower. We're going through that again with gigabit; most consumer PC disk systems can't even approach filling gigabit). Then some conflation of what QoS and policing can do... QoS only helps if the pipe is full:

    http://en.wikipedia.org/wiki/Quality_of_service [wikipedia.org]

    or

    http://www.cisco.com/univercd/cc/td/doc/cisintwk/ito_doc/qos.htm [cisco.com]

    4) ISPs and stupidity. ISPs may or may not be stupid. They are driven by market forces, and the market force is that people don't currently want to pay for a tiered-service-class internet. When they do, they will offer it. Technically it has been feasible for years. Read the NANOG mailing list; you will see they are not stupid, but instead are in a low-margin business.

    5) Blah blah blah, microsecond delay, distinguishable from millisecond via a consumer computer with a common OS by a person?? Hahahahah. Not without a measuring device. It is possible with enough training (I suppose musicians can). Since you can buy commodity off-the-shelf LAN gear that will turn in sub-millisecond delay, I don't think spending the extra money on low-microsecond delay will help.

    Bunch of pseudo-science modded up on Slash again...

    Oh, and jumbo frames are commonly 9000B in size (although the term can refer to anything bigger than 1500B):

    http://sd.wareonearth.com/~phil/net/jumbo/ [wareonearth.com]

    or 9K on cisco:

    http://www.cisco.com/warp/public/473/148.html [cisco.com]
  • by KalvinB ( 205500 ) on Sunday December 10, 2006 @12:35AM (#17181362) Homepage
    There is hardware-, software-, and internet-induced latency. The best a NIC can do is improve hardware-induced latency, but that is the least of it. The main thing to worry about is reducing the amount of time the software spends processing packet information; there's little you can do about internet latency. Every ms spent rendering the screen is a ms during which packets get backed up.

    I wrote a client/server app that had to deal with a ridiculous amount of information about hundreds of entities moving around the screen. I found the most efficient way to keep messages being processed was to lock the framerate at 30fps and drop frames if that rate could not be maintained. When a frame is dropped, the only thing skipped is rendering. Suddenly the main loop is running at thousands of iterations per second, clearing messages out of the queue and processing them, because it doesn't have to render a frame for a few ms. 30 ms of focused message processing will reduce lag significantly.

    If I put the emphasis on rendered frames per second, the message queue would back up and eventually the app would crash because the buffer was filling faster than the app could empty it.

    Maybe instead of focusing on rendered frames per second, people should be putting more emphasis on iterations per second and getting those messages processed. At 100 fps, that gives 10ms to render a frame, process all the waiting messages, and perform game logic. Good luck with that; 10ms is barely enough time just to render a frame.

    I bet gamers would have a better online experience if they'd lock the rendered frame rate to free up more processing power to handle packets. However, I don't think any modern games allow that. Locking the frame rate typically means locking the entire game processing loop, and that's stupid and unnecessary. It is possible to not render a frame but still do everything else.
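    The loop described above can be sketched as a fixed-tick loop that always drains the message queue but drops only the render when it falls behind schedule. This is a toy illustration (message handling and drawing are stubbed out with sleeps):

```python
import time

TICK = 1 / 30  # lock the simulation at 30 updates per second

def run(frames: int, queue: list, render_cost: float = 0.0) -> int:
    """Fixed-tick loop: always drain the message queue, but drop the
    render (and only the render) whenever we are behind schedule."""
    rendered = 0
    next_frame = time.perf_counter() + TICK
    for _ in range(frames):
        while queue:            # always process pending messages first
            queue.pop()         # (real message handling stubbed out)
        now = time.perf_counter()
        if now <= next_frame:   # on schedule: render this frame
            time.sleep(next_frame - now)
            time.sleep(render_cost)  # stand-in for actual draw time
            rendered += 1
        # behind schedule: skip rendering, keep clearing the queue
        next_frame += TICK
    return rendered

fast = run(10, list(range(1000)))         # cheap frames: nothing dropped
slow = run(10, list(range(1000)), 0.05)   # 50ms draws: frames get dropped
print(f"cheap render: {fast}/10 frames, expensive render: {slow}/10 frames")
```

    With 50ms draws the loop can only render every other tick or so, but the queue still empties every iteration, which is the trade-off being argued for: sacrifice rendered frames, never sacrifice message processing.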
  • Lowering pings (Score:5, Interesting)

    by Dirtside ( 91468 ) on Sunday December 10, 2006 @04:15AM (#17182566) Journal
    Back in the heyday of Quake II, a friend and I made the Quake Superheroes and Quake Superheroes II mods, and we put in a superpower that would (ostensibly) reduce your ping time, using some kind of technobabble handwaving. Everyone was convinced that it worked, too, because when you used it, the ping times listed on the player screen would indeed be lower for you!

    What almost no one knew was that the mod API allowed you to simply edit those values on the fly. :) I don't know if anyone ever caught on, but it was funny watching people argue over whether you should take a "real" superpower like flying or teleportation, or try to improve your ping :)
  • Re:How ... (Score:3, Interesting)

    by pe1chl ( 90186 ) on Sunday December 10, 2006 @05:59AM (#17182962)
    In Linux, type "ethtool -k eth0" to see if your card does it. Many systems I use have onboard Intel controllers and they all support it.
  • by tqbf ( 59350 ) on Sunday December 10, 2006 @12:06PM (#17184734) Homepage

    Network-layer, Deering-model multicast is never going to happen. It has nothing to do with ISP business models and everything to do with simple technical feasibility:

    • Routing table management is an issue even in unicast (which is why I can't get an ASN and advertise a /27, even though that would be incredibly convenient and useful). But multicast addresses individual pieces of content (video streams, game rooms, chat channels) and requires diverse, interdomain routing for each. This is akin to demanding BGP advertisements for every popular page on the web.
    • The protocols for interdomain routing barely exist and have never been proven; no production network relies on them. Even interior routing for multicast is in flux; just a few years ago, the model changed to single-sender, which simplifies routing but changes the service model so only one source can efficiently send data.
    • Forward error correction may work for streaming media, but it's a disaster for tiny, discrete updates, and outside of FEC there are no proven ideas in multicast reliability. "Scalable Reliable Multicast" isn't a protocol; it's a position paper from the mid-'90s. The well-known current multicast reliability protocols all require infrastructure support: strategically deployed "repair" servers.

    There isn't even an agreement among protocol designers about what multicast is supposed to accomplish anymore. BitTorrent is taking a lot of the steam out of it; so are unicast solutions to streaming media that prove that multicast is inessential. Multicast gets used tactically inside of some networks, but if you're on the same LAN as your other players, the network is already plenty fast for gaming even with unicast.

    Forget about multicast.

  • Column inches (Score:3, Interesting)

    by tepples ( 727027 ) <tepples.gmail@com> on Sunday December 10, 2006 @02:38PM (#17185996) Homepage Journal

    Some people are used to writing for the 1.8 inch columns of typical newspaper layout [wikipedia.org], which does use more paragraph breaks than copy elsewhere because 25 to 35 words fill an inch. The 25em column width of Slashdot's comment entry area before the CSS makeover encouraged similar behavior.
