'Killer' Network Card Actually Reduces Latency 292
fatduck writes "HardOCP has published a review of the KillerNIC network card from Bigfoot Networks. The piece examines benchmarks of the product in online gaming and a number of user experiences. The product features a 'Network Processing Unit' or NPU, among other acronyms, which promises to drastically reduce latency in online games. Too good to be true? The card also sports a hefty price tag of $250." From the article: "The Killer NIC does exactly what it is advertised to do. It will lower your pings and very likely give you marginally better framerates in real world gaming scenarios. The Killer NIC is not for everyone as it is extremely expensive in this day and age of 'free' onboard NICs. There are very likely other upgrades you can make to your computer for the same investment that will give you more in return. Some gamers will see a benefit while others do not. Hardcore deathmatchers are likely to feel the Killer NIC advantages while the middle-of-the-road player will not be fine tuned enough to benefit from the experience. Certainly though, the hardcore online gamer is exactly who this product is targeted at."
correct me if I'm wrong... (Score:4, Interesting)
A killer NIC? LOL what a phrase... Aren't there several of these Nicolas guys in jail already? right next to the killer Bobs and killer Joes.... sheesh
The more interesting thing (Score:3, Interesting)
Re:How ... (Score:1, Interesting)
This article is pure BS. If anything, this card probably increases the latency because of the additional layer of software involved on the card itself.
Probably not... (Score:3, Interesting)
An ICMP echo reply is totally different, though. Unless you have a weird firewall setup going on, it's pretty much safe to send out the echo response as soon as you get the echo request. So in this situation, you could peg the main CPU and still have the NIC doing the mind-numbingly boring task of sending out echo responses without going through the CPU, and in this case you might see a latency improvement of a few milliseconds. But in general the CPU is going to have to do some processing and formulate the correct response anyway, so having a "smart" network card doesn't help.
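For the curious, here is a minimal sketch (Python, purely illustrative; a NIC would do this in hardware) of why echo replies are so cheap to offload: turning a request into a reply is just flipping the type byte from 8 to 0 and recomputing the RFC 1071 checksum.

```python
import struct

def inet_checksum(data: bytes) -> int:
    """RFC 1071 Internet checksum: one's-complement sum of 16-bit words."""
    if len(data) % 2:
        data += b"\x00"
    total = sum(struct.unpack("!%dH" % (len(data) // 2), data))
    while total >> 16:                       # fold carries back in
        total = (total & 0xFFFF) + (total >> 16)
    return ~total & 0xFFFF

def echo_reply(request: bytes) -> bytes:
    """Build an ICMP echo reply (type 0) from an echo request (type 8)."""
    assert request[0] == 8, "not an echo request"
    # Flip the type, zero the checksum field, keep id/seq/payload as-is.
    body = bytes([0, request[1], 0, 0]) + request[4:]
    csum = inet_checksum(body)
    return body[:2] + struct.pack("!H", csum) + body[4:]
```

Note there is no decision-making here at all, which is exactly why a ping to an offloading NIC says nothing about latency for game traffic the host CPU still has to handle.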
Re:How ... (Score:3, Interesting)
Here are two data points: a $10 PCI NIC and a $100 mobo I bought recently (with an integrated NIC) both feature checksum offloading. They are both gigabit, so I guess you get that for free on any gigabit NIC nowadays.
Other than that, I really don't see how a NIC can decrease latencies. The latency of that first hop off your computer is below 1 ms anyway.
Re:correct me if I'm wrong... (Score:2, Interesting)
What a load of BS. Be on the lookout for people pushing their stupid marketing into Slashdot's comment system. Unfortunately, with such a large userbase, it's nothing new here at the good ol'
bad idea (Score:2, Interesting)
Granted, most people that will use this NIC (the few who do) probably aren't communicating a whole lot of sensitive data. Still, the whole thing just looks like a disaster waiting to happen.
Re:correct me if I'm wrong... (Score:5, Interesting)
1) A bunch of blah and stuff about memory. Your proposed path of memory->application->CPU->kernel memory->protocol stack->CPU memory->NIC driver->bus (it was hard to follow through all the FUD) shows you have no idea how an OS works; I can't think of any modern, common OS that has such a path. Those are all real parts, but the flow is nothing like you describe. See the LKML thread on 2.6 network programming if you want to see how this works on Linux, which is relatively transparent: http://lkml.org/lkml/2005/5/17/78 [lkml.org]. You can also look at BSD.
2) The PCI bus is irrelevant for gigabit Ethernet (which is about the only network controller still commonly in production; legacy 10/100 parts are more widely deployed but almost out of production), and for faster types (10GE, Myrinet, or InfiniBand) it is totally irrelevant. The 32-bit PCI bus tops out right around gigabit speeds, and that bandwidth is shared with everything else on the PCI bus, so it is suboptimal:
http://www.codepedia.com/1/PCI+BUS [codepedia.com]
PCI-X, and gigabit controllers hanging directly off the motherboard chipset, are how networking is mostly done now.
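The back-of-envelope arithmetic behind the "32-bit PCI tops out around gigabit" claim is simple enough to sketch (theoretical peaks only; real buses deliver less after arbitration overhead):

```python
def bus_bandwidth_MBps(width_bits: int, clock_mhz: float) -> float:
    """Theoretical peak of a parallel bus: (bytes per cycle) x (MHz) = MB/s."""
    return (width_bits / 8) * clock_mhz

GIGABIT_MBps = 1000 / 8                     # 1 Gbit/s line rate = 125 MB/s

pci = bus_bandwidth_MBps(32, 33.33)         # classic 32-bit/33 MHz PCI ~ 133 MB/s
pcix = bus_bandwidth_MBps(64, 133.0)        # PCI-X 64-bit/133 MHz ~ 1064 MB/s
# Classic PCI has only ~7% headroom over gigabit line rate -- and that
# bandwidth is shared with every other device sitting on the bus.
```

So a single gigabit NIC can already saturate the whole classic PCI bus by itself, which is why modern boards hang the controller off the chipset or PCI-X instead.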
3) Blah blah, the network is slower than the computer (ridiculous; it depends entirely on the network and the computer. In consumer hardware it swings like a pendulum: when 100 Mb came out, most of the parts in a PC couldn't keep up, and it was faster to install over the network than from CD-ROM because the CD drive was slower. It is going through that again with gigabit: most consumer PC disk systems can't even approach filling a gigabit link.) Then there is some conflation about what QoS and policing can do. QoS only helps if the pipe is full:
http://en.wikipedia.org/wiki/Quality_of_service [wikipedia.org]
or
http://www.cisco.com/univercd/cc/td/doc/cisintwk/
4) ISPs and stupidity. ISPs may or may not be stupid, but they are driven by market forces, and the market reality is that people don't currently want to pay for a tiered-service-class Internet. When they do, ISPs will offer it; technically it has been feasible for years. Read the NANOG mailing list and you will see they are not stupid, just in a low-margin business.
5) Blah blah blah, microsecond delay. Distinguishable from millisecond delay, by a person, on a consumer computer with a common OS? Hahahahah. Not without a measuring device. It is possible with enough training, I suppose (musicians, maybe). Since you can buy commodity off-the-shelf LAN gear that will turn in sub-millisecond delay, I don't think spending extra money chasing low-microsecond delay will help.
Bunch of pseudo-science modded up on Slash again...
Oh, and jumbo frames are commonly 9000 B in size (although the term can refer to anything bigger than 1500 B):
http://sd.wareonearth.com/~phil/net/jumbo/ [wareonearth.com]
or 9K on cisco:
http://www.cisco.com/warp/public/473/148.html [cisco.com]
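The arithmetic for why 9000 B frames matter is easy to sketch. This minimal Python example (my own simplification: it counts only IPv4+TCP headers and ignores Ethernet framing, preamble, and inter-frame gap) shows jumbo frames cutting both packet count and header overhead by roughly 6x:

```python
import math

def frames_and_overhead(payload: int, mtu: int, hdr: int = 40) -> tuple[int, int]:
    """Frames and header bytes needed to move `payload` bytes of TCP data.
    hdr = 20 B IPv4 + 20 B TCP per packet; Ethernet framing is ignored."""
    mss = mtu - hdr                          # TCP payload per full-size frame
    frames = math.ceil(payload / mss)
    return frames, frames * hdr

std, std_ovh = frames_and_overhead(1_000_000, 1500)      # standard frames
jumbo, jumbo_ovh = frames_and_overhead(1_000_000, 9000)  # 9000 B jumbo frames
```

Fewer frames also means fewer per-packet interrupts and protocol-stack traversals on the host, which is where the real win usually is.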
Different types of latency (Score:3, Interesting)
I wrote a client/server app that had to deal with a ridiculous amount of information about hundreds of entities moving around the screen. I found the most efficient way to keep messages being processed was to lock the frame rate at 30 fps and drop frames if that rate could not be maintained. When a frame is dropped, the only thing that doesn't happen is the rendering of that frame; suddenly the main loop is running at thousands of iterations per second, clearing messages out of the queue and processing them, because it doesn't have to render a frame for a few ms. Thirty milliseconds of focused message processing will reduce lag significantly.
If I put the emphasis on rendered frames per second, the message queue would back up and eventually the app would crash because the buffer was filling faster than it could be emptied.
Maybe instead of focusing on rendered frames per second, people should put more emphasis on iterations per second and on getting those messages processed. At 100 fps, that gives you 10 ms to render a frame, process all the waiting messages, and perform game logic. Good luck with that; 10 ms is barely enough time just to render a frame.
I bet gamers would have a better online experience if they'd lock the rendered frame rate to free up more processing power to handle packets. However, I don't think any modern games allow that. Locking the frame rate typically means locking the entire game processing loop, and that's stupid and unnecessary. It is possible to skip rendering a frame but still do everything else.
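The loop structure described above might look something like this sketch (all names and callbacks are hypothetical; a real engine would interleave input handling and use a proper scheduler instead of busy-waiting between frames):

```python
import time
from collections import deque

FRAME_DT = 1 / 30          # lock rendering at 30 fps
queue = deque()            # stand-in for the incoming network message queue

def run(frames: int, render, process_message, update) -> None:
    """Drain the message queue and run game logic every iteration; render at
    most 30 times per second, and drop the frame when we fall behind."""
    next_frame = time.perf_counter()
    rendered = 0
    while rendered < frames:
        while queue:                        # always empty the queue first
            process_message(queue.popleft())
        update()                            # game logic runs every iteration
        now = time.perf_counter()
        if now >= next_frame:
            if now - next_frame < FRAME_DT:
                render()                    # on schedule: draw this frame
                rendered += 1
            # behind schedule: skip the draw, just advance the deadline
            next_frame += FRAME_DT
```

The key property is that message processing and game logic never wait on the renderer; only the draw call is rate-limited, exactly as the comment describes.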
Lowering pings (Score:5, Interesting)
What almost no one knew was that the mod API allowed you to simply edit those values on the fly.
Re:How ... (Score:3, Interesting)
ISPs aren't "bloody stupid" about multicast (Score:4, Interesting)
Network-layer, Deering-model multicast is never going to happen. It has nothing to do with ISP business models and everything to do with simple technical feasibility:
There isn't even an agreement among protocol designers about what multicast is supposed to accomplish anymore. BitTorrent is taking a lot of the steam out of it; so are unicast solutions to streaming media that prove that multicast is inessential. Multicast gets used tactically inside of some networks, but if you're on the same LAN as your other players, the network is already plenty fast for gaming even with unicast.
Forget about multicast.
Column inches (Score:3, Interesting)
Some people are used to writing for the 1.8 inch columns of typical newspaper layout [wikipedia.org], which does use more paragraph breaks than copy elsewhere because 25 to 35 words fill an inch. The 25em column width of Slashdot's comment entry area before the CSS makeover encouraged similar behavior.