
Researcher Shows How GPUs Make Terrific Network Monitors

alphadogg writes "A network researcher at the U.S. Department of Energy's Fermi National Accelerator Laboratory has found a potential new use for graphics processing units — capturing data about network traffic in real time. GPU-based network monitors could be uniquely qualified to keep pace with all the traffic flowing through networks running at 10Gbps or more, said Fermilab's Wenji Wu. Wenji presented his work as part of a poster series of new research at the SC 2013 supercomputing conference this week in Denver."
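The summary gives no detail on how the monitor works, but the usual way packet processing is mapped onto a GPU is to hand each captured packet in a batch to its own thread. Here is a minimal CUDA sketch of that idea (my own illustration, not Wenji Wu's code; the batch layout, header offset, and protocol histogram are assumptions):

```cuda
#include <cstdint>
#include <cuda_runtime.h>

// One thread per captured packet: read the IPv4 protocol byte and bump a
// per-protocol counter. A real monitor would track flows, byte counts, etc.
__global__ void count_protocols(const uint8_t *pkts, const uint32_t *offsets,
                                int num_pkts, unsigned int *proto_counts)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= num_pkts) return;

    const uint8_t *pkt = pkts + offsets[i];   // start of packet i in the batch
    uint8_t proto = pkt[14 + 9];              // 14-byte Ethernet header + IPv4 protocol field
    atomicAdd(&proto_counts[proto], 1u);      // 256-entry protocol histogram
}

int main()
{
    // In a real monitor the batch would arrive from the NIC (ideally DMA'd
    // into pinned memory); here we only show the launch shape.
    const int num_pkts = 1 << 16;
    uint8_t *d_pkts; uint32_t *d_offsets; unsigned int *d_counts;
    cudaMalloc(&d_pkts, num_pkts * 1514);     // worst-case Ethernet frame size
    cudaMalloc(&d_offsets, num_pkts * sizeof(uint32_t));
    cudaMalloc(&d_counts, 256 * sizeof(unsigned int));
    cudaMemset(d_pkts, 0, num_pkts * 1514);
    cudaMemset(d_offsets, 0, num_pkts * sizeof(uint32_t));
    cudaMemset(d_counts, 0, 256 * sizeof(unsigned int));

    int threads = 256;
    int blocks = (num_pkts + threads - 1) / threads;
    count_protocols<<<blocks, threads>>>(d_pkts, d_offsets, num_pkts, d_counts);
    cudaDeviceSynchronize();

    cudaFree(d_pkts); cudaFree(d_offsets); cudaFree(d_counts);
    return 0;
}
```

Each thread does a small amount of independent work, which is exactly the shape of workload a GPU handles well; the hard part in practice is getting packets into GPU memory fast enough.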
  • That's it? (Score:5, Informative)

    by drcheap ( 1897540 ) on Friday November 22, 2013 @12:38AM (#45488261) Journal

    So in violation of /. convention, I went ahead and read TFA in hopes that there would actually be something more than "we solved yet another parallel computing problem with GPUs." Nope, nothing. Not even some useless eye candy of a graph showing two columns of before/after processing times.

    And the article just *had* to be split into two pages because it would have killed them to include that tiny boilerplate footer on page one. What a fail...at least it wasn't a blatant slashvertisement!

    • by timeOday ( 582209 ) on Friday November 22, 2013 @12:50AM (#45488305)
      They said it achieves a speedup of 17x; here is the graph:

      CPU: X
      GPU: XXXXXXXXXXXXXXXXX

    • by edibobb ( 113989 )
      In defense of the researcher (and not the author of the article), it was just a poster presented at a conference, not a published paper.
    • Re:That's it? (Score:5, Informative)

      by NothingMore ( 943591 ) on Friday November 22, 2013 @03:50AM (#45488867)

      I saw this poster at the conference and was not impressed; in fact, it was one of the weaker posters I saw there (it was light on details, and some of the information on the poster about GPUs in general was not entirely accurate). It is really a poster that should not have been at SC at all. While it is interesting in the networking sense, the amount of data they can process is nowhere close to the amount actually flowing through these large-scale machines (up to 10 GB/sec per node), and there was no information about scaling this data collection (which would be needed at extreme scales) to obtain meaningful information to allow for tuning of network performance.

      This poster should have been at a networking conference, where the results would have been much more interesting to the crowd attending. Also of note, IIRC the author was using a traditional GPU programming model that is not efficient for this style of computation. The speedup numbers would have been greatly improved by using an RPC-style model of programming for the GPU (a persistent kernel with tasking from pinned pages), sketched below. However, this is not something I totally fault the author for not using, since it is a rather obscure programming technique for GPUs at this time.
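      For readers unfamiliar with the technique: a persistent kernel stays resident on the GPU and pulls work out of pinned (page-locked, GPU-mapped) host memory instead of paying a kernel-launch cost per batch. The following is only a rough sketch of what such a setup could look like; the Task struct, the flag handshake, and the trivial "work" are assumptions made up for illustration, not anything from the poster:

```cuda
#include <cstdio>
#include <cstring>
#include <cuda_runtime.h>

// Task descriptor living in pinned (page-locked, GPU-mapped) host memory.
// The host fills it in; the persistent kernel polls it over PCIe.
struct Task {
    volatile int ready;     // 1 = new batch available, 2 = shut down
    volatile int done;      // kernel sets this when the batch is processed
    int          num_items;
    int          items[1024];
    int          result;    // stand-in for real per-packet results
};

// A single long-running ("persistent") kernel: instead of launching a new
// kernel per batch, it spins on the task flag and processes work as it arrives.
__global__ void persistent_worker(Task *task)
{
    while (true) {
        int state = task->ready;
        if (state == 2) return;               // shutdown requested
        if (state == 1) {
            int sum = 0;                      // trivial stand-in for packet analysis
            for (int i = 0; i < task->num_items; ++i)
                sum += task->items[i];
            task->result = sum;
            task->ready  = 0;
            __threadfence_system();           // make writes visible to the host
            task->done   = 1;
        }
    }
}

int main()
{
    cudaSetDeviceFlags(cudaDeviceMapHost);    // allow GPU-mapped host memory

    Task *task;                               // pinned + mapped: visible to the GPU
    cudaHostAlloc(&task, sizeof(Task), cudaHostAllocMapped);
    memset((void *)task, 0, sizeof(Task));

    Task *d_task;
    cudaHostGetDevicePointer(&d_task, task, 0);
    persistent_worker<<<1, 1>>>(d_task);      // one worker thread, for clarity only

    for (int i = 0; i < 1024; ++i) task->items[i] = 1;
    task->num_items = 1024;
    task->done  = 0;
    task->ready = 1;                          // hand the batch to the GPU
    while (!task->done) { }                   // a real system would do useful work here
    printf("result = %d\n", task->result);

    task->ready = 2;                          // tell the kernel to exit
    cudaDeviceSynchronize();
    cudaFreeHost(task);
    return 0;
}
```

      A real implementation would use many threads, proper host-side memory fences, and ring buffers rather than a single task slot, but the structure (fill pinned memory, flip a flag, let the resident kernel pick it up) is the same.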

      • by gentryx ( 759438 ) *

        However, this is not something I totally fault the author for not using, since it is a rather obscure programming technique for GPUs at this time.

        Good point. I guess this will change once Kepler GPUs are widely adopted and CUDA 6.0 is released: with Kepler you can spawn kernels from within the GPU, and unified virtual addressing will make it easier to push complex data structures into the GPU (according to the poster, there appears to be some preprocessing happening on the CPU).
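        For concreteness: "spawning kernels from within the GPU" is CUDA dynamic parallelism (compute capability 3.5 and up, compiled with relocatable device code), and CUDA 6 layers unified memory on top of the existing unified addressing. A toy sketch of both, purely illustrative and not tied to the poster's code:

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// Child kernel: stand-in for per-flow or per-packet work.
__global__ void child(int *data, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) data[i] += 1;
}

// Parent kernel launches children from the device (dynamic parallelism).
// Requires compute capability >= 3.5; build with: nvcc -arch=sm_35 -rdc=true -lcudadevrt
__global__ void parent(int *data, int n)
{
    if (threadIdx.x == 0 && blockIdx.x == 0)
        child<<<(n + 255) / 256, 256>>>(data, n);
}

int main()
{
    const int n = 1024;
    int *data;
    cudaMallocManaged(&data, n * sizeof(int));   // CUDA 6 unified memory:
    for (int i = 0; i < n; ++i) data[i] = i;     // the host writes it directly

    parent<<<1, 1>>>(data, n);
    cudaDeviceSynchronize();                     // wait for parent and child kernels

    printf("data[0]=%d data[%d]=%d\n", data[0], n - 1, data[n - 1]);
    cudaFree(data);
    return 0;
}
```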

      • by Anonymous Coward

        As a PhD in computer networking, I'll tell you it would have been easier to publish at SC than at other reputable networking conferences. To me, this article is non-news.

  • He also has no clue about ASICs. Let's take a look at this line: "Nor do they offer the ability to split processing duties into parallel tasks,"
    If there is one thing you can do on an ASIC, it's parallelisation. Application-specific cores are small, very small; standard multi-project wafer run technologies have a good number of metal layers, so routing isn't too problematic, etc. So you can actually fit a whole lot of cores in a small silicon area in a modern technology. The main issue is the cost of the hard
  • by postmortem ( 906676 ) on Friday November 22, 2013 @01:07AM (#45488357) Journal

    The NSA already does this; how else do you think they process all that data?

  • wishful thinking (Score:1, Insightful)

    by Anonymous Coward

    "Compared to a single core CPU-based network monitor, the GPU-based system was able to speed performance by as much as 17 times"

    Shouldn't "researchers" know better how to execute benchmarks in such a way that a comparison between a CPU and a GPU actually makes sense and is not misleading? Why didn't they compare it to a 12 or 16 core CPU to show that it is only marginally better and requires programming in OpenCL or CUDA? Why didn't they take a 2P system and show that it is actually performing worse? In tha

    • by Anonymous Coward

      Shouldn't "researchers" know better how to execute benchmarks in such a way that a comparison between a CPU and a GPU actually makes sense and is not misleading?

      If the goal is hard science, then that would make sense. But when the goal is to wow the press, grab attention, and whore in the media, then no... that would be the opposite of what you'd want.

    • Why didn't they compare it to a 12 or 16 core CPU to show that it is only marginally better and requires programming in OpenCL or CUDA?

      "Compared to a six core CPU, the speed up from using a GPU was threefold." If the 12-core CPU is twice as fast, that's 1.5x, and for a 16-core, that's 1.12x.

    • In practice, most people who publish results of a new algorithm ported to a GPU do not have a version well-optimized for the CPU, or aren't that good at optimization in the first place. I've had several cases where I could make the CPU version faster than their GPU version, despite them having claimed a 200x speed-up with the GPU.
      If you have a fairly normal algorithm in terms of data access and your speed-up is bigger than 4, you're probably doing it wrong.

  • Here is a link to a presentation [fnal.gov] on GPU-based network monitoring.

    BTW, with PF_RING and a DMA-enabled NIC driver (PF_RING DNA [ntop.org]), one should have no problem capturing 10 Gbps on a modern single-CPU server. I can capture/playback 4.5 Gbps without trouble using this with four 10kRPM HDDs; eight drives should give you 10 Gbps capture/playback. A minimal capture loop is sketched below.
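    For anyone who wants to try that, the user-space side of a PF_RING capture loop is short. A minimal sketch, assuming libpfring and a DNA-capable driver are installed (the interface name, snaplen, and build flags are assumptions that will vary per setup):

```c
/* Minimal PF_RING capture loop (sketch; link against libpfring). */
#include <stdio.h>
#include <pfring.h>

int main(void)
{
    /* Open the interface in promiscuous mode with a full-frame snaplen. */
    pfring *ring = pfring_open("eth0", 1518, PF_RING_PROMISC);
    if (ring == NULL) {
        fprintf(stderr, "pfring_open failed (is the pf_ring module loaded?)\n");
        return 1;
    }
    pfring_enable_ring(ring);

    struct pfring_pkthdr hdr;
    u_char *pkt = NULL;
    unsigned long long count = 0;

    /* Blocking receive loop: with the DNA/zero-copy driver, packets arrive
     * via DMA without per-packet copies through the kernel stack. */
    while (pfring_recv(ring, &pkt, 0, &hdr, 1) > 0) {
        if (++count % 1000000 == 0)
            printf("%llu packets, last length %u\n", count, hdr.len);
    }

    pfring_close(ring);
    return 0;
}
```

    From there, handing the batches to GPU memory (or to a striped disk array, as above) is the part that determines whether you can actually sustain line rate.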

    • by cez ( 539085 )
      I just demo'd Fluke Networks' TruView system that does 10 Gb/s stream-to-disk with a 24 TB array of 26 1 TB hard drives... very nice, not cheap though. Two 16-core Xeon CPUs, if memory serves, and a whole crapload of pretty analysis and correlations between NetFlow & SNMP data... scary cool with the VoIP module.
  • http://lss.fnal.gov/archive/2013/conf/fermilab-conf-13-035-cd.pdf [fnal.gov]

    They're using M2070 (Fermi) GPUs. Kepler would perform even better; the latest ones have > 6 GB of memory.

  • by Ihlosi ( 895663 ) on Friday November 22, 2013 @04:12AM (#45488937)
    ... the main task of GPUs is floating point calculations, and I doubt you need many of those when monitoring networks. Wrong tool for the job.

    It's like saying that GPUs are "terrific" for Bitcoin mining, until you realize that they require one or more orders of magnitude more power than specialized hardware for the same amount of processing. And network monitoring is probably a common enough task that it's worthwhile to use hardware tailored to this particular job.

  • As I understand it, there are at least two purposes for monitoring a network: debugging and spying. I believe debugging support is already built in. But spying is a concern, especially since the Russian authorities have required ISPs to preserve ALL data traffic on their networks for 12 hours for further investigation. What about the NSA?

  • A massively parallel system is suited to massively parallel tasks.

"An idealist is one who, on noticing that a rose smells better than a cabbage, concludes that it will also make better soup." - H.L. Mencken

Working...