Facts and Fiction of GPU-Based H.264 Encoding

notthatwillsmith writes "We've all heard a lot of big promises about how general-purpose GPU computing can greatly accelerate common tasks that are slow on the CPU, like H.264 video encoding. Maximum PC compared the GPU-accelerated Badaboom app to Handbrake, a popular CPU-based encoder. After testing a variety of workloads ranging from archival-quality DVD rips to transcodes suitable for playback on the iPhone, Maximum PC found that while Badaboom is significantly faster than the x264-powered Handbrake in a few tests that require video resizing, it simply can't compete with Handbrake for archival-quality DVD backups."
  • makes sense to me (Score:2, Interesting)

    by perlchild ( 582235 )

    Wouldn't archival-quality backups be actual MPEG instead of H.2 or whatever? I mean if you're archiving, why go lossy?
    Is it just a badly-designed test?

    • Re:makes sense to me (Score:5, Informative)

      by Silverlancer ( 786390 ) on Thursday September 11, 2008 @09:49PM (#24973333)
      All MPEG formats (including H.264) are lossy; if you want lossless, use HuffYUV, Lagarith, or FFV1 (or one of a countless variety of similar proprietary formats, such as Sheer YUV). Of course, this will give far larger file sizes, for obvious reasons.
      • Re:makes sense to me (Score:5, Informative)

        by Silverlancer ( 786390 ) on Thursday September 11, 2008 @09:52PM (#24973357)
        And it seems I made a slight oversight here as well: when --qp 0 is set in x264 (signaled in the standard by the qpprime_y_zero_transform_bypass_flag), H.264 can indeed be lossless, making it the only MPEG video format with a lossless mode.
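        For instance, a minimal sketch of a lossless encode (the --qp 0 flag is real; the filenames are hypothetical):

            x264 --qp 0 -o archive-lossless.264 input.y4m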
      • Re: (Score:3, Informative)

        by perlchild ( 582235 )

        I was referring to the idea of encoding a lossy format in another lossy format, resulting in further losses. Not necessarily just the loss of the original lossless-to-lossy. Sorry if I was unclear.

        Seriously, why encode twice? And why rate performance on how fast you can lose bits?

        • Re: (Score:3, Informative)

          Since Badaboom is a baseline-only encoder, I would guess one of its main markets would be backing up movies in a format that can be played on iPods and similar devices.
      • Re:makes sense to me (Score:5, Informative)

        by evilviper ( 135110 ) on Thursday September 11, 2008 @10:15PM (#24973537) Journal

        All MPEG formats (including H.264) are lossy;

        H.264/AVC includes lossless compression as well as lossy. The same is true of the wavelet-based "snow" codec. Still, I'd recommend FFV1 for best compression, as long as you don't need the video to be playable by all the standard H.264 decoders out there.

        if you want lossless, use HuffYUV, Lagarith, or FFV1 (or one of a countless variety of similar proprietary formats, such as Sheer YUV). Of course, this will give far larger file sizes, for obvious reasons.

        This test is about reencoding from a DVD to H.264/AVC. If you want lossless quality, you need only copy the MPEG-2 stream... Reencoding to a lossless format will dramatically increase the file size, without any quality improvement.
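        As a sketch, that stream copy can be done with ffmpeg (-vcodec copy / -acodec copy are real flags that copy streams without reencoding; the filenames are hypothetical):

            ffmpeg -i VTS_01_1.VOB -vcodec copy -acodec copy backup.mpg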

    • Re:makes sense to me (Score:5, Informative)

      by evilviper ( 135110 ) on Thursday September 11, 2008 @10:11PM (#24973515) Journal

      Wouldn't archival-quality backups be actual MPEG instead of H.2 or whatever?

      You may have a point, or you might not. It depends on the definition of "archival", and your specific purpose. I imagine most historians who deal with digital data would scoff at conflating the terms used to describe their work with a home user who just wants to back up their DVDs...

      There's certainly going to be loss when encoding from MPEG-2 DVDs to H.264. But considering how ridiculously large DVD video is for the relatively small amount of data it contains, I'd say a tiny drop in quality is generally acceptable in exchange for near-as-high-quality backups of your DVDs in (e.g.) 1/10th the space.

      Don't quote me on that, though; it's just a hypothetical example. I just recently finished explaining here why H.264 isn't all that much more effective than MPEG-2 where indistinguishable/high quality (rather than just "watchable") is desired: http://slashdot.org/comments.pl?sid=956141&cid=24940379 [slashdot.org]
      In fact, you could probably re-compress a DVD with MPEG-2 (instead of H.264) and get equivalent quality at almost equally low data-rates, simply because the DVD producer's MPEG-2 encoders are terrible, and the settings they use (GOP size, fixed resolution/black borders, high frequency noise, etc.) waste a LOT of the bitrate on things which really don't improve visual quality.

      And to be a bit pedantic... H.264 is, in fact, "MPEG". It's MPEG-4 AVC (Part 10), while DVDs use MPEG-2.

      • Re:makes sense to me (Score:5, Informative)

        by Anonymous Coward on Friday September 12, 2008 @01:41AM (#24974701)

        I don't know what your source is, but MPEG-2 can't even APPROACH MPEG-4 AVC quality at the same bitrate (at low bitrates), and MPEG-4 AVC can produce a much more compact file for a specified quality (such as DVD quality or better). On the other hand, MPEG-4 is much more recent, and takes an order of magnitude more processing power to encode and decode. MPEG-4 uses much-improved intraframe compression, variable-size macroblocks, and more advanced descriptions of block motion. Even if we drop the issue of MPEG-2's support for B-frames and its limits on P/B frames per GOP (limits imposed by the MPEG-2 profiles, which could be ignored), MPEG-4 is much more efficient at removing redundant information. Finally, MPEG-4 adds more advanced entropy coding for the final lossless compression of coefficients after the lossy stage: CAVLC is an improvement on MPEG-2's standard variable-length coding, and CABAC's arithmetic coding is even more efficient than CAVLC.

        MPEG-4/AVC was intended to deliver comparable quality to MPEG-2 at half of the bitrate, and certainly succeeds at low bitrates. At higher bitrates (near-perfect picture quality), you certainly would have been right about the Advanced Simple Profile for MPEG-4 (used in Divx, Xvid, etc), but AVC should still be more efficient.

        Incidentally, the MPEG-2 profile allowed on DVDs was picked to ease the work of the decoding hardware (savings on cost for consumers), at the cost of compactness. The fixed resolutions, bitrate limitations (both max and min), and GOP limits make it much easier to build a compatible hardware decoder. Yes, they can sometimes significantly decrease compression, but they made early DVD players marketable. Within these significant limitations, studio-grade encoding software and technicians are PHENOMENAL at delivering maximum quality. If you're used to consumer-grade MPEG-2 encoding, something like the pro version of Cinema Craft Encoder is a revelation (an expensive one though; nearly $2K). See if you can sniff out a trial or demo, and compare the output quality to Premiere's.

        • Re: (Score:3, Interesting)

          In my experience HCEnc, a freeware encoder (not open-source though), tends to beat CCE quality-wise. Most of Doom9 seems to agree, though I don't think the differences were too dramatic.
        • Re: (Score:3, Informative)

          by evilviper ( 135110 )

          Yours is the kind of response I hate getting the most. You obviously didn't bother to read my post all the way through, AND most certainly didn't follow the link I provided where I explained everything in detail...

          Yet you spent time on a lengthy, indignant reply, where you proceed to waste both your time and mine with questions I've already answered, in depth. It only makes it sadder to know that your pointless rant got modded up. Anyhow, I'm going to skip those which you could already have read

      • Oh no, not the scoffing of historians. That's almost as bad as the whispered derision of computer nerds.

  • by Silverlancer ( 786390 ) on Thursday September 11, 2008 @09:47PM (#24973321)
    To begin with, x264 blows Badaboom out of the water in terms of speed when similar settings are used. Badaboom appears to use the rough equivalent of --aq-mode 0 --subme 1 --scenecut -1 --no-cabac --partitions i4x4 --no-dct-decimate in terms of the x264 command line... it's no wonder it's "fast" when they compare it to x264 on far slower settings!

    GPU encoders won't be able to compete with CPU encoders until they either get a lot faster (in which case they'll compete in the "high performance" market) or they get much better quality, since at sane settings x264 unsurprisingly blows Badaboom out of the water quality-wise, too. Until then, the product is not only completely proprietary but furthermore simply inferior, and they're going to have a very hard time marketing it.
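    To make the gap concrete, here's a sketch of the two kinds of invocation (the fast flags are the ones quoted above; the "saner" settings and the filenames are illustrative):

        x264 --aq-mode 0 --subme 1 --scenecut -1 --no-cabac --partitions i4x4 --no-dct-decimate -o fast.264 input.y4m
        x264 --subme 7 --ref 3 --bframes 3 -o saner.264 input.y4m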
    • by evilviper ( 135110 ) on Thursday September 11, 2008 @10:34PM (#24973693) Journal

      To begin with, x264 blows Badaboom out of the water in terms of speed when similar settings are used.

      If you'd RTFA, you'd see this disparity is repeatedly mentioned, and they attempted to make a fair comparison.

      In a direct comparison, using as close to the same visual quality settings as we could, Handbrake's circa February 2008 X264 codec actually beat the Elemental encoder by almost a minute. Image quality was roughly the same; we've included several stills below so you can directly compare the results.

      • Re: (Score:3, Informative)

        Yet that line brings up another problem: they're using the absolute latest software from Elemental, but a 7-month-old version of x264 that lacks an enormous number of recent improvements. It's anything but a fair test.
      • Speaking of that test (matching actual load rather than using defaults), am I the only one perturbed by the same movie being encoded twice by different engines, at the same fixed bitrate, somehow coming out at different sizes? ...and by a couple hundred megs!
        constant time of media * (constant data / constant time) == an inconsistent amount of data?
        • by afidel ( 530433 )
          Eh, I know that with LAME, unless you specify strict CBR, you get an average bitrate that attempts to be close to what you specified. If one encoder tries to stay below the target bitrate and the other attempts to provide better quality at the expense of larger file size, I can see how they would diverge, even significantly.
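          For example (--abr and -b are real LAME switches; the filenames are hypothetical):

              lame --abr 128 input.wav abr.mp3   (average bitrate: size varies with content)
              lame -b 128 input.wav cbr.mp3      (constant bitrate: size tracks duration x 128 kbit/s)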
    • Re: (Score:2, Funny)

      --aq-mode 0 --subme 1 --scenecut -1 --no-cabac --partitions i4x4 --no-dct-decimate in terms of the x264 command line... it's no wonder it's "fast" when they compare it to x264 on far slower settings!

      Do I lose nerd points for this looking like Spanish?

      • Re: (Score:2, Funny)

        by NorQue ( 1000887 )
        It should actually look like arcane command-line magic to you. Points deducted.

        This is what Spanish looks like:

        "esto parece como español"

        ;)
      • Re: (Score:2, Funny)

        by Ed Avis ( 5917 )

        As long as you can read some Spanish text and it looks to you like assembly language for some long-dead processor, you retain your nerd points.

  • The CPU usage of the program when used with a good video card is 25% on my quad core machine, implying it is CPU bound right now. That means if they can get the CPU overhead down, even a little bit, they will stand to get huge gains.

    • Re: (Score:3, Insightful)

      by Enry ( 630 )

      Wait, what?

      If the CPU were running at 100%, then it would be CPU bound. Perhaps you meant to say it's GPU bound?

      • Re: (Score:2, Informative)

        by Stumpeh ( 665508 )
        But not if it's only running on a single core. Then it'd obviously max out at 25% on a quad core machine, provided he's got nothing else running.
  • Obvious (Score:5, Interesting)

    by evilviper ( 135110 ) on Thursday September 11, 2008 @09:58PM (#24973401) Journal

    This is the most obvious and boring insight they could possibly offer... Everyone with the slightest interest knows this already.

    The low quality of hardware-based video encoder cards is a very well-known fact, and those MPEG encoder cards are just ASICs on a PCI card, almost exactly the same hardware as your video card.

    The point of offering up APIs for GPUs, and AMD's attempt to integrate the GPU ASIC with the CPU via HyperTransport, is aimed at improving things, however.

    x264 does a good job because it's an open source project, with several skilled and interested individuals continually tweaking the code to improve quality and performance. Once hardware-based video encoding routines aren't hidden in closed-source firmware on a dedicated card, the same development effort can step up and improve HARDWARE encoding now, exactly as they have with software.

    Not only can quality be significantly improved, you can expect performance to improve significantly as well, even with greater quality. The initial implementation of any codec is always relatively poor-performing and low quality, so this wouldn't even be an insightful observation if it were comparing x264 with any other software-based encoder... The only difference is that a new software H.264/AVC encoder would be SLOWER than x264, as well as much lower quality.

  • To know how the next pixel should be compressed, you must know the statistical likelihoods of the previous pixels, so compression is a really linear operation. You could have threads that work from each keyframe of the video independently, but that still isn't ideal for graphics cards.
    From the CUDA guide-
    "Every instruction issue time, the SIMT unit selects a warp that is ready to execute and issues the next instruction to the active threads of the warp. A warp executes one common instruction at a time, so f
    • by SeekerDarksteel ( 896422 ) on Thursday September 11, 2008 @10:45PM (#24973787)
      Uh...you space multiplex rather than time multiplex to parallelize encoding. Motion estimation, e.g., is quite parallelizable.
    • Most of the operations in video encoding are definitely parallelizable, on both a large and a small scale: in regards to frame-based threading, used by many encoders and decoders, and especially in regards to SIMD (x264 has tens of thousands of lines of handwritten assembly).
      • Fair enough, it does indeed use SIMD instructions.
        One thing I notice, though, is that the SIMD instructions are used for modelling the data and creating statistical probabilities for what the next lot of data will be. Other aspects, such as the arithmetic/variable-length encoding, are very linear.
        So it follows a loop:

        do {
            Get data block            (linear)
            Model data                (SIMD-able)
            Statistically predict     (SIMD-able)
            Entropy encode            (linear)
            Write encoded data block  (linear)
        } while( there's data )


        That entire loop mu
        • That isn't really how video encoding works; the only "probabilities" are in the CABAC entropy encoder, which uses arithmetic coding (and indeed isn't SIMD'd).
      • by philipgar ( 595691 ) <<ude.hgihel> <ta> <2gcp>> on Friday September 12, 2008 @04:09AM (#24975357) Homepage

        uh huh, tens of thousands of lines of asm....

        ~/x264-snapshot-20080812-2245/common/x86$ wc -l *.asm
          165 cabac-a.asm
           91 cpu-32.asm
           51 cpu-64.asm
          437 dct-32.asm
          223 dct-64.asm
          316 dct-a.asm
          874 deblock-a.asm
          659 mc-a2.asm
          933 mc-a.asm
          428 pixel-32.asm
         1615 pixel-a.asm
          600 predict-a.asm
          383 quant-a.asm
          968 sad-a.asm
          519 x86inc.asm
          124 x86util.asm
         8386 total

        • Please upgrade, you are missing 433 lines.
        • Re: (Score:3, Informative)

          x264 uses an abstraction method in order to lump enormous amounts of assembly into very small amounts of space. But when all the macros are expanded, it gets much, much, much larger. For example, almost all the SSE/MMX assembly is abstracted away into macros, so a few macros can be used to take a single generic function and expand it into SSE or MMX. Same with 32-bit vs 64-bit. When you expand it all fully, it is indeed tens of thousands of lines.
          • Yeah, but those tens of thousands of lines aren't exactly hand-coded then, are they? It appears the developers have only hand-coded the ~9K lines listed above.

            Still, that is a fair amount of assembly code done by hand, relative to most modern programs written in 3rd and 4th generation languages (that might use only a handful of hand-coded assembly).

            • They were hand coded before we used the macros to abstract them (which involved deleting over half the code!). Of course, that's not all of it, since a significant amount of assembly has been written since the abstraction was done. Though you're also not counting the Altivec assembly, which is rather significant also.
    • So if you have code that isn't SIMD-able, you are really only using 1/32 of the available threads per unit of branching code.

      In addition to what's already been said, there are other techniques that can be used when your code does in fact need to branch. For example, you can take BOTH paths, and then later pick the result from the path you want. This is common when you have lots of parallel hardware, whether made for you in a GPU, or in hardware you're designing yourself, like an ASIC or FPGA. So if you have

      if( A ) {
          Z = B + C;
      } else {
          Z = B - C;
      }

      then you have instructions (or hardware) that perform B+C, separate ins

  • Apples and Oranges (Score:1, Interesting)

    by Louis Savain ( 65843 )

    Comparing a GPU, a SIMD (single instruction, multiple data) vector processor, to a CPU, a superscalar sequential processor, is like comparing apples and oranges. Sure, they are both fruits, but they don't taste the same. Using the term 'general-purpose' to describe a GPU is pushing the limits of what a GPU is. Certainly it can run general-purpose programs, but it is much faster at running what it was designed to run: data-parallel applications. A GPU does not have to have a fast clock because it makes up for it b

    • Comparing a GPU, a SIMD (single instruction, multiple data) vector processor, to a CPU, a superscalar sequential processor, is like comparing apples and oranges.

      To be fair, modern superscalar CPUs, particularly x86 (or x86-64), have extensively optimized SIMD units in addition to their sequential/general-purpose operations. The very reason the Core 2 outperformed its Opteron counterparts is its much better SIMD performance. That generally means SSE instructions, but there are other options as well. A

      • Re: (Score:3, Informative)

        by Bert64 ( 520050 )

        Yes, the Core 2 seems to have much better SSE units than the AMD chips, but this only really manifests itself when running code optimized to use SSE... And that's usually hand-optimized assembly, as compilers aren't generally good at generating SSE code yet.

        John the Ripper's SSE2 mode on a Core 2 is 2-3 times faster than the generic compile...
        John the Ripper's SSE2 mode on an AMD (tested on a quad-core Phenom and dual-core Opterons) is slightly slower than the generic compile with gcc 4.3 and -O3.
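        Numbers like these typically come from JtR's built-in benchmark mode; as a sketch:

            john --test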

        The core2 beats a si

        • compilers aren't generally good at generating SSE code yet.

          GCC certainly isn't, but GCC is more or less the slow dog in the race. ICC does quite a bit better.

          And it doesn't necessarily have to be hand written ASM. Intrinsics seem to be gaining a bit more popularity in modern programs.

          The big question is, how much of the code you run is optimised for the SSE units found in modern processors, and how much of it uses it at all?

          I'd bet a significant portion of the CPU-intensive programs out there, particularl

    • by Silverlancer ( 786390 ) on Thursday September 11, 2008 @10:56PM (#24973877)
      This isn't 1990 anymore; CPUs have SIMD just as graphics cards do. A modern CPU doing even a brute-force exhaustive motion search can come out on par with a GPU in terms of performance. And if you use sequential elimination instead of a brute-force search (which gives a mathematically equivalent output), a single Core 2 Quad can outperform a quad-SLI set of top-end graphics cards. Sequential elimination, however, despite being SIMD-able, is not well-suited to the threading model of CUDA and similar APIs, and so probably cannot be implemented reasonably on a GPU.

      This concept applies to many algorithms--the brute-force method is easily implementable on a GPU, but a faster and algorithmically smarter method is not well-suited to such an architecture.
  • It will take at least another 18 months before GPU encoding becomes seamless and the ideal solution for most users.

    Intel is working on its own GPU; I am sure they will exploit multimedia-handling capabilities (video/Photoshop) as one of the selling points of that GPU.

  • Comment removed based on user account deletion
    • Re: (Score:3, Insightful)

      by MacColossus ( 932054 )
      So if you had tried Handbrake before posting, you would see that you don't need to rip the DVDs first. You wouldn't have to buy SlySoft's software. You would furthermore be able to choose iPod, PSP, etc. as output settings.
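      As a sketch, even the command-line front-end exposes those presets (the device path and preset name may vary by version):

          HandBrakeCLI -i /dev/dvd -o movie.mp4 --preset "iPod"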
    • by Chris Snook ( 872473 ) on Thursday September 11, 2008 @11:17PM (#24973973)

      So you paid money for a GUI that selects command-line options?

      I'm in the wrong line of work.

      • There is a huge business built around payware GUIs that (often silently, without giving any credit, sometimes violating the GPL/LGPL) do nothing but use open-source tools to do their work. This is especially true in video encoding, where there are almost no cheap proprietary tools: only the extremely widely used open-source solutions and extremely expensive "professional" ones (with some rare exceptions like DivX and Nero). Usually these GUIs are much worse than the free ones, but a sucker's born ev
        • Re: (Score:3, Informative)

          by yuna49 ( 905461 )

          You mean, like these?

          http://ffmpeg.mplayerhq.hu/shame.html [mplayerhq.hu]

          I happened to look at ConvertXtoDVD the other day. While ffmpeg itself is licensed under the LGPL, ConvertXtoDVD also appears to use both libpostproc and libswscale which are both GPL. The ffmpeg licensing page [mplayerhq.hu] states, "If those parts get used the GPL applies to all of FFmpeg."

          I don't see a LICENSE.txt file or any mention of the GPL or the LGPL in the version of the product I downloaded. Running strings against the binaries looking for things l
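          That kind of check is easy to reproduce; as a sketch (the binary name is hypothetical):

              strings ConvertXtoDVD.exe | grep -i "ffmpeg\|gpl"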

      • Comment removed based on user account deletion
        • by WDot ( 1286728 )
          But if you're choosing a "profile, resolution, and quality," you could use Handbrake for free. It does all of those things. If you don't want to touch command line stuff, don't. Handbrake's GUI will generate it all for you. Plus it's open source.
  • by Animats ( 122034 ) on Friday September 12, 2008 @12:47AM (#24974459) Homepage

    They're not encoding video. They're transcoding it. They're starting from one compressed representation and outputting another compressed representation. (Now, with twice the artifacts!)

    The good test for this is football. The players, ball, and field are all moving in different directions. If the motion compensation gets that right, it's doing a very good job.

    • No, they're encoding. Transcoding means you're reusing syntax elements from the original video to inform the encoder; i.e. you're not entirely decoding it (not repeating the whole encoding process). What they're doing is encoding, because they're decoding it entirely into a raw video stream, and then sending that into the encoder.

      I wouldn't say football's real challenge is motion either--motion search is a rather simple part of most encoders and IMO definitely not the biggest challenge. The challenge o
      • No, they're encoding. Transcoding means you're reusing syntax elements from the original video to inform the encoder;

        No, transcoding means decoding one format and encoding into another format. You may have had a program or project that took advantage of shortcuts in that process, but those techniques are not part of the definition of the word "transcode".
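        In pipeline form, the full decode-then-encode path looks roughly like this sketch (filenames hypothetical; both tools speak y4m over a pipe):

            ffmpeg -i input.vob -an -f yuv4mpegpipe - | x264 --demuxer y4m -o output.264 -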

  • Tell me when I can get a PCI card with one or more Cell co-processors to do the heavy lifting.
  • Did anyone catch what GPU/graphics card they used? The article mentions they used a Q6600 ($185) as their test CPU but it makes no mention of which GPU they ran with.

    Did they run this on a 9800GT? 8800GT? 8600?

    To make this a fair comparison, they should run the test on a system with a quad-core and the lowest-end GPU for the CPU test, then run the same comparison on a low-end Intel CPU (same price as that low-end GPU) and a GPU priced about the same as their Q6600.

    This would fit better with

"Your stupidity, Allen, is simply not up to par." -- Dave Mack (mack@inco.UUCP) "Yours is." -- Allen Gwinn (allen@sulaco.sigma.com), in alt.flame

Working...