
Intel Details Handling Anti-Aliasing On CPUs

MojoKid writes "When AMD launched the Barts GPU that powers the Radeon 6850 and 6870, they added support for a new type of anti-aliasing called Morphological AA (MLAA). However, Intel originally developed MLAA in 2009, and it has now released a follow-up paper on the topic, including a discussion of how the technique could be handled by the CPU. Supersampling is much more computationally and bandwidth intensive than multisampling, but both techniques generally demand more horsepower than modern consoles or mobile devices can provide. Morphological Anti-Aliasing, in contrast, is performed on an already-rendered image. The technique is embarrassingly parallel and, unlike traditional hardware anti-aliasing, can be effectively handled by the CPU in real time. MLAA is also equally compatible with ray-traced and rasterized graphics."
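For readers wondering what anti-aliasing "performed on an already-rendered image" can look like, here is a minimal sketch of the general idea, assuming a float RGB frame: scan the finished image for high-contrast discontinuities and blend across them. It is illustrative only and is not Intel's published MLAA algorithm, which classifies edge shapes and computes coverage-based blend weights; the function names and the 0.1 threshold are arbitrary.

    # Toy post-process AA in the spirit of MLAA (illustrative only).
    import numpy as np

    def luma(img):
        # Approximate luminance of an RGB image with values in [0, 1].
        return img @ np.array([0.299, 0.587, 0.114])

    def postprocess_aa(img, threshold=0.1):
        """img: float array of shape (H, W, 3); returns a filtered copy."""
        y = luma(img)
        # Flag pixels whose luminance differs sharply from the neighbor above or to the left.
        edge_h = np.abs(np.diff(y, axis=0, prepend=y[:1, :])) > threshold
        edge_v = np.abs(np.diff(y, axis=1, prepend=y[:, :1])) > threshold

        out = img.copy()
        rows, cols = np.nonzero(edge_h)
        out[rows, cols] = 0.5 * (img[rows, cols] + img[np.maximum(rows - 1, 0), cols])
        rows, cols = np.nonzero(edge_v)
        out[rows, cols] = 0.5 * (out[rows, cols] + img[rows, np.maximum(cols - 1, 0)])
        return out

Each flagged pixel is handled independently of the others, which is the sense in which the pass is "embarrassingly parallel."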
  • by Tr3vin ( 1220548 ) on Sunday July 24, 2011 @07:12PM (#36866338)
    If it is "embarrassingly parallel", why not leave it on the GPU? Makes more sense to have it running on dozens to potentially hundreds of stream processors than a couple "free" cores on the CPU.
    • They want everything to run on the CPU, and thus for you to need a big, beefy Intel CPU. Remember, Intel doesn't have a GPU division. They make small integrated chips, but they are not very powerful and don't stack up well with the low-power nVidia/ATi stuff. What they make is awesome CPUs. So they really want to transition back to an all-CPU world, no GPUs.

      They've been pushing this idea slowly with various things, mostly based around ray-tracing (which GPUs aren't all that good at).

      Right now it is nothing but

      • GPUs haven't been special dedicated hardware for several generations. Ever since OpenGL 1.4 and Direct3D 8, they have been transitioning over to more general purpose use. They still have a dedicated raster unit, but the vast bulk of the chip is just a giant array of largely generic vector units. Those units can be put towards whatever application you want, whether it be graphics, physics, statistics, etc. It's basically a larger version of the SSE and Altivec units.
      • The developers of many commercial ray tracing packages have written GPU-based versions that work remarkably well.

        V-Ray and mental ray, in particular, have very exciting GPU implementations. A presentation by mental images showed some very high-quality global illumination calculations done on the GPU. Once you get good sampling algorithms, the challenge is dealing with memory latency. It's very slow to do random access into memory on a GPU. mental images solved that problem by running a lot of threads, as GPU's con

      • Why are GPUs not good at ray-tracing? As far as I know, they excel at tasks that exhibit massive dependency-free parallelism and lots of number crunching with little branching. It would seem to me that this describes ray-tracing almost perfectly.

      • "They make small integrated chips but they are not very powerful and don't stack up well with the low power nVidia/ATi stuff."

        Beg to differ. My last experience with nVidia vs. Intel graphics proved otherwise. In two laptops with otherwise more or less the same hardware (Core 2 Duo P-Series processors, 4+ gigs of RAM, same chipset), the one with Intel graphics provided a much smoother video experience (HD and so on, things like YouTube HD or Full HD H.264 in MKV) and only marginally worse performance in 3D games,

      • Well... CPUs are basically an ALU with lots of problem specific dedicated hardware. Removing the GPU doesn't solve that paradigm.
    • Yeah, that is exactly what I was wondering. I suppose the idea is for systems with bad integrated graphics cards, or with mobile devices that have no dedicated graphics.
      • Ah, guess I should have thoroughly RTFA before commenting. I guess that on consoles, where there's rarely time for MSAA, this could be useful.
    • by grumbel ( 592662 )

      why not leave it on the GPU?

      A lot of modern PlayStation 3 games use that technique, as it lets them do something useful with the SPUs while the GPU is already busy enough rendering the graphics. It also helps with raytracing, as you might need less CPU power to do anti-aliasing this way than the proper way. Of course, when the GPU has some free cycles left, there is no reason not to do it there.
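A rough sketch of how such a post-process pass can be farmed out to spare CPU cores: each strip of the frame is filtered independently and the results are stitched back together. It reuses the hypothetical postprocess_aa() sketched under the story summary, and it is not how the PS3 SPU implementations actually schedule their work.

    from concurrent.futures import ProcessPoolExecutor
    import numpy as np

    def _filter_strip(args):
        strip, pad = args
        # The filter looks one pixel up, so each strip carries `pad` extra
        # rows on top that are dropped again after filtering.
        return postprocess_aa(strip)[pad:]

    def parallel_aa(img, workers=4):
        # Split the frame into horizontal strips, one per worker.
        bounds = np.linspace(0, img.shape[0], workers + 1, dtype=int)
        jobs = []
        for i in range(workers):
            top = max(bounds[i] - 1, 0)
            jobs.append((img[top:bounds[i + 1]], bounds[i] - top))
        with ProcessPoolExecutor(max_workers=workers) as pool:
            return np.vstack(list(pool.map(_filter_strip, jobs)))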

  • by Anonymous Coward on Sunday July 24, 2011 @07:17PM (#36866392)

    If your signal is aliased during sampling, you are toast.
    No voodoo will help you once your spectrum has folded on itself.
    So super-sample it or shut up.
    Everything else is snake oil for the unwashed masses.
    And yes, MLAA still looks like crap in comparison to SS.

    • by cgenman ( 325138 )

      Judging by the article, MLAA is actually just a technique that looks for jagged edges, and blurs them.

      How this is better than just blurring the whole thing is beyond me. Those images look terrible.
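One way to make that question concrete: a uniform blur rewrites every pixel of the frame, while an edge-selective pass only rewrites the (usually small) fraction of pixels flagged as discontinuities. The sketch below reuses the hypothetical luma() helper from the earlier snippet and only counts how many pixels each approach touches; a real MLAA implementation additionally classifies the shape of each edge instead of averaging blindly.

    import numpy as np

    def box_blur(img):
        # 3x3 box blur: every pixel is rewritten, textures and text included.
        # Assumes a float image of shape (H, W, 3).
        padded = np.pad(img, ((1, 1), (1, 1), (0, 0)), mode="edge")
        out = np.zeros_like(img)
        for dy in range(3):
            for dx in range(3):
                out += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
        return out / 9.0

    def selective_fraction(img, threshold=0.1):
        # Fraction of pixels an edge-selective pass would actually modify.
        y = luma(img)
        edge_h = np.abs(np.diff(y, axis=0, prepend=y[:1, :])) > threshold
        edge_v = np.abs(np.diff(y, axis=1, prepend=y[:, :1])) > threshold
        return float(np.mean(edge_h | edge_v))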

  • by guruevi ( 827432 ) on Sunday July 24, 2011 @07:18PM (#36866408)

    If the algorithm is 'embarrassingly parallel' and simple, then the GPU would be the better fit. GPUs typically have a lot of cores (200-400) that are optimized for embarrassingly simple calculations. Sure, you could render everything on a CPU these days; simpler games could even run with an old-school SVGA (simple frame buffer) card and let all the graphics be handled by the CPU, as used to be the case in the '90s and as the 'game emulators in JavaScript' we've been seeing lately demonstrate. But GPUs are usually fairly idle except in ultramodern 3D shooters, which also tax the CPU pretty hard.

    • Yes, GPU would be better. I mean, look at Intel's amazing GPU divis... oh wait. That's why they want AA on the CPU. Because, you know, they actually have CPUs that are pretty decent. AMD probably added support because of their whole Fusion platform.
      • Even if you have a good GPU it's still useful. In your typical game the GPU is generally the bottleneck, so if you can offload some stuff to the CPU, all the better. That's how it's done on the PS3: it has an OK GPU and a very good CPU, so a lot of graphics work is run on the CPU. In fact, even though it was invented by Intel, I believe the PS3 game developers were the driving force behind MLAA's popularization.

  • Blur (Score:5, Insightful)

    by Baloroth ( 2370816 ) on Sunday July 24, 2011 @07:21PM (#36866434)

    So, it basically blurs the image around areas of high contrast? Sounds like that's what's going on. Looks like it, too. I can understand why they are targeting this at mobile and lower-powered devices: it kinda looks crappy. I might even say that no antialiasing looks better, but I'd really have to see more samples, especially contrasting this with regular MSAA. I suspect, however, that normal antialiasing will always look considerably better. For instance, normal AA would not blur the edge between two high-contrast textures on a wall (I think, since it is actually aware that it is processing polygon edges), while I suspect MLAA will, since it only sees an area of high contrast. Look at the sample image they post in the article: the white snow on the black rock looks blurred in the MLAA-processed picture, while it has no aliasing artifacts at all in the unprocessed image. It's pretty slight, but it's definitely there. Like I say, I need to see more real-world renders to really tell if it's a problem at all or simply a minor thing no one will ever notice. I'll stick to my 4X MSAA, TYVM.

    • by Jamu ( 852752 )
      The article does compare it to MSAA. But the MLAA just looks blurred to me. Detail shown with MSAA is lost with MLAA. It would be informative to see how MLAA compares to simple Gaussian blurring.
      • Re:Blur (Score:5, Informative)

        by djdanlib ( 732853 ) on Sunday July 24, 2011 @07:48PM (#36866608) Homepage

        It's different from a Gaussian blur or median filter because it attempts to be selective about which edges it blurs, and how it blurs those edges.

        This technique really wrecks text and GUI elements, though. When I first installed my 6950, I turned it on just to see what it was like, and it really ruined the readability of my games' GUIs. So, while it may be an effective AA technique, applications may need to be rewritten to take advantage of it.

        • [Morphological AA postprocessing] really ruined the readability of my games' GUIs. So, while it may be an effective AA technique, applications may need to be rewritten to take advantage of it.

          Just as games and other applications supporting a "10-foot user interface" need to be rewritten with larger text so that the text is not unreadable when a game is played on a standard-definition television. The developers of Dead Rising found this out the hard way.

        • by gr8_phk ( 621180 )

          It's different from a Gaussian blur or median filter because it attempts to be selective about which edges it blurs, and how it blurs those edges.

          Anti-aliasing is not supposed to blur edges arbitrarily. I suppose that's why this is selective, but it just seems like a crappy thing to be doing. And while it can be done by a CPU, that's probably not practical - either the CPU is busting its ass to do rendering and doesn't really have time to make an AA pass, or the GPU did all the rendering and may as well d

          • What I saw in non-technical terms: It appeared to blur edges whose location or neighboring pixels changed from one frame to the next. Unfortunately, whenever something changed behind text and GUI elements, it went right ahead and blurred those edges as well.

            • This reminds me of a glitch in the original S.T.A.L.K.E.R. Shadow of Chernobyl game. If you turned on AA in game, the crosshairs would disappear. I've always suspected something like that was going on, where it would see the straight lines of the crosshair and blend it into the rest of the picture. I believe it worked right if you enabled AA from drivers outside the game, which reinforced my theory considerably.
        • If it blurs the text and GUI then it's poorly implemented. The AA should be applied before drawing the UI.
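A sketch of the ordering the parent is suggesting, assuming a hypothetical postprocess_aa() filter like the one sketched earlier: run the AA pass on the rendered 3D scene only, then composite the UI layer on top so text and HUD elements are never smeared.

    import numpy as np

    def compose_frame(scene_rgb, ui_rgba):
        # Anti-alias the 3D scene first...
        aa_scene = postprocess_aa(scene_rgb)
        # ...then alpha-blend the untouched UI layer over it.
        alpha = ui_rgba[..., 3:4]
        return ui_rgba[..., :3] * alpha + aa_scene * (1.0 - alpha)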

    • by grumbel ( 592662 )

      Some comparison screenshots [ign.com]: essentially, it performs extremely well on clean high-contrast edges, but it can lead to ugly blurring when the source image contains heavily aliased areas (i.e. small sub-pixel-width lines in the background). There are also some temporal issues, as some of the artifacts it causes get worse when the image is animated. Overall I'd say it's a clear improvement; it's not perfect, but when you are stuck with 1280x720 on a 44" TV you are happy about any anti-aliasing you can get.

      • In those pictures it does look considerably better than no AA, thanks for pointing those out. Seems like MLAA is perfect for the PS3: it has a pretty slow and dated graphics card, but quite a lot of spare cycles on the CPU, as long as you can do it in parallel, which this can. Always amazed me how few PS3 games have AA. You're right, games without AA, especially on 720p (or *shudder* 480p) on a large TV, look absolutely shitty. One reason I love my PC: I've used AA in pretty much everything for almost 5 yea
    • MLAA looks awful - and Intel's CEO famously knocked antialiasing as being a stupid blurring technique not long ago. So, he goes with the only form of AA that literally adds no value. Cutting off their nose to spite their face?

  • Correct me if I'm wrong, but MSAA is already embarrassingly parallel, and it provides better fidelity than this newfangled MLAA.
    Yes, MLAA is faster, but modern GPUs are already pretty good at handling real-time MSAA.
  • This is a phrase I would have reserved for myself after several too many drinks... not so much for this article ;p
  • by LanceUppercut ( 766964 ) on Sunday July 24, 2011 @07:25PM (#36866478)
    Anti-aliasing, by definition, must be performed in object space or, possibly, in picture space. But it cannot possibly be carried out on an already-rendered image. They must be trying to market some glorified blur technique under the anti-aliasing moniker. Nothing new here...
    • Anti-aliasing, by definition, must be performed in object space or, possibly, in picture space. But it cannot possibly be carried out on an already-rendered image.

      Ever heard of hq3x [wikipedia.org]? Or the pixel art vectorizer we talked about two months ago [slashdot.org]?

    • by debrain ( 29228 )

      Anti-aliasing, by definition, must be performed in object space or, possibly, in picture space. But it cannot possibly be carried out on an already-rendered image.

      Sir –

      Anti-aliasing can be performed on a rendered image by means of image recognition, i.e. vectorization. This is doable with edges of the geometric sort (straight lines, simple curves) and pre-existing patterns (e.g. glyphs of known fonts at given sizes). The result is probably absurd in terms of performance; however, "cannot possibly be carried out" is a bit too strong, in my humble opinion. It may be impractical, but it's certainly not impossible.

      • And what if the object you're applying your magic filter to is smaller than the available spatial resolution? (Think guard rails on a building far away.) AA accurately renders these, but if they aren't properly rendered to begin with, you can't recreate them without knowing what they were supposed to look like.

    • by gr8_phk ( 621180 )

      Anti-aliasing, by definition, must be performed in object space or, possibly, in picture space. But it cannot possibly be carried out on an already-rendered image. They must be trying to market some glorified blur technique under the anti-aliasing moniker. Nothing new here...

      There are no pixels in object space. It's an operation on pixels. But I agree with your second half - it's some new blur method that probably isn't worth it. Nothing to see here - in fact things are harder to see.

      • There are no pixels in object space.

        A pixel in object space is a frustum. Performing anti-aliasing at this level not only can be done, but is frequently done within the VFX world. Remember that VFX shaders tend to be a single unified shader - instead of multi-stage vertex/geom/pixel - so calculations can be performed in any space you want. For a procedural-shader heavy scene, the ideal would be to get the shaders to perform the anti-aliasing for you, in object space, rather than resorting to super-sampling....
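A tiny illustration of anti-aliasing "in the shader" rather than by supersampling: instead of point-sampling a hard procedural edge, each sample integrates the edge over its pixel footprint, which is what the usual smoothstep-over-derivative trick does in a GPU shader. The names and numbers below are made up for the example.

    import numpy as np

    def edge_coverage(x, edge=0.5, footprint=1.0 / 512):
        # Fraction of a pixel-sized footprint lying on the bright side of a
        # hard edge at `edge`; equivalent to box-filtering a step function.
        return np.clip((x - edge) / footprint + 0.5, 0.0, 1.0)

    xs = (np.arange(512) + 0.5) / 512      # pixel centers along a scanline
    scanline = edge_coverage(xs)           # a smooth ramp exactly one pixel wide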

    • by hsa ( 598343 )

      I would call it Partial Gaussian Blur, since that is effectively what they are doing: blurring the sharp edges of the image.

  • While I understand AA and why we do it, I always experience a moment's rush of absurdity when I consider it.

    Up until quite recently, with high-speed digital interfaces nowhere near what video of any real resolution required, and high-bandwidth analog components very expensive, AA was just something that happened naturally, whether you liked it or not: your not-at-all-AAed digital frame went to the RAMDAC (which, unless you had really shelled out, could likely have been a bit lax about accuracy in exc
    • by Osgeld ( 1900440 )

      It's funny: years ago I went on a hunt for the sharpest flat-front CRT I could find so I could see the "edges of pixels," and I would not have it any other way for old bitmap games. But with modern high-res games on modern high-res monitors, if you don't have AA it just sprinkles the entire screen with jaggie noise.

  • by dicobalt ( 1536225 ) on Sunday July 24, 2011 @08:24PM (#36866816)
    FXAA can work on any DX9 GPU without dedicated support. http://hardocp.com/article/2011/07/18/nvidias_new_fxaa_antialiasing_technology/1 [hardocp.com]
    • by cbope ( 130292 )

      Yeah, funny how this popped up only days after the FXAA announcement last week.

  • by bored ( 40072 ) on Sunday July 24, 2011 @11:24PM (#36867606)

    AA is a crutch to get around a lack of DPI. Take the iPhone 4 at 326 DPI: it is 3 to 4x the DPI of the average craptastic "HD" computer monitor. I have a laptop with a 15" 1920x1200 screen. At that DPI, seeing the "jaggies" is pretty difficult compared with the same resolution on my 24". On the 15" I can turn AA on/off and it's pretty difficult to discern the difference, and that screen is only ~150 DPI. I challenge you to see the effects of anti-aliasing on a screen with a DPI equivalent to the iPhone 4.

    The PlayStation/Xbox, on the other hand, are often used on TVs with DPIs approaching 30. If you get within a couple of feet of those things, the current generation of game machines looks like total crap. Of course the game machines have AC power, so there really isn't an excuse. I've often wondered why Sony/MS haven't added AA to one of the respun versions of their consoles.

    • Re: (Score:2, Interesting)

      by isaac ( 2852 )

      I challenge you to see the effects of anti-aliasing on a screen with a DPI equivalent to the iPhone 4.

      The eye is pretty good at picking out jaggies, especially in tough cases (high contrast, thin line, shallow slope against the pixel grid) and where the screen is viewed from close range (my eye is closer to my phone's screen than to my desktop monitor).

      Now, I don't think antialiasing makes a huge difference to game mechanics - but it is nice to have in high-contrast information situations (e.g. google maps) regard

      • by bored ( 40072 )

        The eye is pretty good at picking out jaggies, especially in tough cases (high contrast, thin line, shallow slope against the pixel grid)

        In my case, on higher-res displays, it's not the line stepping that is the problem so much as "crawling": the position of the step moves around in an otherwise static display. That said, I would take a 2x DPI increase in any application. Of course, I'm the guy fighting to turn off ClearType because I can't stand the color bleeding.

    • by thegarbz ( 1787294 ) on Monday July 25, 2011 @02:58AM (#36868352)

      Getting rid of jaggies is not the only purpose of AA. The idea is also to be able to render objects that are smaller than the spatial resolution of the view. Think of looking at a guy wire on a comms tower a long distance away: you may see a row of appearing/disappearing pixels, since on average the wire is smaller than a pixel in width. AA takes care of this, and it is far more annoying than the simple resolution issue of sharp edges on objects.

      This glorified blurring algorithm, however, doesn't fix it.
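A toy illustration of the guy-wire case: a line 0.3 pixels wide, rendered along one scanline. With one sample per pixel it is either fully present or missing entirely (and it pops as it moves); with several samples per pixel, nearby pixels get fractional coverage values instead. Purely illustrative numbers.

    def render_scanline(wire_center, pixels=8, samples_per_pixel=1, width=0.3):
        lo, hi = wire_center - width / 2, wire_center + width / 2
        result = []
        for p in range(pixels):
            # Regularly spaced sample positions inside pixel [p, p+1).
            xs = [p + (s + 0.5) / samples_per_pixel for s in range(samples_per_pixel)]
            result.append(sum(lo <= x <= hi for x in xs) / samples_per_pixel)
        return result

    print(render_scanline(3.1, samples_per_pixel=1))  # wire vanishes entirely
    print(render_scanline(3.1, samples_per_pixel=8))  # partial coverage survives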

      • by bored ( 40072 )

        The idea is also to be able to render objects that are smaller than the spatial resolution of the view. Think of looking at a guy wire on a comms tower a long distance away.

        Yes, you're right, but as I suggested it's a hack to get around a lack of resolution. Forcing in a fudge factor (a bunch of large grey pixels) may not always be the best response. Plus, it's limited by the oversampling ratio. What I'm arguing is that a display with 2x the DPI will look better than a 2x over-sample. Eventually increasing either

    • Fair point, though you're missing the sub-pixel object rendering issue, as somebody else already commented.

      But even aside from that, while we are on the phone subject:

      The Samsung Galaxy S2, for example (arguably the mobile performance king, at least until the new iPhone is out?), does 4x MSAA without any performance hit and 16x with a very small one. I do believe we will see the same GPU or its successor in many future phones. Granted, its DPI is only around ~230, not ~330 like the iP

    • by julesh ( 229690 )

      AA is a crutch to get around a lack of DPI

      No, even with higher DPI than the eye can resolve, you still need AA sometimes, as aliasing can present problems other than the jagged lines you're familiar with (e.g. moiré patterns).
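A tiny numerical version of that point: once a pattern is sampled below its Nyquist rate, the samples describe a different, lower-frequency pattern, and no later filtering can recover the original. The frequencies here are arbitrary.

    import numpy as np

    stripes = 60.0                         # stripe frequency in the scene
    xs = np.arange(64) / 64.0              # 64 samples: well under 2 * 60
    sampled = np.sin(2 * np.pi * stripes * xs)
    # `sampled` oscillates at the alias frequency |60 - 64| = 4, not at 60.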

  • This is image reconstruction, where additional information (not necessarily correct) is derived from a limited image.

    A close equivalent is the "font smoothing" done by the earliest versions of the Macintosh when printing their bitmap graphics on a PostScript printer, drawing 72 dpi 1-bit images at 300 dpi. Also, I believe Microsoft's earliest subpixel font rendering, smoothtype, was done this way (not ClearType or any other modern font rendering).

    Much more complicated examples are algorithms for scaling up images,

  • How is anti-aliasing performed on the rendered image comparable to supersampling? When you supersample, you produce more pixel data, which you then use to produce a reduced-resolution image. When you post-process, you don't get more pixel data; you just filter what you already have. Wouldn't supersampling always give better results for the same final resolution?
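For comparison, supersampling in its most basic form, which is what the parent is contrasting with post-processing: shade at k times the target resolution in each dimension, then box-filter down, so sub-pixel detail genuinely contributes to the final pixel. A minimal sketch; the array shapes are assumed for illustration.

    import numpy as np

    def downsample(hi_res, k):
        # hi_res: (k*H, k*W, 3) frame rendered at k-times resolution.
        h, w, c = hi_res.shape
        return hi_res.reshape(h // k, k, w // k, k, c).mean(axis=(1, 3))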
  • I am reminded of this :

    "Text Rendering in the QML Scene Graph"

    http://labs.qt.nokia.com/2011/07/15/text-rendering-in-the-qml-scene-graph/ [nokia.com]

    "Some time ago, Gunnar presented to you what the new QML Scene Graph is all about. As mentioned in that article, one of the new features is a new technique for text rendering based on distance field alpha testing. This technique allows us to leverage all the power of OpenGL and have text like we never had before in Qt: scalable, sub-pixel positioned and sub-pixel antialiase
