
GNOME Shell No Longer Requires GPU Acceleration 237

Posted by timothy
from the mutter-mutter-mutter-harumph dept.
An anonymous reader writes "The GNOME 3.0 Shell with the Mutter window manager no longer requires GPU acceleration to work, while still retaining the compositing window manager and OpenGL support. GNOME Shell can now work entirely on the CPU using the LLVM compiler via the Gallium3D LLVMpipe driver. This will be another change to Fedora 17 to no longer depend upon the GNOME3 fall-back, which is expected to eventually be deprecated and further anger GNOME2 fans."
This discussion has been archived. No new comments can be posted.


  • by pavon (30274) on Sunday November 06, 2011 @03:59PM (#37967598)

    The summary is a troll (as is typical for slashdot). Gnome 2 is still included in Fedora 17. The only difference is that if you have selected Gnome 3 for your desktop (which is the default), and GPU acceleration isn't working, it will now fall back to unaccelerated Gnome 3 rather than Gnome 2. Regardless of your opinion of Gnome 3, this just makes sense; it would be much more confusing to get a completely different desktop than you were expecting just because your video drivers got borked. Not to mention it is wasteful to have to install Gnome 2 as a fallback if you want to use Gnome 3.

  • by Anonymous Coward on Sunday November 06, 2011 @04:13PM (#37967710)

    GNOME is a perfect study in how not to architect a software system. Everything about it is wrong.

    The first mistake they made was trying to cobble half-assed object-oriented support onto C, rather than just using C++ or Objective-C. Everything about GObject is stupid and counterproductive. It makes writing code a real pain in the ass, since you need to use typecasting macros all over the place. Worse, this sort of code promotes library design that's slow and inefficient. To make it even worse, this style of C code is so convoluted that it is not optimized well by compilers, resulting in binaries that are far slower than they should be.

    Nonsense. GObject gives you multi-language bindings for free and if you're just an application developer it only makes your life easier. You can develop GNOME programs in C++, Python, Java or whatever suits your tastes.

    I don't think the overhead resulting from using C is substantial at all. Maybe you get more overhead than C++ by always using virtual calls but that is offset by not doing C++ magic like unnecessary constructor/destructor calls. You'll have to back this up if you want me to believe you.

    It basically goes totally downhill after that. This bullshit with GPU acceleration being required in the first place, and then this additional bullshit involving LLVM, is yet another in a long list of flaws and horrible decisions.

    I encourage all of the developers that I mentor to use GNOME and to get a good look at its internals. I just make sure that they know not to do what GNOME has done. By seeing the mistakes firsthand, it's less likely that they'll repeat them in the future with the software that they create.

    I'm not a fan of GNOME and I agree that they are headed in the wrong direction, but the problems are not at all due to GObject or C. Cut the FUD when you criticise GNOME next time.

  • this would be nice (Score:3, Informative)

    by Tyrannosaur (2485772) on Sunday November 06, 2011 @04:16PM (#37967732)
    if gnome shell were actually nice. I'm with Torvalds; switched to XFCE
  • by TD-Linux (1295697) on Sunday November 06, 2011 @04:35PM (#37967876)
    Meh, the compositor has to draw the pixels, one way or another. KDE has two backends, XRender and OpenGL. If acceleration isn't available, the XRender backend can still run in software, and is pretty fast. KDE also supports no compositing at all, but with software compositing it's becoming irrelevant.

    Note that compositing != GPU acceleration. Mac OS X has always used compositing, but for years it was done entirely in software. There are still good reasons to do so. I'll compare the approaches:

    No compositing, one frontbuffer: You don't get your own pixmap to draw onto. You have to send drawing commands to the display server to draw on your behalf, to prevent you from drawing wherever you want on the frontbuffer. Unfortunately, if you have something complicated to draw, the user gets to watch as the drawing happens. When drawing a new object, generally the algorithm used is to draw the background, and then draw the objects in order from back to front. This means whenever the screen is updated, the user will see flicker whenever any objects are updated because they may briefly flicker to the background color. To work around this most modern toolkits (Qt 4, GTK 3) render to a pixmap, and then just tell X to draw their pixmap when they are done. This avoids the flicker but uses a bit more RAM.

    With a compositor, the application still draws to the pixmap, but instead of requesting the X server to immediately draw their pixmap to the screen, they pass it a handle to the pixmap and the display server can draw it whenever. This makes a lot of things easier, like vertical sync and effects, as well as things like color format and color space conversion.

    Drawing the pixmap on the screen involves the same operations whether compositing is on or off. And the API your compositor uses shouldn't matter much either, provided the underlying implementation is optimized. The highly optimized Gallium3D blitter is going to be just as good as the traditional X blitter, if not better. The only thing slowing it down in this case is that the OpenGL API is rather overkill for blitting, but hopefully the llvmpipe backend is optimized for this use case. And it's probably not worth making the compositor support two drawing APIs, like KDE does, as they both end up doing the same thing anyway.

  • It's about time... (Score:5, Informative)

    by Zephiris (788562) on Sunday November 06, 2011 @04:46PM (#37967964)

    It's about time Slashdot stopped accepting 'blogspam' links, such as Phoronix, instead of crediting the actual source. Phoronix didn't solve this; a developer did.
    A badly written Slash summary (and 'article') which just links twice to the same braindead Phoronix article (which itself is a several day old duplicate) is bad. Very bad.

    Dredged from the bottom of Phoronix:
    Mailing list post: http://lists.fedoraproject.org/pipermail/devel/2011-November/158976.html [fedoraproject.org]
    Fedora 17 feature point: https://fedoraproject.org/wiki/Features/Gnome_shell_software_rendering [fedoraproject.org]

    Personally, I have little doubt that the "anonymous reader" is Michael Larabel himself.

  • Re:Just like Macs (Score:2, Informative)

    by Anonymous Coward on Sunday November 06, 2011 @06:38PM (#37968704)

    This driver is part of Mesa. Mesa is not part of GNOME.
    This story ties into GNOME, because the driver now supports all the features required of Gnome-shell at an adequate speed.

    You're right that Apple does do something similar. Shader compilation uses LLVM, and if the graphics card is missing features, the shader gets run on the CPU. You're wrong that Xcode has anything to do with this: Xcode uses Clang, which is a C compiler. Clang uses LLVM, but Clang has nothing to do with 3D graphics.
    I would rather people copy each other than suffer from not-invented-here syndrome. As LLVM is open source, having extra contributors should be mutually beneficial.

  • by digitig (1056110) on Sunday November 06, 2011 @06:51PM (#37968788)
    Who says it's not a verb? The Oxford English Dictionary lists it as having been a verb since at least 1818, and as being more specific than "design".
  • by Anonymous Coward on Sunday November 06, 2011 @09:27PM (#37969670)

    No compositing, one frontbuffer: You don't get your own pixmap to draw onto. You have to send drawing commands to the display server to draw on your behalf, to prevent you from drawing wherever you want on the frontbuffer. Unfortunately, if you have something complicated to draw, the user gets to watch as the drawing happens. When drawing a new object, generally the algorithm used is to draw the background, and then draw the objects in order from back to front. This means whenever the screen is updated, the user will see flicker whenever any objects are updated because they may briefly flicker to the background color. To work around this most modern toolkits (Qt 4, GTK 3) render to a pixmap, and then just tell X to draw their pixmap when they are done. This avoids the flicker but uses a bit more RAM.

    This is a bullshit way to look at it. You are confusing compositing, backingstores and backbuffers.

    Compositing means having a procedural transformation from window pixels to screen pixels; it allows things such as transparency and 3D windows, and implies nothing about whether the window pixels are cached (a backing store). Basically, anything besides a 1:1 blit of pixels from window coordinates to screen coordinates is compositing.

    Backing stores, enabled in X11 with the +bs option and perhaps used by default by other window systems (Wayland in particular), mean that the application renders to an offscreen buffer, which is used to redraw the window during damage events (when an obscured part of the window is exposed again), rather than calling back to the application and asking it to redraw the window. They have nothing to do with tearing or removing flicker, except maybe in the degenerate case where the output device doesn't implement double buffering, or where the window system operates asynchronously and decides to page-flip even though the new screen hasn't finished being drawn.

    Finally, backbuffers (or double buffering) are used by the graphics driver of the windowing system so that the screen can be updated synchronously with the vertical blank, avoiding tearing artifacts when graphics are updated. This is done by changing the base address the graphics hardware reads from, so that a different part of the framebuffer is displayed on each alternate blank (or only when the screen contents have been updated).

    Backing stores are not actually as wonderful as people think. They increase memory usage by the size of the window (width*height*depth bits, rounded up to the nearest page boundary, and doubled if double buffering is used), and the same results can be achieved much more efficiently by on-demand synchronous rendering: place responsibility for drawing widgets (what are known as Windows in X11) in the display system, as X Athena does (even if you think Athena is ugly, it does this right), rather than opaquely drawing bitmaps and transmitting those to the window system, as GTK and Qt do. In the rare case that a backing store is necessary, because large complicated graphics are required, it can be implemented in the application, as you mention, by using a pixmap. There is absolutely no reason why normal widgets and window layouts need to be back-stored, because if your window/widget system can not redraw all the widgets on the screen in less than 1/60th of a second (one blank), it is useless and pathetically slow.

    The issue in X11 is that because it's a network display protocol, and it chose to transmit low-level graphics primitives (windows and pixmaps) rather than higher-level objects like widgets and textboxes, it is very difficult to implement synchronous screen updates by on-demand drawing (or it was in the 1980s and 1990s), so it resorts to asynchronous updates, where it redraws the screen regardless of whether the windows have finished drawing (in fact usually there is no update, because X11 doesn't use double buffering).

    It is entirely possible to have backing stores without compositing, compositing without backing stores, and double buffering in either the backing store or the screen itself.
