Vastly Improved Raspberry Pi Performance With Wayland
New submitter nekohayo writes "While Wayland/Weston 1.1 brought support to the Raspberry Pi merely a month ago, work has recently been done to bring true hardware-accelerated compositing capabilities to the RPi's graphics stack using Weston. The Raspberry Pi foundation has made an announcement about the work that has been done with Collabora to make this happen. X.org/Wayland developer Daniel Stone has written a blog post about this, including a video demonstrating the improved responsiveness and performance. Developer Pekka Paalanen also provided additional technical details about the implementation."
Rather than using the OpenGL ES hardware, the new compositor implementation uses the SoC's 2D scaler/compositing hardware which offers "a scaling throughput of 500 megapixels per second and blending throughput of 1 gigapixel per second. It runs independently of the OpenGL ES hardware, so we can continue to render 3D graphics at the full, very fast rate, even while compositing."
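A quick back-of-the-envelope check (our own arithmetic, not from the announcement): at the quoted 1 gigapixel per second of blending throughput, the 2D compositing hardware can blend roughly eight full-screen 1080p layers every frame at 60 fps, without touching the 3D hardware.

```python
# Back-of-the-envelope: how many full-screen 1080p layers the SoC's 2D
# compositing hardware can blend per frame at 60 fps. The throughput
# figure comes from the announcement; the rest is our own arithmetic.

BLEND_THROUGHPUT = 1_000_000_000  # pixels/second (1 gigapixel/s)
WIDTH, HEIGHT = 1920, 1080        # full-HD framebuffer
FPS = 60

pixels_per_frame = WIDTH * HEIGHT            # 2,073,600 pixels per layer
budget_per_frame = BLEND_THROUGHPUT // FPS   # ~16.6 million blended pixels/frame
layers = budget_per_frame // pixels_per_frame

print(layers)  # -> 8 full-screen layers blended every frame
```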
Re:Yes, let's bring that back (Score:3, Insightful)
It's all about tradeoffs, and always has been.
Nothing has changed.
Either you write generic support which works everywhere and performs with mediocrity at best (e.g., standard Linux on a desktop), or, you optimize for a particular hardware platform and get more performance.
The thing with the RPi is that it's a low-power machine, so generic, mediocre performance is pretty awful; you need to optimize specifically to make it usable.
Re:wayland (Score:3, Insightful)
X11: needlessly complex for today's use cases.
If X11 is so good, why isn't Android using it?
Re:wayland (Score:5, Insightful)
Why doesn't any device that actually requires decent GPU throughput use it, including the Mac, the PS2/3/4, etc?
Why did those developers see fit to NOT use the freely available BSD-style code out there and spend their time writing their own rendering pipelines?
For fun?
Re:wayland (Score:5, Insightful)
Amen. X seems to have the highest complexity to documentation ratio of any major software subsystem I've ever come across.
Re:Yes, let's bring that back (Score:5, Insightful)
Things like low-level OS frameworks and related drivers, which require low latency, high performance, and sane memory footprints, must be ported to the architecture in a language whose compiler/linker spits out native binaries. No Python/Java/.NET here, because the lower a hog sits in the stack, the greater its impact on latency and performance.
Wayland is a perfect example of this, as it sits very close to the hardware with a driver between it and each device. This concept will never change, because at some point the software must speak to the hardware directly, no matter how the hardware is designed. If anything, the past decade of sandboxed APIs is a big reason why we need gigabytes of RAM and microwave-clocked CPUs to do basically the same things we were doing on '90s desktops with acceptable performance. The current situation on desktops (regardless of OS) is a sloppy waste of cycles that could go into greater performance, power savings, or both. Clean, efficient code is not, nor should it ever be, passé.
Re:Yes, let's bring that back (Score:5, Insightful)
The time when everything needed to be specifically ported to a machine to make it perform bearably, or at all. How I missed having stuff not work without going to that extra length.
On embedded hardware, that time never ended... And the RPi isn't really fast enough that you can just run it all in software, or even with just the relatively feeble OpenGL hardware, and pretend.
Not to mention the Pi is only $35 and uses a few watts of power; you can't expect current laptop-class performance at that price.
The OP ignores the fact that incorporating this tech into the major Pi distros and projects is only work for the developers of said projects, not end users.
End users just wait for the next software update, and then they get vastly improved graphics performance.
I fail to see what on earth is wrong with a major advance in performance to a specific piece of hardware.
I just smell the acrid stench of cynicism wafting from the general direction of the OP.
Fuck backwards compatibility (Score:5, Insightful)
99% of Linux users want desktop performance, not remote desktop performance. Put that legacy remote shit into a module if you want.
Re:Nice. Let me know when Wayland has networking (Score:2, Insightful)
Especially considering that the Pi would be a perfect example of a device that benefits from X11-style remote applications -- being based on a video-decoder SoC, it has a fairly capable GPU but a tiny CPU.
Re:wayland (Score:5, Insightful)
If you've been using Linux since 1.0 (I have since 1.2) and have never seen any X11 failings, you're either talking out of your arse or are completely blinded by unrelenting fanboy-ism.
I've seen plenty of X11 failings over the years, ranging from inability to change screen resolution on the fly for about the first decade, poor security, crashes in the video driver taking down the OS, various hacks to get things like multi-monitor or 3d support to work, etc.
Yes, some of those things have been "fixed" via various bodges, in much the same way that the average wannabe Nissan Silvia drifter will "fix" crash damage with a drill and some cable-ties.
High latency, low bandwidth, high security risk stuff like network transparency does not belong in the same process as the rendering engine. It certainly doesn't want to be running as root. Especially when the majority of people simply do not use it, and it can easily be retained via a daemon like every other platform uses.
Re:wayland (Score:5, Insightful)
Just because something is "possible" doesn't mean it is a good idea. The fact that, per TFA, Wayland got 20% better power consumption BEFORE they took out a lot of unnecessary data copying should be reason enough for Linux people to sit up and take notice.
Mobile devices are the future, and a 20%-plus reduction in power consumption while improving performance is nothing to sneeze at.
Re:wayland (Score:4, Insightful)
The talk on the state of X11 and Wayland/Weston given by one of the lead developers is a bit of an eye-opener about just how munged up X11 is at this stage.
Re:Yes, let's bring that back (Score:4, Insightful)
The age of fast, low-level optimization is all but dead.
I keep thinking that, but then keep running into situations where I have to optimize things. My coworker has been optimizing a piece of code for the last two weeks because our customers find it too slow, and this is on a 64-bit i7 with 16 gigs of RAM (some image processing stuff). There will always be things that need optimization.
Re:wayland (Score:4, Insightful)
That's nice. I have remote machines on the end of shitty 512 kbit satellite links in Africa. We have enterprise licensing for Windows, so the costs aren't that bad. We need some level of Windows infrastructure in place in any case to handle Exchange (PHBs want it) and the various mining-industry tools our company uses to get minerals out of the ground.
The point is this: irrespective of what platform you run, X11's remote performance, compared even to Microsoft's offering (RDP), is just blasted into the weeds.
That, my friends, should be a fucking embarrassment. X11 on 10-megabit Ethernet performs worse than RDP over 256 kbit frame relay. It's a fucking disaster.
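To put numbers on how damning that comparison is (the link speeds are from the comparison above; the arithmetic is ours): the Ethernet link has roughly 39 times the bandwidth of the frame-relay link, and X11 still loses.

```python
# Bandwidth ratio between the two links in the comparison:
# X11 on 10 Mbit Ethernet vs. RDP on 256 kbit frame relay.
ethernet_bps = 10_000_000
frame_relay_bps = 256_000

print(ethernet_bps // frame_relay_bps)  # -> 39, i.e. ~39x more bandwidth
```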
If I want to replace all my end users' desktops with dumb terminals, X11 simply isn't going to cut it.
Now, I'm not saying run Windows everywhere. I'm simply saying that by any metric you care to use, X11's remote performance is simply horrible. If Wayland starts a push to phase it out in favour of something that is actually usable over anything slower than an Ethernet LAN, this is (long term) a GOOD thing.
I'll bet you X11 stalwarts are complaining about the need to convert to IPv6 as well? If not, why not?
Re:Yes, let's bring that back (Score:4, Insightful)
It's ending right now. Clock speeds stalled years ago. Memory runs at a crawl compared to CPUs. We're just buying time by increasing parallelism, but Amdahl's law is waiting around the corner to put a stop to that, too.
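The parallelism point can be made concrete with Amdahl's law, speedup = 1 / ((1 - p) + p/n), where p is the parallelizable fraction of the work and n the core count: the serial fraction caps the speedup no matter how many cores you throw at it. A minimal sketch (the 0.95 figure is just an illustrative choice, not from the comment):

```python
# Amdahl's law: overall speedup from n cores when fraction p of the
# work is parallelizable. The serial fraction (1 - p) caps the speedup
# at 1 / (1 - p) regardless of core count.

def amdahl_speedup(p: float, n: int) -> float:
    return 1.0 / ((1.0 - p) + p / n)

for n in (2, 8, 64, 1024):
    print(n, round(amdahl_speedup(0.95, n), 1))
# Even with 95% of the work parallelizable, the speedup approaches,
# but never exceeds, 1 / (1 - 0.95) = 20x.
```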