Vastly Improved Raspberry Pi Performance With Wayland 259
New submitter nekohayo writes "While Wayland/Weston 1.1 brought support to the Raspberry Pi merely a month ago, work has recently been done to bring true hardware-accelerated compositing capabilities to the RPi's graphics stack using Weston. The Raspberry Pi foundation has made an announcement about the work that has been done with Collabora to make this happen. X.org/Wayland developer Daniel Stone has written a blog post about this, including a video demonstrating the improved reactivity and performance. Developer Pekka Paalanen also provided additional technical details about the implementation."
Rather than using the OpenGL ES hardware, the new compositor implementation uses the SoC's 2D scaler/compositing hardware which offers "a scaling throughput of 500 megapixels per second and blending throughput of 1 gigapixel per second. It runs independently of the OpenGL ES hardware, so we can continue to render 3D graphics at the full, very fast rate, even while compositing."
Replaces hardware lag with animation lag (Score:2)
Hopefully all the swishy fadey stuff can all be disabled, so that the speed improvement actually manifests usably.
Re:Replaces hardware lag with animation lag (Score:5, Informative)
Yup. We know lots of people don't love the shiny (or love the speed more than the shiny), so we'll be providing the ability to turn off fades and scaled window browsing. Disabling fades has the nice side effect of removing 120Mpixels/s of blending, so you can have more windows on the screen before the back of the stack falls back to 30fps (for responsiveness the front of the stack will always run at 60fps regardless of scene complexity).
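The quoted figures can be sanity-checked with quick arithmetic; the 1080p60 numbers below are illustrative assumptions, not measurements from TFA:

```python
# Back-of-the-envelope check of the quoted throughput figures,
# assuming a 1920x1080 screen refreshed at 60 Hz (illustrative only).
width, height, fps = 1920, 1080, 60
layer_rate = width * height * fps           # pixels/s for ONE full-screen layer

print(round(layer_rate / 1e6))              # -> 124 (matches the ~120 Mpix/s
                                            #    freed by disabling fades)

blend_budget = 1_000_000_000                # quoted 1 Gpix/s blending
scale_budget = 500_000_000                  # quoted 500 Mpix/s scaling
print(blend_budget // layer_rate)           # -> 8 full-screen blends at 60 fps
print(scale_budget // layer_rate)           # -> 4 full-screen scales at 60 fps
```

So one full-screen fade pass really does eat roughly the 120 Mpix/s the parent mentions, and the 1 Gpix/s budget explains why only the back of a deep window stack has to drop to 30fps.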
Desktop is forgotten (Score:2)
Wouldn't it be great if this kind of effort was applied to the desktop?
Re: (Score:2)
Wayland is specifically for the desktop; multi-device network graphics are the least of its concerns.
Re: (Score:2)
This uses special 2D hardware found in cell phone chips and some gaming consoles (handheld or home). It offloads scaling, color space conversion, sometimes rotation and JPEG decoding, and maybe encoding the output of a digital camera. On a PC's graphics card you can try using the video scaler, but it's more limited and fixed-function.
For instance you can look at "Video Display Controller" and "Image Processor" on these diagrams (not too sure about the first one)
http://images.anandtech.com/doci/3912/boxee-02.gif [anandtech.com]
Re: (Score:3)
Except said 700MHz machine is running a fairly modern and high end GPU.
The processor in it was designed for media tanks and media players - think Roku, WDTV, AppleTV, Popcorn Hour, and other such devices. The CPU load for those things is low (just enough to display a UI and handle streaming the media to the GPU). The GPU is capable of handling decent 3D performance at 1080p resolution as well as video decode and other ta
Easier fix (Score:2)
Configure your window manager not to show windows' contents while you move them.
Job done! My 386 could do that. Dunno where Openbox's setting for that is, but xfwm4 has it as a checkbox in a GUI tool.
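For xfwm4, the same checkbox should also be reachable from the command line via xfconf; a sketch, with the caveat that the property names here (`box_move`/`box_resize`) are from memory, so confirm them against `xfconf-query -c xfwm4 -l` first:

```shell
# Draw only an outline while moving/resizing windows instead of live content.
# Property names are assumptions -- verify with: xfconf-query -c xfwm4 -l
xfconf-query -c xfwm4 -p /general/box_move -s true
xfconf-query -c xfwm4 -p /general/box_resize -s true
```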
What are you waiting for, Christmas? (Score:2)
Re: (Score:3, Insightful)
It's all about tradeoffs, and always has been.
Nothing has changed.
Either you write generic support which works everywhere and performs with mediocrity at best (e.g., standard Linux on a desktop), or, you optimize for a particular hardware platform and get more performance.
The thing with the RPi is that it's a low-power machine, so the generic mediocre performance is pretty awful and you need to optimize specifically to make it usable.
Re: (Score:2)
AMEN. Slackware with a custom-compiled kernel on my laptop utterly decimates anyone's Ubuntu install, to the point that people can't believe it's Linux running that fast.
The low-grade dog food that is today's popular distros causes more harm than good, being dog slow and broken all over the place for the sake of supporting everything possible.
Re: (Score:2)
What kind of changes did you make to your kernel build? And what CPU and I/O schedulers are you using?
Re: (Score:2)
Yeah I'm kind of wondering that myself. Hopefully the OP can give more details.
Re:Yes, let's bring that back (Score:5, Informative)
The time when everything needed to be specifically ported to a machine to make it perform bearably or at all. How I missed having stuff not work without that extra length to go to.
On embedded hardware, that time never ended... And the rPi isn't really fast enough that you can just run it all in software, or even with just the relatively feeble OpenGL hardware, and pretend.
Re:Yes, let's bring that back (Score:5, Insightful)
The time when everything needed to be specifically ported to a machine to make it perform bearably or at all. How I missed having stuff not work without that extra length to go to.
On embedded hardware, that time never ended... And the rPi isn't really fast enough that you can just run it all in software, or even with just the relatively feeble OpenGL hardware, and pretend.
Not to mention the Pi is only $35 and uses a few watts of power; you can't expect current laptop-class performance for that price.
The OP ignores the fact that incorporating this tech into the major Pi distros and projects is only work for the developers of said projects, not end users.
End users just wait for the next software update, and then they get vastly improved graphics performance.
I fail to see what on earth is wrong with a major advance in performance to a specific piece of hardware.
I just smell the acrid stench of cynicism wafting from the general direction of the OP.
Re: (Score:3)
Arguably, the fact that this specific hack had to take place is a bad sign, just not one specific to Wayland, or even (particularly) to the rPi.
There are attempts, at least, at standardizing [khronos.org] the interfaces for the sorts of features that this Wayland modification used to get better performance on the Pi; but they certainly aren't anywhere near OpenGL in terms of adoption, and so the compositing and windowing modification had to be made specifically for the 'DispManX' API used exclusively on these Br
Re:Yes, let's bring that back (Score:5, Insightful)
Things like low level OS frameworks and related drivers, which require low latency, high performance, and sane memory footprints, must be ported to the architecture in a language whose compiler/linker spits out native binaries. No python/java/.NET here, because the lower the hog is in the stack, the greater the impact on latency and performance it has.
Wayland is a perfect example of this, as it sits very close to the hardware with a driver between it and each device. This concept will never change, because at some point the software must speak to the hardware directly, no matter how the hardware is designed. If anything, the past decade of sandboxed APIs is a big reason why we need gigabytes of RAM and microwave-clocked CPUs to do basically the same things we were doing on 90s desktops with acceptable performance. The current situation on desktops (regardless of OS) is a sloppy waste of cycles that could go into greater performance, power savings, or both. Clean, efficient code is not, nor should it ever be, passé.
Re:Yes, let's bring that back (Score:5, Interesting)
I'm not philosophically against clean, fast code, but to your point, my desktops are probably 98% CPU-idle under a normal workload, and only really pick up when playing games, playing Flash, doing a compile, or running and testing a development server. The age of low-level optimization is all but dead. For a brief time during the smartphone revolution, pathetic CPUs were a bottleneck, but with my N4, nothing I throw at it feels slow or choppy. It has 2GB of RAM IN A PHONE. Sure, limited-spec, fit-for-purpose devices will need fast low-level access to optimize, but that takes time, and quite often we're finding that hardware is faster and cheaper than wasting time optimizing for the apex solution.
Take your question again: In 10 years when our entire assortment of devices has as much horsepower as my desktop computer does today, are we really going to need significantly tight processing? I'd say the better long term solution will be making development faster and hopefully more expressive.
Re:Yes, let's bring that back (Score:4, Insightful)
The age of low level fast optimization is all but dead.
I keep thinking that, but then keep running into situations where I have to optimize things. My coworker has been optimizing a piece of code for the last two weeks because our customers find it too slow, and this is on a 64-bit i7 with 16 gigs of RAM (some image processing stuff). There will always be things that need optimization.
arguably your N4 is a palmtop computer (Score:2)
Modern smartphones are small computers that happen to have suitable hardware for accessing the voice network. It really is disingenuous to call these devices "phones", because you can still get feature phones that do basic voice/text/web with far less than 2GB (albeit with much less flexibility).
Re: (Score:3)
Low level optimization is far, far from gone. It's just that what you need to optimize has changed. CPU number crunching is no longer the bottleneck; memory is, and drawing things on a screen is essentially mostly memory operations. And we have special hardware to relieve those bottlenecks, so now the optimization is in how you use that hardware.
If you tried running a non-optimized CPU-only system on modern hardware, you'd go out of your mind because it was so sluggish.
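The memory-bottleneck point is easy to put numbers on: every extra full-frame copy of a 1080p buffer costs real bandwidth. The figures below are illustrative, not Pi measurements:

```python
# Bandwidth cost of one extra full-frame copy: 1080p, 32bpp, 60 Hz.
width, height, bytes_per_pixel, fps = 1920, 1080, 4, 60
frame_bytes = width * height * bytes_per_pixel  # ~8.3 MB per frame
copy_bw = frame_bytes * fps                     # bytes/s for a single copy pass
print(round(copy_bw / 1e6))                     # -> 498 (MB/s -- per copy pass!)
# A read-modify-write blend touches memory roughly twice that much,
# which is why eliminating needless copies (or pushing the work to
# dedicated 2D hardware) matters more than raw CPU clock here.
```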
Re: (Score:3)
Yes, yes it will be important to have optimized software in 10 years. Remember X was entirely usable (OMG I'm getting old) 20 years ago. That 150MHz Alpha workstation in the lab was amazingly fast, but here we are t
Re: (Score:2)
Also, languages like SML and F# can theoretically (and sometimes in practice do) generate code that is a lot more efficient than C, and not tied to a specific style of Von Neumann machine. That, and garbage collection has complexity bounds whereas malloc/free do not, not having managed memory leads to a lot of pointless copying (and nowadays the memory controller is what kills you so deep sharing of structure is always a huge win), etc.
And those expressive type systems happen to save programmer time too. I
Re: (Score:3)
At least until Moore's Law ends. Dunno when it will happen but assuming the continued survival of the human race there will come a time when our computers are not becoming more powerful with each generation.
For the short and moderate term you can risk relying on Moore's but in the long run all good things come to an end.
Re:Yes, let's bring that back (Score:4, Insightful)
It's ending right now. Clock speeds stalled years ago. Memory runs at a crawl compared to CPUs. We're just buying time by increasing parallelism now, but Amdahl's law is waiting around the corner to put a stop to that, too.
Re: (Score:3)
Clock speed, Moore's law, what have they got to do with computing power?
Wirth's law is your enemy.
Bremermann's limit is waiting for you, Goaway.
Re: (Score:2)
So what is low level today was high level yesterday? Sure, it is nice not to have to peek and poke everything, but it seems that anything above that requires buying into somebody else's idea of the way things should be done. So, in the spirit of thought and freedom, just what does "Wayland" bring to the table?
Re: (Score:2)
Cheerleaders always gonna lead the cheer.
There. Did we get that out of our systems?
Re:wayland (Score:4, Interesting)
Great, more wayland propaganda. As if exploiting certain hardware features has anything to do with Wayland vs X11. Wayland: Breaking decades of backwards compatibility for no good reason.
Exactly. This article boils down to "wayland performance on pi went from suckass to very nice" which is mildly interesting but the implication that wayland rulez and X snoozes because of that is specious. There is no reason X couldn't see the same performance improvement if it too switched drivers.
Re: (Score:3)
Many of the X developers disagree with you.
Re: (Score:3, Insightful)
X11: Being needlessly complex for today's use cases, for no good reason.
If X11 is so good, why isn't Android using it?
Re: (Score:2)
True, with "X" multi device computing was an afterthought...Wayland takes it center stage. Or am I wrong?
Re:wayland (Score:5, Insightful)
Why doesn't any device that actually requires decent GPU throughput use it, including the Mac, the PS2/3/4, etc?
Why did those developers see fit to NOT use the freely available BSD-style code out there and spend their time writing their own rendering pipelines?
For fun?
Re:wayland (Score:5, Insightful)
If you've been using Linux since 1.0 (I have since 1.2) and have never seen any X11 failings, you're either talking out of your arse or are completely blinded by unrelenting fanboy-ism.
I've seen plenty of X11 failings over the years, ranging from inability to change screen resolution on the fly for about the first decade, poor security, crashes in the video driver taking down the OS, various hacks to get things like multi-monitor or 3d support to work, etc.
Yes, some of those things have been "fixed" via various bodges, in much the same way that the average wannabe Nissan Silvia drifter will "fix" crash damage with a drill and some cable-ties.
High latency, low bandwidth, high security risk stuff like network transparency does not belong in the same process as the rendering engine. It certainly doesn't want to be running as root. Especially when the majority of people simply do not use it, and it can easily be retained via a daemon like every other platform uses.
Re:wayland (Score:4, Interesting)
I'm... sorry?
You think SysV init scripts are in any way, shape or form moderately acceptable?!
I have a very simple refutation to that -- the collection of run scripts behind this link [smarden.org].
Go ahead -- have a look. Keep in mind that systems using those mostly one-line scripts provide not just startup/shutdown/status, but also auto-restart on failure, and they lack the race conditions that the pidfile-based locking almost universally used by SysV scripts is so very, very prone to.
Holding up SysV init scripts as a thing that doesn't have to be changed... it beggars belief.
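For contrast: a runit-style run script is typically just `#!/bin/sh` followed by `exec mydaemon`, and the supervisor itself tracks the child, so no pidfile is needed. The pidfile dance it replaces goes stale the moment the daemon dies, which this sketch simulates (the "daemon" is a throwaway `true` process, POSIX assumed):

```python
import os
import subprocess
import tempfile

# Simulate the classic SysV pidfile failure: the daemon dies, the
# pidfile lingers, and a later "status" check is left guessing.
fd, pidfile = tempfile.mkstemp(suffix=".pid")
os.close(fd)

daemon = subprocess.Popen(["true"])     # stand-in daemon that exits at once
with open(pidfile, "w") as f:
    f.write(str(daemon.pid))
daemon.wait()                           # daemon is gone; pidfile is now stale

stale_pid = int(open(pidfile).read())
try:
    os.kill(stale_pid, 0)               # signal 0: "does this pid exist?"
    print("running")                    # wrong if the pid number was recycled
except ProcessLookupError:
    print("stale pidfile")              # the race window supervision closes
os.unlink(pidfile)
```

If the kernel has recycled that pid for an unrelated process, the check reports "running" for a dead service; a supervisor that holds the child directly never has that window.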
Re: (Score:3)
The SysV init scripts have one huge advantage though: I can read/debug/understand them and all I need to know for that is a bit of sh(1) and coreutils. I have no use for shaving off 10s from the boot process and I don't start/stop services so fast that I could run into a race condition. I like being able to find out whether the service is today called bind9 vs named or httpd vs apache2 by simple filename completion.
Although your /. id is smaller by 3 orders of magnitude, I'll stick with scripts if you don't
Re:wayland (Score:5, Interesting)
My complaint was simpler. Hot swap monitors in 2003.
In 2003 I could unplug a monitor from my PowerBook G4 and plug in a different monitor with a different resolution without causing anything other than window resizing (and even that was done mostly automatically).
I tried that with Linux in 2010, and not only did it crash X11, but the automatic tool that was supposed to handle it wouldn't restart. I didn't want to manually rewrite xorg.conf every time I plugged in a different monitor (something I was doing several times a day).
To this day I miss aspects of transparent network windows. Remote desktop/VNC just isn't the same. However, they are fast and stable compared to X over anything but a local 100Mbit LAN.
I truly wish someone would rewrite X from the ground up with some new ideas on how to do the network transparency.
Re: (Score:3)
When I used Linux as my desktop (well, laptop) OS on a day to day basis (maybe four or five years ago), the only option to add a second monitor (plug my computer into my TV) using X itself was to restart X. Since that kills all your apps, that is effectively a requirement to reboot your machine just to connect to a TV.
The only way I ever got around those issues were via nVidia's proprietary drivers and control panel. They, at least, could add additional displays without killing everything. Most of the time.
Re: (Score:2, Troll)
Even the netbook that's driving my FDM printer runs X clients remotely, very nicely and Cura displays its 3D renders from the netbook to the X desktop system just fine using OpenGL remote. By the way, the netbook has NO OpenGL hardware.
On that same X desktop machine every Linux Steam game that I've tried works without any problem.
You want to re-invent the wheel, go r
Re: (Score:3)
If you think X11 was "recently broken", you're deluded. It's been a steaming turd for a very long time. And whilst you can't polish a turd, you can dump it in sparkly glitter and it will look a little better than before. But it's still a turd.
Just to be pedantic: yes, you can polish a turd; MythBusters busted that one.
https://www.youtube.com/watch?v=yiJ9fy1qSFI [youtube.com]
And by the way, X11 is no turd.
Re:wayland (Score:5, Insightful)
Just because something is "possible" doesn't mean it is a good idea. The fact that, as per TFA, Wayland got 20% better power consumption BEFORE they took out a lot of unnecessary data copying should be reason enough for Linux people to sit up and take notice.
Mobile devices are the future, and a 20%-plus reduction in power consumption while improving performance is nothing to sneeze at.
Re: (Score:3)
If you relocate the graphics rendering to the GPU and make it perform better, then using that system and sending the rendered data to clients means not only does performance improve, but the load on the server drops.
RDP has been doing "network transparent" viewing for a long time, and it's more than sufficient for all; so if we can improve things using this, we should. No need to run X just because it's X.
Re:wayland (Score:5, Insightful)
Amen. X seems to have the highest complexity to documentation ratio of any major software subsystem I've ever come across.
Re: (Score:2)
I'll be generous and say there are probably all of 50 generic display device drivers written specifically for X11, probably the same for Apple, and maybe double or triple that for Windows. It isn't exactly a large playing field for development efforts to just pick up from nothing, which is also why 99% of drivers are written by the manufacturer of the device.
You could always look into http://www.x.org/wiki/Development [x.org] for guidance, but in the end code is king. X Development is not simple, but neither i
Re:wayland (Score:5, Informative)
As the video and Daniel's post explain, we don't lose backwards compatibility because we can host legacy X applications in a Wayland window using XWayland. We get all of the benefits of doing top-level composition in hardware, none of the pain of writing (and maintaining) a hardware-accelerated X driver. Can you explain why anyone starting from a clean slate today would choose to accelerate X itself instead?
Re: (Score:2)
Mod Up(ton)
Re: (Score:2)
At some point, someone will have to maintain the hardware specific driver. Wayland may or may not be a cleaner api, but the work still has to be done for each device.
Re: (Score:2)
Which is why such drivers should go upstream. That's why the kernel developers want you to push your driver into the kernel - it gets maintained.
Re: (Score:3)
Yes, and the cleaner API is everything. If backwards compat can be maintained (it is) and the codebase can be a lot cleaner (it is) and perform better (it does) then why are people so anti-X replacement?
Open source is supposed to be a meritocracy, yet with all the weston hate around here you'd certainly not get that impression every time a weston thread pops up.
Another one with no clue about wayland (Score:2)
Of course not, since Wayland uses those hardware drivers that were written for X. That's good reuse of good code, avoiding re-invention of the wheel, and you really should have heard of that if you'd spent more than ten seconds learning about Wayland instead of spouting "X sux" bullshit.
Once wayland hits new hardware that X doesn't support you get "the pain of writing (and maintaining) a hardware-accelerated driver". There's no
Re: (Score:2)
Fascinating points, except that once you've offloaded top-level composition to hardware you've claimed 90% of the benefit that you would have gained from full X hardware acceleration; even on the Pi it makes sense to use the software fallback path for all in-window rendering. I did bother to look into this a bit before opening my checkbook.
Re: (Score:2)
Are there many apps that have hardware accelerated in-window rendering on the Pi? Web browsers or photo slideshow apps or whatnot? I'm considering using a Pi for a project (display live room schedules for a convention on televisions), and while that sort of hardware acceleration isn't required, fading between screens/images/whatnot would certainly look nicer.
Re: (Score:3)
Not at present, but we're expending quite a lot of effort on getting hardware-accelerated Webkit running at the moment; Wayland is a key enabler for this.
Re: (Score:2)
So, let me get this straight... You're telling one of the founders of Raspberry Pi, who is a technical director at the company (Broadcom) writing those hardware video drivers you mention, and who was likely one of the people pushing the development work mentioned in TFA, that he needs to "spend more than ten seconds learning about wayland"?
Yes, I'm sure he's completely ignorant of such things.
Please stop pretending to be stupid (Score:2)
Of course you know this and are merely pretending to be mentally ill in order to manufacture a strawman. Why do you think such behaviour is acceptable?
Re: (Score:2)
Wayland's protocol and architecture allow it to serve X11 clients through an emulated server. Improvements made to Weston by Collabora, as part of this engagement with the Raspberry Pi Foundation, enabled X11 applications to run seamlessly, and faster than under the legacy X.Org server.
Re: (Score:2)
Because most users own multiple devices, each of which may require its own accelerator? If Wayland is to replace X, it will need to replace all the drivers needed to run on all the devices that X runs on.
Re: (Score:2)
Rootless RDP : need to spend like $1000+ on Windows + pack of CAL licenses + terminal server licenses.
VNC : need to add a vncserver, fuck with bit depth and resolution settings, then you hijack the desktop that is running on the other machine, at the wrong resolution for your local computer. Right, this is totally not what I want to do. Maybe you can sysadmin your way around it (and even fix the lag)
X11 : ssh -X or putty + xming. No configuration, no installing something, no buying Windows Server, it just w
Re:wayland (Score:4, Insightful)
That's nice. I have remote machines on the end of shitty 512kbit satellite links in Africa. We have enterprise licensing for Windows so the costs aren't that bad. We need some level of Windows infrastructure in place in any case to handle Exchange (PHBs want it) and the various mining industry tools our company uses to get minerals out of the ground.
The point is thus: irrespective of what platform you run, X11 performance when compared even to offerings by Microsoft (RDP) is just blasted into the weeds.
That, my friends, should be a fucking embarrassment. X11 on 10 megabit ethernet performs worse than RDP over 256kbit frame relay. It's a fucking disaster.
If I want to replace all my end users' desktops with dumb terminals, X11 simply isn't going to cut it.
Now, I'm not saying run Windows everywhere. I'm simply saying that by any metric you care to use, X11's remote performance is simply horrible. If Wayland starts a push to phase it out in favour of something that is actually usable over anything slower than an ethernet LAN, this is (long term) a GOOD thing.
I'll bet you X11 stalwarts are complaining about the need to convert to IPv6 as well? If not, why not?
Re: (Score:2)
Yes, it is well known that X11 over long-latency links sucks. So running remote X over a satellite link is basically the worst possible use case. But remote X on 10mbps local ethernet is significantly faster than RDP over a 256kbps satellite link.
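The asymmetry here is mostly about round trips, not bandwidth. A toy model, with illustrative (not measured) round-trip counts and latencies:

```python
# Toy model of session setup over the wire: time spent purely waiting
# on synchronous round trips, ignoring bandwidth entirely.
def wait_time(round_trips, rtt_seconds):
    return round_trips * rtt_seconds

lan_rtt = 0.001      # ~1 ms on local ethernet (assumed)
sat_rtt = 0.600      # ~600 ms on a geostationary satellite link (assumed)

chatty = 200         # a chatty, synchronous protocol (X11-style, assumed count)
batched = 5          # a batched screen-update protocol (RDP-style, assumed)

print(wait_time(chatty, lan_rtt))    # ~0.2 s: chatty is fine on a LAN
print(wait_time(chatty, sat_rtt))    # ~120 s: and hopeless over satellite
print(wait_time(batched, sat_rtt))   # ~3 s: batching is what saves RDP
```

Latency multiplies per round trip, so a protocol that waits on hundreds of replies falls apart on a satellite link no matter how much bandwidth it has.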
Re: (Score:2)
For general GUI configuration stuff, RDP over 256kbit is not much different from local. Last time I ran X11 over 10 megabit ethernet (from my Linux box to a Solaris box), I saw flickering (as window content was updated), etc.
Fact is, I regularly do RDP over 512kbit satellite and whilst unpleasant it is fairly usable. I used to do it via 56k modem.
RDP isn't even the best tech out there.
Re: (Score:2)
I'm not quite sure where you get the idea that I'm a windows fan. I am platform agnostic and use whatever fits the purpose.
I'm actually a FreeBSD + Mac user at home and run a mixed environment (FreeBSD/Linux/vSphere/Windows) at work to pay the bills.
None of my current platforms of choice can run Wayland, and currently on FreeBSD, X11 is the only option. It's still crap.
Nice try though.
Re: (Score:2)
And as to the why behind my choice of home platform? OS X Server is crap. Windows (I have an install for gaming) is pretty crap from a UI perspective. Power consumption is bad as well (I'm on a MacBook Pro). I bought Mac hardware because the trackpad works and, being aluminium, the chassis still looks and feels like new after 2 years rather than being discoloured/worn shiny plastic. Guess what: PC laptop enclosures are crap.
You can piss and moan about my perceived bias against anything open source all
Re: (Score:2)
Oh nice, call me a shill. It hurts man. I have multiple FreeBSD machines in production (primary NS, sendmail for a 2500 user company) have it running as a ZFS NAS, have run IPSEC endpoints with it. I've previously run Linux exclusively on the desktop for a number of years.
Anyone is free to go through my comment history, it goes back to 1997 or so and I've gone through various phases of fanboyism over the years (starting with Linux) and after exposure to a lot of different shit have become fairly agnos
Re: (Score:2)
I hope people will stop doing that (VNC remotes); as a contractor I have seen VNC installed "in the wild" 4 times. In all 4 cases they used a common password corporation-wide. This password was stored weakly encrypted in the registry on the individual machines, trivial to decrypt. At that point, it is just a matter of searching the network for the most important-sounding user ("Bob (CEO) Laptop"), connecting to it, watching 'em work for a minute, then opening Notepad and writing "Can you call me at extension X? Thanks!"
Re: (Score:2)
Yup, this is one of the major reasons we canned it. We originally used it because there was no built-in remote assistance tool in Windows 2000 or earlier.
One of the first things I did when I returned to this company was start phasing out VNC in favour of remote assistance/remote desktop and, as soon as I can get it up and running properly, SCCM's remote control.
I've actually put group policy in place to disable VNC on all domain machines (several years ago), but we're pretty much free of it now due to
Re:wayland (Score:4, Insightful)
The talk on the state of X11 and Wayland/Weston given by one of the lead developers is a bit of an eye-opener about just how munged up X11 is at this stage.
Re: (Score:2)
It's not a lie at all - way to play the man rather than the ball.
Most of our users care more about local performance than they do about network transparency, so this is where we're investing our (limited) resources. People who care about network transparency can continue to use X either to or from the Pi; I don't think anyone is seriously suggesting that X is going to be replaced by Wayland in all use cases, merely that Wayland meets a need for high-performance, high-quality, low-power (and lower software c
Fuck backwards compatibility (Score:5, Insightful)
99% of Linux users want desktop performance, not remote desktop performance. Put that legacy remote shit into a module if you want.
Fuck backwards rumours (Score:2)
Re: (Score:2)
No reason to run Linux if all you are worried about is desktop performance. Why in the world would you even consider using Linux if you can't think outside your own box?
Re: (Score:2)
"network transparancy is legacy" my ass. i remote into my debian box regularly
Sample base (Score:2)
Re: (Score:2)
Hardware acceleration is huge win.
Yeah, because, as everyone knows, X11 has no hardware acceleration, which is why it sucks and stuff.
Re: (Score:2, Insightful)
Especially considering that Pi would be a perfect example of a device that benefits from X11-style remote applications -- being based on a video decoder SoC, it has somewhat nice GPU but tiny CPU.
Re: (Score:2)
Good luck doing anything remotely bandwidth intensive or latency sensitive over the flaky as fuck USB-Ethernet on the B.
Flaky as what-now? I regularly stream 1080p blu-ray rips onto my Pi under RaspBMC and have never yet seen a problem...
-Jar
Re: (Score:2)
Wayland isn't about networking, it's about being pretty on a single device. Perhaps in the end their efforts might be incorporated into a proper networking graphical system like X though, so I earnestly encourage them to push on with their work!
Re: (Score:2)
Yup, "poorly." Not simply at speeds to be expected of an ARM11 CPU, poorly. Not "well", like a $500 system. And "hacky," whatever the hell that means.
Re: (Score:2)
Makes a great brain for my MAME cabinet... Apparently there was a LOT of "poorly" made arcade hardware up until just a few years ago.
Re: (Score:2)
I want to put a real-time room schedule display on televisions for a convention I work for. I could spend a few hundred dollars on a laptop for each television, or I could duct tape a $25 raspberry pi to the back of it and accomplish the same thing at a significant cost savings...
I can see a value in this sort of ultra-low-cost hardware, is it really so hard for you to? Not every use case requires high performance. In my case, cycling through a bunch of pre-made images (or perhaps I'll throw up a fullscreen
Re: (Score:2)
I can't afford one because of the need for a memory card and HDMI monitor.
Re: (Score:2)
SD cards too expensive? You can't afford an extra $5 on top of $35 for the Pi?
BTW The Pi also has composite video output if you're desperate.
Re: (Score:2)
The X1*** range is more than capable of doing basic 2D scaling and transparency in hardware... you'd have to go more than a few generations back from there to be bottlenecked for basic 2D work.
Re: (Score:2)
Since Ubuntu are no longer using Wayland, how could it possibly matter any longer?
I guess it's a troll, but I'll bite.
In case you are simply deluded, there are other Linux distributions in this wide world, and not all of them are driven by a businessman with a huge control motive. Canonical's NIH solution will be largely ignored outside Ubuntu, as have been Bazaar and Upstart.
Hope this helps.