


Intel Skylake & Broxton Graphics Processors To Start Mandating Binary Blobs
An anonymous reader writes: Intel has often been portrayed as the golden child of the Linux community, beloved by those who want a fully-free system, an untainted kernel with no binary blobs, and a fully-supported open-source driver. For years the Intel Linux graphics driver has required no firmware blobs for acceleration, while AMD's open-source driver ships many binary-only microcode files and Nouveau also needs blobs — including firmware files that NVIDIA still hasn't released for their latest GPUs. Beginning with Intel Skylake and Broxton CPUs, however, the open-source driver will now also require closed-source firmware. The required "GuC" and "DMC" firmware files drive the new hardware's workload-scheduling engine and display microcontroller, respectively. These firmware files are explicitly licensed as closed source and forbid any reverse-engineering. What choices are left for those wanting a fully-free, de-blobbed system while still having a usable desktop?
rootkit? (Score:5, Insightful)
Q: What guarantee do we have that these binary blobs don't contain root kits?
A: None.
This really isn't acceptable. :(
Re:rootkit? (Score:5, Funny)
Q: What guarantee do we have that these binary blobs don't contain root kits?
A: None.
This really isn't acceptable. :(
Aw, c'mon! It's not like the NSA would risk vital US infrastructure, foreign trade, and financial/military/corporate/individual security by deliberately compromising the security of widely used operating systems, software, and/or encryption!
That's just crazy talk.
Strat
Re: (Score:2)
Someone could do it for them.
Well, yeah?
That's kind of the way they typically accomplish these sorts of things, is it not? It's not like you get the compromised software/encryption tools/etc directly from some NSA server farm in Alexandria, VA. Sorry if I assumed everyone took that as a given.
Strat
Re: (Score:2)
I think the reason reverse engineering is forbidden is Intel's new DRM scheme:
https://www.virusbtn.com/virus... [virusbtn.com]
Among other things, this new DRM scheme would also allow malware to hide itself completely, not only from AV software but from you as well. And in a perfect world (i.e. if SGX works as Intel plans) nobody would be able to remove any malware that uses it.
Re:rootkit? (Score:5, Insightful)
Re: (Score:3)
You cross 9 roads and come through unharmed.
So you think about the tenth like "it's just another road... I crossed others before and nothing happened".
But this one is different: this is the one that will kill you.
And this is the binary blob that will spy on you. If you can prove it's not, JUST DO IT.
Can you prove that the microcode running in the GPU isn't a binary-blob-in-Flash that will spy on you? What makes these binary blobs special?
Re:rootkit? (Score:5, Insightful)
Q: What guarantee do we have that these binary blobs don't contain root kits?
A: None.
This really isn't acceptable. :(
This is madness. They own the hardware. If you don't trust the vendor, they can still screw you in hardware. You're fucked either way.
I don't recall people bitching about CPU microcode or any of a dozen subsystems in a typical computer which run on closed proprietary firmware.
I actually think this is something we should be encouraging more of. What is dangerous is systems loading firmware from onboard field-upgradable ROMs, because attackers have leveraged those vectors to destroy gear and persist ownage even after compromised systems have been completely wiped.
Re: (Score:2)
Re: (Score:2)
Q: What guarantee do we have that these binary blobs don't contain root kits? A: None.
This really isn't acceptable. :(
Would you feel better if the CPU/GPU came with the firmware preloaded? I agree that it's not ideal, but the code is not loaded into the kernel; it's loaded into the hardware by the kernel.
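A minimal sketch of what "loaded into the hardware by the kernel" means in practice (my own illustration, not Intel's actual i915 code; the firmware filename and upload helper are placeholders):

/* Sketch only: fetch a firmware blob and hand it to the device.
 * The blob never executes on the host CPU; the driver just copies it
 * into the graphics chip's microcontroller memory.
 */
#include <linux/firmware.h>
#include <linux/device.h>

/* Placeholder for the device-specific MMIO/DMA upload routine. */
static int my_upload_to_microcontroller(struct device *dev,
                                        const u8 *data, size_t size)
{
        /* ... copy 'data' into the GuC/DMC memory here ... */
        return 0;
}

static int my_load_guc_firmware(struct device *dev)
{
        const struct firmware *fw;
        int err;

        /* Pulls the blob from /lib/firmware via the kernel firmware loader. */
        err = request_firmware(&fw, "i915/example_guc.bin", dev);
        if (err)
                return err;

        err = my_upload_to_microcontroller(dev, fw->data, fw->size);
        release_firmware(fw);
        return err;
}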
Re:rootkit? (Score:5, Informative)
Nobody gets upset about the microcode that lives in ROM in the hardware, but if you have a driver that loads the microcode, suddenly everybody loses their shit. Microcode is *everywhere* and it's very rare that you ever get to see it.
Re:rootkit? (Score:5, Insightful)
Are you aware that Intel (and AMD) have binary blobs, combined with strong encryption and cryptographic signatures, loaded into their processors? That those blobs can change the execution behavior of individual instructions with essentially* no way to detect them? Those are called microcode updates, and even if you disable loading new versions of microcode in the BIOS, the processors are delivered with a standard one in onboard ROM.
(* statistical analysis using several processors of the same stepping running in identical systems but with different microcode revisions may work, no guarantee though)
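If you want to at least see which microcode revision your cores report, here is a rough user-space sketch of mine (not from the post; it only shows the version string the CPU advertises, not what the update actually changes):

#include <stdio.h>
#include <string.h>

int main(void)
{
        char line[256];
        FILE *f = fopen("/proc/cpuinfo", "r");

        if (!f) {
                perror("/proc/cpuinfo");
                return 1;
        }
        /* On x86 Linux each logical CPU prints a "microcode : 0x..." line. */
        while (fgets(line, sizeof(line), f)) {
                if (strncmp(line, "microcode", 9) == 0)
                        fputs(line, stdout);
        }
        fclose(f);
        return 0;
}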
Re:rootkit? (Score:4, Insightful)
Between UEFI and SMM I consider x86 a rootkit, period.
Re: (Score:3)
Between UEFI and SMM I consider x86 a rootkit, period.
Very much this, along with the microcode/hardware issues noted above.
Pretty much. If you don't want it snoop-able, don't put that data on, or connect it to or through, a commercial consumer computer, especially one that is not air-gapped from the internet.
The old ways are best. Sneakernets, dead-drops, OTPs for a few examples. The hugely increased reliance on the compromise of digital communications and computer system/network technology and the funding they've necessarily curtailed in other areas as a con
Re: (Score:3)
mandate? (Score:5, Insightful)
They aren't "mandating" anything. You buy their product, and they provide some closed source software with it that you need to get some of the functions. It sucks, but it isn't a "mandate".
You might want to consider letting it not bother you too much, though. After all, these chips have been full of proprietary code in the equivalent of ROM for a long time. The fact that some of it is migrating into RAM doesn't really change things very much.
If you really don't like loading proprietary blobs from RAM, use embedded processors; they usually don't do that because it wouldn't work very well in their environment.
If you really want to run a "fully-free, de-blobbed system", you need to get an open source processor and an open source motherboard.
Re: (Score:3)
The thing is, you're likely to want to in the future. With the most recent generation, Intel's integrated graphics are actually better than AMD's best APU graphics.
http://www.anandtech.com/show/... [anandtech.com]
Only kinda sorta (Score:4, Insightful)
Re: (Score:2)
I can get a 7850k with a good board and 8 gigs of ram for $236 bucks. The equivalent i5 setup is going to be $450.
$450? Hahaha. No. For 285 bucks you can get an i5-4460, an ASRock B85M-HDS mobo, and 8GB of RAM from Newegg.
Re: (Score:2)
AMD:
AMD A10-7850K: $170 CAD [newegg.ca]
GIGABYTE GA-F2A68HM-H: $59 CAD [newegg.ca]
8GB DDR3-1600 (2x4GB): $70 CAD [newegg.ca]
Total: $299 CAD
Intel:
Intel Core i5-4460: $220 CAD [newegg.ca]
ASRock B85M: $85 CAD [newegg.ca] (the HDS is listed as possibly discontinued)
8GB DDR3-1600 (2x4GB): $70 CAD [newegg.ca]
Total: $375 CAD
Re: (Score:2)
There's a $230 or so combo for the 4460 and mobo.
Re: (Score:2)
rsilvergun picked the 7850k, Lunix Nutcase picked the i5-4460. I simply checked the prices on NewEgg.ca
Re: (Score:2)
Uh... that doesn't have the Iris Pro GPU you dope.
Right, but that's apples and oranges (Score:2)
Re: (Score:2)
He's talking about the latest i5 with Iris Pro GPUs that came out this week. The older ones aren't competitive with AMD's APUs in terms of integrated graphics performance.
Re: (Score:2)
And then you'll be wondering why it's not playing the games it was supposed to, and then you notice it doesn't have the GPU you thought it had.
Intel GPUs have always delivered two-plus-year-old performance, and they've been saying they'll be comparable to low-end AMD/NVIDIA GPUs "next year" for about 15 years. That's not a joke. The same fucking marketing promises were made about the 950, etc.
Re:Only kinda sorta (Score:5, Insightful)
No, it doesn't "hang with" AMD's latest APUs, it's about 40% faster in terms of graphics performance and roughly 100-200% faster in terms of CPU performance, all while consuming roughly half the power.
If that's not worth twice the price, I don't know what is.
Re: (Score:2)
Re: (Score:2, Insightful)
You guys are all missing the point. We all know Intel's CPUs are faster than AMD's, and that AMD has seen better days.
AMD's APUs have to date had faster graphics and Intel's integrated graphics have lagged behind. The grandparent points out that the brand new i5-5675c narrows that gap or exceeds AMD integrated graphics performance.
Just the i5-5675c is going to go for $276 in quantity initially. Predictably, like any rational vendor, Intel is going to charge for the privilege of using this new silicon with go
Re: (Score:2)
They said "equivalent" not "outperform". You've added some sort of qualifier that the original person didn't to try to inflate the price.
Re: (Score:2)
Re: (Score:2)
You seem to be massively underestimating the performance of modern (or even decade old) integrated graphics.
Modern IGPs will run GTA5 at 60fps. They will happily let you connect 3 4k monitors and run them at perfectly fine refresh rates. They will let you do basically anything you want, except for run very high end games at very high end detail settings.
Re: (Score:2)
Re: (Score:2)
You don't even need an expensive one. Any one that supplies a DisplayPort port will let you hook up 4 monitors straight away to that (daisy chaining ftw).
Re: (Score:2)
Choices (Score:2, Insightful)
What choices are left for those wanting a fully-free, de-blobbed system while having a usable desktop?
How about don't use these new systems? And keep on using what you have used in the past?
I've personally fixed bugs (Score:5, Insightful)
I did kernel hacking for 10 years. I've fixed bugs in Ethernet drivers and helped document (and work around) hardware errata. I've also had to deal with trying to rebuild Nvidia drivers when the binary blob was no longer compatible with the latest kernel source. Having open-source drivers is key for those of us that actually *do* work on this stuff.
Re: (Score:3, Insightful)
Same here, though for me, it was ATA and USB HID devices. As a programmer, nothing annoys me more than running into bugs and thinking, "I could fix this in two minutes if I had the source," and not being able to fix it because I don't. I've fixed bugs in many other people's code on many occasions simply because they annoyed me.
With that said, I've never seriously entertained touching a GPU driver; I think that might very well be the special hell that Captain Reynolds was talking about. :-)
Re: (Score:2)
That's actually why the entire Free Software movement exists: RMS was pissed that he couldn't fix his printer driver because it was closed-source.
Re: (Score:3)
RMS was pissed that he couldn't fix his printer driver because it was closed-source.
He wasn't pissed because he couldn't fix the printer driver, he was pissed because Xerox wouldn't fix it. If Xerox had accepted his bug report and fixed the bug, he wouldn't have gotten mad in the first place.
Re: (Score:2)
Re: (Score:2)
That actually doesn't sound too horrible, but then again, I enjoy writing parsers, so take that with a grain of salt... or an entire salt truck.
Re: (Score:2)
I did kernel hacking for 10 years. I've fixed bugs in Ethernet drivers and helped document (and work around) hardware errata. I've also had to deal with trying to rebuild Nvidia drivers when the binary blob was no longer compatible with the latest kernel source. Having open-source drivers is key for those of us that actually *do* work on this stuff.
This is different. Nvidia drivers run on the CPU, so a change to the OS can break them, and with them your hardware. This firmware runs on the graphics chip: when I want to run a totally different OS on, say, an ARM CPU, the blob will still work.
Why this presumption that you need 3D acceleration (Score:2)
I don't understand this presumption that you need 3D acceleration to have a usable desktop. There are plenty of older style cards that will work just fine with desktops that don't require 3D acceleration.
You may want 3D acceleration and you may want to play games, but that isn't required for a "usable desktop."
Re:Why this presumption that you need 3D accelerat (Score:4, Informative)
That would be because any modern operating system (including most linux distros) uses 3D acceleration on a graphics card to put windows on a screen.
Re: (Score:2)
I thought that people were using Linux to get rid of windows? /duck
Re: (Score:2, Insightful)
Your dictionary pedantry adds no value and in fact obscures the issue.
Well said. Being right is never a substitute for feeling righteous.
Re: (Score:3)
Being technically correct is never a substitute for understanding that you're completely wrong when your answer is applied to the pragmatic, normal solution.
Re: (Score:2)
You can run the SVGA drivers with virtually any modern 3D card. If you're that paranoid about the BLOBs, you have an option. How is that "wrong"?
It's not like Intel, AMD, or NVidia are going to start publishing the source code for their BLOBs just because you're paranoid about their contents.
Re: (Score:2)
You can run the SVGA drivers with virtually any modern 3D card. If you're that paranoid about the BLOBs, you have an option. How is that "wrong"?
It's not - but it's not practical in the context of any modern operating system's demands from a graphics card.
Re: (Score:2)
Re: (Score:2)
The "pragmatic" solution would be to stop bitching that the BLOBs are proprietary and to just use what is made available to you.
Re: (Score:2)
For such a short post, you've got plenty of wrong things.
You assume that this is only about 3D.
Now, I don't know about Intel's plans, but with the open-source driver and without the firmware blob, I can't even get my AMD card to work at more than 800x600.
No mode settings (screen resolution), no power management, no video decoding, no accelerated anything: neither 3D nor 2D.
Without the firmware blob, it's just an expensive power hungry 800x600 dumb frame buffer.
And there are _not_ plenty of cards out there.
Intel,
Re: (Score:2)
A compositing display server saves a lot of CPU, by just doing the rendering and rasterisation of windows once and then alpha blending the resulting windows. You don't need to redraw for expose events, you just composite the results. This saves even having to bring the background applications into the cache (or into RAM if they're swapped out). Within an application, you can get the same benefit, caching the rendered results of (for example) a complex data-driven view and not having to do a load of queri
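A toy illustration of that caching idea (mine, not any real compositor's code): each window keeps an already-rendered buffer and the compositor just blends the cached buffers, so nothing has to repaint on an expose.

#include <stdint.h>
#include <stddef.h>

struct window {
        const uint8_t *pixels;  /* cached premultiplied BGRA render */
        size_t npixels;
};

/* "src over dst" for one cached window; no application redraw needed. */
static void composite(uint8_t *dst, const struct window *win)
{
        for (size_t i = 0; i < win->npixels * 4; i += 4) {
                uint8_t a = win->pixels[i + 3];
                for (int c = 0; c < 3; c++)
                        dst[i + c] = win->pixels[i + c] +
                                     dst[i + c] * (255 - a) / 255;
        }
        /* (destination alpha handling omitted for brevity) */
}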
Re: (Score:2)
Since when are we setting the bar so low? We had a usable desktop two decades ago, if you're willing to just toss out modern features. We also had "usable cars" and "usable airplanes" fifty years ago, but I'll bet you'd prefer to fly cross-country in a modern Boeing 777 than an old turboprop, right?
Incidentally, there's really no such thing as "3D acceleration", with the possible exception of support for geometry-specific functions on the more modern cards. Much of the power of modern GPUs is dedicated t
So (Score:2)
Re: (Score:2)
http://en.wikipedia.org/wiki/Open_Graphics_Project [wikipedia.org]. My recollection is they eventually put out an overpriced, underperforming card, and in the following 5 years progress has passed them by.
I think that the work required to make a competitive GPU company would cost far beyond $1e9, and just isn't going to happen until several years after semiconductor technology becomes stagnant, if ever.
5M backers @ $1000 each? Maybe (Score:3, Insightful)
Maybe, but you'll have to have an awfully attractive proposal and back it with heavyweight talent that already had a good reputation for delivering the goods.
For example, if a major video card vendor went belly-up for reasons not related to their tech (i.e. for plain old poor business practices) and their best coders banded together and started a kickstarter with a goal of $5B above and beyond the $1B they were personally tossing into the pot, they'd get my attention. But then again, they probably would be
Move To France (Score:2, Insightful)
Be a conehead. Use that telex machine thing. Be a pepper. Go for it. Be all you can be. Aim high. Jump in a lake. Just not a skylake. Partake of toe jam and jelly not found in any store. Worship his holiness.
Artificial hardware vs software distinction (Score:5, Interesting)
If the same blob was included in chip's ROM, nobody would think it's different from before right? The only difference here is that Intel is saving some money by not having a flashable ROM in the chip and instead having host OS provide the same blob on each boot. It's not like Windows driver gets a better blob or accesses some secret features not given to Linux developers.
If you are interested in open-source hardware, this is not it. But open-sourcing all code running on the main CPU is a significant step in itself and has many practical advantages (like being able to run/write whatever OS you want).
If the community had done more with existing open-hardware contributions like OpenSPARC, I think we would see many new ones.
Re: (Score:2)
The only difference here is that Intel is saving some money by not having a flashable ROM in the chip and instead having host OS provide the same blob on each boot.
With the new approach, it looks easier to fold malware into unexpected places.
Critical distinction between HW & SW: user fre (Score:4, Interesting)
Yes, we would think it's different because it is different. When the functionality of that blob is in a ROM chip or circuitry, nobody can update it, including the proprietor, without hardware modification or hardware replacement. When the functionality is in software or any kind of reprogrammable device, the question becomes who is allowed to run, inspect, share, and modify that code. This is an important ethical distinction that the developmental philosophy of the younger open source movement was designed to never raise as an issue because that movement wants to pitch a message of cheap labor to businesses.
All the questions of software freedom enter the picture because you're dealing with software now. All the issues that the open source movement was designed not to raise (older essay on this topic [gnu.org], newer essay on this topic [gnu.org]) the older free software movement raised over a decade before the open source movement began.
If this code were distributed as Free Software to its users, this could be great news for all of us (even the majority of computer users who will never fully take advantage of these freedoms because they're never going to become programmers). Programmers can accomplish wonderful practical benefits like putting in interesting features, fixing bugs, learning from the code, all while being friendly with others by giving or selling services based on improving that code, and helping to keep users safe from malware all along the way.
If this code is distributed as non-free user-subjugating software (a.k.a. proprietary software), the proprietor (Intel in this case) is the only party who can inspect, share, and modify that code. And users (regardless of technical ability) are purposefully left out of controlling their own computers, which is unethical.
"forbid any reverse-engineering" (Score:5, Interesting)
"These firmware files are explicitly closed-source licensed and forbid any reverse-engineering."
Forbidding any reverse-engineering? I guess Intel will not be releasing this in Europe then.
Why do people even care about this? (Score:4, Insightful)
If it weren't for the fact that these binary blobs are updateable, no one would care. For example, your hard disk certainly has a "binary blob" in the form of its firmware, but because the OS isn't able to update it, no one cares and happily ignores it. However, the moment someone releases a hard drive where the OS can supply the binary blob so that the hard disk firmware is easily updated, the open source community will immediately reject this new device even though the only difference between it and the old device is that the old one, in the event of a firmware bug, could not be updated and simply remained unreliable for the lifetime of the device.
Indeed, that's probably what is happening here. Intel likely had such code in their cards all along, but previously the code was in a non-reprogrammable ROM. Now they've decided to add a new feature to their cards to allow bugs that are discovered in this code to be corrected, and everyone is simply going to complain about it. They were happy when no one could access the code and fix the bugs, but now that Intel can do it, they're not willing to accept not being able to do it themselves as well.
It's rather silly. Just imagine if the card could accept a binary blob, but refused it if it didn't match cryptographic checksums in hardware that cannot be updated. It would be effectively the same as if the firmware were stored in a ROM in the hardware itself, in that no one would ever be able to modify that code, but you can still bet that the open source community would be up in arms over not having access to the source code, simply because whenever they can touch binary code, they're unable to accept the fact that they don't have the source to that binary code.
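For illustration only (my sketch, with a dummy digest, not any real Intel hash): the host-side equivalent of that "checksums baked into the hardware" idea is just refusing any blob whose hash doesn't match a pinned value.

#include <string.h>
#include <openssl/sha.h>

/* Dummy pinned digest; a real device would burn this into ROM/fuses. */
static const unsigned char expected[SHA256_DIGEST_LENGTH] = { 0 };

static int blob_is_trusted(const unsigned char *blob, size_t len)
{
        unsigned char digest[SHA256_DIGEST_LENGTH];

        SHA256(blob, len, digest);               /* hash the firmware image */
        return memcmp(digest, expected, sizeof(digest)) == 0;
}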
Re: (Score:2)
It's still a problem, but it's so minor compared to closed drivers, etc., that I too question how much it matters. It needs to be noted. People need to be aware of it. Then we move on.
Open Source GPUs (Score:5, Informative)
An open source GPU: https://github.com/jbush001/NyuziProcessor
And its wiki: https://github.com/jbush001/NyuziProcessor/wiki
And even some peer-review: http://www.cs.binghamton.edu/~millerti/nyami-ispass2015.pdf
We could have fixed this problem a decade ago if the FOSS community had gotten behind the Open Graphics Project, but they're not as interested in FOSS-friendly graphics as they say they are. This is because most FOSS enthusiasts are more interested in gratis than they are in freedom.
Re: (Score:2)
Or maybe it's because building an open source GPU that is even remotely competitive is nearly impossible. Spinning your own silicon is very expensive and requires a lot of resources, like expensive closed source software. FPGAs and the like are not powerful enough. Even if you find a way to do it, the amount of research required to build something that even implements enough of OpenGL 2 in hardware to give half decent performance would prevent anything useful ever being released.
It's the same reason why you
Mandate open firmware, awesome idea. (Score:2)
Yes. Mandating open firmware, awesome idea. Because we want to need X different compilers compiling code for Y different CPUs/MCUs running Z basic OSes just to compile our kernel and use our hardware. It will make our lives so much better. Why not just mandate that those embedded CPUs must run Linux themselves?
Perhaps it makes sense to differentiate between binary drivers for Linux (bad) and binary blobs running on the embedded hardware talking to open-source drivers (OK)?
No OpenBSD graphics drivers then? (Score:2)
Re: (Score:2)
There are plenty of drivers in all "free" operating systems that are pretty much "binary blobs", for example the drivers for RAID cards. They were written using NDA'd documentation. Joe Blow does not have access to these documents (including errata) and has no ability to make any changes to the driver without potentially introducing terrible bugs.
So whine all you want about "free software", the simple fact is that every "free" operating system is full of this type of unmaintainable "blob" code.
Tell me ar
Okay, a really, really silly question - (Score:2)
Has anyone found out *why* Intel is doing this?
What springs to mind is that maybe they're using code from a third party (i.e. video codecs, HDMI/DRM management, etc.) and *that* third party's code is not open source (for whatever definition of 'open' you prefer).
If, (let me stress again, if) that's the case, then providing Intel with an open source solution that works better *might* resolve the issue.
Re: (Score:2)
3-D hardware developers are very jealous of their high-paying jobs and they want to keep the "secret sauce" to themselves so they can maintain their stranglehold on the market. They will only allow their code to be released in binary form.
see what happens with intel when AMD is out of the (Score:2)
See what happens with Intel when AMD is out of the way: they jack up prices and cut back on stuff. First it was cutting back on PCIe lanes in $350-$400 CPUs, now it's open source. What's next, locked bootloaders? What, to boot Linux you'll need to buy a $250+ (1 CPU) or $400-$600 (2 CPU) server board, or a top-of-the-line $300-$500 gamer board?
Damn you, Intel (Score:2)
Curse your sudden but inevitable betrayal!
Open source has won... and then we lost (Score:2)
While I feel the outrage over this move is probably overblown, it does vindicate the fairly extreme positions regarding free software held by Richard Stallman. Basically, the watered-down idea of free software, called "open source", has taken off and really won the world over. Even Microsoft is embracing open source. Everyone sees the benefits. The problem is that they see that it can benefit their existing proprietary models quite well. So for example Microsoft, while being more open to open source and
Re: (Score:2)
This isn't a step back for open source; it's just staying the same.
Despite the great success of open source software, which I'm using to write this post, the underlying computers where this software runs have always been proprietary hardware implementations with some proprietary firmware blobs (eg, the BIOS) stored somewhere.
The fact that some companies have moved from hardware to on-board firmware stored on EEPROMs to firmware which needs to be uploaded by the driver isn't a real change, as long as the li
Back to Nvidia (Score:2)
Binary Blobs? (Score:2)
Yes, I tend to see things as black or white. And yes, I could use more exercise. But I'm working on it.
I consider reverse engineering EULAs... (Score:3)
I consider reverse engineering EULAs to be non-binding, as there is no consideration in such a contract, nor does it have a reasonable duration or any option to cancel. (At least in most reverse-engineering clauses I've read; there may be exceptions out there.)
Re: (Score:2)
Reverse-engineering is often allowed anyway if it's for the purposes of integration and compatibility.
Hence Samba can happily reverse-engineer things, no matter what the Windows licence says. They can't TAKE CODE (i.e. anyone tainted by seeing the Windows source can't contribute), but reverse-engineering from a binary is another matter entirely.
And just because a contract says something's not allowed, it doesn't mean anything unless a court agrees. 99.9% of the time, it would never even get that far. Yes, it's
#TRANSLATIONFAIL# Re:mod 30wn (Score:4, Funny)
You have exceeded the limits of my universal translator, and that's even after I installed the Yodaspeak and Tamarian-metaphor-interpretation modules (side-note: the latter is huge, it has to incorporate the entirety of the Tamarian race).
The translator did make this out, though:
"Mod parent post down"
"#UNCLEARCONTEXT# Operating System"
"#UNCLEARMEANING# Possible reference to poster making many recent repeated arguments related to either the current topic of discussion, BSDI, or both, and a possible relationship between the current topic of discussion and BSDI"
"#SPECULATIONCONTINGENTONPREVIOUSUNCLEARTRANSLATION# Possible insult related to the possible many recent repeated arguments mentioned above"
If you will kindly let me know what additional modules I need to install in my universal translator, I will be able to understand you better. Thank you.
Re: (Score:2)
If you will kindly let me know what additional modules I need to install in my universal translator, I will be able to understand you better. Thank you.
The Markovian [wikipedia.org] module (although, by comparison to Mr. Shaney's posts, that was, well, rather broken Markovian; perhaps it was published by the Dissociated Press [wikipedia.org]).
Darmok and Jalad at Tanagra. (Score:2)
Re:This matters because... (Score:5, Insightful)
While I'm inclined to dismiss binary blobs as largely innocuous in most scenarios, you are oversimplifying things considerably.
1) Just because *I* don't have the time or interest to modify display firmware, doesn't mean I'm not in a position to benefit from *other people* doing so. Witness the entire Linux infrastructure, which owes its existence to the fact that the software stack of the time was NOT locked down, and critical hardware was all reasonably well documented.
2) The binary blobs are themselves dangerous - driver software is typically running with very high security clearance, and you have absolutely NO idea what is going on inside those blobs. Couple that with the fact that we now KNOW the NSA (and presumably other organizations as well) have actively recruited several major companies to collaborate in compromising the security of commodity hardware, and we're in the position of being completely unable to trust ANY binary-blob software in a security-critical scenario. Since Intel was pretty much the go-to option for decent(ish) fully open-source display accelerators, that alone validates a subset of the original question: What are our options now if we want a modern desktop that can be audited for security?
Re: (Score:2)
The binary blobs are themselves dangerous - driver software is typically running with very high security clearance, and you have absolutely NO idea what is going on inside those blobs.
The hardware is dangerous, typically running with very high security clearance, and you have absolutely NO idea what is going on inside those transistors.
Couple that with the fact that we now KNOW the NSA (and presumably other organizations as well) have actively recruited several major companies to collaborate in compromising the security of commodity hardware, and we're in the position of being completely unable to trust ANY binary-blob software in a security-critical scenario.
I KNOW there are devil worshipers operating in the world so I am "completely unable to trust" ANYONE because they may be a devil worshiper.
Without specific information what you KNOW is FUD.
Since Intel was pretty much the go-to option for decent(ish) fully open-source display accelerators, that alone validates a subset of the original question: What are our options now if we want a modern desktop that can be audited for security?
Before, the very same proprietary firmware was burnt into silicon. The only difference "now" is less ignorance.
Re: (Score:2)
Not really. See my reply above: http://slashdot.org/comments.p... [slashdot.org]
Re: (Score:2)
The binary blobs are themselves dangerous - driver software is typically running with very high security clearance, and you have absolutely NO idea what is going on inside those blobs.
Well, one thing that might not be going on inside those blobs is "running on the CPU". The Intel download page for the firmware [01.org] says of the GuC firmware:
Upload and forget (Score:2)
"The binary blobs are themselves dangerous - driver software is typically running with very high security clearance, and you have absolutely NO idea what is going on inside those blobs."
Then why not rewrite the (opensource) driver so the only thing it does is upload the binary blob to the graphics card? Then you're back to present behavior, a graphics card that runs closed-source microcode. Has anyone performed a security audit of any of today's top desktop CPUs?
The ideal solution would be nothing less than
Re: (Score:2)
*If* the binary blobs are only executed on the GPU, then that would seem reasonable to me. And I suppose that might be the case - it would save a few nickels' worth of flash or ROM on the graphics chip, which could be relevant for budget-oriented GPUs. Usually, though, when you hear of binary blobs in a driver, they're running *as part of the driver* - aka on the CPU, NOT the GPU.
Re: (Score:3)
Compromised hardware though is (potentially) far more limited in its invasiveness. The CPU and motherboard chipset has fairly unrestricted access to the entire system and must be trusted, but pretty much everything else must go through those, and generally through the OS as well, which limits the amount of nefarious activity it can get up to (or at least makes it considerably more difficult, one would hope). For example, the firmware on the video card itself will have a difficult time gaining unrestricted
Re: (Score:3, Interesting)
Unless the system has an I/O MMU, the hardware devices and any firmware they may be running have unrestricted access to RAM.
I/O MMUs were almost exclusive to server chipsets until some time ago.
Nowadays they are more common (spurred mostly by virtualization needs) but not totally universal yet. Intel likes to disable the feature in the K CPU models (which have unlocked frequency multipliers for better overclocking options).
I don't keep track of the status of phone/tablet SoCs, but if I had to hazard a guess
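If you want to check your own box, a rough sketch of mine (it assumes active IOMMU units show up under /sys/class/iommu, as they do with intel-iommu/VT-d and AMD-Vi on current kernels):

#include <stdio.h>
#include <string.h>
#include <dirent.h>

int main(void)
{
        DIR *d = opendir("/sys/class/iommu");
        struct dirent *e;
        int found = 0;

        if (!d) {
                puts("no /sys/class/iommu: IOMMU absent or disabled");
                return 0;
        }
        while ((e = readdir(d)) != NULL) {
                if (!strcmp(e->d_name, ".") || !strcmp(e->d_name, ".."))
                        continue;
                printf("IOMMU unit: %s\n", e->d_name);
                found = 1;
        }
        closedir(d);
        if (!found)
                puts("IOMMU not active: devices can DMA anywhere in RAM");
        return 0;
}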
Re: (Score:2, Interesting)
What you say is bullshit. As someone who installed NetBSD in the 90s and has also used many different GNU/Linux versions in the past, my experience is simply that the more open a system is, the better. I've seen and had to fix countless problems with systems that have proprietary drivers and binary blobs. That just sucks. What is not open cannot be fixed by someone else, and companies tend not to fix things properly.
If you haven't had this experience, then you must be exceptionally lucky or about 8 years old. The
Re: (Score:3)
My experience is that the most reliable systems are closed-source proprietary systems, because the development teams are well paid and well motivated. For example, look at the systems used by Wall Street traders, hospitals, etc. These systems are really, really mission-critical and are almost all closed-source software packages that come with extensive support and cost big money.
But if you've never worked with large enterprise systems, you can be forgiven for your ignorance
Re: (Score:2)
The problem is that if a blob isn't compatible with the open-source kernel, then you have problems either license-wise or with non-functioning software.
Re: (Score:2)
Re: (Score:2)
A binary blob just means they're licensing the shit now because they can't make it work otherwise, and the licensing deals forbid open-sourcing it so other GPU companies can't look at it and steal stuff from it, or RATHER so the other GPU companies can't so easily look and see what's being used that's covered by their patents.
Re: (Score:2)
Don't hold your breath. ARM is just as hostile to open source as the other GPU vendors.
Nonvolatile memory on package (Score:2)
a WiFi vendor could, for example, make different chips for different parts of the world hard-wired for the locally-mandated frequencies, OR make one chip that can be programmed with microcode in a blob to meet local regulations.
Third option: include a small block of PROM in the chip package to store the region-specific parameters.
Another example is the programmable logic in so many systems; manufacturers can make a single PCB that can support different options and system configurations based on a different fuse-pattern load - a binary blob whose differences from a microcode binary blob are insignificant.
The differences are insignificant technically but significant politically. Microcode stored on a nonvolatile memory in the CPU or chipset will work with whatever operating system is loaded. Microcode stored in volatile RAM works only with operating systems whose publisher has licensed the microcode.
Re: (Score:2)
what an awesome way to enable undetectable malware
Re: (Score:2)
Why would you trust the vendor to burn the blob into a PROM but not trust the vendor when he gives you the very same blob and tells you you can upload it from Windows or Linux or BSD etc????
Because several hardware makers have refused permission to include the required blob on operating system install media and distribute the blob from the operating system's non-free driver repository.
Which jurisdiction's law? (Score:2)
Anonymous Coward wrote:
You can't forbid what is in the law.
Which jurisdiction's law were you talking about? In the United States, there is no right to reverse engineer. In the European Union, there is no right to immigrate from the United States.