The Linux-Proof Processor That Nobody Wants

Bruce Perens writes "Clover Trail, Intel's newly announced 'Linux proof' processor, is already a dead end for technical and business reasons. Clover Trail is said to include power-management that will make the Atom run longer under Windows. It had better, since Atom currently provides about 1/4 of the power efficiency of the ARM processors that run iOS and Android devices. The details of Clover Trail's power management won't be disclosed to Linux developers. Power management isn't magic, though — there is no great secret about shutting down hardware that isn't being used. Other CPU manufacturers, and Intel itself, will provide similar power management to Linux on later chips. Why has Atom lagged so far behind ARM? Simply because ARM requires fewer transistors to do the same job. Atom and most of Intel's line are based on the ia32 architecture. ia32 dates back to the 1970s and is the last bastion of CISC, Complex Instruction Set Computing. ARM and all later architectures are based on RISC, Reduced Instruction Set Computing, which provides very simple instructions that run fast. RISC chips allow the language compilers to perform complex tasks by combining instructions, rather than by selecting a single complex instruction that's 'perfect' for the task. As it happens, compilers are more likely to get optimal performance with a number of RISC instructions than with a few big instructions that are over-generalized or don't do exactly what the compiler requires. RISC instructions are much more likely to run in a single processor cycle than complex ones. So, ARM ends up being several times more efficient than Intel."
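
As a concrete footnote to the point above about power management not being magic: Linux already exposes per-device idle power control through the runtime-PM files in sysfs, which is the mechanism tools like powertop drive. The sketch below is a minimal illustration only; the device path is an example placeholder, and it must be run with enough privileges to write to sysfs.

    /* Minimal sketch: ask the kernel to runtime-suspend one device when it
     * is idle, via the standard sysfs runtime-PM interface.  "auto" allows
     * runtime suspend, "on" keeps the device powered.  The path below is
     * only an example; real paths vary from system to system. */
    #include <stdio.h>

    int main(void)
    {
        const char *ctl = "/sys/bus/usb/devices/1-1/power/control"; /* example path */
        FILE *f = fopen(ctl, "w");

        if (!f) {
            perror("fopen");
            return 1;
        }
        fputs("auto\n", f);  /* let the kernel power the device down when idle */
        fclose(f);
        return 0;
    }
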
  • oversimplified (Score:5, Insightful)

    by kenorland ( 2691677 ) on Sunday September 16, 2012 @10:39AM (#41352425)

    ia32 dates back to the 1970s and is the last bastion of CISC,

    The x86 instruction set is pretty awful and Atom is a pretty lousy processor. But that's probably not due to RISC vs. CISC. IA32 today is little more than an encoding for a sequence of RISC instructions, and the decoder takes up very little silicon. If there really were large intrinsic performance differences, companies like Apple wouldn't have switched to x86 and RISC would have won in the desktop and workstation markets, both of which are performance sensitive.

    I'd like to see a well-founded analysis of the differences between Atom and ARM, but superficial statements like "CISC is bad" don't cut it.

  • by Anonymous Coward on Sunday September 16, 2012 @10:48AM (#41352491)

    Like I posted elsewhere, Intel hasn't made real CISC processors for years, and I don't think anyone has.
    Modern Intel processors are just RISC cores with a decoder for the old CISC instruction set.
    RISC beats CISC in the price/performance trade-off, but backwards compatibility keeps the interface the same.

  • by guidryp ( 702488 ) on Sunday September 16, 2012 @10:48AM (#41352495)

    "ARM ends up being several times more efficient than Intel"

    Wow. Someone suffered a flashback to the ancient CISC vs RISC wars.

    This is really totally out to lunch. Seek out some analysis from actual CPU designers on the topic. What I read generally pegs the x86 CISC overhead at maybe 10%, not several times.

    While I do feel it is annoying that Intel is pushing an anti-Linux platform, it doesn't make sense to trot out ancient CISC/RISC myths to attack it.

    Intel chips have lagged because they were targeting very different performance envelopes. But now the performance envelopes are converging, and so are the power envelopes.

    Medfield has already been demonstrated at a competitive power envelope in smartphones.

    http://www.anandtech.com/show/5770/lava-xolo-x900-review-the-first-intel-medfield-phone/6 [anandtech.com]

    Again we see reasonable numbers for the X900 but nothing stellar. The good news is that the whole "x86 can't be power efficient" argument appears to be completely debunked with the release of a single device.

  • x86 to blame? (Score:5, Insightful)

    by leromarinvit ( 1462031 ) on Sunday September 16, 2012 @10:49AM (#41352499)

    Is it really true that x86 is necessarily (substantially) less efficient than ARM? x86 instruction decoding has been a tiny part of the chip area for many years now. While decode probably takes up relatively more area on a small processor like Atom, it's still small. The rest of the architecture is already RISC. Atom might still be a bad architecture, but I don't think it's fair to blame x86 itself for that.

    Also, there is exactly one x86 Android phone that I know of, and while its power efficiency isn't stellar, the difference is nowhere near 4x. From the benchmarks I've seen, it seems to be right in the middle of the pack. I'd really like to see the source for that claim.

  • Re:oversimplified (Score:4, Insightful)

    by stripes ( 3681 ) on Sunday September 16, 2012 @11:13AM (#41352687) Homepage Journal

    I'd say the x86 being the dominant CPU in the desktop has given Intel the R&D budget to overcome the disadvantages of a 1970s instruction set. Anything they lose by not being able to wipe the slate clean (complex addressing modes in the critical data path, and complex instruction decoders, for example), they get to offset by pouring tons of R&D into either finding a way to "do the inefficient, efficiently", or finding another area they can make fast enough to offset the slowness they can't fix.

    The x86 is inelegant, and nothing will ever fix that, but if you want to bang some numbers around, well, the inelegance isn't slowing it down this decade.

    P.S.:

    IA32 today is little more than an encoding for a sequence of RISC instructions

    That was true of many CPUs over the years, even when RISC was new. In fact, it was true even before RISC existed as a concept. One of the "RISC sucks, it'll never take off" complaints was "if I wanted to write microcode I would have gotten onto the VAX design team". While the instruction set matters, it isn't the only thing. RISCs have very, very simple addressing modes (sometimes no addressing modes at all), which means they can get some of the advantages of out-of-order execution without any hardware OoO support. When they do get hardware OoO support, nothing has to fuse results back together, and so on. There are tons of things like that, but pretty much all of them can be combated with enough cleverness and die area. (But since die area tends to contribute to power usage, it'll be interesting to see if power efficiency is forever out of x86's reach, or if that too will eventually fall -- Intel seems to be doing a nice job chipping away at it.)

  • by Alex Belits ( 437 ) * on Sunday September 16, 2012 @11:13AM (#41352693) Homepage

    Visual Studio

    Please, please, please, stay on Windows, we don't need your Microsoft-infected minds spreading their diseases to other systems.

  • Re:oversimplified (Score:5, Insightful)

    by lkcl ( 517947 ) <lkcl@lkcl.net> on Sunday September 16, 2012 @11:34AM (#41352871) Homepage

    I'd like to see a well-founded analysis of the differences between Atom and ARM, but superficial statements like "CISC is bad" don't cut it.

    i've covered this a couple of times on slashdot: simply put it's down to the differences in execution speed vs the storage size of those instructions. slightly interfering with that is of course the sizes of the L1 and L2 caches, but that's another story.

    in essence: the x86 instruction set is *extremely* efficiently memory-packed. it was designed when memory was at a premium. each new revision added extra "escape codes" which kept the compactness but increased the complexity. by contrast, RISC instructions consume quite a lot more memory as they waste quite a few bits. in some cases *double* the amount of memory is required to store the instructions for a given program [hence where the L1 and L2 cache problem starts to come into play, but leaving that aside for now...]
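
    To put one hedged number on that (this example is not from the comment itself): below is a single add-from-memory step, with the classic 32-bit encodings noted in the comments. Byte counts vary with operands, and compressed encodings such as Thumb-2 narrow the gap.

        /* One add-from-memory step; the register assignments are illustrative. */
        int add_field(int acc, const int *p)
        {
            /* x86 (variable-length CISC encoding), acc in eax, p in ebx:
             *     add eax, [ebx+4]      ; 3 bytes (03 43 04)
             *
             * classic 32-bit ARM (fixed 4-byte RISC encodings), acc in r0, p in r1:
             *     ldr r2, [r1, #4]      ; 4 bytes
             *     add r0, r0, r2        ; 4 bytes  (8 bytes in total)
             */
            return acc + p[1];
        }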

    so what that means is that *regardless* of the fact that CISC instructions are translated into RISC ones, the main part of the CPU has to run at a *much* faster clock rate than an equivalent RISC processor, just to keep up with the decode rate. we've seen this clearly in an "empirical observable" way in the demo by ARM last year, of a 500MHz dual-core ARM Cortex-A9 clearly keeping up with a 1.6GHz Intel Atom in side-by-side running of a web browser, which you can find on YouTube.

    now, as we well know, power consumption scales roughly with the square of the clock rate. so in a rough comparison, in the same geometry (e.g. 45nm), that 1.6GHz CPU is going to have roughly TEN times the power consumption of that dual-core ARM Cortex-A9. e.g. that 500MHz dual-core Cortex-A9 is going to be about 0.5 watts (roughly true) and the 1.6GHz Intel Atom is going to be about 5 watts (roughly true).
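
    Spelling that estimate out (this only makes the comment's own approximation explicit; dynamic power actually goes as P ≈ αCV²f, and the "square law" in clock rate folds in the supply-voltage increase usually needed at higher frequencies):

        \[
          \frac{P_{\mathrm{Atom}}}{P_{\mathrm{A9}}} \;\approx\; \left(\frac{1.6\ \mathrm{GHz}}{0.5\ \mathrm{GHz}}\right)^{2} \;\approx\; 10
          \qquad\Rightarrow\qquad 0.5\ \mathrm{W} \times 10 \;\approx\; 5\ \mathrm{W}
        \]

    (core count, voltage, process, and microarchitecture differences are all ignored in this back-of-the-envelope form.)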

    what that means is that x86 is basically onto a losing game... period. the only way to "win" is for Intel and AMD to have access to geometries that are at least 2x better than anything else available in the world. each new geometry that comes out is not going to *stay* 2x better for very long. when everyone has access to 45nm, Intel and AMD have to have access to 22nm or better... *at the same time*. not "in 6-12 months time", but *at the same time*. when everyone else has access to 28nm, Intel and AMD have to have access to 14nm or better.

    Intel know this, and AMD don't. it's why Intel will sell their fab R&D plant when hell freezes over. AMD have a slight advantage in that they've added in parallel execution which *just* keeps them in the game, i.e. their CPUs have always run at a clock rate that's *lower* than an Intel CPU, forcing them to publish "equivalent clock rate" numbers in order not to appear to be behind Intel. this trick - of doing more at a lower speed - will keep them in the game for a while.

    but, if Intel and AMD don't come out with a RISC-based (or VLIW or other parallel-instruction) processor soon, they'll pay the price. Intel bought up that company that did the x86-to-DEC-Alpha JIT assembly translation stuff (back in the 1990s) so i know that they have the technology to keep things "x86-like".

  • by ColdWetDog ( 752185 ) on Sunday September 16, 2012 @11:41AM (#41352943) Homepage

    So does it matter when someone sends you a .pptx file that Office 2003 freezes on? Yeah, yeah, I'm pretty sure you can get a converter, but I like telling people that if their file has an 'x' in the extension it means that it's 'experimental' and they shouldn't send it to others. They need to send the version without the 'x'.

  • by blind biker ( 1066130 ) on Sunday September 16, 2012 @12:24PM (#41353295) Journal

    For me, the year of linux on desktops is now. With Steam coming to Linux [steampowered.com], along with Crossover and pure Linux-ported games, the inevitable has happened. I'm glad Visual Studio [microsoft.com] also runs perfectly on Wine (I'm also making sure to have a party with my friends on Visual Studio 2012 Virtual Launch Party, where thousands of geeks around the globe connect together to party the release of latest Visual Studio).

    A bit of "linux on the desktop" ass-licking, followed by a big, fat Visual Studio plug.

    Ladies and gents, we have a shill. A very smart one, but a shill none the less. Modded up by a few other plants, no doubt.

  • Re:Reality check (Score:4, Insightful)

    by fm6 ( 162816 ) on Sunday September 16, 2012 @12:27PM (#41353323) Homepage Journal

    In geekland, Nobody == Nobody I Know.

  • by hack slash ( 1064002 ) on Sunday September 16, 2012 @01:14PM (#41353699)
    Don't give up hope, hundreds of thousands of people in offices across the globe have made a living whilst playing Windows Solitaire.
  • by 0123456 ( 636235 ) on Sunday September 16, 2012 @01:16PM (#41353711)

    Oh, you're right. A company the size of Intel couldn't possibly spare one or two people for a few weeks to get support for their new power management into Linux.

  • by Bruce Perens ( 3872 ) <bruce@perens.com> on Sunday September 16, 2012 @01:19PM (#41353723) Homepage Journal

    None of today's "RISC" processors are what John Mashey was designing when RISC was introduced.

    I agree (and wrote in the article) that ARM has complicated their own architecture, and that Atom uses a RISC-like processor and instruction translation. However, backward compatibility with all of the generations of x86 still increases the complexity of Atom quite a lot.

    Thumb (ARM's 16-bit instruction set) is itself an instruction translator to the 32-bit opcodes, adding fixed or default operands for many of the instructions.

    The SIMD instructions used by Intel, AMD, and ARM go back to Pixar's CHAP compositing hardware in the 80's.

    None of this would have been in a Stanford MIPS.

  • by Bruce Perens ( 3872 ) <bruce@perens.com> on Sunday September 16, 2012 @02:20PM (#41354331) Homepage Journal

    I didn't write the summary posted on Slashdot. My summary (it's probably still in the "firehose" section) was one line. The Slashdot editor just scraped the first few paragraphs of my article. You can tell the number of people who actually read my article by the discussion of PowerVR graphics. There isn't one.

    Intel's competition with ARM right now is like a doped race-horse. They are hiding the problems of their architecture by using a semiconductor process half the size of the competition. Given equal FABs, we wouldn't see Intel as competitive.

  • by im_thatoneguy ( 819432 ) on Sunday September 16, 2012 @03:32PM (#41355105)

    It also ignores the fact that in flops per watt Intel still dominates ARM.

    It's like comparing a moped to a bus and saying "see look how much more fuel efficient the moped is!"

    True... but then fill a bus with people and suddenly the mpg per person goes through the roof for the bus. You could get 300mpg per person from a bus. Good luck getting that with a moped.
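
    Spelled out with illustrative numbers (the 6 mpg and 50-rider figures are assumptions for the sake of the arithmetic, not data from the thread):

        \[
          6\ \frac{\mathrm{miles}}{\mathrm{gallon}} \times 50\ \mathrm{passengers} \;=\; 300\ \frac{\mathrm{passenger\text{-}miles}}{\mathrm{gallon}}
        \]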

    And just as plug-in hybrids now compete with even mopeds on single-occupancy MPG, you can also see x86 chips (RISC-like internally) out-competing ARM on raw watts. The next generation of Intel chips is going to be not only substantially faster but also at parity on watts.

    Simply stripping down technology inevitably will come back to bite you in the ass. I think the domination of ARM in the mobile space is about to evaporate within the next year on every conceivable metric.

  • by AcidPenguin9873 ( 911493 ) on Sunday September 16, 2012 @03:32PM (#41355107)

    Given equal FABs, we wouldn't see Intel as competitive.

    Intel has had a fab advantage for years, and it's only getting bigger. Ask AMD how it feels - AMD made nice gains with K8 while Intel had uarch problems (Itanium+P4), but as soon as Intel fixed that (Core2/Nehalem/Sandy/Ivy), AMD felt the pain of Intel's fab advantage all over again, and now AMD has uarch problems AND a fab disadvantage.

    Saying "given equal FABs" is a ridiculously stupid way to analyze the processor market. Real chips are what people buy, not some hypothetical ARM A15 produced on Intel's 22nm FinFET or an Atom produced in TSMC 28. If you want to talk about microarchitecture, sure, take process out of the equation. But people don't buy microarchitecture, they buy a final product. Fab advantage allows Intel to hide their uarch problems until they fix them. When the next-gen Atom (Silvermon/Valleyview) comes out, then Intel won't have uarch problems AND they will still have a massive fab advantage.

  • by Bruce Perens ( 3872 ) <bruce@perens.com> on Sunday September 16, 2012 @08:17PM (#41357485) Homepage Journal

    I think I'll engage in a technical discussion with some of the other readers.

    But you're obviously so incensed by my article, so offended, and so outraged, that it would be funnier if I just ignored you and let you steam.
