The Linux-Proof Processor That Nobody Wants 403

Bruce Perens writes "Clover Trail, Intel's newly announced 'Linux-proof' processor, is already a dead end for technical and business reasons. Clover Trail is said to include power management that will make the Atom run longer under Windows. It had better, since Atom currently provides about 1/4 of the power efficiency of the ARM processors that run iOS and Android devices. The details of Clover Trail's power management won't be disclosed to Linux developers. Power management isn't magic, though — there is no great secret about shutting down hardware that isn't being used. Other CPU manufacturers, and Intel itself, will provide similar power management to Linux on later chips. Why has Atom lagged so far behind ARM? Simply because ARM requires fewer transistors to do the same job. Atom and most of Intel's line are based on the ia32 architecture. ia32 dates back to the 1970s and is the last bastion of CISC, Complex Instruction Set Computing. ARM and all later architectures are based on RISC, Reduced Instruction Set Computing, which provides very simple instructions that run fast. RISC chips allow the language compilers to perform complex tasks by combining instructions, rather than by selecting a single complex instruction that's 'perfect' for the task. As it happens, compilers are more likely to get optimal performance with a number of RISC instructions than with a few big instructions that are over-generalized or don't do exactly what the compiler requires. RISC instructions are much more likely to run in a single processor cycle than complex ones. So, ARM ends up being several times more efficient than Intel."
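The summary's claim that compilers combine simple RISC instructions instead of picking one complex CISC instruction can be sketched with a toy simulator. This is an illustration only: the instruction names in the comments are made up and do not correspond to real x86 or ARM encodings.

```python
# Toy illustration of the summary's CISC-vs-RISC point (hypothetical
# instructions, not real x86/ARM semantics): a CISC machine offers one
# complex memory-to-memory instruction, while a RISC machine composes
# the same effect from simple register operations the compiler emits.

def cisc_add_mem(mem, addr, value):
    """One complex instruction: mem[addr] += value, multi-cycle in hardware."""
    mem[addr] += value          # load, add, and store fused into one op

def risc_add_mem(mem, addr, value):
    """Three simple instructions a RISC compiler would emit instead."""
    r1 = mem[addr]              # LOAD  r1, [addr]
    r1 = r1 + value             # ADD   r1, r1, value
    mem[addr] = r1              # STORE r1, [addr]

mem_a = {0x10: 5}
mem_b = {0x10: 5}
cisc_add_mem(mem_a, 0x10, 3)
risc_add_mem(mem_b, 0x10, 3)
assert mem_a == mem_b == {0x10: 8}   # same result either way
```

Either sequence produces the same memory state; the argument in the summary is about which form the compiler can schedule and optimize more freely, not about what is computable.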
This discussion has been archived. No new comments can be posted.


Comments Filter:
  • by wvmarle ( 1070040 ) on Sunday September 16, 2012 @10:36AM (#41352413)

    Nice advertisement for RISC architecture.

    Sure it has advantages, but obviously it's not all that great. After all Apple ditched the RISC-type PowerPC for CISC-type Intel chips a while back, and they don't seem to be in any hurry to move back. It seems no-one can beat the price/performance of the CISC-based x86 chips...

  • by UnknowingFool ( 672806 ) on Sunday September 16, 2012 @10:47AM (#41352485)

    Some here immediately cried anti-trust, not understanding why Intel won't support Linux on Clover Trail. It's not an easy answer, but power efficiency has been Intel's weakness against ARM. If consumers had a choice between an ARM-based Android tablet and an Intel-based one, the Intel one might be slightly more powerful in computing but comes at the cost of battery life. For how most consumers use tablets, the increase in computing power isn't worth the decrease in battery life. For geeks it's worth it, but general consumers don't see the value. Now if the tablet ran a desktop OS like Windows or Linux, the advantages would be more apparent; however, the numbers favor Windows, as there are far more likely to be desktop Windows users with an Intel tablet than desktop Linux users with one. As a short-term strategy, it makes sense.

    Long term, I would say Intel isn't paying attention. Considering how MS has treated past partners, Intel is being short-sighted to bet its mobile computing hopes on MS. Also, have they seen Windows 8? Intel-based tablets might appeal to businesses, but Win 8 is a consumer OS; so businesses aren't going to buy it, and consumers aren't going to buy it either. Intel may have bet on the wrong horse.

  • by vovick ( 1397387 ) on Sunday September 16, 2012 @11:02AM (#41352591)

    The question is, how much can the hardware optimize the decoded RISC-like micro-ops? Or does the optimization not matter much at this point?

  • by leathered ( 780018 ) on Sunday September 16, 2012 @11:03AM (#41352601)

    ... and the reason is not efficiency or performance. Intel enjoys huge (50%+) margins on x86 CPUs that simply will not be tolerated by tablet or mobile device vendors. Contrast this with the pennies that ARM and their fab partners make on each unit sold. Even Intel's excellent process tech can't save them cost-wise when you can get a complete ARM SoC with an integrated GPU for $7.

  • Re:oversimplified (Score:3, Interesting)

    by Anonymous Coward on Sunday September 16, 2012 @11:13AM (#41352679)

    What really kills x86's performance/power ratio is that it has to maintain compatibility with ancient implementations. When x86 was designed, things like caches and page tables didn't exist; they got tacked on later. Today's x86 CPUs are forced to use optimizations such as caches (because it's the only way to get any performance) while still maintaining the illusion that they don't, as far as software is concerned. For example, x86 has to implement memory snooping on page tables to automatically invalidate TLBs when the page table entry is modified by software, because there is no architectural requirement that software invalidate TLBs (and in fact no instructions to individually invalidate TLB entries, IIRC). Similarly, x86 requires data and instruction cache coherency, so there has to be a bunch of logic snooping on one cache and invalidating the other.

    RISC CPUs were originally designed with performance in mind: instead of having the CPU include all that logic to deal with the performance optimizations and make them transparent, the software has to handle them explicitly (e.g. TLB flushes, cache flushes and invalidation, explicit memory barriers, etc.). As it turns out, it's much easier for software to do that job (because the programmer knows when they are needed, instead of having the CPU check every single instruction for these issues), and it makes the CPU much more efficient.
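The software-managed-TLB argument above can be made concrete in miniature. This is a toy model with made-up addresses, not any real MMU: the point is only that on the RISC-style design the OS must explicitly invalidate a cached translation after editing the page table, because no hardware snoops the page table on its behalf.

```python
# Toy model of a software-managed TLB (hypothetical addresses, not a
# real MMU). After the OS edits the page table, the cached translation
# stays stale until software explicitly invalidates the TLB entry.

page_table = {0x1000: 0xAAAA}   # virtual page -> physical frame
tlb = {}                        # cached translations

def translate(vpage):
    if vpage not in tlb:                 # TLB miss: walk the page table
        tlb[vpage] = page_table[vpage]
    return tlb[vpage]

translate(0x1000)                        # warm the TLB
page_table[0x1000] = 0xBBBB              # OS remaps the page...
stale = translate(0x1000)                # ...but the TLB still says 0xAAAA

tlb.pop(0x1000, None)                    # explicit software invalidate
fresh = translate(0x1000)                # now the new mapping is visible

assert stale == 0xAAAA and fresh == 0xBBBB
```

On the x86 design the commenter describes, the hardware would have had to detect the page-table write itself and evict the stale entry, which is exactly the snooping logic being criticized.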

  • Re:oversimplified (Score:4, Interesting)

    by RabidReindeer ( 2625839 ) on Sunday September 16, 2012 @11:22AM (#41352761)

    This is just a rant session about Atom. Someday linux devs will resign themselves to the fact that linux is (somewhat) great for servers and terrible for almost everything else. This will probably get modded as trolling but if I said the opposite thing about MS it would be insightful. In my opinion this entire article is trolling.

    Well, excuse me for living.

    I boot up Windows for 3 reasons:

    1. Tax preparation
    2. Diagnosing IE-specific issues
    3. Flight Simulator (Yes, I know, there's a flight simulator for Linux, but I like the MS one OK)

    Mostly the Windows box is powered off, because those are infrequent occasions and I'd rather not add another 500 watts to the A/C load. All the day-to-day stuff I do to make a living runs on Linux, if for no other reason than the fact that I'd have to take out a second mortgage to pay for all the Windows equivalents of the database, software development, and text-processing tools that come free with Linux. Or in some cases, free for Linux.

    If you said "Linux is terrible for almost everything else" and gave specific examples, you'd be insightful. Given however, that I'm quite happy with at least 2 of the "everything else"s (desktop and Android), lack of specific illustrations makes you a troll.

  • Reality check (Score:4, Interesting)

    by shutdown -p now ( 807394 ) on Sunday September 16, 2012 @11:34AM (#41352869) Journal

    If nobody wants it and it's a dead end for technical and business reasons, then how come there is a slew of x86 Win8 devices announced by different manufacturers, including guys such as Samsung, who have no problem earning boatloads of money on Android today?

    Heck, it's even funnier than that - what about Android devices already running Medfield?

  • by stripes ( 3681 ) on Sunday September 16, 2012 @11:34AM (#41352873) Homepage Journal

    First, RISC instructions complete in one cycle. If you have multi-cycle instructions, you're not RISC

    LOAD and STORE aren't single-cycle instructions on any RISC I know of. Lots of RISC designs also have multicycle floating-point instructions. A lot of second- or third-generation RISCs added a MULTIPLY instruction, and it was multiple cycles.

    There are not a lot of hard and fast rules about what makes things RISCy, mostly just "they tend to do this" and "they tend not to do that". Like "tend to have very simple addressing modes" (most have register+constant displacement -- but the AMD29k had an adder before you could get the register data out, so R[n+C1]+C2, which is more complex than the norm). Also "no more than two source registers and one destination register per instruction" (I think the PPC breaks this) -- oh, and "no condition register", but the PPC breaks that too.
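The addressing-mode contrast in the comment above is easy to state as arithmetic. This is a sketch with a made-up register file, following the comment's own notation: the usual RISC mode computes R[n] + C after reading the register, while the AMD29k form described adds a constant to the register *number* before the register file is even read.

```python
# Sketch of the two addressing modes from the comment above (register
# contents are made up). Typical RISC: effective address = R[n] + C.
# AMD29k-style as described: R[n + C1] + C2, an extra add before the
# register file lookup.

R = {0: 0, 1: 0x2000, 2: 0x3000, 3: 0x4000}   # toy register file

def ea_simple(n, c):
    return R[n] + c                # one add, after the register read

def ea_29k(n, c1, c2):
    return R[n + c1] + c2          # adder sits before the register read

assert ea_simple(1, 0x10) == 0x2010
assert ea_29k(1, 2, 0x10) == 0x4010   # reads R[3], not R[1]
```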

    Second, x86 processors are internally RISCy and x86 is decomposed into multiple micro-ops.

    Yeah, Intel invented microcode again, or at least a new marketing term for it. It doesn't make the x86 any more a RISC than the VAX was, though. (For anyone too young to remember: the VAX was the poster child for big, fast CISC before the x86 became the big deal it is today.)

  • by smash ( 1351 ) on Sunday September 16, 2012 @12:34PM (#41353383) Homepage Journal

    I bet that Apple did not make the decision based on technical grounds, it was probably a business decision.

    Actually it was both; the great irony is that Apple ditched PPC and went to x86 because of the better power consumption of the new Intel gear. From the Core line onward, Intel's CPUs have been awesome in performance per watt, far and away better than anything IBM/Motorola could offer with the RISC PowerPC processors. If Apple hadn't gone x86 and had tried to stick with PPC, they would have been slaughtered in the notebook market, which was the fastest-growing personal computer segment. Neither IBM nor Motorola gave a crap about making a CPU to cater to Apple's 10% of the portable computer market.

    There's a lot of "RISC is so much better for power!" crap floating around, and maybe in theory it is. In practice, though, when you take into account real-world applications and the "race to sleep", having a more powerful CISC-based core with an instruction set that provides many functions in hardware can offset the "in theory" better power consumption of the RISC competition. That, and the fact that Intel has the world's best fabs.

  • by imroy ( 755 ) <> on Sunday September 16, 2012 @12:58PM (#41353579) Homepage Journal
    The ARM ISA may seem "complex" when you describe it like you have, but each instruction is still a fixed size, they all follow one of only a limited number of formats (R-type, etc.), and memory is only accessed by load/store instructions. That's why many prefer the term "load/store architecture". Anyway, these things really help to simplify your instruction decoder stage and keep memory accesses simple. These in turn make it easier to implement things like pipelines, out-of-order execution, branch prediction, etc. And that's only the stuff that has been implemented in ARM so far. I wonder how long until ARM develops a core with more advanced features, like register renaming and speculative execution, and how it will perform then compared to x86 (which already has these things).
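The fixed-size-instruction point above can be shown with a toy decoder. The 32-bit field layout here is made up for illustration and is not the actual ARM encoding: the point is that with fixed-width instructions, every field sits at a known bit position, so decode is a few shifts and masks and the next instruction is always at pc + 4, with no variable-length parsing as on x86.

```python
# Toy fixed-width decoder (made-up field layout, NOT the real ARM
# encoding): opcode in bits 31..26, rd in 25..21, rs1 in 20..16,
# rs2 in 15..11. Decode is pure bit-slicing; no need to first work
# out how long the instruction is.

def decode(word):
    return {
        "opcode": (word >> 26) & 0x3F,
        "rd":     (word >> 21) & 0x1F,
        "rs1":    (word >> 16) & 0x1F,
        "rs2":    (word >> 11) & 0x1F,
    }

insn = (0x08 << 26) | (3 << 21) | (1 << 16) | (2 << 11)  # e.g. ADD r3, r1, r2
fields = decode(insn)
assert fields == {"opcode": 0x08, "rd": 3, "rs1": 1, "rs2": 2}
```

An x86 decoder, by contrast, must examine prefixes and opcode bytes sequentially just to find where one instruction ends and the next begins, which is part of why wide parallel decode is harder there.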
  • by default luser ( 529332 ) on Sunday September 16, 2012 @01:56PM (#41354055) Journal

    So you decided that the best way to get this point across to the Slashdot denizens (who never read the article) is to NOT ONCE mention the weaknesses of ARM or the strengths of x86 in your summary.

    Never in all my years reading Real World Tech have I ever seen a thread or article definitively settle the question of "which instruction set is better?" between ARM and x86 (and some of the biggest industry heavyweights weigh in on those discussions). Does better code density trump better compiler optimization flexibility? And does it even matter when ARM introduces out-of-order execution in mainstream cores like the A9, and Intel keeps Hyper-Threading attached to every new Atom core to deal with blocking?

    So just because you feel slighted, you write this fluff piece? Bruce, you shouldn't say things that aren't true just because you didn't get what you want, and because this "locked-down" tablet ecosystem is quickly taking away the free software community's free ride.

    And you can keep pretending all you want that Intel can't compete with ARM on performance/watt or price. I guess you weren't paying attention when Intel released the Atom Z2460 phone platform, with competitive performance and battery life? The Xolo X900 has a street price of around $400, so you can bet Intel is charging a tiny fee for that chipset.

    Tell me again why Intel can't compete?

  • Re:oversimplified (Score:5, Interesting)

    by kenorland ( 2691677 ) on Sunday September 16, 2012 @02:05PM (#41354165)

    As it turns out, it's much easier for software to do that job

    As it turns out, that's false. Optimizations are highly dependent on the specific hardware and data, and it's hard for compilers or programmers to know what to do. Modern processors are as fast as they are because they split optimization sensibly between the compiler and the CPU. Traditional CISC processors got that wrong, as did hardcore traditional RISC processors; the last gasp of the latter was the IA64, which proved pretty conclusively that neither programmers nor compilers can do the job by themselves.
