The Linux-Proof Processor That Nobody Wants

Bruce Perens writes "Clover Trail, Intel's newly announced 'Linux proof' processor, is already a dead end for technical and business reasons. Clover Trail is said to include power-management that will make the Atom run longer under Windows. It had better, since Atom currently provides about 1/4 of the power efficiency of the ARM processors that run iOS and Android devices. The details of Clover Trail's power management won't be disclosed to Linux developers. Power management isn't magic, though — there is no great secret about shutting down hardware that isn't being used. Other CPU manufacturers, and Intel itself, will provide similar power management to Linux on later chips. Why has Atom lagged so far behind ARM? Simply because ARM requires fewer transistors to do the same job. Atom and most of Intel's line are based on the ia32 architecture. ia32 dates back to the 1970s and is the last bastion of CISC, Complex Instruction Set Computing. ARM and all later architectures are based on RISC, Reduced Instruction Set Computing, which provides very simple instructions that run fast. RISC chips allow the language compilers to perform complex tasks by combining instructions, rather than by selecting a single complex instruction that's 'perfect' for the task. As it happens, compilers are more likely to get optimal performance with a number of RISC instructions than with a few big instructions that are over-generalized or don't do exactly what the compiler requires. RISC instructions are much more likely to run in a single processor cycle than complex ones. So, ARM ends up being several times more efficient than Intel."
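
The "no great secret" claim is easy to picture. Below is a minimal, purely hypothetical sketch in C of the kind of decision an idle-state governor makes: pick the deepest sleep state whose entry/exit cost is covered by the expected idle time. The state names and numbers are invented for illustration and do not correspond to Clover Trail, Linux cpuidle, or any real driver.

    /* Hypothetical sketch only: choosing how deep to sleep based on expected
     * idle time. The states and numbers are invented; no real hardware or
     * kernel interface is being described. */
    #include <stdio.h>
    #include <stdint.h>

    struct idle_state {
        const char *name;
        uint64_t entry_exit_ns;  /* time cost to enter and leave the state */
        uint64_t power_mw;       /* power drawn while sitting in the state */
    };

    static const struct idle_state states[] = {
        { "clock-gated",   1000, 50 },
        { "power-gated",  50000,  5 },
        { "powered-off", 500000,  1 },
    };

    /* Pick the deepest state whose transition cost is small compared to the
     * time we expect to stay idle. */
    static const struct idle_state *pick_state(uint64_t expected_idle_ns)
    {
        const struct idle_state *best = &states[0];
        for (size_t i = 0; i < sizeof states / sizeof states[0]; i++)
            if (states[i].entry_exit_ns * 2 < expected_idle_ns)
                best = &states[i];
        return best;
    }

    int main(void)
    {
        printf("10 us idle -> %s\n", pick_state(10 * 1000)->name);
        printf("10 ms idle -> %s\n", pick_state(10 * 1000 * 1000)->name);
        return 0;
    }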
  • Re:Blast in time (Score:5, Informative)

    by Pseudonym ( 62607 ) on Sunday September 16, 2012 @10:42AM (#41352445)

    Hell, I remember using an Archimedes in 1988. Odd to think that my phone now has four of them.

    Back to the topic, the border between RISC and CISC is a bit fuzzy these days. Every modern CISC chip is basically a dynamic translator on top of a RISC core. But even high-end ARM chips can do some of this with Jazelle.

    To be fair, CISC does have a few performance advantages when power consumption isn't (as big) an issue. The code density is better on x86 (yes, even with Thumb), which does mean they tend to use instruction cache more effectively. ARM chips generally don't do out-of-order scheduling and retirement; that uses a lot of power, and is the main architectural difference between laptop-grade and desktop/server-grade x86en.

    I'd like to see what a mobile-grade Alpha processor looks like. But I never will.

  • by stripes ( 3681 ) on Sunday September 16, 2012 @10:49AM (#41352503) Homepage Journal

    Apple ditched the RISC-type PowerPC for CISC-type Intel chips a while back, and they don't seem to be in any hurry to move back

    FYI, all of Apple's iOS devices have ARM CPUs, which are RISC CPUs. So I'm not so sure your "don't seem to be in any hurry to move back" bit is all that accurate. In fact looking at Apple's major successful product lines we have:

    1. Apple I/Apple ][ on a 6502 (largely classed as CISC)
    2. Mac on 680x0 (CISC) then PPC (RISC), then x86 (CISC) and x86_64 (also CISC)
    3. iPod on ARM (RISC); I'm sure the first iPod was an ARM, and while I'm not positive about the rest of them, I think they were as well
    4. iPhone/iPod Touch/iPad all on ARM (RISC)

    So a pretty mixed bag. Neither a condemnation of CISC nor a ringing endorsement of it.

  • Re:Blast in time (Score:5, Informative)

    by TheRaven64 ( 641858 ) on Sunday September 16, 2012 @10:54AM (#41352545) Journal

    Every modern CISC chip is basically a dynamic translator on top of a RISC core.

    And that's the problem for power consumption. You can cut power to execution units that are not being used. You can't ever turn off the decoder (except in Xeons, where you do in loops, but you leave on the micro-op decoder, which uses as much power as an ARM decoder) because every instruction needs decoding.

    But even high-end ARM chips can do some of this with Jazelle.

    Jazelle has been gone for years. None of the Cortex series include it. It gave worse performance than a modern JIT, but with a lower memory footprint. It's only useful when you want to run Java apps in 4MB of RAM.

    The code density is better on x86 (yes, even with Thumb), which does mean they tend to use instruction cache more effectively

    That's not what my tests show, in either compiled code or hand-written assembly.

  • by UnknowingFool ( 672806 ) on Sunday September 16, 2012 @11:14AM (#41352695)

    I would argue the problem for Apple wasn't about performance but about updates, mobile, and logistics. PowerPC originally held promise as a collaboration between Motorola, IBM, and Apple. IBM got much out of it, as their current line of servers and workstations runs on it. Apple's needs were different from IBM's. Apple needed new processors every year or so to keep up with Moore's law. Apple needed more power-efficient mobile processors. Also, Apple needed a stable supply of the processors.

    Despite ordering millions of chips a year, Apple was never going to be a big customer for Motorola or IBM. Apple's chips would be highly customized parts that none of their other customers needed or wanted, and Apple needed updates every year. So neither Motorola nor IBM could dedicate huge resources to a small order of chips when they could make millions more for other customers. PowerPC might have eventually come up with a mobile G5 that could rival Intel, but it would have taken many years and lots of R&D. IBM and Motorola didn't want to invest that kind of effort (again, for one customer). So every year Apple would order as many chips as they thought they needed. If they were short, they would have to order more. Now, Motorola and IBM, like most manufacturers (including Apple), do not like carrying excess inventory. So they were never able to keep up with Apple's orders, as their other customers had steadier and larger chip orders.

    So what was Apple to do? Intel represented the best option. Intel's mobile x86 chips were more power efficient than PowerPC versions. Intel would keep up the yearly updates of their chips. If Apple increased their orders from Intel, Intel could handle it because if Apple wasn't ordering a custom part, they were ordering more of a stock part. There are some cases where Apple has Intel design custom chips for them, mostly on the lower power side; however, Intel still can sell these to their other customers.

    As a side note, for a contrast with the IBM-Apple relationship, look at the relationship between MS and IBM for the Xbox 360 Xenon chip [wikipedia.org]. This was a custom design by IBM for MS, but the basic chip design hasn't changed in seven years. As such, chip manufacturing has been able to move the chip to smaller lithographies (90nm --> 45nm in 2008), both increasing yield and lowering cost.

  • by Anonymous Coward on Sunday September 16, 2012 @11:24AM (#41352789)
    Just wait until these people see what "supporting Linux" means to Valve too. I run Steam on OS X and it's not the games fest that they make it out to be. Oh, to be sure, there are a few great games there. But aside from CivV (which had a native OS X version before Steam), just about every non-Valve game isn't supported, except for a handful of "indie" games. The other day I was going down the upcoming release list and not a single major title was slated for OS X.

    Oh, and wait until one of them has a Steam-centric problem with their system. Steam is a bunch of sweethearts about supporting that, too.

    Steam may get a few current Linux users to stop using Windows but it's not going to make anyone switch.
  • by fermion ( 181285 ) on Sunday September 16, 2012 @11:25AM (#41352795) Homepage Journal
    Most '70s-era microprocessors had about 50 opcodes and a few registers. It was possible to memorize them all and decompile from hex in your head. I never had the mental acuity to do so, but many of my friends in high school could. By the 1980s there was a lot of big iron that used RISC, but as I recall these had more opcodes than, say, a 6502, and I know that RISC does not just mean a reduced number of instructions; it is a simplified instruction set. Right now I think we have a lot of hybrid chips on the market. The war between CISC and RISC has come to a place where both are used as needed. In the x86 space, legacy is an issue. MS has not done what Apple does, which is to support a machine for 3-5 years and then develop something that meets current demands. The common person would not even see a RISC processor until Apple switched to the PowerPC, which brought the conflict between CISC and RISC to the public. It is interesting to have this conversation now because this was exactly what was said back then: RISC is more efficient, so the chip can be clocked about half as fast and still be as fast as the CISC chip.

    So this OS-specific chip is nothing new, and *nix exclusion is not new. Many microcomputers could not run *nix because they did not have a PMMU. The AT&T computer ran a 68K processor with a custom PMMU. Over the past 10 years there have been MS Windows-only printers and cameras which offloaded work to the computer to make the peripheral cheaper.

    Which is to say that there are clearly benefits to both RISC and CISC. MS built an empire on CISC, and clearly intends to continue to do so, only moving to RISC on a limited basis for high-end, highly efficient devices. For the tablet for the rest of us, if they can ship MS Windows 8 on a $400 device that runs just like a laptop, they will do so. If efficiency were the only issue, then we would be running Apple-type hardware, which, I guess, on the tablet we are. But while 50 million tablets are sold, MS wants the other 100 million laptop users who do not have a tablet yet, because the tablets aren't MS Windows.

  • by Dogtanian ( 588974 ) on Sunday September 16, 2012 @11:35AM (#41352881) Homepage

    Like I posted elsewhere, intel hasn't made real CISC processors for years, and I don't think anyone has. Modern Intel processors are just RISC with a decoder to the old CISC instruction set.

    Exactly. Intel has been doing this ever since the Pentium Pro and Pentium II came out in the 1990s. Anyone who knows much at all about x86 CPUs is aware of this, and Perens certainly will be. That's why I'm surprised that the article misleadingly states:

    So, we start with the fact that Atom isn't really the right architecture for portable devices (*) with limited power budgets. Intel has tried to address this by building a hidden core within the chip that actually runs RISC instructions, while providing the CISC instruction set that ia32 programs like Microsoft Windows expect.

    The "hidden core" bit is, of course, correct, but the way it's stated here implies that this is (a) something new and (b) something that Intel have done to mitigate performance issues on such devices, when in fact it's the way that all Intel's "x86" processors have been designed for the past 15 years!

    Perhaps I'm misinterpreting or misunderstanding the article, and he's saying that, unlike previous CPUs, the new Atom chips have their "internal" RISC instruction set directly accessible to the outside world. But I don't think that's what was meant.

    (*) This is in the context of having explained why IA32 is a legacy architecture not suited to portable devices and presented Atom as an example of this.

  • Re:oversimplified (Score:2, Informative)

    by MindlessAutomata ( 1282944 ) on Sunday September 16, 2012 @11:48AM (#41352993)

    How is an iPhone better? Having used both an iPhone and iPad I was far from impressed. The thing I was mostly impressed by was that I had great difficulty trying to use it, because it takes a while to figure out that you actually can't do a lot of very basic computer operations, as it's very dumbed down (file system access, for one--what the hell?)

  • by Truekaiser ( 724672 ) on Sunday September 16, 2012 @11:51AM (#41353021)

    ARM does not make its own chips. They design the instruction sets and the silicon photomasks (look up how chips are made), but other companies make the actual physical silicon product. Those companies can pick and choose which parts of the CPU they want to use and which instruction sets they want in it.

    To use food as an analogy, Intel is every store or restaurant where you can buy food pre-made and ready to eat. ARM would be like someone selling you a recipe; it's up to you to make it, and what you put into it.

    So it's not ARM's fault that Linux isn't supported on the Nokia and Apple variants of the ARMv7 instruction set; that's down to those respective companies. If you had enough money and access to rent or own a CPU fab plant, you too could make your own version of an ARM chip and have it supported only on Haiku OS, for example.

  • Re:oversimplified (Score:5, Informative)

    by Zero__Kelvin ( 151819 ) on Sunday September 16, 2012 @11:53AM (#41353051) Homepage

    "Someday linux devs will resign themselves to the fact that linux is (somewhat) great for servers and terrible for almost everything else"

    You don't know anything about Linux. It powers all of the RISC/ARM-based Android smartphones. It also runs on more than 33 different CPU architectures. A huge number of those platforms are embedded systems that are probably sitting in your living room, enabling you to watch TV, DVDs, Blu-ray, etc., as well as listen to it all in Surround Sound [wikipedia.org].

    " In my opinion this entire article is trolling."

    To be blatantly honest, I haven't quite figured out if it is you that is trolling, or you are really just that ignorant of the facts.

  • by YA_Python_dev ( 885173 ) on Sunday September 16, 2012 @12:08PM (#41353155) Journal

    Getting back on topic: the latest ARM architecture, ARMv8, is far from what was called "RISC" back in the '70s. E.g. it can run instructions of different sizes (16 vs 32 bit), it has 4 specialized instructions for AES, registers of different sizes (32, 64 and 128 bits), instructions for running a subset of the Java bytecode, a rich set of SIMD operations, and specialized instructions for SHA-1 and SHA-256.

    Similarly, the architecture supported by the new Atom chips (which is AMD64/x86-64 BTW; IA32 is only present for backward compatibility) is almost universally implemented on RISC-like cores that have instruction translators. Considering that the greater density of x86-64 instructions usually saves more cache transistors than the ones required for decoding the instructions themselves, I think the power consumption differences we see are due more to the implementation and the different traditional focus areas of ARM vs Intel/AMD than to inherent differences in the instruction sets.
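
    To make the AES point concrete, here is a minimal sketch (mine, not the parent's) of one AES round using the ARMv8 Cryptography Extension intrinsics from <arm_neon.h>. It assumes a crypto-capable core and a compiler flag along the lines of -march=armv8-a+crypto, and it is not a complete or validated AES implementation:

        /* Sketch: one AES round via the ARMv8 crypto intrinsics.
         * AESE does AddRoundKey (XOR) + SubBytes + ShiftRows; AESMC does MixColumns. */
        #include <arm_neon.h>

        uint8x16_t aes_one_round(uint8x16_t state, uint8x16_t round_key)
        {
            state = vaeseq_u8(state, round_key);  /* AESE  */
            state = vaesmcq_u8(state);            /* AESMC */
            return state;
        }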

  • by LostMyBeaver ( 1226054 ) on Sunday September 16, 2012 @01:14PM (#41353693)
    Consoles choose RISC vs. CISC for a much simpler reason. The performance isn't really that important. It's typically an issue of endianness.

    It has become quite simple in modern times to make a CPU-emulating JIT (meaning treating the binary instruction set of one CPU as source code and recompiling it for the host platform). What is extremely expensive execution-wise is data-model conversion on loads and stores. Unless Intel starts making load and store instructions that can function in big-endian mode (we can only dream), data loading in an emulator/JIT will always be a huge execution burden.

    The result is that while an x86 can run rings around any of the console processors, a perfect one-to-one JIT can't be developed to make big-endian code run on a little-endian CPU.

    As an example of this, if you look at emulators for systems that use little-endian ARM, performance of the JIT is excellent. In fact, the JIT can sometimes even make performance better. But if you look at a modern 3.4GHz quad-core Core i7, it still struggles with emulating the Wii, which is an insanely low-performance machine.

    So, don't read into RISC vs. CISC here. It's really an issue of blocking emulators in most cases.
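
    As a rough C illustration of the load/store point (my sketch, not the parent's code): on a little-endian host, a big-endian guest load needs an extra byte swap on top of the plain load, and a JIT has to emit that swap around every guest memory access. __builtin_bswap32 is a GCC/Clang builtin; the function names here are made up:

        #include <stdint.h>
        #include <string.h>

        /* Native little-endian load: a single plain load on x86. */
        static inline uint32_t load_le32(const void *p)
        {
            uint32_t v;
            memcpy(&v, p, sizeof v);
            return v;
        }

        /* Emulating a big-endian guest: the same load plus a byte swap,
         * repeated for every load and store the guest program performs. */
        static inline uint32_t load_guest_be32(const void *p)
        {
            uint32_t v;
            memcpy(&v, p, sizeof v);
            return __builtin_bswap32(v);
        }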
  • Re:oversimplified (Score:3, Informative)

    by Anonymous Coward on Sunday September 16, 2012 @04:18PM (#41355591)

    For example, x86 has to implement memory snooping on page tables to automatically invalidate TLBs when the page table entry is modified by software, because there is no architectural requirement that software invalidate TLBs (and in fact no instructions to individually invalidate TLB entries, IIRC). Similarly, x86 requires data and instruction cache coherency, so there has to be a bunch of logic snooping on one cache and invalidating the other.

    Err... Not quite:

    • x86 TLBs aren't coherent with main memory; you need to do an explicit invalidate every time you change a PTE.
    • The instruction to invalidate individual TLB entries is called invlpg, and was introduced with the 486. Admittedly, it's quite slow, so it doesn't get used much, but it is there.
    • x86 has only very limited I-D cache coherence. You need to issue a serialising instruction whenever you modify anything which might have been cached in the I-cache.

    Basically, there's nothing in the x86 architecture which is both frequently used and extravagantly difficult to implement performantly, at least on uniprocessor systems (although admittedly that's partly because Intel and AMD have spent an enormous amount of research effort solving all the hard problems). On the other hand, x86 does really suck for large (>64-ish core) SMP systems, because it mandates a very strong memory ordering model, and that is difficult to implement efficiently in very large systems (and there's good reason to suspect from the theory that that won't get much better in the near future).
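
    For anyone who hasn't seen it, the "explicit invalidate" in the first bullet looks roughly like this in kernel-style C with GCC inline assembly. invlpg is the real x86 instruction; set_pte() here is just a stand-in stub, not any particular kernel's API:

        /* Stand-in for writing a page-table entry; a real kernel would
         * update the actual page tables here. */
        static void set_pte(void *va, unsigned long pte)
        {
            (void)va;
            (void)pte;
        }

        /* Flush one TLB entry. invlpg exists on the 486 and later. */
        static inline void flush_tlb_one(void *va)
        {
            __asm__ volatile("invlpg (%0)" : : "r"(va) : "memory");
        }

        void remap_page(void *va, unsigned long new_pte)
        {
            set_pte(va, new_pte);   /* change the mapping...               */
            flush_tlb_one(va);      /* ...the TLB won't notice on its own  */
        }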

  • by ultrasawblade ( 2105922 ) on Sunday September 16, 2012 @04:25PM (#41355673)

    Furthermore, a distinguishing feature of CISC vs. RISC is the number of general-purpose registers. RISC always tried to do everything in registers and treat RAM as an I/O device, instead of doing things like "load the accumulator with a value from RAM and write it back to RAM" or "load this register with this value from RAM, multiply it by the value in this register, then store it back to RAM." There are many instructions like this in CISC architectures that encourage treating RAM as being just as good for temporary storage as registers, which, of course, it hasn't been for a long time now.

    Intel has become more RISCy with MMX/SSE and now with the AMD64 extensions that give it 8 more general-purpose registers.
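
    A C-level illustration of the registers-vs-RAM point (illustrative only): the volatile version below forces every intermediate value through memory, roughly the way accumulator/memory-oriented code worked, while the plain version lets the compiler keep the running sum in a register:

        #include <stddef.h>

        /* The compiler keeps the running sum in a register. */
        long sum_in_registers(const long *a, size_t n)
        {
            long s = 0;
            for (size_t i = 0; i < n; i++)
                s += a[i];
            return s;
        }

        /* volatile forces a load and a store of the sum on every iteration,
         * treating RAM as if it were as cheap as a register. */
        long sum_through_memory(const long *a, size_t n)
        {
            volatile long s = 0;
            for (size_t i = 0; i < n; i++)
                s = s + a[i];
            return s;
        }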

  • by Darinbob ( 1142669 ) on Sunday September 16, 2012 @05:06PM (#41356009)

    They win on different criteria and different goals. x86 is clearly not a well-designed instruction set for the purposes of decoding; even Intel wants to dump it, but their alternative chips do not sell as well. x86 is all about backwards compatibility and not much else. Yes, the code is denser, but with instruction caches and buffers that is a trivial advantage compared to the cost of supporting that instruction set over the decades. Intel gets away with it because they have the resources; they can make an overly complicated chip, put hundreds of engineers on it along with new fab technologies, and then make up for the cost on volume.

    When you move away from the mass-market dumb-PC model to places where you don't have to be compatible with Windows and Windows applications, the x86 family is not competitive. Argue all you want about how great x86 is, but almost no one uses it in their products except for PCs and Macs.

  • Re:Blast in time (Score:4, Informative)

    by Darinbob ( 1142669 ) on Sunday September 16, 2012 @05:17PM (#41356095)

    Everyone has a RISC-style core nowadays because RISC essentially won. People don't understand what RISC was all about, though; they tend to think it's about instruction set complexity or microcoding. No, the concept is about putting the CPU resources in places where it matters, eliminating less useful parts of the processor, discarding the accepted design wisdom of the '70s, etc. RISC wasn't even that new or radical an idea, except for the big machine makers. Seymour Cray was using some RISC concepts before the RISC term was invented.

    If power consumption is not an issue, then code density most likely is not an issue either. Take the space used by the decoder and spend it on a larger instruction cache or buffer instead. The sole reason the complex decoder is there is for instruction sets that were designed to be hand-written by humans. The x86 instruction set was absolutely not created with performance in mind; it was designed as a long series of backwards-compatible incremental changes, starting from the original 4004 chip. Every chip since then in the ancestry kept some compatibility to make code easier to convert to the new processors.

    Yes, it is true that back in the '70s and '80s, when this stuff was new, memory was very small, very slow, and very expensive. RISC came about in the era when memory stopped being the most expensive part of a computer.

  • Re:oversimplified (Score:4, Informative)

    by Bruce Perens ( 3872 ) <bruce@perens.com> on Monday September 17, 2012 @12:36AM (#41358963) Homepage Journal

    Oh, get off it. If I want to personally attack him, I will do more than ask "Wasn't that when Linus was working for Transmeta?".

    Linus was obviously not the only one there who believed they could get more performance out of the architecture than they actually got.
