The Linux-Proof Processor That Nobody Wants
Bruce Perens writes "Clover Trail, Intel's newly announced 'Linux proof' processor, is already a dead end for technical and business reasons. Clover Trail is said to include power-management that will make the Atom run longer under Windows. It had better, since Atom currently provides about 1/4 of the power efficiency of the ARM processors that run iOS and Android devices. The details of Clover Trail's power management won't be disclosed to Linux developers. Power management isn't magic, though — there is no great secret about shutting down hardware that isn't being used. Other CPU manufacturers, and Intel itself, will provide similar power management to Linux on later chips. Why has Atom lagged so far behind ARM? Simply because ARM requires fewer transistors to do the same job. Atom and most of Intel's line are based on the ia32 architecture. ia32 dates back to the 1970s and is the last bastion of CISC, Complex Instruction Set Computing. ARM and all later architectures are based on RISC, Reduced Instruction Set Computing, which provides very simple instructions that run fast. RISC chips allow the language compilers to perform complex tasks by combining instructions, rather than by selecting a single complex instruction that's 'perfect' for the task. As it happens, compilers are more likely to get optimal performance with a number of RISC instructions than with a few big instructions that are over-generalized or don't do exactly what the compiler requires. RISC instructions are much more likely to run in a single processor cycle than complex ones. So, ARM ends up being several times more efficient than Intel."
RISC is not the silver bullet (Score:3, Interesting)
Nice advertisement for RISC architecture.
Sure it has advantages, but obviously it's not all that great. After all Apple ditched the RISC-type PowerPC for CISC-type Intel chips a while back, and they don't seem to be in any hurry to move back. It seems no-one can beat the price/performance of the CISC-based x86 chips...
Re:RISC is not the silver bullet (Score:5, Insightful)
Like I posted elsewhere, intel hasn't made real CISC processors for years, and I don't think anyone has.
Modern Intel processors are just RISC with a decoder to the old CISC instruction set.
RISC beats CISC in price performance trade-off, but backwards compatibility keeps the interface the same.
Re: (Score:3, Interesting)
The question is, how much can the hardware optimize the decoded RISC microcode? Or does the optimization not matter much at this point?
Re: (Score:3)
First, a piece of terminology: The Intel term for what you call "decoded RISC microcode" is "uop". The "u" is meant to be a mu, but it's usually pronounced "u". It's short for micro-operation.
So there are essentially two kinds of optimisation available:
1. How the uops are scheduled. The CPU has a lot more freedom here than a typical RISC processor because the CPU did the code generation, rather than the compiler.
2. If the uop doesn't use a functional unit, don't generate any uops for it. The common case i
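A rough sketch of the decomposition being described, as a memory read-modify-write in C; the uop split shown in the comments is illustrative only and is not Intel's documented encoding:

/* Illustrative only: how a single CISC-style memory read-modify-write
 * might be decoded into load/modify/store micro-ops. The actual uop
 * breakdown is microarchitecture-specific and not published in this form. */
void bump(int *p, int x) {
    *p += x;   /* on x86-64 this typically compiles to: add dword ptr [rdi], esi */
               /* which the front end can split into roughly:                    */
               /*   uop 1: load   tmp   <- [rdi]                                 */
               /*   uop 2: add    tmp   <- tmp + esi                             */
               /*   uop 3: store  [rdi] <- tmp                                   */
}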
Misleading slant on mention of Atom's RISC core (Score:5, Informative)
Like I posted elsewhere, intel hasn't made real CISC processors for years, and I don't think anyone has. Modern Intel processors are just RISC with a decoder to the old CISC instruction set.
Exactly. Intel has been doing this ever since the Pentium Pro and Pentium II came out in the 1990s. Anyone who knows much at all about x86 CPUs is aware of this, and Perens certainly will be. That's why I'm surprised that the article misleadingly states:-
So, we start with the fact that Atom isn't really the right architecture for portable devices (*) with limited power budgets. Intel has tried to address this by building a hidden core within the chip that actually runs RISC instructions, while providing the CISC instruction set that ia32 programs like Microsoft Windows expect.
The "hidden core" bit is, of course, correct, but the way it's stated here implies that this is (a) something new and (b) something that Intel have done to mitigate performance issues on such devices, when in fact it's the way that all Intel's "x86" processors have been designed for the past 15 years!
Perhaps I'm misinterpreting or misunderstanding the article, and he's saying that, unlike previous CPUs, the new Atom chips have their "internal" RISC instruction set directly accessible to the outside world. But I don't think that's what was meant.
(*) This is in the context of having explained why IA32 is a legacy architecture not suited to portable devices and presented Atom as an example of this.
Re:Misleading slant on mention of Atom's RISC core (Score:5, Insightful)
It also ignores the fact that in flops per watt Intel still dominates ARM.
It's like comparing a moped to a bus and saying "see look how much more fuel efficient the moped is!"
True... but then fill a bus with people and suddenly the mpg per person goes through the roof for the bus. You could get 300mpg per person from a bus. Good luck getting that with a moped.
And just as plug-in hybrids now compete with even mopeds on single-occupancy MPG, you can also see RISC-core x86 chips out-competing ARM on raw watts. The next generation of Intel chips is going to be not only substantially faster but also at parity on watts.
Simply stripping down technology inevitably will come back to bite you in the ass. I think the domination of ARM in the mobile space is about to evaporate within the next year on every conceivable metric.
but when you don't have a busload of passengers .. (Score:3)
My bicycle is significantly more efficient getting me to the train station than the bus is.
I walk because it costs 150 yen or so to park the bike. That's still more efficient. I don't live close to a bus stop. Lots of people near me don't live close to a bus stop.
More than half the people going into the station at any particular time of the morning have not come in on a bus. And most buses at this station are about half-full, not operating at maximum efficiency.
The plain and simple fact is that we are not a
Re: (Score:3)
I think you are confusing Intel with AMD in the '90s.
Sure, Intel (and Motorola) were using RISC tech in their CISC designs from back in the mid-'80s. Bits and pieces of the tech. Not full (almost-)RISC cores running CISC instructions by emulation circuitry (contrary to the propaganda), but cherry-picked RISC techniques. (8 GP registers do not a RISC make.)
AMD's 64 bit CPU was the first real x86 CISC-on-RISC. (And Intel had to go cap-in-hand to AMD for that, in the end.)
Re: (Score:3, Informative)
FYI, all of Apple's iOS devices have ARM CPUs, which are RISC CPUs. So I'm not so sure your "don't seem to be in any hurry to move back" bit is all that accurate. In fact looking at Apple's major successful product lines we have:
Re: (Score:2)
iPhone and iPad are not known as powerful devices; computing power lags far behind a typical desktop at double the price. Form factor (and the touch screens) add a lot of cost.
So far RISC is only found in low-power applications (when it comes to consumer devices at least).
Re: (Score:3)
Plus printers (or at least last I checked), and game consoles (the original Xbox was the only console in the last 2~3 generations not to use a RISC CPU). Many of IBM's mainframes are RISCs these days. In fact I think the desktop market is the only place you can randomly pick a product and have a near certainty that it is a CISC CPU. Servers are a mixed bag. Network infrastructure is a mixed bag. E
Re:RISC is not the silver bullet (Score:4, Informative)
It has become quite simple in modern times to make a CPU-emulating JIT (meaning treating the binary instruction set of one CPU as source code and recompiling it for the host platform). What is extremely expensive execution-wise is data-model conversion on loads and stores. Unless Intel starts making load and store instructions that can function in big-endian mode (we can only dream), data loading in an emulator/JIT will always be a huge execution burden.
The result is that while an x86 can run rings around any of the console processors, a JIT can't be developed that maps big-endian code onto a little-endian CPU one-to-one.
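A minimal sketch of the per-access cost being described, assuming a big-endian guest on a little-endian host; compilers generally collapse this pattern into a single BSWAP (x86) or REV (ARM) instruction, but the JIT still has to emit it for every guest load:

#include <stdint.h>

/* Read a 32-bit big-endian value from guest memory on a little-endian host.
 * Every emulated load/store pays this byte swap, which is the overhead that
 * a straight one-to-one instruction mapping cannot avoid. */
static inline uint32_t load_be32(const uint8_t *p) {
    return ((uint32_t)p[0] << 24) |
           ((uint32_t)p[1] << 16) |
           ((uint32_t)p[2] << 8)  |
           ((uint32_t)p[3]);
}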
As an example of this, if you look at emulators for systems that use little-endian ARM, performance of the JIT is perfect. In fact, the JIT can sometimes even make performance better. But if you look at a modern 3.4GHz quad-core Core i7, it still struggles with emulating the Wii, which is an insanely low-performance system.
So, don't read into RISC vs. CISC here. It's really an issue of blocking emulators in most cases.
Re: (Score:2)
The summary is outright incorrect. First, RISC instructions complete in one cycle. If you have multi-cycle instructions, you're not RISC. Second, x86 processors are internally RISCy and x86 is decomposed into multiple micro-ops. Third, RISC may mean less gates for the same tasks, but it also means that some tasks get broken up into multiple tasks.
ARM doesn't scale as far as x86, so in the high end you need more cores and some tasks are less parallelizable than others. ARM should be recognized as the current
Re: (Score:2)
Re:RISC is not the silver bullet (Score:5, Interesting)
LOAD and STORE aren't single cycle instructions on any RISC I know of. Lots of RISC designs also have multicycle floating point instructions. A lot of second or third generation RISCs added a MULTIPLY instruction and they were multiple cycle.
There are not a lot of hard and fast rules about what makes things RISCy, mostly just "they tend to do this" and "tend not to do that". Like "tend to have very simple addressing modes" (most have register+constant displacement -- but the AMD29k had an adder before you could get the register data out, so R[n+C1]+C2, which is more complex than the norm). Also "no more than two source registers and one destination register per instruction" (I think the PPC breaks this) -- oh, and "no condition register", but the PPC breaks that.
Yeah, Intel invented microcode again, or a new marketing term for it. It doesn't make the x86 any more a RISC than the VAX was, though. (For anyone too young to remember, the VAX was the poster child for big fast CISC before the x86 became the big deal it is today.)
Re:RISC is not the silver bullet (Score:5, Informative)
I would argue the problem for Apple wasn't about performance but about updates, mobile, and logistics. PowerPC originally held promise as a collaboration between Motorola, IBM, and Apple. IBM got much out of it, as their current line of servers and workstations runs on it. Apple's needs were different than IBM's. Apple needed new processors every year or so to keep up with Moore's law. Apple needed more power-efficient mobile processors. Also, Apple needed a stable supply of the processors.
Despite ordering millions of chips a year, Apple was never going to be a big customer for Motorola or IBM. Their chips would be highly customized parts that none of the other customers needed or wanted, and Apple needed updates every year. So neither Motorola nor IBM could dedicate huge resources to a small order of chips when they could make millions more for other customers. PowerPC might have eventually come up with a mobile G5 that could rival Intel, but it would have taken many years and lots of R&D. IBM and Motorola didn't want to invest that kind of effort (again, for one customer). So every year Apple would order the number of chips they thought they needed. If they were short, they would have to order more. Now, Motorola and IBM, like most manufacturers (including Apple), do not like carrying excess inventory. So they were never able to keep up with Apple's orders, as their other customers had steadier and larger chip orders.
So what was Apple to do? Intel represented the best option. Intel's mobile x86 chips were more power efficient than PowerPC versions. Intel would keep up the yearly updates of their chips. If Apple increased their orders from Intel, Intel could handle it because if Apple wasn't ordering a custom part, they were ordering more of a stock part. There are some cases where Apple has Intel design custom chips for them, mostly on the lower power side; however, Intel still can sell these to their other customers.
As a side note, for a contrast with the IBM-Apple relationship, look at the relationship between MS and IBM for the Xbox 360 Xenon chip [wikipedia.org]. This was a custom design by IBM for MS, but the basic chip design hasn't changed in seven years. As such, chip manufacturing has been able to move the chip to smaller lithographies (90nm --> 45nm in 2008), both increasing yield and lowering cost.
Re: (Score:2)
Unless you have energy constraints, that is. Then RISC architecture rules. Given that most computers today are smartphones (and most run Linux, some run iOS), and many other CPUs are in data-centers where energy consumption also matters very much, I think discounting RISC this way does not reflect reality. Sure, enough people do run full-sized computers with wired network and power-grid access at home and these will remain enough to keep that model alive, but RISC won the battle for supremacy a while ago
Re: (Score:3)
Gave it up? http://m.engadget.com/2010/09/06/ibm-claims-worlds-fastest-processor-with-5-2ghz-z196/ [engadget.com]
Also, porting has much more to do with APIs than instruction set.
Re:RISC is not the silver bullet (Score:5, Interesting)
Actually it was both; the great irony is that Apple ditched PPC and went to x86 because of better power consumption with the new Intel gear. The Core-onwards CPUs, in terms of performance per watt, have been awesome and were leaps and bounds better than anything IBM/Motorola could offer with the RISC PowerPC processors. If Apple hadn't gone x86 and had tried to stick with PPC, they would have been slaughtered in the notebook market, which is/was the fastest growing personal computer segment. Neither IBM nor Motorola gave a crap about making a CPU to cater to Apple's 10% of the portable computer market.
There's a lot of "RISC is so much better for power!" crap floating around, and maybe in theory it is. However in practice when you take into account real world applications and the "race to sleep", having a more powerful, CISC based core with an instruction set that provides many many functions in hardware can help offset the "in theory" better power consumption of the RISC competition. That and the fact that intel has the world's best fabs.
oversimplified (Score:5, Insightful)
The x86 instruction set is pretty awful and Atom is a pretty lousy processor. But that's probably not due to RISC vs. CISC. IA32 today is little more than an encoding for a sequence of RISC instructions, and the decoder takes up very little silicon. If there really were large intrinsic performance differences, companies like Apple wouldn't have switched to x86 and RISC would have won in the desktop and workstation markets, both of which are performance sensitive.
I'd like to see a well-founded analysis of the differences of Atom and ARM, but superficial statements like "RISC is bad" don't cut it.
Re: (Score:3, Interesting)
What really kills x86's performance/power ratio is that it has to maintain compatibility with ancient implementations. When x86 was designed, things like caches and page tables didn't exist; they got tacked on later. Today's x86 CPUs are forced to use optimizations such as caches (because it's the only way to get any performance) while still maintaining the illusion that they don't, as far as software is concerned. For example, x86 has to implement memory snooping on page tables to automatically invalidate
Re:oversimplified (Score:5, Interesting)
As it turns out, that's false. Optimizations are highly dependent on the specific hardware and data, and it's hard for compilers or programmers to know what to do. Modern processors are as fast as they are because they split optimization in a good way between compilers and the CPU. Traditional CISC processors got that wrong, as well as hardcore traditional RISC processors; the last gasp of the latter was the IA64, which proved pretty conclusively that neither programmers nor compilers can do the job by themselves.
Re: (Score:3, Informative)
For example, x86 has to implement memory snooping on page tables to automatically invalidate TLBs when the page table entry is modified by software, because there is no architectural requirement that software invalidate TLBs (and in fact no instructions to individually invalidate TLB entries, IIRC). Similarly, x86 requires data and instruction cache coherency, so there has to be a bunch of logic snooping on one cache and invalidating the other.
Err... Not quite:
Basically, there's nothing in the x86 arch
Re:oversimplified (Score:4, Insightful)
I'd say the x86 being the dominant CPU in the desktop has given Intel the R&D budget to overcome the disadvantages of a 1970s instruction set. Anything they lose by not being able to wipe the slate clean (complex addressing modes in the critical data path, and complex instruction decoders, for example), they get to offset by pouring tons of R&D into either finding a way to "do the inefficient, efficiently", or finding another area they can make fast enough to offset the slowness they can't fix.
The x86 is inelegant, and nothing will ever fix that, but if you want to bang some numbers around, well, the inelegance isn't slowing it down this decade.
P.S.:
That was true of many CPUs over the years, even when RISC was new. In fact even before RISC existed as a concept. One of the "RISC sucks, it'll never take off" complaints was "if I wanted to write microcode I would have gotten onto the VAX design team". While the instruction set matters, it isn't the only thing. RISCs have very, very simple addressing modes (sometimes no addressing modes), which means they can get some of the advantages of out-of-order execution without any hardware OoO support. When they do get hardware OoO support, nothing has to fuse results back together, and so on. There are tons of things like that, but pretty much all of them can be combated with enough cleverness and die area. (But since die area tends to contribute to power usage, it'll be interesting to see if power efficiency is forever out of x86's reach, or if that too will eventually fall -- Intel seems to be doing a nice job chipping away at it.)
Re: (Score:2)
Not just production process though. They do well in many other areas of research. The ALUs are mighty well designed. They have plenty of great work in many many areas. I really do hate the instruction set, and I'm not fond of the company, but they do really good work in so many areas.
I think if you gave them and IBM equal research budgets and aimed them at the same part of the market it would be hard to predict who would win. Any other two companies though, and the bet is clear.
Re: (Score:2)
Performance hasn't got a lot to do with it... Backwards compatibility is what matters, closely followed by price and availability.
While they were being actively developed and promoted, RISC architectures were beating x86 quite heavily on performance. However Intel had economies of scale on their side, they were able to sell millions of x86 chips and therefore outspend the RISC designers quite heavily.
Intel tried to move on from x86 too, with IA64... They failed, largely because of a lack of backwards compat
Re: (Score:3)
At times, a high end RISC chip would beat a similarly priced high end x86 chip, but performance advantages were modest and didn't last long.
Backwards compatibility at the instruction set level matters little to people who need high performance. If IA64 had worked well, peop
Re:oversimplified (Score:5, Insightful)
I'd like to see a well-founded analysis of the differences of Atom and ARM, but superficial statements like "RISC is bad" don't cut it.
i've covered this a couple of times on slashdot: simply put it's down to the differences in execution speed vs the storage size of those instructions. slightly interfering with that is of course the sizes of the L1 and L2 caches, but that's another story.
in essence: the x86 instruction set is *extremely* efficiently memory-packed. it was designed when memory was at a premium. each new revision added extra "escape codes" which kept the compactness but increased the complexity. by contrast, RISC instructions consume quite a lot more memory as they waste quite a few bits. in some cases *double* the amount of memory is required to store the instructions for a given program [hence where the L1 and L2 cache problem starts to come into play, but leaving that aside for now...]
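A quick way to sanity-check the density claim on your own toolchain is to compile the same function for both targets and compare the text sizes; this sketch assumes an ARM cross-compiler such as arm-linux-gnueabihf-gcc is installed, and the exact numbers will vary with compiler version, flags, and whether Thumb-2 is used:

/* density.c -- build for both targets and compare the text segment:
 *   gcc -Os -c density.c                      && size density.o
 *   arm-linux-gnueabihf-gcc -Os -c density.c  && size density.o
 * (the cross-compiler name is just an example of a common toolchain) */
int checksum(const unsigned char *buf, int len) {
    int sum = 0;
    for (int i = 0; i < len; i++)
        sum = (sum << 1) ^ buf[i];
    return sum;
}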
so what that means is that *regardless* of the fact that CISC instructions are translated into RISC ones, the main part of the CPU has to run at a *much* faster clock rate than an equivalent RISC processor, just to keep up with decode rate. we've seen this clearly in an "empirical observable" way in the demo by ARM last year, of a 500mhz Dual-Core ARM Cortex A9 clearly keeping up with a 1.6ghz Intel Atom in side-by-side running of a web browser, which you can find on youtube.
now, as we well know, power consumption grows roughly with the square of the clock rate (once you account for the voltage needed to sustain higher clocks). so in a rough comparison, in the same geometry (e.g. 45nm), that 1.6ghz CPU is going to have roughly TEN times the power consumption of that dual-core ARM Cortex A9. e.g. that 500mhz dual-core Cortex A9 is going to be about 0.5 watts (roughly true) and the 1.6ghz Intel Atom is going to be about 5 watts (roughly true).
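For reference, the usual first-order model for dynamic CMOS power is the one below; frequency enters linearly and the square is in the supply voltage, but since voltage generally has to be raised to reach higher clocks, power in practice grows considerably faster than linearly with clock rate:

$$ P_{\mathrm{dyn}} \approx \alpha \, C \, V_{dd}^{2} \, f $$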
what that means is that x86 is basically onto a losing game.... period. the only way to "win" is for Intel and AMD to have access to geometries that are at least 2x better than anything else available in the world. each new geometry that comes out is not going to *stay* 2x better for very long. when everyone has access to 45nm, intel and AMD have to have access to 22nm or better... *at the same time*. not "in 6-12 months time", but *at the same time*. when everyone else has access to 28nm, intel and AMD have to have access to 14nm or better.
intel know this, and AMD don't. it's why intel will sell their fab R&D plant when hell freezes over. AMD have a slight advantage in that they've added in parallel execution which *just* keeps them in the game i.e. their CPUs have always run at a clock rate that's *lower* than an intel CPU, forcing them to publish "equivalent clock rate" numbers in order to not appear to be behind intel. this trick - of doing more at a lower speed - will keep them in the game for a while.
but, if intel and AMD don't come out with a RISC-based (or VLIW or other parallel-instruction) processor soon, they'll pay the price. intel bought up that company that did the x86-to-DEC-Alpha JIT assembly translation stuff (back in the 1990s) so i know that they have the technology to keep things "x86-like".
Re: (Score:2)
Someday linux devs will resign themselves to the fact that linux is (somewhat) great for servers and terrible for almost everything else.
Remind me how being the core of the most-used mobile operating system is "terrible for almost everything else"
Re: (Score:3)
Re: (Score:2, Informative)
How is an iPhone better? Having used both an iPhone and iPad I was far from impressed. The thing I was mostly impressed about was that I had great difficulty trying to use it, because it takes a while to figure out that you actually can't do a lot of very basic computer operations as it's very dumbed down (file system access for one -- what the hell?)
Re: (Score:2)
I don't think you're trolling, just living in the early 2000s. If Intel doesn't release ATOM power settings, no big deal. Easily avoided. Atom 64-bit is still a few years away until practical implementations are made. Atom is their answer to a bridge between ARM and fat Xeons. As a design, it's a compromise, and is unlikely to live long.
That Intel doesn't release specs is a questionable assertion. That we care is also questionable. It's more of the Intel-Microsoft boogie-man propaganda meme that's rooted in
Re: (Score:2)
Let me correct: ARM 64-bit is still a few years away.
Re:oversimplified (Score:4, Funny)
Careful, there's a bit of the Bill Gates in your statement.
Re: (Score:2)
Re:oversimplified (Score:4, Interesting)
This is just a rant session about Atom. Someday linux devs will resign themselves to the fact that linux is (somewhat) great for servers and terrible for almost everything else. This will probably get modded as trolling but if I said the opposite thing about MS it would be insightful. In my opinion this entire article is trolling.
Well, excuse me for living.
I boot up Windows for 3 reasons:
1. Tax preparation
2. Diagnosing IE-specific issues
3. Flight Simulator (Yes, I know, there's a flight simulator for Linux, but I like the MS one OK)
Mostly the Windows box is powered off, because those are infrequent occasions and I'd rather not add another 500 watts to the A/C load. All of the day-to-day stuff I do to make a living is running on Linux. If for no other reason than the fact that I'd have to take out a second mortgage to pay for all the Windows equivalents of the databases, software development and text-processing tools that come free with Linux. Or in some cases, free for Linux.
If you said "Linux is terrible for almost everything else" and gave specific examples, you'd be insightful. Given however, that I'm quite happy with at least 2 of the "everything else"s (desktop and Android), lack of specific illustrations makes you a troll.
Re: (Score:2)
I can't vouch for the GPU or hefty graphics card you're using, but you could do the same thing with lots of heft for under 150watts, including the freaking LCD monitor. There are a number of nice, high stroke CPUs out there that don't need a 500w supply.
Yes, Flight Simulator is a pig. But it seems to like AMD's math over AMDs, and in terms of graphics, I haven't kept up, so can't address that particular piece. Nonetheless, 500w can power a decent electric scooter.
Re: (Score:2)
I can't vouch for the GPU or hefty graphics card you're using, but you could do the same thing with lots of heft for under 150watts, including the freaking LCD monitor. There are a number of nice, high stroke CPUs out there that don't need a 500w supply.
Yes, Flight Simulator is a pig. But it seems to like AMD's math over AMDs, and in terms of graphics, I haven't kept up, so can't address that particular piece. Nonetheless, 500w can power a decent electric scooter.
Forgive me. I haven't actually measured the true wattage. I don't have gamer hardware on this box - it's just a bog-standard small form-factor desktop with a mobo graphics card that's so crappy that I've been known to use Remote Desktop in from the Linux box just to preserve my eyes.
All I can say for certain is that it can make the office nice and toasty when I've already got my regular equipment running. Good for January. Not so good for July.
I'd probably do better making a VM out of it and running it on t
Re:oversimplified (Score:5, Informative)
You don't know anything about Linux. It powers all RISC / ARM based Android smartphones. It also runs on more than 33 different CPU architectures. A huge number of those platforms are embedded systems that are probably sitting in your living room and enabling you to watch TV, DVDs, Blu-ray, etc. as well as listen to it all in Surround Sound [wikipedia.org].
To be blatantly honest, I haven't quite figured out if it is you that is trolling, or you are really just that ignorant of the facts.
Re:oversimplified (Score:4, Informative)
Oh, get off it. If I want to personally attack him, I will do more than ask "Wasn't that when Linus was working for Transmeta?".
Linus was obviously not the only one there who believed they could get more performance out of the architecture than they actually got.
Windows developers (Score:2)
The details of Clover Trail's power management won't be disclosed to Linux developers.
So sign up as a Windows developer, get the info, and use it to improve Linux.
Re: (Score:3)
The "info" is "just use it on Windows 8 with our great modified kernel".
Short term vs Long term thinking (Score:5, Interesting)
Some here were immediately crying anti-trust and not understanding why Intel won't support Linux for Clover Trail. It's not an easy answer, but power efficiency has been Intel's weakness against ARM. If consumers had a choice between an ARM-based Android tablet and an Intel-based one, the Intel one might be slightly more powerful in computing but comes at the cost of battery life. For how most consumers use tablets, the increase in computing isn't worth the decrease in battery life. For geeks, it's worth it, but general consumers don't see the value. Now if the tablet used a desktop OS like Windows or Linux, then the advantages are more apparent; however, the numbers favor Windows, as there are more likely to be desktop Windows users with an Intel tablet than desktop Linux users with an Intel tablet. For short term strategy, it makes sense.
Long term, I would say Intel isn't paying attention. Considering how MS have treated past partners, Intel is being short-sighted if they want to bet their mobile computing hopes on MS. Also have they seen Windows 8? Intel based tablets might appeal to businesses but Win 8 is a consumer OS. So consumers aren't going to buy it; businesses aren't going to buy it. Intel may have bet on the wrong horse.
Re:Short term vs Long term thinking (Score:5, Insightful)
Oh, you're right. A company the size of Intel couldn't possibly spare one or two people for a few weeks to get support for their new power management into Linux.
Sorry Bruce, but that is total nonsense. (Score:5, Insightful)
"ARM ends up being several times more efficient than Intel"
Wow. Someone suffered a flashback to the ancient CISC vs RISC wars.
This is really totally out to lunch. Seek out some analysis from actual CPU designers on the topic. What I read generally pegs the x86 CISC overhead at maybe 10%, not several times.
While I do feel it is annoying that Intel is pushing an Anti-Linux platform, it doesn't make sense to trot out ancient CISC/RISC myths to attack it.
Intel Chips have lagged because they were targeting much different performance envelopes. But now the performance envelopes are converging and so are the power envelopes.
Medfield has already been demonstrated at a competitive power envelope in smartphones.
http://www.anandtech.com/show/5770/lava-xolo-x900-review-the-first-intel-medfield-phone/6 [anandtech.com]
Again we see reasonable numbers for the X900 but nothing stellar. The good news is that the whole "x86 can't be power efficient" argument appears to be completely debunked with the release of a single device.
Re: (Score:3)
If it were possible to give a +6, then your post would deserve one...
One other thing about the pro-ARM propaganda on this site practically every day: How come the exact same people throwing a hissy-fit over Clover Trail never make a peep when ARM bends over backwards to cooperate with companies like Nokia & Apple whose ARM chips don't work with Linux in the slightest? By comparison, making a few tweaks to turn on Clover Trail's power saving features will be trivial compared to trying to get Linux running
Re:Sorry Bruce, but that is total nonsense. (Score:4, Informative)
ARM does not make their own chips. They design the instruction sets and the silicon photo masks (look up how chips are made), but other companies make the actual physical silicon product. Those companies can pick and choose what parts of the CPU they want to use and what instruction sets they want in it.
To use food as an analogy, Intel is every store or restaurant where you can buy food pre-made and ready to eat. ARM would be like someone selling a recipe to you: it's up to you to make it, and what you put into it.
So it's not ARM's fault for not supporting Linux on the Nokia and Apple variants of the ARMv7 instruction set. It's those respective companies. So if you had enough money and access to either rent or own a CPU fab plant, you too could make your own version of an ARM chip and make it supported only on Haiku OS, for example.
Re: (Score:2)
Thanks for posting that. The article felt nothing like a hit piece against all things Intel and AMD just because they're not officially supporting one processor on Linux at the time of release. Intel is very good at releasing Linux drivers for their GPUs etc. compared to others. I think they figure that too many Linux folks won't be falling over themselves buying Windows 8 touch tablets and running Ubuntu on them. The Slashdot consensus seems to be that Windows 8 tablets suck and will be a massive failure,
Re: (Score:3)
Re: (Score:3)
Intel in the 90's was performance at any power cost. Then in the last 10 years, it was performance within a limited power envelope, aiming at laptops and desktops. The power they were aiming at was much higher than smartphones, so although they got more "power efficient", you do very different things when aiming at 1W than when aiming at 10W or 100W. If you can waste 5W and get 20% more performance, that's a great thing to do. But not for phones.
I think what you're seeing is Atom was a kludge. If Intel
x86 to blame? (Score:5, Insightful)
Is it really true that x86 is necessarily (substantially) less efficient than ARM? x86 instruction decoding has been a tiny part of the chip area for many years now. While it's probably relatively more on smaller processors like Atom, it's still small. The rest of the architecture is already RISC. Atom might still be a bad architecture, but I don't think it's fair to say x86 always causes that.
Also, there is exactly one x86 Android phone that I know of, and while its power efficiency isn't stellar, the difference is nowhere near 4x. From the benchmarks I've seen, it seems to be right in the middle of the pack. I'd really like to see the source for that claim.
Re: (Score:2)
I don't understand why people put so much weight on instruction-level compatibility. As if compiler technology does not exist. Heck, even today compilers can translate efficiently from one instruction-set to the other (see e.g. virtual machines, emulators, etc).
Granted, there will always be some parts of code (the "innermost loops") that need to be handcrafted to be as efficient as possible, but I don't believe this is so important to base your whole roadmap on as a semiconductor design house.
Re: (Score:2)
Yeah, for really embedded stuff ARM is much better suited, because it is a lot simpler. I don't think it's possible, or would make sense, to squeeze the whole x86 legacy baggage into e.g. a tiny uC with a few KB of SRAM and still get decent performance and features. But I don't think this is what Intel is aiming at.
Linux-proof? Challenge Accepted! (Score:2)
Intel will fail at mobile (Score:4, Interesting)
.. and the reason is not efficiency or performance.. Intel enjoys huge (50%+) margins on x86 CPUs that simply will not be tolerated by the tablet or mobile device vendors. Contrast this with the pennies that ARM and their fab partners make for each unit sold. Even Intel's excellent process tech can't save them cost wise when you can get a complete ARM SoC with integrated GPU for $7. [rhombus-tech.net]
RISC vs CISC, really? (Score:5, Informative)
So this OS-specific chip is nothing new, and *nix exclusion is not new. Many microcomputers could not run *nix because they did not have a PMMU. The AT&T computer ran a 68K processor with a custom PMMU. Over the past 10 years there have been MS Windows-only printers and cameras which offloaded work to the computer to make the peripheral cheaper.
Which is to say that there are clearly benefits for RISC and CISC. MS built an empire on CISC, and clearly intends to continue to do so, only moving to RISC on a limited basis for high end, highly efficient devices. For the tablet for the rest of us, if they can ship MS Windows 8 on a $400 device that runs just like a laptop, they will do so. If efficiency were the only issue, then we would be running Apple-type hardware, which, I guess, on the tablet we are. But while 50 million tablets are sold, MS wants the other 100 million laptop users who do not have a tablet yet, because it is not MS Windows.
In other words... (Score:2)
In other words, Intel says they failed at hiding their power consumption details from the API (instruction set).
Only benefits.. (Score:3)
The only advantages x86 has over ARM are performance and the ability to run closed source x86-only binaries...
Performance is generally less important than power consumption in an embedded device, and this CPU is clearly designed for lower power use so it may not be much faster than comparable ARM designs...
And when it comes to x86-only binaries, there is very little linux software which is x86 only and even less for android... Conversely there are a lot of closed source android applications which are arm-only... So at best you have a linux device which offers no advantages over ARM, at worst you have an android device which cannot run large numbers of android apps while costing more, being slower and having inferior battery life.
Windows on the other hand does have huge numbers of apps which are tied to x86, which for some users may outweigh any other downsides. On the other hand, most windows apps are not designed for a touchscreen interface and might not be very usable on tablets, and any new apps designed for such devices might well be ported to arm too.
Re: (Score:2)
You want a good x86-only Linux program? Wine. There's a good one for you.
Re: (Score:2)
2 things: with the advent of smartphones that play 3d games - CPU performance is becoming more important. Also, that ability to run closed source x86 binaries is huge.
In terms of performance per watt, intel is doing pretty well. Phones and tablets are becoming less about absolute minimum consumption, and mor
Reality check (Score:4, Interesting)
If nobody wants it and it's a dead-end for technical and business reasons, then how come there is a slew of x86 Win8 devices announced by different manufacturers - including guys such as Samsung, who don't have any problems earning boatloads of money on Android today?
Heck, it's even funnier than that - what about Android devices already running Medfield?
Re:Reality check (Score:4, Insightful)
In geekland, Nobody == Nobody I Know.
Re: (Score:3)
hah (Score:2)
I predict clover trail will be a roaring success.
Don't bet on it being so much more efficient (Score:2)
> It had better, since Atom currently provides about 1/4 of the power efficiency of the
> ARM processors that run IOS and Android devices.
Don't bet on it. The ARM design in itself is more efficient for sure, but Intel are frankly well ahead of anyone else in actual manufacture.
If they decide to build these with their Finfets and the latest node they have, then the gap between Intel Atoms and ARMs made at Samsung, TSMC or anyone else won't be so noticeable, unless that is that the Atoms actually pull ah
backroom agreement? (Score:2)
"The details of Clover Trail's power management won't be disclosed to Linux developers." ...Perhaps this is because Microsoft is helping to fund development of the Intel solution behind the scenes? Perhaps they have worked out an agreement of some sort to prevent Linux from finding its way onto the chip.
I would like to know why any information would be withheld from Linux developers--the only reason I could imagine for doing so would be to help Microsoft stage a lead on use of the chip. I can think of no go
Here is what Intel says (Score:3)
This is from an Intel rep:
There is no fundamental barrier to supporting Linux on Clover Trail since it utilizes Intel architecture cores, we are simply focusing our current efforts for this Clover Trail product on Windows 8. Our Medfield products support Android-based smartphones and tablets on the market today, and we may evaluate supporting Linux-based OSes on other tablet products in the future.
Just quoting, believe what you want.
My hope... (Score:3)
...is that this will fail miserably and cost enough that other manufacturers will think twice before accepting bribes from Microsoft for making something that actively shuts out non-Windows OS's.
Waiting for the day... (Score:3)
I'm just waiting for the day when I can get an ARM-based mid-high-end PC and expect it to run all the applications and games I currently expect from an x86_64 CPU. It's becoming apparent (to me, at least) that ARM is a much better kind of CPU than x86 derivatives, so naturally, I want one--so long as it doesn't put me in the same boat as Mac users were in 10 years ago.
Comment removed (Score:4, Funny)
Re:Visual Studio is great, but what about MyCleanP (Score:4, Funny)
This is better:
http://fix-kit.com/Explosive-diarrhea/repair/ [fix-kit.com]
http://fix-kit.com/Assassination-of-reigning-monarch/repair/ [fix-kit.com]
Finally, downloadable software for Windows that'll cure just about anything!
I imagine /. readers are savvy enough to realise that the site is a scam, and that downloading their software is akin to having unprotected sex with third-world prostitutes.
ARM is not RISC and x86-64 is not CISC (Score:5, Informative)
Getting back on topic: the latest ARM architecture, ARMv8, is far from what was called "RISC" back in the '70s. E.g. it can run instructions of different sizes (16 vs 32 bit), it has 4 specialized instructions for AES, registers with different sizes (32, 64 and 128 bits), instructions for running a subset of the Java bytecode, a rich set of SIMD operations and specialized instructions for SHA-1 and SHA-256.
Similarly, the architecture supported by the new Atom chips (which is AMD64/x86-64 BTW; IA32 is only present for backward compatibility) is almost universally implemented on RISC-like cores with instruction translators in front. Considering that the increased density of the x86-64 instructions usually saves more transistors in cache than the decoders themselves require, I think that the power consumption differences we see are due more to the implementation and the different traditional focus areas of ARM vs Intel/AMD than to inherent differences in the instruction sets.
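As a small illustration of how far ARMv8 is from minimal textbook RISC, the AES round primitives mentioned above are exposed as single instructions through NEON intrinsics; this sketch assumes a toolchain with arm_neon.h and the crypto extension enabled (e.g. -march=armv8-a+crypto):

#include <arm_neon.h>

/* One AES encryption round using the ARMv8 crypto instructions:
 * AESE = AddRoundKey + SubBytes + ShiftRows, AESMC = MixColumns.
 * Illustrative only -- a full AES implementation also needs key
 * expansion and a special-cased final round. */
uint8x16_t aes_round(uint8x16_t state, uint8x16_t round_key) {
    return vaesmcq_u8(vaeseq_u8(state, round_key));
}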
Re:ARM is not RISC and x86-64 is not CISC (Score:5, Interesting)
Re:ARM is not RISC and x86-64 is not CISC (Score:5, Insightful)
None of today's "RISC" processors are what John Mashey was designing when RISC was introduced.
I agree (and wrote in the article) that ARM has complicated their own architecture, and that Atom uses a RISC-like processor and instruction translation. However, backward compatibility with all of the generations of x86 still increases the complexity of Atom quite a lot.
Thumb (ARM's 16-bit instruction set) is itself an instruction translator to the 32-bit opcodes, adding fixed or default operands for many of the instructions.
The SIMD instructions used by Intel, AMD, and ARM go back to Pixar's CHAP compositing hardware in the 80's.
None of this would have been in a Stanford MIPS.
Re:ARM is not RISC and x86-64 is not CISC (Score:4, Interesting)
So you decided that the best way to get this point across to the Slashdot denizens (who never read the article) is to NOT ONCE mention the weaknesses of ARM or the strengths of x86 in your summary.
Never in all my years reading Real World Tech have I ever seen a thread or article absolutely decide the question of "which instruction set is better?" between ARM and x86 (and some of the biggest industry heavyweights weigh in on those discussions). Does better code density trump better compiler optimization flexibility? And does it even matter when ARM introduces out-of-order execution in mainstream cores like the A9, and Intel keeps Hyper-Threading attached to every new Atom core to deal with blocking?
So just because you feel slighted you write this fluff piece? Bruce, you shouldn't say things that aren't true just because you didn't get what you want, and because this "locked-down" tablet ecosystem is quickly taking away the free software community's free-ride.
And you can keep pretending all you want that Intel can't compete with ARM on performance/watt or price. I guess you weren't paying attention when Intel released the Atom Z2460 phone platform, with competitive performance and battery life [anandtech.com]? The Xolo x900 has a street price of around $400, so you can bet Intel is charging a tiny fee for that chipset.
Tell me again why Intel can't compete?
Re:ARM is not RISC and x86-64 is not CISC (Score:5, Insightful)
I didn't write the summary posted on Slashdot. My summary (it's probably still in the "firehose" section) was one line. The Slashdot editor just scraped the first few paragraphs of my article. You can tell the number of people who actually read my article by the discussion of PowerVR graphics. There isn't one.
Intel's competition with ARM right now is like a doped race-horse. They are hiding the problems of their architecture by using a semiconductor process half the size of the competition. Given equal FABs, we wouldn't see Intel as competitive.
Re:ARM is not RISC and x86-64 is not CISC (Score:5, Insightful)
Given equal FABs, we wouldn't see Intel as competitive.
Intel has had a fab advantage for years, and it's only getting bigger. Ask AMD how it feels - AMD made nice gains with K8 while Intel had uarch problems (Itanium+P4), but as soon as Intel fixed that (Core2/Nehalem/Sandy/Ivy), AMD felt the pain of their fab advantage all over again, and now AMD has uarch problems AND fab disadvantage.
Saying "given equal FABs" is a ridiculously stupid way to analyze the processor market. Real chips are what people buy, not some hypothetical ARM A15 produced on Intel's 22nm FinFET or an Atom produced in TSMC 28. If you want to talk about microarchitecture, sure, take process out of the equation. But people don't buy microarchitecture, they buy a final product. Fab advantage allows Intel to hide their uarch problems until they fix them. When the next-gen Atom (Silvermon/Valleyview) comes out, then Intel won't have uarch problems AND they will still have a massive fab advantage.
Re: (Score:3)
And yet those are your words un-edited (aside from the first paragraph, where they inserted a link)! The incendiary TITLE, and the second and third paragraphs of the article summary are stripped direc
Re: (Score:3)
Just because they can compete in a certain market doesn't imply that they'd necessarily want to! There are more factors involved than just market-share.
Like I mentioned in the Hondo thread, if Intel & AMD wanted Android to be the target market of their tablet CPUs, they'd have to price it competitively w/ umpteen ARMs out there, and erode their margins. Given a choice, which company would like to do that? Here, they see an opportunity w/ Windows 8 tablets, where they can promote their tablets to run
Re: (Score:3)
Re: (Score:3)
ARM is vastly far away from CISC and very close to RISC. The instruction sets are very regular, decoding is simple, operations are orthogonal, one instruction issue per cycle, etc. Save all those decoding transistors and save power, or use them for performance instead.
Re:Visual Studio is great, but what about MyCleanP (Score:4, Informative)
Furthermore, a distinguishing feature of CISC vs. RISC is the number of general-purpose registers. RISC always tried to do everything in registers and treat RAM as an I/O device, instead of stuff like "load the accumulator with a value from RAM and write it back to RAM" or "load this register with this value from RAM, multiply it with the value in this register, then store it back to RAM" - there are many instructions like this in CISC architectures that encourage treating RAM as just as good for temporary storage as registers - which, of course, it hasn't been for a long time now.
Intel has become more RISCy with MMX/SSE and now with the amd64 extensions that give it 8 more general purpose registers.
Re: (Score:3, Insightful)
Visual Studio
Please, please, please, stay on Windows, we don't need your Microsoft-infected minds spreading their diseases to other systems.
Re: (Score:3)
Mods, bugger off! How can 437 be a troll?
Plus, he's right. There is no need for VB compiled for other systems. And no, there is no need for C-code compiled with VC on other systems.
Now mod me down as well, please, to be consistent.
Re: (Score:2)
Except MS office, which most of the corporate/academic world still uses for everything. Yes, Libre Office can do everything as well or in many cases better, but that doesn't matter when someone sends you a pptx file that Impress mangles into an unreadable smear.
Re:The Year of Linux on Desktop Is Now (Score:5, Insightful)
So does it matter when someone sends you a .pptx file that Office 2003 freezes on? Yeah, yeah, I'm pretty sure you can get a converter, but I like telling people that if their file has an 'x' in the extension it means that it's 'experimental' and they shouldn't send it to others. They need to send the version without the 'x'.
Re:The Year of Linux on Desktop Is Now (Score:4, Insightful)
For me, the year of linux on desktops is now. With Steam coming to Linux [steampowered.com], along with Crossover and pure Linux-ported games, the inevitable has happened. I'm glad Visual Studio [microsoft.com] also runs perfectly on Wine (I'm also making sure to have a party with my friends on Visual Studio 2012 Virtual Launch Party, where thousands of geeks around the globe connect together to party the release of latest Visual Studio).
A bit of "linux on the desktop" ass-licking, followed by a big, fat Visual Studio plug.
Ladies and gents, we have a shill. A very smart one, but a shill none the less. Modded up by a few other plants, no doubt.
HTML Media Capture is not widely supported (Score:2)
The world's most popular applications run in browsers, not desktops.
So if I want to include microphone, camera, or gamepad support in something that I intend to become one of "[t]he world's most popular applications", what API should I use? Among desktop browsers, neither IE nor Firefox nor Safari supports HTML Media Capture, and nothing mobile supports it at all.
Re:The Year of Linux on Desktop Is Now (Score:4, Funny)
Re:The Year of Linux on Desktop Is Now (Score:5, Insightful)
Re: (Score:2, Informative)
Oh, and wait until one of them has a Stea
Re: (Score:2)
What? Valve has multiple teams on different projects. I can't believe you would even post this--hardware people aren't necessarily software people. And if they're doing hardware, how's it gonna run? Oh right, you need software as well. Durr.
Re:Blast in time (Score:5, Informative)
Hell, I remember using an Archimedes in 1988. Odd to think that my phone now has four of them.
Back to the topic, the border between RISC and CISC is a bit fuzzy these days. Every modern CISC chip is basically a dynamic translator on top of a RISC core. But even high-end ARM chips can do some of this with Jazelle.
To be fair, CISC does have a few performance advantages when power consumption isn't (as big) an issue. The code density is better on x86 (yes, even with Thumb), which does mean they tend to use instruction cache more effectively. ARM chips generally don't do out-of-order scheduling and retirement; that uses a lot of power, and is the main architectural difference between laptop-grade and desktop/server-grade x86en.
I'd like to see what a mobile-grade Alpha processor looks like. But I never will.
Re:Blast in time (Score:5, Informative)
Every modern CISC chip is basically a dynamic translator on top of a RISC core.
And that's the problem for power consumption. You can cut power to execution units that are not being used. You can't ever turn off the decoder (except in Xeons, where you do in loops, but you leave on the micro-op decoder, which uses as much power as an ARM decoder) because every instruction needs decoding.
But even high-end ARM chips can do some of this with Jazelle.
Jazelle has been gone for years. None of the Cortex series include it. It gave worse performance than a modern JIT, but in a lower memory footprint. It's only useful when you want to run Java apps in 4MB of RAM.
The code density is better on x86 (yes, even with Thumb), which does mean they tend to use instruction cache more effecitvely
That's not what my tests show, in either compiled code or hand-written assembly.
Re: (Score:2)
Seems you have not actually been keeping up with the ARM architecture. They have had out-of-order execution since the debut of the Cortex A9.
Re:Blast in time (Score:4, Informative)
Everyone has a RISC-style core nowadays because RISC essentially won. People don't understand what RISC was all about though; they tend to think it's about instruction set complexity or microcoding. No, the concept is about putting the CPU resources in places where it matters, eliminating less useful parts of the processor, discarding the accepted design wisdom of the 70s, etc. RISC wasn't even that new or radical an idea, except for the big machine makers. Seymour Cray was using some RISC concepts before the RISC term was invented.
If power consumption is not an issue then code density most likely is not an issue either. Take the space taken up by the decoder and use it for a larger instruction cache or buffer instead. The sole reason the complex decoder is there is for instruction sets that were designed to be hand written by humans. The x86 instruction set was absolutely not created with performance in mind; it was designed as a long series of backwards-compatible incremental changes, starting from the original 4004 chip. Every chip since then in the ancestry kept some compatibility to make code easier to convert to the new processors.
Yes, it is true that back in the 70s and 80s, when this stuff was new, memory was very small and very slow and very expensive. RISC came about in the era when memory stopped being the most expensive part of a computer.
Re: (Score:3)
Nice marketing talk. So was the VAX (most of them anyway - I think the VAX9000 was a notable exception) I mean it had this hardware instruction decoder, and it did simple instructions in hardware, and then it slopped all the complex stuff over onto microcode. In fact most CISC CPUs work that way - in the past all of the "cheap" ones did, and now pretty much all of th
Re: (Score:2)
That situation existed in the early 1990s/late 1980s when the terms CISC and RISC were invented. The x86 existed and was CISCy on the outside and microcoded inside. The VAX was the same. The arguments were never "you can't implement CISC internally the same as a RISC" because they were all already done that way. It was "if you avoid X, Y and Z in your programmer visible instruction set you don't need all that cruft in the chip". What makes something RISC or CISC was originally all about the instruc
Re: (Score:3)
No, but the way ia32 is binary compatible with the 16 bit x86 code from the 1970s makes it relevant. You still have to handle AL and AH as aliases to AX. Ask Transmeta how much of a pain that was (hint: that is a big part of why their x86 CPU ran windows like a dog...the other part being they benchmarked Windows apps too late in the game to hit the market with something that efficiently handled the register aliases)
Re: (Score:3)
IMHO, it looks and smells like MSFT having signed a deal with Intel and AMD to lock down the x86 tablet platform to Win8. This complements ni