Intel's Plans For X86 Android, Smartphones, and Tablets 151
MrSeb writes "'Last week, Intel announced that it had added x86 optimizations to Android 4.0, Ice Cream Sandwich, but the text of the announcement and included quotes were vague and a bit contradictory given the open nature of Android development. After discussing the topic with Intel we've compiled a laundry list of the company's work in Gingerbread and ICS thus far, and offered a few of our own thoughts on what to expect in 2012 as far as x86-powered smartphones and tablets are concerned.' The main points: Intel isn't just a chip maker (it has oodles of software experience); Android's Native Development Kit now includes support for x86 and MMX/SSE instruction sets and can be used to compile dual x86/ARM 'fat' binaries; and development tools like VTune and the Intel Graphics Performance Analyzer are on their way to Android."
x86 (Score:5, Insightful)
Since x86 hardware keeps getting smaller and most smartphones keep getting bigger, they are bound to meet somewhere.
hmm, I guess it will be called a tablet or an i(ntel)Pad. ehm ehm
Re: (Score:2)
Can you say "Windows 8" for phone?
Re:x86 (Score:5, Funny)
Not while keeping a straight face, no.
Re: (Score:2)
Didn't Ballmer just threaten everybody with something along the lines of "We'll always live in a Windows era"?
Or was it more of a warning?
Re: (Score:2)
No, it was just Ballmer admitting that Microsoft is a WINDOWS company. Windows is the one product they have; everything else is built around WINDOWS, for WINDOWS. Android and iOS must scare the crap out of them, because we're fast approaching the era where Windows only runs on corporate computers, and everyone else is running Android or iOS.
Re: (Score:3)
Is that close enough?
Re: (Score:2)
LoB
Re: (Score:2)
Resistance is futile.
They said the same thing about Vista.
Re: (Score:2)
And they were right, as Vista 1.1, also known as 7 won everyone's hearts.
Re: (Score:2)
Re: (Score:2)
But resistance WAS futile. 7 was a repackaged Vista that retained most, if not all, of its flaws. It just offered a slightly different presentation under a different name.
UAC halting your entire system for pretty much everything? Still there. Programs breaking due to admin-rights requirements on a machine where I specifically fucking want to have admin rights while running them? Still there. Inability to roll back to the classic menu? Still there. Incompatibility issues with older software? Still
Re: (Score:2)
I rest my case.
Heh heh heh. Touché. BTW, I use Linux and they'll pry it from my cold dead fingers.
*runs off laughing maniacally...
Re: (Score:2)
No BF3 for you, linux boy! :D
Re: (Score:2)
No, even in linux land, they still call what you're doing "anon trolling". Because the only reason to truly make UAC workable for someone who doesn't have his head up his ass (read: doesn't need added security) is to disable it. Even at minimal settings offered by windows, it still breaks a lot of software by forcing it to run as limited user by default, and still forces you to run essentially all older software as admin and wastes your time on top of it by making you make separate shortcuts for each indivi
Re:x86 (Score:4, Interesting)
Given the choice, everyone who actually has to code for those CPUs (e.g. compiler makers), without a doubt prefers ARM over x86. Simply because of how shit x86 is.
It's the Windows ME of machine code. It started out as a DOS, and kept the cruft all the way to today, while piling bigger and bigger stuff on top. Ending up with an upside-down pyramid, held in balance by a billion wooden sticks.
And I know that even Intel itself couldn't stand it anymore. That's why they implemented that microcode solution with a RISC processor on the inside.
If only they would give us direct access to that core, but leave the microcode in there for 1-2 processor generations for legacy reasons.
Then nobody would willingly stick with x86, and before those two generations were over, it would be locked away and forgotten.
I, for one, plan a 256-core ARM CPU as my next desktop system. (Yes, ARM cores are slower per clock cycle. But they are *a lot* more efficient and *a lot* cheaper too. [No, Atom does not count, unless you add that northbridge that's so big and gets so hot that, looking at the mainboard, 10/10 people think it's the actual CPU. Which is closer to the truth than Intel would ever want to admit.])
Re: (Score:3)
Then run it in emulation (Score:2)
You forget too easily that many people depend on this legacy code to run software worth thousands or even millions of dollars.
Then keep your legacy code and run it in an emulator on an ARM CPU. The legacy code was probably written so long ago that it'd run as fast in a JIT emulator today as it did natively then.
Re: (Score:2)
Games tend to use dirty I/O tricks (Score:2)
Re: (Score:3)
Re: (Score:2)
Re: (Score:2)
Precisely. The reason AMD was able to succeed with its architecture over Intel's was that Intel's 64-bit architecture at the time required all software to be specially compiled to run on Merced, whereas AMD64 was backwards compatible and allowed people to buy the chip and update to 64-bit when needed, or just run some applications in 32-bit mode.
In this case I have no idea why anybody other than Intel would think this would be a good idea as the reason why Intel was using an ARM based XScale proc
Re: (Score:3)
What i would like to see is a CPU architecture that can have asymmetric cores:
When the machine is idle, one low-power core handles the OS idle functions while another handles the IP stack, another core handles I/O, and another handles the hypervisor aspect.
When the machine is running database stuff, first cores that are made for integer operations get used, then the FPUs and GPUs come in.
Flip to a game, and the cores that mainly are used as GPUs come into play.
Fire up a modeling task, and the FPU heavy core
Re: (Score:2)
Mod Parent Up!
Very interesting idea. We are going to have to add in more cores from now on to get more performance, might as well start specializing them for certain tasks. Your idea about x86 hardware emulation is especially interesting.
Re: (Score:2)
Sorry to double reply, but now that I think about it, what you are describing sounds a hell of a lot like a mainframe on a chip. IBM mainframes have Multi-chip Modules [wikipedia.org] that are a lot like what you are describing.
Re:x86 (Score:4, Informative)
Similar to your design, the Tegra 3 ARM SoC does that. It has a quad-core A9 running at 1.5GHz or more, but it also has a "slow" core running at 600MHz or so. When things are idling, the slow core takes over and does the job while the hefty quadcores are powered off, saving tons of power.
Marvell I think also has a similar idea for their SoCs. And ARM's A15 design is supposed to incorporate that as well.
Re: (Score:2)
Re: (Score:2)
You are exactly right. This is why there should be cores that are high speed and high power for the tasks that cannot be broken down into bits to distribute. This way, low-energy cores take care of most tasks, while a task that cannot be distributed can be handed to the bigger cores which consume energy.
The more different types of cores available, the more flexible the architecture would be, and the better energy savings (in theory) can result.
Re: (Score:2)
Re: (Score:2)
Very true. I wonder if this can be done on a hypervisor level. If done right, the hypervisor can present the OS a dynamic amount of CPUs, depending on what processes are using what resources behind the scene, as well as rebind a task to a different core (say the task was using a lot of FPU, then swapped to needing mainly integer manipulations.)
It would work on the OS level too, but would take a revised scheduler to take advantage of it.
Re: (Score:2)
power consumption (Score:2)
I thought x86 was a power hog compared to ARM. That seems like a serious consideration for mobile devices to me. I'll be interested to see where this goes. In the meantime, x86 chips are going to have to get a lot cheaper to compete with ARM's prices.
Re:power consumption (Score:5, Insightful)
It is. The difference between an x86 and ARM core is around an order of magnitude at the moment for the same performance. But the difference between an x86 core and the display is another order of magnitude, so for devices that you mainly use with the screen on there isn't much difference between x86 and ARM in terms of overall power consumption. The difference in battery life between an ARM core at 200mW and an Intel core at 2W is very small when the display is using 10-20W. There are a few display technologies that are supposed to be hitting the market Real Soon Now that ought to make the difference between x86 and ARM a lot more apparent.
Re:power consumption (Score:5, Interesting)
I'd mod up your post, but I want to reply instead. Are you suggesting that the display uses 50-100 times the power of an ARM chip (and therefore 5-10 times an x86)? If that is true, that is very interesting. I did not realize the display was such an outlier in the power consumption department...
Re: (Score:2)
Are you suggesting that the display uses 50-100 times the power of an ARM chip (and therefore 5-10 times an x86)?
Yes, and this is why an e-ink Kindle reader lasts so much longer on a charge than, say, a Kindle Fire tablet.
Re: (Score:2)
An x86 Android tablet would make sense, since you could just turn it completely off when not in use, but an x86 phone would have a standby time shorter than your average summer blockbuster.
Re: (Score:3)
Re:power consumption (Score:5, Informative)
Is the display really that much of a hog on a cell phone? Those numbers sound like laptop numbers, but I thought we were talking cell phones.
My phone has a battery that holds around 1300 mAh at 3.7v. That means I can draw 4.8W for 1 hour. If my phone's display really sucked down even 10W, then I wouldn't be able to have the display on for more than about 28 minutes total, which doesn't match my experience at all. I regularly browse the web from my phone for a half hour at a time, without making much of a dent in the battery.
A quick scan through this paper [usenix.org] suggests backlight power for the phone they analyzed tops out at 414mW, and the LCD display power ranges from 33.1mW to 74.2mW. If you drop the brightness back just a few notches, the total display power is around a quarter Watt or so, which sounds far more reasonable.
I don't think Intel is standing still on power consumption. Their desktop CPUs are hogs, sure, but they can bring a lot of engineers to bear optimizing Atom-derived products. (We might get an early read from Knights Corner, actually, although I expect it to still be on the "hot" side. I'm waiting to hear more about it.) Also, ARM's latest high-end offerings (including the recently announced A15) aren't exactly as power-frugal as some of their past devices. In the next couple years, I think the scatter plot of power vs. performance for ARM and x86 variants will show a definite overlap in the mix, with some x86s pulling less power than some ARMs.
Re: (Score:2)
Is the display really that much of a hog on a cell phone?
Tablet screens draw four to ten times as much juice as smartphone screens because they have four to ten times the area.
Re: (Score:2)
Re: (Score:3)
I have my phone screen on for about 2-3 hours per day due to bus rides. According to the Android battery tracker, my display uses up around 60-70% of my battery for the day, and this is on a Nexus S with an AMOLED screen, which is supposed to use less battery than an LCD because it doesn't have to light up the black pixels.
The screen really is huge when it comes to battery consumption.
Re: (Score:2)
What are you doing on it during that time? The processor, baseband, and RF circuits also suck up a fair amount of juice. That PDF I linked above shows GSM consuming around 600mW during GPRS and WiFi consuming around 700mW when in use on the phone they analyzed. I'd expect other phones to be similar. 3G is supposedly much worse at draining batteries. Dunno about CDMA/LTE, but I would imagine they'd also be in the half-watt to 1-watt range, to venture a first-order guess.
If you're just playing games, then it's just
Re: (Score:2)
But you're missing the key thing here: just because you turned on WiFi or GPRS doesn't mean they're sucking down a constant 600 or 700mW apiece. Chances are they're very aggressive about power saving, so that when you're sending data (on either technology) the power use ramps up for a short time, while receiving data I'm sure uses a lot less power.
Given the example of loading something like google.com - the phone would have to send some data to request the page, but overall that amount of d
Re: (Score:2)
Well Android 2.4 actually has a battery meter that tells you exactly what is using up the battery. Most of the time
I'm just reading various articles offline, but that doesn't really matter because (unless I'm mistaken) the battery monitor on Android separates out the battery usage by WiFi, cell radios, etc.
The 60-70% I'm talking about is specifically from the "display" section in the battery monitor, which I assume only includes battery directly used by the screen.
My second biggest battery offender is usual
Re: (Score:2)
Oops. Android 2.3.7 to be correct, not android 2.4.
Re: (Score:3)
Despite what many other commenters will say, no, it isn't a power hog compared to ARM. Or at least it doesn't have to be. Intel/AMD/VIA don't yet offer processors with power draw as low as ARM's (although some are pretty power/performance efficient depending on your workload), but they will within the next year for smartphones and tablets. On modern manufacturing processes the "x86 tax" becomes almost non-existent.
Debian (Score:2)
Then I can play REAL games on my phone.. Or as real as they get in Linux!
Re: (Score:2)
Re: (Score:3)
Just give me a debian build for my phone including dialer, messaging, etc..
Then I can play REAL games on my phone.. Or as real as they get in Linux!
Games aren't real on Linux? Yeah, PenguSpy [penguspy.com] and Linux Gamers [linux-gamers.net] don't have real games, really written for real Linux. You know, like Quake 4, Doom 3, Vendetta, and X3 - those aren't real games... oh, wait.
And nevermind that wine [winehq.org] actually works really well, nowadays, running many top games "flawlessly, out of the box", and tons more "run flawlessly with some special configuration" [winehq.org].
If you're not a first-person shooter fan (Score:2)
You know, like Quake 4, Doom 3, Vendetta, and X3
Vendetta [wikipedia.org] is from 1991. It's like pointing out that Mega Man X3 runs in a Super NES emulator: interesting, and probably fun for a while, but not what grandparent had in mind. As for Quake and Doom, can you recommend things other than first-person shooters that commonly get ported to Linux, especially well-praised E or E10+ rated game series?
And nevermind that wine actually works really well
Only on x86 phones. Most existing smartphones are ARM; let me know when Atom phones start to come out. And even if you stick to games from the Pentium 4 era, knowing tha
Re: (Score:2)
Yeah, X3: Reunion [egosoft.com]. The game EVE-Online players go to [youtube.com] when they get bored with computer-controlled guns. [youtube.com]
Re: (Score:2)
Solitaire! Freecell! And maybe Chromium, mostly because people mistake it for the browser when using Synaptic.
Re: (Score:2)
There's a list here [penguspy.com] of a couple dozen free (as in beer and as in speech) games for Linux, many of which are really good.
This list [penguspy.com] is just the "very best" games, regardless of whether they are free or paid, open or closed source.
One of my favorite things about that site is the ability to filter by open/closed source, free/paid, and whether or not the game has been awarded a "Pengu's Choice". There are some really solid games out there, and many of the best ones run on Linux.
Please note that PenguSpy doesn't
Re: (Score:2)
I don't know if you ever used wine for gaming, but I certainly wouldn't call it flawless
I don't think he actually said that. What he said was that wine runs many games flawlessly not that wine itself is flawless. Subtle distinction but it is there.
Re: (Score:2)
Now name popular games for Linux not made by id Software or ported by Loki.
From the wine app database, that I linked in my previous post:
Final Fantasy XI.
World of Warcraft.
StarCraft I and II.
Guild Wars.
Team Fortress 2.
Left 4 Dead.
Counter-Strike: Source.
Warcraft III.
Half-Life 2.
These are from the list of "Platinum" support, which states as its description "Applications which install and run flawlessly on an out-of-the-box Wine installation". You can go here [winehq.org] for a list of 1,568 items listed as supported under wine with a rating of "Platinum", in the category "Games".
The "Gold" and
Re: (Score:2)
Intel Softcores (Score:5, Interesting)
While it is always nice to hear about companies contributing to opensource, I don't see there being a big demand for x86 android. Who would use it? It's not low power enough for most tablets/phones. And while the ability to run existing x86 apps is nice they are mostly tied to Windows which is also not likely to see much traction in the mobile space. So what is the point?
What I would like to see is Intel creating a SoC and softcore suite. Intel has some big advantages that they could use to seriously compete:
1) Lots of experience in chip design. I don't see why they can't create an ARM-Core competitor.
2) They can start from scratch. Unlike ARM, there is no need for legacy support or backward compatibility.
3) They have in house designers for everything from graphics, wired, wireless, etc. chips. I don't see why they cannot design from this a whole suite of modules that work on their SoC platform.
4) They have (to my knowledge) the best chip fab plants in the world by a sizable margin. Die shrinks offer a great way to reduce power consumption.
5) They have produced great x86 compilers for years, so producing a new compiler for a new chip shouldn't be too difficult since they are already experienced with x86 and Itanium.
6) They have shown that they already know how to support Android.
7) They have the cash and business partners to make it work.
I'm not saying they are guaranteed to make big bucks. Fighting an entrenched ARM with wide industry support will be hugely difficult. But if any company can do it, it's Intel. Of course, this means they would have to get over the Itanic debacle and stop trying to shove x86 down the throat of every problem.
Re: (Score:2)
Intel has loads of experience getting the creaking x86 architecture to work in the modern world; ARM, however, is much newer and has far fewer layers of cruft. Intel has not shown an ability to throw all that away and start from scratch (which is what we really need).
Re:Intel Softcores (Score:5, Informative)
What I would like to see is Intel creating a SoC and softcore suite
They did that, what, 18 months ago now? Total number of people who licensed it: zero. Why? Because x86 absolutely sucks for low power.
Lots of experience in chip design. I don't see why they can't create an ARM-Core competitor
Ah yes, all those massive commercial success stories that Intel has had when it tried to produce a non-x86 chip, like the iAPX, the i860, the Itanium. The closest they came was XScale, and they sold the team responsible for that to Marvell.
They can start from scratch. Unlike ARM there is no need to legacy support or backward compatibility.
Intel has two advantages over their competition: superior process technology and x86 compatibility. Your plan is that they should give up one of those?
They have produced great x86 compilers for years, so producing a new compiler for a new chip shouldn't be too difficult since they are already experienced with x86 and Itanium
Hahahaha! Spoken like someone who has never been involved with compiler design or spoken to any compiler writers. Tuning a compiler for a new architecture is not a trivial problem.
Re: (Score:2)
>> They have produced great x86 compilers for years, so producing a new compiler for a new chip shouldn't be too difficult since they are already experienced with x86 and Itanium
> Hahahaha! Spoken like someone who has never been involved with compiler design or spoken to any compiler writers. Tuning a compiler for a new architecture is not a trivial problem.
Having worked on a C/C++ compiler for the PS3 & PSP, I concur! Compiler writing is non-trivial. Sure, you can get 80% of the way there, b
Re: (Score:2)
I should also add that Intel has a history of designing chips that are a complete bitch to write compilers for and for having hardware and software teams that never talk to each other. The most famous example was the iAPX, which was designed for object oriented programming but without talking to any compiler writers, so it ended up requiring a 200-instruction sequence to do one of the most common operations in an object-oriented language. The Itanium is legendary for being impossible to target. The hardw
Re: (Score:2)
The ultimate nerd tablet would be nice... Triple boot Linux, Android or Windows depending on what you want to run.
Re: (Score:2)
Re: (Score:2)
It's not as if "x86" means much from an architectural standpoint.
But it does. Intel/AMD do lots to make the architecture efficient but they are significantly constrained by having to meet the x86 at the highest level for compatibility.
First, there is a ton of legacy stuff in x86 that is just not needed, making the core larger and more power hungry. Take a look at how the floating point works; it's just dreadful.
x86 is CISC when we know RISC is better. Intel/AMD do some tricks to make the core more RISC-like, but why not just cut out the middleman? Why bother with converting
Re:Intel Softcores (Score:5, Informative)
The mistake most people seem to make here is to compare ARM to IA32, when they should be comparing ARM to Intel64/AMD64 (x86_64) since even Atom can run 64-bit code these days.
Going to 64-bit does increase code size a bit, but one of the good things about x86/x86_64 code is that it is VERY dense. This document
http://www.csl.cornell.edu/~vince/papers/iccd09/iccd09_density.pdf [cornell.edu]
suggests that 64-bit x86 code is actually even denser than ARM-thumb code in most cases (which in turn is denser than "normal" ARM code).
High code density means more cache hits, which means better performance and lower power consumption.
x86_64 has the same number of integer registers as ARM: 16. Every single x86_64 CPU supports SSE, which means that floating point operations can be (and are) handled by the 16 SSE registers instead of the old x87 FPU stack.
The fact is that the 64-bit specification for x86 fixed a large number of problems that the 32-bit specification had, making x86_64 a really good architecture without any significant flaws.
Re: (Score:2)
Mod Parent Up!
Thanks for your post, very informative. I never considered the cache/code density aspect. I'm printing that pdf ASAP.
Re: (Score:2)
Re:Intel Softcores (Score:5, Insightful)
x86 is CISC when we know RISC is better. Intel/AMD do some tricks to make the core more RISC, but why not just cut out the middle man? Why bother with converting it at all?
Pull up a pillow and have a seat around ol' Grandpa Short Circuit. This may come as a shock to you.
Some programs still being sold and run on desktop computers today were compiled over ten years ago. Some programs still sold and run in x86 embedded environments were compiled twenty to thirty years ago. That's why x86 is still around.
x86 is still around for the same reason Windows is still around. It still runs binaries that are really, really old. In some cases (many, I expect), the source code for these binaries no longer exists, or the toolchain for building it is bitrotted. That's why x86 is still around.
Imagine some sci-fi horror film where everyone's forgotten how to maintain the vast infrastructure of their civilization, they just don't poke it because they don't want it to break. That's why x86 is still around.
Meanwhile, every year there are more long-lived applications built for the existing platform, with very little hope for being updated for newer platforms and processors; their binaries are likely to be running for another five or ten years.
Amusingly, open-source software has a clear advantage over closed source software in this arena. Several distributions are actively keeping software packages portable across CPU archs, and even portable across OS kernels. (Debian and Gentoo both support BSD foundations as well as Linux)
Re: (Score:2)
I know some businesses which are still dependent on Windows 3.1 programs written in 1993-1994. When machine upgrade time came around, I ended up just P2V-ing their old boxes, sharing the application's document folders with the host OS, and to the end user, the creaky old application functions the same as anything else on Windows 7. To boot, if the creaky application gets corrupted, it only takes either a reloading of a snapshot, or grabbing an archive of the VM disk file to get back in business. (I also
Re: (Score:2)
Apple's reasons for switching had more to do with x86 as a better-invested hardware platform. They wanted all the same hardware capabilities as the burgeoning PC/gamer market, and I guarantee you it wasn't going to be cheap or easy to get the likes of NVidia or ATI to prepare Apple variants of their PC hardware. (You wouldn't just take the latest NVidia card and drop it into an Apple; the video card has a BIOS in x86 machine code, because the PC expects it. Apple hardware was necessarily different, if only
Comment removed (Score:4, Informative)
Re: (Score:2)
Actually yes, "x86" does mean a lot even from an architectural standpoint. For example, it means you have to carry along all the instructions and their related mechanisms concerning 8086 real mode and 80286 protected mode, plus all the horribly clumsy register types.
OK, so your decoder has to be able to handle the micro-ops, and you've got to have the hardware on the chip somewhere to perform the operations. But you don't actually have to have ANY of the same hardware (aside from where there's really only one practical way to do things) because you're going to decompose the x86 instructions into micro-ops anyway.
Now for step 2 (Score:3)
Re:Now for step 2 (Score:5, Funny)
Does doubling its size count as an optimization?
Makes sense. (Score:2)
If you want a software platform to be able to build native code for your hardware, you add code to its software base.
Android Distributed on 802.15.4 (Score:3)
What's more interesting to me is the Android@Home announcements (from Google IO 2011) that Google is implementing its own networking stack (instead of Zigbee) on 802.15.4 [wikipedia.org]. 802.15.4 is a very low power low-level radio network, with cheap embedded microcontrollers that are often ARM. There's probably not enough power in the node's ARM to run Android, but some nodes could have extra power and extra ARM cores that do run Android.
Android's Java means in addition to network RPC, code can be straightforwardly programmed to safely migrate around the network for distributed local execution near the data, whether that's network metadata, sensor data, or just the power of massively parallel distribution. I wonder whether JavaSpaces or something like it (probably a very lite version) will find a fit in making cheap distributed networks represented in computational tuplespace. Distributed around one's home, office/classroom or car, or among one's clothing (daily worn watch/jacket/shoes/belt/keyring), or eventually merging among those personal spaces as they're either near or just related (linked by the Internet).
Intel's x86 architecture still has too much power consumption (and the legacy HW baggage that consumes it) to be a design win for this distributed architecture. By the time x86 is suitably low power, Android will probably have defined the space of these smart spaces, and the smart things in them.
FWIW, there are still few details of A@H, though supposedly there is a reference implementation (network backbone embedded in LED bulbs). Anyone seen any specs, like whether it's really a SNAP/6LoWPAN hybrid, or which specific alternative Google is now pushing? Where to get the devkits (HW and SW)?
Oh Java to the rescue (Score:3)
Not very popular on /., but Android being Java-based will make life very easy for Intel to crack the mobile market. Most apps (sans native ones) will just work. It would have been almost impossible otherwise without some serious virtualization.
Re: (Score:2)
still needed is a port of Dalvik to x86, and a port of the Android runtime libraries to x86.
This was done eons [android-x86.org] ago.
Re: (Score:2)
Intel Software (Score:2)
Intel isn't just a chip maker (it has oodles of software experience)
Has Intel ever done any software other than to boost hardware sales?
Sure, they write lots of software, but they *are* just a chip maker.
dual x86/ARM, 'fat' binaries - cool (Score:2)
Yay.. universal binaries again, like apple had the foresight to do but then later quit. ( no, that was not sarcasm, just disappointment )
Would be an exercise in uselessnes (Score:2)
I bet everybody thinks about the Android Market and all the cool stuff there. Well, don't, unless your Android runs ARM.
I recently got my hands on an Android MIPS phone. Extremely frustrating experience: two of every three downloads from the Market simply refuse to install, because they have some tiny snippet or library compiled to ARM native code. Unless Intel invests heavily in getting app developers to recompile their work for Android/x86, it will be barely usable outside of the base system.
Re: (Score:2)
Re: (Score:2)
How about actually exposing the RISC architecture (Score:2)
Let the Linux kernel loose, Intel.
Re: (Score:2)
First off, everything is microcoded. Power is microcoded. SPARC64 is microcoded. Itanium isn't, but it's an oddball in that regard. Microcode just lets you hide implementation details and potentially simplify internal design.
Internal microcode isn't necessarily fun to play with. Look up the articles on RealWorldTech on the guts of Transmeta's CPU's if you're interested, and that used a significantly
Re: (Score:2)
Help me out here... (Score:2)
Mind you, I have a fairly recent quad core Intel proc in my Windows 7 workstation, and it runs software only available on Windows (which is why I have it) pretty well.
But, rightly or wrongly, I associate Intel with big hot power hungry hardware, that you *must* have if you have apps that need Windows, and ARM with low power battery sipping appliances. Android seems made for the latter, and out of place on the former. I can understand why Intel wants to get a piece of the Android pie -- they are protecting
Google TV/Revue ... (Score:2)
Somebody call Zack Morris (Score:2)
Re: (Score:3)
Google allowed them to mess with the graphics engine? OMFG, we'll end up with tablet devices that run 1990's era graphics tech.
Wow. I hadn't realized Intel's graphics offerings have improved to even that point.
At least it wasn't ATI/AMD, then it would be fast, but crash a lot...
Re: (Score:2)
At least it wasn't ATI/AMD, then it would be fast, but crash a lot...
And then there would be the Android malware mining bitcoins, too!
Re: (Score:3)
Even if they develop their own graphics chip for tablet use, it'll a) probably be enough for what you'd do on a tablet (seriously: on a desktop PC, for anything except gaming, Intel's stuff is good enough), and b) it depends on how well the software's done, anyway (case in point: on many recent Linux distros, and again, unless you're gaming, Intel's chipsets provide a better overall experience than much more capable nVidia or ATI hardware).
Re: (Score:2, Interesting)
Intel's stuff is generally good, but it's expensive and I don't personally think we need to allow a foothold for the same sort of anti-competitive behavior that Intel is known for in the desktop/laptop processor market.
Re:Intel's Software Experience...Graphics (Score:5, Informative)
have you used intel graphics lately (stuff they're shipping in 2011)? it's like having a discrete mobile gpu from 2004.
but this article is not news of any kind. intel has had these plans out in public for years and years, android ndk has support for multiple targets. if they actually started shipping _that_ would be news.
Re: (Score:3)
Re: (Score:2)
Intel's past use of PowerVR chips was at a time when smartphone screens were still pretty low-res, and the expectations of graphical performance on a smartphone were very different from what was expected on a notebook. Cedar Trail (their upcoming Atom platform) is using a Series 5 chip (the 545) rather than a Series 5XT chip (like the PowerVR SGX543MP2 in the iPad 2 and iPhone, or the SGX543MP4 in the PlayStation Vita). The 545 is certainly an improvement over their previous single-core chips, but I doubt it w
Re: (Score:2)
Animated GIFs do work (well, hit and miss) in the browser... for some reason sometimes they will play on failblog.org, other times they are just static images...
Re: (Score:2)
Give em a chance. Maybe they can add animated gif support to android...
Dear God no
Re: (Score:2)