Intel's Plans For X86 Android, Smartphones, and Tablets

MrSeb writes "'Last week, Intel announced that it had added x86 optimizations to Android 4.0, Ice Cream Sandwich, but the text of the announcement and included quotes were vague and a bit contradictory given the open nature of Android development. After discussing the topic with Intel we've compiled a laundry list of the company's work in Gingerbread and ICS thus far, and offered a few of our own thoughts on what to expect in 2012 as far as x86-powered smartphones and tablets are concerned.' The main points: Intel isn't just a chip maker (it has oodles of software experience); Android's Native Development Kit now includes support for x86 and MMX/SSE instruction sets and can be used to compile dual x86/ARM, 'fat' binaries; and development tools like Vtune and Intel Graphics Performance Analyzer are on their way to Android."
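For the curious, the "fat binary" part on the NDK side mostly comes down to listing more than one ABI in a project's Application.mk. A minimal sketch follows (the project details are hypothetical, and it assumes an NDK release that ships the x86 toolchain):

    # Application.mk -- hypothetical NDK project
    # Build the same native module for both ARM and x86. ndk-build packages
    # one .so per ABI into the APK; the device installs the one that matches.
    APP_ABI := armeabi-v7a x86
    APP_PLATFORM := android-14    # Ice Cream Sandwich API level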

  • x86 (Score:5, Insightful)

    by Unclenefeesa ( 640611 ) on Thursday November 17, 2011 @09:14AM (#38084528)

    Since most x86 architecture and related hardware is getting smaller and most smartphones are getting bigger, they are bound to meet somewhere.
    hmm, I guess it will be called a tablet or an i(ntel)Pad. ehm ehm

    • Can you say "Windows 8" for phone?

      • Re:x86 (Score:5, Funny)

        by chill ( 34294 ) on Thursday November 17, 2011 @10:09AM (#38084952) Journal

        Not while keeping a straight face, no.

        • Didn't Ballmer just threaten everybody with something along the lines of "We'll always live in a Windows era"?

          Or was it more of a warning?

          • No, it was just Ballmer admitting that Microsoft is a WINDOWS company. That is the one product they have; everything else is built around WINDOWS, for WINDOWS. Android and iOS must scare the crap out of them, because we're fast approaching the era where Windows only runs on corporate computers and everyone else is running Android or iOS.

      • I can say "Meh".

        Is that close enough?

      • by Locutus ( 9039 )
        if you say Win 8 really fast it sounds like Wait. Just saying.

        LoB
    • Re:x86 (Score:4, Interesting)

      by Anonymous Coward on Thursday November 17, 2011 @09:47AM (#38084748)

      Given the choice, everyone who actually has to code for those CPUs (e.g. compiler makers) prefers ARM over x86, without a doubt. Simply because of how shit x86 is.
      It's the Windows ME of machine code. It started out in the DOS days and kept the cruft all the way to today, while piling more and more, bigger and bigger stuff on top. The result is an upside-down pyramid, held in balance by a billion wooden sticks.
      And I know that even Intel itself couldn't stand it anymore. That's why they implemented that microcode solution with a RISC processor on the inside.
      If only they would give us direct access to that core, but leave the microcode in there for one or two processor generations for legacy reasons.
      Then nobody would willingly keep doing x86, and before those two generations were over, it would be locked away and forgotten.

      I, for one, plan a 265-core ARM CPU as my next desktop system. (Yes, ARM cores are slower per clock cycle. But they are *a lot* more efficient and *a lot* cheaper too. [No, Atom does not count, unless you add that northbridge that's so big and gets so hot that, looking at the mainboard, 10/10 people think it's the actual CPU. Which is closer to the truth than Intel ever wants to admit.])

      • You forget too easily that many people depend on this legacy code to run software worth thousands or even millions of dollars. Just because the desktop in your mom's basement no longer needs it doesn't mean the rest of mankind doesn't either.
        • You forget too easily that many people depend on this legacy code to run software worth thousands or even millions of dollars.

          Then keep your legacy code and run it in an emulator on an ARM CPU. The legacy code was probably written so long ago that it'd run as fast in a JIT emulator today as it did natively then.

          • Maybe yes, maybe not. As an example, it is difficult to get my copy of "M.A.X." (an old DOS game) working in an emulator; most of them crash or hit some spurious error. A bigger, more complex, critical piece of software running on an emulator? I do not like the idea.
          • But you're kind of missing the point, though. If ARM really is that much better than x86 (I don't really know, as I don't program at that level), then with the amount of momentum it is catching in mobile devices, ARM overtaking x86 is inevitable. I don't know a lot about x86_64-whatever vs. ARM, but I do know that my Xoom outperforms my netbook, and it does it while generating no heat and with three times the battery life from a smaller battery. Look out, Intel.
            • The problem is not who is better... The problem is trying to port your "legacy", big and very expensive x86 code because some "genius" decided to simply drop legacy support in the hardware. Especially when you have a tight deadline to meet and the system cannot stop.
      • by mlts ( 1038732 ) *

        What I would like to see is a CPU architecture that can have asymmetric cores:

        When the machine is idle, one low-power core handles the OS idle functions while another handles the IP stack, another core handles I/O, and another handles the hypervisor aspect.

        When the machine is running database stuff, first cores that are made for integer operations get used, then the FPUs and GPUs come in.

        Flip to a game, and the cores that mainly are used as GPUs come into play.

        Fire up a modeling task, and the FPU-heavy cores take over.

        • Mod Parent Up!

          Very interesting idea. We are going to have to add in more cores from now on to get more performance, might as well start specializing them for certain tasks. Your idea about x86 hardware emulation is especially interesting.

        • Sorry to double reply, but now that I think about it, what you are describing sounds a hell of a lot like a mainframe on a chip. IBM mainframes have Multi-chip Modules [wikipedia.org] that are a lot like what you are describing.

        • Re:x86 (Score:4, Informative)

          by tlhIngan ( 30335 ) <slashdot.worf@net> on Thursday November 17, 2011 @12:50PM (#38087210)

          What i would like to see is a CPU architecture that can have asymmetric cores:

          Similar to your design, the Tegra 3 ARM SoC does that. It has a quad-core A9 running at 1.5GHz or more, but it also has a "slow" core running at 600MHz or so. When things are idling, the slow core takes over and does the job while the hefty quadcores are powered off, saving tons of power.

          Marvell I think also has a similar idea for their SoCs. And ARM's A15 design is supposed to incorporate that as well.

        • The only problem with splitting everything out like that is that many apps have that one thread that can't be broken down any further and will saturate its core. A loop can only be run so fast no matter how many cores you have, so any one program is always going to have an absolute performance bottleneck. This isn't to say there's no merit to multi-core (of course, the more the merrier), but it isn't an absolute panacea.
          • by mlts ( 1038732 ) *

            You are exactly right. This is why there should be cores that are high-speed and high-power for the tasks that cannot be broken down and distributed. This way, the low-energy cores take care of most tasks, while a task that cannot be distributed can be handed to the bigger cores, which consume more energy.

            The more different types of cores available, the more flexible the architecture would be, and the better energy savings (in theory) can result.

        • Well, you described more or less a SNES (a handful of specialized chips working together). The problem is that the software would have to be aware of this way of working, or the CPU would have to be able to figure out by itself what the user is trying to do.
          • by mlts ( 1038732 ) *

            Very true. I wonder if this can be done at the hypervisor level. If done right, the hypervisor can present the OS a dynamic set of CPUs, depending on what processes are using what resources behind the scenes, as well as rebind a task to a different core (say the task was using a lot of FPU, then swapped to needing mainly integer manipulation).

            It would work on the OS level too, but would take a revised scheduler to take advantage of it.

      • I would not be so quick to say that. While I am no x86 fanboy, there are a number of things that are "nice" about the model from the point of view of most software developers. The instruction set is basically a compression system (much like Thumb-2 is for ARM). The very simplistic (to reason about) memory model (which is rather complex to implement in hardware) makes multi-processor work significantly easier for most people. Most people who think they know how to write good multi-processor, multi-threaded code really don't.
  • I thought x86 is a power hog compared to ARM. It seems like that is a serious consideration for mobile devices to me. I'll be interested to see where this goes. In the meantime, x86 chips are going to have to get a lot cheaper to compete with ARM's prices.

    • by TheRaven64 ( 641858 ) on Thursday November 17, 2011 @10:29AM (#38085200) Journal

      It is. The difference in power consumption between an x86 core and an ARM core is around an order of magnitude at the moment for the same performance. But the difference between an x86 core and the display is another order of magnitude, so for devices that you mainly use with the screen on, there isn't much difference between x86 and ARM in terms of overall power consumption. The difference in battery life between an ARM core at 200mW and an Intel core at 2W is very small when the display is using 10-20W. There are a few display technologies that are supposed to be hitting the market Real Soon Now that ought to make the difference between x86 and ARM a lot more apparent.
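
      To put those figures (which are the parent's estimates, not measurements) into quick back-of-the-envelope form:

          # Rough sketch using the numbers above: 200 mW ARM core vs 2 W x86 core,
          # with a display drawing 10 W. The per-core gap is 10x, but the
          # whole-device gap while the screen is on is small.
          display_w, arm_core_w, x86_core_w = 10.0, 0.2, 2.0
          arm_total = display_w + arm_core_w     # 10.2 W
          x86_total = display_w + x86_core_w     # 12.0 W
          print(x86_total / arm_total)           # ~1.18, i.e. ~18% more, not 10x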

      • Re:power consumption (Score:5, Interesting)

        by craftycoder ( 1851452 ) on Thursday November 17, 2011 @10:57AM (#38085550)

        I'd mod up your post, but I want to reply instead. Are you suggesting that the display uses 50-100 times the power of an ARM chip (and therefore 5-10 times an x86)? If that is true, that is very interesting. I did not realize the display was such an outlier in the power consumption department...

        • by tepples ( 727027 )

          Are you suggesting that the display uses 50-100 times the power of an ARM chip (and therefore 5-10 times an x86)?

          Yes, and this is why an e-ink Kindle reader lasts so much longer on a charge than, say, a Kindle Fire tablet.

        • by Shatrat ( 855151 )
          It definitely is, but you also have to consider that it is usually OFF in the case of a phone.
          An x86 Android tablet would make sense, since you could just turn it completely off when not in use, but an x86 phone would have a standby time shorter than your average summer blockbuster.
      • Re:power consumption (Score:5, Informative)

        by Mr Z ( 6791 ) on Thursday November 17, 2011 @11:27AM (#38085956) Homepage Journal

        Is the display really that much of a hog on a cell phone? Those numbers sound like laptop numbers, but I thought we were talking cell phones.

        My phone has a battery that holds around 1300 mAh at 3.7V. That means I can draw about 4.8W for one hour. If my phone's display really sucked down even 10W, then I wouldn't be able to have the display on for more than about 28 minutes total, which doesn't match my experience at all. I regularly browse the web from my phone for a half hour at a time without making much of a dent in the battery.
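
        As a sanity check on that arithmetic (the battery figures are the parent's own; the 10W display is the hypothetical being tested):

            # 1300 mAh at 3.7 V, drained by a hypothetical 10 W display
            battery_wh = 1.3 * 3.7               # ~4.8 Wh of stored energy
            display_w = 10.0
            print(battery_wh / display_w * 60)   # ~29 minutes of screen-on time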

        A quick scan through this paper [usenix.org] suggests backlight power for the phone they analyzed tops out at 414mW, and the LCD display power ranges from 33.1mW to 74.2mW. If you drop the brightness back just a few notches, the total display power is around a quarter Watt or so, which sounds far more reasonable.

        I don't think Intel is standing still on power consumption. Their desktop CPUs are hogs, sure, but they can bring a lot of engineers to bear optimizing Atom-derived products. (We might get an early read from Knights Corner, actually, although I expect it to still be on the "hot" side. I'm waiting to hear more about it.) Also, ARM's latest high-end offerings (including the recently announced A15) aren't exactly as power-frugal as some of their past devices. In the next couple of years, I think the scatter plot of power vs. performance for ARM and x86 variants will show a definite overlap, with some x86s pulling less power than some ARMs.

        • by tepples ( 727027 )

          Is the display really that much of a hog on a cell phone?

          Tablet screens draw four to ten times as much juice as smartphone screens because they have four to ten times the area.

          • by Mr Z ( 6791 )
            Yeah, I can see that. I guess "mobile device" doesn't just mean "mobile phone" these days.
        • by tycoex ( 1832784 )

          I have my phone screen on for about 2-3 hours per day due to bus rides. According to the Android battery tracking screen, my display uses up around 60-70% of my battery for the day, and this is on a Nexus S with the AMOLED screen that is supposed to use less battery than an LCD screen because it doesn't have to light up the black pixels.

          The screen really is huge when it comes to battery consumption.

          • by Mr Z ( 6791 )

            What are you doing on it during that time? The processor, baseband and RF circuits also suck up a fair amount of juice. That PDF I linked above shows GSM consuming around 600mW during GPRS use and WiFi consuming around 700mW when in use on the phone they analyzed. I'd expect other phones to be similar. 3G is supposedly much worse at draining batteries. Dunno about CDMA/LTE, but I would imagine they'd also be in the half-watt to one-watt range, to venture a first-order guess.

            If you're just playing games, then it's just the display and the CPU/GPU doing the work.

            • by karnal ( 22275 )

              But you're missing the key thing here: just because you turned on WiFi or GPRS doesn't mean they're sucking down a constant 600 or 700mW apiece. Chances are they're very aggressive about power savings, so when you're sending data (on either technology) the power use ramps up for a short time, but receiving data I'm sure uses a lot less power.

              Given the example of loading something like google.com - the phone would have to send some data to request the page, but overall that amount of data is tiny.

            • by tycoex ( 1832784 )

              Well, Android 2.4 actually has a battery meter that tells you exactly what is using up the battery. Most of the time I'm just reading various articles offline, but that doesn't really matter because (unless I'm mistaken) the battery monitor on Android separates out the battery usage by WiFi, cell radios, etc.

              The 60-70% I'm talking about is specifically from the "display" section in the battery monitor, which I assume only includes the power directly used by the screen.

              My second biggest battery offender is usual

    • by zealot ( 14660 )

      Despite what many other commenters will say, no, it isn't a power hog compared to ARM. Or at least it doesn't have to be. Intel/AMD/VIA don't yet offer processors that are as low-power as ARM's (although some are pretty power/performance efficient depending on your workload), but they will within the next year for smartphones and tablets. On modern manufacturing processes the "x86 tax" becomes almost non-existent.

  • Just give me a debian build for my phone including dialer, messaging, etc..

    Then I can play REAL games on my phone.. Or as real as they get in Linux!
    • by znerk ( 1162519 )

      Just give me a debian build for my phone including dialer, messaging, etc..

      Then I can play REAL games on my phone.. Or as real as they get in Linux!

      Games aren't real on Linux? Yeah, PenguSpy [penguspy.com] and Linux Gamers [linux-gamers.net] don't have real games, really written for real Linux. You know, like Quake 4, Doom 3, Vendetta, and X3 - those aren't real games... oh, wait.

      And nevermind that wine [winehq.org] actually works really well, nowadays, running many top games "flawlessly, out of the box", and tons more "run flawlessly with some special configuration" [winehq.org].

      • You know, like Quake 4, Doom 3, Vendetta, and X3

        Vendetta [wikipedia.org] is from 1991. It's like pointing out that Mega Man X3 runs in a Super NES emulator: interesting, and probably fun for a while, but not what grandparent had in mind. As for Quake and Doom, can you recommend things other than first-person shooters that commonly get ported to Linux, especially well-praised E or E10+ rated game series?

        And nevermind that wine actually works really well

        Only on x86 phones. Most existing smartphones are ARM; let me know when Atom phones start to come out. And even if you stick to games from the Pentium 4 era, knowing tha

    • by Aryden ( 1872756 )
      I'm running Ubuntu stacked on AmeriCandroid on my HD2
  • Intel Softcores (Score:5, Interesting)

    by inhuman_4 ( 1294516 ) on Thursday November 17, 2011 @09:37AM (#38084686)

    While it is always nice to hear about companies contributing to open source, I don't see there being a big demand for x86 Android. Who would use it? It's not low-power enough for most tablets/phones. And while the ability to run existing x86 apps is nice, they are mostly tied to Windows, which is also not likely to see much traction in the mobile space. So what is the point?

    What I would like to see is Intel creating a SoC and softcore suite. Intel has some big advantages that they could use to seriously compete:
    1) Lots of experience in chip design. I don't see why they can't create an ARM-Core competitor.
    2) They can start from scratch. Unlike ARM, there is no need for legacy support or backward compatibility.
    3) They have in-house designers for everything from graphics to wired and wireless chips. I don't see why they cannot design from this a whole suite of modules that work on their SoC platform.
    4) They have (to my knowledge) the best chip fab plants in the world by a sizable margin. Die shrinks offer a great way to reduce power consumption.
    5) They have produced great x86 compilers for years, so producing a new compiler for a new chip shouldn't be too difficult since they are already experienced with x86 and Itanium.
    6) They have shown that they already know how to support Android.
    7) They have the cash and business partners to make it work.

    I'm not saying they are guaranteed to make big bucks. Fighting an entrenched ARM with wide industry support will be hugely difficult. But if any company can do it, it's Intel. Of course this means they would have to get over the Itanic debacle and stop trying to shove x86 at every problem.

    • Intel has loads of experience in getting the creaking x86 architecture to work in the modern world. ARM, however, is much, much newer and has far fewer layers of cruft. Intel has not shown the ability to throw all of that away and start from scratch (which is what we really need).

    • Re:Intel Softcores (Score:5, Informative)

      by TheRaven64 ( 641858 ) on Thursday November 17, 2011 @10:36AM (#38085276) Journal

      What I would like to see is Intel creating a SoC and softcore suite

      They did that, what, 18 months ago now? Total number of people who licensed it: zero. Why? Because x86 absolutely sucks for low power.

      Lots of experience in chip design. I don't see why they can't create an ARM-Core competitor

      Ah yes, all those massive commercial success stories that Intel has had when it tried to produce a non-x86 chip, like the iAPX, the i860, the Itanium. The closest they came was XScale, and they sold the team responsible for that to Marvell.

      They can start from scratch. Unlike ARM, there is no need for legacy support or backward compatibility.

      Intel has two advantages over their competition: superior process technology and x86 compatibility. Your plan is that they should give up one of those?

      They have produced great x86 compilers for years, so producing a new compiler for a new chip shouldn't be too difficult since they are already experienced with x86 and Itanium

      Hahahaha! Spoken like someone who has never been involved with compiler design or spoken to any compiler writers. Tuning a compiler for a new architecture is not a trivial problem.

      • >> They have produced great x86 compilers for years, so producing a new compiler for a new chip shouldn't be too difficult since they are already experienced with x86 and Itanium
        > Hahahaha! Spoken like someone who has never been involved with compiler design or spoken to any compiler writers. Tuning a compiler for a new architecture is not a trivial problem.

        Having worked on a C/C++ compiler for the PS3 & PSP, I concur! Compiler writing is non-trivial. Sure, you can get 80% of the way there, but the last 20% is where all the hard work is.

        • I should also add that Intel has a history of designing chips that are a complete bitch to write compilers for, and of having hardware and software teams that never talk to each other. The most famous example was the iAPX, which was designed for object-oriented programming but without talking to any compiler writers, so it ended up requiring a 200-instruction sequence to do one of the most common operations in an object-oriented language. The Itanium is legendary for being impossible to target: the hardware assumed the compiler could extract parallelism that, in practice, compiler writers never really managed to find.

    • by grumling ( 94709 )

      The ultimate nerd tablet would be nice... Triple boot Linux, Android or Windows depending on what you want to run.

      • by pmontra ( 738736 )
        It would run the three of them at the same time. Android for answering calls and managing the small screen, Linux for the large screen you attach over HDMI, and Windows... Mmm, in my case it would be for testing sites with IE but not much else. For many people it would be for gaming.
  • by davidbrit2 ( 775091 ) on Thursday November 17, 2011 @09:41AM (#38084714) Homepage
    Add some x86 optimizations to the battery.
  • If you want a software platform to be able to build native code for your hardware, you add code to their software base.

  • by Doc Ruby ( 173196 ) on Thursday November 17, 2011 @10:18AM (#38085066) Homepage Journal

    What's more interesting to me is the Android@Home announcements (from Google IO 2011) that Google is implementing its own networking stack (instead of Zigbee) on 802.15.4 [wikipedia.org]. 802.15.4 is a very low power low-level radio network, with cheap embedded microcontrollers that are often ARM. There's probably not enough power in the node's ARM to run Android, but some nodes could have extra power and extra ARM cores that do run Android.

    Android's Java means that, in addition to network RPC, code can be straightforwardly programmed to safely migrate around the network for distributed local execution near the data, whether that's network metadata, sensor data, or just the power of massively parallel distribution. I wonder whether JavaSpaces or something like it (probably a very lite version) will find a fit in making cheap distributed networks represented in a computational tuplespace. Distributed around one's home, office/classroom or car, or among one's clothing (daily-worn watch/jacket/shoes/belt/keyring), and eventually merging among those personal spaces as they're either nearby or just related (linked by the Internet).

    Intel's x86 architecture still has too much power consumption (and the legacy HW baggage that consumes it) to be a design win for this distributed architecture. By the time x86 is suitably low power, Android will probably have defined the space of these smart spaces, and the smart things in them.

    FWIW, there are still few details on A@H, though supposedly there is a reference implementation (a network backbone embedded in LED bulbs). Anyone seen any specs, like whether it's really a SNAP/6LoWPAN hybrid, or which specific alternative Google is now pushing? Where to get the devkits (HW and SW)?

  • by thammoud ( 193905 ) on Thursday November 17, 2011 @10:22AM (#38085120)

    Not very popular on /., but Android being Java-based will make life very easy for Intel to crack the mobile market. Most of the apps (sans native ones) will just work. It would have been almost impossible otherwise, short of some serious virtualization.

  • Intel isn't just a chip maker (it has oodles of software experience)

    Has Intel ever done any software other than to boost hardware sales?

    Sure, they write lots of software, but they *are* just a chip maker.

  • Yay... universal binaries again, like Apple had the foresight to do but then later quit. (No, that was not sarcasm, just disappointment.)

  • I bet everybody thinks about the Android Market and all the cool stuff there. Well, don't do that unless your Android runs on ARM.

    I recently got my hands on an Android MIPS phone. Extremely frustrating experience -- two of every three downloads from the Market simply refuse to install, because they have some tiny snippet or library compiled to ARM native code. Unless Intel invests heavily in getting app developers to recompile their work for Android/x86, it will be barely usable outside of the base system.
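
    For anyone who wants to check a specific app, the native bits are just shared libraries bundled inside the APK under lib/<abi>/, so you can see which CPU families it actually ships code for. A quick sketch (the APK file name is made up):

        import zipfile

        # List the ABIs for which an APK bundles native code. If the only
        # entries are armeabi/armeabi-v7a, a MIPS or x86 phone has nothing
        # to load unless the app also works without its native library.
        apk = zipfile.ZipFile("SomeMarketApp.apk")   # hypothetical file
        abis = {n.split("/")[1] for n in apk.namelist()
                if n.startswith("lib/") and n.count("/") >= 2}
        print(abis or "no native code -- pure Dalvik, runs on any ABI")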

    • Sadly this is true. I have the same experience when I try to install many Market apps in my Android virtual machine. Some work, many don't. To all current and potential Android devs: do it in Java if at all possible.
    • Java is a dead end if you need performance, especially on a resource-limited device like a smartphone. The way out is to use native code, just as people used assembler in the critical parts of old DOS games.
  • It's not like Android is going to run on top of Windows.

    Let the Linux kernel loose, Intel.
    • Ha, one of those people that thinks there's a clean, perfect RISC architecture inside Intel CPU cores.

      First off, everything is microcoded. POWER is microcoded. SPARC64 is microcoded. Itanium isn't, but it's an oddball in that regard. Microcode just lets you hide implementation details and potentially simplify internal design.

      Internal microcode isn't necessarily fun to play with. Look up the articles on RealWorldTech on the guts of Transmeta's CPUs if you're interested; that design took a significantly more software-driven approach.
  • Mind you, I have a fairly recent quad core Intel proc in my Windows 7 workstation, and it runs software only available on Windows (which is why I have it) pretty well.

    But, rightly or wrongly, I associate Intel with big, hot, power-hungry hardware that you *must* have if you have apps that need Windows, and ARM with low-power, battery-sipping appliances. Android seems made for the latter, and out of place on the former. I can understand why Intel wants to get a piece of the Android pie -- they are protecting their turf.

  • Well that is based on an Intel platform, so they do have some experience
  • Cause we're gonna need a bigger battery.
