Microsoft Hardware

Next Generation of Windows To Run On ARM Chip

Hugh Pickens writes "Sharon Chan reports in the Seattle Times that at the Consumer Electronics Show in Las Vegas, Microsoft showed the next generation of Windows running natively on an ARM chip design, commonly used in the mobile computing world, indicating a schism with Intel, the chip maker Microsoft has worked closely with throughout the history of Windows and the PC. The Microsoft demonstration showed Word, PowerPoint and high-definition video running on prototype ARM chipsets made by Texas Instruments and Nvidia. 'It's part of our plans for the next generation of Windows,' says Steve Sinofsky, president of the Windows division. 'That's all under the hood.' According to a report in the WSJ, the long-running alliance between Microsoft and Intel is coming to a day of reckoning as sales of tablets, smartphones and televisions using rival technologies take off, pushing the two technology giants to go their separate ways. The rise of smartphones and, more recently, tablets has strained the relationship, as Intel's chips haven't been able to match the low power consumption of chips based on designs licensed from ARM. Intel has also thumbed its nose at Microsoft by collaborating with Microsoft archrival Google on Chrome OS, Google's operating system that will compete with Windows in the netbook market. 'I think it's a deep fracture,' says venture capitalist Jean-Louis Gassée of relations between Microsoft and Intel."
  • by Anonymous Coward on Thursday January 06, 2011 @09:26AM (#34775296)

    Why wouldn't you? You could always compile ARM Windows CE/Mobile code on x86, and you could always compile IA64 Windows code on x86 as well. The compiler only needs to run on x86, not the emitted binary. You'd need an emulation layer or virtual machine to run/debug the binary locally, though. Visual Studio has shipped with virtual machine images for Windows Mobile devices emulating ARM machines specifically for this purpose for years.

    I really don't know why people are shocked by all of this. Windows isn't a non-portable OS. It has run on various other platforms in the past, including MIPS, Alpha, SPARC, PowerPC and even ARM (Windows XP Embedded). Microsoft just doesn't port to platforms for the sake of doing so; they follow the markets for those devices. IA32 and x86-64 more or less emerged as the only marketable commodity platforms for servers and workstations, and ARM as the platform for portable devices, and Microsoft has followed both with appropriate offerings. The blurring of the line between a portable workstation and a portable device in the realm of "tablets" or "slates" is a more recent phenomenon, and Microsoft will follow it there as the market allows.

    As for the rest of the x86 applications, sure, they aren't going to run, but Android and iOS have both demonstrated that there is probably little need for them. A slimmer version of Windows with a fully functional Office suite could do very well in that market, especially with the server and desktop markets as leverage. That could certainly be considered anticompetitive behavior, though, so things might get interesting.

  • by Savage-Rabbit ( 308260 ) on Thursday January 06, 2011 @09:31AM (#34775336)

    I've been wondering the same thing. What about SDKs? Will there be a separate version of Visual Studio strictly for ARM? I know Visual Studio is mostly targeted towards .NET, but for native apps, will you be able to compile ARM code on x86?

    Visual Studio itself is a userland app and as such should run on Windows for ARM with few problems. I'm not sure what MSVS is written in. If it's a native app, there will be an ARM version, much as there were PPC and x86 versions of Xcode when Apple switched to x86. If MSVS is a .NET app, you should get a build-once-run-anywhere app like Eclipse, except that Eclipse is truly cross-platform while .NET apps are truly cross-platform only across Windows flavors. If MS does a proper job of porting it, the ARM toolkit for Windows should be every bit as powerful as the Windows x86 toolkit. Win32 applications, on the other hand, might be a problem, but then again Apple did a pretty decent job of running PPC applications on x86 machines with Rosetta. I ran pretty heavy PPC applications under Rosetta with no major problems, so I don't see why Microsoft couldn't do something in a similar vein.

  • Re:Nvidia cpu (Score:5, Informative)

    by ArcherB ( 796902 ) on Thursday January 06, 2011 @10:13AM (#34775686) Journal

    Last one to market?

    Can you name any other operating system that works on both x86 and ARM procs out of the box, with no modification or intervention necessary on the user end?

    Linux. Well, that's the only one I can think of.

  • Re:Nvidia cpu (Score:5, Informative)

    by TheRaven64 ( 641858 ) on Thursday January 06, 2011 @10:22AM (#34775780) Journal

    OS X, Linux, FreeBSD, and NetBSD. Not sure about OpenBSD - they did have an unmaintained port to some older ARM chips, which was discontinued, but I think they've got a newer one.

    All of these have both ARM and x86 versions that work out of the box. Debian, for example, has complete software repositories for ARM so you can typically install the same software on ARM as on x86, and you have exactly the same user environment on both (well, except that the Linux kernel sucks at providing portable abstractions, so things like power management are very different on both). Apple supports OS X on both platforms, although their ARM port ships with UIKit instead of AppKit and doesn't include autozone, Carbon, Rosetta, or any of the legacy APIs.

    Actually, now that I think about it, Windows CE shipped an x86 version (as well as ARM, PowerPC, and MIPS) for a while. Not sure if anyone used it, but it worked out of the box, at least as much as it did on any other architecture...

  • by HighOrbit ( 631451 ) on Thursday January 06, 2011 @10:30AM (#34775858)
    I think calling this a swipe at Intel is overblown. Intel has historically sold ARM-based processors (see the XScale at http://en.wikipedia.org/wiki/XScale [wikipedia.org]), although they sold off most of their ARM business to a company called Marvell. However, Intel continued to fab for Marvell until Marvell was able to build or rent its own fab. I don't know the current situation, but there is a good chance that Intel still has an ARM production line running under contract for Marvell. At the bottom of the wiki article it says, "Intel still holds an ARM license even after the sale of XScale." So they can move right back into the business if they see the market justification for it.
  • by 0123456 ( 636235 ) on Thursday January 06, 2011 @10:35AM (#34775906)

    Emulating non-x86 on x86 is hard because x86 has so few general-purpose registers - but emulating x86 on something else is relatively easy.

    Have you actually written an x86 emulator on 'something else'? I have, and 'relatively easy' is not a phrase I would use... at least, not if you want to get any decent performance out of it.

    Admittedly, we had to emulate the entire PC hardware so it could run old DOS apps and not just Windows user-land; only having to handle user-land would make life somewhat easier.

  • by TheRaven64 ( 641858 ) on Thursday January 06, 2011 @10:40AM (#34775976) Journal

    Not true. Different languages expose different abstract models to the programmer. A Smalltalk-family language like Java typically exposes a model where memory is only allocated in objects and instance variables in objects are only accessed by their name. In contrast, most Algol-family languages like C expose a lower-level model where memory can be allocated as untyped buffers and then cast to the required type.

    The differences between CPU architectures are typically things like alignment requirements (can it load and store values that aren't word-aligned?), endianness (the order in which bytes are stored), and so on. In C, there are a few things that you can do on x86 that will cause problems on other architectures. One is silently increasing alignment requirements:

    char *foo = malloc(12);
    int *bar = (int*)(foo+1);
    // in another function
    int wibble = bar[1];

    A compiler will typically make this work for you if it sees the assignment to bar, but if it doesn't then it will assume that bar is aligned on a word boundary. If the target architecture doesn't support unaligned loads, then the last line will break things (you may get a trap, or you may just get the wrong result, depending on the architecture). Modern ARM chips will trap to the kernel for this kind of problem, so the kernel can emulate the load, but this is a couple of orders of magnitude slower. There is no way of expressing this code in a language like Java, so the problem doesn't arise.
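
    As a hedged aside (mine, not the original poster's): the usual portable fix is to copy the bytes out with memcpy instead of casting the pointer, so the compiler is free to emit whatever access sequence the target architecture needs. A minimal sketch:

    #include <string.h>

    /* Read an int from an arbitrary (possibly unaligned) offset in a buffer. */
    int read_int_at(const char *buf, size_t offset)
    {
        int value;
        memcpy(&value, buf + offset, sizeof value); /* safe even when buf + offset isn't word-aligned */
        return value;
    }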

    Another issue comes from endian assumptions. Consider this code:

    int64_t foo;
    // Set foo to something
    int32_t bar = *(int32_t*)&foo;

    This will correctly give you the low 32 bits of foo in bar on a little-endian platform. On a big-endian platform, it will give you the high 32 bits. Most of the time you wouldn't do something this simple, but you might when reading data from a stream of some kind. It's bad practice, but that doesn't mean it's not done. Fortunately, ARM is little endian too, so this isn't an issue porting from x86 to ARM - it caused a lot of problems porting from x86 to PowerPC and SPARC though, especially in code that dumped binary data to files, read it back, and found it in the wrong byte order.
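
    A hedged sketch of the endian-safe alternative (not from the original comment): take the low half arithmetically, by value, rather than by reinterpreting the bytes through a pointer cast; the result is then the same on little- and big-endian targets.

    #include <stdint.h>

    /* Low 32 bits of a 64-bit value, independent of byte order. */
    uint32_t low_32_bits(int64_t foo)
    {
        return (uint32_t)((uint64_t)foo & 0xFFFFFFFFu);
    }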

    And, of course, there are size issues. In C, the different primitive types all have architecture-dependent sizes. Some people make assumptions about them. For example, it's usually safe to assume that long is big enough to store a void*. Unfortunately, it's not true in win64 (although it is in every other platform I've seen), so code that makes this assumption breaks in 64-bit Windows versions (Itanium and x86-64).
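
    As a sketch of my own (not in the original comment), the portable way to round-trip a pointer through an integer is uintptr_t from <stdint.h>, which is guaranteed wide enough on both LP64 Unix and LLP64 Windows, whereas long silently truncates on Win64:

    #include <stdint.h>

    void *roundtrip(void *p)
    {
        /* long bad = (long)p;  -- would lose the high 32 bits on Win64 (LLP64) */
        uintptr_t ok = (uintptr_t)p; /* wide enough for a pointer on LP64 and LLP64 alike */
        return (void *)ok;
    }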

  • Re:Another one (Score:5, Informative)

    by TheRaven64 ( 641858 ) on Thursday January 06, 2011 @11:11AM (#34776386) Journal

    RISC was better 20 years ago because CISC chips were using close to 50% of the die area for complex decoders. RISC chips could use 5%, giving them vastly more space to cram in ALUs and so on. Then the transistor budget increased but the decoder complexity stayed pretty constant: 50% became 25%, then 10%, and now the extra space on RISC chips is pretty much irrelevant, and the instruction-cache space saved by denser CISC code offsets it.

    Then people started caring about power consumption, and it turned out that the decoder (or, in the case of x86, the micro-op decoder, which is basically a RISC decoder after the CISC decoder) was about the only bit of the chip that couldn't be turned off while the CPU was doing anything. You can power down the FPU, the vector unit, and any of the other execution units that aren't relevant to the in-flight instructions, but you can't power down the decoder[1]. ARM does very well here. It achieves good instruction density with the 16-bit Thumb / Thumb2 instruction sets, but it can power down the ARM decoder when running Thumb code or power down the Thumb decoder when running ARM code, so the extra decoder complexity doesn't come with an increased power requirement.

    [1] Xeons actually do power down the decoder when running cached micro-ops, but they need to keep the micro-op decoder powered, and this has a similar power requirement to a RISC decoder.

  • Re:Nvidia cpu (Score:5, Informative)

    by TheRaven64 ( 641858 ) on Thursday January 06, 2011 @11:17AM (#34776488) Journal

    It's also worth remembering that the x86 version of NT is itself a port. The "NT" actually comes from Intel's N-Ten architecture, which eventually became the i860. The next target was MIPS, and then x86. It was intentionally not developed on x86, to prevent architecture-specific assumptions from creeping into the codebase.

    The Alpha version of Windows NT came with a thing called FX!32, which ran emulated x86 apps. It was pretty horrible, because the Windows codebase is full of endian assumptions (the Alpha version did lots of byte-order swapping in the background) and emulator technology was not very advanced back then, so the emulator needed a fast Alpha to run at a decent speed. It also ran in a weird pretending-to-be-32-bit mode, although they fixed that when they eventually did the win64 version for Itanium.

  • by bmajik ( 96670 ) <matt@mattevans.org> on Thursday January 06, 2011 @12:41PM (#34777832) Homepage Journal

    Visual Studio is a mixed-mode app. The basic shell and environment are native code, but many managed components are loaded into it. Prior to VS2010 the code-editing experience was native, but I believe it is now WPF-based and as such is also managed.

    A tool for developers is, as you might expect, highly componentized and extensible, and plugins can be written in either native or managed code.

    VS has had cross-compiling features for at least 10 years, and that's the number I picked because that's how long I've looked at it. VC 6.0 had the Windows CE toolkit, used for authoring Windows CE apps for all the processors CE supported. Modern VS installs ask you whether you want to install the Itanium cross-compilation tools, and when you install the Windows Phone 7 SDK you get yet another cross-compiler plus a binary emulation environment.

    Cross-compiling, multi-targeting, and so on are nothing new for MS. They've been supporting more architectures in more products than Apple, Google, or anyone else for years.
