
Microsoft Is Designing Its Own Chips for Servers, Surface PCs (bloomberg.com)

Microsoft is working on in-house processor designs for use in server computers that run the company's cloud services, adding to an industrywide effort to reduce reliance on Intel's chip technology, Bloomberg News reports. From the report: The world's largest software maker is using Arm designs to produce a processor that will be used in its data centers, according to people familiar with the plans. It's also exploring using another chip that would power some of its Surface line of personal computers. The people asked not to be identified discussing private initiatives.
  • R.I.P. Intel (Score:2, Interesting)

    ... industrywide effort to reduce reliance on Intel's chip technology, Bloomberg News reports.

    R.I.P. Intel

    • by Phylter ( 816181 )

      You know you're hated when one of the biggest players in the industry that keeps you alive wants you dead. Maybe if they had played a little nicer it wouldn't have been as bad?

      • by spun ( 1352 ) <loverevolutionary&yahoo,com> on Friday December 18, 2020 @04:07PM (#60846190) Journal

        Intel "We're a monopoly, what are you gonna do, baby?"

        The rest of the world: "We'll make our own chips. With hookers. And blackjack."

      • by Junta ( 36770 )

        My concern is that with Intel and AMD, when the cloud providers base their tech on those offerings and you want to 'bring your cloud workload home', you have a shot at getting equivalent technology.

        As these cloud providers start to create bespoke chips that never leave their datacenters, your ability to take your workload out of the service is reduced. Even if the cloud providers get out-competed by on-prem equivalents, the incompatibility would still mean great things for lock-in.

        • by 1s44c ( 552956 )

          Interpreted languages don't care. Most everything else can be re-compiled with varying levels of difficulty.

          Have you tried cross-compiling in golang? It's amazingly easy (see the sketch below).

          At this point any company that gets tied to proprietary Microsoft-owned hardware has lost all touch with the modern world and has no place in it.
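
          For what it's worth, a minimal sketch of how easy that retargeting is in Go (the file name and targets below are just examples): nothing in the program is architecture-specific, so the target is purely a build-time choice.

          // hello.go - a trivially portable Go program; nothing in it is
          // architecture-specific, so the target is chosen at build time.
          package main

          import (
              "fmt"
              "runtime"
          )

          func main() {
              fmt.Printf("built for %s/%s\n", runtime.GOOS, runtime.GOARCH)
          }

          // Cross-compiling is just two environment variables:
          //
          //   GOOS=linux GOARCH=amd64 go build hello.go    # x86-64 server
          //   GOOS=linux GOARCH=arm64 go build hello.go    # 64-bit ARM server
          //   GOOS=windows GOARCH=arm64 go build hello.go  # Windows on ARM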

  • Plenty of non-Intel general-purpose server CPUs out there; why would a software company design another one?

    • by fred6666 ( 4718031 ) on Friday December 18, 2020 @04:02PM (#60846178)

      It seems everyone can get an ARM license and have a custom CPU made at TSMC. I was planning to watch TV tonight, but I think I'll make my own CPU instead.

      • I don't have time for that; I'm going to download a RISC-V core from GitHub, run it on an FPGA, and be in production by midnight!

    • Re: (Score:2, Funny)

      by Anonymous Coward

      Because Apple sets the trends and Microsoft jumps on and does them better, like Windows phones, Microsoft Surface tablets, and Zune!

      • Well, arguably, MS has done this since Windows 1.0.

        It's so sad that they keep running behind the crazy person because she's flashy and undeservedly confident, and drags in all the mouth-breathers in dandy outfits that MS wants to be king of.

    • Plenty of non-Intel general-purpose server CPUs out there; why would a software company design another one?

      After seeing what Apple was able to do with the M1 chip, a lot of companies think they can achieve the same level of performance per watt. Or at least do better than Intel.

      • Re:Seems dumb (Score:5, Informative)

        by BeerCat ( 685972 ) on Friday December 18, 2020 @04:30PM (#60846278) Homepage

        Maybe they can "do an Apple" and produce their own designs that have better performance per watt than Intel. Or maybe better graphics performance than Intel with integrated graphics Or maybe built-in Windows system call acceleration. Or maybe all of these.

        But it won't be tomorrow.

        The Apple A4, their first in-house design, was released in 2010. The new M1 therefore builds on 10 years of related development work (and however many years before formal release).

      • And they are right that most companies can get more computing power per watt than Intel delivers. Intel's architecture stinks. They should cut off the legacy, providing an emulation layer temporarily, if they ever want to survive. Even if they do, it isn't certain they'll stay on the market. ARM is just so much better than Intel, and ARM with custom IP blocks pulls the trigger on a smoking gun aimed at Intel.
      • Plenty of non-Intel general-purpose server CPUs out there; why would a software company design another one?

        After seeing what Apple was able to do with the M1 chip, a lot of companies think they can achieve the same level of performance per watt. Or at least do better than Intel.

        Regardless of whether they can improve on existing ones, there is now a need from marketing to say they have an in-house designed chip, so they can tick off the comparison to Apple.

    • Plenty of non-Intel general-purpose server CPUs out there; why would a software company design another one?

      One factor might be that they operate server farms containing data that needs to be kept confidential, and which would cause enormous business harm if they stopped working.

      With their own chips they can be more confident that there isn't something like the Management Engine inside them, acting as a spy and potential saboteur.

    • Re:Seems dumb (Score:5, Interesting)

      by enriquevagu ( 1026480 ) on Friday December 18, 2020 @05:06PM (#60846386)

      Microsoft is not a software company designing a server CPU. Microsoft is a cloud company, according to their most recent earnings report [microsoft.com]. Note that the Intelligent Cloud segment, which includes Azure, has the largest gross revenue in the company.

      They are interested in building their own processor because, given their size and the number of CPUs they employ, it is more efficient for them to design and customize one. This is similar to Amazon with their own in-house ARM Graviton [amazon.com] design, or Google building their own Tensor Processing Unit [google.com] for AI workloads, or even (guess who) Microsoft itself designing their own FPGA-accelerated cluster [microsoft.com] for running Bing and their own Neural Processing Unit [microsoft.com] for DNN inference.

      Furthermore, they have hardware designers. Doug Burger is a Microsoft Technical Fellow who is behind both projects, and he was previously the lead architect of the TRIPS project, so it seems feasible to consider their own customized ARM design. Especially given how "easy" it is to start building an ARM-based CPU with the IP provided by ARM.

      Oh, and eventually they might consider using it for Surface devices, too, as a side benefit.

      • And I'm saying they're going in a dumb direction with this unnecessary hardware design. They've done it before... let's not pretend Microsoft hasn't sunk billions into other hardware that has utterly flopped. They are that dumb.

    • by kriston ( 7886 )

      Problem is, nobody is using non-Intel gear in significant numbers to make any difference.

      • Eh, what? There is gear that blows Intel's stuff away on power, or on cost/power (with the caveat that it can keep a machine zooming at full load): IBM mainframes, POWER Unix and Linux boxes, SPARC, to name a few.

        Funny, those who think Intel is the only server game in town should get out more.

      • by 1s44c ( 552956 )

        Don't mobile phones count?
        Or modern Apple laptops?
        Or any of the faster and cheaper chips that AWS is pushing?

        Are you from the past?

        • by kriston ( 7886 )

          Sorry, I was mostly speaking in the context of servers.

          • Even in the world of servers, there's AMD competing with Intel in not-insignificant numbers.

            I know there are ARM-based server boards, but I don't expect they have been deployed in significant numbers.

    • This is not about Intel. It's about wresting control from power users and server admins. By MS's own OEM rules, Secure Boot must be force-enabled - with no way to disable it - for non-x86 setups. Doesn't matter if it's laptops, desktops, or servers. I'm sure Google is looking into doing the same thing with its Chromebooks (ARM + locked/signed bootloaders). Only blessed Linux distributions that pay the MS signing tax will be allowed to play, probably on ultra-high-end 'developer'-class hardware. I hear y…

    • Business is war, and the fewer outsiders you depend on, the better. Rolling their own silicon makes at least as much sense for MSFT as for AAPL.

      • Eh, that doesn't make sense for Microsoft; they're a software company and can make their stuff run on any general-purpose CPU they want. This is dumb, like it would be dumb to make a Microsoft PC (or a phone, or a Surface tablet, and I have proof those last two were beyond dumb and guaranteed flops).

    • by Junta ( 36770 )

      A cloud company would design another one so that customers can't buy that specific CPU. Once you get your customers to run their workload on your service, if that means that workload is also designed and built around a service-specific CPU, that's just another degree of lock-in that forces customers to rent when they *could* otherwise buy.

      • We saw a lot of that in hard-drive microcontrollers and such. Companies don't like maintaining their own compilers and assemblers. They're okay maintaining their own extensions to BSD-licensed compilers and assemblers. Which means ARM is great if you can get it done within the provided instruction set, validation, and SoC options. And RISC-V is great if you need to tweak the instruction set architecture a bit. Because maintaining your custom GPU and fuzzy-math extensions to RISC-V is less of a pain than…
      • That doesn't hold water. What cloud stuff is tied to a specific architecture? The answer is "nothing". I do some cloud stuff, and some of it is compiled code. It's portable, and easily retargetable to other CPUs. In fact much of it already deploys on Android as well. What you get tied into, without a lot of care, is the APIs, because those are all different, with different properties, making them hard to abstract without taking performance hits for using the lowest common denominator.

        The instruction set has…

        • by Junta ( 36770 )

          It isn't a factor today because those providers are generally providing you the same architectures you would run on-premise today.

          Yes, there are already *plenty* of facets that secure lock-in, but having more and more is nice.

          It's not only your own code, but third-party libraries that could expose you. Further, even if it is 'your' code, you find out ex-coworker 'Bob' got adventurous and hard-coded some MS-specific CPU behavior that you can't quite figure out how to retarget.

          They hope they can at least some un…

          • It isn't a factor today because those providers are generally providing you the same architectures you would run on-premise today.

            No, the code is not architecture-dependent.

            It's not only your own code, but third-party libraries that could expose you. Further, even if it is 'your' code, you find out ex-coworker 'Bob' got adventurous and hard-coded some MS-specific CPU behavior that you can't quite figure out how to retarget.

            We have code reviews... so Bob's PR would probably have been rejected. Bob might have u…

    • why would a software company design another one?

      It's MS - the answer is obvious:

      to prove there is nothing in computing they can't foul up

      You must be new here.

    • My first guess is optimizing power consumption for their Azure workload data, followed closely by improved security (known supply chain, verifiable devices) and the ability to bake their own security systems into the silicon.

      I also wonder if it's possible for them to somehow improve their own performance characteristics as workload hosts (data center power, cooling, workloads per node, etc.) while at the same time structuring it in a way that pushes some portion of customer workloads into higher billing tiers.

      I always…

  • by kryliss ( 72493 ) on Friday December 18, 2020 @04:16PM (#60846214)

    Blue Chips Of Death!

  • Apple and Microsoft raise their ARMs and give Intel the finger.

  • They know that, in the long run, it'll cost them an arm and a leg, right?

    Because they're not really designing it, just putting together a Lego set of ready-made modules?

    • by kriston ( 7886 )

      I commented this once but was corrected immediately.
      The instruction set and ABI are licensed, but the underlying implementation depends on the implementer. Apple's M1 design is original to Apple and compatible with ARM. Presumably, Microsoft's SQ1/SQ2 are also original to Microsoft but still compatible with ARM.

      • Although isn't there some practical difference between Apple and Microsoft in this regard? I mean, has Microsoft any in-house experience with developing their own chips?

        • has Microsoft any in-house experience with developing their own chips?

          Totally valid question, but I'm sure MS has a budget big enough to hire the best, or thereabouts, to help them with this.

          • Totally valid question, but I'm sure MS has a budget big enough to hire the best, or thereabouts, to help them with this.

            People used to say that about Microsoft and advertising, back in the day. Or, rather, they'd say "with all the money they have, why are their ads so bad?"

            But that was Ballmer's Microsoft. Nadella seems a bit more on the ball.

  • Apple invented something new. Time to copy. First the mobile phone, and now the chips. The fate will be exactly the same.

    • Re: (Score:3, Insightful)

      by RamenMan ( 7301402 )

      Hmm...so you are saying that Apple invented something new? Really?

      Microsoft actually makes some really good products, including their former Windows Phone. But somehow people who are completely ill-informed get the idea that Apple invents new things and Microsoft makes bad things. People in the know realize that Apple doesn't invent anything; they just make better commercials.

      • I guess that's why we're all using Windows Phones right now.
      • by u19925 ( 613350 )

        Actually, I have used both MS Windows CE and Windows Mobile phones (I purchased them after I had already used the iPhone 4) and have used many MS products. That is not the question here. Announcing a Windows Mobile phone after Apple came out with the iPhone, with almost identical style and functionality, is called copying. Same for Zune (I bought that as well). Now coming up with chips immediately after the Apple M1 is called being a copycat. I don't see any commitment here. Windows had support for Itanium, MIPS, PowerPC. DEC bet its Alpha…

        • That's all fine. My comment was questioning your statement that "Apple invented something new".

          Really? What was it? A computer using ARM chips? Or, a multi-function phone that runs other software?

          Somehow Apple convinces the masses that they invent new things, when really they are just repackaging what the rest of the industry has done for a long time.

          Apple does these things very well, but they don't 'invent' most of the things they get credit for.

      • Apple tends to latch on to good ideas and drag the industry forward. The exception was the Newton, which is a shame, because the Newton MP2100 was fantastic and far more productive than a modern iPhone, IMHO. No, Apple didn't invent USB, but the iMac made it attractive. No, Apple didn't invent the smartphone by a long shot, but the iPhone made it attractive to regular people. Apple didn't invent the GUI, but had a useful and functional GUI environment long before MS; it took MS 10 years to catch up. Apple/NeXT…

    • Re:Yeaaa (Score:4, Interesting)

      by Aighearach ( 97333 ) on Saturday December 19, 2020 @12:41AM (#60847452)

      Apple invented something new. Time to copy.

      Apple did not invent ARM licenses.

      ARM did.

    • Apple invented something new. Time to copy. First the mobile phone, and now the chips. The fate will be exactly the same.

      Microsoft put ARM chips in general-purpose PCs while Apple was still playing with their iToys. The idea of using ARM in a laptop / desktop is very much Apple copying Microsoft.

      The only thing Microsoft is "copying" here from Apple is the idea of vertical integration, something that was invented back before Tim Cook's great granddaddy even used his first telephone.

  • Apple did it, so we must too ... right?

    Gives a whole new meaning to an "ARM Race".

  • Since that ought to be enough for anyone.
  • by kriston ( 7886 ) on Friday December 18, 2020 @05:15PM (#60846414) Homepage Journal

    It's about time to get away from CISC emulators. The wasted energy is astronomical. RISC code running natively on RISC was recognized as the clear answer three decades ago, but, somehow, never gained enough traction to topple Big Intel, whose CISC-on-RISC emulators crippled computer technology for three decades.

    • Or we could try working on a better CISC. Funny how that has never been a serious option since RISC became fashionable in the '80s.

      Technically, the ISA isn't about instructions, but the encoding of information. It's not hard to make an instruction-decoding mechanism better than x86's. In fact, it's pretty hard to do worse even if you try.

      • by kriston ( 7886 )

        That ship has sailed. Clone chips from AMD and Cyrix/Centaur tried pure CISC for x86 and it just didn't work. Just-in-time translation of CISC to RISC was the best anyone could do without ditching CISC for RISC, which would have ruled out x86 compatibility.

        And here we are, coming full circle 30 years later, realizing CISC x86 needs to go away.

        • I was talking about a new CISC-style architecture, not another x86 clone. You're making the false and frequent assumption that all CISC designs use the same stupid and overcomplicated instruction encoding as x86 (using extension bytes). Seeing how more and more companies are looking to dump x86 and move to something like ARM, I would assume maintaining compatibility with x86 isn't a priority.

          The reason I'm interested in this is that most modern RISC processors have become increasingly CISC-like…

    • CISC-to-microcode translators take only a small part of a modern CPU. That RISC vs. CISC war made sense three decades ago, but nowadays, with the modern scale of integration, there is simply no point anymore. Matter of fact, CISC command sets start making sense again, because one CISC command can be the equivalent of a bunch of successive RISC commands, essentially compressing the data that has to be loaded by the CPU.
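
      As a toy illustration of that compression argument (a hypothetical instruction, not any real ISA): one CISC-style read-modify-write add carries the same work as three RISC-style micro-ops, so the fetched instruction stream is shorter.

      // Toy model of CISC-to-micro-op cracking (hypothetical ISA, for
      // illustration only; not how any real decoder is implemented).
      package main

      import "fmt"

      type MicroOp struct {
          Op   string
          Args string
      }

      // crack expands one CISC-style "ADD [addr], reg" into the
      // load/add/store micro-op sequence a decoder might emit.
      func crack(addr, reg string) []MicroOp {
          return []MicroOp{
              {"LOAD", "tmp <- [" + addr + "]"},
              {"ADD", "tmp <- tmp + " + reg},
              {"STORE", "[" + addr + "] <- tmp"},
          }
      }

      func main() {
          // One fetched CISC instruction, three executed micro-ops:
          // the instruction stream is smaller, the work is the same.
          for _, u := range crack("0x1000", "r3") {
              fmt.Println(u.Op, u.Args)
          }
      }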

      • That argument died decades ago, because only a few of the extra CISC commands are useful for in-order computations, which is most things outside of games and rendering. And RISC instruction sets are not as small as they originally were; they have the most useful instructions.

        one CISC command can be the equivalent of a bunch of successive RISC commands, essentially compressing the data that has to be loaded by the CPU

        The problem here is that you're counting this as an advantage for CISC, when actually it is a tradeoff that benefits RISC in most use cases. When I'm programming firmware on a RISC processor, I can calculate how many cycles something will take…

        CISC-to-microcode translators take only a small part of a modern CPU

        Probably true, but translating to microcode is not hard and is not the reason for the RISC advantage.

        Matter of fact, CISC command sets start making sense again

        I am afraid not. Firstly, the number of instructions is not relevant. What is important is memory alignment. With CISC you have to go byte by byte - at least for x86, where there is a long history of instructions you have to support. With RISC it is one word at a time. If you are attempting to design an out-of-order CPU with multiple execution pipelines, this makes a big difference. The limit for x86 is…

      • by vbdasc ( 146051 )

        That RISC vs. CISC war made sense three decades ago, but nowadays, with the modern scale of integration, there is simply no point anymore.

        Yet this same war might just flare up again when the reasons for the Apple M1's outstanding performance are understood well enough. According to some analysts, CISC as a whole, and the x86 architecture in particular, have a crippling flaw: x86 instructions have variable lengths, which prevents the x86 instruction decoders from parallelizing their work effectively. According to these analysts, this creates an insurmountable bottleneck for x86's performance per watt, and even raw single-thread performance.
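
        A rough sketch of the decode bottleneck those analysts describe (toy encodings, not real x86): with fixed-width instructions every boundary is known up front, so N decoders can work in parallel, while with variable-length instructions the start of instruction k+1 is only known after instruction k has been sized - a serial dependency chain.

        // Simplified sketch (toy encodings, not real x86) of why
        // variable-length instruction formats make wide decode hard.
        package main

        import "fmt"

        // fixedBoundaries: with 4-byte instructions, every boundary is
        // known immediately; n decoders could each take bytes[4*i:4*i+4].
        func fixedBoundaries(n int) []int {
            starts := make([]int, n)
            for i := range starts {
                starts[i] = 4 * i // computable in parallel, no dependencies
            }
            return starts
        }

        // variableBoundaries: an instruction's length is only known after
        // inspecting its leading byte, so finding where instruction k+1
        // starts serially depends on having sized instruction k.
        func variableBoundaries(code []byte, length func(b byte) int) []int {
            var starts []int
            for pc := 0; pc < len(code); {
                starts = append(starts, pc)
                pc += length(code[pc]) // serial dependency chain
            }
            return starts
        }

        func main() {
            // Toy length rule: opcodes >= 0x80 take 3 bytes, others 1 byte.
            toyLen := func(b byte) int {
                if b >= 0x80 {
                    return 3
                }
                return 1
            }
            code := []byte{0x01, 0x90, 0xAA, 0xBB, 0x02, 0x85, 0x00, 0x00}
            fmt.Println("fixed-width starts:   ", fixedBoundaries(4))
            fmt.Println("variable-width starts:", variableBoundaries(code, toyLen))
        }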

    • by dfghjk ( 711126 )

      "RISC code running natively on RISC was recognized as the clear answer three decades ago..."

      No, it wasn't. Three decades ago the design you lack understanding of wasn't even contemplated. Modern processors are not either "CISC emulators" or "RISC code running natively on RISC"; that's just your ignorance talking. Modern processors use a processor-specific architecture with an instruction-decode stage, regardless of instruction set, and the architecture itself has utterly nothing to do with "RISC vs. CISC".

      • RISC is an instruction set philosophy that enabled the design of simpler processors, a concept that became irrelevant decades ago.

        This is Slashdot. You're being laughed at by legions of firmware programmers right now.

        And I'm gonna laugh at you twice, just because it is Friday night. This week I downsized a bunch of three-phase inverter code from ARM to 8-bit AVR.

        There are lots of new simple processors coming out all the time, because there is demand for them. There are very few non-RISC systems coming out, and they're mostly legacy systems, like Intel servers that will be replaced by RISC clusters at EOL.

        Not that the typical /. fanboy knows anything about this anyway.

        Shut your pie hole, Dunce. CISC…

        • CISC is a dying architecture, and every engineer knows it.

          Here in the real world, it's not how you see it.

          CISC was about minimising bus traffic - by minimising instruction fetches per module of work performed.

          RISC was about minimising processing per module of work done.

          They solve different problems, and which is better depends on what your "module of work" is. The first CISC machine was the PDP-11 - which was a "hardware Fortran machine", and C is its assembler. If you are doing computational thermodynamics, then CISC is a good move - fetch an instruction that says "perform this matrix operation" and it's totally limited by the time it takes to get the data from memory (or level-2 cache if it's big enough).

          • If you are doing computational thermodynamics, then CISC is a good move - fetch an instruction that says "perform this matrix operation" and it's totally limited by the time it takes to get the data from memory (or level-2 cache if it's big enough).

            This is only true if, for some reason, you're doing it on a desktop computer with a single processor.

            If you're really doing computational thermodynamics, you're likely using a cluster of RISC processors, and the out-of-order stuff is all done algorithmically.

            It is just insane to waste the number of cycles that CISC systems waste on cache misses, when a properly designed algorithm for computational thermodynamics wouldn't normally benefit from a cache in any way. The advantage of a cache is in being a…

  • by 4front ( 59135 ) on Friday December 18, 2020 @05:37PM (#60846478)

    Apple, Microsoft, Google and Amazon are all developing their own silicon.

    Hardware guys like AMD and Intel need to start making their own OSes.

    I don't expect Microsoft and Apple to support Linux on their silicon - thankfully, Intel and AMD may end up being Linux's main saviors.

    • Apple has already said the M1 Macs do not have locked bootloaders, and they support virtualization as well. You can certainly run Linux in a VM on an ARM Mac, and the Win10 ARM port has been demonstrated running under QEMU. Stop spreading FUD. Apple wants this hardware to spread far and wide, and knew that some of the appeal of the modern Mac was people being able to run awful Winblows software if and when they had to.

      • by laffer1 ( 701823 )

        Apple also hasn't released the specs for getting Linux working natively instead of under macOS. Running a VM isn't good enough. The most important point of all this is that when Apple makes this M1 Mac end-of-life, it should still have an OS that gets updates (Linux, *BSD, whatever).

        First-gen Apple hardware gets killed fast.

        It's also not just about Linux.

    • If you don't think MSFT is running Linux on their own silicon, you haven't been paying attention to Microsoft's messaging around Windows Server.
  • This will be a nice test for Microsoft's "commitment" to Linux and Open Source.

    So if the design is closed and does not support Linux and the BSDs, we will know that Microsoft does not care about open licences.

    • If they were smart and wanted to succeed, they'd make the ARM Windows ecosystem as open and widely available as possible. Maybe even sell mATX ARM boards with tons of free documentation.

    • by Junta ( 36770 )

      I wager it goes further than that: you won't even be able to buy the closed implementation.

      MS does not want you to be able to buy the hardware they run in their datacenters; they want to force you to rent it instead.

  • BOB will be the most powerful chip ever developed!
  • Look at the 2018 layoffs at Qualcomm. Look at where those folks landed. Then look at the location of the Microsoft office in Raleigh, NC and notice it's across the street from Qualcomm.
  • Their chips stink. Full of legacy, wasting lots of cycles and huge power on bullshit processing. What mostly gives them an advantage is process technology, not great architecture. Plus, what made them popular was the marriage with Microsoft's monopoly. They never played fair, and they deserve a prolonged and painful death. Plus, ARM got to them and exceeded them in computing power. First Apple, now Microsoft. Only the search engines, video websites, and social networks are left to get away from them. Hopefully big tech 2.0 will solve it.
  • Oracle built a number of servers with custom SPARC CPUs but threw in the towel on the next-generation SPARC M9 three years ago.
    • Oracle is a crap-Midas, of course it turned to crap.

      The leading provider of SPARC CPUs has, for a long time now, been Fujitsu. https://en.wikipedia.org/wiki/... [wikipedia.org]

      SPARC is just as good as any other RISC instruction set, it just screams "legacy" so most vendors choose something with better PR.

      • The problem with Sparc is Larry Ellison.

        If he put the money he puts into AC75s into Sparc, Intel would be a dead duck.

        (Actually, Intel is already a dead duck, but some people don't know it yet.)

      • SPARC is just as good as any other RISC instruction set, it just screams "legacy" so most vendors choose something with better PR.

        Is it, though? It's a bit different. The circular register file has proven a bane and a blessing over the years, but I think ultimately that design caused problems. The SPARC design seems to scale well to large SMT systems, but no out-of-order version was ever seen, so its per-thread performance lagged once out-of-order execution took over on the high end.

        • SPARC is just as good as any other RISC instruction set, it just screams "legacy" so most vendors choose something with better PR.

          Is it, though? It's a bit different. The circular register file has proven a bane and a blessing over the years, but I think ultimately that design caused problems.

          That's just pure silly-sauce, dude.

          Oracle puts a whole bunch too many circular registers on some of their designs. Their droids will try to sell any spec, no matter how useless it is to the software engineers. They don't know, they don't care; they'll just tell your VP that you're not smart enough to understand why theirs is better, and you'll get a memo informing you that you believe it is now better. Or, maybe, this is why Oracle doesn't make SPARC servers anymore, but Fujitsu does.

          I can build a circular…

          • Oracle puts a whole bunch too many circular registers on some of their designs.

            Well, I love how you misread my post and then were incredibly condescending about it. Typical Slashdot idiot, I guess.

            I said register FILE, not register. It's got nothing to do with barrel shifters and everything to do with the register windowing system with a circular mapping.

            You clearly don't work with the technology at the layer we're discussing; you are 15 years outside of this conversation. Supercomputers are usually RISC…

  • This year Microsoft are ten steps behind AWS. Last year they were nine steps behind. The year before, eight steps behind.

    AWS have Graviton2 processors on instances you can use today that give more processing power for less money than Intel- or AMD-based processors. Want to try one? You can get a t4g.micro instance for free until March 2021. AWS have two kinds of machine-learning chips driving down costs for large-scale machine learning. AWS have custom-built hardware hypervisors on all modern instance types.

  • No mention here yet of hardware-implemented back doors. These companies can now achieve full and total control, with any knowledge of it heavily protected by intellectual property laws. Funny how everyone was talking about how big a threat this was with Chinese-produced chips, but if it's business, well, you know, that's cool for sticking it to Intel. Really?!
