Intel / Cloud / Hardware / Technology

Amazon Unveils New Server Chip To Compete With Intel's Product (bloomberg.com) 40

An anonymous reader quotes a report from Bloomberg: Amazon Web Services has developed a more powerful version of its own chips to power services for cloud-computing customers, as well as some of AWS's own programs. AWS Chief Executive Andy Jassy on Tuesday introduced a second-generation chip, called Graviton2, aimed at general-purpose computing tasks. He didn't specify a release date. The company last year unveiled its first line of Graviton chips, which it said would support new versions of its main EC2 cloud-computing service. Prior to that, Amazon -- and other big cloud operators -- had almost exclusively used Intel Xeon chips. The company said at the time that the Graviton-backed cloud service would be available at a "significantly lower cost" than existing offerings run on Intel processors. Jassy said Intel is "a very close partner," but to push the envelope on prices, "we had to do some innovating ourselves."

"Amazon is using its 2015 acquisition of startup Annapurna Labs, which Jassy called a 'a big turning point for us,' to design its own chips," reports Bloomberg. "The new processor uses technology from SoftBank Group Corp. unit ARM Holdings, a standard that dominates in mobile phones."
This discussion has been archived. No new comments can be posted.


Comments Filter:
  • That quick (Score:5, Interesting)

    by Paul King ( 2953311 ) on Tuesday December 03, 2019 @06:43PM (#59482376)
    Only a week since their previous breakthrough https://hardware.slashdot.org/... [slashdot.org]
  • Prediction: Google and Microsoft will do the same thing very soon. The loser will be Intel - unless Intel licenses their design or gives them a STEEP discount.

    • Re: (Score:2, Funny)

      by Anonymous Coward

      ...and then things will move back to the desktop (or phone), 'cause... centralized clouds are old and uncool?

      • by sjames ( 1099 )

        DING DING DING DING!

Hm, not so much old and uncool per se, but privacy and data control will become more of an issue. It's not going away. Desktop? Maybe. Store all your shit on your phone and auto-backup to a smaller storage-based box at home every night? Hard to say. But eventually I think privacy concerns will overwhelm other technology decisions. Maybe in about 10-15 years. Just a guess.
Store all your shit on your phone and auto-backup to a smaller storage-based box at home every night?

That is literally what I am doing.

Raspberry Pi (or other SBC) + USB storage + rsync are your friends.
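For the curious, here is a minimal sketch of what that nightly job could look like, assuming the phone already pushes its files to a directory on the Pi. The paths, and the use of rsync's --link-dest for hard-linked daily snapshots, are illustrative assumptions, not the poster's actual setup:

```python
#!/usr/bin/env python3
"""Nightly snapshot of phone data onto USB storage attached to a Raspberry Pi.

Illustrative sketch only: it assumes the phone already syncs its files to
/srv/phone-incoming and that the USB drive is mounted at /mnt/usb-backup
(both paths are made up). Unchanged files are hard-linked against the
previous snapshot via rsync's --link-dest, so daily snapshots stay cheap.
"""

import datetime
import pathlib
import subprocess

SOURCE = pathlib.Path("/srv/phone-incoming")   # where the phone's files land
BACKUP_ROOT = pathlib.Path("/mnt/usb-backup")  # USB storage mount point
LATEST = BACKUP_ROOT / "latest"                # symlink to the newest snapshot


def nightly_snapshot() -> pathlib.Path:
    """Copy SOURCE into a dated snapshot directory and repoint 'latest' at it."""
    snapshot = BACKUP_ROOT / datetime.date.today().isoformat()

    cmd = ["rsync", "-a", "--delete"]
    if LATEST.exists():
        # Hard-link files that are unchanged since the previous snapshot.
        cmd.append(f"--link-dest={LATEST.resolve()}")
    cmd += [f"{SOURCE}/", f"{snapshot}/"]
    subprocess.run(cmd, check=True)

    # Atomically swing the 'latest' symlink over to the new snapshot.
    tmp = BACKUP_ROOT / "latest.tmp"
    if tmp.is_symlink():
        tmp.unlink()
    tmp.symlink_to(snapshot)
    tmp.replace(LATEST)
    return snapshot


if __name__ == "__main__":
    print(f"snapshot written to {nightly_snapshot()}")
```

Kick it off from cron (or a systemd timer) each night and prune old snapshot directories as the drive fills up.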

      • by gtall ( 79522 )

        Microsoft: we have developed (ta ta ta ta) a Personal Cloud that you can run on your own computer. No longer do you have to worry about network availability or whether you have misconfigured your security settings. With our Personal Cloud, your computer will be available for your use always. And we will automatically configure your security settings and keep watch over your computer so that you no longer have to. Our Realtime Ombudsman for Computing Clarity and Omniscience (ROCCO) will automatically report

Each generation of the i* chipset is faster and does more; it's based on ARM, and it looks like Amazon is doing the same with a different end goal (data servers).

It would be interesting to see if someone manages to make an ARM chipset that will challenge AMD/Intel in cores.
    There have been attempts to do non-Intel multicore in the past, but aside from SPARC (which is pretty amazing) they really haven't gone anywhere.

    • by PolygamousRanchKid ( 1290638 ) on Tuesday December 03, 2019 @07:58PM (#59482584)

There have been attempts to do non-Intel multicore in the past, but aside from SPARC (which is pretty amazing) they really haven't gone anywhere.

      "The POWER4 is a microprocessor developed by International Business Machines (IBM) that implemented the 64-bit PowerPC and PowerPC AS instruction set architectures. Released in 2001, the POWER4 succeeded the POWER3 and RS64 microprocessors, and was used in RS/6000 and AS/400 computers, ending a separate development of PowerPC microprocessors for the AS/400. The POWER4 was a multi-core microprocessor, with two cores on a single die, the first non-embedded microprocessor to do so.

      https://en.wikipedia.org/wiki/... [wikipedia.org]

      They're up to POWER9 now . . . but will stop at POWER11 . . .

What about PCIe lanes?

      In servers you need them for networking and storage, plus maybe a GPU in some workloads.

    • Comment removed based on user account deletion
      • by gtall ( 79522 )

I think the problem for RISC-V is that you can grab it to develop your own computing thingy. However, a computing thingy typically requires a lot of surrounding components. So unless you are prepared to develop those yourself as well, you'll be using the open source versions of those...if they exist, and if they can fit into your thingy. Therein lies the rub: doing it yourself requires megabucks, and it isn't clear what the payoff is; freedom from Intel/AMD is not a payoff. Using the open source versions means at

      • Comment removed based on user account deletion
This article's prose makes me want to shove my own head through a brick wall. Can we just agree to call a CPU a CPU without inventing weird, confusing terminology like "server chip" (what does that even MEAN?).
    • by rho ( 6063 )

      In this case, it bypasses the normal channels that Amazon uses to spy on your data and routes it directly from the CPU into Bezos's brain.

    • by AHuxley ( 892839 )
vs. a CPU that can do all kinds of math, with the added heat, cooling, and cost.
    • Server chips are things. They have massively higher IO and memory bandwidth, more cores and usually lower clocks than desktop chips.

Can we just agree to call a CPU a CPU without inventing weird, confusing terminology like "server chip" (what does that even MEAN?).

      No, we can't, because it would be silly. And what it MEANS is a chip designed for use in servers.

      You know, as opposed to designed for use somewhere else, like the vast, vast majority of ARM chips.

    • by Mashiki ( 184564 )

Can we just agree to call a CPU a CPU without inventing weird, confusing terminology like "server chip" (what does that even MEAN?).

      No, because nobody really ever has. CPUs for servers have additional instruction sets (or specific instruction sets removed in favor of more on-die cache), native ECC support, more on-board physical PCIe lanes, support for PCIe lane splitting, the ability to work in tandem with multiple physical CPUs on the same board, and can be set up to use only specific memory channels.

      • by Chromal ( 56550 )
        Believe it or not, there are actually more 'chips' in servers than the CPU, and so if you say 'server chip' but actually mean 'CPU,' you're being actionably non-specific in your failed attempts at technical journalism.
        • by Mashiki ( 184564 )

Believe it or not, I built my first server back in '96. That, of course, was when hot-swappable SCSI was insanely expensive, 200MB drives for it were $2000 each, and 64MB of ECC RAM was just shy of $800. And "all guts, no glory" means that the CPU is really the only thing that matters at the end of the day to make that server work. Your attempt to whine over that is simply a failure that deserves to be mocked.

After their string of horrible revelations, I have decided not to buy Intel chips again, but if the alternative were Amazon, I might change my mind.

After their string of horrible revelations, I have decided not to buy Intel chips again, but if the alternative were Amazon, I might change my mind.

      Don't worry, there's also AMD, which is now not only cheaper per op than Intel but also faster for most tasks.

  • Qualcomm ended its Centriq ARM server CPUs, and Oracle decided not to pursue a Linux cloud based on its own SPARC M8 CPU and M8/T8 servers.

    What does Amazon know that QCOM and ORCL don't?

    • by isj ( 453011 )

Oracle/SPARC: Oracle probably found out that the average software developer doesn't know how to code for anything other than x86. That, combined with the resulting small set of available software, probably led to dropping any kind of SPARC cloud.

      Qualcomm/ARM: Dunno. I'm using a few Cavium ARM ThunderX2 instances at Scaleway and they are running fine, but it seems they are being phased out for cheap (atom?) x86 servers.

Oracle probably found out that the average software developer doesn't know how to code for anything other than x86.

        There's no difference between coding for x86 and coding for Arm these days. And it's not like no one ships native code on either Android or (even more so) iDevices...

Never forget the software factor, especially with respect to performance with modern language runtimes.

          Sure, C/C++ and Java will run well on SPARC for obvious reasons, but I doubt things like V8 (and thus Node.js) or the C# JIT compiler will get any love on SPARC. Thanks to mobile, though, the companies behind them will make sure those work well on ARM, even if it is still a tad trickier compared to x86. (CPython will work as well, or as "badly", either way, since it is an interpreter.)

          SPARC was especially horrible for runtime
    • Maybe AWS spends enough on HW to justify spending on their own designs.

Related to this, I was a bit surprised to notice that Gosling is also working at Amazon these days, on the latest versions of the JVM for these Gravitons.

      https://aws.amazon.com/blogs/c... [amazon.com]

Having modern software platforms available on the Gravitons should make running code simpler. So any JVM language, and whatever else you can support in a similar way, I guess (Python etc.?).
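To the parent's point, running on a Graviton instance is mostly a matter of picking an arm64 image and an ARM-based instance type; the runtimes above that (JVM, Python, etc.) are just the aarch64 builds of the same software. A minimal, hypothetical sketch with boto3 follows; the AMI ID, key pair name, and region are placeholders, and m6g is the Graviton2-based general-purpose instance family:

```python
"""Launch a Graviton2-backed (arm64) EC2 instance with boto3.

Sketch only: the AMI ID and key pair below are placeholders and must be
replaced with a real arm64 AMI and key pair from your own account/region.
"""

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")  # region is an assumption

response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",  # placeholder: any arm64 (aarch64) AMI
    InstanceType="m6g.medium",        # m6g = Graviton2 general-purpose family
    KeyName="my-key-pair",            # placeholder key pair name
    MinCount=1,
    MaxCount=1,
)

print("launched:", response["Instances"][0]["InstanceId"])
```

Once the instance is up, `uname -m` reports aarch64, and the distribution's stock OpenJDK and CPython packages for that architecture run unmodified.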

  • Comment removed based on user account deletion
    • I can't understand why there is such a rush to cloud by organizations without even a hint of concern over the level of lock-in they face, along with the corresponding risks of arbitrary price increases.

  • Comment removed based on user account deletion
  • i.e., it isn't a buggy insecure pile of shit. That's a pretty low bar.
