Amazon Web Services Introduces its Own Custom-Designed ARM Server Processor, Promises 45 Percent Lower Costs For Some Workloads (geekwire.com) 65

After years of waiting for someone to design an ARM server processor that could work at scale on the cloud, Amazon Web Services just went ahead and designed its own. From a report: Vice president of infrastructure Peter DeSantis introduced the AWS Graviton Processor Monday night, adding a third chip option for cloud customers alongside instances that use processors from Intel and AMD. The company did not provide a lot of details about the processor itself, but DeSantis said that it was designed for scale-out workloads that benefit from a lot of servers chipping away at a problem. The new instances will be known as EC2 A1, and they can run applications written for Amazon Linux, Red Hat Enterprise Linux, and Ubuntu. They are generally available in four regions: US East (Northern Virginia), US East (Ohio), US West (Oregon), and Europe (Ireland). Intel dominates the market for server processors, both in the cloud and in the on-premises server market. AMD has tried to challenge that lead over the years with little success, although its new Epyc processors have been well-received by server buyers and cloud companies like AWS. John Gruber of DaringFireball, where we first spotted this story, adds: Makes you wonder what the hell is going on at Intel and AMD -- first they missed out on mobile, now they're missing out on the cloud's move to power-efficient ARM chips.
  • by drinkypoo ( 153816 ) <drink@hyperlogos.org> on Wednesday November 28, 2018 @09:47AM (#57714448) Homepage Journal

    Makes you wonder what the hell is going on at Intel and AMD -- first they missed out on mobile, now they're missing out on the cloud's move to power-efficient ARM chips.

    Now hold on thar, pardner. The industry has been building ARM-based servers for ages. They have so far failed to take off because power consumption isn't the most important factor for servers. Let's wait to see how much of the cloud goes ARM before we declare this the year of ARM on the server.

    • by Anonymous Coward

      No, they didn't run the software. However, now they run AWS EC2, and EC2 is the dominant cloud platform. So now they run the dominant cloud platform.

      I used an ARM cluster because they're cheap. Insanely cheaper than Intel boxes.

      It depends on how Amazon prices these as to whether their AWS business switches to ARM; you can bet part of the game here is to get Intel and AMD to deep-discount their processors to more competitive levels.

      That Manchester University supercomputer is probably only $2 per ARM core in hardware...

      • I use ARM almost exclusively for my servers. The exception is my Couchbase servers, and I'm considering replacing them since they don't really do Docker well either. If Couchbase manages to do ARM + Docker... as in, it's possible to deploy an entire redundant Couchbase Enterprise cluster using (preferably) Swarm or possibly Kubernetes... then I'm back with them.

        I use small, cheap Raspberry Pi servers running Linux, Docker and .NET Core. All my code is map/reduce so scalability is not an issue. I can run tens of
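
A minimal sketch of the kind of horizontally scalable map/reduce job described above, written in Python rather than .NET Core for illustration; the word-count workload and its inputs are made up, and nothing in it cares whether the workers are x86 or ARM.

```python
# Hypothetical word-count example: pure-Python map/reduce that runs unchanged on
# x86 or ARM nodes, scaling out by adding more worker processes (or more boxes).
from collections import Counter
from multiprocessing import Pool

def map_chunk(lines):
    """Map step: count words in one chunk of input."""
    counts = Counter()
    for line in lines:
        counts.update(line.split())
    return counts

def reduce_counts(partials):
    """Reduce step: merge the per-chunk counters."""
    total = Counter()
    for partial in partials:
        total.update(partial)
    return total

if __name__ == "__main__":
    # Fake input split into chunks; in practice each chunk would be a file or shard.
    chunks = [["the quick brown fox", "jumps over the lazy dog"],
              ["the dog barks", "the fox runs"]]
    with Pool() as pool:
        partials = pool.map(map_chunk, chunks)
    print(reduce_counts(partials).most_common(3))
```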
    • On top of this, the comment acts like you can just flip a switch and produce a power-efficient processor. Moving to ARM would involve a product they aren't overly familiar with, on a platform (non-plug-and-play) that both companies generally don't have much experience with, and god forbid they may actually need to pay license fees without any guaranteed ROI.

      TFHeadline says it nicely: "Some workloads"

      • I'll bite..

        It's 2018, soon 2019... people learn to program from places like CodeAcademy. They take 12-week classes and are professional programmers. Etc...

        Most of the code out there in the world is being produced by people writing node or python scripts. A lot of it is just deploying packages on Linux and running a server. In many cases, it's just a blog deployed by clicking a few links. If you're coding for the cloud, then you would never deploy your own database... well unless you're a moron, you'd use nothing
        • What has any of that got to do with back-end servers supporting cloud infrastructure? The question is not "can it run on ARM?" You have correctly identified that, yes, a lot of code being churned out today is CPU-agnostic. The question is: "Does it make sense to run it on ARM?" The answer at the moment is overwhelmingly no for the vast majority of workloads we throw at processors, unless the workload in question leaves the processor sitting idle and saving power for a considerable amount of time.

          If you suita
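
To illustrate the "CPU agnostic" point in the exchange above: a pure-Python (or Node) script neither knows nor cares which architecture it runs on. A minimal Python check, with the caveat that native extensions are the part that actually needs aarch64 builds; the instance-type comments are assumptions based on the thread, not AWS documentation.

```python
# Minimal architecture check: the same script runs unchanged on an x86 or ARM
# instance; only the reported machine string differs.
import platform
import sys

print("python  :", sys.version.split()[0])
print("machine :", platform.machine())   # e.g. 'x86_64' on a t3/m5, 'aarch64' on an EC2 A1
print("system  :", platform.system())

# Pure-Python code is architecture-agnostic. Packages with compiled C extensions
# (numpy, psycopg2, ...) are the part that actually needs an aarch64 build or wheel.
```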

    • I don't think that we're really sure yet that customers really WANT power-efficient ARM chips. I think that many of them are more worried about x86 compatibility right now.

      Hell... there are a lot of people out there who are afraid to move off of Intel processors because they can't be sure that their vendors validated their older software on the AMD Epyc stuff yet.

      • Re:snicker snort (Score:4, Interesting)

        by Hadlock ( 143607 ) on Wednesday November 28, 2018 @12:48PM (#57715562) Homepage Journal

        99% of my workload in AWS is on open source code, mostly Python, with some commercial Java products and a couple of private proprietary Java apps. The 1% that is not is our VPN software that IT runs. All of this runs in Kubernetes. Inside that number is also our entire Jenkins/Selenium CI/CD process for QA. Our Kubernetes spend is $4000-8000/month; 100% of it could run on ARM tomorrow just by adding an ARM build target for the containers.
         
        If we could shrink our AWS K8S spend from $8000 to $4000 per month, that is almost $50,000 in annual savings. My boss would buy me a round trip ticket to Europe if we accomplished that kind of monthly savings.
         
        ARM might not be useful for someone in the Windows world, but there is zero reason why our company would be tied to Intel architecture, and we weren't even trying to be architecture-agnostic. I suspect that if Amazon can offer ARM at a competitive cost to Intel with the same reliability, people will make the jump. I can see putting all our low-priority processes on ARM in the next quarter to save money.
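
A hedged sketch of the "low priority processes on ARM" idea above, using the official kubernetes Python client; the deployment name, image, and namespace are hypothetical, and it assumes a mixed cluster whose ARM nodes carry the standard kubernetes.io/arch=arm64 node label (older clusters used beta.kubernetes.io/arch).

```python
# Pin a low-priority workload to arm64 nodes in a mixed-architecture cluster
# by setting a nodeSelector on the pod spec.
from kubernetes import client, config

def deploy_on_arm_nodes():
    config.load_kube_config()  # reads the local ~/.kube/config

    container = client.V1Container(
        name="batch-worker",                     # hypothetical name
        image="example.com/batch-worker:arm64",  # hypothetical arm64 (or multi-arch) image
    )
    pod_spec = client.V1PodSpec(
        containers=[container],
        # Schedule only onto arm64 nodes (e.g. EC2 A1 instances in the node group).
        node_selector={"kubernetes.io/arch": "arm64"},
    )
    template = client.V1PodTemplateSpec(
        metadata=client.V1ObjectMeta(labels={"app": "batch-worker"}),
        spec=pod_spec,
    )
    spec = client.V1DeploymentSpec(
        replicas=3,
        selector=client.V1LabelSelector(match_labels={"app": "batch-worker"}),
        template=template,
    )
    deployment = client.V1Deployment(
        metadata=client.V1ObjectMeta(name="batch-worker"),
        spec=spec,
    )
    client.AppsV1Api().create_namespaced_deployment(namespace="default", body=deployment)

if __name__ == "__main__":
    deploy_on_arm_nodes()
```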

        • I'm choking here... you're spending up to $96,000 a year on K8S in the cloud? I don't know what you do and, to be fair, if there's a solid reason why hosting in the cloud makes more sense than just running it back home... ignore me. But running K8S on premises (or Swarm, which I like better) is cheap. I've spent a few thousand bucks on the infrastructure and it's all disposable. Raspberry Pi + Linux + GlusterFS (shared volumes) + Docker + K8S + .NET Core... it's pretty good. The only problem I have at this ti
      • by guruevi ( 827432 )

        A lot of the "cloud" load is poorly written, open, and compatible across broad ranges. The drawback of writing in languages like Python/MATLAB is the immense language overhead compared to well-written, optimized C and assembler. That's why ARM in these kinds of loads is suddenly "good": most of the time the processor is not calculating but sitting idle, reading in instructions or waiting for other nodes.
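
A tiny illustration of the "language overhead" point above: the same arithmetic done in an interpreted Python loop versus the C-implemented sum() builtin. The exact ratio varies by machine; the point is only that interpreted glue code spends most of its cycles on bookkeeping rather than computation.

```python
# Compare a pure-Python accumulation loop with the C-implemented sum() builtin.
import timeit

N = 1_000_000

def python_loop():
    total = 0
    for i in range(N):
        total += i
    return total

def builtin_sum():
    return sum(range(N))

print("python loop :", timeit.timeit(python_loop, number=10), "s")
print("builtin sum :", timeit.timeit(builtin_sum, number=10), "s")
# Typically the interpreted loop is several times slower; that gap is interpreter
# overhead, not useful work.
```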

      • My fear before switching to Docker, ARM and .NET Core was that if I switched to a non-Intel architecture, I might not be able to go back. Meaning, let's say I did something stupid like writing my code to work in the public cloud. Then, I coded and optimized against Amazon's ARM architecture... then Amazon decided to change the terms of service or jack up their prices, etc... then I would be stuck paying and doing whatever they demanded.

        Of course, now with Docker and .NET Core, I simply code towards Docker a
    • Re:snicker snort (Score:4, Insightful)

      by jellomizer ( 103300 ) on Wednesday November 28, 2018 @11:07AM (#57714886)

      The primary thing is price per unit of processing power.
      Power consumption is part of the price. Overall performance is also a big deal, as are the floor space the data center fills up and the ability to have staff who can code for the data center.

      Intel and AMD's server designs were about boosting processing power to keep that ratio low. However, we are now moving to more parallel software designs, where we find the big, powerful chips are not fully utilized, while we can take more of the cheaper, slower, low-power chips and get more processing power for the total cost.

      The Intel and AMD chips are the semi-trucks of processors, while what's needed are delivery vans.
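
The semi-truck/delivery-van argument comes down to throughput per dollar for parallel work. A back-of-the-envelope sketch with purely made-up numbers, not AWS or vendor pricing:

```python
# Hypothetical figures: a fast "semi-truck" core vs a slower, cheaper "delivery van" core.
big_core_throughput = 1.0    # relative work done per core-hour
big_core_cost = 1.0          # relative cost per core-hour
small_core_throughput = 0.5  # half the speed...
small_core_cost = 0.3        # ...at less than a third of the cost

print("big core   throughput/$ :", big_core_throughput / big_core_cost)      # 1.00
print("small core throughput/$ :", small_core_throughput / small_core_cost)  # ~1.67

# For embarrassingly parallel work you can just buy more of the cheaper cores,
# so throughput/$ is what matters; for a single hot thread, the fast core still wins.
```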

  • busted link? (Score:5, Insightful)

    by necro81 ( 917438 ) on Wednesday November 28, 2018 @09:50AM (#57714468) Journal
    I click on the link for the story, and it directs me to:

    https://hardware.slashdot.org/story/18/11/28/1438250/&%23226;&%238364;oehttps://www.geekwire.com/2018/amazon-web-services-introduces-custom-designed-arm-server-processor-promises-45-percent-lower-costs-workloads/&%23226;&%238364;

    Here is a corrected link without the garbage: https://www.geekwire.com/2018/amazon-web-services-introduces-custom-designed-arm-server-processor-promises-45-percent-lower-costs-workloads/ [geekwire.com]

  • Commodity vs lock-in (Score:4, Informative)

    by GeLeTo ( 527660 ) on Wednesday November 28, 2018 @09:52AM (#57714486)
    Intel and AMD have a lot to lose if the cloud moves to ARM chips. This will allow many rivals to enter the market - bringing their market share and profit margins down.
    • Niche (Score:4, Interesting)

      by JBMcB ( 73720 ) on Wednesday November 28, 2018 @09:57AM (#57714516)

      I don't think, short to mid term, it will be an issue for them. ARM is more efficient at very specific workloads, depending on the configuration. They might be used as static web servers and proxies with hardware decrypt. For heavy application and database loads, AMD and Intel - usually - still blow the doors off of ARM.

      • Re:Niche (Score:4, Insightful)

        by ledow ( 319597 ) on Wednesday November 28, 2018 @10:25AM (#57714660) Homepage

        I dunno... looks pretty competitive to me, for a first try:

        https://blog.cloudflare.com/ar... [cloudflare.com]

        You can be sure that, a year on and with Amazon rolling their own, they are now at least on a par with some more traditional setups.

        They literally only have to be a dollar cheaper (whether in power usage or purchase cost) to start taking over.

        Most people *aren't* maxing out their servers 24/7/365.25. As such, ARM could be a serious threat. Especially if they can come in anywhere near cheap or they offer other advantages (e.g. presumably, if Amazon are making their own chips, they know EXACTLY what's running on their hardware and can optimise to their exact needs, like Google does with its own motherboards etc. in-house - both security and performance get a boost from that).

        • Re:Niche (Score:4, Insightful)

          by JBMcB ( 73720 ) on Wednesday November 28, 2018 @11:10AM (#57714900)

          And that matches up with what I've read about ARM performance. It's competitive on relatively simple loads for data compression, encryption, and shoveling data out the door. Once you start doing regex / database / complex transactional loads, performance suffers.

          Seems like a perfect solution for Cloudflare, who basically shovels data out the door. Not so much for a Hadoop / SQL-based application stack.

          At some point ARM will implement transaction acceleration, and database and application platforms will be tuned for the ARM architecture, but until then I think it will be more of a niche player in the server market.
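
One way to sanity-check the "shoveling data vs. complex loads" distinction above would be to run the same micro-benchmark on an A1 instance and an x86 instance and compare. A rough Python sketch with stand-in workloads (a regex scan and a zlib compression pass), not the benchmarks referenced in the thread:

```python
# Time a regex scan and a zlib compression pass over the same buffer; run it on
# both instance types and compare the printed timings.
import os
import platform
import re
import time
import zlib

DATA = (os.urandom(2_000_000) + b" needle-1234 ") * 2   # ~4 MB of mixed bytes

def run_once(fn):
    start = time.perf_counter()
    fn()
    return time.perf_counter() - start

def timed(label, fn, repeats=5):
    best = min(run_once(fn) for _ in range(repeats))
    print(f"{platform.machine():>8}  {label:<12} {best * 1000:8.1f} ms")

timed("regex scan", lambda: re.findall(rb"needle-\d+", DATA))
timed("compression", lambda: zlib.compress(DATA, 6))
```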

          • by lgw ( 121541 )

            By "DB Load" do you mean "being a DB" or "using a DB"? The latter makes no sense. Waiting idle for your query to return results is hardly CPU intensive.

            For being a SQL DB, yeah, that obviously doesn't make sense yet. When it does, you can be sure Amazon will offer an ARM server type for Aurora, since they can do all the porting and perf tuning themselves. For being a NoSQL DB, that's always going to be I/O bound. For doing map/reduce, that sure sounds like "a relatively simple load and shoveling data out the door."

            • by JBMcB ( 73720 )

              By "DB Load" do you mean "being a DB" or "using a DB"? The latter makes no sense. Waiting idle for your query to return results is hardly CPU intensive.

              Excellent point. My perspective is a bit skewed I suppose. I work with large enterprise applications that always use SQL or other structured (and more importantly transactional) database servers on the back end, along with application servers that involve gobs of logic using either Java or .NET. None of this stuff is going to run well on ARM anytime soon.

              • by lgw ( 121541 )

                As I understand it, ARM is ahead now on raw compute/$. Intel is ahead where it has hardware acceleration that ARM doesn't, or where you have to scale vertically. Java seems faster/$ on ARM processors with Jazelle [wikipedia.org] (which may not apply to the new AWS instance). No clue about .NET Core perf on ARM, though I can't imagine it's great. OTOH, as MS keeps working on Azure ARM instances, I bet it gets a lot better.

                So, if you don't need to scale vertically, you care about compute/$ not compute/core, and I can see

    • by Anonymous Coward

      Don't believe the fake news. It's really about HMP vs SMP or what arm (I thought they changed it to lowercase) calls big.LITTLE for no apparent reason. Intel could easily combine an Atom SoC with say, an i7 U-series and just use the same scheduler tricks.

    • Cloud is fancy talk for servers, and servers tend to need a lot of juice.
      There have been numerous attempts by many big companies to make the transition to ARM-based solutions, but they haven't gotten too far: AMD, Qualcomm, Cavium, Dell, HP, SoftIron, etc.
      I don't know why everyone reacts as if it's the first time we've seen this.
      Outside of a very specific spectrum, ARM servers cannot offer much.

    • Intel/AMD gives the best computing ability out there, well, perhaps next to SPARC and POWER, but those CPUs are not really relevant here.

      If you want CPU power, it will be Intel/AMD. If you want best CPU power per watt, ARM by far. Both have their niches. The car example would be saying a Prius is better than a class 8 Kenworth because of its fuel consumption, but the Prius is going to be hard-pressed to move 80,000 pounds of cargo.

      It would be nice if SPARC or POWER were relevant to this. I wonder if the

      • by vovin ( 12759 )

        SPARC (the Sun/Oracle version) hasn't been clock-for-clock competitive for 20 years. I don't have experience with the Fujitsu line.
        POWER still has an edge on floating point over Intel/AMD... however, the cost to deploy makes it pretty hard for non-IBM shops to justify.

      • This, ARM beats x86 easily in performance per watt, and that's a serious advantage. Single-core performance doesn't really matter for any kind of distributed computing task.

        To use a car analogy, with a sufficiently large road, you're better off with a pack of Priuses hauling your container :-P

    • I agree mostly but will make an amendment.

      This will be devastating to all chip vendors.

      Amazon isn't buying a premade chip. They bought an ARM licensee who made an Amazon branded chip. This is an Amazon ARM processor.

      Microsoft, I've heard, is doing the same.

      Google I'm sure could do the same. They of course own several companies who have experience licensing ARM cores.

      If each of these companies establishes its own chip development team and simply licenses cores, that's the end of pretty much the entire server c
  • ARM don't make chips (Score:5, Interesting)

    by monkeyxpress ( 4016725 ) on Wednesday November 28, 2018 @10:05AM (#57714554)

    Comparing this with AMD and Intel offerings is silly, because you cannot buy a chip from ARM. They simply design the cores (and many manufacturers don't even use their reference implementations - only the ISA). There is nothing really magical about ARM except that, if you want to build your own processor, you might as well use them: licenses are pretty cheap, there is a large range of support tools available, and they incorporate many of the latest features in their designs with little historical cruft attached. It also helps to have a licensor who will happily work with you on your design to reach whatever objectives you're after rather than unleash the lawyers on you.

    In other words, they are an excellent place to start, the alternatives aren't that great, and there is little point rolling your own ISA unless you've discovered something pretty incredible.

    • If you're designing your own processor you might be better off making a RISC-V CPU with no license fees attached.

  • The x86 instruction set simply doesn't lend itself to pared down power efficient architectures.

    • The x86 instruction set simply doesn't lend itself to pared down power efficient architectures.

      The x86 decoder is a minuscule portion of a modern processor, all of which are internally RISC anyway.

    • There are so many transistors on chips now that the layer it takes to handle the "shell" around x86 and amd64 instructions to make them RISC is a tiny piece of the die.

      It would be nice if we could go with a better CPU architecture like Itanium, or something with a ton of registers (128, say), but it seems ARM is doing a good job with 13 registers, and amd64 does OK with 16 registers.

      I can see when there isn't any real way to shrink dies, that we go back to looking at the basic CPU design and improv

      • Too many registers can be a burden when context switching, and in older ABIs they were a significant overhead for procedure calls. Register windows like in SPARC help quite a bit, but seem to have died with that architecture.

        Newer compilers with better algorithms for register allocation have improved utilization of the dozen or so registers on modern CPUs, to the point that I don't think having 128 registers would make as significant a difference as using the same area to implement SMT/HyperThreading.

        As a softw

        • "Amazingly good cache" is called "registers".

          • A directly addressable "cache" of a small number of words (128) that is not shareable between cores is not good in my book.

            Back in the old days of software we fretted over what was called Primary storage (memory) and Secondary storage (tape, disk). The more you could move into primary storage, the nicer it was to access from a programming language. We like regular old addressable "primary storage" memory, not lots of special access memories.

            Architectures that can provide uniform access to all system resources

      • It would be nice if we could go with a better CPU architecture like Itanium, or something with a ton of registers (128, say), but it seems ARM is doing a good job with 13 registers, and amd64 does OK with 16 registers.

        The ISA might not, but any of the superscalar, out of order chips (i.e. the fast ones from Intel, AMD, Arm and others) have hundreds of registers internally. You can't access them explicitly but they are there and the CPU does the register allocation for you.

  • by Anonymous Coward

    The fact that you can't download a generic Android and install it on your ARM smartphone tells you what is still missing from the ARM world after all these years: a standard that allows an OS to enumerate all components of the system and their configuration parameters. There is no generic kernel for ARM systems. You need one configured for just the type of ARM system you want to use.

  • I'm sure that these savings will be passed along by vendors who use the services
  • "In the SciMark testing, the AWS system-on-chip was twice as fast as a Raspberry Pi 3 Model B+ on Linux 4.14."

    So twice as fast as a 35 dollar computer that can run off of a cell phone charger. Ok yeah I know, more ram, better storage, better networking. But we're being sold on the CPU itself rather than the system.
    • In the SciMark testing, the AWS system-on-chip was twice as fast as a Raspberry Pi 3 Model B+ on Linux 4.14.

      So twice as fast as a 35 dollar computer that can run off of a cell phone charger. Ok yeah I know, more ram, better storage, better networking. But we're being sold on the CPU itself rather than the system.

      Yes, that's a completely useless comparison. Datacenters of cloud providers are not full of Raspberry Pis.

  • I'm a little confused. Title mentions 45% lower cost, but for the same vCPU and memory, ARM instances cost more than x86 instances?

    ARM:
    a1.large 2 vCPU 4GiB RAM $0.0510/hour
    x86:
    t3.medium 2vcpu 4Gib RAM $0.0416/hour

    Why am I going to pay more for the same memory and CPU, but run on hardware that less software supports?

    https://aws.amazon.com/ec2/pricing/on-demand/

    • "I'm a little confused. Title mentions 45% lower cost, but for the same vCPU and memory, ARM instances cost more than x86 instances?

      ARM:
      a1.large 2 vCPU 4GiB RAM $0.0510/hour
      x86
      t3.medium 2vcpu 4Gib RAM $0.0416/hour "

      T-series instances are CPU-oversubscribed/burstable and use a CPU-credits system where, once you have used your credits, you will be throttled, or you must choose to be billed more for credits (T2/T3 Unlimited at $0.05 per vCPU-hour for Linux, from lower down on the page you linked). So, you sho
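
Rough arithmetic behind the parent comment, using the on-demand prices quoted above plus two assumptions: a t3.medium baseline of roughly 20% per vCPU, and the $0.05 per vCPU-hour T3 Unlimited surcharge mentioned in the comment. An illustration, not current AWS pricing.

```python
# Compare hourly cost at sustained 100% CPU: burstable t3.medium vs a1.large.
A1_LARGE_HOURLY = 0.0510           # $/hr, 2 vCPU / 4 GiB (quoted in the thread)
T3_MEDIUM_HOURLY = 0.0416          # $/hr, 2 vCPU / 4 GiB (quoted in the thread)
T3_BASELINE_PER_VCPU = 0.20        # assumed ~20% sustainable baseline per vCPU
T3_UNLIMITED_PER_VCPU_HOUR = 0.05  # surcharge quoted in the parent comment
VCPUS = 2

excess_vcpu_hours = VCPUS * (1.0 - T3_BASELINE_PER_VCPU)          # 1.6 vCPU-hours per hour
t3_sustained = T3_MEDIUM_HOURLY + excess_vcpu_hours * T3_UNLIMITED_PER_VCPU_HOUR

print(f"t3.medium, sustained 100% load: ${t3_sustained:.4f}/hr")    # ~$0.1216/hr
print(f"a1.large,  sustained 100% load: ${A1_LARGE_HOURLY:.4f}/hr") # $0.0510/hr
# Under sustained load the burstable instance costs more than twice as much per
# hour, which is the kind of gap the headline's "for some workloads" points at.
```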
